
The Influence of Computer-Related Attributions on System Evaluation in Usability Testing

  • Adelka Niels
  • Monique Janneck
Published/Copyright: April 5, 2017

Abstract

Computer-related attributions are cognitions related to the causes and effects of user interactions – or, in other words, subjective explanations of users of why specific system reactions occur. Prior research has revealed different attribution styles, which influence how users interact with computers and how they perceive situations of failure and success. In this paper, we report on a study investigating how computer-related attributions influence users’ perceptions and evaluations of interactive systems. To that end, we conducted usability tests with N=74 users and measured both system evaluations and attributions. Results show correlations between attributions and usability as well as user experience measures, indicating that users’ attributions do influence their evaluations of the test systems. Furthermore, gender differences were revealed. Practical and research implications are described.

1 Introduction and Related Work

Usability testing is often complemented with standardized questionnaires measuring different usability criteria, e.g. effectiveness, efficiency, controllability, learnability, and the like. Measuring personality traits and other user characteristics is less common. However, prior research shows that users’ personality traits, namely the ‘Big Five’ personality traits of openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism, influence the results of usability tests as well as drop-out rates [9]. In this work, we present a novel approach to including user characteristics in the overall picture of usability testing by investigating the relationship between computer-related attributions and users’ system evaluations. The goal of our research is to better understand how user characteristics influence usability and user experience and, ultimately, to design systems that fit their users’ needs better.

Attribution Theory is rooted in Social Psychology and describes a well-known everyday phenomenon: Individuals perceive their environment and their own behavior according to typical patterns and strive for general and consistent explanations for external events. Therefore, attributions describe individual and subjective causal explanations [3] and cognitions regarding the extent of control people perceive themselves to have over external events [19].

A distinctive aspect of attribution is Locus, i.e. whether people perceive that certain events are caused internally (i.e. by one’s own actions) or externally (i.e. by other people or influences) [5]. Besides locus, three further attributional dimensions have been described: Stability, Controllability, and Globality.

Stability differentiates between temporally stable and unstable causes. Temporally stable means that causes are of a permanent nature (such as knowledge, intelligence, or abilities) – for example, attributing successful computer usage to one’s solid computer knowledge (or attributing computer problems to a lack thereof). Unstable causes are more singular or unique (e.g. a momentary power breakdown) and less likely to occur again [1].

Controllability refers to the amount of control a person feels he or she has in a certain situation (e.g., the aforementioned power breakdown would likely be perceived as an uncontrollable cause, whereas misspelling a search term might rather be judged as a controllable cause) [18].

Globality describes whether a certain cause is perceived to have similar effects in a wide range of different situations (i.e. aspects which are believed to affect one’s computer use in general) or to apply only to a specific situation (e.g., when a user experiences issues with a certain application but does not generalize this to computer use in general) [14].

Attributions have been shown to influence emotions, motivation, as well as behavior in a decisive way [18]. For example, attributing success internally to one’s own abilities may spark positive emotions such as satisfaction, pride, and confidence. On the other hand, internal attribution of failure may induce resignation or feelings of guilt, shame, or even depression [1].

Regarding Human-Computer Interaction, prior research has identified several specific computer-related attribution styles, showing some notable differences regarding general computer-related attitudes and behaviors (e.g. [10]). Furthermore, there is research investigating gender differences regarding computer-related attributions. Some authors found that girls tend to explain successful computer use with external factors (simple tasks, being lucky) while blaming failures on their lack of competencies. In contrast, boys tend to attribute success to their own skills and failures to external circumstances (e.g. bad usability) (e.g. [2, 8, 16]). However, detailed analyses by Niels et al. [12] indicate that women and men have essentially the same attribution styles. Unlike prior studies, they did not find women’s attribution styles to be less favorable than men’s.

So far, there is little research on how different attribution styles influence system evaluations and user experience. For example, it is plausible to assume that people experiencing different levels of controllability (as an attributional dimension) might evaluate the controllability of a computer application (as a system quality defined, e.g., by ISO 9241-110 [6]) differently, irrespective of the system design. Likewise, users with an internal or external locus of causality, respectively, might evaluate the effectiveness and efficiency of a system differently. Furthermore, users with predominantly stable and global attribution patterns might generalize prior use experiences to current use situations, affecting their system evaluation.

The results of a first study by Niels et al. [13] indicate that attribution patterns do indeed influence system evaluations. However, their sample size was relatively small and some of their findings were inconsistent. The current study is designed to replicate, confirm, and deepen these findings using a larger sample. Furthermore, we aim to investigate possible gender differences.

The goal of our research is to provide empirical results regarding the relationship between users’ attributions and their system evaluations to provide a basis for future research and practical recommendations for conducting usability tests.

2 Research Methods

2.1 Study Design

To analyze the relationship between computer-related attributions and system evaluations, extensive usability tests were conducted with different interactive systems. Standardized research instruments were used to measure system evaluations and attributional patterns.

The tests were conducted in a usability lab, ensuring consistent external conditions (lighting, noise, disturbances, etc.). As test objects, several different applications were chosen, which were tested on different devices (desktop computer, laptop, tablet, smartwatch). The test applications included games, an online journey planner, digital instruction manuals, task management software, and several websites. For each application, a number of typical tasks were defined for the tests and presented in a pre-defined order (for example, to look up an error description in a digital instruction manual and identify the right steps for troubleshooting). We purposely chose a wide variety of different applications to be able to measure relationships between attributions and system evaluations independent of a specific application or specific types of systems.

Each application was tested with up to 10 different participants. None of the participants tested more than one system. Socio-demographic characteristics of test users did not vary across the applications.

All tests followed a standardized procedure: The test persons received a short introduction and then worked on the respective tasks on their own. Each test lasted about 10–20 minutes. After completing the tasks, the test persons filled out a standardized questionnaire containing items regarding socio-demographic data (age, gender, educational background, computer use and skills), a standardized assessment of the test system, and items measuring attributional patterns.

For system evaluation, we used the User Experience Questionnaire (UEQ, [7, 15]). The UEQ provides a simple and short questionnaire, which has been proven to possess high reliability and validity in numerous studies. The UEQ measures users’ subjective assessment of system interaction on six scales: Perspicuity, Efficiency, Dependability, Stimulation, Novelty, and Attractiveness. Perspicuity, Efficiency, and Dependability measure basic usability criteria (goal-directed aspects), while Stimulation and Novelty measure ‘hedonic’ aspects. Attractiveness is a valence dimension, measuring users’ general attitude toward the system. Thus, the UEQ provides both usability and user experience measures.

Table 1

User Experience Questionnaire Scales and Items [7].

Scales Description Items
Attractiveness Overall impression of the product. Do users like or dislike it? annoying / enjoyable, good / bad, unlikable / pleasing, unpleasant / pleasant, attractive / unattractive, friendly / unfriendly
Perspicuity Is it easy to get familiar with the product? not understandable / understandable, easy to learn / difficult to learn, complicated / easy, clear / confusing
Efficiency Can users solve their tasks with the product without unnecessary effort? fast / slow, inefficient / efficient, impractical / practical, organized / cluttered
Dependability Does the user feel in control of the interaction? unpredictable / predictable, obstructive / supportive, secure / not secure, meets expectations / does not meet expectations
Stimulation Is it exciting and motivating to use the product? valuable / inferior, boring / exciting, not interesting / interesting, motivating / demotivating
Novelty Is the product innovative and creative? creative / dull, inventive / conventional, usual / leading edge, conservative / innovative

The questionnaire consists of word pairs of contrasting attributes that may apply to the tested system or software. The items have the format of a seven-stage semantic differential. Table 1 shows all scales and items.
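To make the scoring concrete, the following minimal sketch shows how seven-stage semantic-differential responses are typically rescaled before scale means are computed. It assumes the common UEQ convention of mapping raw 1–7 answers onto −3 to +3 so that +3 is always the positive pole; the polarity sets and example items below are illustrative, not the official UEQ scoring key.

```python
# Minimal sketch of semantic-differential scoring (assumed -3..+3 convention).
# Item polarity alternates in the questionnaire: e.g. "annoying/enjoyable"
# has the positive term on the right, "good/bad" on the left.
RIGHT_POSITIVE = {"annoying/enjoyable", "inefficient/efficient"}  # positive pole on the right
LEFT_POSITIVE = {"good/bad", "fast/slow"}                         # positive pole on the left

def rescale(item: str, raw: int) -> int:
    """Map a 1..7 response onto -3..+3, with +3 = most positive."""
    if not 1 <= raw <= 7:
        raise ValueError("responses must lie between 1 and 7")
    return raw - 4 if item in RIGHT_POSITIVE else 4 - raw

# A scale value is the mean of the rescaled items belonging to that scale.
responses = {"annoying/enjoyable": 6, "good/bad": 2}  # toy Attractiveness answers
print(sum(rescale(i, v) for i, v in responses.items()) / len(responses))  # 2.0
```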

Computer-related attributional patterns were measured using a standardized questionnaire (AQ, Attribution Questionnaire) developed by Guczka and Janneck [4]. The AQ measures the four attributional dimensions of Locus, Globality, Controllability, and Stability.

The questionnaire consists of two parts, relating to situations of success and failure, respectively. Table 2 shows the part of the questionnaire relating to situations of failure (items measuring attributions of success are worded analogously).

Before answering the questions, test persons were asked to briefly describe a situation of successful computer use (and failure, respectively) that occurred during the usability test. Subsequently, they rated the supposed cause of their success / failure regarding the attributional dimensions on seven-stage Likert scales. Since attributions are determined by the subjective views of the person, we left it to the test persons to judge in what situations success or failure had occurred.

Table 2

Excerpt from the Attribution Questionnaire for failure situations [4].

What caused the breakdown?
1. I would locate the cause of the breakdown…
  internally (I am to blame) 1 2 3 4 5 6 7 externally (the system is to blame)
2. The cause of this breakdown is…
  a singular event 1 2 3 4 5 6 7 recurring
3. The cause of the breakdown is…
  controllable 1 2 3 4 5 6 7 uncontrollable
4. The cause of this breakdown is likely to promote other breakdowns…
  just in this situation 1 2 3 4 5 6 7 in other situations as well
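As a data-capture illustration, the sketch below records one participant’s four ratings for a failure episode directly from the Table 2 items. The class name is hypothetical; the important detail, reflected in the comments, is the reversed Controllability wording (a low value means high perceived control), which explains the negative correlations reported in Section 3.

```python
# Hedged sketch: one failure-episode rating on the four AQ dimensions (cf. Table 2).
from dataclasses import dataclass

@dataclass
class AQFailureRating:
    locus: int            # 1 = internal (I am to blame) .. 7 = external (system is to blame)
    stability: int        # 1 = singular event .. 7 = recurring
    controllability: int  # 1 = controllable .. 7 = uncontrollable (low value = high control!)
    globality: int        # 1 = just in this situation .. 7 = in other situations as well

    def __post_init__(self) -> None:
        for name, value in vars(self).items():
            if not 1 <= value <= 7:
                raise ValueError(f"{name} must be on the 1..7 scale")

rating = AQFailureRating(locus=5, stability=3, controllability=2, globality=3)
```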

2.2 Sample

The sample consisted of N=74 test persons aged 19 to 72 years (M=27.11 years, SD=10.3 years). 42 test persons were male, 31 female (one person did not indicate gender). Most test persons had higher educational degrees (secondary school level: 5.4%; vocational diploma: 20.3%; university entrance diploma: 51.4%; vocational training: 10.8%; university degree: 12.2%).

Test persons were rather experienced computer users with one to 20 years of experience in computer use (M=14.14 years, SD=7 years). They used computers between one and 12 hours daily (M=6.11 hours, SD=3.54 hours). The participants self-rated their computer skills as rather high (M=5.70, SD=1.56 on a seven-stage Likert scale ranging from 1=low to 7=advanced).

Participation in the study was voluntary, and the test persons received no compensation. Test persons were recruited and the tests were conducted by students of a usability course at a German university under the supervision of the course teacher.

2.3 Data Analysis

For the analysis of the UEQ data, the Excel-based analysis tool provided by the UEQ research community was used (www.ueq-online.org). All questionnaires (N=74) were analyzed together in order to investigate the relationship between attributions and system evaluations independent of the test system used.

For correlation analyses, we used the mean values of the UEQ scales. Mean values between −0.8 and 0.8 can be regarded as neutral, values >0.8 reflect a positive system evaluation, and values <−0.8 indicate a negative system evaluation. The test applications received mainly positive ratings. However, we refrain from a detailed description of the specific UEQ ratings because the UEQ results as such are not of principal interest in this study, but only in relation to the attributional patterns.
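The neutral-band interpretation described above is simple enough to state as code; the following sketch (with made-up example values, not the study’s actual ratings) applies the −0.8/0.8 thresholds to a scale mean.

```python
def interpret_ueq_mean(mean: float) -> str:
    """Classify a UEQ scale mean using the -0.8 / 0.8 neutral band."""
    if mean > 0.8:
        return "positive"
    if mean < -0.8:
        return "negative"
    return "neutral"

# Illustrative values only:
for scale, m in {"Perspicuity": 1.42, "Novelty": 0.31}.items():
    print(scale, interpret_ueq_mean(m))  # Perspicuity -> positive, Novelty -> neutral
```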

For the attributional dimensions, mean values of all test persons were calculated for success and failure situations, respectively (see Table 3). For success situations, participants mainly attributed the cause of their success as external (i.e. caused by the system), stable, controllable, and global. Failures were also mainly attributed externally (to the system) and as controllable, albeit at a lower level. Furthermore, the causes of failure were perceived as rather unstable and less global.

Table 3

Mean values of attributional dimensions.

Locus Stability Controllability Globality
Success (n=74) (n=73) (n=74) (n=74)
Mean 4.650 5.560 2.470 5.430
SD 1.625 1.667 1.510 1.689
Failure (n=70) (n=69) (n=69) (n=69)
Mean 4.190 3.590 2.320 3.330
SD 1.875 2.103 1.539 2.034

For correlation analyses, we calculated Spearman’s Rho regarding the six UEQ scales (Perspicuity, Efficiency, Dependability, Stimulation, Novelty, and Attractiveness) and the four attributional dimensions (Locus, Globality, Controllability, and Stability). Analyses were carried out separately for situations of success and failure. Non-parametric tests were used because the data was not normally distributed.
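A minimal sketch of this correlation step is given below, assuming a participants-by-variables table with illustrative column names (the study itself used the Excel-based UEQ tool plus standard statistics software). Pairwise deletion of missing ratings is assumed, which would account for the slightly different n across dimensions in Tables 4–7.

```python
# Hedged sketch: Spearman's Rho between each UEQ scale and each AQ dimension.
import pandas as pd
from scipy.stats import spearmanr

UEQ_SCALES = ["Perspicuity", "Efficiency", "Dependability",
              "Stimulation", "Novelty", "Attractiveness"]
AQ_DIMS = ["Locus", "Stability", "Controllability", "Globality"]

def correlation_table(df: pd.DataFrame) -> pd.DataFrame:
    """Return rho values (starred by significance) for all scale/dimension pairs."""
    table = {}
    for dim in AQ_DIMS:
        col = {}
        for scale in UEQ_SCALES:
            pair = df[[scale, dim]].dropna()  # pairwise deletion of missing ratings
            rho, p = spearmanr(pair[scale], pair[dim])
            stars = "**" if p < 0.01 else "*" if p < 0.05 else ""
            col[scale] = f"{rho:.3f}{stars}"
        table[dim] = col
    return pd.DataFrame(table)  # rows = UEQ scales, columns = AQ dimensions
```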

3 Results

3.1 Correlations Regarding Situations of Success

For situations of success, we found correlations regarding all four attributional dimensions. The correlation coefficients are shown in Table 4. Significant correlations are printed in bold; levels of significance are marked by asterisks (*: p < 0.05, **: p < 0.01).

Table 4

Correlations of attributional dimensions and UEQ scales for situations of success.

Locus (n=74) Stability (n=73) Controllability (n=74) Globality (n=74)
Perspicuity 0.285* 0.570** –0.484** 0.294*
Efficiency 0.240* 0.361** –0.436** 0.225
Dependability 0.221 0.304** –0.390** 0.223
Stimulation 0.061 0.167 –0.139 0.169
Novelty 0.142 0.211 –0.089 0.100
Attractiveness 0.112 0.259* –0.308** 0.249*

Two-tailed test, Spearman’s Rho, *: p < 0.05, **: p < 0.01; bold = correlation is significant.

Regarding Locus, we found correlations with the goal-directed usability measures: Participants with an external locus of causality – thus attributing their success mainly to system qualities – evaluated the test applications significantly more positively regarding Perspicuity and Efficiency than participants with an internal locus of causality.

Furthermore, we found positive correlations between Stability and Attractiveness as well as all goal-directed usability scales. Thus, persons who attributed the causes of their success to temporally stable factors evaluated Attractiveness, Perspicuity, Efficiency, and Dependability significantly more positively than persons who attributed their success to less stable causes (such as chance or luck).

Similar relationships can be shown for Controllability. Contrary to the other dimensions, low values denote a high level of controllability (this is due to the wording of the questionnaire). Therefore, the correlations are negative. Participants with high levels of controllability (thus believing that they can control the situation) rated Attractiveness, Perspicuity, Efficiency, and Dependability significantly more positively than persons who feel less in control.

Regarding Globality, we also found positive correlations with Attractiveness and Perspicuity. Thus, people who perceive more global causes for their success (i.e. causes that will take effect in other situations as well) also evaluate the system in use more positively regarding attractiveness and perspicuity.

There were no significant correlations between the attributional dimensions and the hedonic system qualities measured by the UEQ scales Stimulation and Novelty.

3.2 Correlations Regarding Situations of Failure

For situations of failure, we also found several correlations between attributional dimensions and UEQ scales, albeit fewer and at lower levels. The correlation coefficients are shown in Table 5. Significant correlations are printed in bold; levels of significance are marked by asterisks (*: p < 0.05, **: p < 0.01).

Table 5

Correlations of attributional dimensions and UEQ scales for situations of failure.

Locus (n=70) Stability (n=69) Controllability (n=69) Globality (n=69)
Perspicuity –0.206 –0.042 –0.277* –0.303*
Efficiency –0.112 –0.175 –0.231 –0.219
Dependability –0.105 –0.177 –0.238* –0.159
Stimulation –0.043 –0.167 –0.264* –0.177
Novelty –0.059 –0.239* –0.261* –0.142
Attractiveness –0.038 –0.130 –0.235 –0.119

Two-tailed test, Spearman’s Rho, *: p < 0.05, **: p < 0.01; bold = correlation is significant.

Regarding Stability, we found a negative correlation with the hedonic system quality Novelty. That means that persons who perceive stable and persistent causes for their failures rate the novelty of a system significantly more negatively.

Regarding Controllability, low values again denote a high level of controllability (due to the wording of the questionnaire); therefore, the correlations are negative. Controllability shows significant negative correlations with both goal-directed usability criteria (namely Perspicuity and Dependability) and hedonic system qualities (Stimulation and Novelty). Thus, users who feel less in control in situations of failure tend to evaluate systems more negatively than users with high levels of controllability.

Regarding Globality, there is a negative correlation with Perspicuity, indicating that persons who believe that the causes of their failure will persist in other situations as well evaluate this aspect of system quality more negatively than users who see unique causes.

Regarding the Locus dimension, there were no correlations with UEQ scales.

3.3 Gender-related Analysis

To investigate possible gender differences, we calculated the correlation analyses shown in Sections 3.1 and 3.2 separately for men and women. Tables 6 and 7 show the results.
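Continuing the correlation sketch from Section 2.3, this split amounts to re-running the same table per subsample. The snippet below reuses the hypothetical `correlation_table` function and `df` frame from that sketch and assumes an illustrative `gender` column coded `'m'`/`'f'`.

```python
# The one participant without a gender entry drops out of both groups.
for gender, group in df.groupby("gender"):
    print(f"--- {gender} (n={len(group)}) ---")
    print(correlation_table(group))
```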

Table 6

Correlations of attributional dimensions and UEQ scales for situations of success, separately for male (m) and female (f) participants.

Locus Stability Controllability Globality
Perspicuity m .215 .353** –.530** .067
f .332+ .770** –.458** .517**
Efficiency m .267+ .223 –.418** .043
f .141 .509** –.464** .448*
Dependability m .262+ .081 –.360** .049
f .158 .518** –.390* .397*
Stimulation m –.022 .178 –.344** .039
f .152 .134 .093 .287
Novelty m .114 .174 –.211 –.146
f .194 .167 .107 .311
Attractiveness m .086 .120 –.418** .133
f .268 .529** –.213 .484**

Two-tailed test, Spearman’s Rho, +: p < 0.1, *: p < 0.05, **: p < 0.01; bold = correlation is significant on the indicated level.

Table 7

Correlations of attributional dimensions and UEQ scales for situations of failure, separately for male (m) and female (f) participants.

Locus Stability Controllability Globality
Perspicuity m –.309* –.157 –.277+ –.388*
f –.215 .007 –.190 –.183
Efficiency m –.144 –.213 –.308* –.099
f –.117 –.098 –.204 –.323+
Dependability m –.148 –.456** –.222 –.276+
f –.087 .128 –.237 –.027
Stimulation m –.315* –.307+ –.252 –.201
f .381* .041 –.129 –.160
Novelty m –.161 –.413** –.346* –.143
f .279 –.022 –.170 –.069
Attractiveness m –.108 –.130 –.201 –.017
f .240 –.176 –.277+ –.315

Two-tailed test, Spearman’s Rho, +: p < 0.1, *: p < 0.05, **: p < 0.01; bold = correlation is significant on the indicated level.

Regarding situations of success, we found more significant correlations between UEQ scales and the attributional dimensions for women than for men. This is especially true for the Locus, Stability and Globality dimensions. For Controllability, the correlations for the goal-directed criteria do not differ between men and women. For Stimulation and Attractiveness, we found additional significant correlations for men (Table 6). It is also interesting to note that Attractiveness is significantly correlated with Stability and Globality for the female sample and with Controllability for the male sample.

Regarding situations of failure, the opposite picture emerged: We mainly observed significant correlations between UEQ scales and attributional dimensions for men. For women, almost no significant relationships could be shown.

There were no significant differences between male and female participants regarding attributional patterns or UEQ ratings in general.

4 Discussion

The results of our study indicate that users’ attributional patterns do indeed influence their system evaluations, as was first reported by Niels et al. [13]. Especially regarding situations of success, we found notable correlations between attributional dimensions and the attractiveness of the system as well as its goal-directed usability: Users who attribute their success externally to characteristics of the interactive system, who feel more in control when using the system, and who perceive the causes of their success as temporally stable and likely to recur in other situations assess the respective application more positively.

It is not surprising that external locus of causality is associated with positive system evaluations: Users who attribute their success to the system design rather than their own skills consequently rate the qualities of the system – which in their view are responsible for successful use – better.

The most considerable correlations were found between Controllability and all goal-directed usability criteria as well as Attractiveness, meaning that feeling in control when using computers is associated with better evaluations of system quality. This is a very interesting finding, as ‘controllability’ is typically defined as an objective system quality, e.g. in ISO 9241-110 [6]. The results of our study suggest that how users perceive the controllability of an interactive system depends not only on the system design, but also on their own attributional patterns, which can be seen as persistent personality traits (e.g. [1, 17]).

Similarly, there are high correlations regarding the attributional dimension of Stability: If users perceive their positive experience of use as temporally stable and persistent – and thus expect to be successful in future interactions again – their system evaluation turns out more positive as well. This could be interpreted as a kind of general confidence, which eases system use and also carries over to a positive rating of the application. Likewise, this applies to Globality – i.e. the extent to which users believe that the cause of their success will take effect in other situations of computer use as well, e.g. when using a different application – albeit the correlations are lower here.

By and large, attributional patterns that have been identified as beneficial for overall successful computer use – especially high levels of controllability (e.g. [10, 16]) – are associated with more positive system evaluations. This is also true for situations of failure, albeit fewer and weaker correlations were observed here: Low levels of control – indicating an unfavorable, insecure-resigned attribution style [10] – are associated with more negative system evaluations. In other words: Users with more positive, favorable attribution styles – such as the ‘confident’ or ‘realistic’ types described by [10] – also evaluate the applications they use better than users with unfavorable attribution patterns, such as the ‘resigned’ or ‘humble’ types [10]. Of course, it has to be noted that correlational analyses cannot be interpreted in a causal way. However, as attribution styles are seen as persistent personal characteristics (e.g. [1, 17]), it is reasonable to assume that attribution patterns influence system evaluations and not the other way around.

In situations of success, attributional dimensions correlate with goal-directed usability criteria and also with overall attractiveness, but not with hedonic system qualities such as Novelty and Stimulation. Interestingly, in situations of failure, correlations with both goal-directed and hedonic UEQ scales were observed. A possible explanation is that problems using computers – resulting in situations of failure – directly impair the user experience as a whole, while success situations might be associated with better usability assessments but do not automatically improve hedonic aspects of system use.

It should be noted that the relationships between attributions and system evaluations turned out differently for male and female users. For women, we found more significant correlations between the AQ and UEQ scales in situations of success, while in situations of failure this was the case for the male sample, even though there were no general differences between men and women regarding their attributional patterns or system evaluations as such. This indicates that for men, special attention should be paid to possible effects of attribution patterns on system evaluations when many problems occur during the tests. Likewise, for women this effect might be especially strong when they succeed easily in the given tasks. However, this finding needs to be replicated and extended in further research.

To sum up, our study confirms that users’ attributional patterns might have a significant effect on their system evaluations: System evaluations in usability tests probably do not only reflect ‘objective’ system qualities, but also users’ personal characteristics. The same system might be rated significantly differently by users with distinct attribution styles. Neglecting this variable in usability tests might lead to misinterpretations and eventually inappropriate design decisions.

Therefore, as a practical implication we recommend including users’ attributional patterns as a variable in usability evaluations. As was shown in this study, attributions can be measured easily by using a short standardized questionnaire. In doing so, the effects of different attribution patterns on usability ratings can be measured and considered (e.g. overly positive or negative evaluations). Likewise, attribution styles could be included in design processes right away, for example using personas with typical attribution patterns (cf. [11]).

Compared to the study by Niels et al. [13], we succeeded in recruiting a larger and more heterogeneous sample, which increases the generalizability of the results. Nevertheless, our sample still mainly consists of younger, well-educated, experienced, and skilled computer users. Furthermore, the test systems were mainly rated positively. This might provide an explanation for the less pronounced results regarding situations of failure – most test persons obviously did not experience major problems in the test situation.

Another limitation is the somewhat artificial test situation. Using a standardized test procedure and test environment improves the comparability of the tests. As a drawback, however, users interacted with systems they do not normally use on an everyday basis, so that solving the tasks was probably not overly important to them. We suppose that attributions have an even larger impact in situations of computer use in which personal involvement, motivation, and interest are high.

Therefore, in future studies we aim to include even more diverse user groups, especially regarding age and computer skills. Furthermore, future investigations should include test applications that are of more interest to the participants, as well as more difficult tasks, in order to investigate situations of failure better.

About the authors

Adelka Niels

Adelka Niels is a research assistant at the Human‐Computer Interaction Research Group at CoSA Center of Excellence, Luebeck University of Applied Sciences, Germany. Furthermore, she is a Ph.D. student at the University of Bremen, Germany, in the Department of Mathematics and Computer Science. Her research focuses on Human‐Computer Interaction, User‐Centered Design, and User Experience Design with a special focus on computer‐related causal attributions (i.e., how do people perceive computer‐related success and failure) and the way findings on human behavior can inform the design of computer technology.

Monique Janneck

Monique Janneck is a professor for Human‐Computer Interaction and head of the HCI Research Group at CoSA Center of Excellence, Luebeck University of Applied Sciences, Germany. Her research focus is on the interplay between human behavior, social structures and technological development: She is interested in the way humans interact with technology, the way theories and findings on human behavior can inform the design of information technology, and the way technology impacts individual, organizational, and social behavior and structures.

References

[1] Abramson, L. Y., Seligman, M. E., & Teasdale, J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology, 87(1), 49–74. doi:10.1037/0021-843X.87.1.49

[2] Campbell, N. J. (1990). High school students’ computer attitudes and attributions: Gender and ethnic group differences. Journal of Adolescence Research, 5, 485–499. doi:10.1177/074355489054007

[3] Försterling, F. (2001). Attribution: An introduction to theories, research, and applications. Social Psychology: A Modular Course.

[4] Guczka, S. R., & Janneck, M. (2012). Erfassung von Attributionsstilen in der MCI – eine empirische Annäherung. In H. Reiterer & O. Deussen (Eds.), Mensch & Computer 2012: interaktiv informiert – allgegenwärtig und allumfassend!? (pp. 223–232). doi:10.1524/9783486718782.223

[5] Heider, F. (1958). The Psychology of Interpersonal Relations. Hillsdale, NJ: Lawrence Erlbaum Associates. doi:10.1037/10628-000

[6] ISO 9241 (2006). Ergonomics of Human–System Interaction – Part 110: Dialogue Principles. International Organization for Standardization.

[7] Laugwitz, B., Held, T., & Schrepp, M. (2008). Construction and evaluation of a User Experience Questionnaire. In HCI and Usability for Education and Work (pp. 63–76). doi:10.1007/978-3-540-89350-9_6

[8] Nelson, L. J., & Cooper, J. (1997). Gender differences in children’s reactions to success and failure with computers. Computers in Human Behavior, 13, 247–267. doi:10.1016/S0747-5632(97)00008-3

[9] Nestler, S., Thielsch, M., Vasilev, E., & Back, M. D. (2015). Will they stay or will they go? Personality predictors of dropout in an online study. International Journal of Internet Science, 10(1), 37–48.

[10] Niels, A., & Janneck, M. (2015a). Computer-related attribution styles: Typology and data collection methods. In Lecture Notes in Computer Science (Vol. 9297, pp. 274–291). doi:10.1007/978-3-319-22668-2_22

[11] Niels, A., & Janneck, M. (2015b). Computerbezogene Attributionsstile: Ein Persona-Toolkit für UE-Prozesse. In S. Diefenbach, N. Henze & M. Pielot (Eds.), Mensch und Computer 2015 – Tagungsband (pp. 275–278). Stuttgart: Oldenbourg Wissenschaftsverlag. doi:10.1515/9783110443929-032

[12] Niels, A., Guczka, S. R., & Janneck, M. (2015). Computer-related causal attributions: The role of sociodemographic factors. In Proceedings of the 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences (pp. 2483–2490). Elsevier. doi:10.1016/j.promfg.2015.07.632

[13] Niels, A., Guczka, S. R., & Janneck, M. (2016). The impact of causal attributions on system evaluations in usability tests. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16) (pp. 3115–3125). New York, NY: ACM. doi:10.1145/2858036.2858471

[14] Peterson, C., & Buchanan, G. M. (1995). Explanatory style: History and evolution of the field. In Explanatory Style (pp. 1–20).

[15] Schrepp, M., Hinderks, A., & Thomaschewski, J. (2014). Applying the User Experience Questionnaire (UEQ) in different evaluation scenarios. In Lecture Notes in Computer Science (Vol. 8517, pp. 383–392). doi:10.1007/978-3-319-07668-3_37

[16] Sølvberg, A. M. (2002). Gender differences in computer-related control beliefs and home computer use. Scandinavian Journal of Educational Research, 46(4), 409–426. doi:10.1080/0031383022000024589

[17] Stiensmeier-Pelster, J., & Heckhausen, H. (2006). Kausalattribution von Verhalten und Leistung. In J. Heckhausen & H. Heckhausen (Eds.), Motivation und Handeln (3rd ed., pp. 355–392). Berlin: Springer. doi:10.1007/3-540-29975-0_14

[18] Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548–573. doi:10.1037/0033-295X.92.4.548

[19] Wenninger, G. (2002). Lexikon der Psychologie. Heidelberg: Spektrum Akademischer Verlag.

Published Online: 2017-04-05
Published in Print: 2017-04-01

© 2017 Walter de Gruyter GmbH, Berlin/Boston
