Abstract
One of the key questions in human–robot interaction research is whether humans perceive robots as intentional agents or merely as mindless machines. Research has shown that, in some contexts, people do perceive robots as intentional agents. However, the role of prior exposure to robots as a factor potentially influencing the attribution of intentionality is still poorly understood. To address this, we asked two samples of high school students, which differed with respect to the type of education they were pursuing (scientific/technical vs. artistic), to complete the InStance Test, which measures the individual tendency to attribute intentionality to robots. Results showed that, overall, participants were more prone to attribute intentionality to robots after being exposed to a theoretical lecture about robots’ functionality and use. Moreover, a scientific/technical education was associated with a higher likelihood of attributing intentionality to robots, relative to an artistic education. We therefore suggest that the type of education, as well as individually acquired knowledge, modulates the likelihood of attributing intentionality to robots.
1 Introduction
Throughout human history, we have made ourselves increasingly dependent on machines. This has become particularly evident since the Industrial Revolution, when the massive increase in the use of machines transformed the industrial system and, by extension, the economic system and society as a whole. At present, machines are not only used in industrial settings; their presence is becoming more pervasive in the social spaces of human life [1,2]. For example, machines, sometimes in the shape of humanoid robots, can serve as assistants in workplaces, as educators, or as social companions in elderly care (for a review, see ref. [3]). Individuals are therefore increasingly exposed to these types of agents and more and more often share various kinds of social contexts with them (e.g., schools, hospitals, and workplaces) [1,3]. Repeated exposure to robots should naturally lead people to gain knowledge about how such artifacts work. Thus, it is crucial to examine how exposure to robots and the acquisition of knowledge about their functioning affect individuals’ tendency to treat robots as intentional (and perhaps social) agents.
1.1 Intentional stance toward robots?
Given robots’ mechanical nature, people should treat them as artificial systems that have been programmed to display specific kinds of behaviors. Following Daniel Dennett’s framework [4,5], people should adopt the Design Stance toward robots and thus interpret and explain their behavior with reference to the way the robots were programmed to behave [4,5]. However, in some contexts, people are also inclined to attribute mental states to robots in order to explain their behaviors. This means that, under certain circumstances, people might adopt the Intentional Stance toward robots [4,5]. Adopting the Intentional Stance amounts to treating robots as intentional agents whose behaviors result from mental (intentional) states such as beliefs or desires ([6,7]; for a review, see ref. [8]).
For instance, recent evidence [6] showed that, when presented with a series of various behaviors displayed by a human or by a humanoid robot, participants ascribed intentional states to the humanoid robot to a similar degree as when they observed the human displaying the same behaviors. In a similar vein, Marchesi and colleagues [7] asked participants to rate whether behaviors displayed by the humanoid robot iCub [9] were motivated by a mechanistic cause (such as malfunctioning or calibration, thus referring to the Design Stance) or by a mentalistic cause (such as desire or curiosity, thus referring to the Intentional Stance). Results showed that, overall, people adopted the Intentional Stance toward the humanoid robot to some degree [7]. Interestingly, individual differences among participants seem to play a role in the likelihood of adopting the Intentional Stance, as some participants were more prone to choose mechanistic explanations overall, whereas others tended to choose mentalistic explanations [10]. Thus, the authors argued that the likelihood of adopting the Intentional Stance might depend on several factors, such as individual differences in attitudes toward robots [11,12,13].
1.2 Individual differences in the tendency to adopt the Intentional Stance toward robots
Epley and colleagues conceptualized the role of individual differences in attitudes toward robots and presented a three-factor theory explaining whether, and under which conditions, people are likely to anthropomorphize them [14]. According to the authors, three psychological determinants are crucial for people’s tendency to anthropomorphize robots: (i) elicited agent knowledge, namely the amount of knowledge about the other agent that people have access to; (ii) the motivation to understand and explain the behavior of the other agent (i.e., effectance motivation); and (iii) the desire for social contact and affiliation. In line with this, the authors suggest that interpersonal differences in these determinants strongly predict people’s likelihood of anthropomorphizing the other agent (i.e., a social robot), which would explain why anthropomorphism is so variable [14]. In a similar vein, Fischer and colleagues also underlined the role of individual differences in human–robot interaction (HRI), arguing that there is a considerable amount of interpersonal variation with respect to whether artificial agents are treated as social actors [15]. Recent evidence further confirms the role of individual differences in attitudes toward robots [16,17,18]. For example, individual differences in ascribing human-like features to a non-human-like agent predict the degree of moral care and concern related to an agent, the amount of responsibility and trust, and the extent to which the agent can serve as a source of social influence on the self [13,18]. Moreover, differences in the perceived trustworthiness of a robot proved predictive of humans’ behavior toward robots in a human–robot team [19,20].
In sum, knowing that people’s attitudes toward robots (and, potentially, their tendency to adopt the Intentional Stance) can vary depending on context and individual differences calls for addressing the question of what factors are at play in this phenomenon. In the present study, we chose to focus on the role of prior exposure to robots, a likely candidate factor affecting the adoption of the Intentional Stance toward them.
Specifically, one hypothesis may be that the more people are exposed to robots, and thus gain knowledge about the way robots are programmed and controlled, the less likely they would be to adopt the Intentional Stance toward them. In other words, acquiring notions about robots’ functionality and use would allow people to explain robot behavior based on the assumption that it is the outcome of a designed system [21]. This hypothesis would be in line with Epley and colleagues’ theory of anthropomorphism [14], according to which repeated interactions can decrease individuals’ likelihood of attributing mental states (i.e., adopting the Intentional Stance) toward a robot, since the acquired knowledge about it would weaken the motivation to understand its behavior. Notably, a recent study seems to support this view [22]. The authors used a gaze-cueing paradigm to investigate participants’ likelihood of attributing intentionality as a function of the duration of exposure to the robot’s repetitive gaze behavior. Results showed that a short exposure produced a positive change in participants’ attribution of intentionality, whereas a long exposure did not increase the initial likelihood of attributing intentionality toward robots [22].
2 Aims
The present study aimed at understanding whether exposure to robots, in terms of acquired theoretical knowledge, modulates participants’ likelihood of adopting the Intentional Stance toward robots and whether it depends on participants’ type of education (scientific/technical vs. artistic). We decided to examine the factor of education type, as we reasoned that a more technical/scientific profile of education might be more likely to familiarize students with technology in general and perhaps even robotics. On the other hand, students undergoing artistic education might be less exposed to technology, programming classes, and acquisition of knowledge about how technological artifacts, such as robots, work.
We designed an experiment in which we compared the likelihood of attribution of intentionality toward robots before and after a theoretical lecture about robots’ functionality and use. We examined two samples of high school students: one attending a scientific/technical high school (thus specializing in technical subjects), and the other attending an art high school (thus specializing in art). We asked the two groups of students to attend a 45-min presentation in which they were given a theoretical lecture about robots’ functionalities and potential applications. At the end of the presentation, participants watched two videos showing how the humanoid robot iCub [9] can be controlled in a laboratory setting using the Wizard-of-Oz (WoOZ) technique [23,24]. That is, the students observed an experimenter remotely operating the robot to control some of its behaviors, such as its speech and movements. Specifically, we presented a video of a short sequence of a lab experiment (see ref. [25] for more information on the experimental procedure), depicting a person watching movies together with iCub, which was controlled by an experimenter located in a different room. Two videos were presented to participants: in the first video, the scene was shown from the participant’s perspective, whereas in the second video, the scene was shown from the experimenter’s perspective (Figure 1). This was done to demonstrate that what may appear to be intentional behavior of the robot from the participants’ perspective is in fact behavior fully pre-programmed and controlled by an experimenter, thereby highlighting to participants that a robot is just a machine without a will of its own.
![Figure 1](/document/doi/10.1515/pjbr-2022-0103/asset/graphic/j_pjbr-2022-0103_fig_001.jpg)

Figure 1: Representation of the WoOZ experiment described in Marchesi and colleagues’ work [25]. Panel (a) shows the experiment from the participant’s perspective, i.e., with the experimenter controlling the robot hidden from the participants’ view. Panel (b) shows the experiment from the experimenter’s perspective, unveiling that the robot was not autonomously interacting with participants but was remotely controlled by the experimenter.
As a measure of the tendency to adopt the Intentional Stance toward robots, we employed the InStance Test (IST) [7]. The test consists of 34 fictional scenarios depicting the humanoid robot iCub performing various daily activities. Each scenario comprises three pictures showing a sequence of events, accompanied by a scale (ranging from 0 to 100) with a mechanistic description of the scenario at one extreme and a mentalistic description at the other. By moving a cursor along the slider toward one of the extremes, participants rate, for each scenario, whether they think iCub’s behavior has a mechanistic or a mentalistic explanation (see Figure 2 for an example scenario).

Figure 2: Screenshot of an example IST scenario, with the mechanistic explanation on the left and the mentalistic explanation on the right.
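To make the scoring concrete, the following is a minimal sketch (our own illustration, not code or data from the study) of how slider ratings could be aggregated into an individual IST score, assuming the score is simply the mean rating across scenarios:

```python
# Hypothetical IST scoring: each scenario is rated on a 0-100 slider
# (0 = fully mechanistic, 100 = fully mentalistic); here we assume a
# participant's IST score is the mean rating across scenarios.
ratings = [12, 55, 80, 30, 47]  # illustrative responses to five scenarios
ist_score = sum(ratings) / len(ratings)
print(ist_score)  # higher values indicate a stronger tendency toward the Intentional Stance
```

Under this scoring, values above 50 reflect predominantly mentalistic interpretations and values below 50 predominantly mechanistic ones.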
To test whether the tendency to attribute intentionality toward robots was modulated by participants’ acquired knowledge about robots’ functionalities and use, we asked participants to fill out the IST test in two separate sessions, namely half of the test before (“Pre” Session), and the other half after the theoretical lecture and the two videos showing the WoOZ experiment [25] (“Post” Session).
Furthermore, to examine the role of education type, we tested one sample comprising students who attend the scientific/technical high school (i.e., “Scientific–Technical” sample), while the other sample comprised students attending an art high school (i.e., “Artistic” sample).
3 Materials and methods
3.1 Participants
Two samples of Italian high school students were recruited during two different science dissemination sessions at schools. The first sample included students with a scientific–technical background (“Scientific–Technical” sample; N = 41 students from a scientific high school, N = 17 from a scientific–technical high school, and N = 20 from a technical high school, for a total of N = 80; N = 2 students did not explicitly state which school they were enrolled in and were thus treated as NAs). The second sample included students with an artistic background (“Artistic” sample: N = 56 students from an art high school). All students who took part in the study were adults (age range = 18–19). All participants gave informed consent and declared themselves to be of legal age by ticking the appropriate box in the online form before completing the IST. All participants were naïve to the purpose of the study.
The study was approved by the local Ethical Committee (Comitato Etico Regione Liguria), and it was conducted following the Code of Ethics of the World Medical Association (2013 Declaration of Helsinki).
3.2 Apparatus and stimuli
Students were invited to attend a theoretical lecture about robots’ functionality and use, held by the authors. Before and after the presentation, participants were asked to complete the IST [7], which was programmed using the online platform SoSci Survey (https://www.soscisurvey.de/) [26]. At the end of the lecture, participants watched two videos: first, a typical HRI in a laboratory context shown from the participant’s point of view; subsequently, the same experimental context shown from the point of view of the experimenter, who controlled the robot through a WoOZ manipulation [23,24].
3.3 Procedure
Participants were asked to complete the first half of the IST before the lecture (“IST-Pre”), whereas the other half of the IST was sent and completed at the end of the presentation, specifically after participants watched the WoOZ videos (“IST-Post”). Notably, the IST-Pre was made accessible the day before the lecture session, whereas access to the IST-Post was available only at the end of the lecture. This was done to ensure that participants filled out the second part of the IST only after attending the lecture and watching the videos.
Given the specific requirements of our study, namely the need to administer the two parts of the IST at different time points and to match the anonymized answers of the same participant, we designed a custom solution using the tools provided by SoSci Survey. In detail, we created an additional survey (called “IST-init”) in the form of a Web Form asking only for participants’ email addresses. Upon submission of this form, a custom script incrementally generated a unique identifier for each new request and sent an email to the specified address containing two links, one to access IST-Pre and another to access IST-Post. The generated links encoded the participant’s identifier and were thus unique for each participant.
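The identifier-and-links mechanism can be sketched as follows. This is a hypothetical Python illustration (names such as `make_links` and `BASE_URL` are ours); the actual implementation relied on SoSci Survey’s own scripting tools:

```python
import hashlib
import itertools

BASE_URL = "https://www.soscisurvey.de/example"  # placeholder survey URL
_counter = itertools.count(1)  # incrementally generated unique identifiers


def make_links(email: str) -> dict:
    """Return a participant's unique ID and the two session links.

    The email address is only used to send the links; it is never
    stored alongside the identifier or the responses.
    """
    pid = next(_counter)
    # Encode the identifier into an opaque token embedded in both links,
    # so Pre and Post answers of the same participant can be matched.
    token = hashlib.sha256(str(pid).encode()).hexdigest()[:8]
    return {
        "id": pid,
        "pre": f"{BASE_URL}/ist-pre?r={token}",
        "post": f"{BASE_URL}/ist-post?r={token}",
    }
```

Because the token is derived from the incremental identifier rather than from the email address, the Pre and Post responses of the same participant can be matched later without the experimenters ever seeing the address.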
It is important to point out that participants’ email addresses were used only to send the links; thus, participants’ sensitive data (such as email addresses) were not accessible by the experimenters in any phase of the study.
4 Results
All analyses were conducted using R Studio v.4.0.2 [27] and JASP v.0.14.1 (2020).
4.1 Overall analysis
Regarding the Scientific–Technical sample, we excluded from further analyses the two NA participants who did not declare which school they were enrolled in; thus, the final sample comprised N = 78. First, we investigated whether both factors, namely participants’ education type and acquired knowledge, affected the likelihood of adopting the Intentional Stance toward robots. We performed a Repeated Measures ANOVA, with the mean IST scores in the two sessions (i.e., Pre vs. Post) as the repeated-measures factor and education type (scientific/technical vs. artistic) as the between-subject factor (“Artistic” sample, N = 56; Mean IST scorePre = 34.47, SDPre = 20.89; Mean IST scorePost = 41.67, SDPost = 24.99. “Scientific–Technical” sample, N = 78; Mean IST scorePre = 34.97, SDPre = 19.77; Mean IST scorePost = 41.79, SDPost = 24.11). Only the main effect of Session (IST Pre vs. Post) emerged as significant [F(1, 132) = 10.04, p = 0.02, η² = 0.02] (Figure 3).

Figure 3: Mean IST scores, plotted as a function of Session (IST Pre vs. Post), separately for Education Type (Artistic vs. Scientific–Technical students). Bars indicate the 95% confidence interval.
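The key within-subject contrast (Pre vs. Post) can be illustrated on simulated data. The sketch below uses a paired t-test on fabricated scores, not the full mixed-design ANOVA actually run in R/JASP, so the resulting numbers are illustrative only; the simulation parameters loosely mimic the reported means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate IST scores (0-100 scale) loosely matching the reported means:
# Pre around 35, Post roughly 7 points higher, for N = 134 participants.
pre = rng.normal(35, 20, size=134).clip(0, 100)
post = (pre + rng.normal(7, 10, size=134)).clip(0, 100)

# Within-subject Pre vs. Post contrast (paired t-test as a simplification
# of the repeated-measures factor in the ANOVA).
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4g}")
```

With a mean Pre-to-Post shift of this size relative to its variability, the within-subject contrast comes out clearly significant, mirroring the direction of the reported Session effect.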
4.2 Analyses of the artistic vs. technical groups
To further explore the impact of technical background on the adoption of the Intentional Stance toward a humanoid robot in adult high school students, we selected only students with a technical background; in other words, we excluded students who attended a purely scientific high school without a technical component. We compared their mean scores to those of the students with an artistic education using a Repeated Measures ANOVA (“Artistic” sample, N = 56; Mean IST scorePre = 34.47, SDPre = 20.89; Mean IST scorePost = 41.67, SDPost = 24.99. “Technical” sample, N = 37; Mean IST scorePre = 39.77, SDPre = 17.8; Mean IST scorePost = 53.86, SDPost = 19.32).
Results showed both a significant within-subject main effect of Session (Pre vs. Post) [F(1, 91) = 13.84, p < 0.001, η² = 0.05] and a significant between-subject main effect of education type (technical vs. artistic) [F(1, 91) = 6.2, p < 0.0001, η² = 0.04] (Figure 4). No interaction emerged as significant [F(1, 91) = 1.4, p = 0.23, η² = 0.006].

Figure 4: Mean IST scores, plotted as a function of Session (Pre vs. Post), separately for each education type (artistic vs. technical students). Bars indicate the 95% confidence interval.
4.3 Analysis of social and non-social factors of IST
In a recent work, Spatola and colleagues [28] explored the factorial composition of a shorter version of the IST, identifying two factors: Factor 1 comprises scenarios in which the robot is involved in social interaction with a human (“Social” factor), whereas Factor 2 comprises scenarios in which the robot is alone, and thus the social component is absent (“Non-Social” factor). Given this distinction, we were interested in assessing whether there was a difference, in terms of likelihood of attributing intentionality to robots (i.e., IST scores), between the scenarios in which iCub was depicted interacting with a human (“Social” scenarios) and those in which the robot was depicted alone (“Non-Social” scenarios), and whether this might depend on participants’ education type (i.e., Scientific–Technical vs. Artistic). Therefore, we subselected only the IST items identified by Spatola and colleagues [28] and explored the two factors with a Repeated Measures ANOVA, with the IST scores for the Social vs. Non-Social items (Pre and Post) as within-subject factors and education type (Scientific/Technical vs. Artistic group) as a between-subject factor. Given that the assumption of sphericity was not met, results are reported with the Huynh–Feldt correction. Results showed a significant main effect of Session (IST Pre vs. Post) [F(2.313, 91) = 39.48, p < 0.0001, η² = 0.2]. The main effect of education type was not significant [F(1, 91) = 3.68, p = 0.06, η² = 0.01], and no significant interaction between education type and IST factor (Social vs. Non-Social) emerged [F(1, 91) = 2.24, p = 0.10, η² = 0.01] (Figure 5).

Figure 5: Mean IST scores for Social and Non-Social items, plotted as a function of Session (Pre vs. Post), separately for each education type (artistic vs. scientific–technical students). Bars indicate the 95% confidence interval.
5 Discussion
The present study aimed at understanding whether exposure to information about robots’ functionality and use affects participants’ tendency to attribute intentionality to robots, namely to adopt the Intentional Stance [4,5], and whether this depends on the type of education. The tendency to adopt the Intentional Stance was operationalized as responses in the IST [7].
We asked adult high school students (split into two samples: students attending an art high school, and students attending a scientific–technical school) to take part in a theoretical lecture about robots, in which they were given an overview of robots’ functionalities and use in a laboratory setting. In addition, they were presented with videos in which a robot was depicted as a purely mechanical device controlled by a human (i.e., WoOZ videos: see Data Availability statement at the end of the manuscript). All the students completed the IST in two separate sessions, namely before and after the lecture (i.e., Pre and Post sessions). Results showed that, overall, participants tended to attribute more intentionality to robots (i.e., higher IST scores) in the Post session compared to Pre, regardless of participants’ education type (scientific–technical vs. artistic).
However, when subselecting only the students with a purely technical background from the “Scientific–Technical” sample, results showed that students enrolled in technical schools tended to attribute more intentionality to robots (i.e., higher IST scores) in the Post session compared to the Pre session. Notably, the same did not occur for students enrolled in an art school, as no significant differences in IST scores emerged between the two sessions. Furthermore, the two-way interaction (Session × Education Type) was not significant, indicating that the knowledge participants acquired about robots from the theoretical lecture did not modulate the likelihood of adopting the Intentional Stance differently depending on their type of education.
These results are in contrast with our initial hypothesis, according to which more information about robots’ functionality and a higher degree of technical education should decrease the tendency to adopt the Intentional Stance toward robots, as technical education should inform participants that robots are just artifacts and machines without a will or mind of their own.
One possible explanation may refer to a psychological phenomenon called the mere exposure effect [29]. This phenomenon shows that mere exposure to a novel stimulus, if reiterated over time, increases individuals’ liking of, and positive attitudes toward, it [30,31,32]. Indeed, repeated exposure to a stimulus leads people to gain knowledge about it, which allows for more fluent processing; in turn, this perceptual and cognitive fluency seems to positively affect the liking of the stimulus [33]. Interestingly, the effect can also occur when individuals are exposed to other humans’ faces [34,35] or in situations of interaction with other humans (see ref. [32] for a more complete overview). In the context of HRI, the same phenomenon seems to take place when interacting with robots, as demonstrated by people reporting more positive attitudes toward robots after being repeatedly exposed to them [36,37]. For instance, playing an interactive game with a robot significantly improved participants’ perception of the robot on dimensions such as anthropomorphism and likeability [38]. Along the same line, other evidence showed that people already familiar with robots display more positive attitudes toward ascribing intentions to robots [39].
Therefore, it may be that the more people are exposed to robots, and thus gain knowledge about them, the more prone they would be to interact with them and to consider them part of their own in-group [32,34,40]. This would translate into a higher likelihood of perceiving robots as intentional agents (like other humans), i.e., a greater tendency to adopt the Intentional Stance toward them [25].
An alternative explanation might be that, by watching the WoOZ video of the teleoperated robot, participants acquired the knowledge that robots are actually often controlled by humans. Thus, our reasoning was that, perhaps, the presence of a human might boost participants’ tendency to attribute intentionality toward robots, as participants might have assumed that the depicted human was actually controlling the robot. In other words, it might have led participants to attribute intentionality to the depicted human, whom they might have imagined controlling the robot from “behind the scenes.”
With this in mind, we assessed whether seeing iCub depicted during social interaction with a human (i.e., IST “Social” scenarios) might evoke a higher attribution of intentionality compared to those scenarios in which iCub was depicted alone, i.e., where the social component was absent (i.e., IST “Non-Social” scenarios).
Our results confirmed an overall increase in IST scores (i.e., higher intentionality attribution) in Post compared to Pre. However, this effect emerged regardless of the type of scenario (social vs. non-social), suggesting that the presence of a human in the IST scenarios did not make any difference in participants’ likelihood of attributing intentionality toward robots.
In sum, our results did not confirm the initial hypothesis, according to which the more people are exposed to information about robots, the less prone they are to attribute intentionality to them. The observed effect, which was in the opposite direction than expected, could instead be in line with the mere exposure effect [29]. Nevertheless, this interesting result should be further confirmed in future research.
6 Conclusions
Taken together, these findings suggest that exposure to knowledge about robots’ functionality and use modulates the likelihood of adopting the Intentional Stance toward robots. However, the directionality of this effect needs to be addressed in future studies, which may help to clarify whether, and under what conditions, the tendency to perceive robots as intentional agents could be enhanced.
Acknowledgments
All the authors thank I. Rivara and Prof. S. Conradi, L. Golinelli, L. Piazza, and S. Salomone, who helped us to organize the activity in the schools. Moreover, all the authors thank the schools which kindly agreed to participate in the activity: Istituto Tecnico Industriale Mario Delpozzo (CN), IIS Polo Tecnico di Lugo (RA), Liceo Scientifico Leonardo da Vinci (GE), and Liceo Artistico Statale Klee Barabino (GE). Above all, the authors thank the students who volunteered to take part in the study.
Funding information: This work has received support from the European Research Council under the European Union’s Horizon 2020 research and innovation program, ERC Starting Grant, G.A. number: ERC-2016-StG-715058, awarded to Agnieszka Wykowska. The content of this article is the sole responsibility of the authors. The European Commission or its services cannot be held responsible for any use that may be made of the information it contains.
Author contributions: C.R. and S.M. designed the study, collected and analyzed the data, discussed and interpreted the results, and wrote the manuscript. D.D.T. designed the study, programmed the customized IST online, and wrote the manuscript. A.W. designed the study, discussed and interpreted the results, and wrote the manuscript. All the authors revised the manuscript.
Conflict of interest: The authors declare that the research was conducted in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest.
Informed consent: Informed consent was obtained from all individuals included in the study.
Ethical approval: The research related to human use complied with all relevant national regulations and institutional policies, was conducted in accordance with the tenets of the Helsinki Declaration, and was approved by the authors' institutional review board or equivalent committee (Comitato Etico Regione Liguria).
Data availability statement: WoOZ videos, together with the presentation related to the theoretical lecture, are made available at the following link: https://osf.io/m7wa8 (Project Name: “The role of prior exposure in the likelihood of adopting the Intentional Stance towards a humanoid robot”).
References
[1] T. J. Prescott and J. M. Robillard, “Are friends electric? The benefits and risks of human robot relationships,” iScience, vol. 24, no. 1, p. 101993, 2021, 10.1016/j.isci.2020.101993.
[2] H. Samani, E. Saadatian, N. Pang, D. Polydorou, O. N. Fernando, R. Nakatsu, et al., “Cultural robotics: The culture of robotics and robotics in culture,” Int. J. Adv. Robot. Syst., vol. 10, no. 12, p. 400, 2013, 10.5772/57260.
[3] A. Wykowska, “Robots as mirrors of the human mind,” Curr. Dir. Psychol. Sci., vol. 30, no. 1, pp. 34–40, 2021, 10.1177/0963721420978609.
[4] D. C. Dennett, “Intentional systems,” J. Philos., vol. 68, no. 4, pp. 87–106, 1971, 10.2307/2025382.
[5] D. C. Dennett, “Intentional systems in cognitive ethology: The ‘Panglossian paradigm’ defended,” Behav. Brain Sci., vol. 6, no. 3, pp. 343–355, 1983, 10.1017/S0140525X00016393.
[6] S. Thellman, A. Silvervarg, and T. Ziemke, “Folk-psychological interpretation of human vs humanoid robot behavior: Exploring the intentional stance toward robots,” Front. Psychol., vol. 8, p. 1962, 2017, 10.3389/fpsyg.2017.01962.
[7] S. Marchesi, D. Ghiglino, F. Ciardo, J. Perez-Osorio, E. Baykara, and A. Wykowska, “Do we adopt the intentional stance toward humanoid robots?,” Front. Psychol., vol. 10, p. 450, 2019, 10.3389/fpsyg.2019.00450.
[8] J. Perez-Osorio and A. Wykowska, “Adopting the intentional stance toward natural and artificial agents,” Philos. Psychol., vol. 33, no. 3, pp. 369–395, 2020, 10.1080/09515089.2019.1688778.
[9] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, et al., “The iCub humanoid robot: An open-systems platform for research in cognitive development,” Neural Netw., vol. 23, no. 8–9, pp. 1125–1134, 2010, 10.1016/j.neunet.2010.08.010.
[10] S. Marchesi, N. Spatola, J. Perez-Osorio, and A. Wykowska, “Human vs humanoid: A behavioral investigation of the individual tendency to adopt the intentional stance,” Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boulder, USA, 2021 Mar 9–11, 10.1145/3434073.3444663.
[11] S. Marchesi, J. Perez-Osorio, D. De Tommaso, and A. Wykowska, “Don’t overthink: Fast decision making combined with behavior variability perceived as more human-like,” 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 2020 Aug 31–Sept 4, 10.1109/RO-MAN47096.2020.9223522.
[12] S. Marchesi, F. Bossi, D. Ghiglino, D. De Tommaso, and A. Wykowska, “I am looking for your mind: Pupil dilation predicts individual differences in sensitivity to hints of human-likeness in robot behavior,” Front. Robot. AI, vol. 8, p. 653537, 2021, 10.3389/frobt.2021.653537.
[13] S. Naneva, M. Sarda Gou, T. L. Webb, and T. J. Prescott, “A systematic review of attitudes, anxiety, acceptance, and trust towards social robots,” Int. J. Soc. Robot., vol. 12, no. 6, pp. 1179–1201, 2020, 10.1007/s12369-020-00659-4.
[14] N. Epley, A. Waytz, and J. T. Cacioppo, “On seeing human: A three-factor theory of anthropomorphism,” Psychol. Rev., vol. 114, no. 4, pp. 864–886, 2007, 10.1037/0033-295X.114.4.864.
[15] K. Fischer, “Interpersonal variation in understanding robots as social actors,” Proceedings of the 6th International ACM/IEEE Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 2011 Mar 6–9, 10.1145/1957656.1957672.
[16] A. Waytz, J. Cacioppo, and N. Epley, “Who sees human?: The stability and importance of individual differences in anthropomorphism,” Perspect. Psychol. Sci., vol. 5, no. 3, pp. 219–232, 2010, 10.1177/1745691610369336.
[17] K. F. MacDorman and S. O. Entezari, “Individual differences predict sensitivity to the uncanny valley,” Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst., vol. 16, no. 2, pp. 141–172, 2015, 10.1075/is.16.2.01mac.
[18] D. S. Syrdal, K. Dautenhahn, K. L. Koay, and M. L. Walters, “The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study,” 2009. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.159.9791&rep=rep1&type=pdf (accessed August 2022).
[19] S. Rossi, M. Staffa, L. Bove, R. Capasso, and G. Ercolano, “User’s personality and activity influence on HRI comfortable distances,” Social Robotics, A. Kheddar, E. Yoshida, S. S. Ge, K. Suzuki, J. J. Cabibihan, F. Eyssel, et al., Eds., vol. 10652, Cham, Springer International Publishing, 2017, pp. 167–177, 10.1007/978-3-319-70022-9_17.
[20] S. Rossi, G. Santangelo, M. Staffa, S. Varrasi, D. Conti, and A. Di Nuovo, “Psychometric evaluation supported by a social robot: Personality factors and technology acceptance,” 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 2018 Aug 27–31, 10.1109/ROMAN.2018.8525838.
[21] E. Schellen and A. Wykowska, “Intentional mindset toward robots – open questions and methodological challenges,” Front. Robot. AI, vol. 5, p. 139, 2019, 10.3389/frobt.2018.00139.Search in Google Scholar PubMed PubMed Central
[22] A. Abubshait and A. Wykowska, “Repetitive robot behavior impacts perception of intentionality and gaze-related attentional orienting,” Front. Robot. AI, vol. 7, p. 565825, 2020, 10.3389/frobt.2020.565825.Search in Google Scholar PubMed PubMed Central
[23] J. F. Kelley, “An iterative design methodology for user-friendly natural language office information applications,” ACM Trans. Inf. Syst., vol. 2, no. 1, pp. 26–41, 1984, 10.1145/357417.357420.Search in Google Scholar
[24] L. Riek, “Wizard of Oz studies in HRI: A systematic review and new reporting guidelines,” J. Hum. Robot. Interact., vol. 1, pp. 119–136, 2012, 10.5898/JHRI.1.1.Riek.Search in Google Scholar
[25] S. Marchesi, D. De Tommaso, J. Perez-Osorio, and A. Wykowska, “Belief in sharing the same phenomenological experience increases the likelihood of adopting the intentional stance towards a humanoid robot,” TMB, vol. 3, no. 3, 2022, 10.1037/tmb0000072.Search in Google Scholar
[26] D. J. Leiner, SoSci Survey. 2019. https://www.soscisurvey.de.Search in Google Scholar
[27] R. C. R. Team: A language and environment for statistical computing. 2013. http://www.R-project.org/.Search in Google Scholar
[28] N. Spatola, S. Marchesi, and A. Wykowska, “The intentional stance Test-2: How to measure the tendency to adopt intentional stance towards robots,” Front. Robot. AI, vol. 8, p. 666586, 2021, 10.3389/frobt.2021.666586.Search in Google Scholar PubMed PubMed Central
[29] R. B. Zajonc, “Attitudinal effects of mere exposure,” J. Pers. Soc. Psychol., vol. 9, no. 2, pp. 1–27, 1968, 10.1037/h0025848.Search in Google Scholar
[30] M. Montoya, R. S. Horton, J. L. Vevea, M. Citkowicz, and E. A. Lauber, “A re-examination of the mere exposure effect: The influence of repeated exposure on recognition, familiarity, and liking,” Psychol. Bull., vol. 143, no. 5, pp. 459–498, 2017, 10.1037/bul0000085.Search in Google Scholar PubMed
[31] K. Mrkva and L. Van Boven, “Salience theory of mere exposure: Relative exposure increases liking, extremity, and emotional intensity,” J. Pers. Soc. Psychol., vol. 118, no. 6, pp. 1118–1145, 2020, 10.1037/pspa0000184.Search in Google Scholar PubMed
[32] R. F. Bornstein, “Exposure and affect: overview and meta-analysis of research, 1968–1987,” Psychol. Bull., vol. 106, no. 2, pp. 265–289, 1989, 10.1037/0033-2909.106.2.265.Search in Google Scholar
[33] R. F. Bornstein and P. R. D’Agostino, “The attribution and discounting of perceptual fluency: Preliminary tests of a perceptual fluency/attributional model of the mere exposure effect,” Soc. Cogn., vol. 12, no. 2, pp. 103–128, 1994, 10.1521/soco.1994.12.2.103.Search in Google Scholar
[34] L. A. Zebrowitz, B. White, and B. Wieneke, “Mere exposure and racial prejudice: Exposure to other-race faces increases liking for strangers of that race,” Soc. Cogn., vol. 26, no. 3, pp. 259–275, 2008, 10.1521/soco.2008.26.3.259.Search in Google Scholar PubMed PubMed Central
[35] G. Rhodes, J. Halberstadt, and G. Brajkovich, “Generalization of mere exposure effects to averaged composite faces,” Soc. Cogn., vol. 19, no. 1, pp. 57–70, 2001, 10.1521/soco.19.1.57.18961.Search in Google Scholar
[36] C. Bartneck, T. Suzuki, T. Kanda, and T. Nomura, “The influence of people’s culture and prior experiences with Aibo on their attitude towards robots,” AI Soc, vol. 21, no. 1–2, pp. 217–230, 2006, 10.1007/s00146-006-0052-7.Search in Google Scholar
[37] F. Ciardo, D. Ghiglino, C. Roselli, and A. Wykowska, “The effect of individual differences and repetitive interactions on explicit and implicit measures towards robots,” In Social Robotics. ICSR 2020. Lecture Notes in Computer Science, A. R. Wagner, et al. eds, 12483, Springer, Cham, 2020, p. 466. 10.1007/978-3-030-62056-1_39.Search in Google Scholar
[38] M. Paetzel and G. Castellano, “Let me get to know you better: Can interactions help to overcome uncanny feelings?” Proceedings of the 7th International Conference on Human-Agent Interaction (HAI), Kyoto, Japan, 2019, Oct 6-10. 10.1145/3349537.3351894.Search in Google Scholar
[39] M. Bossema, R. Saunders, and R. B. Allouch Robot body movements and the intentional stance. https://malulu.github.io/HRI-Design-2020/assets/pdf/Bossema%20et%20al.pdf. Accessed August 2022.Search in Google Scholar
[40] M. Brewer and Miller N. “Contact and cooperation.” In Katz P. A., Taylor D. A. Eds, Eliminating Racism. Perspectives in Social Psychology (A Series of Texts and Monographs). Springer, Boston, MA, 1988. 10.1007/978-1-4899-0818-6_16.Search in Google Scholar
© 2023 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.