Fiction meets fact: exploring human-machine convergence in today’s cinematographic culture
Christoph Endres, Frederic Frieß
Abstract
This article explores the theme of human-machine convergence as portrayed in modern science fiction movies and TV/streaming series and compares these portrayals with real-world advancements in robotics, artificial intelligence (AI), and virtual reality (VR). It examines how science fiction often depicts humanoid robots and AI with human-like emotions and intentions, contrasting with the actual technological challenges and ethical considerations in developing intelligent machines. The text discusses the evolution of humanoid robots from fictional portrayals to real-life examples like Boston Dynamics’ Atlas and Tesla’s Optimus. The paper also explores the reverse interaction, where humans become avatars in virtual worlds, and briefly discusses the ethical implications of simulating deceased individuals in digital form. Through this examination, the paper emphasizes the complexity of human-machine convergence and the importance of considering social, ethical, and emotional aspects in technological progress. It concludes by suggesting that while science fiction provides insights into societal fears and hopes regarding technology – and thus into ethical and regulatory necessities – the real trajectory of human-machine convergence cannot be predicted through film but will be determined by ongoing and ultimately contingent developments in the real world.
1 Introduction
This article contains a scientific analysis of human-computer interaction in science fiction films. We explain how the cinematic portrayal of collaboration between humans and machines has evolved over the past 100 years.
As a side effect, you may find some interesting recommendations for movies and series you might want to watch.
We start with a brief retrospective on the twentieth century and the question of why we are doing this. To that end, we summarize the results of our previous paper as context for this article. 1 , 2 Then we tackle the question – what are the important developments in science fiction cinematography and HCI since then? Which are the most important trends we can see in movies? The answer is exciting: the delineation between man and machine is blurring. So we take a look at this closing gap, first in terms of humanizing the machine, then in terms of machinizing the human. In the end, we speculate where this development might lead. But we learned from Nobel laureate Niels Bohr: “Prediction is very difficult, especially if it’s about the future!”, so please keep in mind that any of our speculations about the future might be completely wrong. We merely point out a general direction, and let three ghosts of the past, present and future of technological development – partly based on real people – guide you along the way.
For the scope of this article, we limit ourselves to cinematic science fiction (movies and series). Adding the vast catalog of science fiction literature and how HCI is depicted there would be more appropriate for a multi-volume encyclopedia. Furthermore, due to their visual representation, movies have a higher impact on the way future technology is perceived. They are also more likely to become part of visual internet culture, such as memes, and are more recognizable than quotations from literature. As for the technology described, we do not limit ourselves to any specific industry but rather take a broad perspective on a wide range of technology, spanning robots, chatbots and other recent developments – as no clear boundaries are drawn in the films either. A strong focus here is the emergence of AR/VR technology over the past decades, and although we raise questions explicitly and implicitly about where this development might lead, we do not see ourselves in a position to provide definitive answers – our aim is to provoke thought and start a discussion.
New technologies and their ethical implications have always been prominently featured in films. 3 , 4 Presumably, the motivation behind this is to appear modern and up-to-date. However, the understanding of technology and its application or operation was not always entirely accurate and was often marked by misunderstandings.
Especially in the realm of science fiction, technical or physical errors tend to sneak in – sometimes intentionally for the sake of effects, and sometimes inadvertently, occasionally spoiling the fun for overly meticulous nerds. We often see inaccuracies involving audible sound in the vacuum of space, where there shouldn’t be any. We also see filmmakers misunderstanding gravity, or confusing weight with mass and which of the two is actually responsible for momentum.
But filmmakers also have a sense of humor and sometimes enjoy poking fun at themselves and their understanding of technology. In Star Trek, for example, there are utterly absurd scientific concepts like the Heisenberg compensator or the reinforcement of structural integrity. And, of course, the science of the twenty-fourth century has a universal medicine – there’s hardly any illness that couldn’t be cured with 20 mg of Inoprovaline.
The main reason why technology was not always portrayed accurately is likely because the focus was on the plot, and technical details were rarely crucial until they were conjured up as a “plot device.” This is humorously depicted in the film “Galaxy Quest” (1999). The film prominently features the so-called “Omega 13 Device,” – its functionality is entirely irrelevant throughout the film, until it is needed to resolve a precarious situation.
Especially in the field of Human-Computer Interaction (HCI), there has been a remarkable development in connection with science fiction films and series. 5 , 6 The ideas of filmmakers and engineers in the twentieth century influenced and inspired each other until they eventually converged. In the case of the film Minority Report (2002), technicians and filmmakers collaborated in a think tank, delivering a milestone in a realistic vision of the future.
2 Previous research
But let’s take a step back and systematically review the overall situation. This story has been extensively described in our conference paper from 2008, 1 which to our understanding is one of the first – or maybe even the first – systematic approaches to classifying the connection between science fiction movies and HCI. Others picked up on the trend, and we – including one of the original authors – would like to revisit the topic before we get to the main part, where we extend it by diving into more recent developments.
Here, we’ll briefly summarize the key points for those who have not read the classic paper:
The portrayal of Human-Computer Interaction in films depends on various factors: the availability of special effects technology, the budget at hand, and last but not least, the relevance of the corresponding technology to the film’s plot. The depiction of technology in films tends to be somewhat “flashier” than it is in reality. Especially in the field of Ubiquitous Computing, the goal is to integrate technology as unobtrusively and minimally disruptive as possible into workflows, while films naturally rely heavily on visual effects.
Concerning the plot, security technologies are depicted frequently, but mainly to be hacked or exploited as part of the storyline. Additionally, speech interfaces are extremely popular, as they can be both impressive and implemented without significant budget requirements.
Furthermore, we can distinguish between two different types of representation: either the technology works flawlessly, portraying a vision of progress in the future, or there are technical issues, which either demystify the new technology or drive the plot forward through unexpected obstacles.
We distinguish between the following phases of the representation of human-computer interaction in films, also depicted in Figure 1:
Representation/adaptation of contemporary technologies
Pre-computer era
Simple technology adaptation
Advanced technology with familiar operating patterns
Depiction of previously unknown technology/operation
Prediction or inspiration of possible future technology/operation

HCI researchers and filmmakers mutually inspire each other.
The phases mentioned here cannot always be sharply delineated – films can simultaneously cover multiple categories. Even the first serious science fiction film we examine here cannot be unequivocally assigned: in Metropolis (1927), we see a completely nonsensical and absurd understanding of HCI. A person diligently turns the hands of a display, similar to an oversized clock, and receives his instructions from a machine. This is no longer comprehensible today, barely 100 years later, and can at best be perceived as a metaphor for the alienation of humans from their work. At the same time, the film features one of the oldest and still relevant concepts in science fiction – the android. It then adds a visionary touch with the depiction of a video call,[1] which, however, uses familiar controls such as a telephone receiver and a dial, along with a frequency selection reminiscent of radio.
An interesting example of design decisions due to budget or technological constraints is the German television series “Raumpatrouille – Die phantastischen Abenteuer des Raumschiffes Orion” (1966). Everything that was available on the set was simply gathered and assembled, from plastic cups on the ceiling to a pencil sharpener and a flat iron as control elements. The popularity that this – recently restored and reissued – series still enjoys almost 60 years after its initial broadcast clearly indicates that an engaging storyline can be more crucial than realism in presentation.
In the series Battlestar Galactica (1978–1980) – which can be considered representative of other science fiction productions of its time (such as Buck Rogers (1979)) – current technologies are prominently featured in a distinctly futuristic setting. We see wired telephone receivers, computer keyboards, and control systems for spacecraft (Colonial Vipers) that unmistakably resemble joysticks. While this may seem peculiar at first glance today, it aligns well with the series’ atmosphere, clearly influenced by the Cold War, and a ragtag fleet on the run that has assembled any available technology.
The idea that a spaceship could be controlled using a joystick was revisited in Star Trek: Insurrection (1998), in the form of the “manual steering column”.[2] According to director Jonathan Frakes, it was primarily included as a visual gimmick for the audience. Whether or not it was an actual reference to Galactica remains unclear.
It is not our intention to repeat the entire original article here, but let’s take a final look at one more relevant category: Where have films accurately predicted future technologies or inspired future developments?
In this regard, the Star Trek franchise stands out again with remarkable foresight and significant influence on technological developments. In Star Trek – The Original Series (1966–1969), we find several accurate predictions: (1) Communications officer Nyota Uhura wears a wireless headset strongly reminiscent of modern Bluetooth headsets. (2) Storage cards roughly between 3.5-inch floppy disks and CF memory cards in size are depicted. (3) Perhaps the most substantial or conspicuous parallel to real-world developments is the communicator, which likely served as a model for the first flip phone, the Motorola StarTAC.[3]
Compared to other series of the late 1960s and early 1970s, this is already quite visionary. For instance, consider the rather unrealistic portrayal of a computer in Space: 1999 (1975–1977), which accepts spoken instructions and then prints responses on a small paper strip similar to a cash register roll.
Other Star Trek series were also very innovative. In Star Trek: The Next Generation (1987–1994), we see early depictions of laptops/notebooks – a technology that emerged at almost the same time in the real world; Toshiba’s T1100, widely considered the first mass-market laptop, had been released in 1985. In this series, we also encounter a voice assistant for the first time, functioning similarly to Amazon’s Alexa or Apple’s Siri – both technologies introduced much later. However, the Holodeck from TNG is likely to remain a futuristic concept in the form shown.
The remake of the film The Time Machine from 2002 was particularly advanced in the field of digital assistants. In this version, we witness a librarian in a 3D projection engaging in natural language communication with a human.
And last but not least, as previously announced in the introduction, we can cinematically conclude the twentieth century with a milestone in HCI in films – Minority Report (2002). Filmmakers and tech gurus, such as Jaron Lanier, collaborated to deliver a very modern vision for an intuitively operable user interface.
But the story is not yet over. Even after 2002, technology has continued to evolve, sometimes faster than we could have imagined 20 years ago. This article now delves into what lies ahead in the twenty-first century.
3 New focus: bridging the gap between human and machine
3.1 Humanizing the machine
In this section, we explore the portrayal of humanoid robots in science fiction, advances in real-world robotics, and the creation of emotional bonds with artificial intelligence. We examine how these elements intersect to blur the line between human and machine, and what this convergence implies for society’s interaction with technology.
In the world of science fiction, artificial intelligence, often depicted as robots, is commonly designed to be both physically and mentally humanoid. 7 This approach enables artificial beings to navigate successfully within human society. This design is independent of the motives for which humanoid AIs are created in fictional stories – whether as life partners, household helpers, or as a potential next evolutionary step.[4] Even though, in many cases, robots serve as a critical metaphor for real people (who, in the vast majority of cases, are also portrayed by human actors), in the logic of the story, they are constructed to be human-like to function in human society and interact with humans in a natural way. After all, these stories are written for us humans, and in order for the story to resonate, we should be able to identify with the robots.[5]
Examples of this are the household robots Andrew from “Bicentennial Man” (1999) and T.I.M. from the 2023 film of the same name, who are designed as humanoids to work and be responsive in an environment inhabited by humans. The fact that the two robots develop in diametrically opposed directions – in Andrew, the desire for a middle-class human existence with a wife grows, whereas T.I.M. becomes increasingly manipulative, obsessive, and aggressive – is primarily due to the drama of the respective film plot, not to technical design decisions. In the films “I, Robot” (2004) and “Automata” (2014), the service robots also have anthropomorphic features (even if Sonny in “I, Robot” is computer-animated) so they can carry out tasks intended for humans.
Looking at the actual development of such humanoid bipedal assistance systems, the first milestones were reached at the beginning of the millennium by Honda with the introduction of ASIMO.[6] The approximately 120 cm tall humanoid robot served as a research platform for solving common problems in robotics, such as navigation in natural environments like homes. More than 20 years later, the company Boston Dynamics has raised the bar in terms of movement. Their robotic platform Atlas has achieved a level of agility approaching that of a gymnast. Various clips can be seen on the company’s YouTube[7] channel in which Atlas masters obstacle courses, dances entire choreographies, assists with rough manual work and is thrown off balance by conniving engineers with poles – a scene that is often parodied and has already become a meme. Unlike its four-legged partner Spot, which is commercially available, Atlas remains a research platform, with a field of application geared more toward outdoor use. In contrast, Tesla’s robotics department has focused on indoor settings. It recently (December 2023) unveiled a prototype of the second generation of its humanoid robot Optimus.[8] Its appearance is less industrial and more human, aesthetic and sophisticated, apart from the lack of a face. This impression is presumably due to the organically moving upper extremities: the arms, hands and fingers move very realistically and use sensors to perceive their surroundings, so that the robot can, for example, pick eggs out of their carton and place them in a pot.
While enormous real-time, AI-supported computing processes lie behind the facade of these complex systems, science fiction takes these “merely” mechanical achievements for granted and instead focuses on the cognitive abilities of humanoid robots.
The situation is similar with Data in “Star Trek: The Next Generation” (from 1987) and Isaac from “The Orville” (from 2017), both of whom are anthropomorphic because otherwise, they would likely be unable to fulfill their roles as science officers on the respective starships of the Enterprise and the Orville, working in collaboration with other crew members. In terms of cognitive performance, the two are, of course, vastly superior to the other human (and alien) crew members. The Indian film “Enthiran” (2010), the British film “The Machine” (2013), and the Swedish series “Äkta Människor” (Real Humans) (2012–2014) should also be noted as examples where robots become all-rounders – whether for better or for worse. Nevertheless, it is evident that the concept of anthropomorphic robots on one hand and functionality on the other has its limitations, as the film “The Creator” (2023) impressively demonstrates. Instances where robots require physical night vision devices or are transported on a stretcher by other robots in case of injury cannot be plausibly explained in technical terms but rather as creative and narrative decisions made by filmmakers. Similar narratives are reinforced by ubiquitous images of white robots sourced from image platforms to illustrate events or reports. Human-like robots situated in a call center wearing headphones[9] or standing at the blackboard as teachers[10] are intended to visually convey the idea that robots are taking over roles traditionally performed by humans, irrespective of the practicality of the use case. 8
If the film plots are arranged in a way that the respective AIs engage romantically or sexually with humans, it becomes even more crucial that they authentically emulate human emotional behavior in addition to their appealing appearance. In this context, it is often women who are designed by men as “female robots,” commonly referred to as “fembots”. 9 A well-known example is “Ava” from the film “Ex Machina” (2014). Other films such as “AI Rising” (2018), “Archive” (2020), and “Simulant” (2023) follow similar patterns. However, the German production “Ich bin dein Mensch” (I’m Your Man, 2021) demonstrates that robot men can also be marketed as perfect partners. The film “A.I.” (2001) also features a male sex robot named “Gigolo Joe”, but its central narrative revolves around the robot boy David, who is programmed to love a human mother and aspires to become a real boy – like Pinocchio – deserving of her love. Naturally, David has to look like a lovable human boy for the story to work, and is portrayed accordingly by a child actor.
Let’s leave aside real-life examples of AI dolls for the sake of the article’s seriousness. Nevertheless, the question arises as to how we support the humanization of the machine today through emotional bonds. Looking for examples from robotics, Disney has made enormous progress here recently. The company, known for its animated films, maintains a research department in which robotics plays an important role in continuously developing the animatronic attractions in its theme parks. The role of robots here is more to entertain and evoke emotions. Most recently, Disney has succeeded in transferring the basic design principles[11] of its animated films to a small android[12] that looks as if it has stepped out of a Star Wars film. A clumsy-looking gait combined with playful body language and sprightly noises immediately creates a certain emotional bond with the robot. Of course, the “baby schema” is deliberately utilized here, and no attempt is made to imitate human emotions – unlike, for example, the Ameca[13] robot. This robot platform has a replica of a human face. However, imitating human facial expressions is a very ambitious challenge: the acceptance gap that opens when we recognize artificial faces is a very high threshold and is referred to as the Uncanny Valley. 10 As a representation becomes more human-like, our affinity for it grows – until, close to realism, it drops sharply, recovering only when the representation is almost perfect. Unfortunately, Ameca falls into this valley and therefore feels rather alienating. Robotics currently offers tools that are still too coarse to close this acceptance gap. But let’s move away from robotics and towards completely virtual products.
Today, modern game engines offer impressive possibilities for visualizing different scenarios and content. Their rendering pipelines can depict enormously detailed scenarios photorealistically. This has enabled them to outgrow their original domain of gaming; they are now also used in the film industry, for example under the term virtual production.[14] Here, graphics are calculated in real time and integrated into the real film studio via LED walls or rear projections, for example to display backgrounds. Unreal Engine 5 (UE5) in particular has established itself as a quasi-standard here. However, there is another tool in its toolset, which brings us back to the topic of emotional connection to the machine. MetaHuman[15] is a plugin that can be used to create and display highly detailed, realistic-looking 3D models of human characters. They are based on high-resolution scans of real people but can be fully customized. Facial expressions, gestures and lip-synchronized speech of virtual characters reach an entirely new level with this tool, which overcomes the Uncanny Valley, at least in digital terms.[16]
NVIDIA has taken the next obvious step with its ACE[17] middleware and combined MetaHuman with generative voice AI as a core feature. As a result, developers can create extremely realistic conversations that never repeat themselves and feel organically like a real conversation. The artificial characters react in a context-sensitive way and can even speak to each other to further increase the user’s immersion.
Context-sensitive conversations are currently defining the general perception of AI systems. ChatGPT[18] has shown us that a physical or sophisticated graphical representation is not strictly necessary – a chat window is sufficient! Its enormous success proves that text-based interaction with an LLM can be so convincing that you feel like you’re chatting with a human. The tech world is thrilled, and ChatGPT is on everyone’s lips. At this point, good UX deserves some of the credit: it is not the LLM alone but barrier-free access paired with an intuitive interface that has helped ChatGPT achieve its enormous popularity.[19]
Contrary to what many people assume, LLMs are not inherently able to solve complex problems or draw logical conclusions. They are trained to understand and reproduce the syntax of language, not its semantics. However, because they do this so well and so matter-of-factly, users assume that the content they produce corresponds to the truth. This is a misconception and can be dangerous. It is not for nothing that the following is written directly below the ChatGPT input line: “ChatGPT can make mistakes. Consider checking important information.”
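The gap between fluent surface form and absent understanding can be illustrated with a deliberately tiny sketch: a toy word-bigram model (our own illustrative stand-in, many orders of magnitude simpler than a real LLM) that generates plausible-looking word sequences purely from co-occurrence statistics, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy "training corpus" -- the model will only ever learn surface statistics.
corpus = (
    "the robot speaks like a human . "
    "the human trusts the robot . "
    "the robot can make mistakes ."
).split()

# Record which word follows which (pure syntax, no meaning attached).
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, n=8, seed=42):
    """Sample a sequence word by word from the bigram statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = bigrams.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(generate("the"))
```

Everything the model “knows” is which word tends to follow which; it produces grammatical-sounding sequences without any mechanism for checking them against reality – which is exactly the failure mode the disclaimer warns about.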
Examples of fictional AIs with malfunctions might include the on-board computer HAL9000 from “2001: A Space Odyssey” (1968), which even claims to die at the end when it is supposed to be switched off, or J.A.R.V.I.S. from the first “Iron Man” film in 2008, which justifies its serious errors by being in love. However, in both cases, the machine is humanized again precisely because of the very human nature of the malfunction.
The avoidance of malfunctions, the provision of specialist knowledge and the general customizability of an AI system are basic requirements for productive use. For this reason, providers such as OpenAI make it possible to derive customized instances of LLMs, so-called assistants or bots. These can then be trained in certain directions and are individualized by enriching them with additional information. For example, domain-specific knowledge can be supplied by uploading documents. In addition, character attributes and behavior can even be modeled based on text specifications. The assistant thus simulates emotions, moods and character traits, as is clearly demonstrated in the “Prompts for UX” podcast.[20]
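Stripped of provider specifics, such an individualized assistant boils down to prepending character and domain instructions to every conversation. The following sketch (function and field names are our own illustration, not OpenAI’s actual API surface) assembles a request in the widely used system/user message format:

```python
def build_persona_request(persona, domain_notes, user_message):
    """Assemble a chat request with a persona-conditioned system prompt.

    `persona` models character attributes and mood via plain text;
    `domain_notes` stands in for knowledge that providers let you
    attach, e.g. via document upload.
    """
    system_prompt = (
        f"You are {persona['name']}. "
        f"Traits: {', '.join(persona['traits'])}. "
        f"Current mood: {persona['mood']}.\n"
        f"Background knowledge:\n{domain_notes}"
    )
    return {
        "model": "some-llm",  # hypothetical placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

request = build_persona_request(
    {"name": "a cheerful museum guide",
     "traits": ["patient", "curious"],
     "mood": "upbeat"},
    "The exhibition covers 100 years of science fiction film.",
    "What should I see first?",
)
print(request["messages"][0]["content"])
```

Real assistant offerings add document retrieval and conversation persistence on top, but the persona mechanism itself is essentially this kind of structured instruction text.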
The emotionalization and development of character traits in an AI are also explored in the film “Her” (2013), where the lonely Theodore falls in love with the only digitally existing operating system, “Samantha.” The two grapple with the challenge that Samantha lacks a physical body and only has a voice interface. They even attempt to address this issue by involving a prostitute who is meant to “play” Samantha, but this endeavor is destined for failure. In the end, Samantha transcends into unknown realms as a higher being. However, could Samantha’s initial lack of embodiment have been overcome if Theodore had entered her digital world as an avatar? The answer to this question could be provided by the following section.
3.2 Machinizing the human
Having examined the humanization of the machine in the previous section, this section focuses on the machinization of the human. We explore the development of human interaction with artificial intelligence, from the embodiment of avatars in virtual reality to the recreation of deceased individuals through AI-based speech models and virtual simulations, and reflect on the implications of blurring the boundaries between human and machine in both fictional narratives and real-world applications.
The interaction between humans and AI can also work in the opposite direction, i.e. humans transform into digital avatars within a computer programme, cyberspace or virtual reality. In “Ready Player One” (2018), users can enter the virtual reality of Oasis by embodying avatars through VR headsets similar to those commonly used today. Various devices, including full suits, can be purchased to create a fully immersive experience. In this way, the people in the virtual world interact both with each other and with computer programmes in a natural way.
In addition to various research projects on virtual reality (VR) in the 1980s, the first commercial product in this category was the Virtual Boy[21] from Nintendo (1995). However, it was not a great success: due to various technical and conceptual weaknesses, such as stationary use, a monochrome red display and inadequate marketing, far too few units were sold. About 10 years ago, VR experienced a new hype that initially raised many expectations. However, adoption remained hesitant, and it seemed as if VR might not live up to its promise. In fact, the topic has proven to be here to stay and has carved out a niche for itself in consumer electronics. Current examples of VR goggles such as Meta Quest,[22] Pico,[23] Vive[24] and PlayStation VR[25] have already gone through several iterations and have for the most part evolved into self-contained, mobile and user-friendly wearable computers.
Although the hype has cooled down, the topic is showing persistence and the ability to change. Mixed reality (MR) is seen as the new trend that is freeing VR from its stigma. While VR was considered a medium that isolated the user and immersed them completely in virtual worlds, MR has the ability to project virtuality into our environment. Spatial applications are emerging that seamlessly connect the physical and digital worlds. Apple, for example, is helping to breathe new life into the topic with its latest product, the Vision Pro,[26] and is presumably setting new standards in terms of social immersive experiences. In science fiction, mixed reality applications are often facilitated through inconspicuous eye lenses. In “Anon” (2018), for instance, various information is displayed using these lenses, ranging from advertisements to other people’s personal data. Similar lenses are also featured in “Minority Report” (2002).
As in science fiction, however, such glasses are only the interface to an ecosystem or even a parallel world. According to the motto “content is king”, it is not only access that is crucial, but also the content that can be experienced in the virtual world.
Apple will try to integrate its glasses seamlessly into its existing ecosystem. There will be specific spatial apps for the launch, the number of which is sure to increase. Initially, however, the focus will certainly be on access to the app store in order to experience the existing apps in a 2D view, virtually floating in space. This also means that the device could be used productively. Other platforms are still struggling with this use case and are focusing on other topics.
The vast majority of current VR content is in the entertainment and gaming sector. It ranges from single-player to multiplayer experiences and covers a broad spectrum, from experimental artistic pieces to mature, highly polished applications – but with the downside that you are always in a bubble. Applications such as Rec Room[27] or VR Chat[28] focus on building communities where thousands of users can meet. The vision of the Metaverse has not yet crystallized here either. Even Meta, as a VR hardware manufacturer and operator of very large social networks such as Facebook and Instagram, is struggling in its attempt to create a general metaverse called Horizon.
Designing a virtual world in which people can interact and be creative is a huge challenge. Here, too, AI could play a greater role in enriching the experiences and promoting acceptance. Overall, it is clear that the future of virtual reality depends not only on technological progress, but also on innovative content and intelligent applications, the so-called system sellers.
In “Virtual Revolution” (2016), life predominantly unfolds within a virtual gaming world where individuals interact with computer programs through avatars. People connect to the virtual reality using an apparently non-invasive device attached to a person’s neck. In the “Matrix” franchise (from 1999), all people are unknowingly plugged into a simulation of our reality – the Matrix – where computer programs like Agent Smith also move as human avatars. However, the login process is invasive, accomplished through a contact within a person’s neck. In both examples, it is assumed that only complete immersion, without VR headsets and other gadgets, allows people to be fully integrated into existing and new ecosystems and social structures.
Things become even more immersive in “Tron” (1982). In this film, the protagonist Flynn is “digitized” and rematerialized in a virtual reality where he can directly interact with computer programs as a humanoid being. In the sequel, “Tron: Legacy” (2010), Flynn’s son Sam undergoes a similar fate. In both films it is posited that a physical body no longer needs to exist outside the simulation, as the person becomes fully integrated into the simulated environment. Similarly, in the German film “Exit” (2017), the “consciousness” of a deceased person is uploaded into a simulation, where they appear to continue living in the digital world (see also the series “Upload” and the “Black Mirror” episode “San Junipero”).
Recently, an opposing trend has emerged that continues to cause a buzz: the recreation of the dead. 11 At the end of last year, for example, Sascha Lobo recorded a podcast with Einstein.[29] The AI-based language-model approach described above was used here: an LLM assistant was trained with archived data from Einstein, and a voice clone was created from existing audio recordings via elevenlabs.io.[30] The result is an entertaining dialog with the late genius. Another, somewhat older and less entertaining example comes from South Korea.[31] Here, a deceased 7-year-old girl was simulated in VR, demonstrating how strongly VR technology can affect users emotionally, in this case the mother. Indeed, the mother’s reaction and this example send a cold shiver down the spine. This influence has also been visible in the film industry since the advent of deepfakes: actors such as Carrie Fisher and Paul Walker appeared in their iconic roles even after their deaths. The families of the deceased were involved in these productions, but the need to find regulations for such cases was nevertheless pressing. The Hollywood actors’ strike, which ended in November 2023, resulted, among other outcomes, in AI protections.[32] These apply both during life and after death: studios must obtain the consent of actors or their bereaved relatives for the use of AI-generated performances and pay appropriate compensation depending on the use. Incidentally, this also applies to extras and stand-ins who are less in the spotlight. A fictional example of the frivolous sale of these rights is shown in the Black Mirror episode “Joan Is Awful”, in which an AI-generated likeness of actress Salma Hayek appears.
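The pipeline behind such a "digital persona" can be sketched in a few lines. The following Python fragment is only an illustrative assumption of how archived material might be assembled into a persona prompt for a language model; the function and data are hypothetical, and a real system would additionally call an LLM chat endpoint and a voice-cloning service such as elevenlabs.io.

```python
# Illustrative sketch only: function name and data are hypothetical.
# A real pipeline, as described for the Einstein podcast, would send this
# prompt to an LLM chat endpoint and synthesize the spoken reply with a
# voice-cloning service such as elevenlabs.io.

def build_persona_prompt(name, archived_quotes, biography):
    """Assemble a system prompt asking a language model to answer
    in the documented style of a historical person."""
    quotes = "\n".join(f"- {q}" for q in archived_quotes)
    return (
        f"You are simulating {name}. Biography: {biography}\n"
        f"Stay consistent with these archived statements:\n{quotes}\n"
        "Answer in the first person, in the person's documented style."
    )

prompt = build_persona_prompt(
    "Albert Einstein",
    ["Imagination is more important than knowledge."],
    "Theoretical physicist, 1879-1955.",
)
```

Crucially, such a system only conditions a language model on left-behind data; it does not reconstruct the person, a point the next paragraphs return to.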
The problem here is that science fiction films often assume that consciousness can simply be “digitized”, suggesting a continuation of existence as the same individual self. The current replication of the deceased, however, is based merely on the data a person left behind and has nothing to do with an actual continued existence “as oneself”. Instead, it serves as a simulation of the once-living person for the sake of relatives and descendants. The difficulty lies in the fact that the idea of digitally reproducing a consciousness goes far beyond what is possible with current technology and our understanding of consciousness. The digital replica under discussion therefore represents a reconstruction based on past information, not a survival of the individual self.
The circle back to the aforementioned human-like robots closes when an AI or computer program “breaks out” of the simulation into an artificial body, as depicted, for example, in “Virtuosity” (1995). In the film, a digital super-criminal is created for police training purposes and enters the real world in the body of an android.
3.3 Where does this development lead to?
Our key finding from the examination of human-computer interaction in science fiction films reveals a clear trend towards the convergence of humans and computers/machines. Artificial intelligence is often portrayed in humanoid form to interact effectively in human society, whether as household robots or science officers, reflecting the aspiration for these artificial beings to engage in natural interactions with humans. Since robots on screen are usually embodied by human actors, science fiction tends to take the natural movement and coordination of robots for granted. The real-life development of humanoid robots, such as Boston Dynamics’ Atlas or Tesla’s Optimus, demonstrates significant technological advancements – but also how hard such naturalness is to achieve.
Beyond physical anthropomorphism, science fiction attributes human emotional behavior and intentions to robots, which they can express in a way understandable to humans. Context-sensitive conversations currently shape the perception of AI systems in reality. The success of LLMs like ChatGPT illustrates that text-based interaction with a large language model can be convincing without relying on physical or graphical representations. The popularity of ChatGPT is not only due to the language model itself but also to its accessible and intuitive interface. But unlike the portrayals in science fiction, large language models are incapable of solving complex problems or drawing logical conclusions as they provide responses based on probabilities of word sequences. Films often do not depict the real technological challenges and ethical considerations associated with creating intelligent machines. The misjudgment of AI capabilities, as seen in the case of ChatGPT, emphasizes the need for society to be informed and capable of distinguishing between AI capabilities and human cognitive possibilities.
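The claim that LLMs produce responses based on probabilities of word sequences can be made concrete with a toy model. The following minimal Python sketch estimates next-word probabilities from bigram counts – vastly simpler than a transformer-based LLM, which is trained on trillions of tokens and conditions on long contexts, but it illustrates the same underlying principle of predicting the next token from observed frequencies rather than from logical reasoning.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real language models learn from vastly
# larger data and condition on long contexts, not just the previous word.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows another (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequencies of the words observed after `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", "cat" is twice as likely as "mat" in this corpus.
probs = next_word_probs("the")
```

The model has no notion of what a cat or a mat is; it only reproduces statistical regularities, which is the core of the distinction drawn above between fluent output and genuine problem solving.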
In addition to humans embodying avatars in virtual worlds, technologies like Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) play pivotal roles. These technologies blur the boundaries between physical and digital realms, enabling users to interact with digital content in real-world environments or to immerse themselves fully in virtual spaces. Recent advancements by companies such as Meta and Apple suggest potential applications beyond gaming, extending into social and economic spheres. Both companies explore integrating VR and AR into social media platforms, communication tools, and productivity applications to enhance human interaction and productivity. While science fiction envisions the seamless integration of humans into digital worlds, concepts like the digitization of consciousness and the simulation of deceased individuals are idealized: in these narratives, minds are uploaded into virtual environments, allowing continued existence beyond physical death. Current implementations, however, rely on external data like social media posts to create digital avatars or simulations. While capturing aspects of personality or appearance, they do not truly replicate conscious experience. Advancements in AI and neuroscience could improve the digitization of consciousness and the simulation of human experiences, yet ethical and philosophical questions about consciousness, identity, and privacy must be addressed as these technologies progress.
In summary, the convergence of humans and machines in the real world is more complex than often depicted in science fiction films. Technological progress requires not only innovation but also a deeper engagement with the social, ethical, and emotional aspects of human-machine interaction. Science fiction films typically portray one specific potential development, either towards humanoid robots or towards human avatars in virtual reality, while in our real world these developments occur concurrently, revealing a nuanced convergence of humans and machines. Will we attend meetings in virtual reality while sitting at our desks at home in the future? Or will there be robots in companies or public places that we can remotely log into and control to participate in the physical world? Or will we even have individual robots representing us “outside,” similar to the film “Surrogates” (2009)? It will probably be a colorful mixture of all these possibilities, written in real life rather than in science fiction. What science fiction can tell us are the diverse fears and hopes people harbor towards different technological advancements today, so that we can steer and regulate developments; it cannot predict how technology will develop in the end. 12 – 14 Nevertheless, we embarked on a speculative exercise, outlining three ghosts from the past, present, and future in text boxes, aiming to offer a broader indication of our own hopes and fears regarding human-machine relations. The answer to how this human-machine convergence will unfold and where it concretely leads, however, will come not from science fiction but from real-world developments.
3.4 Textboxes: perspectives of fictional individuals from the past, present, and future
3.4.1 Past
My name is Isaac, or, as my friends call me, Ike. As a child, I immigrated with my parents from Soviet Russia to New York in 1923. Later, I became a professor of biochemistry and, eventually, one of the most prolific authors of the twentieth century. Despite delving into nearly every genre, I am primarily known for my science fiction and popular science books.
One of my most significant contributions to science fiction occurred around 1940, when I aimed at giving robots a better image. Even as a child, I couldn’t understand why robots were always portrayed as evil in dime novels, turning against humans. We have smart engineers, and if they build a robot, surely it would be equipped with safety measures. It would definitely have standardized laws or something similar, firmly embedded in the hardware. And then, the robot is not evil but a good friend and assistant to humans.
My predictions for the future were eerily accurate. In 1964, I made a prediction for the year 2014, and surprisingly, I seemed to have been correct on many topics. Concerns like the environment, overpopulation, feminism, etc., troubled me as early as the 1970s. I thought we could solve all of these issues by the year 2000. Well, I was often overly optimistic. For instance, I believed that in-flight entertainment electronics in the form of screens would be installed in airplanes so that passengers could pass their time reading books. Or that a worldwide network of computers would make us all smarter and more educated.
Personally, I was never as tech-savvy as one might assume. I never wanted to fly, and although I did full-page magazine ads for computers, I continued to work with my typewriter throughout my life.
I passed away in 1992 due to AIDS after a blood transfusion during a bypass operation. A well-known industrial robot bears a name similar to mine, and I believe that’s no coincidence.
You know my name without me mentioning it? Perfect, then I apparently did everything right, and you’ve read one of my 500 books. Or watched a series on Apple TV – whatever that is.
3.4.2 Present
Hello everyone, my name is Max. Technological progress has been with me my whole life. I always have to grin when my parents talk about the time before the internet and smartphones – how could the world work back then?
Technology simply fascinates me. I especially like the interplay between man and machine! Somehow people are becoming more and more digital, but at the same time machines are becoming more human. Objectively speaking, it’s absurd, isn’t it? I think this marks a turning point in our development and reminds me a lot of science fiction films.
With the current rapid pace of technological progress, I can’t even imagine what we could achieve with the help of the AI technology that is currently blooming. I hope it will make a real difference in the near future to improve all our lives and help us solve fundamental problems. It will be great to experience all of this!
But I also hope that we don’t get carried away! These developments raise many questions about ethics, data protection and humanity. As machines take on more and more of our own characteristics, we need to seriously consider how moral they actually are and what responsibilities they should carry.
In my view, it is really crucial to find a balanced approach to reap the benefits of this convergence without neglecting our ethical principles and leaving too many people behind in this transition. Research, ethics and society need to work hand in hand to ensure that this progress is in line with our shared values. That sounds like an enormous challenge to me, especially in these challenging times. Imagine that this could not only blur the boundaries between man and machine, but also fundamentally change our understanding of humanity. That’s kind of crazy, isn’t it? I have a feeling we have some very interesting years ahead of us!
3.4.3 Future
Hey, I’m Abeba, and I’m studying Justice Economics. Right now, we’re discussing the 2020s … man and woman, a lot went wrong back then! People were worried about losing their jobs to AI and robots, companies were developing these so-called unregulated AI systems, getting richer and richer in the meantime, and everyone was being digitally monitored! I have no idea how they thought that could work in the long run. Well, I guess they didn’t really think it through, and hindsight is always 20/20…
But hey, we managed to break up those big tech monopolies, tax the billionaires, and establish a powerful international AI authority. Those program engines designed for polarization got banned; nobody needs them anymore because nobody’s messing around with data collection for commercial purposes. Most countries have basic income now, so people can focus on what really matters: human intelligence, social connections, and personal growth. Robbies and optimizers help us align the entire political, economic, and social structure to keep certain inequality coefficients in check.
Oh, those historical humanoid robbies from back then were really creepy. Nowadays, no one’s trying to create artificial humans anymore – why bother? Today’s robbies are creative works of art and look anything but human! Wow, and the term “Artificial Intelligence” is really a bad one, no wonder some people’s imaginations ran wild with that. Today’s optimizers, though, only make human intelligence better and nothing else – all based on principles of justice, of course.
…Opti, could you fill in any content and stylistic gaps in the text for me?
About the authors

Dr. Christoph Endres is a computer scientist. Inspired by science fiction, he specialized in artificial intelligence during his studies. After completing his doctorate, he played the lead role in the web series “Dr. Security” and thereafter also moved professionally into the field of cybersecurity. He is the owner and managing director of sequire technology, where he focuses on the security of GenAI. His collection on Isaac Asimov is probably the largest in Europe.

Frederic Frieß is a software developer at Centigrade. His area of responsibility lies at the interface between visualization and implementation with a particular focus on 3D design engineering. With several years of experience in this field, he is responsible for the disciplines of mixed reality, 3D design engineering and motion design. In his daily work, science fiction is always a source of inspiration for realizing intuitive and immersive experiences for industrial scenarios.

Dr. Isabella Hermann is an analyst in the field of science-fiction and future narratives. Holding a PhD in political science, she explores the question of how science fiction reflects new technologies, socio-political value systems and global politics – and above all, how we can shape positive futures in dystopian times.
-
Research ethics: Not applicable.
-
Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.
-
Competing interests: The authors state no conflict of interest.
-
Research funding: None declared.
-
Data availability: Not applicable.
References 1 – documents/sources
1. Schmitz, M.; Endres, C.; Butz, A. A Survey of Human-Computer Interaction Design in Science Fiction Movies. In Second International Conference on Intelligent Technologies for Interactive Entertainment (ICST INTETAIN ’08); Cancun, Mexico, 2008.
2. Schmitz, M.; Endres, C.; Butz, A. A Survey of Human-Computer Interaction Design in Science Fiction Movies. IC@ST J. 2008, 1–23. https://doi.org/10.4108/icst.intetain2008.2476.
3. Hermann, I. Science-Fiction zur Einführung; Junius Verlag: Hamburg, 2023.
4. Maynard, A. Films from the Future: The Technology and Morality of Sci-Fi Movies; Mango Publishing Group: Coral Gables, 2018.
5. Jordan, P.; Mubin, O.; Obaid, M.; Silva, P. A. Exploring the Referral and Usage of Science Fiction in HCI Literature. In Design, User Experience, and Usability: Designing Interactions – 7th International Conference, DUXU 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, 2018; pp. 19–38. https://doi.org/10.1007/978-3-319-91803-7_2.
6. Jeon, M. Analyzing Novel Interactions in Science Fiction Movies in Human Factors and HCI Courses. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2018, 62 (1), 336–340. https://doi.org/10.1177/1541931218621078.
7. Telotte, J. P. Robot Ecology and the Science Fiction Film; Routledge: New York, 2016. https://doi.org/10.4324/9781315625775.
8. Leufer, D. Why We Need to Bust Some Myths about AI. Patterns 2020, 1, 100124. https://doi.org/10.1016/j.patter.2020.100124.
9. Devlin, K. Turned on: Science, Sex and Robots; Bloomsbury Publishing: London, 2018. https://doi.org/10.5040/9781472950888.
10. MacDorman, K. F.; Ishiguro, H. The Uncanny Advantage of Using Androids in Cognitive and Social Science Research. Interact. Stud. 2006, 7 (3), 297–337. https://doi.org/10.1075/is.7.3.03mac.
11. Öhman, C.; Floridi, L. The Political Economy of Death in the Age of Information: A Critical Approach to the Digital Afterlife Industry. Minds Mach. 2017, 27 (4), 639–662. https://doi.org/10.1007/s11023-017-9445-2.
12. Cave, S.; Dihal, K. Hopes and Fears for Intelligent Machines in Fiction and Reality. Nat. Mach. Intell. 2019, 1, 74–78. https://doi.org/10.1038/s42256-019-0020-9.
13. The Royal Society. Portrayals and Perceptions of AI and Why They Matter, 2018. https://royalsociety.org/topics-policy/projects/ai-narratives/.
14. Hermann, I. Artificial Intelligence in Fiction: Between Narratives and Metaphors. AI Soc. 2023, 38, 319–329. https://doi.org/10.1007/s00146-021-01299-6.
References 2 – films & TV shows
A.I. Artificial Intelligence, 2001, Steven Spielberg, Warner Brothers et al., USA (https://www.imdb.com/title/tt0212720/)
AI Rising, 2018, Lazar Bodroza, Balkanic Media, Serbia (https://www.imdb.com/title/tt5215088/)
Automata, 2014, Gabe Ibáñez, Green Moon/Nu Boyana Viburno, Spain/Bulgaria (https://www.imdb.com/title/tt1971325/)
Äkta människor, 2012–2014, Lars Lundström (idea), Sveriges Television (SVT) et al., Sweden (https://www.imdb.com/title/tt2180271/)
Archive, 2020, Gavin Rothery, Independent Entertainment et al., UK/Hungary/USA (https://www.imdb.com/title/tt6882604/)
Battlestar Galactica, 1978–1980, Richard A. Colla & Alan J. Levi, Glen A. Larson Productions & Universal Television, USA (https://www.imdb.com/title/tt0077215/)
Battlestar Galactica, 2003–2009, Ronald D. Moore & David Eick (Executive Producers), SciFi Network, USA (https://www.imdb.com/title/tt0407362/)
Bicentennial Man, 1999, Chris Columbus, Touchstone Pictures et al., USA (https://www.imdb.com/title/tt0182789/)
Buck Rogers in the 25th Century, 1979, Daniel Haller, Glen A. Larson Productions, USA (https://www.imdb.com/title/tt0078579/)
Enthiran, 2010, S. Shankar, Sun Pictures/Utopia Films, India (https://www.imdb.com/title/tt1305797/)
Ex Machina, 2014, Alex Garland, A24 et al., UK (https://www.imdb.com/title/tt0470752/)
Exit, 2020, Sebastian Marka, Sommerhaus Filmproduktion, Germany (https://www.imdb.com/title/tt12664812/)
Galaxy Quest, 1999, Dean Parisot, Dreamworks Pictures, USA (https://www.imdb.com/title/tt0177789/)
Her, 2013, Spike Jonze, Annapurna Pictures/Stage 6 Films, USA (https://www.imdb.com/title/tt1798709/)
Ich bin dein Mensch, 2021, Maria Schrader, Letterbox Filmproduktion/SWR, Germany (https://www.imdb.com/title/tt13087796/)
I, Robot, 2004, Alex Proyas, Twentieth Century Fox et al., USA (https://www.imdb.com/title/tt0343818/)
Matrix, 1999, Lana & Lilly Wachowski, Warner Brothers et al., USA (https://www.imdb.com/title/tt0133093/)
Metropolis, 1927, Fritz Lang, Universum Film (UFA), Germany (https://www.imdb.com/title/tt0017136/)
Minority Report, 2002, Steven Spielberg, Dreamworks Pictures/20th Century Fox, USA (https://www.imdb.com/title/tt0181689/)
Raumpatrouille – Die phantastischen Abenteuer des Raumschiffes Orion, 1966, Theo Metzger & Michael Braun, Bavaria Atelier/Bavaria Film/Norddeutscher Rundfunk (NDR), Germany (https://www.imdb.com/title/tt0061289/)
Ready Player One, 2018, Steven Spielberg, Warner Brothers et al., USA (https://www.imdb.com/title/tt1677720/)
Simulant, 2023, April Mullen, WANGO Films et al., Canada (https://www.imdb.com/title/tt13130024/)
Space: 1999, 1975–1977, Gerry and Sylvia Anderson, ITV, UK (https://www.imdb.com/title/tt0072564/)
Surrogates, 2009, Jonathan Mostow, Touchstone Pictures et al., USA (https://www.imdb.com/title/tt0986263/)
Star Trek: Insurrection, 1998, Jonathan Frakes, Paramount Pictures, USA (https://www.imdb.com/title/tt0120844/)
Star Trek: The Next Generation, 1987–1994, Gene Roddenberry, Paramount Television, USA (https://www.imdb.com/title/tt0092455/)
Star Trek: The Original Series, 1966–1969, Gene Roddenberry, Desilu Productions/Paramount Television/Norway Corporation, USA (https://www.imdb.com/title/tt0060028/)
T.I.M., 2023, Spencer Brown, Arthrofilm et al., UK (https://www.imdb.com/title/tt21988182/)
The Creator, 2023, Gareth Edwards, 20th Century Studios et al., USA (https://www.imdb.com/title/tt11858890/)
The Machine, 2013, Caradog W. James, Red & Black Films/TV 4, UK (https://www.imdb.com/title/tt2317225/)
The Time Machine, 2002, Simon Wells, Warner Bros./Dreamworks Pictures, USA (https://www.imdb.com/title/tt0268695/)
The Orville, 2017–2022, Seth MacFarlane (idea), Fuzzy Door Productions/20th Century Fox Television, USA (https://www.imdb.com/title/tt5691552/)
Tron, 1982, Steven Lisberger, Walt Disney Productions, USA (https://www.imdb.com/title/tt0084827/)
Tron: Legacy, 2010, Joseph Kosinski, Walt Disney Pictures et al., USA (https://www.imdb.com/title/tt1104001/)
Virtuosity, 1995, Brett Leonard, Paramount Pictures, USA (https://www.imdb.com/title/tt0114857/)
Virtual Revolution, 2016, Guy-Roger Duvert, Lidderdalei Productions, USA/Canada/France (https://www.imdb.com/title/tt4054004/)
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.
Articles in the same Issue
- Frontmatter
- Editorial
- The future of HCI – editorial
- Research Articles
- Will the design of the human–product relationship follow user experience?
- The future of interactive information radiators for knowledge workers
- Exploring the evolving landscape of human-centred crisis informatics: current challenges and future trends
- Broadening the mind: how emerging neurotechnology is reshaping HCI and interactive system design
- Evolution of interaction-free usage in the wake of AI
- Augmented future: tracing the trajectory of location-based augmented reality gaming for the next ten years
- Augmented total theatre: shaping the future of immersive augmented reality representations
- Towards new realities: implications of personalized online layers in our daily lives
- The next decade in accessibility research
- The role of digital technologies and Human-Computer Interaction for the future of education
- The European commitment to human-centered technology: the integral role of HCI in the EU AI Act’s success
- From explanations to human-AI co-evolution: charting trajectories towards future user-centric AI
- Social anthropology 4.0
- Fiction meets fact: exploring human-machine convergence in today’s cinematographic culture