Article Open Access

Augmented total theatre: shaping the future of immersive augmented reality representations

  • Sergio Cicconi

    Dr. Sergio Cicconi, a PhD in Information and Communication Technologies from the University of Trento, blends expertise in Philosophy and Computer Science with a focus on Augmented Reality and e-learning. He previously worked at several US universities (Duke, University of Florida, SUNY), teaching courses on literature and new media. His research spans textual semiotics, digital media, and the intersection of literature with technology, leading to publications on augmented reality, e-learning, and hypertextuality. Recently, he has developed an Augmented Learning Environment for HoloLens, designed to introduce the elderly to digital culture.

Published/Copyright: May 22, 2024

Abstract

This work introduces Augmented Total Theatre (ATT), a new theatrical form that combines Total Theatre with Augmented Reality (AR) to transform theatrical experiences. We first explore ATT features, highlighting its capabilities in creating theatrical representations that surpass traditional theatre. We also examine current technological limitations that hinder the deployment of ATT's potential. We then venture into the future, focusing particularly on the next decade. We try to envisage the evolution of AR and assess whether future advancements will yield a form of AR capable of creating digital worlds that can deceive human senses. Additionally, we explore the role of Generative AI systems in addressing the problems that hold back the current ATT. Specifically, we probe the feasibility of a cost-effective, autonomous, and highly efficient generative AI system that can reshape and empower ATT, making it capable of real-time production of (theatrical and non-theatrical) representations of many events in the world. Finally, we try to imagine the ATT of the future: a sophisticated device that integrates cutting-edge AR technology with a super-performing generative AI system. This ATT, transcending its theatrical origins, emerges as a powerful tool for augmenting our sensory experiences and enriching our perception of reality.

1 Introduction

In this work, we aim to explore a new theatrical form we have defined as Augmented Total Theatre (ATT). This theatrical form utilizes the expressive potentials of Total Theatre (TT) and Augmented Reality (AR), expanding the traditional scope of theatrical representations. Our goal is to analyze ATT by examining its origins, its key components, and its revolutionary impact in the performing arts. Additionally, we are interested in understanding if and how technology in the coming decade will be able to enhance this new theatrical form, transforming it into a genuine tool for augmenting our perception of the world.

TT, developed by Richard Wagner in the mid-19th century, aimed to integrate various art forms to achieve deeper audience engagement in theatrical performances. Throughout the 20th century, TT evolved through contributions from various artists, incorporating innovative techniques and technologies to further engage audiences in theatrical representations.

In this evolution, ATT emerges as the most recent and advanced theatrical form, potentially more effective than TT, as it is able to fully engage the audience in its representations.

Such complete engagement is made possible through Augmented Environments (AEs), innovative forms of communication created using AR technology. The AEs host, or rather are, the theatrical representations for ATT. Our detailed analysis of the characteristics and expressive potential of AEs has enabled us to refine the concept of ATT as a new form of theatre capable of enhancing and renewing the current TT. But more importantly, this analysis revealed an unexpected and remarkable ability of ATT, which has become one of its most significant features: ATT is potentially capable of representing the world. Indeed, in addition to allowing the creation of innovative theatrical representations, ATT also enables the creation of a wide range of representations of events in the world that are normally excluded from theatre.

Unfortunately, ATT must contend with reality. The current technology does not allow easy implementation of these representations of events, thus hindering the full deployment of ATT’s potential. This realization has guided us to continue our investigation on ATT from a different perspective. In the second part of this work, we therefore set out to understand if and how future technology will allow us to transform the current ATT into a tangible device for creating and experiencing representations of theatrical and non-theatrical events.

To this end, we projected ourselves ten years into the future and made hypotheses about the evolution of AR technologies, to determine if these technologies would indeed enable the construction of a device for experiencing multisensory virtual spaces that seamlessly blend with reality and fully deceive our senses.

On the other hand, our future ATT should also be capable of autonomously producing representations of theatrical and non-theatrical events. We thus imagined the evolution of what are now called Generative AI systems, to see if and how they could be used to automate the production of representations of events.

In our envisioned future, AR hardware and generative AI software will merge into a sophisticated, and nearly invisible device, distributed on and within our bodies, capable of enhancing our perception of the world. This device will be our ATT of the future. Perhaps, at that point, it will be difficult to think of such ATT as merely a new form of theatre. It will rather be a powerful tool capable of radically transforming not only our perception of reality, but also ourselves.

2 Augmented total theatre

2.1 Total theatre

The Total Theatre concept embodies the notion that theatre is a convergence of arts, aiming to engage the audience fully by integrating various artistic and sensory elements. This form of theatre gained popularity around 1850, following the writings 1 of Richard Wagner. He perceived theatre as a form of representation of reality. However, he found conventional theatre restrictive and lacking in audience engagement. As a result, he proposed an enhanced form of theatre: the Gesamtkunstwerk, or “total work of art”. This refers to an artwork in which several artistic disciplines are merged to make a unified, coherent whole, providing the audience with a meaningful and immersive experience.

When applied to theatre, TT implies a holistic approach: TT comprises elements such as music and sounds, lighting, movement, voices, painting, costumes, makeup, design, set design, architecture, literary expressions, and more. These components are composed into a unit greater than the sum of its parts. The essence of TT lies not in the accumulation of these components, but in the interaction and integration among them.

Since Wagner, TT has engaged artists, theorists, and artistic movements. While we won’t delve into the specifics of TT’s evolution here, it’s worth noting the authors who have played pivotal roles in shaping and enriching the concept. Following Wagner, contributors include Maeterlinck, Mallarmé, Appia, Claudel, Artaud, Graham, Grotowski, Cage, Barba, Brook, the Living Theatre, Wilson, and La Fura dels Baus. These authors, along with others, have worked to broaden and occasionally deconstruct the boundaries of traditional theatre, with the aim of captivating the audience through innovative and multisensory experiences. 2–5

Our focus here is the common thread through many theories and authors who have explored TT: traditional theatre has a restricted way of depicting reality; TT tries to go beyond this mode by incorporating more aspects of reality into the theatrical experience, aiming to engage the audience in a deeper and more comprehensive manner.

Additionally, we find another idea worth considering. It concerns a different view on the concept of Gesamtkunstwerk. This view stems from Wagner’s writings titled “The Artwork of the Future”, 1 where Gesamtkunstwerk is also understood as the artwork of the future. From this perspective, TT, as a total work of art, can be seen as the future of theatre. As such, TT represents a never fully realized potential, so that its ultimate expression is constantly pursued but never fully achieved. 2

The 20th Century witnessed many realizations of TT thanks to technological evolution. Technological innovations have provided TT with new creative opportunities, enabling the integration of newer multisensory elements into performances. We can think of the use of computerized lighting systems, 3D sound spaces, interactive sets, or video projections on stage. These, and other technological components have enriched the sensory experience of spectators and expanded the limits of traditional performances.

However, there is a boundary that even recent representations of TT struggle to face: the physicality of the real world. In a form of theatre that is total, but still essentially physical and mechanical, even if contaminated by digital interventions, many objects on stage, the stage itself and the actors cannot evade the laws of physics and the materiality of the real world.[1]

This boundary appears challenging to overcome, even with the most advanced technologies found in TT representations. That is, unless TT embraces a new technology: Augmented Reality. AR seems to be the only technology capable of overcoming the barrier that even the most recent forms of TT encounter.

This notion forms the core of our investigation into the relationship between AR and TT. We believe that the integration of AR with TT could result in a more evolved and enriched form of TT.

2.2 Augmented reality and augmented environments

Before discussing how AR can create an evolved form of TT, it is necessary to introduce some ideas about AR. We have examined AR features in depth in other works. 6–8 Here, we explore only the AR features useful for our discourse. According to a classic definition, “AR allows digital content to be seamlessly overlapped and combined into our perceptions of the real world in real-time”. 9 This means that, through a technological device, the user’s view of the real world can be enriched in real-time with digital content that integrates seamlessly with real content and interacts with it.

This definition, although correct, is too general for this work. It needs refinement: the AR we deal with here is immersive, and thus more complex than the non-immersive AR created on mobile devices. Immersive AR uses wearable devices – Head-Mounted Displays (HMDs) such as the Microsoft HoloLens 2, Meta Quest Pro, Magic Leap 2, and Apple Vision Pro – to create the augmented spaces the user is immersed in.

Later, we will make extensive use of augmented spaces, so from now on we define them more clearly as Augmented Environments (AEs): complex spaces arising from merging real-world elements with those created by AR technology.

We treat these AEs as cultural objects, adopting an approach to AR different from the more widespread one that sees AR simply as a technology. We think of AR as a medium, that is, a system of signification and communication.[2] This perspective allows us to see AEs as the products of that system of signification and communication, that is, as forms of communication that can be produced, studied, and used in various cultural contexts and knowledge domains such as new media, digital arts, communication studies, semiotics, human-computer interaction, etc. 6

Next, we examine the most important characteristics of AEs, meant as forms of communication, refining the characteristics of AR introduced in the 1990s. 9,10

Augmented environments using AR technology are:

  1. Augmented: This is a distinctive feature of AR and AEs. A layer of the virtual world is seamlessly overlaid in real-time onto the real world, creating a unified, complex and richer environment.

  2. Three-Dimensional: All holograms (virtual objects) in the virtual world are 3D and remain realistically attached to real-world elements. They continually re-adapt their features to the real environment, which, from the user’s perspective, constantly changes in relation to their movements.

  3. Interactive: This is not a unique feature of AR and AEs. Many interactive performances have been created in physical environments without AR. Usually, in such environments, users are free to interact with physical objects or even with virtual objects, mostly projected onto walls or onto real 3D objects existing in the environments (thus simulating a semblance of three-dimensionality). However, with AR this interactivity is enhanced. All virtual objects are 3D, can be anywhere within the AEs, and can interact with each other, with real-world objects, and with the user. They react in real-time to visual and auditory stimuli, voice commands or user gestures, or to commands given through input devices such as pads and controllers, or from scripts attached to the objects themselves. Most importantly, all virtual objects are programmable [11, p. 49]: thanks to the scripts, it is possible to define the rules for interaction, to modify the shape, color, size, and position, and, in general, the behavior of virtual objects in response to various inputs or conditions that arise during the user’s experience in the AEs.

  4. Immersive: Immersivity is not a unique feature of AEs, nor is it new to theatre.[3] Rather, it is a property that emerges from the other features examined so far. Given an AE which is 3D and interactive, and is also visitable and, in a sense, “inhabitable” by the user, it is reasonable to state that such an environment immerses the user within it.

  5. Unconstrained by the physical laws: Holograms within an AE are not constrained by the physical laws of the real world and can display features and behaviors that go beyond what is possible in the real world (see note 1).

  6. Unbound by specific times: All AEs are generated by apps running on HMDs that can be launched at any time chosen by the user. This feature becomes relevant when, as in this work, we talk about particular AEs that are spaces for theatrical performances. Indeed, such AEs hosting theatrical performances free users from the time constraints of traditional theatrical performances.

  7. Portable: Portability of AEs is a consequence of AR real-time adaptability to various real environments, to its flexibility in interacting with the surrounding physical world. This enables users – regardless of their location – to instantly access any AE. Traditional installations and theatrical works, though portable during tours, require time-consuming setup in new locations. In contrast, AR’s adaptability to various environments is almost instantaneous, highlighting the superior portability of AEs.

  8. Enabling Non-linear Storytelling: Any AE can host the representation of a story[4] in which the user plays the dual role of spectator and main actor. At every stage of the story, the user can interact with the holograms existing within the AE, and their actions and decisions determine the story’s evolution. This allows for the construction of dynamic and non-linear narratives, offering the user a unique, personalized and interactive immersive experience.
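The programmability described in item 3 can be made concrete with a short sketch. The class below is a toy illustration in Python, not any vendor's actual API: it shows only the pattern of attaching behavior scripts to a virtual object and letting events raised during the user's experience rewrite its state.

```python
# A toy sketch (not any vendor's API) of the "programmable hologram" idea:
# a virtual object carries scripts that rewrite its state in response to
# events raised during the user's experience in the AE.

class Hologram:
    def __init__(self, name, position, color="white", scale=1.0):
        self.name, self.position, self.color, self.scale = name, position, color, scale
        self._handlers = {}            # event name -> list of behavior scripts

    def on(self, event, script):
        """Attach a behavior script (any callable taking the hologram) to an event."""
        self._handlers.setdefault(event, []).append(script)

    def trigger(self, event):
        """Fire an event: every attached script gets a chance to change the object."""
        for script in self._handlers.get(event, []):
            script(self)

# Example rules: the user's gaze makes the hologram turn gold and grow
lamp = Hologram("lamp", position=(0, 1, 2))
lamp.on("gaze", lambda h: setattr(h, "color", "gold"))
lamp.on("gaze", lambda h: setattr(h, "scale", h.scale * 1.5))

lamp.trigger("gaze")
print(lamp.color, lamp.scale)   # gold 1.5
```

The same event-and-script pattern scales from single objects to the branching narratives of item 8, where user actions select which scripts fire and hence how the story evolves.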

2.3 Augmented total theatre enhances (and renews) total theatre

We have gathered the components necessary to define this innovative form of theatre, which we refer to as Augmented Total Theatre. ATT representations are crafted in AEs, harnessing the features of AR and AEs. At the same time, ATT also retains the tools and capabilities of TT, making it an enhanced version of TT. All the features of AEs are also the features of ATT representations. All the objects and actions that exist or are performed in AEs can similarly exist and be performed within ATT representations. Therefore, any author looking to create representations for ATT should take those features, objects, and actions into account, and then use them to give shape to new theatrical representations.

Comparing ATT with Wagner’s Total Theatre, the differences are clear. ATT is multisensory, immersive, interactive, unconstrained by physical laws, and portable. In ATT, the user becomes a performer, with the power to alter the narrative they are immersed in. ATT immerses users in captivating multisensory experiences, well beyond Wagner’s imagination.

Undoubtedly, AR’s role in shaping ATT representations makes ATT a unique form of theatre, clearly distinct from other forms. This is particularly noticeable in the different ways theatrical representations are experienced: in ATT, the experience is typically individual, and requires the user to wear some type of HMD. This perhaps is the most distinguishing feature of ATT, setting it apart from traditional theatrical forms, which value the collective enjoyment of an artistic event as a key aspect of the theatre-going experience.

Now, one might question the necessity of invoking theatre when discussing forms of communication produced with AR in contexts greatly different from theatrical ones, and involving technologies that have little to do with theatre. Why not simply discuss immersive AR experiences without the constant references to theatre?

We believe that there are undeniable benefits to preserving this connection between AR and theatre. In fact, representations for ATT share many characteristics with those created for more traditional theatrical forms, both in the creative process that gives life to the representations and in the expressive languages used and the methods of constructing narrative structures. This should not be disregarded. By maintaining the AR-theatre connection, we can immediately use established theories and methodologies for theatre and media to design, develop and analyze new ATT representations without the need to reinvent theoretical and practical tools from scratch.

We wish to conclude this section with some considerations on ATT that will prove important in the rest of this work. Our research on ATT originated with the goal of defining an innovative form of theatre capable of combining Wagnerian Total Theatre with the latest technologies for the augmentation of reality. However, during the development of this idea, we realized that ATT has an even broader potential than we initially anticipated. Thanks to its characteristics, ATT radically extends the very object of theatrical representation. ATT allows us to “stage” a variety of real experiences and events[5] that are normally considered thematically unsuitable, narratively uninteresting, or insufficiently structured to become representable events in a conventional theatrical representation.

Such events, although not conventionally theatrical, are all reproducible, or representable, as they are perceivable through one or more of our senses. And if they are representable, then it is possible to create their representation suitable for ATT. Thus, ATT becomes a powerful tool for the production of forms of communication in the arts and in entertainment, but also in education and in culture. And, most importantly, in many events of ordinary life.

2.4 Augmented total theatre in the real world

So far, our discussion on ATT has been primarily theoretical. We have explored the nature of ATT and also established that the use of AR to create representations of events, whether theatrical or not, implies the creation of AEs with all the characteristics already examined, also exploiting all the expressive potential of TT.

Now, the critical question is: are such representations for ATT practically achievable? In other words: once we have identified an event that can be represented, can we concretely create a representation of that event for ATT that can be experienced through an HMD?

The answer is affirmative, though with significant limitations.

An example of a perfectly working narrative AE is Microsoft’s Fragments, a crime thriller for the HoloLens, 12 where the user must interact with the holograms of some detectives to help them solve a crime. Another example of a complex AE, in this case non-narrative, is the Learning Augmented Environment that we ourselves have developed to introduce the elderly to digital culture. 13 According to the perspective outlined so far on ATT, both AEs have all the characteristics to be considered in all respects as real representations for ATT, confirming the feasibility of our idea: ATT is a new theatrical form that can really be used to create immersive theatrical representations.

However, ATT, meant as a tool to guide the creation of representations, must confront reality, and the real process of creation of those representations. And it must also confront the inevitable limitations of this process.

The process of development of virtual and augmented environments largely follows that used to develop videogames, with five main phases: conception, pre-production, production, iteration and review, and finalization. 14,15 It involves generating creative ideas, detailed development of elements, coding and graphics to create prototypes, testing and refining, and resolving bugs for the final version.

This process is lengthy, complex, and costly, requiring the prolonged commitment of a team of professionals in various disciplines, including game artists, game designers and level designers, programmers, game developers, project managers, sound designers, audio engineers, cinematic and visual special effects artists. The time span from the inception to the launch of the finished product, whether it is a videogame or an AE, can stretch from months to years. And this situation remains true, although on a smaller scale, even for smaller and more manageable products.

This description of the process to create AEs – which, we recall, are representations for the ATT – helps us understand the meaning of our previous statement: we can create representations of many events, including non-theatrical ones, but with significant limitations. Limitations that arise precisely from the process necessary to create such representations. The process does work (otherwise, we would not be able to produce videogames and games in AR and VR). However, it requires a significant commitment of human and economic resources that inevitably require a return on investment. So, it is not realistic to think that such resources can be used to create non-business-oriented representations, or representations of events that are not particularly important, such as everyday events.

Currently, overcoming these limitations seems impossible. Eliminating them would need a radical change in the process of creating augmented and virtual environments, which is currently hardly achievable.

In conclusion, the idea of an ATT capable of representing many events in the world is possible and has great potential, but becomes difficult to realize when faced with reality. And as long as this situation remains, the ambition of the Augmented Total Theatre is significantly curtailed.

3 The future of augmented total theatre

To understand if ATT can fully deploy its potential, we must ask what the future of ATT could be. Will future technology perfect AR to the point that digital content will be seamlessly overlapped and combined into our perceptions of the real world in real-time? What will be the consequences of this advanced form of AR on ATT? Will technology enable a fast, efficient, and economical process of creation of representations of events which does not involve significant human resources? Can this new way of creating representations integrate into ATT?

In the remainder of this work, we explore possible answers to these questions, trying to offer an articulated vision of what the future of ATT, and of the technologies related to it, could be ten years from now.

We begin with the assumption that creating representations for ATT requires a robust AR technology. Without it, we lack the foundation to achieve the sensory illusion that lies at the heart of ATT. Therefore, to speculate about the future of ATT, we must first consider the future of AR. However, AR technology alone cannot create representations for ATT. As we have seen, the creation of representations is a lengthy and complex process that has little to do with AR technology.

So, in talking about the future of ATT, we divide the discussion into two parts: first, we address AR technological aspects, examining the components that enable the creation of the multisensory virtual space that, in the user’s perception, will seamlessly integrate with the real space. Then, we address the issues related to the creation of representations for ATT, examining the applications and programs that make possible a new form of creation and management of such representations. Only then will we define the look and features of a device for the production of representations for ATT.

3.1 Technologies for the creation of multisensory virtual spaces

From a technological standpoint, the primary objective of technologies for immersive AR is to create devices that completely deceive our senses. When we are confronted with a person, a fragrant flower, a monument, or a cup of tea on a table, and can no longer discern its nature (is it real or virtual?), the goal of AR will have been achieved.

The current state of immersive technologies does not yet suffice to fully deceive human senses – a pivotal condition for creating augmented worlds that Augmented Total Theatre aims to realize. Current research developing these technologies primarily focuses on sight and hearing, as these are the senses we use most to construct the most significant parts of our knowledge of the world. Yet, touch, taste, and smell are also important for understanding the world, and AR cannot overlook their existence.

Thus, technologies for sensory deception aim to create distinct virtual spaces for each sense. An ideal device for experiencing fully multisensory immersive AR would integrate technologies that deceive all senses by constructing a virtual multisensory space that seamlessly overlaps with the real multisensory space.

Current technologies have not yet achieved this goal. To assess how close we are to such an ideal device, we will examine one by one the sensory spaces created by our senses, and also the technologies for creating virtual elements within those spaces.

3.1.1 Visual space

Given that the visual space is the most significant and complex sensory space, and the one most utilized by AR, it is also the space that demands the most attention. 16–19 With respect to the visual space, the research objective is to develop displays that show vivid and realistic images at “retinal resolution”: a pixel density that matches the resolution of the human eye, equivalent to about 60 pixels per degree per eye. Displays with such resolution would render virtual images indistinguishable from real ones. However, the quality of the virtual images, particularly when in motion, also depends on other vision-related aspects such as the perception of movement and depth, as well as the breadth of the visual field. Each of these aspects must be considered and then artificially recreated through the development of specific hardware components.

For instance, to achieve realistic movement perception, displays need retinal resolution and also low latency (the delay between the user’s head movement and the corresponding change in the image on the displays). High latency causes disorientation and discomfort and reduces realism. However, low latency requires displays with high refresh rates which, in turn, demand high-performance processors and graphics cards, consuming significant energy and generating heat that must be dissipated, and so on, each addition further complicating an already intricate system. Moreover, we should also consider that the virtual images produced on the displays move within a three-dimensional virtual space; therefore, the displays must be able to mimic the human eye’s “depth of focus”, keeping virtual objects at varying distances in focus without losing image definition. And not even this is enough to deceive the eye. In fact, ideal displays should show images at retinal resolution, with low latency and depth of focus, across the eye’s full visual field (about 160° horizontal by 150° vertical) to avoid limitations in peripheral perception that would inevitably reveal the artificiality of the virtual images. Finally, and this is not a minor detail, all the hardware components enabling these features must be small enough to be integrated into displays that are lightweight, manageable, and comfortable for extended wear.

It is clear by now that the development of displays capable of perfectly and indistinguishably merging the real world with a virtual visual space is an extremely complex technological challenge. And although technology has made much progress in a short time, there are still several problems to be solved on many fronts.

Currently, there is still no display on the market with performance capable of deceiving the human eye. Even the best-performing HMDs, such as the Varjo Aero (with a pixel density of about 35 pixels per degree) or the Apple Vision Pro (which seems to have a pixel density of about 32), remain far from the 60 pixels per degree of retinal resolution. As for the refresh rate, devices on the market average around 90 Hz or more, thus guaranteeing acceptable latency values, even if these devices often have low pixel densities and thus produce not particularly realistic virtual images. And with regard to the field of view, the highest-performing devices on the market can create virtual fields of view spanning approximately 115° × 135°. This is still insufficient to fully overlap with the area covered by the human eye’s field of view.
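A back-of-the-envelope calculation makes clear how demanding this target is. The snippet below simply multiplies the retinal density by the field of view cited above; the figures come from the text, and the helper function is purely illustrative.

```python
# Estimate of the per-eye display resolution needed for "retinal" AR
# across the eye's full field of view, using the figures cited in the text.

RETINAL_PPD = 60          # pixels per degree matching human visual acuity
FULL_FOV = (160, 150)     # approx. human field of view (horizontal, vertical), in degrees

def pixels_required(ppd, fov_deg):
    """Total pixel grid per eye for a given angular density and field of view."""
    h_deg, v_deg = fov_deg
    return (ppd * h_deg, ppd * v_deg)

w, h = pixels_required(RETINAL_PPD, FULL_FOV)
print(f"Retinal display per eye: {w} x {h} = {w * h / 1e6:.0f} MP")
# 9600 x 9000, roughly 86 megapixels per eye, rendered at 90 Hz or more
```

Such a grid is well beyond any shipping HMD panel, which is why current devices trade off density, field of view, and refresh rate against one another.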

However, there is research, and there are prototypes, that give us hope for the future. New technologies and materials could change development perspectives and produce rapid evolutions in the AR sector. For instance, Meta’s Display Systems Research has developed a prototype boasting near-retinal resolution, equivalent to a pixel density of 55. 20 Therefore, considering the speed of technological advancement, it is reasonable to think that within a decade we will have displays capable of deceiving human eyes into believing that the moving people and objects seen through the displays belong to reality.

3.1.2 Auditory space

Auditory space encompasses the distribution and perception of sound sources with positional and qualitative characteristics. The human ear perceives sounds ranging from 20 Hz to 20,000 Hz. Creating an augmented auditory space involves replicating sound propagation in a virtual environment, mirroring reality. Essential to this are audio playback devices capable of reproducing sounds within this frequency range. Sound spatialization, simulating sound propagation in three-dimensional virtual space, is also crucial. Spatialization lets users identify sound direction and distance, as if the sound originated from a specific point. Finally, for seamless blending of real and virtual auditory spaces, accurate representation of virtual and real objects in the user’s environment is essential. This requires applying effects like reflection, refraction, and absorption based on virtual surfaces and environmental geometry. 21,22
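Two of the cues a spatializer must reproduce can be expressed compactly. The sketch below, assuming textbook values for head radius and the speed of sound, approximates inverse-distance attenuation and the interaural time difference (ITD) via Woodworth's classic far-field formula; it illustrates the underlying physics, not an actual spatialization engine.

```python
# Two basic spatial-audio cues: distance attenuation (inverse law) and
# interaural time difference (Woodworth's far-field approximation).
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at about 20 °C
HEAD_RADIUS = 0.0875     # m, average human head (textbook value)

def distance_gain(distance_m, ref_m=1.0):
    """Inverse-distance amplitude attenuation relative to a reference distance."""
    return ref_m / max(distance_m, ref_m)

def itd_seconds(azimuth_deg):
    """Woodworth's model: arrival delay between the two ears for a distant
    source at the given azimuth (0° = straight ahead, 90° = fully lateral)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 90° to one side reaches the far ear roughly 0.66 ms late,
# and a source twice the reference distance away arrives at half amplitude.
print(f"ITD at 90°: {itd_seconds(90) * 1e3:.2f} ms, gain at 2 m: {distance_gain(2.0)}")
```

Real HRTF-based spatializers add frequency-dependent filtering, reflections, and occlusion on top of these cues, but direction and distance perception rest on exactly this kind of physics.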

Current digital audio technologies can reproduce all frequencies audible to humans. So much so that current audio devices, such as speakers and headphones, can deceive human hearing, making digitally produced environmental sounds, noises, music, and voices indistinguishable from real ones. Moreover, as early as 2016, the Microsoft HoloLens HMD used sound spatialization and all the above-mentioned effects to enhance the quality of the auditory experience. The spatialization system was subsequently adopted and refined by new HMDs released on the market, and is now an integral part of every HMD for mixed reality.

Thus, regarding virtual auditory space, the future has already arrived. In the next decade, sound production devices will be miniaturized, and software will enhance the user’s presence in 3D sound spaces. But ear-deceiving technology is already within our reach.

3.1.3 Olfactory and gustatory space

The senses of taste and smell are important for our perception of the world. However, research in AR has primarily concentrated on defining virtual spaces for sight and hearing. Consequently, the development of technologies that can create artificial spaces with the appropriate stimuli to convincingly deceive the human nose and tongue is still in its early stages.

Smell, through olfactory receptors, makes it possible to perceive the concentration, quality, and identity of volatile molecules in the air. The human nose hosts about 10 million of these receptors, capable of perceiving thousands of volatile compounds, and it is precisely on these receptors that digital scent technology focuses. The idea is to construct an artificial olfactory space using so-called scentography devices, capable of releasing mixtures of volatile compounds in a programmable and safe manner.

Since the 1960s, there have been many more or less successful attempts to build systems for releasing odors in ludic environments [ 23 , Ch. 5]. The most recent example is the film Postcard from Earth (2023), 24 by Darren Aronofsky. The experience of the film, projected at 18K resolution on the 270° screen of the Sphere in Las Vegas, was enriched by a device capable of creating an artificial olfactory space, with smells linked to the images projected on the screen. Postcard from Earth shows the maturity of a technology that makes it possible to construct a world of artificial odors associated with images and videos.

However, when it comes to AR, the technological challenge is even more daunting, and there are no effective solutions yet. In AR, the goal is to create scentography devices that are not only effective at producing artificial odors, but also small and non-invasive, despite the need to remain close to the nose and its olfactory receptors. In addition, these devices must be integrable into, or connected to, the HMDs. The need to store, in refillable containers, the liquids used to create the odor-generating mixtures of molecules adds further complexity to the development of scentography devices for AR.

Similar challenges arise with taste and the artificial gustatory space. The tongue contains about 10,000 taste buds, the receptors for the five basic tastes (sweet, salty, sour, bitter, umami). Research on artificial taste therefore focuses on developing devices that can adequately stimulate the tongue to reproduce the basic tastes and mix them in a controlled, programmable manner. Various devices focus on simulating only some of the basic tastes. Other prototypes experiment with hybrid solutions, such as the controlled delivery of tasty substances through oral interfaces applied to the palate, which generate electrical impulses to simulate sweet or salty tastes. Still other devices use thermal rather than electrical stimulation to achieve the same result. 25 However, the accurate reproduction of the five fundamental tastes remains complex and not yet within the reach of current technologies.

Furthermore, when discussing artificial taste in relation to AR, we must address additional requirements, which turn into technological challenges. As with the devices generating artificial odors, the devices generating artificial taste stimuli must be compact and non-invasive (a particularly challenging requirement, given that the receptors to be stimulated are located inside the mouth and are unevenly distributed across the tongue and palate), and they must be safe for health. Adding to the complexity, these devices use chemical substances to generate taste stimuli, and therefore need storage containers that require periodic refilling.

When it comes to taste and smell, it is clear that the potential of technologies capable of creating artificial sensory spaces that can deceive our olfactory and gustatory systems is significant. However, these technologies are still not up to par, especially in the realm of AR and VR. The challenges of miniaturizing these devices, making them portable, and ensuring their safety for health make their development particularly complex.

On the other hand, technology in these fields is advancing rapidly. We are optimistic that within the next decade we will see the emergence of devices that can generate artificial smell and taste stimuli. These devices will be compact, portable, non-invasive, and efficient enough to convincingly deceive the human nose and tongue.

3.1.4 Tactile space

The sense of touch is fundamental for perceiving the surrounding world and for physical interaction. The natural sensors that detect contact stimuli are located in human skin, and are thus distributed across the external surface of the body. To artificially recreate an equivalent of the natural tactile space, it is necessary to develop a technology capable of transferring to the human body the tactile stimuli (such as touches, blows, temperatures, rubbings, prolonged pressures) that come from the intangible objects (the holograms) of the virtual world. This is precisely what existing haptic suits and haptic gloves do. 26–29

To virtually recreate the tactile space, haptic suits use so-called tactile actuators incorporated into the suit. The actuators are tiny electronic components, distributed on the inner surface of the suit in contact with the skin, which generate various stimuli on the skin in response to events coming from the holograms. Such actuators use vibration micro-motors, temperature-modulation systems, or electromyographic systems that directly stimulate the muscles. Clearly, the greater the number of actuators, the better the quality of the tactile experience; but the greater the number of actuators, the greater the weight and bulk of the haptic suit, as well as the heat it generates and its energy consumption.
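A hedged sketch of the actuator-driving logic just described: a contact event reported by the physics engine is mapped to drive levels for the actuators near the contact point, with intensity falling off with distance. The grid layout, coordinates, and names below are illustrative assumptions, not any real suit's API:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    x: float        # contact point on the suit surface (suit coords, cm)
    y: float
    force: float    # normalized 0..1, reported by the physics engine

def actuator_levels(contact, actuator_grid, radius=8.0):
    """Return a drive level (0..1) for each actuator within `radius`
    of the contact point, decreasing linearly with distance."""
    levels = {}
    for (ax, ay) in actuator_grid:
        d = ((ax - contact.x) ** 2 + (ay - contact.y) ** 2) ** 0.5
        if d < radius:
            levels[(ax, ay)] = contact.force * (1.0 - d / radius)
    return levels

# Hypothetical 6x6 actuator grid with 5 cm spacing on a torso panel:
grid = [(x, y) for x in range(0, 30, 5) for y in range(0, 30, 5)]
touch = Contact(x=10.0, y=10.0, force=0.8)
print(actuator_levels(touch, grid))
```

The sketch also makes the trade-off in the text visible: a denser grid (more actuators) gives a finer rendering of the same contact, at the cost of more hardware, heat, and power.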

The suits currently on the market, even the most sophisticated and expensive ones, such as the HaptX Suit or the Tesla Suit, exhibit the above problems and limitations to varying degrees: they are still bulky, heavy, and uncomfortable to wear; often, they need to be connected to an electrical system to function; most importantly, they have a limited ability to reproduce complex tactile sensations like temperature, pressure, or pain. In short, these suits may be sufficiently suitable for immersive experiences in VR, in well-delimited environments capable of providing the necessary energy to keep the suits operational. However, their limited portability, high energy consumption, bulkiness, and discomfort in wearing them are factors that make current haptic suits less suitable for immersive AR experiences.

In any case, it is worth noting that research is focusing on the development of new types of actuators. Fields such as soft robotics and microfluidics aim to create tactile actuators that are increasingly lightweight, soft, and versatile, capable of simulating a wide range of skin-level sensations in an increasingly realistic way, with less heat generation and reduced energy consumption. It is therefore easy to imagine[6] that in a decade, haptic suit technology will produce suits that are light and comfortable to wear, energy-efficient, and, above all, capable of reproducing all the tactile stimuli generated during users’ interaction with holograms in the virtual world.

A similar discussion applies to haptic gloves. Their role is crucial in simulating touch, especially given the high sensitivity of our hands in physical interactions. The development of these gloves, however, faces significant challenges. Current models are often heavy and uncomfortable, with the mechanisms for force feedback to the fingers being too bulky, visible, and not energy-efficient. Moreover, the tactile feedback they provide still lacks sophistication and differentiation.

Looking ahead, the future of haptic glove technology remains less certain than that of haptic suits. The key obstacle lies in miniaturizing force feedback mechanisms while retaining their effectiveness. Advances in technology over the next decade may lead to innovations in this area, but the exact trajectory is harder to predict compared to haptic suits. The evolution of haptic gloves will likely depend on breakthroughs in making these mechanisms more compact and efficient without compromising their functionality.

3.2 Ten years in the future: the device for multisensory augmentation of reality

Now we combine the components examined in the previous sections to create an ideal Device for Multisensory Augmentation of Reality (DMAR), as shown in Figure 1 (with the mentioned components clearly visible).

Figure 1: 
Four components of the DMAR, each designed to produce stimuli in the different sensory spaces: visual, auditory, olfactory, and gustatory (image by the author using DALL-E).

In ten years, the display will likely be the only visible component, albeit less so than in Figure 1. As for the other components, it is reasonable to predict that in the next decade they will become smaller, more fashionable, and less invasive.

Sensors, cameras, speakers, microphones, and modules for environment tracking and mapping and for connectivity may be integrated into the display, as happens with current devices. More likely, however, they will be concealed in smart clothing, gadgets, jewelry, smart piercings (in the nose, tongue, ears), and smart tattoos. 30, 31 And regardless of those components’ size and shape, they will not be far from the head, given that four of the five human sensory organs that need to be deceived receive their inputs from areas of the head and face.

Micro-displays implanted in contact lenses could replace larger displays (see box in Figure 1), assuming that in a decade display technology achieves retinal resolution within a few square millimeters 32 , 33 and fully caters to all vision-related needs as discussed in Section 3.1.1. The adoption of contact lenses would eliminate the need for a visible display, thereby contributing to the creation of an invisible, comfortable Device for Multisensory Augmentation of Reality for everyday use.

To complete the design of our ideal DMAR, we must also consider the interfaces the user will wear to create a virtual tactile space: haptic gloves and suits (see Figure 2). As discussed in Section 3.1.4, existing interfaces are not perfect at creating believable virtual tactile spaces, but advancements are expected in ten years. Haptic interfaces will be lighter, more comfortable, and capable of simulating a broader range of tactile sensations. However, we don’t foresee any groundbreaking transformations in those interfaces. Indeed, we imagine that despite potential enhancements in comfort and performance, the prospect of donning gloves or a full haptic suit in everyday situations might not be appealing or particularly useful to the majority of people.

Figure 2: 
The fifth component of the DMAR: haptic gloves integrated within a haptic suit, designed for the production of tactile stimuli (image by the author using DALL-E).

We believe that high-performance haptic interfaces will be used mostly in controlled environments (such as labs, research centers, gaming centers, or haptic simulators) for specific experiences like games, rehabilitation, training, or other special activities requiring artificial tactile stimulation throughout the body. For everyday use, we instead foresee the development of more affordable, discreet solutions like haptic bands, patches, or clothes with integrated haptic actuators on the body parts most exposed to contact.

All other DMAR components, like CPUs, GPUs, RAM, power cells, and storage devices will be “everyware”, 34 wirelessly connected to each other and the Cloud. They will be distributed across smart gadgets, ornaments, piercings, and clothing. Ultimately, the entire system will be powered by the wearer’s body movements and heat.

Wearable technologies are not a novelty. 35 , 36 However, in the next decade, technological advancements will further miniaturize these wearable components, enhancing their efficiency, reducing their energy consumption, making them more affordable, and more stylish. So much so that wearable computers will truly be everyware. And they will be invisible, unnoticeable, on our bodies and within our bodies. Which is precisely what we believe will happen with our DMAR.

3.3 Beyond hardware: the generator of representations for augmented total theatre

Finally, we can focus again on the Augmented Total Theatre, the central topic of this work, addressing the questions that concluded Section 2.4: What could be the future of ATT? Will future technology allow the creation of representations of events for ATT through a fast, efficient, economical process that does not require significant human resources? Will it be possible to integrate into ATT this new way of creating representations of events?

We will therefore describe an ATT from the near future, ten years from now. And, obviously, this ATT will be equipped with the DMAR previously discussed.

We begin with some general observations. 2023 marked the rise of Artificial Intelligence (AI), specifically Generative AI systems, or Large Language Models (LLMs), like ChatGPT, Claude, Bard, Gemini, etc. These are forms of Weak Artificial Intelligence (WAI)[7] that have rapidly gained popularity, impacting production processes in a variety of fields. We believe these AI systems, in more advanced versions, will also play an essential role in the evolution of ATT.

Currently, Generative AI systems, using textual, vocal, and visual inputs, can produce high-quality images, videos, diagrams, music, voices that speak and sing, programming code, 3D models, novels, scripts, and a range of texts suitable for the most diverse personal and professional contexts. And almost every day we discover new ways to use Generative AI systems to create forms of multimedia communication that increasingly resemble works of human ingenuity.

In particular, we think of the so-called GPTs, or chatbots, or AI agents, that is, customized Generative AI systems created and trained by individual users or organizations. Currently, these AI agents can handle a variety of tasks efficiently and swiftly, using multiple knowledge bases that contain sets of specific information on an individual, a topic, or a domain of knowledge. After training, AI agents can act as digital alter egos of the users who created them and fed them with personal information, or they can take on specific professional roles, assisting flesh-and-blood professionals in their work.

A year ago, all of this was hardly imaginable. At the moment, these AI agents are imperfect: on various occasions they are unreliable, and produce disappointing results. But we are only at the beginning. Considering the speed at which they are evolving, it is easy to imagine that, regardless of their AI level, in ten years they will be able to take on the roles of real professionals across a variety of domains of knowledge. And they will most likely be economical, reliable, efficient, and fast.

We believe these Generative AI systems will be crucial for the development of a powerful ATT. This new ATT, far from being just a theoretical tool, will have the capability to create in real-time the representations of numerous events in the world, and will also facilitate their immediate use through the DMAR. We envision this new ATT as a concrete device that integrates the DMAR with a unique piece of software, which we refer to as the Generator of Representations for ATT (GR). The GR is essentially a management system for augmented multisensory immersive experiences. Its function is to ensure the smooth operation of the various hardware components previously examined (the DMAR), which are essential for deceiving human senses. More importantly, the GR is also responsible for the real-time creation and management of all elements of the virtual multisensory space that seamlessly blend with the real multisensory space to create immersive augmented experiences.

We can even try to imagine some details about the operation of this GR.

The GR will have at its disposal several custom knowledge bases, both private and public, stored on the Cloud or on local storage devices. These knowledge bases will contain information to create a wide range of immersive experiences. The GR will also coordinate a team of AI agents, each being the digital equivalent, in terms of knowledge, skills, and ability, of one of the professional figures (e.g., game artist, designer, developer, sound designer, visual effects artist, etc.) involved in the process of creation of representations of events examined in Section 2.4. These AI agents will all be connected to each other and to the Internet, constantly updating their knowledge and skills. The GR will manage and coordinate these AI agents, deciding how and when to use their specific skills during the process of creation of a particular representation of an event.
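The orchestration just described can be sketched, very schematically, as a coordinator that dispatches sub-tasks to role-specific agents. The roles, task pipeline, and class names below are our illustration of the architecture, not an existing framework; in a real system each agent would call a generative model rather than return a string:

```python
class Agent:
    """Stand-in for a specialized generative AI agent."""
    def __init__(self, role):
        self.role = role
    def run(self, task, context):
        # A real agent would query an LLM with `task` and `context`;
        # here we just record which agent handled which task.
        return f"{self.role}: {task}"

class GeneratorOfRepresentations:
    """Sketch of the GR: coordinates agents and knowledge bases."""
    def __init__(self, knowledge_bases):
        self.kb = knowledge_bases
        roles = ("storyteller", "3d_modeler", "sound_designer",
                 "vfx_artist", "behavior_scripter")
        self.agents = {r: Agent(r) for r in roles}
    def create_representation(self, event, user_profile):
        context = {"event": event, "user": user_profile, "kb": self.kb}
        plan = [("storyteller", "draft interactive script"),
                ("3d_modeler", "build holograms and environments"),
                ("sound_designer", "create spatialized audio assets"),
                ("vfx_artist", "create visual effects"),
                ("behavior_scripter", "script hologram behaviors")]
        return [self.agents[role].run(task, context) for role, task in plan]

gr = GeneratorOfRepresentations(knowledge_bases={"user": {}, "events": {}})
for step in gr.create_representation("street market", {"age": 30}):
    print(step)
```

The point of the sketch is the division of labor: the GR owns the plan and the context, while each agent contributes only its specialized skill, mirroring the team of professional figures examined in Section 2.4.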

Building on this basic description of the GR, let’s try to think about its operation when a user, equipped with the ATT, is faced with an event (see footnote 5).

The GR, using the DMAR’s cameras and tracking systems, determines the user’s location and creates a 3D map of the environment (or, to speed up the process, uses pre-existing maps, as Google Maps Live View does). It also identifies the event type using a catalog of previously classified events in the knowledge bases.

If a user interacts with the GR, asking for example to augment the event they are facing with a story (perhaps an adventure, mystery, fantasy, or romance), the GR gathers information about the user (their age, interests, cultural background, education level, etc.) and about the environment (its history, geography, cultural and artistic significance, etc.) from the knowledge bases. Then it instructs storytelling AI agents to craft a story fitting the environment and the user’s interests and request. The story will also be interactive, so the user can change its evolution through real-time interaction with its characters and objects. To bring that story to life in the multisensory virtual space the user will perceive, other AI agents create all the story elements (3D holograms of people, things, and environments; sounds and voices; gustatory, olfactory, and tactile stimuli) in the various sensory virtual spaces, and also define the behaviors of those elements. In almost no time, the story is ready for the user to experience.
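The interactivity mentioned above (the user changing the story's evolution in real time) can be pictured as the traversal of a branching story graph, where each user action selects the next node. A toy sketch, with purely illustrative node names, texts, and choices:

```python
# Minimal branching-story structure: each node has text to render
# in the virtual space and a map from user actions to next nodes.
story = {
    "start":  {"text": "A stranger waves at you near the fountain.",
               "choices": {"approach": "meet", "ignore": "wander"}},
    "meet":   {"text": "The stranger offers you an old map.",
               "choices": {"accept": "quest", "refuse": "wander"}},
    "wander": {"text": "You stroll on; the plaza fills with music.",
               "choices": {}},
    "quest":  {"text": "The map marks a hidden garden nearby.",
               "choices": {}},
}

def play(story, actions):
    """Follow the user's actions through the story graph; unknown
    actions leave the story at the current node."""
    node, log = "start", []
    for action in actions:
        log.append(story[node]["text"])
        node = story[node]["choices"].get(action, node)
    log.append(story[node]["text"])
    return log

print(play(story, ["approach", "accept"]))
```

In the ATT, the storytelling agents would generate such a graph (and its sensory assets) on the fly, but the underlying structure, nodes whose succession depends on the user's actions, is the same.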

3.4 Immersive encounters: joining Monet at the Jardin d’Eau through augmented total theatre

We would like to conclude this work with a real-world scenario that showcases the capabilities of the ATT device described in the previous sections. Here, we will see the ATT of the future at work. We will see how the ATT, through augmented reality, can profoundly enhance and enrich a user’s experience by merging history, fiction, art, and reality.

In this section, we explore the immersive journey of a user named Paola, as she uses her ATT to bring a literary world to life in contemporary Milan. Paola lives in Milan in the year 2035. She has just finished reading Michel Bussi’s Black Water Lilies, a compelling novel written back in 2011 and set around the 1960s in Giverny, Normandy, where the impressionist painter Monet worked until his death in 1926. Paola greatly enjoyed the book and now wishes to personally visit the locations described within. However, she realizes that a trip to Giverny would not suffice, as the Giverny of 2035 no longer resembles the town depicted in the narrative. Furthermore, Paola seeks to experience the impossible: not only does she want to meet Fanette, the young painter protagonist of the novel, but she also wishes to resurrect the Master painter, Monet, who died many years before the fictional Fanette was born, and to observe him as he works on one of his famous water lily paintings. Paola’s desire is complex: she wants to step into a chapter of Bussi’s novel that he never wrote.

She shares this wish with her ATT device that she is wearing while strolling toward Parco Sempione, a large park in the center of Milan. The ATT’s GR briefly engages in a dialogue with Paola to more precisely understand the shapes of her desires. Once this dialogue is over, the GR integrates this new information with existing data about Paola stored in one of its knowledge bases. The GR is now ready to create an augmented representation for ATT that can fulfill Paola’s requests. It immediately geolocates Paola: she is on Via Montello, a street not far from Parco Sempione. The GR accesses city maps provided by Google and prepares a reconstruction of the path that will lead Paola to the small lake in Parco Sempione. Meanwhile, the GR has relayed Paola’s requests to its team of AI agents. The storytelling experts, having already read the digital version of Black Water Lilies provided by Paola, possess a thorough understanding of the book and its characters, and have crafted a script for a new, phantom chapter – a brief narrative set in an imaginary Giverny that foresees a meeting between Paola and Fanette, followed by an encounter with Monet at work on a new painting. Other AI agents specialized in 3D modeling are still scouring the Internet for photos, possibly some video footage, maps, prints, and drawings of Giverny from 1920, to gather all elements necessary for creating a realistic interactive reproduction of the town, its people, their attire, automobiles, sounds, scents, and more. Additional AI agents are tasked with creating high-definition, life-size avatars for the two protagonists: Monet and Fanette, while other AI agents work on writing scripts that control the behavior of all interactive holograms in the augmented representation. Once they all have completed their tasks, Milan of 2035 is ready to merge with a Giverny of 1920 into a single, imaginary augmented city, within which Paola can live out her desires.

Paola is still on Via Montello, and the buildings she saw moments ago have disappeared: now, she sees only low houses typical of the rural dwellings of Giverny from 1920, with rough stone walls and peeling plaster. The windows boast flower boxes overflowing with petunias and geraniums. The slate roofs gently slope, the street is cobbled, and the men walking along the street wear suspenders and wide-brimmed hats, while the women are adorned in long skirts and colorful shawls. Suddenly, behind Paola, a black Citroën with huge, thin wheels, typical of the 1920s, noisily appears and slowly drives away.

Soon, Paola reaches Piazza della Lega Lombarda, which has now transformed into the main square of Giverny. She inhales the sweet scent of flower gardens surrounding the Moulin des Chennevieres, which is right in front of her. The sound of the mill’s water blends with the chirping of birds and the rustling of leaves.

Paola continues her walk in Parco Sempione, now transfigured into Giverny’s Jardin d’Eau. The paths around her are still those of Parco Sempione, which Paola knows well, but this time the flowers, flower beds, and plants mimic those from Monet’s iconic works. As she approaches the small Ponte delle Sirenette, she is thrilled to find that it closely resembles the well-known Japanese Bridge, complete with colorful water lilies floating on the water below. As Paola crosses the bridge, she glimpses the two figures she had wished for: Fanette, brought to life from the pages of a book, stands before her easel, painting. Nearby, at her side, is Monet, alive once more thanks to the almost-magical abilities of the ATT. With his black hat and beard, wearing a blue jacket and dark trousers, palette and brush in hand, he closely resembles the painter depicted in Renoir’s famous 1873 painting, Claude Monet Painting in His Garden at Argenteuil. However, this Monet looks very real, and much older, for it is now 1920. Paola approaches Master Monet; his canvas displays water lilies of various colors – dark blue, green, burgundy – alongside pale pink, intense pink, and light blue lilies. The water lilies seem to move in the clear water that reflects the white clouds in the sky. The scent of the water lilies intensifies. Fanette notices Paola approaching, turns around, and greets her, smiling just as she did in the book …

We leave Paola to continue her walk in the augmented environment that her ATT device has crafted for her. Later, she will have the opportunity to converse with Fanette, as outlined in the script created for the occasion by the AI agents. And she will also be able to get even closer to Monet and observe him at work, just as she requested, while remaining silent so as not to disturb him. Eventually, Fanette will accompany Paola out of Giverny’s Jardin d’Eau, towards her quaint brick house, marking the culmination of Paola’s journey into her desires …

In almost no time, the ATT of the future, through its Generator of Representations, has transformed Paola’s fantasy into a vivid multisensory story to be experienced through the Device for Multisensory Augmentation of Reality. Thanks to the efficiency of the AR technologies of the future – which, we must remember, are also technologies for sensory deception – Paola will no longer be able to distinguish the familiar, real Milan she knows well from the multisensory virtual Giverny. She will simply have the perfect illusion of being immersed in the 1920 Giverny crafted by her ATT device: a unique form of reality that is rich, interesting, instructive, engaging, and stimulating.

4 Conclusions

Freed from the limitations of the complex, lengthy, and costly process of production of representations of events, examined in Section 2.4, the ATT of the future can express its potential without restrictions. Thanks to the GR, it will be possible to create a wide variety of representations of events. Hence, not only representations of stories, as seen in the previous example, but also of many diverse events: for instance, in medicine, where the GR could create three-dimensional representations of patient anatomy for surgical interventions, or in theatre, allowing playwrights to experience immersive versions of their work in progress, or in technical maintenance, guiding technicians through complex procedures, or in architecture, enabling architects to test building design variations in situ, or in digital art, giving artists the chance to invent large installations in urban territories and in remote and inaccessible spaces. And we could consider many more examples in education, tourism, entertainment, history, archaeology, cooking …

All these representations will be created in real-time; they will certainly be economical and will not require the use of human resources. They can exist for the duration of a single use, or can be saved and reused later by users in many different ways. They can be representations of events that are significant to an entire community, but they can also be representations of simple and even trivial events, significant only to a single user. Like Paola’s desire to meet the character of a book she read. The customization of the knowledge bases will enable ATT to adapt to many contexts, needs, and tastes, opening up a world of practically limitless expressive possibilities.

All that we have seen so far leads us to conclude our work with a question that now seems inevitable: what consequences will this technology have on us? The answer to this question is complex, raising significant technological, but also and above all, ethical and social issues that deserve an exploration we cannot undertake here. Therefore, we limit ourselves to providing only a few brief considerations, which are the same that will guide our future research.

We believe that the consequences will be significant. ATT, or any technological tool with similar characteristics, will change our way of perceiving the world, will gift us with an extended mind, and will allow us to transform into Augmented Humans.

Referring to the extended mind, we draw inspiration from David Chalmers’ concept: the extended mind emerges from the hybridization of the natural mind with technology; technological tools become integral parts of our mind to the point of extending it and making it more functional and efficient [ 37 , Ch. 16]. This happens with many technologies. And it is even more true with AR, whose ultimate purpose is precisely to augment and enrich our natural perceptions. And even more so with ATT which makes AR its strength, but – as we have seen – has even greater power than AR. Indeed, ATT not only enhances our perception of the world, but also, thanks to the possibility of being discreetly usable in everyday life, in any situation, at any time, manages to change over time even the way we perceive the world, and therefore extends our mind. ATT transforms us into enhanced human beings, capable of interacting with the surrounding environment in a richer and deeper way. Drawing from a term used by Helen Papagiannis, 23 we assert that ATT paves the way for us to become Augmented Humans (AHs).

In describing Augmented Humans, Papagiannis argues that AHs differ from conventional humans in their amplified and improved sensory abilities. Vision can be extended with advanced functionalities such as X-rays or thermography, hearing enhanced with high-quality aids, and touch, taste, and smell intensified through advanced digital interfaces. However, obviously, AHs are more than this; indeed, they will not be limited to using the traditional five senses but will develop new ones through artificial sensory devices, thus expanding the perceptual spectrum. Moreover, AHs will be able to connect to vast databases, and with other augmented individuals, and with artificial intelligence systems, enabling them to significantly improve the effectiveness in solving complex problems and, more generally, to confront a world that is becoming increasingly complex. Upon closer inspection, these characteristics of AHs are very similar to those we have attributed to normal humans equipped with our ATT of the future. Therefore, we can state that it is precisely through ATT, or similar devices, that humans will evolve into AHs.

If we compare our ATT of the future with Wagner’s Total Theatre, we realize that the two theatrical forms are significantly different and far apart from each other (assuming that it is still possible to think of this ATT of the future as a form of theatre). However, if we pause for a moment to reflect, we realize that despite the differences, ATT still remains faithful to the same idea that led Wagner to propose his Total Theatre: to engage the audience of a theatrical representation in a deep and comprehensive experience. Wagner, with the technologies of his time, only partially succeeded in this intent. ATT, on the other hand, fully realizes it. With ATT, and through the contribution of AR technology, the theatrical stage becomes as large as the world, and its representations are almost unlimited, and the audience is so involved in the representations that they can no longer distinguish them from reality itself.


Corresponding author: Sergio Cicconi, Department of Information and Communication Technologies, University of Trento, Trento, Italy, E-mail:

About the author

Sergio Cicconi

Dr. Sergio Cicconi holds a PhD in Information and Communication Technologies from the University of Trento and blends expertise in Philosophy and Computer Science with a focus on Augmented Reality and e-learning. He previously taught courses on Literature and new media at several US universities (Duke, University of Florida, SUNY). His research spans textual semiotics, digital media, and the intersection of literature with technology, leading to publications on augmented reality, e-learning, and hypertextuality. Recently, he has developed an Augmented Learning Environment for HoloLens, designed to introduce the elderly to digital culture.

  1. Research ethics: Not applicable.

  2. Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: The author states no competing interests.

  4. Research funding: None declared.

  5. Data availability: Not applicable.

References

1. Wagner, R.; Goldman, A. H.; Sprinchorn, E. Wagner on Music and Drama: A Compendium of Richard Wagner’s Prose Works; A Da Capo Paperback; Da Capo Press: New York, NY, 1988.

2. Kirby, E. T. Total Theatre: A Critical Anthology; E. P. Dutton & Co., Inc.: New York, NY, 1969.

3. Prior, D. M. What Is Total Theatre? Total Theatre Magazine Print Archive, 2001. [Online]. http://totaltheatre.org.uk/archive/features/what-total-theatre.

4. Balme, C. B. The Cambridge Introduction to Theatre Studies; Cambridge Introductions to Literature; Cambridge University Press: Cambridge, UK; New York, 2008. https://doi.org/10.1017/CBO9780511817021.

5. Innes, C.; Shevtsova, M. The Cambridge Introduction to Theatre Directing, 1st ed.; Cambridge University Press: Cambridge, 2013. https://doi.org/10.1017/CBO9781139016391.

6. Cicconi, S.; Marchese, M. Analysis of an E-Learning Augmented Environment: A Semiotic Approach to Augmented Reality Applications. In ICERI2019 Proceedings: Sevilla, 2019; pp. 4921–4931. https://doi.org/10.21125/iceri.2019.1204.

7. Cicconi, S.; Marchese, M. Augmented Classrooms: A Generator of Augmented Environments for Learning. In INTED2023 Proceedings: Valencia, 2023; pp. 3405–3413. https://doi.org/10.21125/inted.2023.0929.

8. Cicconi, S.; Marchese, M. Augmented Learning: An E-Learning Environment in Augmented Reality for Older Adults. In INTED2019 Proceedings; IATED: Valencia, 2019; pp. 3652–3662.

9. Azuma, R. A Survey of Augmented Reality. Presence: Teleoperators Virtual Environ. 1997, 6 (4), 355–385. https://doi.org/10.1162/pres.1997.6.4.355.

10. MacIntyre, B.; Bolter, J. D.; Moreno, E.; Hannigan, B. Augmented Reality as a New Media Experience. In Proceedings – IEEE and ACM International Symposium on Augmented Reality, ISAR 2001; IEEE, 2001; pp. 197–206. https://doi.org/10.1109/ISAR.2001.970538.

11. Manovich, L. The Language of New Media; MIT Press: Cambridge, MA; London, England, 2001.

12. Asobo Studio. Fragments, 2017. [Online]. https://www.asobostudio.com/games/fragments.

13. Cicconi, S. Augmented Learning: The Development of a Learning Environment in Augmented Reality. Ph.D. Dissertation; University of Trento: Trento, 2020.

14. Jiménez de Luis, A. The Production Stages of Video Game Development. Domestika, 2023. [Online]. https://www.domestika.org/en/blog/2899-the-production-stages-of-video-game-development.

15. Stefyn, N. How Video Games Are Made: The Game Development Process. CG Spectrum, 2022. [Online]. https://www.cgspectrum.com/blog/game-development-process.

16. Boger, Y. Understanding Pixel Density & Retinal Resolution, and Why It’s Important for AR/VR Headsets. Road to VR. [Online]. https://www.roadtovr.com/understanding-pixel-density-retinal-resolution-and-why-its-important-for-vr-and-ar-headsets/ (accessed 2023-12-03).

17. Cheng, D.; Wang, Q.; Liu, Y.; Chen, H.; Ni, D.; Wang, X.; Yao, C.; Hou, W.; Luo, G.; Wang, Y. Design and Manufacture AR Head-Mounted Displays: A Review and Outlook. Light: Adv. Manuf. 2021, 2 (3), 336. https://doi.org/10.37188/lam.2021.024.

18. Butler, S. How Important Are Refresh Rates in VR? How-To Geek. [Online]. https://www.howtogeek.com/758894/how-important-are-refresh-rates-in-vr/ (accessed 2023-12-29).

19. Chioka.in. What Is Motion-To-Photon Latency?, 2015. [Online]. https://www.chioka.in/what-is-motion-to-photon-latency/.

20. Heaney, D. Meta Presents Retinal Resolution & Ultra Bright HDR Prototype Headsets. UploadVR, 2022. [Online]. https://www.uploadvr.com/meta-butterscotch-starburst-retinal-hdr-prototypes/.

21. Yang, J.; Barde, A.; Billinghurst, M. Audio Augmented Reality: A Systematic Review of Technologies, Applications, and Future Research Directions. J. Audio Eng. Soc. 2022, 70 (10), 788–809. https://doi.org/10.17743/jaes.2022.0048.

22. Gupta, R.; He, J.; Ranjan, R.; Gan, W. S.; Klein, F.; Schneiderwind, C.; Neidhardt, A.; Brandenburg, K.; Välimäki, V. Augmented/Mixed Reality Audio for Hearables: Sensing, Control, and Rendering. IEEE Signal Process. Mag. 2022, 39 (3), 63–89. https://doi.org/10.1109/MSP.2021.3110108.

23. Papagiannis, H. Augmented Human: How Technology Is Shaping the New Reality, 1st ed.; O’Reilly: Beijing, 2017.

24. Gray, M. ‘Postcard from Earth’: Darren Aronofsky’s 18K Film Rocks the Sphere. Rolling Stone, 2023. [Online]. https://www.rollingstone.com/tv-movies/tv-movie-features/postcard-from-earth-the-sphere-las-vegas-darren-aronofsky-18k-film-1234848611/.

25. Vi, C. T.; Ablart, D.; Arthur, D.; Obrist, M. Gustatory Interface: The Challenges of ‘How’ to Stimulate the Sense of Taste. In Proceedings of the 2nd ACM SIGCHI International Workshop on Multisensory Approaches to Human-Food Interaction, MHFI 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 29–33. https://doi.org/10.1145/3141788.3141794.

26. Wondeverse. Haptic Suit? What Is It? And What Are the Top Suits?, 2022. [Online]. https://wondeverse.com/haptic-suit-what-is-it-and-what-are-the-top-suits/.

27. HaptX. Haptic Technology for VR and Robotics – Tactile, Force, and Motion, 2023. [Online]. https://haptx.com/technology/.

28. Caeiro-Rodríguez, M.; Otero-González, I.; Mikic-Fonte, F. A.; Llamas-Nistal, M. A Systematic Review of Commercial Smart Gloves: Current Status and Applications. Sensors 2021, 21 (8), 2667. https://doi.org/10.3390/s21082667.

29. Reality Labs. Inside Reality Labs Research: Meet the Team That’s Working to Bring Touch to the Digital World. Tech at Meta, 2021. [Online]. https://tech.facebook.com/reality-labs/2021/11/inside-reality-labs-meet-the-team-thats-bringing-touch-to-the-digital-world/.

30. Staff Writer. Are You Ready for a Smart Tattoo? – The Latest in Wearable Technologies. Bold Business, 2020. [Online]. https://www.boldbusiness.com/digital/smart-tattoo-wearable-technologies/.

31. Harvard Gazette. Harvard Researchers Help Develop ‘Smart’ Tattoos, 2017. [Online]. https://news.harvard.edu/gazette/story/2017/09/harvard-researchers-help-develop-smart-tattoos/.

32. Chen, J.; Mi, L.; Chen, C. P.; Liu, H.; Jiang, J.; Zhang, W. Design of Foveated Contact Lens Display for Augmented Reality. Opt. Express 2019, 27 (26), 38204–38219. https://doi.org/10.1364/OE.381200.

33. Perry, T. S. Augmented Reality in a Contact Lens: It’s the Real Deal. IEEE Spectrum, Jan 2020. [Online]. https://spectrum.ieee.org/ar-in-a-contact-lens-its-the-real-deal.

34. Greenfield, A. Everyware: The Dawning Age of Ubiquitous Computing; New Riders: Berkeley, CA, 2006.

35. Sazonov, E.; Daoud, W. A. Grand Challenges in Wearable Electronics. Front. Electron. 2021, 2. https://doi.org/10.3389/felec.2021.668619.

36. Domb, M. Wearable Devices and Their Implementation in Various Domains. In Wearable Devices – The Big Wave of Innovation; IntechOpen, 2019. https://doi.org/10.5772/intechopen.86066.

37. Chalmers, D. J. Reality+: Virtual Worlds and the Problems of Philosophy, 1st ed.; W. W. Norton & Company: New York, NY, 2022.

38. The McKittrick Hotel. Sleep No More: A Legendary Hotel – Shakespeare’s Fallen Hero – A Film Noir – Shadow of Suspense, NYC & Gallow Green, 2011. [Online]. https://mckittrickhotel.com/.

Received: 2024-02-01
Accepted: 2024-04-30
Published Online: 2024-05-22

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
