Abstract
AI-generated images of “cats” offer novel opportunities to consider the semic role of expectations in sign formation, where they act as constraints on semiosis through the potential identification of the AImage as “correct” or as a “glitch.” Because the identification of “errors” depends on a range of technical and cultural expertise, they offer valuable insights into the interpretive process. The automated generation of media by AI separates the artist’s decision-making process from image production, continuing a trajectory that began with the invention of photography in the nineteenth century and that brings the sign formation process into consciousness by distinguishing “intentional” from “unintentional” encoding. The identification of AI-produced images as-glitched provides a vehicle to consider how the sign formation process informs identifications of creative action as intentional: aesthetic appraisals are central to this process, where cultural beliefs about creativity become ideological constraints on interpretation. The potential to understand AI “glitches” as expressive features of the image-object, rather than as errors, proceeds via the aesthetics and affects of earlier art, such as the “painterly motion” shown in old master paintings by Peter Paul Rubens, or via the heritage of Surrealism. These affective constraints on sign formation reveal the central role of “glitches” in distinguishing creative from uncreative action.
1 Introduction
When looking at the AI-generated cats on Ryan Hoover’s website “These Cats Do Not Exist,” the resulting collection of felines almost all appear “real” (see Figure 1). Each 256 × 256 pixel image offers a mundane picture of a cat, a perennial subject and favourite of internet memes; however, there are occasional problems, as when cat2319.jpg is cut by a stripe of background landscape, or with the other cat images that have uncanny bulges, growths, or distortions in the body (cat1036.jpg, cat1496.jpg, cat4778.jpg). Because the identification of these “errors” is not a given, but depends on a range of technical and cultural expertise (Groupe Mu 1977), they offer valuable insights into the interpretive process when confronting the automated generation of media. Although each of these “AImages” was produced by a generative adversarial network – an older system of AImage generation that stages an arms race between two machine learning systems, one proposing images and the other evaluating them, both becoming ever more adept at the task of cat creation – the interpretive role of the glitch in their assessment remains constant despite changes in technology. The AImages created by “These Cats Do Not Exist,” or those from any text-to-image system, introduce a secondary level of articulations entirely different from those of traditional language: both are reflections of mechanical operations, rote, determined by the fixed system of a computational operation. It is this novel layer of statistical interpretations (the generative apparatus itself) that raises new questions about the ways the image-object is a realization of encoded information – the training data – instrumental in its fabrication: the text prompts are a convenient method to invoke specific aspects of the training data, such as a collection of specific features humans identify as “cat,” but in themselves they signify nothing and are not understood by the system as having a specific representational valence – the “intelligence” of AI is strikingly unintelligent.

Figure 1: A selection of AI-generated cats by Ryan Hoover from the website “These Cats Do Not Exist.” Generated using AI.
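The adversarial “arms race” that produced these cats can be caricatured in code. The following is a minimal, illustrative PyTorch sketch of a single GAN training step – not Hoover’s actual system, whose StyleGAN-class models are vastly larger; every dimension and name here is a toy assumption:

```python
# A minimal generative adversarial network (GAN): a generator proposes images
# while a discriminator evaluates them, each improving against the other.
# Illustrative toy only -- all sizes here are arbitrary choices.
import torch
import torch.nn as nn

LATENT = 64         # random "seed" vector the generator draws from
IMG = 64 * 64 * 3   # a flattened RGB image (downscaled stand-in for 256 x 256)

generator = nn.Sequential(
    nn.Linear(LATENT, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),  # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),               # a single "real vs. generated" score
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    # 1. Teach the discriminator to score real photos high, fabrications low.
    fake = generator(torch.randn(n, LATENT))
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(n, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(n, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # 2. Teach the generator to fool the discriminator -- the "arms race."
    fake = generator(torch.randn(n, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Neither network "knows" what a cat is: both optimize a statistical objective
# over pixel data that humans tagged as "cat."
training_step(torch.rand(8, IMG) * 2 - 1)  # stand-in batch of "real" images
```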
The adage that a “picture is worth 1,000 words” has become an instrumental vehicle for imagistic creation with contemporary text-to-image generative AI (prompt-based) systems. The constraints imposed by text prompts link AImages to the human (intentional) demands that direct their generative productions, sidestepping some of these problematics without resolving them: telling the machine what to do rather than directly programming it reanimates the historical debate over the photographic camera and the status of photographs-as-art, because it superficially endows the computer with agency over its productive action, replacing the artist (Robinson 1966 [1892]: 82–88). The same question of intentionality and encoding is posed by both AI and glitches – an ideology of creativity where evidence of human action becomes a crucial “proof of intent.” Thus these machines offer a liminal moment, revealing cultural and ideological constraints on the creative act. AImage generators proceed independently of understanding what the image-object shown might be; human agency intervenes before and after the image-object’s production, but not during its fabrication. Artists set the boundaries and constrain the generative system, but have less direct control over its operations than the photographer working in a darkroom. Aside from the human audience watching the results with interest, amusement, or simply terror, these systems run in an autonomous fashion without need for outside direction. This attenuation of action continues the earlier expansion of art to include photography by further reducing the immanence of artistic control. The human responses to the AImage exploit the projective capacities of the audience viewing the work, evident as the diagnostic recognition – “a cat!”
The weird effects that distort the “cats” in Hoover’s AImages cat1036.jpg, cat1496.jpg, and cat4778.jpg reflect the disconnection between significance and generation – unlike a human artist, whose creation of the image is inevitably guided by the holistic gestalt of what is depicted, AImages are untethered from these constraints. Each image-object, for example the “cat,” is created as a response to a prompt guided not by a concern for meaning but by an algorithm designed to produce a statistically probable result; not a response to the significance of the instruction, but to how it corresponds to material organized/indexed within a database. These generative “cats” are a hypothetical reflection of how “cat” is identified in/by a database of cat-like images that are its elements. This technology is not concerned with the significance or recognition of “cat,” but only with how the database encodes data responding to that identifier: it instrumentalizes the iterative process philosopher Immanuel Kant described as “determinative judgement” in his Critique of Judgment:
Determinative judgement[, which operates] under universal transcendental laws given by the understanding, is only subsumptive. The law is marked out for it a priori, and hence it does not need to devise a law of its own so that it can subsume the particular in nature under the universal. (Kant 1987: 179–181)
Kant’s “determinative judgement” is a mechanical operation where significance is absent from consideration, except in the sense of describing a meta-category that contains a series of features organized within a dataset. While this dataset might be informative about the features associated with “cat,” it does not concern the significance or meaning of the identifier – understanding is absent from this process. Kant’s “determinative judgement” does not create meanings but simply (re)arranges what is already known (the data). This separation of significance from the collecting process converges on the instrumental organization of the AI database, mirroring the separation of meaning from productive action in the assembly line of the industrial factory. The protocol of machine learning that creates the AImage corresponds to his proposal where a hypothetical listing of all the features of a specific idea or thing has been marshalled as a vehicle to create an example of that thing, without requiring comprehension (Kant 1986: 675). AI systems are parasitic on existing meaning – their operations entirely reflect their training data: thus the AI system realizes Kant’s rhetorical catalogue, reified by the literal role of text-to-image prompts as tokens divorced from significance or comprehension in systems such as Runway, MidJourney, or Stable Diffusion.
Machine learning relies on how the data is structured and catalogued in the database, not on any understanding of the text prompt. These instructions, while written in everyday language, initiate retrievals of features from a database – their meaning is not important, only their “fit” to the tagging within the dataset. What generates the AImage is a selection from a vast collection of non-signifying features that provide semantic cues for the human-readable output, but which lack a definitive, coherent, or universal significance: they define the elements employed by the sign formation process rather than the sign itself. The discrepancies offered by AImages are thus of interest to considerations of how visual semiosis emerges out of perceptual encounters. The elements which are assembled into the image-object are a result of encoded data being compiled, composited, and processed to arrange an output (ideally) corresponding to human expectations about the content, morphology, and structure of “representational” artworks. Thus the generative output, whether naturalistic (as with a photograph) or stylized (as with a cartoon or abstraction), functions by converting the material-to-render into a dataset whose denotative contents provide the semantic cues employed by humans in assessing the image-object. This process separates meaning and significance from the materials being organized and rendered, resulting in an autonomously generated “hyperreal” without basis in reality or existence, even if it converges on reality (see Figure 2).

Figure 2: An AImage of “Mont Saint Michel” that corresponds to some features of the actual location that make it recognizable, but is entirely fabricated. Generated by the author using AI.
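This “database call” account of prompting can be made concrete with a deliberately naive sketch. Actual text-to-image systems operate through learned embeddings rather than literal tag lookup, so the following Python caricature (all tags and feature names invented) illustrates only the argument being made here: that prompt terms function as asemic keys into indexed training data, such that any token would suffice:

```python
# A deliberately naive caricature of the "database call" model of prompting.
# Real text-to-image systems use learned embeddings, not literal tag lookup;
# all tags and feature names below are invented. The point illustrated: the
# prompt term operates as an asemic key into indexed data, not as a meaning.
TAGGED_FEATURES = {
    "cat": ["fur-texture", "pointed-ears", "whiskers", "slit-pupils"],
    "beach": ["sand-gradient", "surf-line", "horizon-band"],
    "miles davis": ["trumpet", "dark-suit", "stage-lighting"],
}

def generate(prompt: str) -> list[str]:
    """Assemble output features whose tags "fit" the prompt's tokens."""
    features: list[str] = []
    for tag, cues in TAGGED_FEATURES.items():
        if tag in prompt.lower():   # fit to the tagging, not to meaning
            features.extend(cues)
    return features

print(generate("a picture of a cat"))
# Any token would suffice: rename the key and nothing changes for the system.
TAGGED_FEATURES["x9$!"] = TAGGED_FEATURES.pop("cat")
print(generate("a picture of a x9$!"))  # identical features from an asemic key
```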
AImages instrumentalize determinative judgement since their invocation/manipulation of data proceeds without any necessary concern for its significance: type, concept, or recognition of occurrence are irrelevant. Every feature of the resulting images, even any suggestion of intentionality, is a product of empirically present non-signifying features drawn from within the database (the semantic cues) assembled into an AImage which the computer generates autonomously, not out of an expressive communication, but because the training data also contains those features. The weirdness of cat1036.jpg, cat1496.jpg, and cat4778.jpg arises precisely because, while these outputs match the dataset and are thus computationally accurate, their human audience understands the deviations as uncanny affects. Although each resulting “cat” remains coherent as a “cat” despite deviating dramatically from the “ideal expectation,” human identifications depend on an array of semantic cues, some of which are properly present while others are not – the presence of these non-signifying features collectively provokes a correct identification because human intelligence can manage the absence, distortion, or inaccuracy of some uncoded features while still producing a coherent understanding. This capacity is what differentiates intelligent from unintelligent action, and it is the reason that AI systems frequently fail at self-correcting these errors.
However, AI comprehends neither prompt nor data: the text prompts used by these generating systems employ everyday language as tokens to direct an automated process of database calls. This instrumental function is different from that of encoded meaning; it allows a theoretical consideration not only of the machine system itself, but of its relationships to human interpretations. These superficial convergences of human language and tagged data confuse the instrumental role of these terms, thus enabling an imaginary transfer of agency to the computer system’s operations, while also illuminating the interpretive process that constrains human understanding. Yet these terms (prompts) are not signifiers, but uncoded/asemic triggers for machine operations which reflect how the database is structured. Any token would suffice. Familiar language provides utility for the human operators, but is meaningless to the system. Yet the convergence enables the machinic deviations from commonplace signification to become apparent in how human audiences consider the variety of AImages; their potential designation as “glitched” is a central dimension of this evaluation, distinguishing between high and low quality renderings. The aesthetic function of digitally generative artifacts produced by deviations from the “desired” output offers an opportunity to consider the question of ‘machine creativity’ as a specific illusion created by the ways ‘erroneous’ outputs can converge on historical avant-garde art (such as Surrealist painting) or the contemporary exploits of “errors” by “Glitch Art” – a term initially proposed by the digital artist Ant Scott on his website www.beflix.com in 2001 to describe his digitally generative photograms, made by exposing a sheet of black and white photographic paper using a computer monitor, but which now encompasses what has variously been labelled “post-digital,” “new aesthetic,” or even “post-internet” art: the expressive use of features that evoke flaws or errors specific to digital media. This artistic exploit of technology that emphasizes noise, vectorization, and quantization (Menkman 2011) belongs to an avant-garde lineage which draws attention to technology itself despite the wide range of media, protocols, and technologies used by artists (Cloninger 2011: 33). Semic questions around glitches, glitching, and the aesthetics of Glitch Art offer insights into the novel affordances created by AI: interpreting Glitch Art demonstrates the role of human anticipations and interpretations in relation to aesthetic evaluations and semiotics. Identifying “errors” as an aesthetic exploit utilizes the same set of anticipations and expectations (expressed in cultural, social, and ideological beliefs) which shape interpretations precisely because glitches can reveal the hidden dimensions of image engagement and assessment (Betancourt 2023b), making them relevant models for considering questions about sign formation emergent with the nonintentional-yet-apparently-encoded expressions produced by AImages and machine learning systems.
2 The autonomous encoding paradox
Technical systems for image production, not to mention the role of machinery in making art, have a lengthy history that continues with the various questions arising around AImages. Fears of technological replacement are rampant in the arts precisely because the first impacts of automation were cultural: photography eliminated the work of painters, reducing their role in society from documentarians of cultural truths to purveyors of decorative objects. (The denigration of the “decorative” attests to the demotion this change entails.) Semic questions about aesthetic values (via encoding, intentions, and expression) reflect the ideology of human creativity that emerged from the disruptive impacts of industrialization, which transformed the arts of the nineteenth century before intensifying in the twentieth, paralleling the disruption created by avant-garde art generally (Betancourt 2002a): this cultural response to mechanization since the invention of photography in the 1820s justified the avant-garde’s abandonment of traditional aesthetics (Wall 1998: 83). The progressive improvements in fidelity, plasticity, and distribution for photographic imagery throughout the twentieth century are amplified by the advent of generative systems. The historical fears of industrialization voiced by John Ruskin, William Morris, and the Romantics conceived of all mechanization as dehumanizing, since it entails a surrender of agency to the productive apparatus – the photograph’s rejection as art derives from these widespread nineteenth-century concerns with human agency (Betancourt 2022). These cultural ideologies associating mind/reflection/human and body/determination/machine are products of concerns with agency as the definitional feature of being human (Graeber 2014: 165–168). Such problematics are central to semiotics because they define the boundaries that justify sign formation, thus policing the distinctions of encoded/noncoded. By assigning “creativity” to the AImage, this system is ideologically granted an agency it lacks in practice; however, such anthropomorphic projections are a fallacy. Acknowledging this cultural dimension which shapes audience apperceptions (Groupe Mu 1977) in the evaluative process offers an opportunity to consider AImages as semic revelations of how the ideology of creativity shapes sign formation – thus governing the interpretations of the various outputs of these systems.
The decision to address any contents of an image-object as potentially signifying (i.e., interpret its visible features as elements of an aesthetic expression) is the same issue as deciding that it is “intentional,” a creative action performed to communicate something. For aesthetics this understanding is not problematic. Despite the obvious challenges automated image creation poses for romanticized notions about art and artistry residing with the actions of a human artist, the pedigree of “AImages as-art” poses few obstacles in/for Contemporary art aside from questions derived from the marketplace. There are multiple lineages of artistic production without the artist directly producing the work: from sculptors such as Auguste Rodin, whose works were cast by others and continued to be released after his death in 1917 (Krauss 1985), to Marcel Duchamp’s proposal of the readymade (Duchamp 1989), to both Minimalist sculpture and Conceptual Art, whose works were realized through instructions (Meyer 1972), there are ample examples that anticipate the generative work done by AI. But while these generative systems may intensify the displacement of the human agent from the direct production of the image, the distinction between an AImage and other forms of CGI is only a matter of the degree of that displacement and of the technical operations of the machine.
Prior to the invention of photography – whose development, and the subsequent improvements to both the optical machine (camera) and the chemical processes employed to record photographs, displaced human labour from direct aesthetic expression – image production was almost entirely dependent upon human action. Even reproductive technologies (such as printmaking) that allowed multiple copies of images to be struck required human labour to fabricate the image; for AI, even more than photography, the human agency directing the image-object remains tangential to these autonomous systems, even as humans oversee and orchestrate their use. Photography thus becomes a primary model for all automatic image fabrication: by interposing a technical system between the artist and the production of the image, automation introduces a radically limited set of craft skills for the production of an image, centred on the selective choices made by the artist rather than manual skill. This historical context shaped Modernism and the avant-gardes of the twentieth century, and continues to inform contemporary responses to computer automation and the AImage (Betancourt 2022): Glitch Art belongs to this lineage of artistic responses that challenge traditional representational art. The semiosis emergent from the moment where “glitches” are both diagnostically recognizable as-errors and expressive depends on an intentional affect that links Glitch Art to both the photograph and AImages. The physical structure of the image-carrier and the conditions of the automated operations that produce the final image-object are, in all three cases, entangled by the audience’s identification of semantic cues created by the productive actions of generation and the physical structures of presentation: the photo-chemical reactions and processing baths of photography parallel the generative functions of digital images for Glitch Art, and the role of the database in AI, allowing the semiotics of these different autonomous systems to converge.
These parallels between Glitch Art and photography provide insights into the role of the artist’s selection process and their directorial control (reflective agency) over the AImage. The creative impacts of automation and the novel potentials offered by AI continue a lineage that begins with the invention of photography and the novel ways it separates agency from artistic creation. The historical trajectory towards autonomous production with a minimum of human oversight began with photography, where the creative decision is made at the instant the shutter is activated, yet depends on anticipatory actions made in the framing, exposure, and mise-en-scène. These photo-chemical operations restrict the decision-making process of the artist employing the machine, in a parallel to the limitations of the AI system guided by a text prompt and then refined and shaped over successive iterations by the artist/user. The framing and operations of the photographic camera and the text-to-image system make this parallel explicit. Where a photographer might move or slightly reframe a shot, with AImages the selection of models (rhizomatic databases), coupled with the addition and modification of prompts, serves the same role in the direct shaping of a result that is not precisely controlled: each stage of the operation constrains the result, first by choosing the models used to generate the output, then through the prompts, and again in choosing which candidate output to develop or accept as final. These multiple avenues of reconsideration and reorganization shape the AImage prior to completion, whether or not the artist/user is able to select the training data (model) employed in making the image.
The interpretive “paradox” produced by all AImages also describes all AI systems, because these machines separate the apparent creation of significance (expressive utterance) from intention and comprehension, raising a philosophical quagmire where expressive encoding appears to exist in situations where no expressive encoding is possible – precisely because the machine is unintelligent: where there can be no intent, there can also be no signification, because there is no encoding of meaning for the audience to decode (interpret). Although the database contains a large selection of works produced by human agency, and thus potentially contains residues of intentional actions, their recombination does not necessarily retain those features, even if that foundation may account for the presence of “intentional” cues in the generative work. Because the AImage is not simply a variation on an existing template, but a novel product of complex statistical operations, the paradox of the AI’s apparent expressiveness converges on the semic problems posed by Glitch Art. This makes the identification of AI-glitches of great interest to resolving the problem: while glitches function as asemic interruptions in the normative progression of encoding, they can also become expressive (thus semic) features of that encoding, while still retaining the asemic valence essential to their identity as-glitches. This superficial paradox is resolved by the different levels these articulations occupy within sign formation. The disruption is instructive since it brings the semantic cues and articulation into consciousness, demonstrating how the “intentional function” used by the humans interpreting the work tautologically justifies that understanding (Betancourt 2023b: 13–17).
3 The AImage
While all digital media are always generative products of a computational operation – a fact endemic to computer technology – the AImage adds a further layer of processing that has more in common with the interactive generative images of a video game than with the sampled “reality” shown by a photograph: even if the AImage might resemble a photograph, it was not produced using a camera. Although the degree of correspondence between the AImage and a photograph (i.e., the degree of naturalism in the image) invites the same critical and semiotic engagement that photography employs – a capacity reflecting the quantity of photographs employed in training these systems – this convergence is illusory. Instead of the computer display simply rendering a pre-generated file, as when it displays a digital photograph or other image, the AImage itself is an invention of a quotational rhizome (the database) directed by the user; the limitations of this system become evident through the constraints apparent in how ML (machine learning) models do not necessarily generalize from their data, and instead depend on a precise match between training data and prompt. The fabrication made by text-to-image systems relates directly to the semiotic organization of the database – and thus attenuates questions of sign formation through the re-composition of existing semiotic materials – in that the visible appearance of these images derives entirely from how the prompts invoke their training data, an operation directing how the system parses those instructions to create the AImage. When the match is too restrictive or precise, known as “overfitting,” the results demonstrate a limitation that is a common problem for data science as a whole (Elite Data Science 2023).
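Overfitting is straightforward to demonstrate outside image generation. A minimal numpy sketch, with toy data and arbitrary polynomial degrees: a model flexible enough to match its training samples exactly performs worse on inputs it has not seen:

```python
# Overfitting in miniature: a curve flexible enough to pass through every
# noisy training point reproduces the noise, not the pattern, and so fails
# on new inputs. Degrees, sample counts, and the seed are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, 8)

loose = np.polyfit(x_train, y_train, deg=3)  # generalizes reasonably
tight = np.polyfit(x_train, y_train, deg=7)  # interpolates every point exactly

x_new = np.linspace(0.0, 1.0, 100)           # inputs the model never saw
true = np.sin(2 * np.pi * x_new)
err_loose = float(np.mean((np.polyval(loose, x_new) - true) ** 2))
err_tight = float(np.mean((np.polyval(tight, x_new) - true) ** 2))
print(f"degree 3 error on new inputs: {err_loose:.3f}")
print(f"degree 7 error on new inputs: {err_tight:.3f}")  # typically far worse
```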
Ambiguous, ambivalent, or otherwise metaphoric prompts behave in unanticipated ways. The results of such indeterminate instructions may appear aberrant or otherwise fallacious precisely because the relationship between those instructions and their implementation within the sea of data that defines AI systems cannot be readily anticipated. This incapacity to anticipate is neither a fault in the system nor a breakdown of its operations, but corresponds to the nature of machine learning: as exploits of the unknown dimensions and operations possible within the dataset, these anomalous results are not glitches, but instead reflect the difficulty of controlling complex systems. These generative products are interpretive lapses (not erroneous interpretations) productive of deviations that identify alternative potential significations within the language that guides the system. Their heuristic dimensions force a confrontation between the human user directing the machine and the operations of the device, but remain invisible for those encountering the outputs in themselves.
Aesthetic concerns enter this transfer from user prompt to generative output, revealing the role of expectations in assessing AImages. Understanding any generative image as worthy of aesthetic contemplation – as Art – relies on the inclusion of cultural cues within the materials contained by the database; their presence in an AImage is an artifact of the training data, and is not indicative of an expressive choice, intention, or action. Encultured knowledge informs the sign formation process directly, by dictating when and how it is appropriate to interpret expressively, and indirectly, through the knowledge of iconography and its lexicon that shapes the interpretation. These cultural/lexical constraints may shape the prompts and how the training data is indexed, but are irrelevant to the operations of the machine. Instabilities in response to what seems to be a “given” in the text prompt are what give the AI system its uncanny affect – that it appears to be engaging in creative or inventive behaviours suggestive of human consciousness and agency; these beliefs are instances of the pathetic fallacy. To ask for an AImage of jazz composer, bandleader, and trumpeter Miles Davis (see Figure 3) using a prompt such as “the cat Miles Davis” reveals there is nothing in the AImage itself to indicate a correspondence or deviation from its prompt.[1] The output is a product of an unintelligent process based on probabilities. It converges on the literal significance of the prompts through how they are defined in the training data, offering multiple points and opportunities for potential failures (restated schematically after the list):
[A] The simulative: ask for a picture of a cat and it generates a picture of a cat
[B] The incorrect: ask for a picture of a cat and it generates a picture of a beach scene
[C] The unexpected: ask for a picture of a cat and it generates a picture of Miles Davis
[D] The composite: ask for a picture of the cat Miles Davis and it generates a picture of a cat that looks like Miles Davis
[E] The dysfunctional: ask for a picture of a cat and it generates noise
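For reference in the discussion that follows, this taxonomy can be restated as a trivial data structure – e.g., as a hypothetical aid for hand-labelling a batch of generations; the encoding adds nothing beyond the list above:

```python
# The five outcome classes [A]-[E] used in the figures below, restated as a
# trivial lookup table; "labels" is a hypothetical hand-labelling of outputs.
from enum import Enum

class Outcome(Enum):
    A = "simulative: prompt 'a cat' -> a picture of a cat"
    B = "incorrect: prompt 'a cat' -> a picture of a beach scene"
    C = "unexpected: prompt 'a cat' -> a picture of Miles Davis"
    D = "composite: prompt 'the cat Miles Davis' -> a cat resembling Miles Davis"
    E = "dysfunctional: prompt 'a cat' -> noise"

labels = {"output_0001.png": Outcome.A, "output_0002.png": Outcome.E}
print(labels["output_0002.png"].value)
```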

Figure 3: “Miles Davis” photograph by Tom Palumbo, from Wikipedia, 2008.
Only [A] is not some type of unexpected output – even if there are irregularities with the result, or its appearance is inaccurate (see Figures 2 and 4). Distortions and problems that impact the realism of [A] are readily apparent in the sampling of cats from “These Cats Do Not Exist” (Figure 1) or in Figure 5, but issues with fidelity in the rendering of a cat are corrected with sustained use of the system, suggesting they are not technical failures – i.e., not “glitches” at all – but instead reveal some aspect of the data/system in operation and should be regarded as a generative parallel to the grain of photography that reduces the fidelity of the image. The representational errors of [A] are flaws in the rendering, and like [B] and [C] are a product of how well the system performs at its task, and the degree to which the database provides an appropriate guide for its processing. [B] is an example of “overfitting” where the data does not include feline “cats” (see Figure 6). Much of what is or might be identified as an AI-glitch is simply poor rendering [A] where the typically expected and “desired” outcome showing a “cat” is imperfect.
Figure 4: Example of [A] The simulative: ask for a picture of a cat and it generates a picture of a cat. Generated by the author using AI.
Figure 5: Example of [A] The simulative: ask for a picture of a cat and it generates a picture of a cat; this instance includes distortions and other obvious flaws in the rendering. Generated by the author using AI.
Figure 6: Example of [B] The incorrect: ask for a picture of a cat and it generates a picture of a beach scene. Generated by the author using AI.
Both [B] and [C] are also suggestive of the instabilities typical of glitches: i.e., both are indicative of potential technical failure, and thus could be addressed diagnostically in an attempt to correct or repair the operations of the system. However, [C] creates an uncertainty: while Miles Davis is doubtless one of the “coolest cats” in twentieth-century jazz, he is not typically the anticipated response to the request for a picture of a “cat.” The ambiguity of this unexpected outcome does not indicate that [C] is an error, only that the AI system did not behave in the anticipated fashion.
Human slang and colloquial expressions, such as the metaphoric catness of Miles Davis, make the instrumental functions of these generative systems apparent: no matter how “creative” the outputs might seem to the human audience, they are statistical results of mathematical processes in which the ambivalence and instability of human language and discourse must be minimized. The emergent quality of creative invention appears precisely in AImages where a prompt such as “the cat Miles Davis” results in [D], a mixture of elements that seem creative but are simply a literal fusion of two otherwise unrelated terms whose significance is entirely lost in the generative process (see Figure 7). Prompts are the antithesis of poetic expressions when interpreted by a machine. [D] is both an accurate response to the prompt and at the same time an entirely incorrect response: neither error nor not-error, it resides in the realm of miscommunication precisely because the AI system is not intelligent – it does not understand the significance of the instructions it receives, merely renders the statistically likely product of the terms strung together. Replace a prompt such as “the cat Miles Davis” with another such as “cool Miles Davis” and the result is just as likely to be a picture of Miles Davis in a snowstorm or other frigid environment. These prompts function as “search terms” that constrain the processing which ultimately results in an image; in that regard their limitations become immediately apparent. Purely determinative, the text-to-image prompt requires constraints and additional limits to produce a result that approximates human expectations.
Figure 7: Example of [D] The composite: ask for a picture of the cat Miles Davis and it generates a picture of a cat that looks like Miles Davis. Generated by the author using AI.
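Why “cool Miles Davis” drifts toward snowstorms can be suggested with a toy model of the statistical proximity involved. The three-dimensional vectors below are invented for the example (real systems learn embeddings of hundreds of dimensions); the sketch shows only that a literal, statistical neighbourhood can outweigh a colloquial sense:

```python
# Toy illustration: prompt terms are resolved through statistical proximity
# in a feature space, not through idiom. These 3-d vectors are invented for
# the example; the axes read (roughly) as [temperature, jazz, fashion].
import numpy as np

emb = {
    "cool":        np.array([0.90, 0.10, 0.40]),
    "cold":        np.array([1.00, 0.00, 0.10]),
    "snow":        np.array([0.95, 0.00, 0.05]),
    "hip":         np.array([0.00, 0.50, 0.90]),
    "miles davis": np.array([0.00, 1.00, 0.60]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for term in ("cold", "snow", "hip"):
    print(f"cool ~ {term}: {cosine(emb['cool'], emb[term]):.2f}")
# In this toy space "cool" sits nearer "cold" and "snow" than "hip", so the
# literal, statistical reading of the prompt wins out over the colloquial one.
```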
Simultaneously these constraints are linked to the significance of the terms and their arrangement (even if the machine itself cannot or does not incorporate that dimension of meaning into its operations). While outputs [A–D] do not correspond to the encoded significance of the instruction to generate a picture of “the cat Miles Davis,” these deviations from the encoded meaning of the terms appear “creative” because they do not seem to be glitches. Coherent results that violate audience expectations are qualitatively different from the results of system malfunctions and other technical breakdowns that generate noise. Only [E] is unquestionably a glitch or error: the output not only cannot be related to the instructions, it also suggests some type of internal, digital operation that is normally hidden from view – a system failure. Generating noise is the only explicit output of this system that would prompt a sense that it is “broken” or failing to work properly (see Figure 8).
Figure 8: Example of [E] The dysfunctional: ask for a picture of a cat and it generates noise. Generated by the author using AI.
4 Enculturation creates “glitches”
The challenges that AImages present for traditional and existing ideas about digital imagery are explicit, provoking photography to reject them. Photographer Boris Eldagsen won the World Photography Organization’s 2023 Sony World Photography Awards with The Electrician, a black and white image of two women, only to then refuse the award because his picture was an AImage (Figure 9; Glynn 2023); conversely, amateur photographer Suzi Dougherty’s iPhone photograph of her 18-year-old son was rejected from a photography show because the judges believed it was an AImage, an invention independent of reality (Shepherd 2023; Villa 2023). The determinative role of expectations enters into these decisions as a conscious factor where anticipation becomes central to semiosis. Both these examples reveal ideological beliefs as determining factors in aesthetic validation as expressive utterance. For “photographs” the essential potential for an indexical relationship to subjects extant in the world – commonly assumed for “documentary” photographs – separates them from AImages, but originates in cultural beliefs; it is something of a chimera, because this ontological link is precisely what is hidden from view. Realism is a doppelgänger for this ontological relationship, revealing that photographic indexicality is defined by a cultural construct: the degree to which the image-object conforms to past experiences of other photographs whose relation to “the real” is not in doubt. This conventionality reframes these evaluations as a reflection of ideological concerns with separating photography from the AImage.

Figure 9: “Pseudomnesia: The Electrician,” AImage by Boris Eldagsen, 2023. Generated using AI.
These distinctions identifying “successful” and “unsuccessful” AImages thus return to semic questions that shape how audiences parse the depiction and the ways they engage the image-object, evaluating its contents (what the AImage shows, the degree of stylization, the aesthetics invoked by the composition, etc.). The diagnostic apprehension of any content in a traditional photograph involves seeking a causal linkage (indexicality) that assumes a connection to “the real” which aesthetic philosopher Arthur Danto explained by noting that “to be real is simply to satisfy a semantic function, but not as a semantic vehicle” (Danto 1981: 81). This assessment of what appears in any image proceeds without direct concern for its origins: whether traditionally made by hand, with a camera, or as an AImage, the interpretation of what is “real” employs the same encultured knowledge guided by expectations about realism and representation, not an innate ontological quality that is apparent apart from the semiosis which makes its recognition possible. One does not reasonably enquire of reality what its expressive “intent” may be, one simply regards it as a moot fact. As Danto observes, to be real is to lack an encoded function. When confronting any image, this concern addresses whether its contents are what they appear to be. This distinction is diagnostic. It sets assessments of indexicality and denoted “contents” apart from their expressive functions and symbolic potentials; it blocks the procession of sign formation by asserting that semiosis is inappropriate. The identification of something as a “glitch” (as in Glitch Art) problematizes this recognition by rendering it expressive. Diagnosing an error precisely locates the denoted features of the work in a “real” malfunction whose autonomy from expressive action (agency) precludes being a semantic vehicle. This change in interpretive role identifies a semiosis coincident with the “material function,” an expression referencing the substance and materiality of its medium (Betancourt 2020). When the AImage contains something unexpected, such as distortion or digital fragmentation, the tendency is to regard that image as “glitched” whether the results reflect an actual breakdown or not (see Figure 10). This identification of the image as “broken” reflects the heritage of computer glitches and malfunctions understood diagnostically and serving an evaluative role as a corrective mechanism in the development of digital and electronic systems: they provide a symptom for what needs to be fixed through their correlation with specific types of breakdown and system failure.
Figure 10: Example of [A] The simulative: AImage where colour blocks suggest the glitched fragmentation produced by databending. Generated by the author using AI.
Glitch Art revels in the ambiguous separation between denotation (semiotic) and depiction (diagnostic). Separate from, but parallel to, the diagnostic uses of glitches are their semic roles, which in Glitch Art impose an expressive use. This changed function provides an example of the entanglement of human perceptions and interpretations: to be “glitched” means to evoke a malfunction, but to be expressive – “Glitch Art” – it must also function as a semantic vehicle for meaning. Employing AI to create glitched images renders works where the most typically anticipated features of technical failure appear without any malfunction: they are not generative products of breakdown, but reifications of the “material function” that match audience expectations for digital materiality in past images. The AI system renders these expectations conventionally apparent through its generative processes. The audience engages the AImage by relating a diagnostic recognition of its contents to their past experiences with similar work not generated by AI. The conventionality of this association is a product of how past experience informs all interpretations of perception, guided by informal “rules of engagement” that allow audiences to determine when and how it is appropriate to interpret (Harris 2009); AI generation identifies and correlates these features to text prompts for on-demand production.
The audience’s expectations have a determining role in the encounter, defining their acceptance/rejection of the image-object as an expressive utterance. Expectations are central to all these evaluations, but become explicit when confronting glitches, since if glitches are not considered “creative expressions,” they are merely technical failures. The case of Glitch Art is instructive when confronting ambiguous AImages such as the pair of images in Figure 11. The decision that something is a glitch, as is often the case with AI-glitches (see Figure 11, top), depends on a decision about its relationship to the digital computer itself: if viewers interpret this abstract collection of forms, shapes, and gradients as symbolically alluding to the “substance” of the digital medium, that apperception transforms them into expressive articulation. If they do not see the marks as references to the digital medium, no matter how abstract those marks may be, they do not produce the recognition of originating with that medium, as with Figure 11 (bottom). Glitch Art requires a pre-semic change of category (diagnostic to symbolic) as well as an apprehension of the error as the result of a creative act. Generative AI systems make the semiotic basis of this judgement apparent by separating the range of technical affects associated with the “material function” in Glitch Art (see Figure 12) from their physical source in the error or breakdown, revealing that the recognition of a glitch is a product of a specific iconography. Any “materiality” expressed by these semantic cues should not be linked to any specific physical cause. Their presence – signifying glitches and breakdowns in historical media, but reproduced by/in the AImage – results from their inclusion in the rhizomatic database, allowing them to be reproduced on demand. This instrumentality renders what was formerly a technical breakdown as a symbolic form deployed to promote sign formation and direct the human audience to identify the generative result as “intentional,” thus a creative product of intelligent action and expression (Betancourt 2023a, 2023b).
Figure 11: Example of [E] The dysfunctional: two examples of Glitch Art made using the generative protocols that render AImages, where the top image suggests a traditional glitch because it matches expectations about digital materiality; the bottom image does not typically suggest a glitch, but is also a product of one. Generated by the author using AI.

Figure 12: The empirical referents of “material function” appearing in both analogue and digital images emerge from the technical basis for the glitches; the malfunctions shown by AImages are products of software glitches behaving like hardware failures – software-based analogues to distortion and mechanical failure.
These constraints are explicit in appraisals of Glitch Art, demonstrating their role in the semiosis of generative systems: the greater the naturalism of the output, the more glitch-like any stylized deviations from that realism become. In the case of photography and the camera, the formal cues built into the machine itself – the shape of the photograph, its colour range, depth of field, sharpness of focus, etc. – provide a fixed reference point for evaluation, while for digital systems these features become plastic and easily changed, yet remain implicit. Recognizing a “glitch” depends on the ways that it disrupts the stability of these reference points, appearing as a deviation from the anticipated order – a violation of the expectations that are also definitional for “Glitch Art” (Betancourt 2016, 2023b). The confrontations between photography and AImages apparent in the Eldagsen and Dougherty cases are “cultural glitches” that reveal how ideological beliefs about intentional action, creativity, and art also shape the sign formation process which determines the aesthetic and semic status of the work itself, as well as beliefs about its ontology.
5 Anticipation and intentionality
The ideology of creativity emphasizes the instrumental nature of prompts as a way to recover the artist’s agency, while at the same time denigrating the products of the machine itself; but this connection of instrumental prompt to the evaluation of the AImage reanimates what the literary theorist W. K. Wimsatt and the philosopher Monroe C. Beardsley termed the “intentional fallacy”: the how and the what that describe the role of prompts are separate from the audience’s consideration of the AImage. Their argument for an absolute separation between the claims of authorial intention (in the case of AImages, the prompt) and the resulting work the audience engages is crucial, but is tempered by the “intentional function” that designates when it is appropriate to consider phenomena encountered in life as if they were products of agency – an intentionally performed and calculating action. Separating metaphysical claims about intentions masquerading as a discourse about significance and origins (the prompt) from the empirically present features (cues) that inform and guide interpretation becomes essential to the consideration of AImages precisely because the system itself develops that differential as an explicit part of its operations. The role of the prompt as a vehicle for human agency (née intention) entangles the artistic element of production, the human creativity, with the production of the AImage in itself. This system will only produce those images it has been both (a) constructed, via the rhizomatic database, and (b) tasked, via the prompt, to make. The implicit manipulations of these functions – model and prompt guiding generation – are complementary but independent interpretive concerns. The results are an interaction between the instructions that direct the machinic operation and the design of the device; however, understanding the output of the generative process as “creative” is a choice that becomes explicit when an AImage contains distorted imagery that could be identified as-glitched, or considered expressively. The blobby, distorted shapes that evoke a group of bodies in Figure 13 have analogues in avant-garde painting and Surrealist art, offering a shifting series of non/figures that emerge and disappear (O’Meara and Murphy 2023). These outputs are often simply rejected or ignored because of their low fidelity to photography, even if they have expressive potential. Regarding these AImages as-expressive depends on shifts in apperception that allow what could be a product of some malfunction in the system to be instead a consciously chosen (i.e., “intentional”) expressive act, for any aesthetic potential to be realized. It is the same transition between malfunction and aesthetic reverie that defines Glitch Art.

Figure 13: AImage containing abstract forms whose proportions and relationships suggest bodies evoking Surrealist paintings. Generated by the author using AI.
The ideology of creativity that defines human intention/action via the iconography of “machinic failings” is evident in these unstable “bodies” precisely because the recognitions that produce the ‘bodies’ address human perception and cognition in ways that a machine is not expected to be able to do, even though digital compositing software has been used to create surrealistic images since the 1980s (Popper 1993). The distorted, surreal bodies’ coherence as expressive works depends on precisely those human recognitions that AI systems do not have; generation of the image is an unintelligent process guided by statistical correspondences rather than gestalt and perceptual order. When a generative product is accepted as demonstrative of a creative action, this tendency of AI to combine or confuse the outlines of human bodies becomes an automated exaggeration of historical painterly techniques for suggesting movement in static images (Betancourt 2002b). The twisting body (see Figure 14, left) that teasingly turns away from the viewer employs the same technical effect, “painterly motion,” used by the Baroque painter Peter Paul Rubens in a picture of his wife, Portrait of Helene Fourment in a Fur Wrap (c. 1636–1638; Figure 14, right). The dynamic effect created in both pictures is produced by showing a contorting body that corresponds to the same historical painterly technique, bridging the generative product and a familiar aesthetic form. Recognizing the unnatural form of these figures as depicting motion invites considering that distortion as an intentional decision – thus expressive and necessarily a creative choice. Rubens’ work challenges the proposal that the displacement which creates this effect is unconscious or unplanned (Berger 1972), even if its appearance in the AImage must be an unplanned artifact, both unexpected and potentially called a glitch. The transition to understanding it as an expressive movement depends on human interpretation rather than an intentional choice by the machine. What for Rubens was a specific painterly effect requiring training and practice becomes something entirely different for AI – an aberration autonomously resulting from unintelligent generative operations.
Figure 14: (Left) AImage of a posed body that creates apparent motion due to its distortion. Generated by the author using AI. (Right) Peter Paul Rubens, Das Pelzchen [The Little Fur, or Portrait of Helene Fourment in a Fur Wrap], Gemäldegalerie 688, Kunsthistorisches Museum, Vienna (c. 1636–1638).
Conceiving these fusions that combine human figures in unnatural ways as expressive offers an autonomous variant of the synthetic recognitions at the heart of Surrealist painter Salvador Dalí’s “paranoiac-critical method.” His “metamorphic images,” a particular type of optical illusion, show images where the contents are ambiguous, allowing a series of discrete, unique visual interpretations of the depiction (Caws 2001). The results of his protocol are designed to enable viewers to see a series of discrete, independent, but converging pictures:
The way in which it has been possible to obtain a double image is clearly paranoiac. By double image it is meant such a representation of an object that it is also, without the slightest physical or anatomical change, the representation of another entirely different object, the second representation being equally devoid of any deformation or abnormality betraying arrangement. (Dalí 1998: 180)
What Dalí calls the “delirium of interpretation” is the result of disparate and otherwise unrelated images joining to form composites through an illogical, associative process based on resemblance. His paranoiac-critical methodology differs from the traditional Surrealist automatisme by requiring the audience to actively create the sequence of images encountered, unlike the more typical approach where the resulting work is the document of a process happening in the mind of the artist and presented to an audience (Breton 1972). The shifting recognitions in metamorphic images demonstrate the active role and limit of interpretation by presenting the audience with visual ambiguity. These illusions are of interest precisely because they happen exclusively in the human mind, directing attention to the play of perception and recognition; AI can also produce these types of metamorphic image (see Figure 15). The apparent “creativity” and “expressiveness” they offer demonstrates these identifications are a projection of the audience in response to their complexity and ambivalence. This amplification of subjective response separates the guiding intelligence from the expressive result, distancing the AI’s output from input (prompt, rhizomatic database) to explain the AImage as an autonomous product that is constrained by the human actions constructing and then operating the machine.

Figure 15: AImage producing a metamorphic illusion of Miles Davis as a beach scene. Generated by the author using AI.
The acceptance or rejection of these outputs as aesthetic objects does not alter the dynamic presentation in Figure 15, nor diminish the apparent motion that both bodies in Figure 14 create – an affect and modulation characteristic of “old master” painting. These aesthetic apperceptions begin as a perceptual phenomenon, demonstrating that the semiotics of generative works is no different from that of any other image-object, no matter its source: even the earlier generative images of photography are distinct from whatever claims to documentary information might be proposed for them. These distinctions revitalize the “intentional fallacy” that distinguishes between claims of what the human operator “intended” and the contents of the generated artwork: it separates the qualia of any work, and all those other decisions – even determinative ones – productive of and informative about that work, including the prompts responsible for its generation, from the work itself. The capacity to reproduce Surrealist metamorphic images or the motion effects of old master paintings brings the familiar imagistic and plastic concerns with depicting life-like figures into consideration as one potential among others, complicating the assessment of generative art and the identification of expressive AImages. Thus separating instrumental instructions from AImages brings the pathos of the metaphor “artificial intelligence” into consciousness: the machine is not conscious; it does not know the significance of what it produces, nor the meaning of the prompts for its human audience.
6 Conclusions: the “creativity” of AImages
An insistence on the constructive and conditional nature of interpretation as the determining factor in assessing “creativity” is an inevitable dimension of engagement with AImages that complicates, if not fundamentally obviates, the constraints posed by the ideology of creativity. AImages amplify the problem of separating a malfunction from an expressive product. Glitches in traditional digital media are typically understood as a reflection of the “materiality” of the technical medium, a revelation of internal structure and organization; relating the generative artifacts of .jpeg compression that appear in the image to the hidden and essential operations of the computer in creating that image depends on an a priori familiarity with these artifacts from past encounters to shape and inform any engagement with them: their identification as “materiality” for these digital images is a conventional interpretation arising from those past experiences. This diagnostic for specific elements within the image-object then informs any symbolic meanings that are assigned to these material markers, in a further complexification of their initial interpretation; however, their dismissal as non-signifying noise is equally possible, a diagnostic apprehension that ejects these aspects of the image from additional consideration. “Materiality” derives from this diagnostic moment, a decision by the viewer to consider some aspects of the image as diagnostics for the technology of its presentation, but simultaneously as non-lexical: as dimensions of articulation whose symbolic meaning, if any, is connected to their self-referential potential, rather than as a lexical function based in the encoded signs of language.
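Both the recompression artifacts and the databending glitches referenced here are easily produced deliberately. A short sketch using the Pillow imaging library, with hypothetical file names, showing the two classic procedures:

```python
# Two classic procedures behind the ".jpeg materiality" discussed above.
# File names are hypothetical; requires Pillow (pip install Pillow).
from PIL import Image

# 1. Heavy recompression: a very low quality setting surfaces the 8x8 block
#    artifacts of JPEG quantization as visible texture.
img = Image.open("source.png").convert("RGB")
img.save("crushed.jpg", format="JPEG", quality=4)

# 2. Databending: corrupting bytes past the header of the compressed stream
#    yields the smeared, shifted fragments familiar from Glitch Art. (Some
#    corruptions make the file undecodable -- part of the technique's risk.)
with open("crushed.jpg", "rb") as f:
    data = bytearray(f.read())
for offset in range(2048, len(data) - 2, 997):  # skip header; arbitrary step
    data[offset] ^= 0xFF                        # invert one byte's bits
with open("databent.jpg", "wb") as f:
    f.write(bytes(data))
```

Whether the result reads diagnostically as “broken” or expressively as Glitch Art is, as argued throughout, the viewer’s decision rather than a property of the file.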
The model (rhizomatic database) employed autonomously by the generative system brings these uncoded material features forward for examination, excavating the hidden dimensions of sign formation and their role in the apprehension of a “creative act,” whether performed directly by a human or generated by a machine. This productive system depends on the anteriority of human actions that shape and fill the datasets defining its potentials. In deriving the AImage from this data, the system brings these semantic cues into the foreground precisely because the machine is unconscious: any “human creativity” it displays must begin as something empirically present in the data. The generative process thus accentuates their semic role in prompting an aesthetic reverie, one made possible by considering the image-object’s features as expressive actions guided by the “intentional function.” The lack of direct human control over the generative process neither negates the human basis for its semiotic assessment as-expressive, nor elides the humanity of interpretation in deciding the creative status of the image-object encountered as aesthetic: it elevates these actions to the level of axioms for sign formation without which, nothing.
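The operational distance between prompt and output can be seen in a minimal text-to-image sketch (assuming the open-source diffusers library and a publicly available Stable Diffusion checkpoint; the model and prompt here are illustrative, not the system that produced the figures in this article). The prompt functions only as statistical conditioning over the model’s distilled training data, never as a meaning the machine understands:

```python
# Minimal text-to-image sketch using the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load pretrained weights: the distilled trace of the training data,
# the "rhizomatic database" that the prompt indexes into.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt operates as a conditioning signal; the system attaches no
# representational meaning to the word "cat."
image = pipe("a photograph of a cat").images[0]
image.save("aimage_cat.png")  # hypothetical output filename
```

Every semantic cue in the saved image was already empirically present in the dataset; the prompt merely summons it.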
Deviations from both the prompt and the audience’s expectations for the image’s fidelity to photography shape their response to the work as being glitched (i.e., an error) or being expressive. What determines the fidelity of the AImage output is not merely a question of resemblance to the text prompts but of what the audience believes should be output from those prompts. This decision is not about the contents of the database being employed, but a reflection of the encultured knowledge summoned by the prompt. This chasm between the operational function of prompts and their meaning defines the ideology of creativity by a boundary that distinguishes a consciously formed and meaningful work from one that is merely the output of a mechanical operation. Expectations are revealed by how the audience anticipates and then evaluates their encounter with an AImage, showing that signification depends on understanding the logic of the utterance and the sequence of its terms as a permeable but self-contained entity constrained by and parallel to the empirical dimensions of perception (Eco 1994). Deviations from the constraints produced by these expectations gain the appearance of “invention” or “creative action” because ambivalence is an expressive dimension valued in human aesthetic pursuits: the human audience embraces a duality of address that reveals creativity to be tautological, an artificial construct produced by that audience as an evaluation of what they encounter. The capacity of the AImage to match the “historical precedent” found in the innovations of avant-garde art threatens the ideology of creativity that restricts sign formation to products of intentional human activity. This ascription of “intent” relies on semantic cues physically present in perception – cues which AI systems can readily reproduce without understanding, or even any knowledge that they do so. It is the impossibility of AI’s actions being intentional that poses the dilemma. AI-based “creativity” exploits the hallucinatory capacities of human interpretation to link together and articulate meaning from disparate and disconnected elements in the same ways that the Surrealist painter Salvador Dalí noted in 1929 when he first proposed his paranoiac-critical method:
It is enough that the delirium of interpretation should have linked together the implications of the images of the different pictures covering a wall for the real existence of this link to be no longer deniable. Paranoia uses the external world in order to assert its dominating idea and has the disturbing characteristic of making others accept this idea’s reality. The reality of the external world is used for illustration and proof, and so comes to serve the reality of the mind. (Dalí 1998: 180–181)
The illogical associative process based on resemblance that Dalí exploits also shapes the generative process of the AImage, apparent in the surrealistic affect of the bottom AImage in Figure 11 or in Figure 13. Recognizing these shapes as figures requires the viewer to assimilate their biomorphic qualities to those of stylized or abstracted human figures, placing this picture in the same category as Surrealist paintings by Arp, Dalí, Tanguy, or Picasso produced in the 1920s and 1930s that have a similar affect of truncated recognition. Distortion becomes an aesthetic qualia. Accepting the AImage as categorically similar to the formal order of avant-garde art undermines the ideological claims for those earlier works’ exceptional status. Figure 13 is not the product of a conscious abstraction of human form, nor even necessarily a synthesis of these historical artists’ work, but is instead potentially a malfunction in the generative process: a glitch in the system’s attempt to fabricate something approaching photographic verisimilitude. The audience’s decision to accept or reject the AImage’s distortions is not a reflection upon the qualia of the work, but upon the audience’s internalized aesthetic beliefs and how that ideology shapes their responses to those qualia. The question of “whether AI is capable of creativity” reveals itself to be a product of these internalized beliefs, settled in advance of any evidence offered yea or nay. What the computer creates depends on a human decision that either embraces or rejects its outputs as candidates for aesthetic consideration in advance of their presentation. The question of “machine creativity” is thus instructive about the framework used to make the evaluation, rather than the evaluation itself – it becomes a “magic mirror” revealing how cultural, social, and ideological beliefs shape semiosis by dictating the conditions for sign formation, and thus determining in advance what is and is not an expressive utterance.
References
Berger, John. 1972. Ways of seeing. New York: Penguin.
Betancourt, Michael. 2002a. Disruptive technology: The avant-gardeness of avant-garde art. Ctheory. https://journals.uvic.ca/index.php/ctheory/article/view/14580/5459.
Betancourt, Michael. 2002b. Motion perception in movies and paintings: Towards a new kinetic art. Ctheory. https://journals.uvic.ca/index.php/ctheory/article/view/14573/5420.
Betancourt, Michael. 2016. Glitch art in theory and practice: Critical failures and post-digital aesthetics. New York: Routledge. https://doi.org/10.4324/9781315414812.
Betancourt, Michael. 2020. The “material function” in cinema: Resolving the paradox of the glitch. Semiotica 236–237(1/4). 251–273. https://doi.org/10.1515/sem-2019-0006.
Betancourt, Michael. 2022. Art, AI, and culture: On the social identity threat posed by technological change. Savannah, GA: I’m Press’d.
Betancourt, Michael. 2023a. The “intentional function” in still and moving photographic images. Semiotica 253(1/4). 71–80. https://doi.org/10.1515/sem-2020-0065.
Betancourt, Michael. 2023b. Glitch theory: Art and semiotics. Savannah, GA: I’m Press’d.
Breton, André. 1972. Manifestos of surrealism. Ann Arbor, MI: University of Michigan Press.
Caws, Mary Ann (ed.). 2001. The Surrealist painters and poets: An anthology. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/6565.001.0001.
Cloninger, Curt. 2011. GltchLnguistx: The machine in the ghosts/static trapped in the mouths. In Nick Briz, Evan Meaney, Rosa Menkman, William Robertson, Jon Satrom & Jessica Westbrook (eds.), Glitch reader(ror), 23–41. Chicago: Unsorted.
Dalí, Salvador. 1998. Oui. Boston, MA: Exact Change.
Danto, Arthur. 1981. Transfiguration of the commonplace. Cambridge, MA: Harvard University Press.
Duchamp, Marcel. 1989. The writings of Marcel Duchamp, Michael Sanouillet & Elmer Peterson (eds.). New York: Da Capo Press.
Eco, Umberto. 1994. The limits of interpretation. Bloomington, IN: Indiana University Press.
Elite Data Science. 2023. Overfitting in machine learning: What it is and how to prevent it. https://elitedatascience.com/overfitting-in-machine-learning (accessed 15 July 2023).
Glynn, Paul. 2023. Sony World Photography Award 2023: Winner refuses award after revealing AI creation. Entertainment & Arts, BBC News. https://www.bbc.com/news/entertainment-arts-65296763 (accessed 5 May 2024).
Graeber, David. 2014. Debt: The first 5,000 years. Brooklyn, NY: Melville House.
Groupe Mu. 1977. Rhétorique de la poésie: Lecture linéaire, lecture tabulaire. Paris: Seuil.
Harris, Roy. 2009. Integrationist notes and papers 2006–2008. Gamlingay: Bright Pen.
Kant, Immanuel. 1986. The critique of pure reason, Werner Pluhar (trans.). Indianapolis: Hackett.
Kant, Immanuel. 1987. The critique of judgment, Werner Pluhar (trans.). Indianapolis: Hackett.
Krauss, Rosalind. 1985. The originality of the avant-garde and other myths. Cambridge, MA: MIT Press.
Menkman, Rosa. 2011. A vernacular of file formats. In The glitch moment(um) (Network Notebooks 4), 17–25. Amsterdam: Institute of Network Cultures.
Meyer, Ursula. 1972. Conceptual art. New York: Dutton.
O’Meara, Jennifer & Cait Murphy. 2023. Aberrant AI creations: Co-creating surrealist body horror using the DALL-E mini text-to-image generator. Convergence 29(4). 1070–1096. https://doi.org/10.1177/13548565231185865.
Popper, Frank. 1993. Art of the electronic age. New York: Abrams.
Robinson, Henry Peach. 1966 [1892]. Paradoxes of art, science and photography. In Nathan Lyons (ed.), Photographers on photography, 82–88. Englewood Cliffs, NJ: Prentice-Hall.
Shepherd, Tory. 2023. Woman’s iPhone photo of son rejected from Sydney competition after judges ruled it could be AI. The Guardian. https://www.theguardian.com/australia-news/2023/jul/11/mothers-iphone-photo-of-son-rejected-from-sydney-competition-after-judges-ruled-it-could-be-ai (accessed 15 July 2023).
Villa, Angelica. 2023. AI-suspected image taken at Gucci exhibition disqualified from photography contest. Art News. https://www.artnews.com/art-news/artists/ai-suspected-image-disqualified-australian-photography-contest-1234674107/ (accessed 15 July 2023).
Wall, Jeff. 1998. Marks of indifference: Aspects of photography in, or as, conceptual art. In Elizabeth Janus (ed.), Veronica’s revenge, 73–100. New York: Scalo.
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.