Article, Open Access

Pan-indexicality and prompt: developing a teaching model for AI-mediated academic writing

  • Jing Zhu

Jing Zhu received her PhD from the School of Foreign Languages, Soochow University, in 2019, and is now an associate professor at the same school. Her research interests include cognitive linguistics, semiotics, and foreign language teaching.

and Chunyun Duan

Chunyun Duan is an associate professor at Dongwu College, Soochow University. Her major research interests are cognitive linguistics, semiotics, and foreign language teaching. Her academic papers have been published in journals such as Foreign Language Education and the Journal of Tianjin Foreign Studies University.

Published/Copyright: April 4, 2025

Abstract

AI-mediated academic writing calls for new pedagogical approaches to the application of prompt engineering in writing courses. Whereas previous studies mainly inform students of prompt engineering techniques, little is known about how prompt engineering functions from the perspective of meaning negotiation between the human and generative AI. This paper explores the integration of the Pan-indexical process of linguistic signs into a prompt-based teaching model (PBTM), emphasizing its potential to facilitate meaning negotiation in prompt engineering during the early stage of AI-mediated academic writing. The PBTM consists of four key components: encyclopedic knowledge, contextual information, evaluative critical thinking, and iterative design. The PBTM is applied in the early, idea-development stage of academic writing and is organized around four major steps: crafting the initial prompt; refining the prompt with contextual information; engaging in evaluative critical thinking; and iterative progression toward a desired response. This paper suggests that the Pan-indexical process of linguistic signs can be employed in AI-mediated pedagogical approaches to enhance students' ability to optimize prompts through a deeper understanding of the meaning negotiation between students and generative AI, thereby supporting their academic writing process.

1 Introduction

Academic writing is a multi-faceted activity that demands the integration of various cognitive skills and knowledge to effectively orchestrate writing processes, including goal setting, problem-solving, and the strategic management of memory resources (Allen and McNamara 2017; Flower and Hayes 1981). The writing process can be even more challenging and time-consuming for English-as-a-foreign-language students, who often encounter language barriers. In recent years, generative artificial intelligence (AI) has emerged as an essential productivity tool, facilitating at least six core academic domains: idea development and research design; content development and structuring; literature review and synthesis; data management and analysis; editing, reviewing, and publishing support; and communication outreach and ethical compliance (Khalifa and Albadawy 2024). Generative AI can enhance writing proficiency and skills by delivering feedback on grammar, coherence, and style (Aljanabi et al. 2023; Tarchi et al. 2024), providing citations (Aydin and Karaarslan 2022), generating abstracts (Gao et al. 2023), and even creating quality essays on diverse topics (Hoang and La 2023; Nguyen 2023; Susnjak 2023). Such capabilities also raise questions about the effectiveness of using generative AI in academia. For example, Tarchi et al. (2024) found that the quality of written essays was not associated with students' use of ChatGPT; furthermore, ChatGPT use was negatively associated with the amount of source-text information that students quoted or paraphrased in their written essays. This raises concerns about what really matters in facilitating the proper use of generative AI in academic writing.

Prompt engineering, the practice of designing specific inputs to optimize AI-generated outputs, has been identified as a crucial factor in leveraging the potential of generative AI in academic contexts (Özçelik and Ekşi 2024). However, successful integration of generative AI in academic writing is not merely a matter of technology, such as prompt engineering techniques; it requires a deep understanding of the meaning negotiation between the human and generative AI and of its impact on pedagogical approaches. While much has been written about prompt engineering as a technical proficiency accelerator (Cain 2024; Giray 2023; Walter 2024), usually to inform teachers and students about how to use it efficiently, we are only beginning to understand how it functions from the perspective of how meaning is constructed and interpreted by both the human and generative AI.

In this context, the Pan-indexical process of linguistic signs (Wang 2019), a recent development in semiotic theory, offers a valuable theoretical framework for enhancing the meaning negotiation between the human and generative AI through prompt engineering. By considering both interpretanti, the information shared by the addresser and the addressee (the human and generative AI), and interpretantii, the addressee's (generative AI's) personal encyclopedic knowledge and perception of contextual information, the Pan-indexical process provides critical insights into how meaning is negotiated in prompt engineering. This semiotic approach promises to refine the pedagogical and practical applications of prompt engineering, particularly in the context of academic writing.

For the pedagogical dimension, AI-mediated academic writing calls for new pedagogical approaches to the application of prompt engineering in writing courses and reshapes human understanding of academic integrity (Parker et al. 2024). It emphasizes the necessity of adapting teaching methods in response to generative AI's growing capacities, better preparing students for an increasingly AI-mediated educational context and future workplace (Chan 2023). This paper explores the integration of generative AI in academic writing pedagogy by proposing a prompt-based teaching model. Grounded in the Pan-indexical process of linguistic signs, this model identifies key components for optimizing prompt engineering, facilitating more effective student engagement with generative AI during the early stage of academic writing. Specifically, this paper focuses on the idea development phase, where the research topic, its significance, and relevant background information are articulated to formulate a clear research problem (Creswell 2015). By emphasizing this early stage, the model ensures that students can leverage emerging technologies without the ethical concerns related to AI-generated text in finalized research papers.

2 Literature review

2.1 The Pan-indexical process of linguistic signs

The theoretical framework of this study is grounded in the Pan-indexical process of linguistic signs, which is based on Peirce's trichotomy of signs and the concept of indexicality (Wang 2019). This model provides a framework for interpreting linguistic signs through the interaction of key semiotic components: sign, object, and interpretant (Peirce 1955). Figure 1 illustrates the Pan-indexical process, which influences the meaning of a linguistic sign. According to this process, a linguistic sign does not inherently possess meaning but points to an object, guiding the addressee to identify both the literal and figurative references of the object. The interpretanti refers to the shared understanding between the addresser and the addressee, which ensures a correct interpretation of the object's reference. The interpretantii represents the addressee's personal encyclopedic knowledge, personal emotions, and contextual information, which shape the interpretation in a specific communicative context. The formal sense of a linguistic sign, which refers to its conventional meaning, is typically understood automatically. The conventional sense corresponds to the standard definition of a sign, often found in sources such as dictionaries. The overall sense of a sign encompasses a broader range of factors, including personal encyclopedic knowledge, personal emotions, and contextual information, and is primarily linked to the interpretantii.

Figure 1: The Pan-indexical process of linguistic signs (Wang 2019: 57).

As the Pan-indexical process of linguistic signs has proven effective in general semiotic analysis (Zhu and Duan 2022; Zhu et al. 2023), this study applies it to the construction of a pedagogical approach for AI-mediated academic writing.

2.2 Prompt engineering in academic writing

Prompt engineering is a relatively new discipline that focuses on the process of drafting, crafting, and refining prompts to elicit desired AI-generated responses (Brown et al. 2020). In educational contexts, the essence of prompt engineering lies in its ability to transform generative AI from a repository of information into an interactive tool that stimulates deeper learning and understanding (Lee et al. 2023). The core aim of prompt engineering is to guide students toward the responsible and effective use of AI tools, recognizing that prompt engineering itself is an experimental process. By engaging in trial and error, students can refine their prompts to generate more relevant and accurate outputs.

Over time, prompt engineering techniques have evolved, ranging from simple input–output prompting to more sophisticated methods like tree-of-thought prompting. Input–output prompting represents the most basic and common approach, where prompts lead directly to outputs (Liu et al. 2021). This method has been expanded into chain-of-thought prompting, where reasoning steps are inserted to guide the AI toward a specific outcome. For instance, the phrase "Take a deep breath and do it step by step" has become a widely used addendum to prompts, helping AI systems generate more structured and coherent responses (Zou et al. 2023). Such universal and transferable prompt suffixes have even been employed to steer models toward desired outputs.
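The suffix-based move from input–output prompting to chain-of-thought prompting can be sketched minimally in Python. This is an illustrative sketch only: the function name `build_cot_prompt` and the constant are hypothetical, and a real application would pass the resulting string to a generative AI system rather than print it.

```python
# A reasoning-inducing suffix of the kind quoted above.
COT_SUFFIX = "Take a deep breath and do it step by step."

def build_cot_prompt(task: str) -> str:
    """Turn a plain input-output prompt into a chain-of-thought prompt
    by appending the reasoning-inducing suffix verbatim."""
    return f"{task}\n\n{COT_SUFFIX}"

plain = "Summarize the main argument of this abstract."
print(build_cot_prompt(plain))
```

The point of the sketch is that the transformation is purely textual: the same task prompt is reused, and only the appended suffix changes how the model is steered.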

Another development in prompt engineering is expert prompting, developed by Xu et al. (2023), which involves assigning AI a role as an expert aligned with the query's context and then integrating that identity into the prompt. This technique yields responses that are more concrete and less vague, as the AI generates answers from an expert's perspective. A further improvement, self-consistency prompting, extends the reasoning capabilities of AI, enabling it to evaluate multiple responses and select the one that best aligns with a given rubric or criteria (Wang et al. 2023). Self-consistency prompting aims to minimize the risk of AI generating fabricated or inaccurate information by encouraging consistency across multiple outputs.
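The selection step of self-consistency prompting can be sketched as a majority vote over several independently sampled answers. The sketch below makes simplifying assumptions: the samples are hard-coded (in practice they would come from repeated generative AI calls at non-zero temperature), and `self_consistent_answer` is a hypothetical name, not an API of any cited system.

```python
from collections import Counter

def self_consistent_answer(samples: list[str]) -> str:
    """Return the answer produced most often across several independent
    model samples (majority vote), the core move of self-consistency."""
    return Counter(samples).most_common(1)[0][0]

# Hard-coded stand-ins for repeated LLM samples, purely for illustration.
samples = ["ecocriticism", "posthumanism", "posthumanism"]
print(self_consistent_answer(samples))  # -> posthumanism
```

Answers that the model can only produce by fabrication tend to vary across samples, so agreement across outputs serves as a cheap consistency check.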

For those who are uncertain about how to begin designing effective prompts, two additional techniques can be particularly helpful. Automatic prompt engineering involves providing AI with one or more examples and asking it to generate prompts that would produce high-quality responses (Zhou et al. 2023). Similarly, generated knowledge prompting (Liu et al. 2021) sets the scene by having AI use its own generated knowledge to construct the narrative framework, rather than relying on human-provided examples. Among the most complex methods is tree-of-thought prompting, a combination of chain-of-thought and self-consistency techniques (Yao et al. 2023). In this approach, AI is presented with a complex scenario and generates several lines of thought. It then revises its responses if inconsistencies are found, ultimately converging on the most accurate and coherent response.

In the context of academic writing, various elements, components, and processes of effective prompt engineering have been explored. Giray (2023) identified the key elements of a prompt, which include the instruction, content, input data, and output indicators. Cain (2024) highlighted three crucial components for successful prompt engineering in educational settings: content knowledge, critical thinking, and iterative design. These components serve as the foundation for refining AI-generated outputs and improving the quality of academic writing outcomes. Marvin et al. (2024) proposed a process for creating effective prompts structured in five steps: defining the goal, understanding the model's capabilities, choosing the right prompt format, providing context, and testing and refining. Thanasi-Boçe and Hoxha (2024) developed innovative prompt engineering techniques that are well aligned with entrepreneurial learning. However, Kumar (2023) designed a series of prompts and found several pitfalls: instructions were not well followed, references were largely inaccurate, in-text references were not cited, and practical examples were lacking.
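The four prompt elements identified by Giray (2023) can be sketched as a simple template function. The labels and layout below are an assumption for illustration, not a format prescribed by the source; `assemble_prompt` is a hypothetical helper.

```python
def assemble_prompt(instruction: str, context: str,
                    input_data: str, output_indicator: str) -> str:
    """Combine the four elements of a prompt (instruction, content/context,
    input data, output indicator) into one labeled prompt string.
    The labeling scheme is illustrative only."""
    return ("Instruction: " + instruction + "\n"
            "Context: " + context + "\n"
            "Input: " + input_data + "\n"
            "Output: " + output_indicator)

print(assemble_prompt(
    "Suggest three narrowed research topics.",
    "An MA thesis project on environmental literature.",
    "Margaret Atwood's novels.",
    "A numbered list with a one-sentence rationale for each topic."))
```

Making the four elements explicit in this way helps students notice when a prompt is missing one of them, most often the context.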

Whereas previous studies have underscored the importance of recognizing prompt engineering as a way of mastering technology, this paper claims that integrating prompt engineering into academic writing also requires a deep understanding of meaning negotiation between the human and generative AI. Engaging in an ongoing, iterative cycle of meaning negotiation with generative AI (Parker et al. 2024) is therefore fundamental, because we are still at the beginning of learning how meaning is constructed and interpreted in this interaction.

3 Construction of a prompt-based teaching model

While a substantial body of literature exists on academic writing development and pedagogy (e.g. Aitchison and Guerin 2014; Lee and Danby 2012), there is a notable lack of research on specific pedagogical approaches to integrating generative AI into the academic writing process. Instructors acknowledge that generative AI can be helpful in the early stages of academic writing and enhance the efficiency of writing (Cardon et al. 2023). Students want educational institutions to offer training in the balanced use of generative AI (Khalifa and Albadawy 2024). Therefore, this paper seeks to develop a teaching model that enhances students' ability to design and refine prompts for effective use in academic writing, with a particular focus on meaning negotiation informed by the Pan-indexical process of linguistic signs. By focusing on the dynamic interaction between students and generative AI, this approach aims to cultivate a deeper understanding of meaning negotiation through prompt engineering in AI-mediated academic writing.

3.1 Theoretical foundation

The Pan-indexical process of linguistic signs, developed by Wang (2019), provides a robust theoretical foundation for interpreting the prompt-based communication between the human and generative AI. According to this framework, a linguistic sign, such as a prompt designed by a student, does not inherently carry meaning on its own; rather, it functions as a signal that points to an object (the generative AI response). This interaction occurs through the interpretants of both the addresser (a student) and the addressee (generative AI). In the context of prompt engineering, a sign refers to the prompt that a student inputs into the generative AI system. This could be a question, instruction, or request that generative AI is expected to respond to. The prompt, as a semiotic unit, does not carry meaning by itself but serves as a cue to elicit a response from generative AI. The object, in turn, refers to the intended generative AI response, which is shaped by the interaction of the formal sense, conventional sense, and overall sense, as long as we treat generative AI as a human-like intellectual entity. In academic writing, an object might be a suggested research topic, a structured outline, or an analysis of data. An interpretant refers to the meaning-making process that occurs when the sign (the prompt) interacts with the object (the generative AI response). Interpretanti refers to the shared understanding between a student and the generative AI system, which informs the formal sense and conventional sense of the prompt. For example, interpretanti would encompass the shared understanding of what constitutes coherent research questions or what the typical structure of an academic essay should look like. Interpretantii involves the encyclopedic knowledge embedded in the generative AI system and the specific contextual information provided by the student who drafts the prompt.
Together, these elements contribute to the overall sense that “affect the final reading of the linguistic sign” (Wang 2019: 58).

In the context of AI-mediated academic writing, prompt engineering is an iterative process of meaning negotiation. When students draft prompts, they engage in a dynamic interaction with generative AI, where both parties shape the resulting output. It is crucial to emphasize the iterative nature (Cain 2024) of this process: students refine their prompts by supplementing contextual information, critically evaluating the outputs, and revising their inputs accordingly. This iterative, reflective process not only enhances students' prompt engineering skills but also fosters a deeper understanding of the meaning negotiation occurring between the human and the machine. A successful negotiation of meaning relies on prompt engineering transcending a mechanical act of coding language and becoming a more nuanced form of meaning-making, where students and generative AI co-generate responses. Understanding how meaning is constructed and interpreted in human-AI interaction will enable students to use generative AI more effectively, improving both the quality of their academic writing and their overall research process.

In summary, the pan-indexical process of linguistic signs offers a valuable semiotic framework for understanding the dynamic meaning negotiation between students and generative AI in prompt-based academic writing.

3.2 Key components of a prompt-based teaching model

Based on the elements of encyclopedic knowledge and contextual information in the Pan-indexical process, and the features of hallucination and iterative design in prompt engineering, four interrelated components are identified: encyclopedic knowledge, contextual information, evaluative critical thinking, and iterative design. Together, these elements constitute a comprehensive framework for prompt-based teaching in the academic writing process.

Encyclopedic knowledge refers to students' understanding of the world, ranging from academic knowledge and disciplinary norms to subject-specific concepts. In academic writing, encyclopedic knowledge encompasses the theories, facts, frameworks, and methodologies that students are expected to know within their field of study. When they design prompts for generative AI, their knowledge base significantly influences the specificity and relevance of the prompts they input. The role of encyclopedic knowledge extends beyond the design of prompts; it also influences the interpretation and evaluation of generative AI responses. Well-informed students will be better equipped to critically assess the quality and accuracy of AI-generated responses, recognizing both the strengths and limitations of the generated text. Generative AI may produce incorrect or erroneous responses, referred to as "hallucinations" (Weise and Metz 2023). Therefore, students need to be taught how to activate and apply their encyclopedic knowledge when interacting with generative AI, ensuring that their prompts are aligned with academic conventions.

Contextual information refers to external or supplementary information that provides specific background to the LLM, enabling it to generate responses that align with students' objectives (Giray 2023). In academic writing, this includes clarifying the purpose of the research project, adopting a formal tone and discipline-specific terminology, and ensuring academic integrity through proper referencing and citation of sources (Morris 2018). Despite being one of the most underestimated elements, contextual information significantly influences the accuracy of information generated by generative AI. Without such context, LLMs often produce generic yet coherent responses (Kumar 2023). Moreover, with limited contextual information, LLMs are prone to generating misinformation, particularly for underrepresented contexts within existing data (Marvin et al. 2024). Providing specific and detailed contextual information is therefore essential for generative AI to produce responses that meet the expectations and objectives of the academic writing process.

Evaluative critical thinking, in the context of prompt engineering, involves the ability to evaluate, verify, and question the responses of generative AI, and to refine prompts iteratively through critical evaluation (Walter 2024). Students evaluate AI-generated responses against academic writing standards, identifying hallucinations, runaway outputs, inaccuracies, biases, inappropriate content, or gaps in information (Bostrom 2002, 2012). Furthermore, evaluative critical thinking enables students to refine their prompts iteratively (Brown et al. 2020). If the first response from AI is not sufficiently detailed or aligned with the task, students should be able to revise the prompt to clarify or narrow down the scope of the request. This iterative process requires them to engage in a cycle of reflection, evaluation, and refinement – a fundamental aspect of academic writing. As students cultivate analytical and critical thinking skills, they become more proficient in leveraging AI tools to enhance their academic writing while avoiding over-reliance on them and mitigating the risk of AI-assisted plagiarism.

Iterative design is the process whereby prompt engineering starts with an objective and undergoes planning, design, evaluation, and subsequent refinement (Cain 2024). Arriving at a desired generative AI response is not straightforward; it usually requires an iterative approach (Reynolds and McDonell 2021). In the context of academic writing, it is even more challenging. An initial prompt, once drafted, is the precursor to an AI-generated response. Upon assessing the response for its relevance, precision, and suitability, the prompt is refined, thus restarting the cycle. This iterative process is more than a technique; it is a roadmap to attaining a desired outcome. Continuous refinements enable generative AI to inch closer to the optimal response. The role of iteration in prompt engineering also mirrors the iterative nature of academic writing itself (Cohen et al. 2002). Just as academic writing is rarely perfect in its first draft, effective prompt engineering requires an ongoing iterative cycle of refinement.
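The generate-evaluate-refine cycle described above can be expressed as a small control loop. The following is a minimal sketch under stated assumptions: `generate`, `acceptable`, and `refine` are caller-supplied stand-ins (in a real setting, a generative AI call, a rubric check by the student, and a prompt revision step); the toy lambdas below exist only to make the loop runnable.

```python
def iterative_prompting(prompt, generate, acceptable, refine, max_rounds=5):
    """Run the generate-evaluate-refine cycle: produce a response,
    stop if it passes the evaluation, otherwise revise the prompt."""
    for _ in range(max_rounds):
        response = generate(prompt)
        if acceptable(response):
            break
        prompt = refine(prompt, response)
    return prompt, response

# Toy stand-ins: a real `generate` would call a generative AI system.
generate = lambda p: f"response to: {p}"
acceptable = lambda r: "[context]" in r
refine = lambda p, r: p + " [context]"
print(iterative_prompting("narrow my topic", generate, acceptable, refine))
```

The `max_rounds` cap reflects a practical point made throughout this section: iteration converges toward, rather than guarantees, an optimal response, so the student decides when a response is good enough.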

The four key components – encyclopedic knowledge, contextual information, evaluative critical thinking, and iterative design – are interdependent and collectively enhance the effectiveness of prompt engineering in the academic writing process. By integrating these components, students are equipped not only with technical proficiency in prompt engineering but also with a deeper understanding of how meaning is negotiated in human-AI interactions.

3.3 A pedagogical approach to prompt-based teaching

Based on the theoretical framework and four key components illustrated in the above section, the prompt-based teaching model (PBTM) (see Figure 2) is developed, aiming to enhance students’ ability to engage in iterative cycles of critical evaluation and optimization of prompts to support their academic writing process.

Figure 2: The prompt-based teaching model for academic writing.

Figure 2 depicts the PBTM, which outlines an iterative process integrating students' encyclopedic knowledge, contextual information, and evaluative critical thinking to enhance their prompt engineering skills through meaning negotiation in AI-mediated academic writing. The process begins with students designing an initial prompt based on their objectives and prior encyclopedic knowledge, such as disciplinary norms, theoretical frameworks, or subject-specific concepts. This initial prompt serves as the foundation for interaction with generative AI, which then produces an initial response.

Following this, the model emphasizes an iterative refinement process designed to improve the quality of prompts and their corresponding AI-generated outputs. In the first iteration, students compare generative AI's initial response with their encyclopedic knowledge and integrate related, more specific contextual information, such as the purpose of the research project, a formal tone and discipline-specific terminology, or adequate referencing and citation of sources to maintain academic integrity (Morris 2018). By revising the prompt with this additional specificity, students guide generative AI to produce a refined response that better aligns with their academic objectives.

The iterative process continues with a second refinement cycle, during which students engage in evaluative critical thinking. The goal of critical evaluation is not to determine whether generative AI is right or wrong, but to use it as a tool that facilitates critical thinking and self-reflection. This stage involves critically assessing the refined generative AI response for accuracy, relevance, and adherence to ethical academic standards. Students identify potential inaccuracies, biases, or gaps in the response and further refine the prompt to address these issues. Generative AI then outputs another refined response, reflecting these improvements.

This process iterates until generative AI produces a desired response. The recursive process mirrors the iterative nature of academic writing, encouraging students to approach prompt engineering as a dynamic and reflective practice. By refining their prompts through multiple iterations, students develop a deeper understanding of the meaning negotiation during human-AI interactions.
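The staged structure of the PBTM can be sketched as a sequence of refinement moves applied to a prompt, with every prompt-response pair recorded so the negotiation of meaning can be inspected afterwards. This is an illustrative sketch under stated assumptions: `pbtm_cycle`, the lambda stand-in for a generative AI call, and the two stage functions are hypothetical constructs, not part of the model as published.

```python
def pbtm_cycle(initial_prompt, generate, refinements):
    """Apply each PBTM refinement stage in turn (e.g. adding contextual
    information, then revising after evaluative critical thinking),
    keeping the full history of prompt-response pairs."""
    prompt = initial_prompt
    history = [(prompt, generate(prompt))]
    for refine in refinements:
        prompt = refine(prompt)
        history.append((prompt, generate(prompt)))
    return history

# Hypothetical stand-ins for an AI call and the two refinement stages.
generate = lambda p: f"draft for: {p}"
stages = [lambda p: p + " [with contextual information]",
          lambda p: p + " [revised after critical evaluation]"]
for prompt, response in pbtm_cycle("key theories on my topic", generate, stages):
    print(response)
```

Keeping the history, rather than only the final output, matches the model's pedagogical aim: students can review how each refinement changed the response.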

4 Application of the prompt-based teaching model

In this section, the application of the prompt-based teaching model lies in the idea development stage of the academic writing process. The integration of generative AI in idea development has been shown to be transformative (Alshater 2022). AI algorithms significantly enhance the brainstorming process by providing information derived from historical data, current trends, and cross-disciplinary studies. Idea development includes tasks such as brainstorming, crafting an introduction and background, identifying literature gaps, and determining research problems and objectives (Khalifa and Albadawy 2024). Focusing on idea development guides students to utilize AI while maintaining ethical integrity by avoiding reliance on AI-generated texts in finished research outputs. The application of the proposed teaching model is discussed in a teaching procedure structured in four steps (see Figure 3): (1) crafting the initial prompt; (2) refining the prompt with contextual information; (3) engaging in evaluative critical thinking; (4) iterative progression toward a desired response.

Figure 3: The teaching procedure for the idea development.

4.1 Crafting an initial prompt

Students often encounter challenges in narrowing down a research topic, developing a coherent theoretical and conceptual framework, defining research problems, and formulating clear research objectives and questions (Anik et al. 2024). The use of generative AI provides them with a partner that can assist them in brainstorming, foster their creativity, and make them feel supported in the academic writing process (Kim and Cho 2023). For instance, a student exploring Margaret Atwood's novels from the perspective of environmental literature might start the process by crafting an initial prompt:

What are the key theories and frameworks used to analyze Margaret Atwood’s novels from the perspective of environmental literature?

This prompt initiates a dialogue with generative AI, aiming to elicit a broad overview of theories and frameworks within the field of environmental literature. In response, generative AI outlines the definition of ecocriticism and its application in The Handmaid's Tale, Oryx and Crake, and The Year of the Flood, analyzing depictions of climate change, genetic engineering, and ecological destruction as warnings about unsustainable human behavior. Further fundamental concepts are presented in the same structure, such as ecofeminism, posthumanism, Anthropocene studies, the climate fiction framework, deep ecology, utopian and dystopian studies, and interdisciplinary approaches like postcolonial ecocriticism, biosemiotics, and biopolitics. The student is informed of the possibility of analyzing themes like climate change, ecological collapse, the animal turn, and human-nature relationships in Atwood's novels. The student might narrow their interest to the concept of posthumanism and its application in Atwood's novels by investigating the blurred boundaries between humans, animals, and technology. This step helps the student map out the theoretical landscape, setting the foundation for further inquiries.

4.2 Refining the prompt with contextual information

After receiving the initial response from generative AI, the student may refine the prompt by incorporating contextual information as follows:

Supposing you are an expert in the field of environmental literature, could you please provide the key theories and frameworks on the topic of posthumanism that can be applied to investigate the blurred boundaries between humans, animals, and technology in Margaret Atwood's novels? Please categorize related studies into three groups: the most recent, the most influential, and representative studies spanning the field's history.

This refined prompt provides more specific instructions, resulting in an AI-generated response that appears tailored to academic expectations. It lists the three most recent studies in posthumanism, particularly in relation to literature, ecology, and Atwood's novels, including Posthuman Knowledge by Rosi Braidotti (2019), What Is Posthumanism? by Cary Wolfe (2010), and Death of the PostHuman by Claire Colebrook (2014) (the third one is fabricated by generative AI and therefore not included in the reference list). Additionally, it categorizes three influential studies and three representative studies spanning the field's history in the same format. However, by academic standards, works from 2019, 2014, and 2010 may not qualify as the most recent studies, as newer research is often published in academic journals rather than books. The student, leveraging their encyclopedic knowledge, hypothesizes that this discrepancy arises from the temporal lag between groundbreaking research and its publication in monograph form. To address this, the student may refine the prompt as follows:

Supposing you are an expert in the field of environmental literature, could you please provide 20 research papers on the topic of posthumanism that can be applied to investigate the blurred boundaries between humans, animals, and technology in Margaret Atwood's novels? Please categorize these research papers into three groups: the most recent, the most influential, and representative studies spanning the field's history.

In response, generative AI lists the five most recent studies capturing the evolution of posthumanist thought and its relevance to Atwood's exploration of human, animal, and technological intersections. Additionally, it includes five influential studies and six representative studies, presented in the same format. However, while this refined response demonstrates improvement by listing two research papers published in 2024, the remaining three papers, dated 2018, 2014, and 2012, still fall short of rigorous academic standards for recency. Although this refinement enhances the prompt's utility, students must critically evaluate the generative AI output, scrutinizing its reliability and relevance to ensure academic rigor.

4.3 Engaging in evaluative critical thinking

The most critical step involves engaging in evaluative critical thinking to assess the AI-generated response for accuracy, relevance, and coherence. Students often face challenges when using generative AI, particularly when it produces inaccurate or fabricated references and occasionally provides misinformation regarding theoretical frameworks and literature reviews. Addressing these issues requires students to verify AI-generated responses using external tools like databases available through university libraries. Ruksakulpiwat et al. (2023) stress the importance of academic integrity and the necessity of cross-checking AI-generated responses to ensure alignment with research objectives and academic standards. After identifying misinformation through cross-checking, the student would further refine the prompt to address these gaps, exemplified as follows:

Please revise the above list so that it contains only studies drawn from reliable research sources, with APA in-text citations. Please list the 20 literature entries in APA citation format at the end of the section.

However, the refined generative AI response continues to produce fabricated or unverifiable references. When the student requests proper referencing and citation of sources, generative AI typically offers a diplomatic response, such as:

Thank you for pointing that out. I understand the importance of using accessible, reliable sources. I’ll revise the references with more easily accessible academic sources. Here’s the updated version with references that can be found via Google Scholar or other academic databases.

Through this critical evaluation, the student identifies the limitations of generative AI in academic contexts. When it fails to provide accurate or verifiable references, students must turn to traditional academic resources, such as library journals and databases, to uphold the academic integrity of their research. Evaluative critical thinking not only highlights the capabilities and limitations of generative AI as a tool for academic writing but also underscores the importance of traditional research methods as the foundation upon which rigorous scholarship is built.
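As a first pass before the manual database checks described above, a student comfortable with scripting could pre-screen AI-supplied citations for a well-formed DOI, since fabricated entries frequently lack one or carry a malformed string. This sketch is illustrative only, with hypothetical sample citations; a syntactically valid DOI still has to be resolved against a registry such as Crossref or a library database to confirm the work actually exists.

```python
import re

# A common DOI shape: "10.<4-9 digit registrant>/<suffix>".
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")

def screen_citations(citations):
    """Split citations into those carrying a well-formed DOI
    and those that must go straight to manual verification."""
    has_doi, needs_check = [], []
    for citation in citations:
        (has_doi if DOI_PATTERN.search(citation) else needs_check).append(citation)
    return has_doi, needs_check

sample = [
    "Wang, J. (2019). On the indexical nature of language. https://doi.org/10.1515/lass-2019-050403",
    "Colebrook, C. (2014). Death of the PostHuman.",  # no DOI: flag for manual lookup
]
ok, check = screen_citations(sample)
print(len(ok), len(check))
```

Automation of this kind supports, but never replaces, the evaluative critical thinking the PBTM asks of students.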

4.4 Iterative progression toward a desired response

The final step involves iteratively refining the prompt and engaging with generative AI to achieve a response that aligns with the student’s academic objectives. This iterative process reflects the natural progression of academic writing, where ideas are refined and become increasingly focused over time. For instance, after consulting reliable sources, the student may choose to narrow their interest to the environmental theme of the ‘animal turn’ in the discipline of ecocriticism and anchor the analysis within the theoretical framework of the pan-indexicality model. This process might lead to further refinement of the research scope, resulting in a more precise prompt that addresses their specific interest:

Supposing you are an expert in the field of environmental literature and semiotic studies, could you please organize research questions based on the environmental theme of the ‘animal turn’ in the discipline of ecocriticism, and situate the analysis firmly within the theoretical framework of the pan-indexicality model?

In response, it outputs a holistic approach to addressing the ‘animal turn’ while situating the analysis firmly within the pan-indexicality model, bridging semiotics, ecocriticism, and literary studies. A more precise research question is generated as follows:

To what extent does the ‘animal turn’ manifest in Margaret Atwood’s novels? And how can animal representations in her novels be analyzed through the pan-indexicality model?

Additionally, it outputs two or three questions each for the domains of thematic and contextual exploration, pan-indexicality model application, interdisciplinary analysis, comparative and critical dimensions, and pedagogical and practical implications, which are less relevant to the student’s primary objective. Consequently, the student refines the prompt further:

Supposing you are an expert in the field of environmental literature and semiotic studies, could you please provide an outline of a research design for the research question: To what extent does the ‘animal turn’ manifest in Margaret Atwood’s novels? And how can animal representations in her novels be analyzed through the pan-indexicality model?

In response, generative AI provides a research design outline with detailed subtitles for the introduction, literature review, methodology, data analysis, discussion, and conclusion sections. For example, within the background section, it outputs:

Background and Rationale:

–    Overview of the ‘animal turn’ in environmental literature;

–   Significance of studying Margaret Atwood’s novels from a biosemiotic and pan-indexical perspective;

–   Relevance of the pan-indexicality model to literary analysis.

Alongside traditional literature review methods, the student sustains iterative engagement with generative AI by providing increasingly specific contextual information and applying evaluative critical thinking. This iterative process sharpens the student’s academic focus, ultimately preparing them for the next stage of academic writing.

The PBTM facilitates the early stage of academic writing by guiding students through a structured, iterative process of crafting, evaluating, and refining prompts. Through the four-step procedure, students learn to use generative AI as a tool for exploring ideas and generating research questions while critically assessing its outputs.
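For readers who wish to see the four-step procedure operationalized, the loop can be sketched against a generic chat interface. Everything in this sketch is a stand-in: `query_model` is a placeholder, not a real API (in practice it would wrap a provider's SDK), and `evaluate` reduces the student's evaluative critical thinking to a naive keyword check purely for illustration.

```python
def query_model(prompt: str) -> str:
    """Placeholder for a call to a generative AI service (hypothetical stub)."""
    return f"[model response to: {prompt}]"

def evaluate(response: str, requirements: list) -> list:
    """Return the requirements the response does not yet satisfy.

    A stand-in for evaluative critical thinking: a substring check here;
    in practice, cross-checking against library databases."""
    return [r for r in requirements if r.lower() not in response.lower()]

def pbtm_loop(initial_prompt: str, requirements: list, max_rounds: int = 4) -> str:
    """Iteratively refine the prompt until the requirements are met or rounds run out."""
    prompt = initial_prompt
    response = ""
    for _ in range(max_rounds):
        response = query_model(prompt)
        gaps = evaluate(response, requirements)
        if not gaps:
            return response
        # Refinement step: fold unmet requirements back into the prompt
        # as additional contextual information.
        prompt = f"{prompt}\nPlease also address: {'; '.join(gaps)}."
    return response

print(pbtm_loop("List recent studies on posthumanism in Atwood's novels.",
                ["posthumanism", "pan-indexicality"]))
```

The loop mirrors the pedagogy, not a production system: the crucial decisions, what counts as a gap and when to stop iterating, remain with the student.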

5 Conclusions

This paper has explored the integration of the pan-indexical process of linguistic signs into a prompt-based teaching model, showcasing its potential to foster meaning negotiation in prompt engineering during the early stage of AI-mediated academic writing. The proposed model is structured around four key components: encyclopedic knowledge, contextual information, evaluative critical thinking, and iterative design. Building on these components, the teaching procedure is organized in four major steps: crafting an initial prompt, refining the prompt with contextual information, engaging in evaluative critical thinking, and iterative progression toward a desired response.

A significant contribution of this paper is its demonstration of how the pan-indexical process of linguistic signs can provide a theoretical framework for the development of a prompt-based teaching model. By engaging in an iterative process of prompt refinement, contextual integration, and critical evaluation, students are equipped to formulate research questions and explore related studies that align with their academic objectives. This approach not only empowers students to navigate the complexities of prompt engineering but also makes them aware of the capabilities and limitations of generative AI as a tool for academic writing. It is important to note that the application of the PBTM is currently limited to the early stage of academic writing due to ethical considerations. Concerns over plagiarism and authorship arise when students apply generative AI to draft their assignments (Ingley and Pack 2023). These concerns have been voiced by many researchers since the release of generative AI tools such as ChatGPT in November 2022 (Sullivan et al. 2023). However, future research could investigate possibilities for implementing the model in academic modules across various disciplines (Walter 2024). Such expansion would provide a more comprehensive understanding of how the pan-indexical process can enhance human-AI interaction across the broader spectrum of academic research.

By integrating semiotic theory into AI-mediated pedagogy, this study contributes to the expanding body of research advocating for a symbiotic interaction between the human and generative AI. Aligned with post-humanist theories, such as actor-network theory, which emphasize the agency of both the human and technology as active participants influencing each other in educational contexts (Kim et al. 2024), the findings of this paper highlight the significance of practices that ensure generative AI functions as a supportive tool to enhance, rather than replace, traditional research methodologies.


Corresponding author: Chunyun Duan, Soochow University, Suzhou, P. R. China, E-mail:

Award Identifier / Grant number: 23SWB-22

About the authors

Jing Zhu

Jing Zhu received her PhD from the School of Foreign Languages, Soochow University, in 2019, and is now an associate professor at the School of Foreign Languages, Soochow University. Her research interests include cognitive linguistics, semiotics and foreign language teaching.

Chunyun Duan

Chunyun Duan is an associate professor at the Dongwu College, Soochow University. Her major research interests are cognitive linguistics, semiotics and foreign language teaching. Her academic papers have been published in journals such as Foreign Language Education and the Journal of Tianjin Foreign Studies University.

Acknowledgments

The authors are very grateful to the anonymous reviewers and the editor for their comments and constructive feedback. Any remaining errors are our own.

Research funding: This research was supported by grants from the Social Science Foundation of Jiangsu Province (Award references: 23SWB-22 and 24SWC-03).

References

Aitchison, Claire & Cally Guerin. 2014. Writing groups for doctoral education and beyond: Innovations in practice and theory. London: Routledge. https://doi.org/10.4324/9780203498811.

Aljanabi, Mohammad, Mohanad Ghazi, Ahmed Hussein Ali & Saad Abas Abed. 2023. ChatGPT: Open possibilities. Iraqi Journal for Computer Science and Mathematics 4. https://doi.org/10.52866/ijcsm.2023.01.01.0018.

Allen, Laura K. & Danielle S. McNamara. 2017. Five building blocks for comprehension strategy instruction. In José A. León & Inmaculada Escudero (eds.), Reading comprehension in educational settings, 125–144. Amsterdam: John Benjamins Publishing Company. https://doi.org/10.1075/swll.16.05all.

Alshater, Muneer. 2022. Exploring the role of artificial intelligence in enhancing academic performance: A case study of ChatGPT. https://doi.org/10.2139/ssrn.4312358.

Anik, Mehedi Hasan, Shahriar Nafees Chowdhury Raaz & Nushat Khan. 2024. Embracing AI assistants: Unraveling young researchers’ journey with ChatGPT in science education thesis writing. International Journal of Artificial Intelligence in Education 35(1). 225–244. https://doi.org/10.1007/s40593-024-00438-6.

Aydın, Ömer & Enis Karaarslan. 2022. OpenAI ChatGPT generated literature review: Digital twin in healthcare. Emerging Computer Technologies 2. 22–31. https://doi.org/10.2139/ssrn.4308687.

Bostrom, Nick. 2002. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9.

Bostrom, Nick. 2012. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines 22(2). 71–85. https://doi.org/10.1007/s11023-012-9281-3.

Braidotti, Rosi. 2019. Posthuman knowledge. Cambridge: Polity Press.

Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever & Dario Amodei. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33. 1877–1901.

Cain, William. 2024. Prompt change: Exploring prompt engineering in large language model AI and its potential to transform education. TechTrends 68. 47–57. https://doi.org/10.1007/s11528-023-00896-0.

Cardon, Peter, Carolin Fleischmann, Jolanta Aritz, Minna Logemann & Jeanette Heidewald. 2023. The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly 86(3). 257–295. https://doi.org/10.1177/23294906231176517.

Chan, Cecilia Ka Yuk. 2023. A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education 20. 38. https://doi.org/10.1186/s41239-023-00408-3.

Cohen, Louis, Lawrence Manion & Keith Morrison. 2002. Research methods in education. London & New York: Routledge. https://doi.org/10.4324/9780203224342.

Creswell, John W. 2015. Educational research: Planning, conducting, and evaluating quantitative and qualitative research. London: Pearson.

Flower, Linda & John R. Hayes. 1981. A cognitive process theory of writing. College Composition & Communication 32(4). 365–387. https://doi.org/10.2307/356600.

Gao, Catherine A., Frederick M. Howard, Nikolay S. Markov, Emma C. Dyer, Siddhi Ramesh, Luo Yuan & Alexander T. Pearson. 2023. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. npj Digital Medicine 6. 75. https://doi.org/10.1038/s41746-023-00819-6.

Giray, Louie. 2023. Prompt engineering with ChatGPT: A guide for academic writers. Annals of Biomedical Engineering 51. 2629–2633. https://doi.org/10.1007/s10439-023-03272-4.

Hoang, Giang & Viet-Phuong La. 2023. Academic writing and AI: Day-5 experiment with cultural additivity. https://doi.org/10.31219/osf.io/u3cjx.

Ingley, Spencer J. & Austin Pack. 2023. Leveraging AI tools to develop the writer rather than the writing. Trends in Ecology & Evolution 38(9). 785–787. https://doi.org/10.1016/j.tree.2023.05.007.

Khalifa, Mohamed & Mona Albadawy. 2024. Using artificial intelligence in academic writing and research: An essential productivity tool. Computer Methods and Programs in Biomedicine Update 5. 100145. https://doi.org/10.1016/j.cmpbup.2024.100145.

Kim, Jinhee & Young Hoan Cho. 2023. My teammate is AI: Understanding students’ perceptions of student-AI collaboration in drawing tasks. Asia Pacific Journal of Education 43(4). 1–15. https://doi.org/10.1080/02188791.2023.2286206.

Kim, Jinhee, Seongryeong Yu, Rita Detrick & Na Li. 2024. Exploring students’ perspectives on Generative AI-assisted academic writing. Education and Information Technologies 30. 1265–1300. https://doi.org/10.1007/s10639-024-12878-7.

Kumar, Arun H. S. 2023. Analysis of ChatGPT tool to assess the potential of its utility for academic writing in biomedical domain. Biology Engineering Medicine and Science Reports 9. 24–30. https://doi.org/10.5530/bems.9.1.5.

Lee, Alison & Susan Danby. 2012. Reshaping doctoral education: International approaches and pedagogies. London: Routledge. https://doi.org/10.4324/9780203142783.

Lee, Unggi, Haewon Jung, Younghoon Jeon, Younghoon Sohn, Wonhee Hwang, Jewoong Moon & Hyeoncheol Kim. 2023. Few-shot is enough: Exploring ChatGPT prompt engineering method for automatic question generation in English education. Education and Information Technologies 29. 11483–11515. https://doi.org/10.1007/s10639-023-12249-8.

Liu, Jiacheng, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi & Hannaneh Hajishirzi. 2021. Generated knowledge prompting for commonsense reasoning. https://doi.org/10.48550/arXiv.2110.08387.

Marvin, Ggaliwango, Nakayiza Hellen, Daudi Jjingo & Joyce Nakatumba-Nabende. 2024. Prompt engineering in large language models. In I. Jeena Jacob, Selvanayaki Kolandapalayam Shanmugam, Selwyn Piramuthu & Przemyslaw Falkowski-Gilski (eds.), Data intelligence and cognitive informatics, algorithms for intelligent systems, 387–402. Springer. https://doi.org/10.1007/978-981-99-7962-2_30.

Morris, Erica J. 2018. Academic integrity matters: Five considerations for addressing contract cheating. International Journal for Educational Integrity 14(1). 15. https://doi.org/10.1007/s40979-018-0038-5.

Nguyen, Minh-Hoang. 2023. Academic writing and AI: Day-2 experiment with Bayesian Mindsponge framework. https://doi.org/10.31219/osf.io/kr29c.

Özçelik, Nermin Punar & Gonca Yangın Ekşi. 2024. Cultivating writing skills: The role of ChatGPT as a learning assistant—a case study. Smart Learning Environments 11. 10. https://doi.org/10.1186/s40561-024-00296-8.

Parker, Jessica L., Veronica M. Richard, Alexandra Acabá, Sierra Escoffier, Stephen Flaherty, Jablonka Shannon & Kimberly P. Becker. 2024. Negotiating meaning with machines: AI’s role in doctoral writing pedagogy. https://doi.org/10.1007/s40593-024-00425-X.

Peirce, Charles Sanders. 1955. Logic as semiotic: The theory of signs. In J. Buchler (ed.), Philosophical writings of Peirce, 98–119. New York: Dover Publications, Inc.

Reynolds, Laria & Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended abstracts of the 2021 CHI conference on human factors in computing systems, 1–7. https://doi.org/10.1145/3411763.3451760.

Ruksakulpiwat, Suebsarn, Ayanesh Kumar & Anuoluwapo Ajibade. 2023. Using ChatGPT in medical research: Current status and future direction. Journal of Multidisciplinary Healthcare 16. 1513–1520. https://doi.org/10.2147/jmdh.s413470.

Sullivan, Miriam, Andrew Kelly & Paul McLaughlan. 2023. ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching 6(1). 1–10. https://doi.org/10.37074/jalt.2023.6.1.17.

Susnjak, Teo. 2023. ChatGPT: The end of online exam integrity? https://doi.org/10.48550/arXiv.2212.09292.

Tarchi, Christian, Alessandra Zappoli, Lidia Casado Ledesma & Eva Wennas Brante. 2024. The use of ChatGPT in source-based writing tasks. International Journal of Artificial Intelligence in Education 34(2). https://doi.org/10.1007/s40593-024-00413-1.

Thanasi-Boçe, Marsela & Julian Hoxha. 2024. From ideas to ventures: Building entrepreneurship knowledge with LLM, prompt engineering, and conversational agents. Education and Information Technologies 29. 24309–24365. https://doi.org/10.1007/s10639-024-12775-z.

Walter, Yoshija. 2024. Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education 21. 15. https://doi.org/10.1186/s41239-024-00448-3.

Wang, Jun. 2019. On the indexical nature of language. Language and Semiotic Studies 5(4). 47–70. https://doi.org/10.1515/lass-2019-050403.

Wang, Xuezhi, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery & Denny Zhou. 2023. Self-consistency improves chain-of-thought reasoning in language models. https://doi.org/10.48550/arXiv.2203.11171.

Weise, Karen & Cade Metz. 2023. When AI chatbots hallucinate. The New York Times 9. 610–623.

Wolfe, Cary. 2010. What is posthumanism? Minneapolis: University of Minnesota Press.

Xu, Benfeng, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang & Zhendong Mao. 2023. ExpertPrompting: Instructing large language models to be distinguished experts. https://arxiv.org/abs/2305.14688.

Yao, Yao, Zuchao Li & Hai Zhao. 2023. Beyond chain-of-thought, effective graph-of-thought reasoning in language models. https://doi.org/10.48550/arXiv.2305.16582.

Zhou, Yongchao, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan & Jimmy Ba. 2023. Large language models are human-level prompt engineers. https://doi.org/10.48550/arXiv.2211.01910.

Zhu, Jing & Chunyun Duan. 2022. Sign and indexicality: A case study of enhancing alignment of situation models in SCWT. Language and Semiotic Studies 8(4). 197–215. https://doi.org/10.1515/lass-2022-0004.

Zhu, Jing, Jiying Kang & Chunyun Duan. 2023. Animal representations in Margaret Atwood’s novels: A study based on Pan-indexicality model. Language and Semiotic Studies 9(4). 484–509. https://doi.org/10.1515/lass-2023-0026.

Zou, Andy, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter & Matt Fredrikson. 2023. Universal and transferable adversarial attacks on aligned language models. https://doi.org/10.48550/arXiv.2307.15043.

Received: 2025-01-25
Accepted: 2025-02-13
Published Online: 2025-04-04

© 2025 the author(s), published by De Gruyter on behalf of Soochow University

This work is licensed under the Creative Commons Attribution 4.0 International License.
