Article Open Access

Harnessing AI in secondary education in Chinese Hong Kong: the role of prompt engineering

  • Kevin Thomas Rey

    Dr. Kevin Thomas Rey is a Lecturer in Chinese Hong Kong, specializing in the intersection of artificial intelligence and education. His research focuses on leveraging AI to enhance learning and teaching practices, with particular interest in adaptive learning technologies, data-informed pedagogy, and student engagement. He has presented widely on the role of AI in reshaping educational experiences across diverse learning environments. Passionate about bridging theory and practice, Dr. Kevin Thomas Rey integrates his research into classroom innovation and professional development for educators. He is actively involved in interdisciplinary collaborations and aims to promote ethical and effective integration of emerging technologies in education.

Published/Copyright: August 21, 2025

Abstract

As AI technologies increasingly permeate educational environments globally, understanding how to craft and utilize AI prompts effectively has become crucial for educators. This paper examines the intersection of AI implementation and pedagogical practices in secondary schools in Chinese Hong Kong, particularly exploring how prompt engineering mediates technological capabilities and educational objectives. This study identifies effective practices and challenges in implementing AI-enhanced instruction through a systematic review of existing literature and mixed-methods research. The findings reveal that educators’ proficiency in prompt engineering significantly influences their ability to leverage AI tools for personalized learning experiences and the development of higher-order thinking. Analysis of classroom implementations further indicates that well-constructed prompts enhance student engagement and facilitate more precise alignment between AI capabilities and curriculum requirements.

1 Introduction

Prompt engineering, the systematic design of inputs to optimize the outputs of generative AI, has emerged as a critical skill for educators and students. Its effectiveness hinges on the quality of prompt construction, enabling applications ranging from content creation to complex problem-solving. In Chinese Hong Kong’s exam-oriented secondary education system, this discipline offers more than technological integration; it enables educators to develop customized materials that balance standardized requirements with innovative pedagogical methods. This study employs a mixed-methods approach, combining a systematic literature review, a quasi-experimental intervention, and qualitative analysis to investigate how prompt engineering mediates the pedagogical utility of AI in secondary schools in Chinese Hong Kong. By examining teacher training, student outcomes, and localized prompt design, we demonstrate how structured AI integration can enhance curriculum alignment while fostering higher-order thinking skills.

For students, mastering prompt engineering cultivates essential 21st-century competencies, including problem decomposition, critical analysis, and digital literacy (Su et al. 2023). These skills are particularly vital during adolescence, a period of rapid cognitive development, as students navigate academic challenges and prepare for future learning. Well-designed prompts encourage autonomy, transforming AI from a passive tool into an active learning partner.

This study explores how educators in Asian secondary systems – using Chinese Hong Kong as a case study – can implement prompt engineering to enhance curriculum design, teaching innovation, and student engagement. We examine the intersection of AI and pedagogy, demonstrating how structured prompts support differentiated instruction while meeting curricular demands.

2 Literature review

The mechanisms and pedagogical value of prompt engineering shift educators’ roles from passive consumers to active co-creators of AI-generated knowledge. Unlike traditional search queries, which tolerate ambiguity, effective prompts require precision, contextual framing, and explicit constraints (Deng and Lin 2022; Lee and Palmer 2025). This precision stems from the architecture of large language models (LLMs), which predict probable word sequences rather than retrieving static data. Therefore, educators must master instructional design for AI (Weng et al. 2024), strategically shaping outputs through defined prompts that serve distinct pedagogical functions.

Defined prompts can be categorized into three key types based on their instructional purposes. Scaffolding prompts break complex tasks into structured steps, such as directing AI to brainstorm ideas: “What climate change issues affect agriculture? Provide examples.” These prompts support gradual skill acquisition, aligning with Vygotsky’s concept of the Zone of Proximal Development. Precision tasks elicit focused, subject-specific outputs by imposing constraints, for example, “Compare the effects of temperature extremes, emphasizing two diverse geographical locations.” Such prompts develop critical analysis and concision, mirroring the demands of exam-style questions. Finally, evaluative refinement prompts foster metacognition by guiding iterative improvements, as in “Revise the following essay introduction to strengthen the thesis statement and include two examples.” This approach transforms AI into a collaborative tool for self-editing and deeper learning processes (Lo 2023).
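The three prompt types above can be sketched as reusable templates. The following Python sketch is our own illustration of this categorization, not material from the study; the function names and slot wording are hypothetical:

```python
# Illustrative templates for the three defined-prompt types discussed above.
# Function names and wording are hypothetical, not the study's materials.

def scaffolding_prompt(topic: str, domain: str) -> str:
    """Break a broad topic into a structured brainstorming step."""
    return f"What {topic} issues affect {domain}? Provide examples."

def precision_prompt(task: str, constraints: list[str]) -> str:
    """Impose explicit constraints to elicit a focused, exam-style output."""
    return f"{task}, " + "; ".join(constraints) + "."

def evaluative_prompt(text: str, goals: list[str]) -> str:
    """Guide iterative self-editing of an existing draft."""
    return "Revise the following text to " + " and ".join(goals) + ":\n" + text

print(scaffolding_prompt("climate change", "agriculture"))
print(precision_prompt("Compare the effects of temperature extremes",
                       ["emphasizing two diverse geographical locations"]))
print(evaluative_prompt("Pollution is bad.",
                        ["strengthen the thesis statement",
                         "include two examples"]))
```

Parameterizing prompts in this way makes the pedagogical intent of each type explicit: the template fixes the instructional move, while the slots carry the subject matter.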

As shown in Table 1, the difference between undefined and defined (engineered) prompts is not merely quantitative (more words or examples) but also qualitative (structurally organized, pedagogically sequenced content). These examples demonstrate that well-constructed prompts effectively mitigate the limitations of AI, ensuring that the outputs align with the instructional objectives.

Table 1:

Comparison of undefined and defined prompts and their effect on AI output quality.

Prompt type Example AI output quality Relationship
Undefined prompt Explain something about climate change Generic, potentially unstructured Undefined prompts often offer a broad starting point that lacks focus, serving as a foundation for refinement into more targeted prompts
Defined prompt Explain three key impacts of climate change on agriculture, including examples and data Focused, structured, and tailored to specific needs Defined prompts build on the undefined prompt by adding specific instructions, leading to higher-quality, relevant output

These distinctions between undefined and defined prompts demonstrate their pedagogical potential in enhancing student learning. For educators, such intentional design transforms AI from a passive tool into an active collaborator, enabling the exploration of targeted applications.

As shown in Table 2, the pedagogical applications of scaffolding, precision, and evaluative refinement prompts demonstrate how each category aligns with specific instructional goals through targeted examples of effective feedback.

Table 2:

Categorization of defined prompts in educational contexts.

Prompt type Example AI output quality Pedagogical goal
Scaffolding prompt List three themes in Macbeth with textual evidence for each Structured, sequential Supports analytical skill development
Precision task Calculate the pH of 0.1 M HCl, showing all steps Accurate, constrained Reinforces subject-specific mastery
Evaluative refinement Improve this argument by adding counterarguments and citations Polished, evidence-based Enhances critical revision skills

Well-engineered prompts mitigate the limitations of AI, ensuring that the outputs meet educational objectives. By transitioning from vague queries to pedagogically sequenced prompts, educators can transform AI into a collaborative tool, unlocking targeted applications for classroom use.

Mastery of prompt engineering transforms AI from a generic content generator into a dynamic pedagogical partner, enabling educators to personalize instruction, scaffold learning, and refine resources with remarkable precision (Tan et al. 2025). By crafting culturally responsive prompts, such as requesting reading passages featuring Chinese Hong Kong’s urban landscapes or localized linguistic examples, teachers enhance student engagement while aligning materials with curricular goals. This capability is particularly transformative in multilingual contexts, such as Chinese Hong Kong, where strategic prompts can generate contrastive language analyses (e.g., English passive voice versus Chinese constructions with annotated examples) or scaffolded bilingual resources (e.g., science glossaries with Cantonese phonetic annotations). Such applications not only bridge linguistic gaps but also embody Vygotskian principles by meeting learners where they are, in their Zone of Proximal Development.

The iterative nature of prompt engineering further enables educators to refine outputs in real-time, a process akin to formative assessment. For instance, a teacher might first generate a draft of a bilingual reading comprehension task and then use follow-up prompts, such as adding three inference questions that require code-switching between Cantonese and English, to progressively align the material with higher-order cognitive skills. This responsiveness mirrors the dialogic nature of effective teaching, where tools adapt to the emergent needs of students rather than prescribing static content.
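This generate-then-steer workflow can be sketched as a simple prompt chain. In the sketch below, the `ask` parameter stands in for any chat-model call; we deliberately leave the model abstract (here replaced by a stub) rather than assume a particular vendor API:

```python
from typing import Callable

def refine_iteratively(ask: Callable[[str], str],
                       initial_prompt: str,
                       follow_ups: list[str]) -> str:
    """Generate a first draft, then apply follow-up prompts that
    each revise the previous output, mirroring formative feedback."""
    draft = ask(initial_prompt)
    for follow_up in follow_ups:
        draft = ask(f"{follow_up}\n\nCurrent draft:\n{draft}")
    return draft

# Demo with a stub "model" that simply echoes the first instruction line.
stub = lambda prompt: f"[output for: {prompt.splitlines()[0]}]"
result = refine_iteratively(
    stub,
    "Draft a bilingual reading comprehension task on local transport.",
    ["Add three inference questions requiring code-switching between Cantonese and English."],
)
print(result)
```

The design choice worth noting is that each follow-up prompt receives the current draft as context, so refinement is cumulative rather than a series of unrelated requests.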

The technology’s adaptability extends to differentiated instruction and inclusive practices. Educators can modulate linguistic complexity by simplifying texts for struggling learners, enriching discourse for advanced students, or specifying accommodations for special educational needs (SEN), such as embedding comprehension questions within simplified syntax (Godwin-Jones 2022; Herman 2025). AI’s role in assessment preparation further underscores its utility; prompts such as generating HKDSE English Writing questions with Band 4/5 model answers or creating an IELTS Speaking Task 2 prompt with examiner annotations tailor outputs to institutional benchmarks, thereby reinforcing academic rigor.

Critically, these applications are not merely technical but also pedagogical. The precision required to engineer effective prompts compels educators to articulate learning objectives clearly, a practice that synergizes with backward design principles. For example, specifying a rubric assessing thesis clarity and evidence integration when generating an essay prompt reinforces the alignment between objectives, activities, and evaluations. This intentionality positions AI as a catalyst for achieving curriculum coherence.

Crucially, prompt engineering fosters translanguaging pedagogies, mirroring Chinese Hong Kong’s trilingual reality. Tasks designed to strategically code-switch (e.g., “a role-play activity negotiating a project budget in Cantonese and English”) validate students’ multilingual repertoires while aligning with the English Language Education Key Learning Area’s emphasis on real-world communication (Curriculum Development Council 2023; Lo 2023). Such methodologies also address equity gaps by systematizing the creation of multilingual resources. Prompt engineering reduces reliance on teachers’ linguistic fluency alone, thereby democratizing access to high-quality materials across diverse schools.

Together, these capabilities underscore prompt engineering as a meta-skill that equips educators to harness AI’s potential while centering pedagogical intentionality. The following Research Design section operationalizes these principles, detailing methods to measure how prompt-engineered AI tools impact teaching practices and student outcomes in secondary classrooms in Chinese Hong Kong.

3 Research design

This study aims to investigate how structured prompt engineering mediates the integration of AI tools in secondary education in Chinese Hong Kong, focusing on its impact on educators’ instructional practices and students’ learning outcomes. Specifically, it seeks to explore how well-designed prompts can enhance personalized learning, higher-order thinking skills, and curriculum alignment while addressing the challenges and opportunities of AI implementation in an exam-oriented educational system.

3.1 Research questions

RQ1:

How does structured prompt engineering training impact educators’ ability to integrate AI tools for differentiated writing instruction?

RQ2:

To what extent does AI-assisted writing with prompt engineering improve narrative coherence, lexical diversity, and confidence in secondary school students across proficiency levels?

3.2 Procedures

This study implemented a three-phase quasi-experimental intervention to examine the integration of AI-assisted writing tools and the development of prompt engineering skills among secondary school learners in Chinese Hong Kong. The research cohort comprised 60 Form 5 students (stratified into 30 participants from elite classes and 30 from non-elite streams) and four English language teachers at an English-medium instruction (EMI) Catholic boys’ secondary school.

The intervention protocol consisted of two authentic writing tasks designed to measure genre-specific competency development: (1) a formal letter to the editor addressing cyberbullying and (2) a debate speech on a contemporary social topic, the gap year. These tasks were selected for their alignment with both the Hong Kong Diploma of Secondary Education (HKDSE) English Language curriculum and the demands of real-world communications.

All AI prompts were formulated using explicit criteria from the HKDSE English Language benchmarks and the Hong Kong English Language Curriculum Guide, including:

  1. Rubric-specific requirements for content, organization, and language use

  2. Genre conventions for formal letters and persuasive speeches

  3. Assessment focus areas (e.g., argument development, lexical range, grammatical accuracy)
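A benchmark-aligned prompt combining these three kinds of criteria might be assembled as follows. This is a hypothetical sketch of the assembly pattern; the criterion strings below are paraphrased from the list above, not the study’s actual prompt bank:

```python
def build_assessment_prompt(task: str, rubric: list[str],
                            genre: str, focus_areas: list[str]) -> str:
    """Compose an AI prompt that embeds curriculum benchmarks explicitly:
    rubric requirements, genre conventions, and assessment focus areas."""
    lines = [
        f"Task: {task}",
        f"Genre conventions: follow the conventions of a {genre}.",
        "Rubric requirements:",
        *[f"- {item}" for item in rubric],
        "Assessment focus: " + ", ".join(focus_areas) + ".",
    ]
    return "\n".join(lines)

prompt = build_assessment_prompt(
    task="Write a letter to the editor about cyberbullying.",
    rubric=["content relevance", "clear organization", "accurate language use"],
    genre="formal letter",
    focus_areas=["argument development", "lexical range", "grammatical accuracy"],
)
print(prompt)
```

Making each benchmark an explicit line of the prompt, rather than an implicit expectation, is what allows the AI output to be checked against the same criteria the HKDSE rubric uses.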

Prior to implementation, all participating teachers completed a standardized 4-hour professional development workshop on AI-prompt engineering strategies. The training curriculum covered the following topics:

  1. Principles of scaffolded prompt design

  2. Techniques for output refinement

  3. Ethical use guidelines for classroom AI implementation

Phase One: Initial crafting established baseline writing capabilities by requiring students to research and compose first drafts without the assistance of AI. Upon submission, each draft underwent dual evaluation: teachers provided feedback on the argument structure and content validity, while AI tools generated automated analyses of linguistic accuracy and coherence. This phase ensured that subsequent AI integration could be measured against the students’ original writing competencies.

Phase Two: Drafting and AI-augmented revision spanned 1.5 weeks of iterative refinement. Students were required to document their engagement with AI tools through a structured process log, recording the specific prompts they used, the AI-generated responses they incorporated, and their rationale for accepting or rejecting suggestions. Teachers conducted lessons demonstrating effective prompt engineering techniques, such as specifying desired output formats or requesting genre-specific rhetorical devices, before asking students to apply these techniques themselves. This phase emphasized the critical evaluation of AI feedback, with students learning to refine their prompts iteratively to yield more targeted writing suggestions.
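The structured process log described above, recording prompt, response, and rationale, could be represented as a simple record type. This is one hypothetical way to structure such a log, not the study’s actual instrument:

```python
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    prompt: str        # the exact prompt the student issued
    ai_response: str   # the output the AI returned
    accepted: bool     # whether the suggestion was incorporated
    rationale: str     # the student's reason for accepting or rejecting

@dataclass
class ProcessLog:
    student_id: str
    entries: list[LogEntry] = field(default_factory=list)

    def acceptance_rate(self) -> float:
        """Share of AI suggestions the student chose to incorporate."""
        if not self.entries:
            return 0.0
        return sum(e.accepted for e in self.entries) / len(self.entries)

log = ProcessLog("S01")
log.entries.append(LogEntry("Suggest a stronger opening sentence.",
                            "Consider a more formal framing of the issue.",
                            True, "Matches the formal register the rubric asks for."))
log.entries.append(LogEntry("Rewrite my conclusion entirely.",
                            "(full rewrite)",
                            False, "Kept my own voice; only borrowed one phrase."))
print(f"Acceptance rate: {log.acceptance_rate():.0%}")
```

Capturing the rationale field alongside each accept/reject decision is what turns the log from a usage record into evidence of the metacognitive evaluation the study set out to measure.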

Phase Three: Final Submission and Metacognitive Reflection required students to submit polished versions of their texts alongside detailed reflections. The final deliverables included: (1) an annotated draft highlighting all AI-assisted revisions categorized by type (e.g., lexical enhancements, structural improvements); (2) a written analysis of their evolving prompt engineering skills, with examples of how their queries to AI tools became more precise over time; and (3) a forward-looking statement outlining how they might apply these skills in future academic or professional contexts. This comprehensive approach enabled us to evaluate writing outcomes and students’ increasing proficiency in utilizing AI as a collaborative writing partner.


The phased structure ensured progressive skill development, with each stage building on prior competencies while introducing new challenge dimensions aligned with curriculum standards. The transition points between phases were marked by formative checkpoints where teachers verified mastery of phase-specific skills before advancement.

3.3 Data analysis

Developing prompt engineering skills among students is a critical factor in fostering metacognitive awareness and digital literacy. By engaging in guided refinement exercises – such as transforming vague queries like “help with an essay” into structured directives, like “generate three thesis statements comparing Chinese Hong Kong's and Singapore’s language policies, with evidence from academic sources post-2018” – students cultivated precision in communication and critical evaluation of AI-generated outputs. This iterative process sharpened their ability to articulate learning objectives and deepened their subject-matter mastery, as evidenced by their growing sophistication in task design.

The intervention revealed implementation challenges that required pedagogical solutions. An early-stage overreliance on AI-generated structures was observed, with some students passively accepting outputs without critical evaluation. This was addressed through structured peer review sessions, where learners justified their acceptance or rejection of AI suggestions – a strategy that reinforced metacognitive evaluation skills while reducing technological dependency. These adaptive measures are crucial for striking a balance between AI assistance and independent critical thinking.

A quantitative analysis of 60 writing samples confirmed the efficacy of this approach, with a 65.5 % increase in academic vocabulary usage and enhanced deployment of multiclausal sentence structures (see Table 3). Students advanced an average of 1.3 bands on rubric assessments, demonstrating progress aligned with the HKDSE criteria for content organization and creative language use. These measurable outcomes, achieved after addressing the initial challenges, underscore the role of prompt engineering as a scaffold for linguistic competency and higher-order thinking skills in Chinese Hong Kong’s secondary education context.

Table 3:

Improvements in linguistic and narrative capabilities across text types.

Text type Topic Metric Before intervention After intervention Improvement observed
Letter to the editor Cyberbullying Academic vocabulary usage Cyberbullying is bad and hurts people Cyberbullying, characterized by online harassment and abuse, erodes mental health and demands immediate legislative attention Significant enhancement in vocabulary (harassment, legislative attention), with a formal tone
Sentence complexity Basic sentence structure with no clauses Incorporates a multiclausal sentence: Cyberbullying, characterized by online harassment and abuse, erodes mental health and demands immediate legislative attention Greater use of dependent clauses and improved sentence variety
Debate speech Gap year Creative argument development Gap years are beneficial because they enable individuals to travel and have enjoyable experiences Gap years offer invaluable opportunities for young people to broaden their horizons, develop essential life skills, and explore cultural diversity, thereby fostering personal growth that extends beyond academic confines Improved creativity in argumentation and alignment with persuasive speech objectives
Multiclausal sentence structures Basic and repetitive statements Gap years offer invaluable opportunities for young people to broaden their horizons, develop essential life skills, and explore cultural diversity, thereby fostering personal growth that extends beyond academic confines Greater complexity and depth through the use of multiclausal constructions
Note: Data derived from pre/post-intervention writing assessments.

Table 3 illustrates the transformative impact of the intervention on students’ linguistic and narrative capabilities across diverse text types. The data highlight notable improvements in academic vocabulary usage, sentence complexity, and the development of creative arguments. For instance, students advanced from using basic statements, such as “Cyberbullying is bad,” to producing sophisticated and contextually rich constructions, such as “Cyberbullying, characterized by online harassment and abuse, erodes mental health and demands immediate legislative attention.” Similarly, debate speeches shifted from simple, generalized arguments to nuanced multiclause narratives. These findings underscore the effectiveness of targeted interventions in aligning student output with formal academic and creative standards.

Qualitative data from student interviews were analyzed thematically, revealing gains in linguistic (e.g., academic vocabulary and syntactic complexity) and metacognitive competencies. Students demonstrated enhanced critical evaluation by selectively engaging with AI outputs, often rejecting culturally incongruous suggestions and improving narrative coherence.

Quantitative improvements in vocabulary, syntax, and creative development (Table 3) were paralleled by qualitative shifts in students’ metacognitive awareness, as illustrated by three emblematic cases demonstrating how targeted prompt engineering mediated learning outcomes. Student A stated in the post-intervention interview, “Before, I just wrote ‘bullying is bad.’ I now describe it as ‘harassment eroding mental health’. I really can learn new vocabulary that I remember.”

Post-intervention, the student articulated conscious adoption of discipline-specific terminology (e.g., “harassment,” “legislative”), attributing this growth to AI-generated examples that modeled academic register substitutions, in line with Wang et al.’s (2023) findings on AI’s role in lexical scaffolding. However, our study uniquely highlights students’ agentive role in refining prompts to achieve such outcomes.

Student B noted in their reflective journal, “My old sentences were like Lego bricks, all separate. Now, I build them like staircases, with clauses leading to larger ideas. AI helped me combine ideas instead of writing short, choppy sentences.”

The above demonstrates advances in sentence-level sophistication. Student B progressed from simplistic coordination to embedded parallelism in their writing. The student’s reflections suggest metacognitive awareness of clause hierarchy, a skill previously unaddressed in their writing process. This metacognitive awareness aligns with the framework of AI as a “syntax gym,” where iterative prompt refinement fosters grammatical flexibility.

In the post-intervention focus group, Student C stated, “AI taught me to argue with myself. Usually, I would type show both sides about gap years, then pick the strongest evidence, but now I give more explicit direction.” Student C exhibited the most marked transformation, shifting from factual enumeration (“A gap year lets you take a break”) to nuanced concession (“While critics argue… structured programs enhance…”). The student’s deliberate use of contrastive discourse markers (“while,” “actually”) emerged from prompt engineering tasks requiring rebuttal generation, demonstrating AI’s capacity to foster dialectical thinking. This finding extends prior research on argumentation scaffolds by demonstrating their transfer to high-stakes contexts such as HKDSE debate tasks.

4 Discussion

Incorporating prompt engineering into curriculum design provides educators with tools to generate instructional materials aligned with standards. As demonstrated in this study, students learned to craft precise prompts that targeted the use of academic vocabulary, sentence complexity, creative argumentative development, and multiclausal sentence structures, exemplifying the efficacy of prompt engineering in lesson design.

Moreover, engineered prompts, such as “create a step-by-step guide for students to interview local shopkeepers about language use,” with AI-generated questionnaire templates and Cantonese translation support, honor Chinese Hong Kong’s sociolinguistic landscape while fostering experiential learning. In the intervention study, tasks like these reinforced curriculum objectives while encouraging students to localize their creative output with culturally specific markers.

Prompt engineering has the potential to reinvent classroom methodologies by offering innovative approaches to interactive learning. Teachers can design dynamic scenarios using prompts like “simulate a debate between environmentalists and developers about Chinese Hong Kong’s Lantau Tomorrow Vision, with AI generating role-specific talking points for student teams.” The theoretical potential of prompt engineering, particularly in multilingual contexts such as Chinese Hong Kong, necessitates empirical validation in real-world classrooms. Focusing on creative writing challenges, a subject area where AI’s role remains contested, the case study demonstrates how structured prompt engineering can address persistent pedagogical pain points, including narrative coherence, lexical repetition, and student confidence. We illuminate the interplay between theory and implementation by grounding the preceding conceptual framework in a local context.

The study employed a structured instructional framework to guide students through AI-assisted writing tasks adaptable to cyberbullying and gap year topics while maintaining pedagogical coherence with the curriculum standards of Chinese Hong Kong. Students developed precise directives tailored to the rhetorical demands of each genre during the initial stage of crafting the prompts. For cyberbullying assignments, learners formulated argumentative prompts that specified key elements, such as localized statistics and counterarguments. These included requests for a “250-word outline incorporating Hong Kong social media usage data and a rebuttal to concerns about free speech.” Conversely, gap year tasks emphasized persuasive structures, prompting AI to generate “letters to parents advocating for deferred studies while addressing common objections.” This phase cultivated students’ ability to align AI outputs with specific communicative purposes and assessment criteria.

The drafting phase transformed students into critical evaluators of the AI-generated content. When working on cyberbullying compositions, learners compared machine-produced arguments against exemplary HKDSE responses, analyzing the lexical choices and evidentiary strength. Gap year exercises required rhetorical analysis, with students annotating AI-generated drafts to identify persuasive strategies such as emotional appeals versus logical reasoning. This stage involved the development of metacognitive skills as participants assessed how well AI outputs aligned with genre conventions and audience expectations. This process proved valuable for identifying weaknesses in AI’s default outputs, such as generic phrasing or insufficient cultural contextualization.

In the final refinements, AI capabilities were merged with human judgment through iterative revisions. Cyberbullying essays underwent lexical and evidentiary upgrades, transforming statements such as “bullying hurts teens” into academically rigorous claims supported by local research. Gap-year compositions gained sophistication through structural improvements, with simple assertions evolving into nuanced arguments that employed concessive clauses and audience-specific persuasion. Students demonstrated growing autonomy by selectively incorporating AI suggestions while preserving their authorial voice, such as by choosing culturally resonant metaphors over generic descriptors. This phase improved writing quality while strengthening students’ critical evaluation and self-editing skills.

5 Conclusion and implications

Based on the findings of this case study, effective implementation requires targeted training for teachers. Programs must transition from basic tool literacy to advanced competency. Table 4 outlines the core competencies for AI integration.

Table 4:

Core competencies for AI integration in education.

Competency area Definition Example application
Prompt design Formulating AI queries that explicitly align with instructional objectives and assessment standards Act as a physics examiner to generate HKDSE-style questions with distractors reflecting common student misconceptions
Critical evaluation Systematic analysis of AI-generated content for pedagogical soundness and reliability Reviewing question banks for cultural bias, scientific accuracy, and cognitive alignment with student grade level
Ethical integration Implementing AI tools while maintaining academic integrity and enhancing human-centered learning Establishing classroom protocols for transparent AI use in drafting essays with proper attribution

These findings underscore the necessity of explicit prompt engineering instruction as an integral component of teacher education. Training programs must transition from basic AI tool literacy to advanced competencies in prompt design, critical evaluation, and ethical integration to ensure effective prompt engineering implementation. For instance, educators should learn to formulate queries that explicitly align with instructional objectives.

The Curriculum Development Council framework has already recognized the importance of AI-prompting competencies and has incorporated modules that address these core skills. Collaborative platforms where educators share tested prompts, such as repositories for “DSE English Writing Feedback Generators,” can institutionalize best practices and support ongoing professional learning. Ethical integration is particularly critical; establishing classroom protocols for the transparent use of AI ensures that academic integrity is upheld while maximizing the educational value of AI.

Future research should investigate the longitudinal development and transferability of these competencies to high-stakes assessment scenarios, particularly in Chinese Hong Kong’s unique English-medium instruction (EMI) environment. As the educational landscape evolves, prompt engineering will play a pivotal role in bridging the gap between traditional pedagogy and the demands of a digitally driven world. Effective integration of AI into secondary education relies on addressing existing challenges while implementing well-structured solutions that are tailored to the specific needs of the educational system.

5.1 Common challenges in AI integration

A primary challenge lies in teachers’ limited familiarity with AI tools, encompassing both basic literacy and advanced skills in prompt engineering. Without such skills, teachers tend to elicit generic outputs that offer limited pedagogical value and fail to meet the nuanced demands of Chinese Hong Kong classrooms (Herman 2025).

Academic integrity remains a significant concern, primarily due to the increasing sophistication of generative AI tools. These platforms may enable students to complete assignments superficially, bypassing critical thinking. This risk is especially pronounced in Chinese Hong Kong, where the exam-oriented culture is prevalent, and the pressure to achieve high scores could inadvertently encourage an overreliance on AI-generated content (Zhou et al. 2023).

Moreover, the technical limitations inherent in current AI systems present hurdles for implementation. Many models exhibit biases toward Western contexts, which can diminish their relevance in local classrooms and schools. Similarly, their difficulties in accurately handling Cantonese-English code-switching reduce their effectiveness in bilingual environments. Inaccuracies in the generated content, ranging from flawed historical timelines to erroneous scientific concepts, necessitate vigilant teacher oversight (Lo 2023).

5.2 Practical solutions to overcome barriers

To address these challenges, schools must prioritize hands-on training programs. These should move beyond theoretical overviews and focus on applied practices. For instance, workshops could guide educators in crafting curriculum-aligned prompts, such as designing a role-play activity about the impacts of climate change on Chinese Hong Kong’s fishing industry for Form 4 Geography, with leveled roles for diverse learners. Participants should also learn to evaluate AI-generated outputs for potential bias and inaccuracy. Collaborations between the Education Bureau and universities can help certify such programs to ensure quality and standardization.
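The kind of curriculum-aligned prompt such workshops would cover can be made concrete as a small, reusable template. The sketch below is purely illustrative: the function, field names, and wording are the author-added assumptions of a workshop exercise, not part of any official training material.

```python
# Illustrative sketch of a curriculum-aligned prompt template for the
# Form 4 Geography role-play described above. All names and wording are
# assumptions for discussion, not a prescribed format.

def build_roleplay_prompt(topic, form_level, subject, roles):
    """Assemble a prompt that states objective, context, differentiation,
    and a verification request explicitly, rather than a vague one-liner."""
    role_lines = "\n".join(f"- {name}: {level}" for name, level in roles)
    return (
        f"You are designing a classroom role-play for {form_level} {subject} "
        f"in Hong Kong.\n"
        f"Topic: {topic}\n"
        f"Create one briefing card per role below, matched to the stated "
        f"proficiency level:\n{role_lines}\n"
        f"Keep all factual claims verifiable and flag any uncertainty, so the "
        f"teacher can check for bias or inaccuracy before classroom use."
    )

prompt = build_roleplay_prompt(
    topic="Impacts of climate change on Hong Kong's fishing industry",
    form_level="Form 4",
    subject="Geography",
    roles=[("Fisherman", "lower proficiency"),
           ("Government planner", "average proficiency"),
           ("Marine scientist", "higher proficiency")],
)
print(prompt)
```

The point of the exercise is that differentiation ("leveled roles") and teacher oversight (the final verification line) are written into the prompt itself rather than left implicit.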

School-wide AI policies are equally critical for promoting ethical use, and clear guidelines must be implemented. For example, permitting AI for brainstorming but prohibiting its use in final submissions can help to manage expectations. Some schools in Chinese Hong Kong have introduced AI declarations for assignments, where students document how they used AI tools. This practice has reportedly reduced misuse by 28 % (Herman 2025).

Furthermore, fostering collaboration among educators can democratize expertise and improve the collective efficacy of prompt engineering methods. Online repositories of pre-tested prompts, such as an “HKDSE English Paper 3 Listening Practice Generator,” together with peer observation programs, can enable teachers to share successful strategies and navigate contextual challenges. Such communities of practice are particularly valuable for adapting AI-generated outputs for Cantonese-speaking learners.
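A shared prompt repository of this kind need not be elaborate; the minimal in-memory sketch below shows the metadata such a repository might track. The entry name mirrors the example from the text, but the data structure and field names are assumptions; in practice, schools might use a shared spreadsheet or version-controlled folder instead.

```python
# Minimal sketch of a shared prompt repository. Field names are assumptions
# chosen to capture what colleagues need before reusing a prompt: subject,
# the prompt itself, where it was tested, and contextual caveats.

from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    name: str
    subject: str
    prompt_text: str
    tested_with: list = field(default_factory=list)  # tools it was verified on
    notes: str = ""  # contextual caveats for adapters

repository = {}

def share_prompt(entry):
    """Register a peer-tested prompt so colleagues can reuse and adapt it."""
    repository[entry.name] = entry

share_prompt(PromptEntry(
    name="HKDSE English Paper 3 Listening Practice Generator",
    subject="English Language",
    prompt_text=("Generate a Paper 3-style integrated listening task script "
                 "with data-file questions at HKDSE standard."),
    tested_with=["(tool name withheld - verify locally)"],
    notes="Adapt vocabulary load for Cantonese-speaking learners.",
))

print(sorted(repository))
```

Recording the “tested with” and “notes” fields is what turns a prompt dump into a community of practice: a colleague can see at a glance whether a prompt has been validated for their context.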

5.3 Recommendations for effective AI integration

The successful implementation of AI tools in Chinese Hong Kong’s secondary schools requires coordinated efforts among teachers, administrators, and policymakers. Strategic recommendations focus on enhancing classroom practices, building institutional capacity, and establishing systemic guidelines that align with the territory’s unique educational framework.

5.4 Teachers’ strategic implementation in classrooms

Teachers should consider adopting a phased approach to integrating AI into their teaching. Beginning with small-scale pilot projects targeting specific educational pain points allows for manageable experimentation while minimizing disruptions. For example, teachers could initially use prompt engineering to generate differentiated reading comprehension exercises, e.g. “Create three versions of this news article at CEFR levels B1 to C1, with comprehension questions that track HKDSE question types.” As familiarity increases, more complex applications, such as AI-assisted feedback systems, can be explored.
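The differentiation prompt above can be expanded programmatically: one template, instantiated once per CEFR level. The sketch below is illustrative, and the level descriptors are simplified assumptions rather than official CEFR wording.

```python
# Sketch of the differentiation prompt described above: a single template
# expanded per CEFR level. Level descriptors are simplified assumptions,
# not official CEFR descriptors.

CEFR_NOTES = {
    "B1": "short sentences, high-frequency vocabulary",
    "B2": "longer sentences, some topic-specific vocabulary",
    "C1": "complex syntax, low-frequency academic vocabulary",
}

def differentiation_prompts(article_title):
    """Return one (level, prompt) pair per CEFR level for the same article."""
    return [
        (level,
         f"Rewrite the news article '{article_title}' at CEFR level {level} "
         f"({notes}). Then add five comprehension questions that follow "
         f"HKDSE reading question types (e.g. main idea, inference, "
         f"vocabulary in context).")
        for level, notes in CEFR_NOTES.items()
    ]

for level, prompt in differentiation_prompts("Typhoon season and coastal flooding"):
    print(level, "->", prompt[:60], "...")
```

Because every version derives from the same template, the three texts stay aligned in topic and question style, which makes mixed-ability classroom use far easier to manage.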

Critically, AI tools must augment, rather than replace, evidence-based teaching practices. Prompts should be designed to reinforce human-centric priorities such as critical thinking, collaborative learning, and teacher-student relationships. Weekly reflection logs that document the efficacy of prompts and areas for improvement can help educators refine their approaches iteratively (Yang 2023).

5.5 Building institutional capacity for sustainable AI adoption

School administrators play a pivotal role in creating the infrastructure and cultural conditions necessary for effective AI integration in schools. Investments in three interconnected domains are essential for sustainable implementation.

  1. Professional development: Administrators should collaborate with tertiary institutions to develop targeted training programs that build pedagogical and technological capacity. These programs must offer hands-on sessions on designing curriculum-aligned prompts, such as HKDSE English speaking tasks incorporating local linguistic and cultural nuances.

  2. Infrastructure upgrades: Schools must ensure equitable access to reliable AI tools that support Cantonese-English bilingualism. Recent data reveal that 62 % of teachers in Hong Kong, China, cited technical limitations as a significant barrier to the adoption of AI (Education Bureau 2024). Addressing this issue requires upgrading hardware and software systems to meet the unique demands of bilingual education.

  3. Fostering innovation: Institutional leadership should recognize and reward pedagogical innovators. Establishing AI Teaching Fellowships can incentivize educators to develop and disseminate evidence-based practices. Professional learning communities, where teachers collaborate to refine prompt engineering strategies, are crucial for scaling AI applications in schools.

5.6 Strategic priorities for systemic implementation

The Education Bureau must lead efforts to integrate AI technologies into secondary education by focusing on two key initiatives.

  1. Developing comprehensive guidelines: Establishing clear frameworks for the ethical use of AI in classrooms is vital. These guidelines should include exemplar prompts aligned with Chinese Hong Kong’s curriculum standards, offering immediate practical applicability. Robust protocols must be in place to safeguard student data privacy and address concerns about digital security. Standardized rubrics for evaluating AI-generated content can help maintain quality control across diverse applications.

  2. Developing localized tools: Creating AI tools capable of processing Cantonese-English code-switching patterns is essential for addressing the linguistic realities in Chinese Hong Kong. Additionally, developing assessment resources tailored to the HKDSE framework, validated by subject matter experts, can enhance the relevance and reliability of AI-generated outputs.
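A first step toward handling code-switched input is simply detecting it, so that mixed-language text can be routed to appropriate processing. The sketch below uses a script-based heuristic; this is a deliberate simplification assumed for illustration, since production systems would need proper language identification rather than a regex.

```python
# Hedged sketch: flagging Cantonese-English code-switching in student text.
# Script detection (Han characters alongside Latin letters) is a crude
# heuristic assumed here for illustration; real bilingual tooling needs
# genuine language identification.

import re

HAN = re.compile(r"[\u4e00-\u9fff]")   # CJK Unified Ideographs
LATIN = re.compile(r"[A-Za-z]")

def is_code_switched(text):
    """True if the text mixes Chinese characters and Latin-script words."""
    return bool(HAN.search(text)) and bool(LATIN.search(text))

print(is_code_switched("我今日去咗 library 温书"))    # mixed Cantonese-English
print(is_code_switched("I studied in the library"))  # English only
```

Even this crude check is useful at the classroom level, for example to decide when an AI tool should be asked to preserve, translate, or comment on the mixed-language portions of a student's writing.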

Cross-sector collaboration is necessary in three key areas to reinforce institutional efforts. Longitudinal studies should examine the impact of AI on learning outcomes across Chinese Hong Kong’s stratified school system. Parent education initiatives are crucial for promoting informed perspectives on the appropriate role of AI in student learning. Finally, partnerships with Chinese Hong Kong’s education technology sector can drive the development of contextually responsive solutions that bridge the gap between educational practice and technological innovation.

5.7 Conclusions

This study systematically demonstrated how prompt engineering serves as a critical bridge between the capabilities of artificial intelligence and the nuanced demands of secondary education in Chinese Hong Kong. Beyond this context, three universal lessons emerge for educators. First, the pedagogical value of AI depends on the deliberate design of prompts that align with curricular goals. Second, localization – linguistic, cultural, and institutional – is non-negotiable for equitable implementation. Third, teacher training must prioritize “pedagogical prompt engineering” to strike a balance between innovation and integrity.

These findings invite further research into scalable models for AI integration, particularly in multilingual and exam-driven systems worldwide. Policymakers and educators should collaborate to develop shared frameworks that harness the potential of AI without compromising the human-centric core of education.

The results showed, first, that the quality of prompt design directly determines AI’s educational utility, as evidenced by students’ significant improvements in academic vocabulary and syntactic complexity. Second, localization emerges as non-negotiable – whether through Cantonese-English translanguaging prompts, hyper-local references to Chinese Hong Kong’s urban landscapes, or precise alignment with the territory’s exam-oriented curriculum. Third, and most crucially, our three-phase instructional model reveals that AI achieves a transformative impact only when teachers evolve from passive technology consumers to strategic designers of human-AI collaboration.

The implications of this research extend far beyond the classrooms in Chinese Hong Kong. Three key lessons emerge for global education systems as they navigate the integration of AI. Administrators must invest in context-specific professional development that moves beyond basic digital literacy to cultivate pedagogical prompt engineering – the ability to craft inputs that leverage AI’s capabilities while respecting local learning objectives and cultural contexts. Policymakers should prioritize the development of localized language models that can effectively handle linguistic diversity. Most importantly, our findings challenge the prevalent discourse surrounding educational technology: the human-AI relationship proves most productive when framed as collaborative augmentation rather than substitution.

Two research trajectories require attention. Longitudinal studies must examine whether prompt engineering skills transfer across disciplines and whether they sustain academic improvement over time. Comparative research across different cultural contexts could establish universal principles versus location-specific adaptations of AI integration models. Practically, the success of Chinese Hong Kong’s approach suggests that secondary education systems should embed prompt engineering in teacher training curricula, develop shared repositories of tested prompts aligned to national standards, and establish ethical frameworks for student AI use that balance innovation with academic integrity.

Ultimately, this study positions prompt engineering as both a technical competency and a new form of pedagogical literacy for the digital era. As AI becomes ubiquitous in education, our findings from Chinese Hong Kong offer a replicable yet adaptable model that harnesses the potential of technology while safeguarding the human-centric values of culturally responsive teaching, critical thinking development, and the cultivation of authentic student voices. The future of education lies not in choosing between tradition and innovation but in mastering how to thoughtfully engineer their intersection, with prompt design serving as our most precise tool for this essential synthesis.


Corresponding author: Kevin Thomas Rey, English Department, St. Francis Xavier’s College, 4/C Full Sing Court, 82–84 Fuk Lo Tsun Road, Kowloon, Hong Kong, China, E-mail:


Acknowledgments

The author thanks Dr. Wong Ming Har, Ruth, for her support and guidance.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: Dr. Kevin Thomas Rey, as the sole author, has accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Competing interests: The author states no conflict of interest.

  5. Research funding: None declared.

References

Curriculum Development Council. 2023. Supplement to the English language curriculum: Key learning area curriculum guide (Secondary 1–3). Hong Kong: Hong Kong Government Printer.

Deng, Jianyang & Yijia Lin. 2022. The benefits and challenges of ChatGPT: An overview. Frontiers in Computing and Intelligent Systems 2(2). 81–83. https://doi.org/10.54097/fcis.v2i2.4465.

Education Bureau. 2024. Module on artificial intelligence for junior secondary. Chinese Hong Kong. Available at: https://www.edb.gov.hk/en/curriculum-development/kla/technology-edu/resources/InnovationAndTechnologyEducation/resources.html.

Goodwin-Jones, Robert. 2022. Partnering with AI: Intelligent writing assistance and instructed language learning. Language Learning & Technology 26(2). 5–24. https://doi.org/10.64152/10125/73474.

Herman, Erik. 2025. Optimizing prompt engineering for generative AI. Boston: De Gruyter. https://doi.org/10.1515/9781501521355.

Lee, Daniel & Edward Palmer. 2025. Prompt engineering in higher education: A systematic review to help inform curricula. International Journal of Educational Technology in Higher Education 22. 7. https://doi.org/10.1186/s41239-025-00503-7.

Lo, Leo S. 2023. The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship 49(4). https://doi.org/10.1016/j.acalib.2023.102720.

Su, Yanfang, Yun Lin & Chun Lai. 2023. Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing 57. https://doi.org/10.1016/j.asw.2023.100752.

Tan, Xiao, Gary Cheng & Ho Ling Man. 2025. Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence 8. https://doi.org/10.1016/j.caeai.2024.100355.

Wang, Yuntao, Yanghe Pan, Miao Yan, Zhou Su & Tom H. Luan. 2023. A survey on ChatGPT: AI-generated contents, challenges, and solutions. IEEE Open Journal of the Computer Society 4. 280–302. https://doi.org/10.1109/ojcs.2023.3300321.

Weng, Xiaojing, Huiyan Ye, Yun Dai & Oi-lam Ng. 2024. Integrating artificial intelligence and computational thinking in educational contexts: A systematic review of instructional design and student learning outcomes. Journal of Educational Computing Research 62(6). 1640–1670. https://doi.org/10.1177/07356331241248686.

Yang, Tzu-Chi. 2023. Application of artificial intelligence techniques in analysis and assessment of digital competence in university courses. Educational Technology & Society 26(1). 232–243.

Zhou, Chuyi, Xiyuan Zhang & Chunyang Yu. 2023. How does AI promote design iteration? The optimal time to integrate AI into the design process. Journal of Engineering Design 1–28. https://doi.org/10.1080/09544828.2023.2290915.

Received: 2025-04-30
Accepted: 2025-07-15
Published Online: 2025-08-21

© 2025 the author(s), published by De Gruyter and FLTRP on behalf of BFSU

This work is licensed under the Creative Commons Attribution 4.0 International License.
