Article, Open Access

Artificial Intelligence in Academic Writing and Research: Adoption and Effectiveness

  • Somipam R. Shimray and A. Subaveerapandiyan
Published/Copyright: September 13, 2025

Abstract

This study examines the effect of artificial intelligence (AI) tools on Ph.D. scholars at Babasaheb Bhimrao Ambedkar University. The research assesses the types of AI tools used, the purposes for which they are used, and the challenges faced in using them. A structured questionnaire was used for data collection. The results indicate a high adoption rate of AI tools, with 91.2% of respondents using technologies such as plagiarism detection software, large language models, paraphrasing tools, and academic research databases with AI features. These tools were predominantly effective for literature reviews and research writing, improving precision, proficiency, and creativity. The study offers distinctive insights into the transformative role of AI in academic research, particularly within the setting of doctoral education. By concentrating on the experiences of Ph.D. students, it highlights both the potential and the challenges of AI incorporation, drawing attention to the role of technology-driven innovation in higher education and its alignment with sustainable development objectives for knowledge dissemination.

1 Introduction

Academic writing plays a crucial role in constructing arguments, recording methodologies, engaging in scholarly discourse, advancing knowledge, earning respect, securing funding, and achieving promotions. It is a fundamental component of a successful graduate program, especially at the doctoral level. Scholars are expected to produce their theses and academic publications in a scholarly style. However, many lack specialized training in academic writing, which hampers their writing ability. Consequently, numerous scholars encounter significant challenges in scholarly writing.

Artificial intelligence (AI) is a computing system capable of performing human-like tasks such as learning, adapting, synthesizing, self-correcting, and utilizing data for complex processing activities (Popenici & Kerr, 2017). AI has emerged as a transformative force in scholarly research, providing powerful tools that enhance efficiency, accuracy, and innovation (Khalifa & Albadawy, 2024). Researchers widely employ AI applications like machine learning, natural language processing (NLP), and data analytics across various fields to automate data collection, improve literature reviews, and assist with writing and editing. AI-driven writing assistants support grammar, structure, citation management, and compliance with disciplinary standards. AI tools improve academic writing efficiency by enabling writers to concentrate on essential and innovative research elements (Golan & Azoulay, 2023). These tools save time and help researchers uncover new insights by analysing large datasets that would otherwise be too complex to process manually. While academic writing presents specific challenges, AI tools significantly facilitate this process, boosting research productivity and enhancing overall work efficiency.

AI tools used in academia span several categories based on their function and application. Writing assistants, such as ChatGPT, Jasper, and Grammarly, aid in drafting, editing, paraphrasing, and improving grammar and coherence in academic writing (Pryma, Pelivan, Teletska, Tsobenko, & Zagrebelna, 2025; Sontake, 2025). Plagiarism detection tools like Turnitin and iThenticate compare submissions against extensive databases to identify potential instances of copied content, ensuring academic integrity (Sontake, 2025). Data analysis tools, including IBM SPSS and IBM Watson, support researchers in statistical processing and machine learning applications (Jiang, Liu, Baig, & Li, 2024; Sontake, 2025). Generative AI models, such as DALL·E, Midjourney, and Stable Diffusion, convert text prompts into visuals, playing a growing role in science communication and conceptual illustration (Joynt et al., 2024). In addition, literature discovery and research design tools like Semantic Scholar, Elicit, Iris.ai, and ResearchRabbit utilize AI to streamline literature searches, identify research gaps, and visualize citation networks (Oklahoma State University Libraries, 2025; Pinzolits, 2024; Texas Tech University Libraries, 2025). These tools, though varied in purpose, collectively enhance efficiency, accuracy, and creativity in academic research and writing.

This study aims to address the following key research questions:

  1. What are the prevalent AI tools used by Ph.D. students at Babasaheb Bhimrao Ambedkar University (BBAU)?

  2. For what purposes are AI tools most commonly employed in research?

  3. How do Ph.D. students perceive the usefulness of AI tools in enriching research?

  4. What obstacles restrict the utilisation of AI tools in research?

  5. How do AI tools influence students’ creativity, problem-solving, and knowledge acquisition?

Study Objectives

  • To examine the AI tools utilized by Ph.D. students at BBAU.

  • To understand the intent of using AI tools in research.

  • To examine Ph.D. students’ perceptions of AI tool use in enriching research.

  • To understand challenges in using AI tools in research.

  • To assess the perceived advantages of AI tools in research.

2 Literature Review

2.1 Theoretical Framework on Technology Adoption

Several theoretical frameworks are used to understand technology adoption; among them, the technology acceptance model (TAM), innovation diffusion theory (IDT), and the unified theory of acceptance and use of technology (UTAUT) are notable. TAM is a theoretical framework that explains how to encourage user acceptance and use of new technology (Davis, 1989). It is widely used to address the difficulty organizations face in promoting the acceptance of new technology (Liu, Dedehayir, & Katzy, 2015). The two dimensions of TAM are perceived usefulness and perceived ease of use (Davis, 1989). Zhang, Hu, and Zhou (2025) extended TAM and found that perceived utility strongly influences adoption and that ethical concerns considerably moderate this relationship, particularly in public universities. Likewise, Al-Bukhrani, Alrefaee, and Tawfik (2025) employed the theory of reasoned action to examine the adoption of AI writing tools and found that attitudes and subjective norms are the main predictors of intention, whereas perceived barriers did not significantly deter adoption, a notable deviation from TAM and UTAUT assumptions.

Rogers (1962) introduced IDT, which is widely employed to understand how innovations diffuse. This framework emphasizes that the diffusion of an innovation occurs across a social system through specific channels over a period of time. The five characteristics of IDT are “relative advantage,” “complexity,” “trialability,” “compatibility,” and “observability.” IDT explains that the adoption of a novel technology is influenced by the features of the innovation, the channels of communication, the social framework, and the features of the adopters (Mbatha, 2024). Gutiérrez-Leefmans, Picazo-Vela, and Kareem (2025) identified relative advantage, observability, and compatibility as strong predictors of adoption among centennial users, whereas complexity was less influential than in earlier IDT-based studies. Likewise, Almaiah et al. (2022) noted that trialability and observability were necessary for effective AI integration in institutional online learning environments.

The UTAUT is a framework intended to explain user acceptance and use of new technologies (Williams, Rana, & Dwivedi, 2015). Four factors, i.e. performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC), make up the UTAUT model (Venkatesh, Morris, Davis, & Davis, 2003). Additionally, it integrates four individual variables, i.e. gender, age, experience, and voluntariness of use, which moderate the association between the primary factors, behavioural intention, and usage behaviour (Venkatesh et al., 2003). In the UTAUT framework, PE, SI, EE, and FC influence the intention to use technology. Liu et al. (2015) showed that PE and EE determine university educators’ eagerness to engage with tools like ChatGPT, while Mbatha (2024) stressed trialability and complexity as core IDT factors affecting chatbot technology use in education.

2.2 Overview of AI Tool Usage in Research

AI has swiftly become a transformative force across various sectors, with considerable influence on academic research (Golan & Azoulay, 2023). The implementation of AI tools in education and research improves productivity and efficiency. AI tools reduce the time spent on data processing and improve the quality of output through grammar checks, structural support, and citation management (Zhao, 2023). AI applications in academic writing include assistance with content creation, such as writing assistants, plagiarism detection, and automated peer review (Golan & Azoulay, 2023). In medicine, AI has made profound contributions to diagnostics and patient treatment predictions (Davenport & Kalakota, 2019); in financial institutions, AI helps detect fraud and forecast trends (Li, Sigov, Ratkin, Ivanov, & Li, 2023); and in retail marketing, AI enhances customer service (Lu, Cheng, Tzou, & Chen, 2023). In education, AI helps personalize learning experiences and improve administrative tasks, providing customized tutoring interventions centred on student needs (Zawacki-Richter, Marín, Bond, & Gouverneur, 2019).

The application of AI is transforming research in multiple arenas such as biology, mathematics, physics, chemistry, and the humanities. In biology, AI improves molecular dynamics simulations, strengthening understanding of the SARS-CoV-2 virus and its spike protein (Casalino et al., 2020). In mathematics, AI locates patterns and anomalies, resulting in new theorems and enhanced forecasting in fields like knot theory and astrophysics (Davies et al., 2021). In the humanities, AI supports data analysis and creative processes, such as examining emotional tones in texts (Taherdoost & Madanchian, 2023), deriving themes through topic modelling (Mustak, Salminen, Plé, & Wirtz, 2021), and helping with digital archiving (Teel, 2024). AI tools like Google Translate facilitate cross-cultural research by translating historical documents and supporting the study of language development (Moneus & Sahari, 2024).

2.3 Types of AI Tools in Academic Research

AI tools are widely used in academic research; among the most widely used are Grammarly, ChatGPT, Elicit, Perplexity, and Consensus (Granjeiro et al., 2025). AI tools like ChatGPT have gained significant attention as they offer assistance in generating ideas, answering questions, and improving language fluency (Pham, 2025). Grammarly is a commonly used AI-powered proofreading tool that assists in improving academic writing by correcting grammar, punctuation, and sentence clarity. Likewise, Elicit, Consensus, and Perplexity provide support for literature reviews and information synthesis (Granjeiro et al., 2025). Despite these benefits, AI tools have certain limitations, including concerns about content accuracy, over-standardization of writing style, and the possible erosion of authorship accountability (Granjeiro et al., 2025). Moreover, published literature indicates that generative AI tools, while helpful for fostering productivity and creativity, necessitate deliberate pedagogical frameworks to guarantee responsible usage and to cultivate ethical and digital literacy skills among students (Saúde, Barros, & Almeida, 2024).

Moreover, the adoption of AI writing assistants in non-Western and low-resource contexts poses additional challenges. For example, research from Tanzanian universities indicates that the lack of support for local languages such as Swahili, affordability issues, and ethical uncertainty are major obstacles to adoption (Kondoro, 2025). These outcomes indicate the need for culturally adaptive and linguistically inclusive AI tools that can serve diverse student populations effectively. AI tools are reshaping academic workflows, making research more accessible and efficient; nevertheless, their integration must be supported by ethical standards, proper guidance, and an awareness of their contextual limits.

2.4 Plagiarism Detection Tools

In the age of AI, it is imperative to maintain academic integrity. Plagiarism detection tools use algorithms based on machine learning, NLP, and semantic analysis to evaluate similarities between submitted content and published content (Amirzhanov, Turan, & Makhmutova, 2025). Among the many tools available, Turnitin is the most widely used because of its robust database; it compares student papers against millions of submissions stored in its database (Von Isenburg, Oermann, & Howard, 2019). Published literature indicates that Turnitin’s AI detection achieves an accuracy rate between 92 and 100%, signifying its efficacy in detecting both human- and AI-generated texts (Canyakan, 2025).

Copyscape is another tool widely used for detecting plagiarism across corporate websites; investigators use it to reveal instances of “content theft” (Foltýnek et al., 2020). GPTZero, Copyleaks, and OpenAI Classifier are other tools that use advanced AI-based classifiers to identify AI-authored content. For instance, GPTZero was developed to deter AI-assisted academic dishonesty by identifying generative patterns in student submissions, whereas Copyleaks is used on educational platforms to detect plagiarized content (Sajid, Sanaullah, Fuzail, Malik, & Shuhidan, 2025).

Despite these benefits, plagiarism detection tools also present challenges, such as false positives, which can occur especially in clinical and scientific contexts and cause unnecessary academic penalties. In medical education, false positives from Turnitin raise concerns about fairness in AI-assisted plagiarism detection (Daungsupawong & Wiwanitkit, 2025). Published literature indicates that while anti-plagiarism systems are efficient, their acceptance by faculty is uneven, mostly hindered by an absence of training and resistance to change (Kolhar & Alameen, 2021). With AI-generated content and paraphrased plagiarism, the intricacy of plagiarism detection has increased. Published literature indicates a need to adopt hybrid techniques that combine semantic text analysis, deep learning models, and stylometric examination to capture subtler forms of intellectual theft (Amirzhanov et al., 2025; Sajid et al., 2025). Moreover, cross-language and code plagiarism continue to rise; thus, customized detection algorithms are required (Amirzhanov et al., 2025).

2.5 Purposes of Use of AI Tools by Students

AI tools are widely used by university students for various purposes such as writing support, language correction, and self-directed learning. Published literature indicates that AI tools like ChatGPT are used for idea generation, Grammarly for grammar correction, and QuillBot for summarization, paraphrasing, grammar checking, and plagiarism checking (Tokdemir Demirel, 2024). AI tools improve writing clarity and coherence and also assist students in improving vocabulary. Susha, Viberg, and Koren (2024) conducted a study examining students’ feelings about co-writing essays with ChatGPT. Students reported using AI for idea generation, content design, and language editing. Nonetheless, students acknowledged that critical thinking, fact-checking, and ethical awareness are required when using AI tools.

Li, Sadiq, Qambar, and Zheng (2025) undertook a quasi-experimental study and found that using ChatGPT in research considerably improved students’ research skills, self-directed motivation, and self-directed learning behaviours. Students using ChatGPT to complete the assigned task outperformed those in control groups, signalling that AI tools can promote active participation and deeper learning. Likewise, Yousef, Deeb, and Alhashlamon (2025) found that 87% of Palestinian medical students used AI tools often, with ChatGPT being used by 76%. These students used AI for drafting literature, analysing data, and enhancing research performance. Nevertheless, the students reported that proper training in using AI tools is required.

Bista and Bista (2025) examined doctoral students’ use of ChatGPT, Google Gemini, and Microsoft Copilot and found that AI tools are used to refine academic texts, manage cognitive load, and enhance writing. At the same time, students were concerned about accuracy and the need to maintain academic integrity. Dai, Lai, Lim, and Liu (2023) found that postgraduate research students in Australia employed ChatGPT to improve critical thinking and independence in research. These outcomes indicate that students use AI tools to perform academic tasks, enhance learning outcomes, and gain autonomy. Nonetheless, proper training, ethical awareness, and institutional guidance are required to ensure effective and responsible use.

2.6 Perceived Benefits of AI Tools in Academic Research

The implementation of AI tools in academic research has brought several benefits for students, including improvements in productivity, learning efficiency, and language expression. Almassaad, Alajlan, and Alebaikan (2024) examined the use of generative AI (GenAI) tools like ChatGPT and Gemini among students in Saudi Arabia and found that AI tools offer time savings, ease of access, and instant feedback, thereby supporting efficient academic performance. Similarly, Arbab, Dhuhli, Krishnan, and Crisostomo (2024) examined AI tool usage in Oman and found that students perceived AI tools as instrumental in improving writing skills, analytical thinking, and critical reasoning. At the same time, they stressed using AI tools judiciously for idea generation and structure planning rather than complete task automation. It is also widely appreciated that AI tools can personalize learning, streamline research tasks, and enhance understanding of complex subjects, although concerns remain about over-dependency on these tools (Kostas, Paraschou, Spanos, Tzortzoglou, & Sofos, 2025).

Al-Bukhrani et al. (2025) indicate that AI writing tools considerably promote productivity, especially in drafting manuscripts and minimizing language obstacles for non-native English speakers. Chiu (2025) stressed that awareness of AI tools is imperative in shaping perceptions of their usefulness. These findings suggest that AI tools offer substantial academic value by enhancing writing fluency, promoting personalized learning, supporting research inclusion, and improving motivation and participation, while at the same time calling for attentive integration, training, and supervision to guarantee responsible use.

2.7 Challenges in Student Adoption of AI Tools

In spite of the many potentials that AI offers in the academic environment, many challenges hinder its wide adoption among students. Students are concerned about how AI tools collect and possibly misuse personal data, raising fears of privacy breaches and infringement of confidentiality (Klimova, Pikhart, & Kacetl, 2023). AI systems lack the contextual awareness required for classroom interactions, resulting in student distrust of and avoidance toward AI-facilitated teaching and learning (Han, Coghlan, Buchanan, & McKay, 2025). Moreover, there are technological shortfalls, such as uneven service quality and the necessity for pedagogical adaptation (Salhab, 2025).

Institutional support is another factor that significantly influences AI adoption. Absence of training on AI tools, absence of digital orientation, and inadequate infrastructure are factors that influence AI adoption among students (Jackman, Marshall, & Carrington, 2024). Some students use AI tools to complete tasks such as checking grammar and generating ideas, while others avoid them owing to fear of academic dishonesty (Smerdon, 2024) and concerns about bias, privacy, and the lack of explainability, transparency, and accountability (Chiu, 2025). This understanding complicates the adoption of AI tools. Published findings indicate that although encouraging attitudes and social principles promote adoption, perceived barriers such as a lack of clarity on ethical use and inadequate training hinder AI adoption (Al-Bukhrani et al., 2025). Kostas et al. (2025) also identified ethical concerns, reliability, and decreasing creativity as major student worries. Almassaad et al. (2024) noted subscription fees, misinformation, and reduced peer interaction as practical and social barriers. The findings underline the importance of AI literacy orientation and clear guidelines to guide ethical AI use.

2.8 Copyright Implications of AI-Generated Academic Content

The use of AI-generated content has raised considerable legal and ethical issues, particularly around copyright and authorship. With the advancement of AI tools, determining ownership is becoming a challenging task. Traditional principles of authorship indicate that authorship requires human contribution. Now, AI tools such as GPT-3 and GPT-4 can produce literary works, along with academic texts, raising the question of whether content generated using AI tools qualifies for copyright protection. Publication guidelines, national legislation, and conventions such as the Berne Convention stipulate that copyright rests with human authorship (Gaffar & Albarashdi, 2025). This indicates that content generated using AI tools lacks legal protection. In the European Union, the Copyright Directive does not expressly cover AI authorship, leaving it to national courts to analyse authorship and ownership. Proposed frameworks in the European Union would permit individuals to own AI-generated content; however, no definitive position has been taken (Zhuk, 2024). The United States applies the “human authorship” doctrine rigorously and denies copyright to machine-generated outputs.

Kretschmer, Margoni, and Oruç (2024) draw attention to how AI development relies on large-scale text and data mining (TDM) from copyrighted sources, especially for training large language models (LLMs). EU law allows TDM under limited exemptions, but only when lawful access is guaranteed; in the absence of consent from copyright holders, such mining can break copyright laws. Chu, Song, and Yang (2024) suggest a means to minimize the replication of copyrighted materials in outputs through a mathematical framework; however, they admit that perfect compliance remains elusive. From a policy perspective, unregulated AI use risks undermining the creative and academic sectors (Glenster, Hampton, Neff, & Lacy, 2025). In the field of music, Enochson (2025) exemplifies this with the instance of AI-generated songs imitating human artists. Such instances underline the urgency for academic and legal institutions to clarify how AI-generated academic content should be regulated under copyright law. The copyright position of AI-generated academic content continues to evolve.

2.9 Research Gap and Contribution of the Study

While previous studies have explored the use of AI tools in higher education and their implications for writing, productivity, and ethical concerns, much of this literature has focused on general student populations, educators, or Western institutional contexts. Limited research has specifically investigated how doctoral students, particularly within Indian universities, adopt, utilize, and perceive the effectiveness of diverse AI tools in their academic research. Moreover, few studies have systematically examined the intersection of AI usage with research creativity, knowledge acquisition, and inventive thinking. This study addresses this gap by focusing on Ph.D. students at Babasaheb Bhimrao Ambedkar University (BBAU), offering empirical insights into the types of AI tools used, their perceived benefits and challenges, and the influence of these tools on higher-order research capacities. By doing so, it contributes context-specific evidence to the growing discourse on AI integration in doctoral education and informs policy and practice on ethical, effective, and equitable AI adoption in academic research.

3 Methodology

3.1 Research Design

This study adopts a quantitative research design to investigate the adoption and impact of AI tools on Ph.D. students’ research practices at BBAU. A structured survey-based approach was utilized to collect data on AI tool usage, effectiveness, barriers, and the influence of AI on research productivity, creativity, and knowledge acquisition.

3.2 Sampling

The target population includes all Ph.D. students enrolled in various research disciplines at BBAU. A sample was drawn based on departmental representation and availability. The total number of questionnaires distributed was 629, and 261 completed responses were received, resulting in a response rate of 41.5%. According to Krejcie and Morgan (1970), the minimum sample size for a population of 629 is 242, confirming that our sample is statistically adequate for analysis.
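
The adequacy threshold can be reproduced with a short calculation. The sketch below is illustrative only: it implements the Krejcie and Morgan (1970) formula under the conventional assumptions (chi-square = 3.841 at 1 df, population proportion P = 0.5, margin of error d = 0.05); the function name is ours, not the authors’.

    # Illustrative sketch of the Krejcie and Morgan (1970) sample-size formula.
    # Assumed parameters: chi2 = 3.841 (1 df, 95% confidence), P = 0.5, d = 0.05.
    def krejcie_morgan(population: int, chi2: float = 3.841,
                       p: float = 0.5, d: float = 0.05) -> int:
        """Minimum recommended sample size for a finite population."""
        numerator = chi2 * population * p * (1 - p)
        denominator = d ** 2 * (population - 1) + chi2 * p * (1 - p)
        return round(numerator / denominator)

    print(krejcie_morgan(629))  # approx. 239, in line with the table value of 242 cited above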

3.3 Instrumentation

The research instrument consisted of a structured questionnaire divided into four sections:

  1. Demographic Information: Captured the respondent’s gender and age.

  2. AI Tool Usage: Assessed whether students utilize AI tools and their usage frequency.

  3. Primary Purposes for Using AI Tools: Explored the main purposes for employing AI tools in their research.

  4. Challenges and Limitations: Identified barriers hindering the effective adoption of AI tools.

Before the main data collection, a pilot study was conducted to test the questionnaire’s clarity and effectiveness. Feedback from the pilot study led to refinements in the instrument to ensure comprehensibility and relevance.

The study variables, such as gender, age, AI tools used in research, frequency of AI tool usage, purpose of using AI, and barriers to using AI tools, are measured using checklists, while AI tool effectiveness and knowledge acquisition are measured on a 5-point Likert scale. Challenges or limitations in using AI tools for creative tasks are also measured with checklists.

3.4 Data Collection

Data were collected through an online survey distributed via institutional mailing lists. Ethical considerations, including informed consent and respondent anonymity, were strictly maintained. Data collection occurred from September 10 to September 20, 2024.

3.5 Data Analysis

Statistical analyses were conducted using SPSS. To assess the reliability and validity of the data, factor analysis was performed. The Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of Sphericity confirmed sampling adequacy (KMO > 0.9) and significant correlation among variables (p < 0.05). Three primary dimensions of AI tool usage were examined: Effectiveness, Efficiency, and their influence on Knowledge Creation and Inventive Thinking. Cronbach’s alpha values for each factor exceeded 0.8, indicating high internal consistency.
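
The analysis itself was run in SPSS; purely as an illustration, the same adequacy, extraction, and reliability steps can be sketched in Python with the factor_analyzer and pingouin libraries. The file name responses.csv and the item grouping below are hypothetical placeholders, not the study’s actual data layout.

    # Hedged sketch: KMO, Bartlett's test, two-factor extraction, and Cronbach's alpha.
    # "responses.csv" and the item grouping are hypothetical placeholders.
    import pandas as pd
    import pingouin as pg
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

    items = pd.read_csv("responses.csv")  # Likert-scale items on AI tool use

    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)
    print(f"KMO = {kmo_overall:.3f}, Bartlett chi2 = {chi_square:.3f}, p = {p_value:.3f}")

    fa = FactorAnalyzer(n_factors=2, rotation="varimax")  # two factors, as reported
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=["Factor 1", "Factor 2"])
    print(loadings.round(3))
    print("Eigenvalues:", fa.get_eigenvalues()[0][:2])

    # Internal consistency per factor, assuming the first five items load on factor 1
    alpha_f1, _ = pg.cronbach_alpha(data=items.iloc[:, :5])
    alpha_f2, _ = pg.cronbach_alpha(data=items.iloc[:, 5:])
    print(f"Cronbach's alpha: {alpha_f1:.3f} (factor 1), {alpha_f2:.3f} (factor 2)")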

Descriptive and inferential statistical techniques were employed to analyse the data. Independent sample t-tests and ANOVA were conducted to evaluate differences in perceptions based on demographic variables such as gender, age, year of Ph.D., and frequency of AI tool usage.
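
As a hedged illustration of these comparisons (not the authors’ SPSS procedure), the snippet below runs an independent-samples t-test by gender and a one-way ANOVA across Ph.D. years on a mean factor score; the file and column names are assumed for the example.

    # Hedged sketch of the group comparisons; "responses_scored.csv" and the
    # column names ("effectiveness", "gender", "year_of_phd") are assumptions.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("responses_scored.csv")

    male = df.loc[df["gender"] == "Male", "effectiveness"]
    female = df.loc[df["gender"] == "Female", "effectiveness"]
    t_stat, p_val = stats.ttest_ind(male, female)  # independent-samples t-test
    print(f"t = {t_stat:.3f}, p = {p_val:.3f}")

    groups = [g["effectiveness"].values for _, g in df.groupby("year_of_phd")]
    f_stat, p_val = stats.f_oneway(*groups)        # one-way ANOVA across Ph.D. years
    print(f"F = {f_stat:.3f}, p = {p_val:.3f}")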

3.6 Data Visualization and Presentation Tools

To effectively present and interpret the quantitative survey results, visual representations of key findings were generated. The data collected from 261 Ph.D. students was first cleaned and organized using Microsoft Excel. For graphical representation, Python programming (version 3.10) was used, specifically for creating bar charts and horizontal bar graphs. Each figure was designed to highlight demographic distributions, AI tool usage patterns, perceived benefits, challenges, and limitations. Distinct colour schemes, percentage labels, and respondent counts were included to enhance clarity and visual appeal.
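
The exact plotting code is not reported; the snippet below is a minimal matplotlib sketch of a labelled horizontal bar chart in the style described, using the top five tool-usage shares reported in Section 4.2.

    # Minimal sketch of a labelled horizontal bar chart (in the style of Figures 3-6).
    # Values are the top usage shares reported in Section 4.2.
    import matplotlib.pyplot as plt

    tools = ["Plagiarism detection", "LLMs (e.g. ChatGPT)", "Paraphrasing tools",
             "Grammar/style checkers", "AI-enhanced research databases"]
    pct = [62.1, 59.4, 56.3, 54.4, 43.7]

    fig, ax = plt.subplots(figsize=(8, 4))
    bars = ax.barh(tools, pct, color="steelblue")
    ax.invert_yaxis()  # most-used tool on top
    ax.set_xlabel("Respondents using the tool (%)")
    ax.set_title("Types of AI tools utilized in research")
    for bar, value in zip(bars, pct):
        ax.text(value + 1, bar.get_y() + bar.get_height() / 2,
                f"{value}%", va="center")  # percentage label at the end of each bar
    plt.tight_layout()
    plt.savefig("ai_tool_usage.png", dpi=300)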

4 Results

4.1 Demographic Information

Figure 1 presents the demographic profile of the 261 Ph.D. student respondents. In terms of gender distribution, the sample was nearly balanced, with 51% identifying as male (n = 133) and 49% as female (n = 128). The majority of participants (61.3%) were aged 27 and above, while 31.4% were between 25 and 26 years, and only 7.3% were in the 23–24 age range. Regarding the year of Ph.D. enrolment, nearly half of the respondents (48.3%) were in their first year, followed by 23.8% in the third year. The remaining participants were in their second year (11.5%), fourth year (10.3%), or fifth year and beyond (6.1%). Concerning the frequency of AI tool usage in research, a notable portion (41.4%) reported using AI tools occasionally, while 28% used them daily and 24.9% weekly. Only 5.7% reported monthly use. These figures reflect a high level of engagement with AI tools among early-career researchers, particularly those in the initial phases of their doctoral journey.

Figure 1: Demographic information.

Figure 2 illustrates the overall adoption of AI tools among the surveyed Ph.D. students. An overwhelming majority (91.2%, n = 238) reported using AI tools in their research activities, while only 8.8% (n = 23) indicated that they did not use such tools. This high adoption rate underscores the growing reliance on AI technologies in academic research and suggests that these tools have become integral to the research workflows of doctoral scholars at BBAU.

Figure 2: AI tool usage in research.

4.2 Types of AI Tools Utilized in Research

The data from Figure 3 illustrate the range and frequency of AI tools adopted by Ph.D. students at BBAU in their research workflows. The most widely used tools are those that support academic writing and ensure content originality. Plagiarism detection tools (62.1%) top the list, indicating that students prioritize academic integrity and compliance with originality standards. This is followed closely by LLMs like ChatGPT and GPT-4 (59.4%), and paraphrasing tools (56.3%), which suggests that students are leveraging generative AI to improve the clarity, structure, and novelty of their writing.

Figure 3: Types of AI tools utilized in research.

Tools that assist in grammar and style correction, such as Grammarly and ProWritingAid, are used by 54.4% of students, reinforcing the focus on enhancing the linguistic quality of academic texts. A significant portion (43.7%) also uses AI-enhanced research databases like Semantic Scholar and Google Scholar, indicating the growing reliance on AI for more efficient literature discovery and review processes.

Mid-tier adoption is observed for tools that support summarization (41%), virtual assistance (31%), and automated translation (29.1%), reflecting their utility in content comprehension, multitasking, and language support, particularly for non-native English speakers. However, more technical and specialized AI tools, such as those for predictive analytics (18.8%), data analysis frameworks (18%), and image/speech recognition (15.7% and 13%, respectively), show relatively lower usage, likely due to limited relevance for students in non-STEM disciplines or lack of training.

Only a small percentage of students reported using code generation tools (11.9%), machine learning frameworks (8.4%), NLP tools (6.9%), and sentiment analysis systems (3.1%), suggesting that more advanced or discipline-specific AI tools are still on the periphery of doctoral research in this context. Notably, 8.8% of respondents indicated no use of AI tools in their research.

The results demonstrate that the most frequently used AI tools among Ph.D. scholars are those that directly support writing, reviewing, and ensuring the originality of academic content. More technical AI applications show lower adoption, pointing to a potential gap in training or applicability across disciplines. These patterns underscore the need for broader exposure and skill development to enable more comprehensive use of AI in research.

4.3 Primary Purposes for Using AI Tools in Research

Figure 4 presents a detailed breakdown of the specific research tasks for which Ph.D. students at BBAU utilize AI tools. The most prominent purpose is research paper writing and editing, reported by 59.8% of respondents. This reflects students’ reliance on AI tools like Grammarly and language models to enhance grammar, coherence, and overall academic writing quality, an observation consistent with prior studies (Almassaad et al., 2024; Tokdemir Demirel, 2024).

Figure 4: Primary purposes for using AI tools in research.

Closely following is the literature review and synthesis (58.6%), suggesting that AI-powered platforms such as Semantic Scholar and Elicit are instrumental in helping students search, filter, and summarize relevant academic sources. These tools likely reduce the cognitive and time burden typically associated with comprehensive literature reviews (Granjeiro et al., 2025).

A substantial portion of students also reported using AI tools for data analysis and visualization (38.3%) and statistical analysis (34.1%), indicating that a significant number are incorporating tools like Tableau, SPSS, or R in handling research data. This shows a positive trend toward data-driven research, though the figures also imply that many students may still lack training or confidence in using AI for quantitative analysis.

Lower adoption, at roughly one-quarter or less, was seen for tasks such as knowledge extraction and discovery (26.1%), hypothesis generation and testing (17.6%), and data collection and management (16.1%). These uses indicate the gradual integration of AI into more advanced and exploratory research phases, where tools assist with text mining, predictive modelling, and data scraping.

Lower usage was reported for specialized tasks such as survey design and analysis (15.3%), experimental design (12.6%), and ethics and compliance monitoring (12.3%). This may be due to limited awareness of such AI capabilities or their lesser relevance to many students’ research domains. Similarly, tasks involving model development and training (10.3%), simulation and modelling (6.9%), and data cleaning and reprocessing (6.5%) received lower mentions, which could reflect their predominance in technical fields like computer science or engineering.

Only 5% of students reported using AI for collaboration and project management, and 5.4% indicated no current application, suggesting untapped potential for broader AI integration into research workflows beyond writing and analysis.

The data illustrate that AI tools are predominantly used to support core academic tasks such as writing and literature synthesis, with moderate use in data handling and limited use in design, modelling, and project management. These findings point to a need for targeted training to expand AI’s use in more complex and strategic aspects of research across disciplines.

4.4 Challenges Hindering AI Tool Adoption in Research

Figure 5 outlines the key barriers perceived by Ph.D. students at BBAU that hinder the effective adoption and use of AI tools in academic research. Among the many challenges, lack of access to AI tools (54%) is the most frequently cited issue, indicating that many students face obstacles related to subscription-based software, institutional licensing gaps, and limited tool availability. This is followed by lack of knowledge or skills (50.6%), underlining a significant training gap. This finding is in line with the published literature (Chiu, 2025; Jackman et al., 2024), which highlights the role of digital literacy in AI adoption.

Figure 5: Challenges hindering AI tool adoption in research.

Concerns about the reliability and accuracy of AI tools (47.5%) are another prominent issue: students worry about the accuracy of content generated by AI tools, reflecting the trust issues reported by Han et al. (2025) and Al-Bukhrani et al. (2025). Cost is another barrier, cited by 45.2% of respondents; licensing fees for premium tools like Turnitin, Grammarly Premium, and advanced AI-based analytics software are high, particularly for lower-income students. Data privacy and security concerns (44.1%) expose challenges around how personal and sensitive research data are handled by AI platforms, especially cloud-based services. Ethical concerns are cited by 42.5% of students, signifying concern about attribution and originality. These findings support the legal and academic discussion on authorship, plagiarism, and transparency in AI-assisted writing (Gaffar & Albarashdi, 2025; Glenster et al., 2025).

Limited support or documentation (23.4%) and integration challenges (15.3%) are less frequently reported but still notable. These findings indicate that students lack guidance. Only 8.4% of respondents rated resistance to change, signifying reluctance and attachment to traditional methods. These outcomes indicate that while students use AI in research, they also face challenges such as access, cost, skills, training, and ethical or trust-related issues.

4.5 Factor Analysis of AI Tools Used in Research

Table 1 shows the KMO and Bartlett’s test, assessing sampling adequacy and sphericity. The KMO measure is 0.916, indicating the sample is suitable for factor analysis. Bartlett’s Test of Sphericity yields a chi-square value of 1558.963, confirming adequate correlations among variables for factor analysis.

Table 1

KMO and Bartlett’s test

KMO measure of sampling adequacy 0.916
Bartlett’s Test of Sphericity Approx. Chi-square 1558.963
df 45
Sig. 0.000

Table 2 details the factor analysis of AI tools used in research. It identifies two factors: effectiveness and efficiency. The effectiveness factor, with an eigenvalue of 5.829, accounts for 58.295% of the variance and has a high Cronbach’s alpha of 0.889, indicating strong internal consistency. Key items loading on this factor include “AI tools have significantly enhanced the accuracy of my research findings” (0.871) and “AI tools have helped me uncover new insights and perspectives I might have otherwise missed” (0.791). The efficiency factor has an eigenvalue of 0.997, explaining 9.970% of the variance, with a Cronbach’s alpha of 0.864, reflecting its reliability. Significant loadings in this category include “AI tools have made my research more innovative and original” (0.835) and “AI tools have helped me stay up-to-date with my field’s latest developments” (0.820). The analysis underscores that AI tools used in research enhance both effectiveness and efficiency.

Table 2

Factor analysis of AI tools used in research

Factors Factor loading Eigen values % Of variance Cronbach’s alpha
Effectiveness 5.829 58.295 0.889
AI tools have significantly enhanced the accuracy of my research findings 0.871
AI tools have helped me uncover new insights and perspectives I might have otherwise missed 0.791
AI tools have improved the efficiency of my data analysis and visualization 0.783
AI tools have made it easier for me to identify and correct errors in my research 0.698
AI tools have helped me to strengthen the theoretical foundation of my research 0.633
Efficiency 0.997 9.970 0.864
AI tools have made my research more innovative and original 0.835
AI tools have helped me stay up-to-date with my field’s latest developments 0.820
AI tools have helped me to collaborate more effectively with other researchers 0.726
I am highly satisfied with the effectiveness of AI tools in supporting my research 0.679
AI tools have improved the quality of my writing and communication 0.573

Table 3 presents the independent sample t-test results for gender and AI tools used in research. The findings indicate that neither “effectiveness” (t = 0.159, p = 0.874) nor “efficiency” (t = 0.565, p = 0.912) shows a significant difference, indicating that gender does not influence the use of AI tools for research.

Table 3

Differences in AI tools used in research based on gender

AI tools used in research Male Female t-value p-value
Effectiveness 3.6256 3.6109 0.159 0.874
Efficiency 3.4571 3.4469 0.565 0.912

Table 4 illustrates the differences in means regarding the utilization of AI tools in research. The analysis indicates that “effectiveness” shows a statistically significant difference in relation to the duration of Ph.D. studies (F = 3.428, p = 0.009). In contrast, “effectiveness” does not demonstrate significant differences when correlated with age (F = 0.515, p = 0.598) or the frequency of AI tool usage in research (F = 1.932, p = 0.125). Likewise, “efficiency” does not reveal significant differences concerning age (F = 0.894, p = 0.410), the duration of Ph.D. studies (F = 1.275, p = 0.280), or the frequency of AI tool usage in research (F = 1.983, p = 0.117). The findings indicate that the perceived research effectiveness of AI tools varies among research scholars according to the number of years they have been enrolled.

Table 4

ANOVA on AI tools used in research

Variable Indicator Mean F-value p-value
Effectiveness
Age 23–24 3.6105 0.515 0.598
25–26 3.5512
27 and above 3.6537
Year of Ph.D. 1st year 3.6730 3.428 0.009
2nd year 3.5933
3rd year 3.7161
4th year 3.5333
5th year or more 3.0000
Frequency of AI tool usage in research Daily 3.7863 1.932 0.125
Weekly 3.6031
Monthly 3.4667
Occasionally 3.5352
Efficiency
Age 23–24 3.4632 0.894 0.410
25–26 3.3610
27 and above 3.4975
Year of Ph.D. 1st year 3.6730 1.275 0.280
2nd year 3.5933
3rd year 3.7161
4th year 3.5333
5th year or more 3.0000
Frequency of AI tool usage in research Daily 3.6164 1.983 0.117
Weekly 3.4246
Monthly 3.5333
Occasionally 3.3463

4.6 Factor Analysis of AI Tools in Knowledge Creation

Table 5 shows the KMO and Bartlett’s test results, assessing sampling adequacy and sphericity. The KMO measure is 0.944, indicating the sample is suitable for factor analysis. Bartlett’s Test of Sphericity yields a chi-square value of 1918.649, confirming adequate correlations among variables for factor analysis.

Table 5

KMO and Bartlett’s test

KMO measure of sampling adequacy 0.944
Bartlett’s test of sphericity Approx. chi-square 1918.649
df 45
Sig. 0.000

Table 6 details the factor analysis of AI tools in knowledge creation. It identifies two factors: creativity and innovation. The creativity factor, with an eigenvalue of 6.616, accounts for 66.160% of the variance and has a high Cronbach’s alpha of 0.915, indicating strong internal consistency. Key items loading on this factor include “AI tools help to understand complex concepts or theories more easily” (0.843). The innovation factor has an eigenvalue of 0.681, explaining 6.809% of the variance, with a Cronbach’s alpha of 0.878, reflecting its reliability. Significant loadings in this category include “AI tools motivate to engage in lifelong learning and continue to develop knowledge and skills” (0.888). The analysis underscores that AI tools in knowledge creation enhance creativity and innovation during research.

Table 6

Factor analysis of AI tools in knowledge creation

Factors Factor loading Eigen values % of variance Cronbach’s alpha
Creativity 6.616 66.160 0.915
AI tools help to understand complex concepts or theories more easily 0.843
AI tools help to connect different ideas or concepts in research 0.788
AI tools help to synthesize information from various sources more effectively 0.733
AI tools help to apply knowledge to solve complex problems more effectively 0.701
AI tools stimulate critical thinking skills and help to evaluate information more critically 0.688
AI tools help to explore a wider range of research topics or perspectives 0.567
Innovation 0.681 6.809 0.878
AI tools motivate to engage in lifelong learning and continue to develop knowledge and skills 0.888
AI tools help to retain information better and recall it more easily 0.760
AI tools help to delve deeper into specific research areas 0.683
AI tools help to conduct research more efficiently by providing quick access to relevant information 0.603

Table 7 displays the results of the independent sample t-test examining the relationship between gender and the use of AI tools in knowledge creation. The findings reveal that there is no significant difference in “creativity” (t = −0.566, p = 0.572) and “innovation” (t = −0.179, p = 0.858). This suggests that gender does not influence the utilization of AI tools used in knowledge creation.

Table 7

Differences in AI tools in knowledge creation based on gender

AI tools on knowledge creation Male Female t-value p-value
Creativity 3.5038 3.5573 −0.566 0.572
Innovation 3.4906 3.5078 −0.179 0.858

Table 8 illustrates the results concerning the differences in means related to the utilization of AI tools in the process of knowledge creation. The analysis indicates that “creativity” exhibits a statistically significant difference in relation to the duration of Ph.D. studies (F = 2.418, p = 0.049). In contrast, “creativity” does not show significant differences when correlated with age (F = 0.729, p = 0.483) or the frequency of AI tool usage in research (F = 1.583, p = 0.194). Likewise, “innovation” does not demonstrate significant differences with respect to age (F = 0.049, p = 0.952), the duration of Ph.D. studies (F = 2.034, p = 0.090), or the frequency of AI tool usage in research (F = 2.044, p = 0.108). The findings indicate that there is a variation in the utilization of AI tools for creativity among research scholars, which correlates with the number of years they have been enrolled.

Table 8

ANOVA on AI tools in knowledge creation

Variable Indicator Mean F-value p-value
Creativity
Age 23–24 3.4825 0.729 0.483
25–26 3.4533
27 and above 3.5750
Year of Ph.D. 1st year 3.5820 2.418 0.049
2nd year 3.6111
3rd year 3.5914
4th year 3.3457
5th year or more 3.0417
Frequency of AI tool usage in research Daily 3.6849 1.583 0.194
Weekly 3.5205
Monthly 3.5000
Occasionally 3.4352
Innovation
Age 23–24 3.4737 0.049 0.952
25–26 3.4817
27 and above 3.5109
Year of Ph.D. 1st year 3.5675 2.034 0.090
2nd year 3.4417
3rd year 3.5524
4th year 3.4074
5th year or more 3.0156
Frequency of AI tool usage in research Daily 3.6747 2.044 0.108
Weekly 3.5000
Monthly 3.4500
Occasionally 3.3866

4.7 Factor Analysis of AI Tools in Inventive Thinking

Table 9 shows the KMO and Bartlett’s test results, assessing sampling adequacy and sphericity. The KMO measure is 0.949, indicating the sample is suitable for factor analysis. Bartlett’s test of sphericity yields a chi-square value of 2272.820, confirming adequate correlations among variables for factor analysis.

Table 9

KMO and Bartlett’s test

KMO measure of sampling adequacy 0.949
Bartlett’s test of Sphericity Approx. Chi-Square 2272.820
df 45
Sig. 0.000

Table 10 details the factor analysis of AI tools in inventive thinking. It identifies two factors: creative thinking and novelty. The creative thinking factor, with an eigenvalue of 7.095, accounts for 70.949% of the variance and has a high Cronbach’s alpha of 0.930, indicating strong internal consistency. Key items loading on this factor include “AI tools have encouraged me to take more risks and experiment with new approaches” (0.870). The novelty factor has an eigenvalue of 0.551, explaining 5.508% of the variance, with a Cronbach’s alpha of 0.910, reflecting its reliability. Significant loadings in this category include “AI tools have stimulated my imagination and encouraged me to think creatively” (0.841). Overall, the analysis underscores that AI tools in inventive thinking enhance creative thinking and novelty during research.

Table 10

Factor analysis of AI tools in inventive thinking

Factors Factor loading Eigen values % of variance Cronbach’s alpha
Creative thinking 7.095 70.949 0.930
AI tools have encouraged me to take more risks and experiment with new approaches 0.870
AI tools have improved my ability to think critically and evaluate different options 0.784
AI tools have had a positive impact on my creative thinking and problem-solving abilities 0.726
AI tools have helped me to become more innovative in my research 0.720
AI tools have helped me to approach problems from different perspectives 0.621
AI tools have helped me to identify new patterns or connections that I might have otherwise missed 0.590
Novelty 0.551 5.508 0.910
AI tools have stimulated my imagination and encouraged me to think creatively 0.841
AI tools have helped me to develop a more open and flexible mindset 0.836
AI tools have made it easier for me to generate novel and original ideas 0.763
AI tools have inspired me to think outside the box and explore unconventional solutions 0.594

Table 11 presents the outcomes of the independent sample t-test that investigates the correlation between gender and the application of AI tools in creative thinking. The results indicate that there is no statistically significant difference in “creative thinking” (t = −0.140, p = 0.889) and “novelty” (t = 0.401, p = 0.689). This implies that gender does not affect the use of AI tools in the context of inventive thinking.

Table 11

Differences in AI Tools in inventive thinking based on gender

AI Tools on inventive thinking Male Female t-value p-value
Creative thinking 3.4524 3.4661 −0.140 0.889
Novelty 3.4135 3.3730 0.401 0.689

Table 12 illustrates the results concerning the differences in means related to the utilization of AI tools in the process of inventive thinking. The analysis indicates that “creative thinking” exhibits a statistically significant difference in relation to the duration of Ph.D. studies (F = 0.4161, p = 0.003). In contrast, “creative thinking” does not show significant differences when correlated with age (F = 0.389, p = 0.678) or the frequency of AI tool usage in research (F = 0.454, p = 0.714). Likewise, “novelty” does not demonstrate significant differences with respect to age (F = 0.174, p = 0.840), the duration of Ph.D. studies (F = 1.897, p = 0.111), or the frequency of AI tool usage in research (F = 0.621, p = 0.602). The findings indicate that there is a variation in the utilization of AI tools for creative thinking among research scholars, which correlates with the number of years they have been enrolled.

Table 12

ANOVA on AI tools in inventive thinking

Variable Indicator Mean F-value p-value
Creative thinking
Age 23–24 3.3070 0.389 0.678
25–26 3.4837
27 and above 3.4646
Year of Ph.D. 1st year 3.5556 0.4161 0.003
2nd year 3.4111
3rd year 3.5672
4th year 3.1728
5th year or more 2.8542
Frequency of AI tool usage in research Daily 3.5502 0.454 0.714
Weekly 3.4385
Monthly 3.4000
Occasionally 3.4182
Novelty
Age 23–24 3.3421 0.174 0.840
25–26 3.3598
27 and above 3.4172
Year of Ph.D. 1st year 3.4187 1.897 0.111
2nd year 3.4583
3rd year 3.4960
4th year 3.2500
5th year or more 2.9219
Frequency of AI tool usage in research Daily 3.4418 0.621 0.602
Weekly 3.3308
Monthly 3.6167
Occasionally 3.3681

4.8 Challenges and Limitations in Using AI Tools for Creative Tasks

Figure 6 presents critical insights into the specific challenges Ph.D. students at BBAU face when using AI tools for creative research tasks such as idea generation, academic writing, and conceptual development. The most notable concern is ethical uncertainty (60.2%), which encompasses concerns about authorship attribution, plagiarism, and the misrepresentation of AI-generated content as original work. The findings support the warnings of Glenster et al. (2025) and Gaffar and Albarashdi (2025) on originality and integrity. Difficulty in generating unique or original content using AI tools is reported by 54.8% of students, indicating that AI outputs often lack novelty or creativity (Kostas et al., 2025).

Figure 6: Challenges and limitations in using AI tools for creative tasks.

Overreliance on AI-generated ideas is reported by 43.3% of respondents. This issue underscores the potential for reduced personal creativity and diminished development of independent research skills and academic identity (Pham, 2025; Susha et al., 2024). Lack of contextual understanding (40.6%) is another significant limitation, which can affect the depth and relevance of AI-assisted writing in humanities and social science research, where context sensitivity is important. Intellectual property concerns (38.3%) signify uncertainty over ownership of AI-generated text, supporting the debates on the copyright status of non-human-authored content (Kretschmer et al., 2024; Zhuk, 2024).

Quality inconsistency (37.5%) and the limited ability to capture personal writing style (36.8%) are further challenges, indicating students’ dissatisfaction with the coherence, tone, and alignment of AI outputs. Technical errors (31.8%) and workflow integration issues (29.5%) suggest that AI tools may produce factually inaccurate results or fit poorly into existing research processes. Only 10.3% of students reported that these challenges were not applicable to them. AI tools are widely used to enhance research creativity, yet challenges remain, such as ethical ambiguities, originality concerns, limited contextual awareness, and stylistic rigidity. These challenges and limitations indicate the importance of orientation, academic guidelines, and user awareness to guarantee that AI supports, rather than substitutes for, human creativity in scholarly research.

5 Discussion

This study examines the adoption of AI in academic writing and research among Ph.D. students at BBAU.

5.1 AI Tool Adoption and Usage Purposes

The study outcomes indicated a high rate of AI tool adoption, with 91.2% of students reporting usage in research, reflecting the growing integration of AI in education (Golan & Azoulay, 2023; Khalifa & Albadawy, 2024). Plagiarism detection software (62.1%), LLMs (59.4%), and paraphrasing tools (56.3%) were among the most commonly used tools, indicating a strong emphasis on maintaining academic integrity, refining language, and enhancing clarity in scholarly writing. The main purposes for using AI tools were research content writing and editing (59.8%), literature review and synthesis (58.6%), and data analysis and visualization (38.3%). These outcomes align with the published literature (Li et al., 2025; Tokdemir Demirel, 2024; Yousef et al., 2025), which highlights the role of AI in reducing cognitive load, improving time efficiency, and facilitating data-driven insights.

5.2 Perceived Effectiveness and Efficiency

Factor analysis of AI tools used in research yielded two factors, i.e. effectiveness and efficiency of research work. Students agreed that these tools improved the accuracy of findings, enabled the identification of novel insights, streamlined error detection, and strengthened theoretical underpinnings. The findings support the results of Al-Bukhrani et al. (2025) and Chiu (2025), indicating that AI tools improve writing quality and boost research productivity, especially for non-native English speakers and early-career scholars. Interestingly, effectiveness ratings differed significantly by year of Ph.D. study but not by gender or age. Third-year students exhibited higher perceived gains, probably owing to greater research engagement and maturity. The outcome supports the earlier findings of Dai et al. (2023) and Bista and Bista (2025), who found that the research stage shapes the depth and outcome of AI tool engagement.

5.3 AI’s Role in Knowledge Acquisition, Creativity, and Innovation

Factor analysis of AI tools in knowledge creation identified two factors: creativity and innovation. The findings indicate strong associations between AI usage and students’ capability to comprehend complex theories, connect ideas across disciplines, and process diverse information sources. These findings support the view that AI tools are not only support mechanisms but catalysts for intellectual engagement and deeper learning (Almassaad et al., 2024; Granjeiro et al., 2025). AI tools enhance creativity, support critical evaluation, and help generate new ideas; tools like ChatGPT and Elicit promote originality and innovation (Gayed, Carlon, Oriola, & Cross, 2022; Pham, 2025). At the same time, perceived creativity and innovation differ considerably based on the students’ year of study. First- and third-year scholars indicated higher gains compared to final-year students, probably because of openness to experimentation and evolving research design needs (Liu et al., 2015; Mbatha, 2024).

5.4 Barriers and Ethical Considerations

Although the application of AI tools in research brings many benefits, several obstacles hamper their adoption. The main barriers include lack of access (54%), limited digital literacy (50.6%), concerns about reliability (47.5%), cost (45.2%), and ethical concerns (42.5%). These results echo earlier findings that identified institutional support, training, and ethical clarity as essential enablers of effective AI use (Jackman et al., 2024; Klimova et al., 2023; Smerdon, 2024). Barriers concerning creative tasks were also notable, including ethical dilemmas (60.2%), difficulty generating original content (54.8%), and concerns over intellectual property (38.3%), indicating apprehension about authorship, plagiarism, and diminished ownership. These concerns align with previous calls for reforms in copyright law and institutional guidance to delineate boundaries between human-authored and AI-generated work (Gaffar and Albarashdi, 2025; Glenster et al., 2025). Notably, students’ reluctance to use AI tools stemmed from a lack of policy, guidance, and training, supporting Al-Bukhrani et al. (2025), who found that favourable attitudes and social norms are not enough to ensure adoption in the absence of structural support and ethical standards.

5.5 Theoretical Implications

The outcomes of this study offer meaningful insights into technology adoption. The TAM (Davis, 1989) factors of perceived usefulness and perceived ease of use are reflected in this study through variables such as ethical perception, data privacy, and content accuracy (Zhang et al., 2025). Likewise, the IDT (Rogers, 1962) factors of compatibility and trialability were apparent in early-stage researchers’ experiences with various AI tools. At the same time, the UTAUT factors of performance expectancy and facilitating conditions (Venkatesh et al., 2003) seemed relevant but incomplete in capturing students’ experiences with AI. External factors such as institutional AI guidelines, subscription models, and pedagogical instruction emerged as critical influences (Gutiérrez-Leefmans et al., 2025).

6 Conclusion

This study investigates the adoption and effectiveness of AI in academic writing and research among Ph.D. students at BBAU. The outcomes underscore that AI tools are extensively used for research purposes: scholars used ChatGPT, Grammarly, Turnitin, and Elicit for writing, literature review, language improvement, and plagiarism checking. The findings also confirmed that AI tools help mid-level Ph.D. scholars generate ideas.

Despite the many advantages AI tools offer, several issues need to be addressed, including the absence of orientation, ethical guidelines, and clarity on content originality and authorship. These challenges indicate the need for institutional training and guidelines that promote equitable access, ethical use, and AI literacy; without them, scholars may over-rely on AI tools, diminishing creativity and ultimately compromising academic integrity.

The findings are drawn from a single Indian university; thus, their generalizability must be approached cautiously. Reliance on self-reported data and quantitative analysis may not capture the depth of individual experience. Mixed-methods and qualitative studies are recommended to explore these experiences in greater depth. Moreover, comparative studies across universities and longitudinal studies could examine long-term impacts on academic performance, writing quality, and research innovation.

  1. Funding information: The authors state no funding involved.

  2. Author contributions: Somipam R. Shimray – methodology, writing – original draft, writing – review & editing, and formal analysis; Subaveerapandiyan A – conceptualization, writing – original draft, writing – review & editing, and validation.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: The data supporting this study are available from the corresponding author upon reasonable request.

References

Al-Bukhrani, M. A., Alrefaee, Y. M. H., & Tawfik, M. (2025). Adoption of AI writing tools among academic researchers: A theory of reasoned action approach. PLOS ONE, 20(1), e0313837. doi: 10.1371/journal.pone.0313837.

Almaiah, M. A., Alfaisal, R., Salloum, S. A., Hajjej, F., Shishakly, R., Lutfi, A., … Al-Maroof, R. S. (2022). Measuring institutions’ adoption of artificial intelligence applications in online learning environments: Integrating the innovation diffusion theory with technology adoption rate. Electronics, 11(20), 3291. doi: 10.3390/electronics11203291.

Almassaad, A., Alajlan, H., & Alebaikan, R. (2024). Student perceptions of generative artificial intelligence: Investigating utilization, benefits, and challenges in higher education. Systems, 12(10), 385. doi: 10.3390/systems12100385.

Amirzhanov, A., Turan, C., & Makhmutova, A. (2025). Plagiarism types and detection methods: A systematic survey of algorithms in text analysis. Frontiers in Computer Science, 7, 1504725. doi: 10.3389/fcomp.2025.1504725.

Arbab, A. N., Dhuhli, B. A., Krishnan, Y., & Crisostomo, A. S. (2024). Student’s utilization and assistance of AI tools in assessment completion: Perceptions and implications. International Linguistics Research, 7(3), p1. doi: 10.30560/ilr.v7n3p1.

Bista, K., & Bista, R. (2025). Leveraging AI tools in academic writing: Insights from doctoral students on benefits and challenges. American Journal of STEM Education, 6, 32–47. doi: 10.32674/9m8dq081.

Canyakan, S. (2025). Comparative accuracy of AI-based plagiarism detection tools: An enhanced systematic review. Journal of AI, Humanities and New Ethics, 1(1), 5–18.

Casalino, L., Gaieb, Z., Goldsmith, J. A., Hjorth, C. K., Dommer, A. C., Harbison, A. M., … Amaro, R. E. (2020). Beyond shielding: The roles of glycans in the SARS-CoV-2 spike protein. ACS Central Science, 6(10), 1722–1734. doi: 10.1021/acscentsci.0c01056.

Chiu, M. L. (2025). Exploring user awareness and perceived usefulness of generative AI in higher education: The moderating role of trust. Education and Information Technologies, 1–35. doi: 10.1007/s10639-025-13612-7.

Chu, T., Song, Z., & Yang, C. (2024). How to protect copyright data in optimization of large language models? Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17871–17879. doi: 10.1609/aaai.v38i16.29741.

Dai, Y., Lai, S., Lim, C. P., & Liu, A. (2023). ChatGPT and its impact on research supervision: Insights from Australian postgraduate research students. Australasian Journal of Educational Technology, 39(4), 74–88. doi: 10.14742/ajet.8843.

Daungsupawong, H., & Wiwanitkit, V. (2025). Artificial intelligence for academic purpose in clinic surgery: ChatGPT, Turnitin, and false positive. Formosan Journal of Surgery, 58(3), 143–144. doi: 10.1097/FS9.0000000000000177.

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. doi: 10.7861/futurehosp.6-2-94.

Davies, A., Veličković, P., Buesing, L., Blackwell, S., Zheng, D., Tomašev, N., … Kohli, P. (2021). Advancing mathematics by guiding human intuition with AI. Nature, 600(7887), 70–74. doi: 10.1038/s41586-021-04086-x.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319. doi: 10.2307/249008.

Enochson, C. (2025). AI created music – copyright infringement or new creation? (Master Thesis). Lund, Sweden: Lund University. http://lup.lub.lu.se/student-papers/record/9196943.

Foltýnek, T., Dlabolová, D., Anohina-Naumeca, A., Razı, S., Kravjar, J., Kamzola, L., … Weber-Wulff, D. (2020). Testing of support tools for plagiarism detection. International Journal of Educational Technology in Higher Education, 17(1), 46. doi: 10.1186/s41239-020-00192-4.

Gaffar, H., & Albarashdi, S. (2025). Copyright protection for AI-generated works: Exploring originality and ownership in a digital landscape. Asian Journal of International Law, 15(1), 23–46. doi: 10.1017/S2044251323000735.

Gayed, J. M., Carlon, M. K. J., Oriola, A. M., & Cross, J. S. (2022). Exploring an AI-based writing Assistant’s impact on English language learners. Computers and Education: Artificial Intelligence, 3, 100055. doi: 10.1016/j.caeai.2022.100055.

Glenster, A. K., Hampton, L., Neff, G., & Lacy, T. (2025). Policy brief: AI, copyright, and productivity in the creative industries (p. 41). Minderoo Centre for Technology & Democracy. University of Cambridge. https://www.repository.cam.ac.uk/handle/1810/379796.

Golan, G., & Azoulay, M. (2023). Photoinduced currents (nanoamperes) at single crystalline cadmium telluride surfaces analyzed at ambient by scanning tunneling microscopy. 2023 IEEE 33rd International Conference on Microelectronics (MIEL) (pp. 1–4). doi: 10.1109/MIEL58498.2023.10315846.

Granjeiro, J. M., Cury, A. A. D. B., Cury, J. A., Bueno, M., Sousa-Neto, M. D., & Estrela, C. (2025). The future of scientific writing: AI tools, benefits, and ethical implications. Brazilian Dental Journal, 36, e25–6471. doi: 10.1590/0103-644020256471.

Gutiérrez-Leefmans, M., Picazo-Vela, S., & Kareem, O. (2025). Adoption of artificial intelligence in higher education: A diffusion of innovation approach. The TQM Journal. doi: 10.1108/TQM-12-2024-0523.

Han, B., Coghlan, S., Buchanan, G., & McKay, D. (2025). Who is helping whom? Student concerns about AI-teacher collaboration in higher education classrooms. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1–32. doi: 10.1145/3711104.

Jackman, G.-A., Marshall, I. A., & Carrington, T. (2024). Opportunity or threat: Investigating faculty readiness to adopt artificial intelligence in higher education. Caribbean Journal of Education and Development, 1(2), 4–27. doi: 10.46425/cjed1201029055.

Jiang, T., Liu, E., Baig, T., & Li, Q. (2024). Enhancing decision-making in higher education: Exploring the integration of ChatGPT and data visualization tools in data analysis. New Directions for Higher Education, 2024(207), 15–29. doi: 10.1002/he.20510.

Joynt, V., Cooper, J., Bhargava, N., Vu, K., Kwon, O. H., Allen, T. R., … Radaideh, M. I. (2024). A comparative analysis of text-to-image generative AI models in scientific contexts: A case study on nuclear power. Scientific Reports, 14(1), 30377. doi: 10.1038/s41598-024-79705-4.

Khalifa, M., & Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. Computer Methods and Programs in Biomedicine Update, 5, 100145. doi: 10.1016/j.cmpbup.2024.100145.

Klimova, B., Pikhart, M., & Kacetl, J. (2023). Ethical issues of the use of AI-driven mobile apps for education. Frontiers in Public Health, 10, 1118116. doi: 10.3389/fpubh.2022.1118116.

Kolhar, M., & Alameen, A. (2021). University learning with anti-plagiarism systems. Accountability in Research, 28(4), 226–246. doi: 10.1080/08989621.2020.1822171.

Kondoro, A. M. (2025). AI writing assistants in Tanzanian Universities: Adoption trends, challenges, and opportunities. In V. Padmakumar, K. Gero, T. Wambsganss, S. Sterman, T.-H. Huang, D. Zhou, & J. Chung (Eds.), Proceedings of the Fourth Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2025) (pp. 37–46). Association for Computational Linguistics. doi: 10.18653/v1/2025.in2writing-1.4.

Kostas, A., Paraschou, V., Spanos, D., Tzortzoglou, F., & Sofos, A. (2025). AI and ChatGPT in higher education: Greek students’ perceived practices, benefits, and challenges. Education Sciences, 15(5), 605. doi: 10.3390/educsci15050605.

Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3), 607–610. doi: 10.1177/001316447003000308.

Kretschmer, M., Margoni, T., & Oruç, P. (2024). Copyright law and the lifecycle of machine learning models. IIC - International Review of Intellectual Property and Competition Law, 55(1), 110–138. doi: 10.1007/s40319-023-01419-3.

Li, Y., Sadiq, G., Qambar, G., & Zheng, P. (2025). The impact of students’ use of ChatGPT on their research skills: The mediating effects of autonomous motivation, engagement, and self-directed learning. Education and Information Technologies, 30(4), 4185–4216. doi: 10.1007/s10639-024-12981-9.

Li, X., Sigov, A., Ratkin, L., Ivanov, L. A., & Li, L. (2023). Artificial intelligence applications in finance: A survey. Journal of Management Analytics, 10(4), 676–692. doi: 10.1080/23270012.2023.2244503.

Liu, F., Dedehayir, O., & Katzy, B. (2015). Coalition formation during technology adoption. Behaviour & Information Technology, 34(12), 1186–1199. doi: 10.1080/0144929X.2015.1046929.

Lu, H.-P., Cheng, H.-L., Tzou, J.-C., & Chen, C.-S. (2023). Technology roadmap of AI applications in the retail industry. Technological Forecasting and Social Change, 195, 122778. doi: 10.1016/j.techfore.2023.122778.

Mbatha, B. (2024). Diffusion of innovations: How adoption of new technology spreads in society. In D. Ocholla, O. B. Onyancha, & A. O. Adesina (Eds.), Information, knowledge, and technology for teaching and research in Africa (pp. 1–18). Cham: Springer Nature Switzerland. doi: 10.1007/978-3-031-60267-2_1.

Moneus, A. M., & Sahari, Y. (2024). Artificial intelligence and human translation: A contrastive study based on legal texts. Heliyon, 10(6), e28106. doi: 10.1016/j.heliyon.2024.e28106.

Mustak, M., Salminen, J., Plé, L., & Wirtz, J. (2021). Artificial intelligence in marketing: Topic modeling, scientometric analysis, and research agenda. Journal of Business Research, 124, 389–404. doi: 10.1016/j.jbusres.2020.10.044.

Oklahoma State University Libraries. (2025). AI in academic research and writing: AI tools for academic research & writing [Education]. https://info.library.okstate.edu/AI/tools.

Pham, N. N. H. (2025). The use of ChatGPT in EFL students as a learning assistant in their writing skills: A literature review. International Journal of AI in Language Education, 2(1), 38–54. doi: 10.54855/ijaile.25213.

Pinzolits, R. (2024). AI in academia: An overview of selected tools and their areas of application. MAP Education and Humanities, 4, 37–50. doi: 10.53880/2744-2373.2023.4.37.

Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22. doi: 10.1186/s41039-017-0062-8.

Pryma, V., Pelivan, O., Teletska, T., Tsobenko, O., & Zagrebelna, N. (2025). AI writing assistants and student competence: A linguistic aspect. Arab World English Journal, 1, 319–329. doi: 10.24093/awej/AI.18.

Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press.

Sajid, M., Sanaullah, M., Fuzail, M., Malik, T. S., & Shuhidan, S. M. (2025). Comparative analysis of text-based plagiarism detection techniques. PLOS ONE, 20(4), e0319551. doi: 10.1371/journal.pone.0319551.

Salhab, R. (2025). The role of artificial intelligence in education among college instructors: Palestine Technical University Kadoorie as a case study. Frontiers in Education, 10, 1560074. doi: 10.3389/feduc.2025.1560074.

Saúde, S., Barros, J. P., & Almeida, I. (2024). Impacts of generative artificial intelligence in higher education: Research trends and students’ perceptions. Social Sciences, 13(8), 410. doi: 10.3390/socsci13080410.

Smerdon, D. (2024). AI in essay-based assessment: Student adoption, usage, and performance. Computers and Education: Artificial Intelligence, 7, 100288. doi: 10.1016/j.caeai.2024.100288.

Sontake, P. (2025). A review on artificial intelligence (AI) tools in research writing. International Journal of Scientific Research and Technology, 2(5), 85–102. doi: 10.5281/zenodo.15331079.

Susha, I., Viberg, O., & Koren, G. (2024). Co-writing an essay with ChatGPT: Experiences and perceptions of students in higher education. Proceedings of 2024 AIS SIGED European Conference on Information Systems Education Research (p. 14). https://aisel.aisnet.org/eciser2024/3.

Taherdoost, H., & Madanchian, M. (2023). Artificial intelligence and sentiment analysis: A review in competitive research. Computers, 12(2), 37. doi: 10.3390/computers12020037.

Teel, Z. (Abbie). (2024). Artificial intelligence’s role in digitally preserving historic archives. Preservation, Digital Technology & Culture, 53(1), 29–33. doi: 10.1515/pdtc-2023-0050.

Texas Tech University Libraries. (2025). Artificial intelligence tools for detection, research and writing [Education]. https://guides.library.ttu.edu/artificialintelligencetools/home.

Tokdemir Demirel, E. (2024). The use and perceptions towards AI tools for academic writing among university students. Innovations in Language Teaching Journal, 1(1), 1–20. doi: 10.53463/innovltej.20240328.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. doi: 10.2307/30036540.

Von Isenburg, M., Oermann, M. H., & Howard, V. (2019). Plagiarism detection software and its appropriate use. Nurse Author & Editor, 29(1), 1–10. doi: 10.1111/j.1750-4910.2019.tb00034.x.

Williams, M. D., Rana, N. P., & Dwivedi, Y. K. (2015). The unified theory of acceptance and use of technology (UTAUT): A literature review. Journal of Enterprise Information Management, 28(3), 443–488. doi: 10.1108/JEIM-09-2014-0088.

Yousef, M., Deeb, S., & Alhashlamon, K. (2025). AI usage among medical students in Palestine: A cross-sectional study and demonstration of AI-assisted research workflows. BMC Medical Education, 25(1), 693. doi: 10.1186/s12909-025-07272-x.

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. doi: 10.1186/s41239-019-0171-0.

Zhang, X., Hu, J., & Zhou, Y. (2025). The role of perceived utility and ethical concerns in the adoption of AI-based data analysis tools: A multi-group structural equation model analysis among academic researchers. Education and Information Technologies, 1–33. doi: 10.1007/s10639-025-13535-3.

Zhao, X. (2023). Leveraging artificial intelligence (AI) technology for English writing: Introducing wordtune as a digital writing assistant for EFL writers. RELC Journal, 54(3), 890–894. doi: 10.1177/00336882221094089.

Zhuk, A. (2024). Navigating the legal landscape of AI copyright: A comparative analysis of EU, US, and Chinese approaches. AI and Ethics, 4(4), 1299–1306. doi: 10.1007/s43681-023-00299-0.

Received: 2025-03-19
Revised: 2025-07-19
Accepted: 2025-08-14
Published Online: 2025-09-13

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.