Abstract
This article examines the transformative impact of artificial intelligence (AI) on Italian journalism, providing an in-depth examination of how AI technologies are being integrated into and reshaping the journalistic landscape. Through a qualitative study based on 10 in-depth interviews with journalists, digital editors, and IT managers from leading Italian media companies, this research investigates the attitudes, expectations, and concerns surrounding AI adoption in newsrooms. The core of the study revolves around the nuanced perceptions and discursive frames that journalism professionals mobilize to describe the incorporation of AI into newsmaking. It is nearly impossible to investigate AI without considering the imaginations, perceptions, and expectations associated with it, especially considering two structural elements – opacity and communicative capacity – that make it particularly susceptible to distortions and uncontrollable interpretative projections (Milne 2021). The centrality of discourse has been highlighted by the neo-institutionalist approach, which views journalism as a discursive institution founded on a complex set of rules, practices, and values that are constructed and legitimized through discourse (Hanitzsch and Vos 2017). From this perspective, discursive legitimation represents both a process of internal organization of rules, practices, and values and a tool to confirm, defend, and claim the institution’s reputation within society. The interviewees offer diverse perspectives on the potential of AI to revolutionize journalistic workflows by introducing efficiency and new capabilities, such as automated content generation and data analysis. However, rather than seeing AI as a replacement for human journalists, most view it as a complementary tool that could enhance the quality of news production and facilitate more in-depth journalism. Despite this recognized potential, the study highlights a prevailing sense of caution among the participants. Some express concerns about the ethical implications of AI in journalism, including issues related to transparency, the risk of job displacement, and the maintenance of professional standards in an increasingly automated environment. The article suggests the need for clear editorial strategies that transparently establish shared guidelines for AI use in newsrooms. It also highlights the necessity for significant technological investment by publishers to ensure that the use of generative AI is not left to the initiative of individual journalists, but is instead embedded in a wider project of rational, efficient, and shared restructuring of workflows. In this regard, it is imperative to invest significantly in journalists’ digital skills within a more comprehensive frame of AI literacy, one mindful of the ethical and democratic implications of these tools.
1 Artificial intelligence and discursive legitimations
This article focuses on the discursive construction that has accompanied the introduction of AI into Italian journalism, grounded in the understanding that all technological innovations, especially the most disruptive ones, open up spaces for discursive and cultural negotiation concerning the operational and normative meanings of the social functions that technology promises, or threatens, to assume (Flichy 1995). The study gathers and analyzes the discourses, awareness, and expectations expressed by a group of journalists from all major Italian media groups as they prepare to incorporate AI into newsroom practices, organizational structures, and workflows. The point of departure is a socio-technical definition of AI. Rather than framing the phenomenon strictly in terms of its functional mechanics, which we know are based on more or less advanced deep learning systems (Mitchell 2019), we draw on the applied definition offered by Charlie Beckett, director of the Polis Journalism AI Project at the London School of Economics and Political Science. He defines AI as “a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks normally requiring human intelligence” (Beckett 2019). According to the 2023 report from the Osservatorio sul Giornalismo Digitale, well before the public release of ChatGPT sparked widespread attention and despite the structural delays in Italy’s digital transformation, Italian journalism had already begun integrating various automation systems. These systems supported both content production and topic selection processes. During the COVID-19 pandemic, the news agency ANSA, in collaboration with Applied XLab, employed Natural Language Generation (NLG) to produce real-time news reports and graphics based on data from Civil Protection services (Pizzi 2023). As early as 2019, Accenture developed the Intelligent Assistant System for “Il Secolo XIX,” which aids journalists through data verification, hyperlink checking, spelling and grammar correction, and automatic content classification. More recently, Deloitte developed a “computer edit journalism” system for ANSA in which generative AI (specifically KGRIAL, a proprietary model) produces multilingual text drafts that journalists can modify or enrich according to editorial judgment and context.
From a purely technological perspective, it is clear that the introduction of ChatGPT, followed by customized GPT models, has represented a potentially disruptive innovation for newsrooms, as it has enabled the rapid and cost-effective creation of new applications, customizable to the specific parameters and needs established by the organizations themselves. The development and training of proprietary Large Language Models (LLMs) is no longer the only avenue for technological innovation. For journalism organizations with limited resources, alternative paths are now theoretically viable, less costly yet equally effective. These include using externally hosted LLMs (such as GPT-4 or Claude) or deploying open-source models internally (such as Mistral-7B), adapted via various personalization techniques; a schematic illustration of these two paths is sketched below. However, the mere existence of a technological possibility does not guarantee its realization. For this to happen, an entire system must be activated, one involving economic models and strategies, ideological representations, organizational cultures, and systems of professional expertise. In a September 2023 interview, ANSA director Luigi Contu referred to the agency’s collaboration with Deloitte, emphasizing that while AI can be an invaluable tool for enhancing, consulting, and analyzing textual archives, it can never replace “the moral and emotional characteristics that define journalism.”[1] The introduction of such a disruptive technology is inevitably accompanied by a variety of discursive investments aimed at preserving ideological cornerstones and reinforcing the social and ethical legitimacy of journalism. The importance of business models, the degree of editorial responsibility in decision-making, and the power dynamics and negotiations within newsrooms are clearly highlighted in the statements of Claudio Silvestri, deputy general secretary of the FNSI (Italian National Press Federation).[2] He warns that the indiscriminate use of AI occurs when a business model aimed at profit maximization prevails, cutting costs and capitalizing on the mass production of content devoid of journalistic value but still profitable within the attention economy of platforms.
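To make the two adoption paths mentioned above more concrete, the following sketch contrasts them in Python. It is purely illustrative and not drawn from any of the newsrooms studied: the hosted path assumes an OpenAI API key and uses the `openai` client library, the local path loads an open-source Mistral model through Hugging Face `transformers`, and the headline-drafting task, prompt, and model names are our own assumptions.

```python
# Illustrative sketch only: two ways a newsroom might obtain an AI-drafted headline.
# The hosted path assumes an OPENAI_API_KEY in the environment; the local path
# assumes the `transformers` library and suitable hardware. Prompt and models are hypothetical.

PROMPT = "Suggest a neutral, ten-word headline for this article:\n\n{article}"

def headline_via_hosted_llm(article: str) -> str:
    """Path 1: call an externally hosted LLM (e.g., a GPT-4-class model) over an API."""
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(article=article)}],
        max_tokens=40,
    )
    return response.choices[0].message.content.strip()

def headline_via_local_model(article: str) -> str:
    """Path 2: run an open-source model (e.g., Mistral-7B) on in-house hardware."""
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",
        device_map="auto",
    )
    output = generator(
        PROMPT.format(article=article),
        max_new_tokens=40,
        return_full_text=False,  # return only the generated continuation, not the prompt
    )
    return output[0]["generated_text"].strip()
```

The trade-off the interviewees allude to is visible here: the hosted path requires almost no infrastructure but sends editorial content to an external provider, while the local path keeps data in-house at the cost of hardware, maintenance, and technical expertise.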
In this initial phase, we found it useful to start with a mapping of the discursive layers accompanying the introduction of AI into newsrooms. We aimed not so much to capture how AI is deployed in the contingent unfolding of work routines, but rather to focus on the more detached reflexivity that emerged from interviews. This framing of the field inevitably entails a non-exhaustive description of the broader landscape, one that is structurally partial, since it focuses on just one tile of the complex mosaic that represents the multifaceted and ever-shifting process of social construction and appropriation of AI in journalism. Despite being aware of this limitation, we believe that a detailed analysis of the discourses through which journalists talk about technology is valuable for at least two reasons.
First, it is practically impossible to investigate AI without considering the imaginations, perceptions, and expectations associated with it, especially in light of two structural features, its opacity and communicative capacity, which make it particularly prone to distortion and interpretive projections that are difficult to control (Milne 2021). Second, the centrality of discourse has recently been highlighted by the neo-institutionalist approach, which views journalism as a discursive institution built upon a complex set of rules, practices, and values constructed and legitimized through discourse (Hanitzsch and Vos 2017). From this perspective, discursive legitimation represents both “an internal organizing process of rules, practices, and values” (Sorrentino and Splendore 2022: 15–16) and a tool for confirming, defending, and asserting the institution’s reputation within society. At stake, then, is the very definition of professional identity, the attribution of relevance or irrelevance, the crafting of a narrative, and the outlining of an ethical horizon. Applying this theoretical framework to the study of automated journalism, Schapals and Porlezza (2020) observed that the integration of algorithmic systems into news production challenges all four discursive frames through which the role of journalism is constructed: normative ideas (what journalism should do), cognitive orientations (what journalists want to do and how this affects normative ideas), professional practice (what they actually do), and the narrated performance (what journalists say they should do). AI enters these discursive arenas as a “boundary object” (Lewis and Usher 2016; Moran and Shaikh 2022), a metaphorical space of negotiation and contestation where actors both inside and outside the field of journalism meet to negotiate not only the operational aspects of newsmaking but also its broader meanings, particularly in relation to its social value and who is authorized to produce it. Drawing on insights from STS (Science and Technology Studies), some journalism scholars have sought to explain these contestations and transformations through the lens of “boundary work” (Gieryn 1983), a process of redefining boundaries through which social actors compete to control definitions, assign or remove specific labels, and thereby assert symbolic control over particular domains of social production. Technological innovations in journalism typically act as “boundary objects”: foreign bodies or intruders that cross the fluid boundaries of the journalistic field, helping it survive in difficult times but simultaneously challenging journalists’ control over the definition and social legitimation of their profession (Tandoc and Oh 2017). Artificial intelligence poses an even greater challenge, insofar as it demonstrates that tasks once considered the exclusive domain of human creativity can now be performed with great skill by exceptionally efficient machines, whose structural opacity crushes any hope for transparency at the outset (Ananny and Crawford 2018). Starting from this theoretical foundation, we conducted in-depth interviews with journalists and other newsroom members variously involved in implementing technological systems to support news production. These interviews explored how journalists perceive AI, along with its promises and risks. 
We took into account both their descriptions of the concrete practices of implementation within their daily routines, and the more abstract representations they invoked when imagining future scenarios.
The study addressed both the ethical implications of such representations and their pragmatic interpretations, as well as the normative projections concerning the technology itself and the journalistic profession.
2 Hybrid journalism: development trajectories and identity negotiations
The introduction of AI into journalism fits within a long-standing process of datafication (Porlezza 2024), which has brought increasing centrality to data, algorithms, and machine learning systems in the newsmaking process. The increasingly tight interweaving of human contribution and machine mediation has led to the widespread and cross-sector adoption of the term “hybrid journalism” (Diakopoulos 2019), even though the ways in which professionals and algorithmic mediation combine to produce news efficiently and effectively remain variable and only partially mapped. Datafication is part of the so-called “algorithmic turn” (Napoli 2014), whereby the pervasive use of metrics and big data has had significant impacts on the structuring of production flows, the selection and placement of topics, and the definition of news format and style (Christin 2020). Loosen (2018) identifies four forms of datafied journalism: data journalism, which consists of a reporting style that draws substantially from large publicly available datasets; algorithmed journalism, resulting from the growing influence algorithms exert over the distribution of journalistic content through selection and ranking operations carried out by platforms; automated journalism, indicating a growing reliance on automated content production via technologies developed and provided by private providers that do not consider themselves editorial organizations; metric-driven journalism, which includes various attempts to meaningfully analyze the digital traces of audiences to optimize every decision involved in the newsmaking process. The increasing pervasiveness of algorithms has made automated journalism the most suitable term to describe the growing reliance on technological tools capable of automating source gathering, content production, distribution, and especially the personalization of both content and distribution modes (Zamith 2019).
In 2014, the American news agency Associated Press announced that more than 3,000 articles that year had been generated by so-called “robots.” By 2016, Forbes, The New York Times, the Los Angeles Times, and ProPublica had also adopted automated production systems in their newsrooms (Graefe 2016). These “robotic reporters” consist of algorithmic processes that convert data into narrative texts with limited or even no human intervention beyond initial programming (Carlson 2015). Their usefulness initially emerged especially in areas requiring the processing of large amounts of data, such as sports, finance, and statistics, which also allow for a schematic and predictable narrative structure. During this early phase, key factors in adopting such tools included the growing availability of large, publicly accessible datasets and publishers’ pressure to cut costs while increasing the quantity of news offered to the public. Also particularly appealing was the ability to create content in multiple languages and from various perspectives, making it customizable according to readers’ preferences and expressed demands. Within this ongoing process of newsmaking automation, the introduction of generative AI tools represents an additional threshold of advancement. The introduction of new technologies in journalism has historically met considerable resistance, often from journalists themselves (Thurman et al. 2017). While it is undeniable that such resistance may have diverse and multiple roots, the literature highlights, among the various logics, the desire to preserve an ideological continuity in the definition of journalism’s social role (Deuze 2005), by emphasizing the distinctiveness of human contribution compared to machines. In a recent study exploring the intersection between automated journalism and the conception of the professional role, Schapals and Porlezza (2020) found that interviews often revealed a clear demarcation between human storytelling and algorithmic storytelling, emphasizing typically human elements such as creativity, the ability to decode emotions, and critical thinking.
Among the ten principles outlined in the Paris Charter on AI and Journalism, signed on November 10, 2023, during the Paris Peace Forum by Reporters Sans Frontières (RSF) and sixteen partner organizations, not only does the primacy of ethics stand out, but also the reaffirmation of the priority of human agency in journalistic decision-making processes. This emphasis aligns with the findings of the JournalismAI Global Survey conducted by Polis, the journalism think tank at the London School of Economics and Political Science. A significant proportion of the journalists interviewed believe that AI has the potential to enhance newsroom roles rather than replace them, by introducing new tasks and functions, and by restructuring workflows, yet without entirely displacing human resources (Beckett and Yaseen 2023). Overall, AI is regarded as a promising opportunity, provided that news organizations do not relinquish their ethical and editorial responsibilities. Unsurprisingly, one of the most widely shared hopes for the future involves substantial investment in AI-related skills. Such investments aim not only to improve overall awareness of the technology’s limitations and opportunities but also to strengthen technical competencies. Building on the premise that AI acts as a “boundary object,” a conceptual tool that prompts discursive negotiation of professional boundaries and roles, we turn our focus to the Italian context: Do Italian journalists feel threatened by AI? Are there clear editorial strategies aimed at leveraging AI to restructure journalistic workflows, practices, and skill sets? What concerns and expectations are shaping perceptions of the ongoing changes and those anticipated in the near future? Is there an ongoing redefinition of professional roles, or rather a discursive negotiation aimed at preserving journalism’s social legitimacy?
The investigation presented here represents a preliminary exploration of this evolving field. While we are fully aware that only a situated inquiry into the actual contexts of negotiation, captured in their contingency and variability, can ultimately offer a comprehensive account of what is unfolding, we argue that mapping these discursive layers may provide interpretive coordinates that pave the way for future, in-depth analyses.
3 Methodology and sample
Consistent with the exploratory nature of this study, a qualitative research methodology was adopted. Ten online video interviews were conducted with journalists, digital editors, and individuals responsible for managing and implementing information systems within newsrooms. All participants are affiliated with major Italian media companies. The sample does not claim to be either exhaustive or representative of the entire Italian journalistic landscape. Rather, the limited number of interviewees reflects a deliberate selection process focused exclusively on senior figures within leading media organizations, based on the assumption that these individuals are most attentive to and engaged with developments in artificial intelligence. Despite the small sample size, the depth and duration of the interviews allowed for the collection of rich and meaningful data.
In agreement with the interviewees, their identities have been kept confidential. Many of the insights shared during the interviews pertained to internal industrial strategies and projects still under development, not yet disclosed to the public. Given the sensitive nature of the information, we provide here only a summary of the key characteristics of the participants:
Code | Role | Company type | Gender | Age
--- | --- | --- | --- | ---
1 | Membership Development | Online news outlet | M | 35
2 | Head of Digital | National media company | M | 59
3 | General Manager | National media company | M | 61
4 | Editor-in-Chief | National daily newspaper | M | 54
5 | Deputy Editor | Online information website | M | 42
6 | Editor-in-Chief | National daily newspaper | M | 54
7 | Journalist | Online news outlet | M | 38
8 | Journalist | Local daily newspaper | M | 45
9 | Deputy Editor | Local daily newspaper | M | 46
10 | Journalist | National daily newspaper | M | 43
The interviews, with an average duration of 1.5 hours, were conducted in the summer and autumn of 2023. The semi-structured interview guide and the related questions focused on three main areas: a reconstruction of the (potential) technologies used within the interviewee’s organization, an assessment of the impact of AI on newsmaking processes, and the ethical and deontological consequences of AI usage in the media and journalistic profession.
The interviews were video-recorded and transcribed. The transcriptions were analyzed using word-processing software to facilitate the qualitative analysis of the text.
Once the texts were collected, they were coded using the “constant comparative method,” which is frequently employed in qualitative research within the framework of grounded theory (Glaser and Strauss 1967). Specifically, after an initial complete reading of the interviews, a manual coding process was used to identify patterns, topics, and relationships within the texts. This coding process was essential for organizing the data and extracting significant shared information through continuous interaction among researchers. By constantly comparing and contrasting the coding processes, the researchers identified the key themes that emerged across the interviews. These will be discussed in the following sections.
4 Technologies in the field
The newsrooms represented in our sample appear to still be in an exploratory phase regarding the potential applications of AI. The interviewees themselves emphasized that, in comparison with the dynamism of the international landscape, the Italian journalistic market is marked by a certain degree of backwardness, caution, and wait-and-see attitudes:
We’ve moved past the hype from a few months ago, when everyone started using ChatGPT. Now we’re just observing. We’re looking around to better understand, maybe waiting for something to happen. (1, M, 35)
I notice there’s a lot of confusion about the term AI. Often, even among professionals, people conflate it with basic data-processing or interpretation software. (7, M, 38)
Despite this commonly shared perception of technological delay, the interviews revealed significant differences among the news organizations analyzed. Many editorial teams have yet to adopt AI technologies and show little interest in doing so in the near future, except through the personal initiatives or curiosity of individual journalists. This lag often results from a cautious stance adopted by some media groups, who remain in a phase of observation, particularly regarding foreign competitors, assessing the costs and benefits of implementing AI through the lens of others’ experiences. In the absence of clear editorial strategies, individual journalists often compensate by exchanging knowledge and insights within their professional communities. These individual efforts are frequently motivated by the desire to identify tools that can accelerate workflows, especially for repetitive and time-consuming tasks:
There are people more attuned to these developments, naturally inclined to experiment. During breaks or over coffee, they share discoveries […] we talk about small things, little tools that help with those classic boring tasks that we can now get done decently. (1, M, 35)
Some media organizations and publishing groups, on the other hand, have begun reflecting on the pros and cons of AI by setting up working groups involving both editorial and marketing departments:
[We’ve set up a working group] to systematically and precisely collect all the needs, not only at the journalistic level but also from the business unit […] but it’s a group that hasn’t yet produced anything concrete. (1, M, 35)
Those who already make systematic use of AI tend to frame it as a continuation of previous text management technologies (translation tools, CMS, SEO suggestions, and archive-related functions), thus considering AI an incremental rather than disruptive innovation in the automation of editorial processes:
We can already think about integrating AI models in the CMS to generate headlines, test which ones perform best, enrich articles with data tables, or retrieve previously published content on the same topic. (3, M, 61)
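To illustrate the kind of CMS integration this interviewee envisions, the sketch below shows how model-generated headline variants could be tested against click data and the best-performing one promoted. It is a hypothetical example of headline A/B testing, not a description of any system used by the organizations in our sample; the headlines, click rates, and epsilon-greedy selection rule are all invented for illustration.

```python
# Hypothetical sketch of CMS-side headline testing: several AI-generated headline
# variants are served, impressions and clicks are logged, and the variant with the
# best click-through rate (CTR) is gradually favored. All names and numbers are invented.
import random
from dataclasses import dataclass

@dataclass
class HeadlineVariant:
    text: str
    impressions: int = 0
    clicks: int = 0

    @property
    def ctr(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

def choose_variant(variants: list[HeadlineVariant], epsilon: float = 0.1) -> HeadlineVariant:
    """Epsilon-greedy selection: mostly serve the current best headline,
    occasionally explore an alternative so every variant keeps accumulating data."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: v.ctr)

# Example: three model-generated variants for the same article.
variants = [HeadlineVariant(t) for t in (
    "Parliament passes budget after all-night session",
    "Budget approved: what changes for taxpayers",
    "All-night debate ends with budget approval",
)]

for _ in range(1000):                     # simulated page views
    chosen = choose_variant(variants)
    chosen.impressions += 1
    chosen.clicks += random.random() < 0.05   # stand-in for a real click signal

best = max(variants, key=lambda v: v.ctr)
print(f"Best-performing headline so far: {best.text!r} (CTR {best.ctr:.2%})")
```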
This incremental and non-disruptive adoption of AI is also influenced by concerns about the potential for drastically altering the editorial product. For instance, one online news platform has begun experimenting with personalized recommendation systems as a precursor to adaptive homepage design. Such an initiative is perceived as particularly impactful, as it involves relinquishing the editorial team’s responsibility in defining the salience and hierarchy of news stories. Delegating this function to an algorithm that reflects user preferences is viewed as a significant risk:
Personalized recommendations are among the less risky options, right? But a homepage redesigned based on the individual user clearly requires deeper reflection, because the homepage – the opening, the sequence of stories – is obviously a fundamental part of a newspaper’s identity and of the newsroom’s editorial work. (1, M, 35)
Only a small number of the news outlets involved in our study are currently employing artificial intelligence, and those that do are primarily adopting it from an industrial perspective that spans multiple dimensions of the media enterprise. This includes both editorial departments, which handle the production and curation of “news” content (such as the use of advanced Text-to-Speech, Speech-to-Text systems, and CMS tools), and marketing divisions, which are increasingly focused on tailoring editorial content to a fragmented and highly profilable audience. It is predominantly the larger media groups that have entered more advanced phases of experimentation. These organizations are implementing tools like chatbots, capable of efficiently retrieving and reworking content from proprietary news databases, and personalization systems that adjust homepage content salience based on the individual preferences of each user. Marketing department heads are often the most interested in implementing such tools, which are activated through subscriptions and paywalls.
Even within the most technologically progressive media companies at the national level, the use of generative AI for the production and publication of news content remains cautious and peripheral. Current applications are generally confined to the automated processing and description of large official datasets, such as local election results or stock market data.
A particularly promising and structurally clearer domain is that of editorial services targeted at professionals. In these cases, generative AI enables the automated generation of informational content in response to specific queries submitted by subscribers. The proprietary nature of the source data is regarded as a key guarantor of reliability: the perceived credibility of the final editorial product stems from the authoritative informational foundations (databases, court rulings, commentaries) on which the generative AI systems are trained to produce new content.
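The architecture described here resembles what is commonly called retrieval-augmented generation: an answer to a subscriber’s query is generated only after relevant documents have been retrieved from the proprietary archive, so that the output stays grounded in authoritative sources. The sketch below illustrates the pattern in a minimal form; the TF-IDF retriever, the three-item sample corpus, and the hosted-model call are our own illustrative assumptions, not a reconstruction of any interviewee’s system.

```python
# Minimal retrieval-augmented generation sketch for a subscriber-facing service.
# The archive, query, and model choice are hypothetical; a production system would
# use a proper vector index over the publisher's proprietary databases.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [
    "Court ruling 123/2023 clarifies notification deadlines in tax disputes.",
    "Commentary: how the 2024 budget law changes VAT obligations for freelancers.",
    "Ruling 456/2022 on remote-work accident coverage under national insurance.",
]

def answer_query(query: str, top_k: int = 2) -> str:
    # 1. Retrieve the archive entries most similar to the query (TF-IDF + cosine similarity).
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(archive)
    query_vector = vectorizer.transform([query])
    similarities = cosine_similarity(query_vector, doc_vectors).ravel()
    top_docs = [archive[i] for i in similarities.argsort()[::-1][:top_k]]

    # 2. Generate an answer grounded only in the retrieved documents.
    prompt = (
        "Answer the subscriber's question using only the sources below.\n"
        "Sources:\n- " + "\n- ".join(top_docs) + f"\n\nQuestion: {query}"
    )
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer_query("What deadlines apply to notifications in tax disputes?"))
```

The point the interviewees stress is visible in the second step: the model is instructed to answer only from retrieved proprietary sources, which is where the perceived reliability of the final editorial product is taken to originate.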
5 Will AI revolutionize the work of journalists?
We asked our interviewees whether AI has the potential to revolutionize the work of information professionals. Two dominant perspectives emerged from their responses. On one hand, some interviewees tend to downplay the transformative potential of AI in journalism, arguing that public discourse is currently overestimating its impact. From this viewpoint, criticism is often directed at the imprecise use of the term “AI” and at the lack of a more pragmatic, non-ideological approach to the concrete possibilities for simplification and increased efficiency that a strategic implementation of such technologies could bring. Automated news production is viewed by some as an opportunity to reduce the reliance on unskilled journalistic labor, which has historically been engaged in the mass production of low-quality content for equally questionable online outlets. Disrupting a market sustained by the exploitation of low-cost editorial labor could pave the way for a renewed focus on high-quality journalism that cannot be easily outsourced to AI:
Certain journalistic approaches, certain sensitivities, will remain the exclusive domain of human journalists for a long time, or will not be easily transferrable to, or replicable by AI software. (1, M, 35)
For others, however, concern predominates regarding the potential expansion of AI applications, particularly in relation to its possible repercussions on employment: AI is perceived as yet another unknown factor within a sector already facing significant challenges and lacking a clear sense of future direction. One of the main fears expressed is the further widening of the digital divide between older and younger generations of journalists, similar to the existing gap between web journalists and those working in television and print media. Our interviewees emphasize that acquiring AI literacy may prove difficult for many journalists, due both to natural resistance to innovation and to endemic issues such as lack of time and insufficient basic technological skills.
Hopes for genuinely disruptive technologies that could reshape the relationship between reader and news content are particularly pronounced among media company executives, who perceive AI as a vehicle for editorial innovation. Conversely, local newsrooms express a preference for incremental innovations, seen as critical assets to ensure survival in a competitive and challenging environment historically marked by inadequate production resources. The aspiration is that, through well-conceived editorial strategies, it will soon be possible to integrate automation tools for managing sources and drafting articles.
6 Ethical issues and impacts on quality
Ethical concerns are present and are mentioned by the interviewees, yet they do not constitute a focal point of the discourse. The publication of news content lacking journalistic processing is described as an already normalized practice, one not regarded as particularly problematic in some areas. The principle of transparency and the accurate disclosure to readers of the use of AI in the production of specific content are considered priorities. With regard to the risk of declining textual quality, many interviewees point out that such deterioration is already a reality, largely due to the widespread practice of copy-pasting from press agency dispatches or press releases. In their view, it is not the technology itself that is responsible for this regressive shift, but rather a broader economic weakening of the industry coupled with a decline in editorial accountability.
The fear of a homogenized news offering, in my opinion, stems from prejudice: the idea that all articles will end up looking the same. But this already happened. There was a colleague of mine who used to write identical crime reports, just changing the victims’ names. (9, M, 46)
Regarding the risk that AI may further undermine journalistic work and its social legitimacy, a widespread pessimism emerges about the current state of the profession. Many believe that AI can hardly worsen a situation already perceived as out of control, due to the precarious working conditions experienced by the majority of journalists:
I think the role of journalists has already been diminished – just consider the fact that some colleagues are paid two euros per article […] They have to write ten, eight pieces just to bring home a salary, to barely make it through the month. (4, M, 54)
A key concern for some is the risk of AI being weaponized by non-journalistic actors to produce disinformation. In this context, AI merely reanimates the long-standing tension between user-generated content (UGC) and professional journalism.
The most worrying aspect is that if the tool escapes control, the consequences could be devastating. Imagine a fabricated article, or images – an app that creates a fake story […] It could destroy a person’s reputation. (9, M, 46)
Also observed with concern is the phenomenon of fraudulent websites that plagiarize articles from other outlets, use AI to reprocess them, and then republish them without proper attribution. This practice, already common in the more ethically compromised segments of digital journalism, could be further exacerbated by such technologies.
Another ethical issue, particularly emphasized by editors-in-chief and newsroom directors, pertains to the unauthorized use of AI tools by employed or freelance journalists. This raises concerns about transparency within newsrooms and about accountability should critical incidents emerge. The underlying fear is that AI use might not only go undisclosed but may also occur without any editorial guidelines or established internal policies:
As with all aspects of work, the use of AI must be regulated at both the legislative and union levels. Thus, in an Italian newsroom, it simply can’t happen that AI is used without an agreement from the editorial team. (4, M, 54)
7 Conclusions
The analysis of our interviews centered on three primary thematic dimensions: (1) the pragmatic dimension, which explores what interviewees concretely do with AI, how they describe the current state of affairs within their specific contexts and in the broader journalistic landscape; (2) the cognitive dimension, concerning how AI is perceived to redefine professional competencies and roles, whether it integrates into established newsmaking practices as a substitute or a complement; (3) the ethical dimension, focusing on which sensitive issues are highlighted in relation to existing journalistic ethics and core values such as transparency, accountability, and responsibility.
With respect to the pragmatic dimension, our interviews reveal that tactical considerations and individual initiative predominantly drive journalists to adopt AI in their daily routines, seeking autonomous solutions to streamline repetitive tasks. Editorial strategies and long-term implementation plans for AI emerge only marginally. Even the most advanced initiatives tend to position machine learning and deep learning solutions as enhancements to existing newsroom technologies or as refinements of pre-existing marketing and reader profiling strategies.
Regarding the cognitive dimension, the interviewees generally focus more on the benefits than the potential risks of AI’s introduction into journalism. They offer varying perspectives on AI’s capacity to revolutionize journalistic workflows by introducing efficiency and new capabilities such as automated content generation and data analysis. Rather than perceiving AI as a threat to human journalists, the majority view it as a complementary tool, freeing up human resources from routine work to enable more investigative and in-depth journalism.
As for the ethical dimension, the research reveals a prevailing sense of caution among participants, though not outright concern. Many advocate for greater attention to the ethical implications of AI in journalism, including issues related to transparency, accuracy, potential job losses, journalistic integrity, and the maintenance of professional standards in an increasingly automated environment. Despite the anxieties within the industry about AI potentially displacing journalists (Iannuzzi 2024), our interviewees generally adopt a more optimistic stance, suggesting that AI can represent a meaningful opportunity. However, this optimism hinges on the condition that news organizations maintain their ethical and editorial responsibilities, remaining transparent with readers and capable of managing sources and automated content generation processes. Surprisingly, our interviewees appear to be less concerned about the impact of AI than what emerges from the reflections of professional associations and regulatory bodies (ibidem).
However, this perspective may also be linked to one of the main limitations of our research: our sample is composed predominantly of mid-to-high profile journalists who are already professionally established, rather than precariously employed or freelance workers. It would be particularly interesting to examine how different perceptions might emerge among young, precarious, or freelance journalists, those currently entering the profession, regarding the impact of AI on the future of journalism, and thus on their own professional futures, which are yet to be fully consolidated.
Overall, the results suggest a pressing need for clear editorial strategies and shared, transparent guidelines regarding the use of AI in newsrooms. Equally critical is the necessity for significant technological investment by publishers to ensure that the deployment of generative AI is not left to the discretion of individual journalists, but rather incorporated into a broader, well-structured, and collaboratively developed reorganization of newsmaking processes. Notably, among journalists’ primary hopes for the future is a substantial investment in AI-related competencies, not only to strengthen basic technical literacy but also to foster deeper awareness of the social, cultural, and political dimensions of these transformative technologies.
References
Ananny, Mike & Kate Crawford. 2018. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3). 973–989. https://doi.org/10.1177/1461444816676645.
Beckett, Charlie. 2019. New powers, new responsibilities. A global survey of journalism and artificial intelligence. Polis, London School of Economics and Political Science in collaboration with Google News Initiative. https://www.journalismai.info/research (accessed 5 July 2024).
Beckett, Charlie & Mira Yaseen. 2023. Generating change. A global survey of what news organisations are doing with artificial intelligence. Polis, London School of Economics and Political Science in collaboration with Google News Initiative. https://www.journalismai.info/research (accessed 5 July 2024).
Carlson, Matt. 2015. The robotic reporter: Automated journalism and the redefinition of labor, compositional forms, and journalistic authority. Digital Journalism 3(3). 416–431. https://doi.org/10.1080/21670811.2014.976412.
Christin, Angèle. 2020. Metrics at work: Journalism and the contested meaning of algorithms. Princeton: Princeton University Press. https://doi.org/10.23943/princeton/9780691175232.001.0001.
Deuze, Mark. 2005. What is journalism? Professional identity and ideology of journalists reconsidered. Journalism 6(4). 442–464. https://doi.org/10.1177/1464884905056815.
Diakopoulos, Nicholas. 2019. Automating the news. Cambridge: Harvard University Press. https://doi.org/10.4159/9780674239302.
Flichy, Patrice. 1995. L’innovation technique [Technical innovation]. Paris: Éditions La Découverte.
Gieryn, Thomas F. 1983. Boundary-work and the demarcation of science from non-science: Strains and interests in professional ideologies of scientists. American Sociological Review 48(6). 781–795. https://doi.org/10.2307/2095325.
Glaser, Barney & Anselm Strauss. 1967. Discovery of grounded theory: Strategies for qualitative research. Mill Valley, CA: Sociology Press.
Graefe, Andreas. 2016. Guide to automated journalism. New York: Tow Center for Digital Journalism.
Hanitzsch, Thomas & Tim P. Vos. 2017. Journalistic roles and the struggle over institutional identity: The discursive constitution of journalism. Communication Theory 27. 115–135. https://doi.org/10.1111/comt.12112.
Iannuzzi, Andrea. 2024. Intelligenza artificiale nelle redazioni italiane. Report 2024 [Artificial Intelligence in Italian Newsrooms. Report 2024]. Osservatorio sul giornalismo digitale. https://www.odg.it/osservatorioreport-2024 (accessed 5 July 2024).
Lewis, Seth C. & Nikki Usher. 2016. Trading zones, boundary objects, and the pursuit of news innovation: A case study of journalists and programmers. Convergence 22(5). 543–560. https://doi.org/10.1177/1354856515623865.
Loosen, Wiebke. 2018. Four forms of datafied journalism. Journalism’s response to the datafication of society. Communicative Figurations Working Paper Series 18. Bremen: University of Bremen.
Milne, Gemma. 2021. Uses and abuses of hype. In Frederike Kaltheuner (ed.), Fake AI, 115–122. Meatspace Press. https://ia804607.us.archive.org/3/items/fakeai/Fake_AI.pdf (accessed 5 July 2024).
Mitchell, Melanie. 2019. Artificial intelligence. London: Penguin Books.
Moran, Rachel E. & Sonia Jawaid Shaikh. 2022. Robots in the news and newsrooms: Unpacking meta-journalistic discourse on the use of artificial intelligence in journalism. Digital Journalism 10(10). 1756–1774. https://doi.org/10.1080/21670811.2022.2085129.
Napoli, Philip M. 2014. On automation in media industries: Integrating algorithmic media production into media industries scholarship. Media Industries Journal 1(1). 33–38. https://doi.org/10.3998/mij.15031809.0001.107.
Pizzi, Alessia. 2023. Giornalismo e Intelligenza Artificiale. Tendenze e nuovi scenari per il giornalismo. Digitale? Artificiale? Report 2023 [Journalism and Artificial Intelligence. Trends and New Scenarios for Journalism. Digital? Artificial? Report 2023]. Osservatorio sul giornalismo digitale. https://www.odg.it/osservatorio-report-20243 (accessed 5 July 2024).
Porlezza, Colin. 2024. The datafication of digital journalism: A history of everlasting challenges between ethical issues and regulation. Journalism 25(5). 1167–1185. https://doi.org/10.1177/14648849231190232.
Schapals, Aljosha Karim & Colin Porlezza. 2020. Assistance or resistance? Evaluating the intersection of automated journalism and journalistic role conceptions. Media and Communication 8(3). 16–26. https://doi.org/10.17645/mac.v8i3.3054.
Sorrentino, Carlo & Sergio Splendore. 2022. Le vie del giornalismo. Come si raccontano i giornalisti italiani [The Paths of Journalism. How Italian Journalists Tell Their Stories]. Bologna: Il Mulino.
Tandoc, Edson C., Jr. & Soo-Kwang Oh. 2017. Small departures, big continuities? Norms, values, and routines in The Guardian’s big data journalism. Journalism Studies 18(8). 997–1015. https://doi.org/10.1080/1461670x.2015.1104260.
Thurman, Neil, Konstantin Dörr & Jessica Kunert. 2017. When reporters get hands-on with robo-writing: Professionals consider automated journalism’s capabilities and consequences. Digital Journalism 5(10). 1240–1259. https://doi.org/10.1080/21670811.2017.1289819.
Zamith, Rodrigo. 2019. Algorithms and journalism. In Henrik Örnebring (ed.), Oxford encyclopedia of journalism studies. Oxford: Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.779.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.