
The Attitudes of Journalists Toward Written Content Generated by AI

Issue 37, Winter/Spring 2024
https://doi.org/10.70090/AMEN24AI
Abstract

This experimental study investigates the attitudes of journalists towards written content generated by artificial intelligence (AI) as compared to human-written journalistic pieces. The methodology utilized a comparative experiment in which four journalists and three AI platforms each wrote an article on “The Language of Dialogue Between Parents and Adolescents in the AI Era”. Articles were anonymized and evaluated based on the criteria of accuracy, coherence, objectivity, creativity, ethical considerations, and audience engagement. The research aims to assess differences in quality between AI-generated and journalist-written content, variations among AI outputs, and the influence of journalists’ backgrounds on their attitudes towards AI-generated work. Furthermore, it explores ethical considerations regarding AI platforms and the potential for responsible AI integration in journalism while maintaining industry standards. The significance of the study lies in understanding AI-human collaboration as it pertains to challenges and opportunities in newsrooms. These insights can guide policies for AI-assisted journalism that complement human skills and inform both journalists and AI developers.

Introduction

The increasing prevalence of artificial intelligence (AI) within various fields is revolutionizing how tasks are performed and information is generated. In the realm of journalism, AI has the potential to automate content creation, generate news articles, and mimic human writing styles, which is transforming the traditional roles of journalists (Túñez-López, Toural-Bran, and Valdiviezo-Abad 2019). This experimental study investigates the attitudes of professional journalists toward written content produced by AI platforms as compared to human-written journalistic pieces. The advent of AI writing technology offers both unprecedented opportunities and concerns for news organizations. AI algorithms can analyze vast data, automate content creation, and potentially enhance productivity (Amponsah and Atianashie 2024). However, the integration of AI-generated content raises important questions regarding its impact on the quality, credibility, and ethics of journalism. Concerns persist about AI-written articles: can these automated articles match the depth, standards, and ethical principles upheld by human journalists? This study aims to uncover valuable insights into potential challenges and opportunities associated with adopting AI writing assistance. The research examines the perceptions and evaluations of professional journalists toward articles written by AI bots versus expert human writers. Understanding the attitudes of journalists towards AI-generated content is crucial for developing guidelines that allow responsible AI implementation in newsrooms while maintaining journalistic quality and ethics.

The findings contribute to the ongoing discourse surrounding AI's role in journalism and inform strategies for leveraging AI's capabilities while upholding professional standards. This article presents a comprehensive overview, including a literature review, the experimental methodology, key results, and a discussion of implications for responsibly integrating AI writing tools within journalism.

Theoretical Framework

This study draws upon the Unified Theory of Acceptance and Use of Technology (UTAUT) developed by Venkatesh et al. (2003) as the guiding theoretical framework. UTAUT aims to explain user intentions regarding the adoption of new technology and their subsequent usage behavior. By synthesizing constructs from several established models—such as the Technology Acceptance Model (TAM) and the Theory of Planned Behavior (TPB)—UTAUT provides a comprehensive framework to examine technology acceptance. As such, this framework was adopted to investigate the differences between AI-generated and journalist-written columns.

According to UTAUT, four core constructs influence behavioral intention and subsequent technology usage: performance expectancy, effort expectancy, social influence, and facilitating conditions. These constructs offer a robust structure to examine the variables that may influence the adoption and perceived value of AI-generated content in journalism. The first construct is performance expectancy, which refers to the degree to which individuals believe using technology will improve their job performance. In the context of journalism, performance expectancy reflects an expectation that AI-generated content will match or exceed the quality of human-written columns in terms of accuracy, coherence, and objectivity. Venkatesh et al. (2012) demonstrated that when users believe a technology will enhance their job performance, they are more likely to adopt it. The second construct is effort expectancy, which pertains to the perceived ease of using AI content creation tools. This construct assesses how easily journalists can work with AI platforms to produce high-quality columns. Venkatesh & Davis (2000) revealed that technologies that are perceived as user-friendly typically enjoy higher rates of adoption. If AI tools are intuitive and require minimal effort to generate high-quality content, journalists may be more inclined to integrate them into their workflows. The third construct is social influence, which assesses the extent to which peers, supervisors, or societal norms encourage the adoption of AI technology. In a newsroom, social influence relates to how journalists perceive the use of AI-generated content based on feedback and encouragement from colleagues and management. Schaubroeck et al. (2016) demonstrated that social norms and support play a significant role in technology adoption, which is particularly true in professional environments. The fourth and final construct is facilitating conditions, which refers to the availability of the organizational and technical infrastructure needed to support technology usage. For journalists, facilitating conditions might include access to advanced AI platforms, training, and technical support, which are essential for integrating AI tools into content production (Thompson et al., 2006).

Hypotheses 

Building upon UTAUT, this study adopts this theoretical framework to evaluate AI-generated content versus journalist-written columns. As such, the following hypotheses are proposed.

  • H1: There are statistically significant differences in the quality of AI-generated columns and journalist-written columns.
  • H2: There are statistically significant differences in the performance of journalists and AI across set criteria such as accuracy, coherence, objectivity, etc.
  • H3: There are statistically significant differences between each of the AI platforms in terms of the quality of columns.

The initial hypothesis examines whether the quality of AI-generated content—based on criteria such as coherence, accuracy, and objectivity—differs significantly from traditional journalist-written columns. It aims to measure how well AI tools can meet or exceed established journalistic standards. The second hypothesis investigates specific performance measures by comparing human-produced to AI-generated content based upon defined metrics such as coherence, factual accuracy, and impartiality. By identifying performance gaps, this hypothesis explores whether AI can replicate a human journalist’s strengths or whether significant disparities exist. The third hypothesis focuses on comparing different AI platforms by assessing their performance in producing high-quality content. It explores whether some AI tools outperform others based on set criteria, which provides insight into the variability of AI-generated content across different platforms.

Application of UTAUT in Journalism

UTAUT has been applied in various sectors to better understand technology adoption and performance. For example, Khechine et al. (2016) used UTAUT to examine acceptance of e-learning platforms by teachers and revealed that performance expectancy and social influence significantly impacted adoption rates. Similarly, Sharma et al. (2017) found that facilitating conditions were critical for healthcare professionals' acceptance of electronic health records. In the context of this research, UTAUT serves as a valuable framework for understanding journalists’ perceptions of AI-generated content and its potential to either complement or replace human-written columns. This study’s application of UTAUT aims to evaluate both the quality of AI content and journalists’ acceptance of AI tools while also identifying any key differences between AI platforms.

Contribution to the Field

This research contributes to the growing literature that examines AI adoption in journalism by exploring differences in quality, performance, and platform variability. It seeks to inform both practitioners and scholars regarding the potential of AI in content creation and the conditions under which journalists may accept AI-generated content as a viable alternative or complement to human-produced journalism.

Literature Review

Several essential criteria must be considered to effectively assess the quality of journalistic writing. As suggested by former broadcast executive David Cox, “Objectivity means an effort to exclude subjective judgment” (Sambrook 2012, 5). Reliability encompasses the accurate and credible reporting of facts, which ensures that provided information is trustworthy and well-researched (Shapiro et al. 2006). Persuasiveness involves the effective use of language and rhetorical techniques to engage the reader and convincingly convey ideas (Kinneavy and Warshauer 1994). Simplicity encompasses the use of clear and concise language that is easily understandable by the audience (Zinsser 2006). Updating refers to the timely incorporation of the latest relevant information, which ensures the content remains current (Bradshaw 2017). Integration involves the seamless blending of various sources and perspectives into a cohesive narrative (Brooks et al. 2002). Comprehensive accuracy demands a thorough and precise representation of all relevant facts and details (Kovach and Rosenstiel 2007). Finally, coherence refers to the logical flow and organization of ideas, which ensures the content is well-structured and easy to follow (Gopen and Swan 1990). The use of artificial intelligence (AI) in journalism has gained significant attention in recent years. This literature review examines studies that explore content generated by AI language processing models as it pertains to adherence to professional journalism standards, attitudes of journalists towards AI-written content, the quality of AI-written content, and AI creativity in written content.

Adherence to Professional Journalism Standards

Previous studies examine the application of artificial intelligence (AI) in the field of journalism and its various impacts on newsrooms. Miroshnichenko (2018) posits that journalists have already been replaced by robot journalists, especially in the quantitative aspects of journalism. The research also suggests that further technological advancements will allow robots to replace humans in the newsroom, and estimates that AI will overcome the qualitative limitations of writing in five to seven years due to the economic efficiency and output that robot journalists can produce. However, humans may retain a position within journalism due to the social need for human-centric jobs, as well as human errors becoming more appealing if the industry becomes saturated with auto-journalism. Noain-Sánchez (2022) conducted a study in two rounds, which involved interviewing journalists, media professionals, academics, and AI technology providers from the United States, United Kingdom, Germany, and Spain. The research investigated how the application of AI in newsrooms impacts news production, ethical issues, daily operations, and profiles. The findings revealed that AI facilitates improvement of journalists’ capabilities by relieving them of mundane tasks, which improves news-making efficiency by allowing journalists to shift their efforts toward enhancing the quality of the news. AI also allows media companies to compete with social media and combat monetary challenges.

Despite these findings, some experts express distrust towards AI. Kim (2019) conducted research to explore the ethical use of AI in journalism. The study investigated how AI is currently utilized in various aspects of the news production process, including story discovery, story production, and story distribution. It identified ethical challenges associated with the implementation of AI in newsrooms and provided a framework that encourages ethical practices and transparency for integrating AI in journalism. Zagorulko (2023) examined ChatGPT's adherence to professional journalistic standards. The study evaluated six core standards, which included topicality, reliability, balance of opinion, separation of facts and opinions, accuracy, as well as completeness of information. The findings revealed that ChatGPT tends to generate biased content. Additionally, several problems were identified, which included use of outdated information, opaque data sources, and the tendency to fabricate facts. The results further demonstrated that content generated by ChatGPT does not fully comply with the professional standards of topicality, reliability, balance of opinion, accuracy of information, completeness of information, as well as separation of facts and opinions. Despite these serious issues, the study recognizes the potential use of AI in digital media, particularly for tasks like preparing news backgrounds, translating and correcting texts, generating headlines, as well as enhancing interactivity. 

Attitudes of Professionals Towards AI-generated Written Content

Hofeditz et al. (2021) examined the credibility of AI-generated news and the factors that influence trust in it. The online survey explored whether transparent communication and explanation of AI in journalism enhanced the credibility of AI-generated news. The results indicated that explanations and transparency did not have a significant impact on credibility. Instead, factors such as the credibility of media companies, a user’s experience with social media, and AI experience influenced trust in AI-generated content. Túñez-López et al. (2021) built upon these findings by identifying the potential impact of AI on the media ecosystem and explaining what changes have already occurred. The research relied on Delphi-like in-depth interviews that were conducted in two rounds, which used an intentional sample of sixteen academics, journalism associations, and leading companies in the field. The results indicate that AI will facilitate the proliferation of automated text-to-audio news and video-on-demand, which potentially amplifies non-linear consumption. It will also promote changes in the business model through novel ways of relating to the audience and distributing content. Further, the roles of individual journalists will likely change, as they may not operate in the exact capacity for which they were trained. However, this may also indicate that journalists will have more cognitive contribution in the production of news. The paper also states that AI advances will pose more social and ethical challenges. Komatsu et al. (2020) posit that the integration of AI into journalism will challenge notions of transparency, accountability, and responsibility. Moreover, Beckett (2019) conducted a survey indicating that 24 percent of media organizations globally reported a problem with cultural resistance while integrating AI. Additionally, Garcia and Chen (2019) investigated the perceptions of academics, which shed light on attitudes towards AI-generated content in scholarly contexts. Lee and Wang (2022) explored the attitudes of copywriters towards AI-generated content in marketing communications, which provided insight into professional perspectives pertaining to AI’s role in creative industries.

The Quality of AI Written Content

There is an array of scientific research that aims to test the quality of AI-generated written content. Ma, Liu, and Yi (2023) examined the linguistic abilities of natural language processing (NLP) models within the context of the computer science and biomedical fields. This research tested multiple models—such as ChatGPT—and the results indicate that AI has the capability to generate written scientific content. However, noticeable differences in depth and overall quality still exist, particularly as they relate to text distribution, a lack of valuable insight, and low external consistency with the field of scientific knowledge. The results also suggest a writing style gap, as AI-generated scientific text is more likely to exhibit language redundancy and lack factual accuracy, revealing a gap between AI-generated and human-written scientific text. Wu (2019) identified that a reader’s assessment of AI-generated news stories varied according to the topic covered. More specifically, AI-generated sports and political news stories were perceived as more credible and objective, while human-written financial news stories were perceived as more credible and objective. Häufglöckner (2023) identified that readers believed that opinion pieces—in German—generated by GPT-3 were similar in terms of quality to those written by human journalists. Interestingly, some respondents rated the AI-generated versions marginally better in terms of quality and trustworthiness, although this difference was not statistically significant. Lozić & Štular (2023) examined the limitations and potential of different AIs as they relate to writing content in the fields of archaeology, historiography, linguistics, and genetic history. The output of six different AI chatbots—ChatGPT-3.5, ChatGPT-4, Bard, Bing Chatbot, Aria, and Claude 2—was analyzed by human experts. In terms of both quantitative and qualitative tests, ChatGPT performed the best, but there was no original scientific contribution in the auto-generated content. Further, multiple errors were identified in the content, including heavy use of academic references from the Global West—indicative of a neo-colonial bias—as well as a lack of up-to-date citations and a bias toward English publications.

AI and Creativity

In contrast to the common belief that AI has negligible artistic capacity, AI has proven capable of writing literature and poetry. This was explored by Gunser et al. (2021), as literature experts found it difficult to distinguish poems influenced by AI—GPT-2—from purely human-written content. At the same time, AI-generated content was typical or cliché in terms of semantic order and demonstrated a poor grasp of different literary genres. Moreover, for the correctly identified poems, experts noted deficiencies in aspects of artistic style, structure, and content. Nonetheless, experts do recognize some of the AI-influenced work as authentic and worthy of recognition. Guzik et al. (2023) examined ChatGPT-4’s creativity via the Torrance Tests of Creative Thinking. The results were compared with 24 human samples and a national percentile from Scholastic Testing Services of 2,718 students. The research relied on the factors of fluency, flexibility, and originality. The results indicated that ChatGPT-4 had overall high scores in the flexibility test, as well as in originality and fluency. However, the scores were relatively lower in certain aspects of the flexibility test, such as guessing causes, guessing consequences, and product improvement. The lower scores were likely due to deficiencies in prompting or to ChatGPT needing further development as it relates to flexibility. Additionally, the research points out that creativity assessments should be re-evaluated, which may enable better measurement of various aspects of creativity and convergent thinking. Hitsuwari et al. (2023) explored another dimension of creativity via an experiment involving 385 participants, which compared AI-generated haiku poems with those authored by humans. The results indicated that AI-generated haiku without human intervention was rated the same as human-written haiku. However, AI-generated poetry with human intervention was rated the highest. Moreover, many readers were unable to distinguish between AI-generated and human-written haiku. Further, people tend to perceive high-quality AI-generated content to be human-written. The literature above presents various perspectives on AI’s capabilities and ethical considerations, as well as the attitudes of journalists.

While concerns regarding biases are apparent, there is also a recognition of the potential benefits of AI in relation to economic efficiency and productivity. Nonetheless, further testing of AI’s capabilities is needed. The research by Hitsuwari et al. (2023) does not fully explore or quantify AI-generated poetry because haiku relies on a limited number of characters. More linguistically challenging types or genres of poetry should be explored to determine the full depth of AI’s capacity to write poetry. Meanwhile, the research of Guzik et al. (2023) has a clear limitation in its reliance on Scholastic Testing Services norms from 2016 for comparison with AI outputs generated in 2023. Moreover, Gunser et al. (2021) mentioned a limitation of their study due to using GPT-2 instead of GPT-3, despite the latter’s availability. Regarding originality, AI cannot necessarily innovate new ideas. Instead, it reformulates existing information from its voluminous database in a seemingly new structure or manner. Despite these findings, most research calls for AI-human collaboration to establish the best outcomes.

While artificial intelligence (AI) has made impressive strides in many domains, its ability to produce high-quality creative output is still subject to debate. Recent studies have explored AI's potential in this realm. As such, both the challenges and opportunities of integrating AI into the field of journalism must be contextualized. The introduction to Journalism & Mass Communication Quarterly’s special forum on AI and journalism indicated that AI technologies—like conversational agents and automated writing software—increasingly blur the traditional lines between human and machine communication. While AI has catalyzed transformative change across many industries, its prospective impact on journalism is particularly profound given the field's traditional role as a key societal institution and communication channel. The contributors to the forum highlighted a need to ground discussions of AI's implications in a clear-eyed and human-centric understanding of what these technologies can and cannot do. Meredith Broussard cautions against falling victim to “technochauvinism”, the assumption that technical solutions are inherently superior. Rather, she argues journalism and AI research must remain focused on reporting insights about humanity, "a process that will never be sleek and mathematically precise". Similarly, Nicholas Diakopoulos positions AI as a new medium for journalists to express ethical values through code implementation, rather than viewing it as a potential replacement for human expertise. As such, the road ahead involves studying the hybridization of human journalists and AI systems within evolving industry workflows. This undertaking will require media scholars to bridge disciplinary divides by collaborating with different fields, like human-computer interaction and computer science. Overall, while AI introduces new possibilities in areas like automated writing and content curation, it must be shaped by a human-centric vision to complement—not replace or automate—journalism's core functions of discovering, investigating, and conveying insights about the human experience.

Sampling and Methodology

The current study relies on a comparative experimental methodology to evaluate the quality of journalistic articles written by professional journalists when compared to articles generated by AI platforms. The experiment was conducted in four stages. The first stage was topic selection, followed by participant selection, then evaluation of articles, and finally calculation of results. The focus was on a specific journalistic style—journalistic articles.

Sample

The sample was comprised of two groups:

  • Professional Journalists: Four established Egyptian journalists who work in esteemed media organizations were selected. All hold prominent positions in their respective organizations and were chosen for their extensive experience and proven ability to write high-quality journalistic content, which positions them as exemplary subjects for this study.
  • AI Platforms: The three most well-known and reputable generative AI writing platforms were selected: ChatGPT-4, Gemini, and Claude. The professional versions of these platforms were used, and they were selected for their advanced content generation capabilities. Further, a software engineer ensured the comparison was curated fairly by providing appropriate prompts to the AI to elicit output.

Sampling Rationale

The experiment aims to evaluate the performance of distinguished journalism professionals and explore their perspectives on AI-generated versus human-authored news writing. The constraints of the study relate to the participant pool, as there are relatively few professionals who specialize in this type of writing and possess the necessary qualifications to engage in the aims of this study. While the sample size is small, it is representative of top-tier journalists, which aligns with the study’s objectives. Additionally, the findings are reflective only of this specific subset of journalists. As such, a more inclusive experiment in terms of writing genres would allow broader generalizability. Meanwhile, the generalizability of this research does not extend to other journalistic styles or to the Western perspective. Instead, the research focuses on one specific genre, as this enables a sophisticated analysis that yields more nuanced results. Furthermore, the deliberate choice to conduct the experiment in this writing style is due to its complex and creative nature, as well as the fact that journalistic articles are relatively less explored in this context. Finally, the decision to conduct the experiment in Egypt and in Arabic was made to augment the representation of Arab scholarly perspectives in this area.

Quantitative Results

The research specifically examined the overall accuracy, coherence, novelty, simplicity, informativeness, persuasiveness, and objectivity of each writing sample. The results of the quantitative analysis were:

  • For the human evaluation of the first article, the scores range from 5.33 to 8.33, with overall accuracy being the lowest and objectivity being the highest. The overall human evaluation is 7.50. The standard deviations indicate variation in the scores awarded by humans, with the largest variation appearing in novelty.
  • For artificial intelligence, the scores are higher, ranging from 8.33 to 9.00. The standard deviations are generally lower, which indicates more consistency in the evaluations. The overall AI evaluation is 8.67, which is higher than the human evaluation.

This comparison suggests that AI is evaluated more consistently and more highly than humans on these criteria. It would be interesting to understand and contextualize the decision process that underlies how these scores were assigned, as well as whether they related to a specific task or a general evaluation of capabilities.
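The study does not publish its raw rating sheets, so the following Python sketch is illustrative only: it shows how per-criterion means and standard deviations of the kind reported above can be computed, using hypothetical 1-to-10 ratings (placeholder values, not the study's data).

```python
import statistics

# Hypothetical 1-10 ratings that one article received from its evaluators
# (placeholder values; the study's raw scores are not published).
ratings_by_criterion = {
    "overall accuracy": [5, 6, 5],
    "coherence": [8, 7, 9],
    "novelty": [4, 9, 7],
    "objectivity": [8, 9, 8],
}

for criterion, scores in ratings_by_criterion.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{criterion}: mean={mean:.2f}, sd={sd:.2f}")
```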

Research Objectives

  • Assess the potential for AI to replace journalists and other media professionals.
  • Measure journalists' attitudes toward and acceptance of AI in journalism.
  • Evaluate the differences in output quality by comparing AI-generated content to articles produced by journalists.
  • Analyze the distinctions between the outputs of journalists and those generated by AI.
  • Examine variations among the outputs produced by different journalists.
  • Investigate the differences in performance and quality among various AI platforms and chatbots.

Research Significance

This study aims to provide insights into the attitudes of journalists toward the quality of AI-generated content compared to traditional human-written work. This understanding is crucial for evaluating the future role of AI in journalism. Additionally, by identifying differences in perceptions of AI among journalists based on their backgrounds, the research sheds light on how personal and professional experiences shape attitudes toward technological advancements in the field. Further, the evaluation of various AI platforms will highlight their strengths and weaknesses, which is essential for informed decision-making regarding their adoption in newsrooms. This knowledge will assist organizations in selecting appropriate tools that meet their specific needs. Moreover, the findings will contribute to discussions pertaining to the responsible integration of AI in journalism, while ensuring industry standards and ethical considerations are upheld. This is vital for maintaining public trust and ensuring the integrity of the profession. Finally, the study aims to inform stakeholders regarding the evolving landscape of media by examining AI's potential impact on the journalism profession. Understanding these changes is critical for preparing for the challenges and opportunities that AI presents to journalists and news organizations.

Methodology

Figure 1: The adopted methodology of the experiment, from process to output.

Operational Definition

In the context of journalism, news articles are written pieces that serve to inform, analyze, or comment on current events, issues, or topics of public interest. These articles, often published in newspapers, magazines, or online platforms, are grounded in factual reporting and may include news stories, features, editorials, or opinion pieces. Journalistic articles adhere to core principles, such as accuracy, objectivity, and timeliness. These articles aim to provide readers with reliable information. They vary in style and depth, which range from brief news reports to in-depth investigative pieces, while contributing to an informed public discourse (Kovach and Rosenstiel 2014).

Within the context of this paper, quality refers to the degree of excellence the columns represent as assessed by subjects using a Likert scale measurement method. It includes aspects such as coherence, persuasiveness, simplicity, informativeness, and objectivity, while reflecting the comprehensive assessment of the content's merit and value. The Likert scale employed in this experiment relied on a numerical range from one to ten, where a score of ten indicates the highest level of excellence and a rating of one signifies the lowest. Subjects assign scores to indicate their degree of agreement, satisfaction, or perception towards the evaluated criteria.

This study adopted a comparative experimental methodology, which involved conducting a comparative experiment between the skills of journalists in writing a specific journalistic article and the capabilities of artificial intelligence applications in writing the same article. Both the human and AI generated articles were predicated on the same specifications and features. The experiment was conducted in four stages.
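As a minimal illustration of this operational definition (not the authors' actual instrument), the following sketch aggregates hypothetical 1-to-10 Likert ratings over a subset of the stated criteria into a single quality score; all names and values are assumptions for demonstration.

```python
# Criteria named in the operational definition above; ratings are hypothetical.
CRITERIA = ["coherence", "persuasiveness", "simplicity", "informativeness", "objectivity"]

def composite_quality(ratings: dict) -> float:
    """Average a 1-10 rating per criterion into one overall quality score."""
    for criterion in CRITERIA:
        score = ratings[criterion]
        if not 1 <= score <= 10:
            raise ValueError(f"{criterion} must be rated 1-10, got {score}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

example = {"coherence": 8, "persuasiveness": 7, "simplicity": 9,
           "informativeness": 7, "objectivity": 8}
print(composite_quality(example))  # 7.8
```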

Stage 1: Selection of the topic

The selected topic for the article was “The Language of Dialogue Between Parents and Adolescents in the Era of Artificial Intelligence Domination”. The topic was determined by taking the following factors into consideration:

  • The topic is contemporary and relevant to Egyptian society and Egyptian families.
  • The topic is suitable for the age group of the researcher's study (adolescents and youth).
  • The topic exemplifies the criteria of journalistic writing in terms of accuracy, up-to-date information, information gathering skills, scientific research, creativity, and innovation.

Additionally, the topic was chosen to avoid inflicting any emotional or psychological trauma upon the study’s participants. While the selected criteria are essential to evaluate journalistic quality, other dimensions (such as creativity, ethical considerations, and audience engagement) were not fully captured. Future studies could include these additional elements.

Stage 2: Participant Selection

The participating journalists and artificial intelligence applications were selected. The journalists were chosen based on their academic and professional experience. All subjects are graduates of media colleges with a specialization in journalism. Moreover, all subjects are professionals who specialize in writing journalistic articles.

  • Abdel Salam Farooq - Editor-in-chief of Al-Ahram
  • Mohamed Shomroukh - Assistant Editor-in-Chief, Al-Ahram
  • Khaled Hassan Al-Naqeeb - Managing Editor, Al-Ahram
  • Magdy Khalaf - Managing Editor, Al Gomhuria

Additionally, three of the most popular artificial intelligence platforms were selected: ChatGPT-4, Gemini, and Claude. The professional versions of these applications were used, and an engineer was selected to properly prompt the AI and guide our research efforts. During this stage, a question was posed to all participants regarding the essential professional criteria that should be used to assess a journalistic article. The following criteria were identified based on the answers provided by the participants, which included both the four journalists and the three artificial intelligence applications: unity, coherence, comprehensive accuracy, integration, updating, simplicity, persuasiveness, reliability, and objectivity. All participants in the experiment were informed of the required article topic, and writing conditions were set: the article should not exceed 500 words, and the writing period should not exceed three days.

Stage 3:  Evaluation of Articles

The seven articles written by all participants in the experiment—four journalists and three AI chatbots—were collected. Each article was encoded with a letter and the author’s name was removed to ensure anonymity. The six articles not authored by the evaluator were then presented to each participant for the evaluation phase, which was carried out according to the previously established criteria; no journalist was asked to evaluate their own article. Participants were asked to rate the articles on a numerical scale of one through ten according to the evaluation form. Participants were also requested to provide justification for their evaluations.
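The anonymization and self-exclusion steps described above can be expressed compactly. The sketch below is a hypothetical illustration of that protocol; the author labels and article texts are placeholders, not the study's materials.

```python
import string

# Placeholder author -> article mapping for the seven participants.
articles = {
    "Journalist 1": "...", "Journalist 2": "...", "Journalist 3": "...",
    "Journalist 4": "...", "ChatGPT-4": "...", "Gemini": "...", "Claude": "...",
}

# Encode each article with a letter and strip the author's name.
codes = {author: letter for author, letter in zip(articles, string.ascii_uppercase)}
anonymized = {codes[author]: text for author, text in articles.items()}

def evaluation_set(evaluator: str) -> list:
    """The six coded articles an evaluator rates (never their own)."""
    return sorted(code for author, code in codes.items() if author != evaluator)

print(evaluation_set("Journalist 1"))  # ['B', 'C', 'D', 'E', 'F', 'G']
```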

Stage 4: Results

The results of the experiment were obtained during this stage. Each participant’s article was evaluated by the rest of the participants without identifying the author’s identity—either human journalist or AI chatbot.

In conclusion, the methodology employed in this study involved a comparative experimental approach to assess the skills of journalists and artificial intelligence.

Validity and Reliability

The evaluation form was independently examined by researchers Maryam Ahmed Ali and Donya Ibrahim.

Findings

General Descriptive Results

Prior to testing the hypotheses, general descriptive statistics were calculated to assess the overall evaluation of both AI-generated and journalist-written columns. The evaluations were rated on several criteria including accuracy, coherence, objectivity, unity, simplicity, up-to-date information, and persuasiveness. The mean ranks of these evaluations were recorded across both the journalist and AI evaluators. Initial observations suggested that AI-generated columns generally received higher mean ranks in some areas, such as coherence and up-to-date information, while journalist-written columns showed slight advantages in areas like accuracy and objectivity. Though informative, these trends were subjected to statistical testing to determine their significance.

Hypothesis Testing

H1: There are statistically significant differences in the quality of AI-generated columns and journalist-written columns.

  • This hypothesis was tested using the Mann-Whitney U test to compare the overall evaluations of AI-generated columns and journalist-written columns.
  • AI Evaluation: Statistically significant differences were observed between the AI-generated and journalist-written columns with AI evaluation favoring AI-generated articles. The significance level was 0.034 (P < 0.05), which indicates that AI-generated articles were rated higher than those written by journalists.
  • Journalist Evaluation: No statistically significant differences were found in journalist evaluations of the columns. The P-value was 0.724 (P > 0.05), which suggests that journalists did not perceive significant differences in quality between the two types of columns.

H2: There are statistically significant differences in the performance of journalists and AI across the set criteria, such as accuracy, coherence, objectivity, etc.

  • The Mann-Whitney U test was used to evaluate the performance of AI and journalists across various criteria.
  • Cohesion and Unity: No statistically significant differences were found for cohesion and unity in AI or journalist evaluations, which exhibited P-values of 0.077 (AI evaluation) and 0.476 (journalist evaluation), respectively.
  • Comprehensive Accuracy: Neither the AI evaluations (P = 0.074), nor the journalist evaluations (P = 0.476), revealed significant differences in accuracy across AI-generated and journalist-written columns.
  • Coherence: Statistically significant differences were found in coherence as rated by AI, with AI-generated articles receiving higher evaluations (P = 0.032). However, evaluations by journalists did not reveal any significant difference (P = 0.480).
  • Up-to-Date Information: AI evaluations revealed significant differences in favor of AI-generated columns (P = 0.034). On the other hand, evaluations by the journalists did not reveal significant differences (P = 0.208).
  • Simplicity: Statistically significant differences were found in AI evaluations, with AI-generated content receiving higher ratings (P = 0.032), while no significant differences were observed in evaluations by journalists (P = 0.463).
  • Informativeness: According to AI evaluations, significant differences were noted in informativeness (P = 0.032) while journalist evaluations did not reveal significant differences in this category (P = 0.285).
  • Persuasiveness: AI evaluations again favored AI-generated columns with significant differences (P = 0.032). However, journalist evaluations did not show any significant differences in persuasiveness (P = 0.372).
  • Objectivity: AI evaluations demonstrated statistically significant differences in objectivity (P = 0.032) while favoring AI-generated columns. However, no significant differences were found in journalist evaluations (P = 0.208).

H3: There are statistically significant differences between each of the AI platforms in terms of the quality of columns.

  • This hypothesis was tested using the Kruskal-Wallis test to compare the performance of different AI platforms. The platforms used were ChatGPT, ClaudeAI, and Gemini.
  • No statistically significant differences were found between the AI platforms regarding the quality of the articles they generated. The P-value was 0.386, which is greater than 0.05, indicating that the performance of different AI platforms in terms of column quality was not significantly different.

Results

Hypotheses 1 and 2 were tested using a Mann-Whitney U test.
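The study's raw ratings are not published, but a comparison of this kind is straightforward to reproduce with standard statistical libraries. The Python sketch below runs a two-sided Mann-Whitney U test on hypothetical overall scores for four journalist-written and three AI-generated articles; the values are placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical overall quality scores (1-10); not the study's data.
journalist_scores = [7.0, 7.5, 6.5, 8.0]  # four journalist-written articles
ai_scores = [8.5, 9.0, 8.5]               # three AI-generated articles

u_stat, p_value = mannwhitneyu(journalist_scores, ai_scores, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # differences are significant if p < 0.05
```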

H1: There are statistically significant differences in the quality of AI-generated columns and journalist-written columns.

Table 1a: Mean Ranks and Statistical Significance of Overall Evaluations (AI Evaluations)

Evaluation           Author                    N   Mean Rank   Mann-Whitney U   Sig
Overall evaluation   Journalist                4   2.50        0.00             0.034
                     Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles, observed at a significance level of 0.034 (P < 0.05). Interestingly, the AI evaluations favored the articles written by artificial intelligence.

Table 1b: Mean Ranks and Statistical Significance of Overall Evaluations (Journalist Evaluations)

Evaluation           Author                    N   Mean Rank   Mann-Whitney U   Sig
Overall evaluation   Journalist                4   3.75        5.00             0.724
                     Artificial intelligence   3   4.33

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles, observed at a significance level of 0.724 (P > 0.05).

H2: There are statistically significant differences in the performance of journalists and AI across the set criteria, such as accuracy, coherence, objectivity, etc.

Table 2a: Mean Ranks and Statistical Significance for Unity and Cohesion (AI Evaluations)

Evaluation         Author                    N   Mean Rank   Mann-Whitney U   Sig
Unity & cohesion   Journalist                4   2.75        1.00             0.077
                   Artificial intelligence   3   5.67

The AI evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of unity and cohesion, observed at a significance level of 0.077 (P > 0.05).

Table 2b: Mean Ranks and Statistical Significance for Unity and Cohesion (Journalist Evaluations)

Evaluation         Author                    N   Mean Rank   Mann-Whitney U   Sig
Unity & cohesion   Journalist                4   3.50        4.00             0.476
                   Artificial intelligence   3   4.67

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of unity and cohesion, observed at a significance level of 0.476 (P > 0.05).

Table 3a: Mean Ranks and Statistical Significance for Comprehensive Accuracy (AI Evaluations)

Evaluation               Author                    N   Mean Rank   Mann-Whitney U   Sig
Comprehensive accuracy   Journalist                4   2.75        1.00             0.074
                         Artificial intelligence   3   5.67

The AI evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of comprehensive accuracy, underscored by a significance level of 0.074 (P > 0.05).

Table 3b: Mean Ranks and Statistical Significance for Comprehensive Accuracy (Journalist Evaluations)

Evaluation               Author                    N   Mean Rank   Mann-Whitney U   Sig
Comprehensive accuracy   Journalist                4   3.50        4.00             0.476
                         Artificial intelligence   3   4.67

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of comprehensive accuracy, observed at a significance level of 0.476 (P > 0.05).

Table 4a: Mean Ranks and Statistical Significance for Coherence (AI Evaluations)

Evaluation   Author                    N   Mean Rank   Mann-Whitney U   Sig
Coherence    Journalist                4   2.50        0.00             0.032
             Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles in terms of coherence, observed at a significance level of 0.032 (P < 0.05).

Table 4b: Mean Ranks and Statistical Significance for Coherence (Journalist Evaluations)

Evaluation   Author                    N   Mean Rank   Mann-Whitney U   Sig
Coherence    Journalist                4   3.50        4.00             0.480
             Artificial intelligence   3   4.67

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of coherence, observed at a significance level of 0.480 (P > 0.05).

Table 5a: Mean Ranks and Statistical Significance for Up-to-Date Information (AI Evaluations)

Evaluation               Author                    N   Mean Rank   Mann-Whitney U   Sig
Up-to-date information   Journalist                4   2.50        0.00             0.034
                         Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles in terms of up-to-date information, observed at a significance level of 0.034 (P < 0.05).

Table 5b: Mean Ranks and Statistical Significance for Up-to-Date Information (Journalist Evaluations)

Evaluation               Author                    N   Mean Rank   Mann-Whitney U   Sig
Up-to-date information   Journalist                4   3.13        2.00             0.208
                         Artificial intelligence   3   5.17

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of up-to-date information, observed at a significance level of 0.208 (P > 0.05).

Table 6a: Mean Ranks and Statistical Significance for Simplicity (AI Evaluations)

Evaluation   Author                    N   Mean Rank   Mann-Whitney U   Sig
Simplicity   Journalist                4   2.50        0.00             0.032
             Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles in terms of simplicity, observed at a significance level of 0.032 (P < 0.05).

Table 6b: Mean Ranks and Statistical Significance for Simplicity (Journalist Evaluations)

Evaluation   Author                    N   Mean Rank   Mann-Whitney U   Sig
Simplicity   Journalist                4   3.50        4.00             0.463
             Artificial intelligence   3   4.67

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of simplicity, observed at a significance level of 0.463 (P > 0.05).

Table 7a: Mean Ranks and Statistical Significance for Informativeness (AI Evaluations)

Evaluation        Author                    N   Mean Rank   Mann-Whitney U   Sig
Informativeness   Journalist                4   2.50        0.00             0.032
                  Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles in terms of informativeness, observed at a significance level of 0.032 (P < 0.05).

Table 7b: Mean Ranks and Statistical Significance for Informativeness (Journalist Evaluations)

Evaluation        Author                    N   Mean Rank   Mann-Whitney U   Sig
Informativeness   Journalist                4   4.75        3.00             0.285
                  Artificial intelligence   3   3.00

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of informativeness, observed at a significance level of 0.285 (P > 0.05).

Table 8a: Mean Ranks and Statistical Significance for Persuasiveness (AI Evaluations)

Evaluation       Author                    N   Mean Rank   Mann-Whitney U   Sig
Persuasiveness   Journalist                4   2.50        0.00             0.032
                 Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles in terms of persuasiveness, observed at a significance level of 0.032 (P < 0.05).

Table 8b: Mean Ranks and Statistical Significance for Persuasiveness (Journalist Evaluations)

Evaluation       Author                    N   Mean Rank   Mann-Whitney U   Sig
Persuasiveness   Journalist                4   3.38        3.50             0.372
                 Artificial intelligence   3   4.83

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of persuasiveness, observed at a significance level of 0.372 (P > 0.05).

Table 9a: Mean Ranks and Statistical Significance for Objectivity (AI Evaluations)

Evaluation    Author                    N   Mean Rank   Mann-Whitney U   Sig
Objectivity   Journalist                4   2.50        0.00             0.032
              Artificial intelligence   3   6.00

The AI evaluations revealed statistically significant differences between journalist-written and AI-generated articles in terms of objectivity, observed at a significance level of 0.032 (P < 0.05).

Table 9b: Mean Ranks and Statistical Significance for Objectivity (Journalist Evaluations)

Evaluation    Author                    N   Mean Rank   Mann-Whitney U   Sig
Objectivity   Journalist                4   4.88        2.50             0.208
              Artificial intelligence   3   2.83

The journalist evaluations revealed no statistically significant differences between journalist-written and AI-generated articles in terms of objectivity, observed at a significance level of 0.208 (P > 0.05).

H3: There are statistically significant differences between each of the AI platforms in terms of the quality of columns.

Hypothesis three was tested using a Kruskal-Wallis test.

Table 10: Evaluation of Artificial Intelligence Platforms

Evaluation                          Platform   N   Mean Rank   Kruskal-Wallis   df   Sig
Artificial intelligence platforms   ChatGPT    1   1.00        2                2    0.208
                                    Gemini     1   2.00
                                    Claude     1   3.00

No statistically significant differences were found between the artificial intelligence platforms in the evaluation scores of their articles. This was observed at a significance level of (0.386), which indicates a P-value greater than 0.05.
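For reference, a Kruskal-Wallis comparison of this kind can be reproduced as sketched below. Because each platform contributed a single overall score in Table 10, the sketch instead assumes hypothetical per-criterion ratings as the observations for each platform; all values are placeholders, not the study's data.

```python
from scipy.stats import kruskal

# Hypothetical per-criterion ratings (1-10) for each platform's article.
chatgpt = [8, 9, 8, 9, 8]
claude = [9, 9, 8, 8, 9]
gemini = [8, 8, 9, 8, 8]

h_stat, p_value = kruskal(chatgpt, claude, gemini)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")  # no significant difference if p > 0.05
```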

Discussion and Conclusion

This study examined the quality of AI-generated columns versus human-written journalistic articles through an article titled “The Language of Dialogue Between Parents and Adolescents in the Era of Artificial Intelligence Domination”. First, the findings suggest that AI produces content with similar unity, coherence, and overall accuracy when compared to expert human journalists. This challenges the notion that AI-generated written content lacks quality. Intriguingly, AI was able to outperform humans in certain aspects. This suggests that AI can generate well-structured, clear, and informative content that may be more objective when compared to articles written by expert human journalists. However, it is important to acknowledge limitations. The study employed a relatively small sample size, which limits generalizability. This research focused on a single genre of journalistic content, which might not be reflective of the level of creativity for all types of written content. For example, it may not be reflective of opinion pieces. Further, AI performance may vary across different writing styles and domains. Additionally, as far as the degree of creativity is concerned, it can be concluded that AI did demonstrate a level of creativity, yet the measures needed to comprehensively test the overall creativity of AI in journalism remain an area for further research. Second, the findings suggest that journalists did not perceive AI-generated content differently from human-generated content. This is evident in the results, which indicate journalists might be more accepting of and flexible toward the use of AI in their work. This is also evident in research conducted by Adjin-Tettey et al. (2024), where journalists from Ghana and South Africa suggested that AI’s use in newsrooms is an opportunity for journalists to dedicate more time to in-depth analysis and creativity. Nonetheless, further large-scale, nuanced research is needed to understand the attitudes and perceptions of journalists towards the use of AI in their work. Third, ethical concerns regarding the use of AI in journalism need to be addressed. For example, it is vital to instill measures that allow a responsible and ethical use of AI. Additionally, it is important to consider the potential impact of AI on the job market for journalists. This points toward the necessity of a legal framework and legislation to regulate the use of AI and prevent it from becoming a disruptive innovation. This is underscored by AI’s capacity to be utilized to create fake news, in addition to many other ethical dilemmas (Monti 2018).

The use of predetermined evaluation criteria focused primarily on textual coherence and accuracy, which does not address more nuanced dimensions, like creativity or ethical standards. Future research could broaden the scope of evaluation to provide a more holistic assessment of journalistic quality. The study was conducted within the MENA region—specifically Egypt—and the results may not be fully generalizable to other regions or contexts. Differences in cultural, sociological, and technological factors could influence the attitudes of journalists towards AI-generated content. Further research is needed to explore these attitudes across different regions. To gain a more comprehensive understanding of AI’s impact on journalism, it is essential to replicate this study in diverse regions and contexts. Overall, this study provides valuable insights into the capabilities of AI as it pertains to authoring Arabic journalistic writing, as well as how expert journalists evaluate these articles. These findings lay a foundation for discussing the role of AI in the future of journalism within the MENA region. Furthermore, these results showcase that AI may be able to replace human journalists in certain tasks or domains, especially with further development of the technology.

Limitations and Future Research

This study encountered several limitations that should be acknowledged. First, the predetermined evaluation criteria used to assess the articles (e.g., unity, coherence, accuracy) may not fully capture all relevant dimensions of journalistic writing. A more comprehensive rubric or additional criteria could offer a more holistic assessment. Moreover, as AI technologies rapidly evolve, the capabilities of the assessed AI platforms may become outdated or surpassed by newer developments, which potentially limits the long-term applicability of the findings. Another significant limitation was the high cost and limited availability of the professional versions of the AI platforms used in the study. Accessing these AI writing tools and involving the engineers responsible for their prompting and guidance required substantial resources and time, which may not be feasible for all research teams. Additionally, the creative abilities of human journalists can be influenced by factors such as mood and motivation, which may not be as easily replicated by AI bots and thus potentially influence the comparative analysis.

Recommendations for Future Research

Several recommendations can be made to guide future research in this domain based on the identified limitations of the present study. First, it is imperative to expand the sample diversity by involving a larger and more heterogeneous group of journalists to represent diverse media organizations, cultural backgrounds, and experience levels. This would enable a deeper, context-sensitive understanding of how a journalist’s professional and personal backgrounds shape their attitudes and perceptions towards AI-generated content (Reiter & Risku 2020). Second, interdisciplinary collaboration between journalism experts, educators, and AI developers is recommended to refine and establish a comprehensive and standardized evaluation method. This would ensure a more holistic and reliable assessment of journalistic writing quality that surpasses the existing focus on surface-level textual features (Thurman et al. 2019). Third, replicating similar experiments across varied journalistic topics, genres (e.g., news reports, features, editorials), and writing styles would shed light on the versatility of AI writing tools and the consistency (or variability) of the attitudes of journalists. Moreover, longitudinal studies tracking the evolving capabilities of AI writing platforms and changes in the perceptions of journalists would offer valuable insights involving the interplay between technological advancements and professional mindsets (Montal & Reich 2017). Furthermore, in-depth investigations are warranted into the ethical considerations, potential biases, and fairness as it relates to AI writing platforms. Developing guidelines or frameworks to mitigate these concerns is crucial for AI's responsible and ethical integration into journalism while complementing human skills. Such efforts should be undertaken in close collaboration among journalists, AI developers, ethicists, policymakers, and journalism educators (Beckett 2019; Diakopoulos 2019). Expanding the research scope to include comparisons between AI-generated and human-produced content across different media types (e.g., text, audio, video) would provide a more comprehensive understanding of AI's potential impact on the journalism industry. Finally, conducting case studies or pilot implementations within newsroom settings would allow for the evaluation of practical challenges, opportunities, and strategies for integrating AI writing tools into journalistic workflows while maintaining industry standards and ethical practices (Montal & Reich 2017).

References

Adjin-Tettey, Theodora Dame, Tigere Muringa, Samuel Danso, and Siphumelele Zondi. "The Role of Artificial Intelligence in Contemporary Journalism Practice in Two African Countries." Journalism and Media 5, no. 3 (2024): 846–860. https://doi.org/10.3390/journalmedia5030054.

Alzoubi, Omar Abdallah, Normahfuzah Ahmad, and Norsiah Abdul Hamid. "Artificial Intelligence in Newsrooms: Ethical Challenges Facing Journalists." Studies in Media and Communication 12, no. 1 (January 2024): 401. https://doi.org/10.11114/smc.v12i1.6587.

Amponsah, Peter N., and Atianashie Miracle Atianashie. "Navigating the New Frontier: A Comprehensive Review of AI in Journalism." Advances in Journalism and Communication 12, no. 1 (March 2024): 1–17. https://doi.org/10.4236/ajc.2024.121001.

Bailer, Werner, Georg Thallinger, Viktoria Krawarik, Katharina Schell, and Vanessa Ertelthalner. "AI for the Media Industry: Application Potential and Automation Levels." n.d.

Beckett, Charlie. "New Powers, New Responsibilities: A Global Survey of Journalism and Artificial Intelligence." London: POLIS, London School of Economics, November 18, 2019. https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/.

Bradshaw, Paul. The Online Journalism Handbook: Skills to Survive and Thrive in the Digital Age. New York: Routledge, 2017.

Brooks, Brian S., George Kennedy, Daryl R. Moen, and Don Ranly. News Reporting and Writing. Boston: Bedford/St. Martin's, 2002.

Carlson, Matt, and Seth C. Lewis. "Artificial Intelligence and Journalism." Journalism & Mass Communication Quarterly 96, no. 3 (2019): 673–695. https://doi.org/10.1177/1077699019859901.

Diakopoulos, Nicholas. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA: Harvard University Press, 2019.

Dierickx, Laurence, Andreas Lothe Opdahl, Sohail Ahmed Khan, Carl-Gustav Lindén, and Diana Carolina Guerrero Rojas. "A Data-Centric Approach for Ethical and Trustworthy AI in Journalism." Ethics and Information Technology 26, no. 4 (2024). https://doi.org/10.1007/s10676-024-09801-6.

Garcia, Jonathan, and Andrea Chen. "Online Learning and Performance in Higher Education." Journal of Educational Technology Systems 48, no. 1 (2019): 23–45.

George, A. Shaji. "Artificial Intelligence and the Future of Work: Job Shifting Not Job Loss." Partners Universal Innovative Research Publication 2, no. 2 (2024). https://doi.org/10.5281/zenodo.10936490.

Goni, Md. Ashraf, and Maliha Tabassum. "Artificial Intelligence (AI) in Journalism: Is Bangladesh Ready for It? A Study on Journalism Students in Bangladesh." Athens Journal of Mass Media and Communications 6, no. 4 (2020): 209–228. https://doi.org/10.30958/ajmmc.6-4-1.

Gopen, George D., and Judith A. Swan. "The Science of Scientific Writing." American Scientist 78, no. 6 (1990): 550–558.

Gunser, Vivien Emily, Sebastian Gottschling, Benedikt Brucker, Stefanie Richter, and Peter Gerjets. "Can Users Distinguish Narrative Texts Written by an Artificial Intelligence Writing Tool from Purely Human Text?" In HCI International 2021 – Posters, 520–527. Cham: Springer International Publishing, 2021.

Guzik, Erik E., Christian Byrge, and Christian Gilde. "The Originality of Machines: AI Takes the Torrance Test." Journal of Creativity 33, no. 3 (2023): 100065. https://doi.org/10.1016/j.yjoc.2023.100065.

Häufglöckner, Lasse. "Self-Supervised Language Models in Journalism: Quality Perception of GPT-3-Written Articles." Master's thesis, Hochschule Darmstadt, 2023. https://opus4.kobv.de/opus4-h-da/frontdoor/deliver/index/docId/397/file/MA_Haeufgloeckner.pdf.

Hitsuwari, Jimpei, Yoshiyuki Ueda, Woojin Yun, and Michio Nomura. "Does Human–AI Collaboration Lead to More Creative Art? Aesthetic Evaluation of Human-Made and AI-Generated Haiku Poetry." Computers in Human Behavior 139 (2023): 107502. https://doi.org/10.1016/j.chb.2022.107502.

Hofeditz, Lennart, Milad Mirbabaie, Jonas Holstein, and Stefan Stieglitz. "Do You Trust an AI Journalist? A Credibility Analysis of News Content with AI-Authorship." ECIS 2021 Research Papers 50 (2021). https://aisel.aisnet.org/ecis2021_rp/50.

Kim, Heejung. "AI in Journalism: Creating an Ethical Framework." Honors capstone project, Syracuse University, 2019. https://surface.syr.edu/honors_capstone/1083.

Kinneavy, James L., and John C. Warshauer. From Composition Theory to Practice: A Writing Process Approach. New York: HarperCollins College Publishers, 1994.

Komatsu, Tomoko, Marisela Gutierrez Lopez, Stephann Makri, Colin Porlezza, Glenda Cooper, Andrew MacFarlane, and Sondess Missaoui. "AI Should Embody Our Values: Investigating Journalistic Values to Inform AI Technology Design." In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. New York: ACM, 2020. https://doi.org/10.1145/3419249.3420105.

Kovach, Bill, and Tom Rosenstiel. The Elements of Journalism: What Newspeople Should Know and the Public Should Expect. New York: Three Rivers Press, 2007.

Kovach, Bill, and Tom Rosenstiel. The Elements of Journalism: What Newspeople Should Know and the Public Should Expect. 3rd ed. New York: Three Rivers Press, 2014.

Lawal, Bashir Abdullahi. "Perception of Registered Journalists on Job Security in the Era of Artificial Intelligence in Katsina State." International Journal of Innovative Social Sciences & Humanities Research 12, no. 1 (2024): 77–85.

Lee, C., and H. Wang. "Attitudes of Copywriters Towards AI-Generated Content in Marketing Communications." Journal of Advertising Research, 2021.

Lozić, Edisa, and Benjamin Štular. "Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots' Proficiency and Originality in Scientific Writing for Humanities." Future Internet 15, no. 10 (2023): 336. https://doi.org/10.3390/fi15100336.

Ma, Yongqiang, Jiawei Liu, and Yi Fan. "Is This Abstract Generated by AI? A Research for the Gap between AI-Generated Scientific Text and Human-Written Scientific Text." Preprint, January 2023. https://doi.org/10.48550/arXiv.2301.10416.

Maiden, Neil, Konstantinos Zachos, Amanda Brown, George Brock, Lars Nyre, Aleksander Nygård Tonheim, Dimitris Apostolou, and Jeremy Evans. "Making the News: Digital Creativity Support for Journalists." In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 475:1–475:15. New York: ACM, 2018. https://doi.org/10.1145/3173574.3174049.

Miroshnichenko, Andrey. "AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is 'Yes')." Information 9, no. 7 (2018): 183.

Montal, Tamar, and Zvi Reich. "I, Robot. You, Journalist. Who Is the Author? Authorship, By-lines and Full Disclosure in Automated Journalism." Digital Journalism 5, no. 7 (2017): 829–849.

Monti, Matteo. "Automated Journalism and Freedom of Information: Ethical and Juridical Problems Related to AI in the Press Field." Opinio Juris in Comparatione 1 (2018). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3318460.

Noain-Sánchez, Amaya. "Addressing the Impact of Artificial Intelligence on Journalism: The Perception of Experts, Journalists and Academics." Communication & Society 35, no. 3 (2022): 105–121.

Reiter, Cornelia, and Hanna Risku. "Translation and Technology: A Broader View on the Impact of Technology on Translation Practices." Translation Studies 13, no. 2 (2020): 154–171.

Sambrook, Richard. "Delivering Trust: Impartiality and Objectivity in the Digital Age." Reuters Institute for the Study of Journalism, 2012. https://reutersinstitute.politics.ox.ac.uk/our-research/delivering-trust-impartiality-and-objectivity-digital-age.

Shapiro, Ivor, Colette Brin, Isabelle Bédard-Brûlé, and Kasia Mychajlowycz. "Verification as a Strategic Ritual: How Journalists Retrospectively Describe Processes for Ensuring Accuracy." Journalism Practice 7, no. 6 (2013): 657–673. https://doi.org/10.1080/17512786.2013.765638.

Thurman, Alexander. "How Many Ways Can We Define Online Learning? A Systematic Literature Review." American Journal of Distance Education 33, no. 4 (2019): 289–306.

Túñez-Lopez, M., C. Toural-Bran, and C. Valdiviezo-Abad. "Automation, Bots and Algorithms in Newsmaking: Impact and Quality of Artificial Journalism." Revista Latina de Comunicación Social 74 (2019): 1411–1433. https://doi.org/10.4185/RLCS-2019-1391en.

Túñez-López, José Miguel, César Fieiras Ceide, and Martín Vaz-Álvarez. "Impact of Artificial Intelligence on Journalism: Transformations in the Company, Products, Contents and Professional Profile." Communication & Society 34, no. 1 (2021): 177–193.

Venkatesh, Viswanath, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. "User Acceptance of Information Technology: Toward a Unified View." MIS Quarterly 27, no. 3 (2003): 425–478.

Wu, Yanfang. "Is Automated Journalistic Writing Less Biased? An Experimental Test of Auto-Written and Human-Written News Stories." Journalism Practice 14, no. 8 (2019): 1008–1028. https://doi.org/10.1080/17512786.2019.1682940.

Zagorulko, Daria I. "ChatGPT in Newsrooms: Adherence of AI-Generated Content to Journalism Standards and Prospects for Its Implementation in Digital Media." Mediian Sõlm 1, no. 2 (2023): 50. https://doi.org/10.32782/2710-4656/2023.1.2/50.

Zinsser, William. On Writing Well: The Classic Guide to Writing Nonfiction. New York: HarperCollins, 2006.

About Amira Mohamed El Nemr

Amira Mohamed El Nemr

Assistant Professor, Department of Mass Communication and Children's Culture, Faculty of Postgraduate Childhood Studies, Ain-Shams University, Egypt

https://orcid.org/0009-0002-5898-728

 
