Google is reportedly testing a new product that uses artificial intelligence (AI) to produce news stories. According to sources familiar with the matter, the tool, known internally as Genesis, has been pitched to prominent news organizations, including The New York Times, The Washington Post, and News Corp, the owner of The Wall Street Journal. The technology can take in relevant information, such as details of current events, and generate news content automatically.
Aim of Genesis
Genesis is intended to serve news publishers as a kind of personal assistant for journalists, automating certain tasks to free up time for more critical aspects of reporting. Google views the AI-driven tool as a responsible approach that can help the publishing industry steer away from the pitfalls of generative AI.
However, the product’s pitch has left some executives uneasy, raising concerns about its potential implications for the accuracy and artistry of news reporting. To some, it seemed to take for granted the effort and skill involved in producing well-researched, compelling news stories.
Jenn Crider, a spokesperson from Google, clarified that the company is only in the early stages of exploring ideas to provide AI-enabled tools to assist journalists, especially smaller publishers. She emphasized that these tools are not intended to replace journalists’ essential roles in reporting, fact-checking, and creating articles. Instead, they could offer options for headlines and writing styles.
Reactions and Skepticism Around AI
News Corp responded positively, expressing appreciation for Google’s long-term commitment to journalism. Both The New York Times and The Washington Post, on the other hand, declined to comment on the matter.
Experts and industry professionals are divided on the potential impact of Google’s new tool. Jeff Jarvis, a journalism professor and media commentator, sees both upsides and downsides. He believes that if the technology can deliver factual information reliably, journalists should consider using it. However, he warns against its misuse on topics that require nuance and cultural understanding, which could damage both the tool’s credibility and the reputation of news organizations that rely on it.
News organizations worldwide have been weighing the responsible use of AI tools in their newsrooms. While some have explored AI’s potential to enhance news reporting, others remain skeptical about its impact on journalistic credibility. Several companies, including The Associated Press, already use AI to generate stories, particularly on formulaic subjects like corporate earnings reports, though these remain a small fraction of output compared with articles written by journalists.
Challenges of AI
AI-generated articles raise concerns about the spread of misinformation if the content is not thoroughly edited and fact-checked. Google’s fast-paced development of generative AI has already run into such problems: its chatbot, Bard, has been criticized for making incorrect factual assertions and for failing to direct users to authoritative sources such as news publishers.
Moreover, governments worldwide have pressed Google to share a larger portion of its advertising revenue with news outlets. In response, Google has entered into partnerships with news organizations under its News Showcase program. Even so, some publishers have criticized Google and other AI companies for training on decades of their articles without compensation, prompting news organizations to push back against the unauthorized use of their data by AI systems.
Source: The New York Times