AI will not replace journalists, but it will replace those who do not know how to work with it. This thesis has been repeated in various forms for some time now, both in technology companies and in the media business. Like any innovation, large language models need time to become a routine, widely used tool. Many newsrooms have accepted this and are watching how and where AI can help: systematizing large data sets, improving user experience and customer support, pulling the key points out of long meetings and summarizing them. In other words, tasks that machine learning can handle in minutes, where a person would need either a team or far more time. This does not necessarily mean investing large resources, but it does require openness to experiments and a clear understanding of your project's goals. Here is a look at what, how, and where newsrooms around the world are putting AI to work.
Less effort, more results
The Connecticut Mirror (CT Mirror), a local American newspaper, lacked the resources to cover all the important events in the state of Connecticut. Much of its news concerns local political and municipal issues, such as local government meetings, and the work centers on large data sets and reports that require painstaking analysis. So the outlet decided to turn to artificial intelligence and create a new position dedicated to working with these innovations.
CT Mirror now has a new data reporter and AI product developer, Angela Eichhorst, Poynter reports. Eichhorst's job is to use large language models (LLMs) to analyze official documents, legislative texts, and archives, track bills, and transcribe local government meetings in search of story ideas.
Before Angela Eichhorst joined CT Mirror, managing editor Steven Busmeier was already experimenting with AI. He created four tools trained on various data about Connecticut government: state laws, a directory of government information, the voting record of each member of the General Assembly, and the sources local politicians use when writing legislation. Analysis at this scale is normally beyond the reach of a local newsroom, so these tools let CT Mirror produce distinctive, socially relevant stories and stand out from the competition.
Busmeier is taken with the idea of a product like Ask The Post AI, The Washington Post's chatbot that answers certain queries based on the publication's archive of materials, and would like to build something similar for his own outlet. The newsroom still has to edit the AI's work, though. The managing editor compares it to "an intern who overdid it with caffeine": it can slip up on any detail, so everything has to be checked manually.
Currently, among other things, the team is building an AI tool for video scraping, that is, automatically collecting and systematizing data from publicly available videos on the internet, to extend the newsroom's reach. Their primary goal is to improve the quality of their journalism, and AI is a way to do this more efficiently and with more benefit for the reader.
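CT Mirror has not said which tools its video pipeline will rely on, but a minimal sketch of what "video scraping" can look like, assuming the open-source yt-dlp downloader and the openai-whisper transcription package and a placeholder URL, might be:

```python
# Sketch only: download a publicly available meeting video and turn it into searchable text.
# Assumes the yt-dlp CLI and the openai-whisper package are installed; the URL is hypothetical.
import subprocess
import whisper

VIDEO_URL = "https://example.com/town-council-meeting"  # placeholder, not a real source

# Extract the audio track with yt-dlp (post-processing converts it to mp3).
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "meeting.%(ext)s", VIDEO_URL],
    check=True,
)

# Transcribe the audio with Whisper so the text can be indexed and searched for story leads.
model = whisper.load_model("base")
result = model.transcribe("meeting.mp3")
print(result["text"][:500])  # first 500 characters of the transcript
```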
Big little changes
Media outlets analyze user engagement in much the same way: how a distinctive or insightful piece reaches a reader and keeps them coming back for more. The Financial Times (FT) experiments with innovation often, but it is not always about big or resource-intensive projects. More often it is about solving a specific problem effectively with minimal resources. Bella Cockarill, the publication's senior product manager, described on the FT's product and technology blog how they set out to improve reader engagement in the comments section.
Since comments strongly influence reader engagement and loyalty, the FT wanted to make them more visible and encourage readers to keep exploring the section; after all, relatively few readers use this kind of interaction on news sites. To do this, a short question appeared on the page once the reader had scrolled roughly two-thirds of the way through an article. In this way, the FT nudged the audience toward a brief but deeper reflection on the text, which could prompt them to go and leave a comment under the article.
The team generated the questions with artificial intelligence: using a chain-of-thought prompt, the model was first asked to summarize the article and then to suggest hypothetical discussion questions about it, with constraints on tone and wording (such as using the appropriate tense, giving a clear lead-in, and avoiding abbreviations). The generated questions were always reviewed and, where necessary, revised by the publication's editors, so nothing was published without oversight. At the same time, AI cut the time needed for a task that would otherwise have demanded far more of a person.
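The FT has not published its exact prompts, so the following is only a minimal sketch of such a two-step chain, assuming an OpenAI-style chat completions API; the model name and prompt wording are illustrative, not the FT's own:

```python
# Sketch of a "summarize, then ask" prompt chain; model name and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_discussion_question(article_text: str) -> str:
    # Step 1: ask the model to summarize the article.
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarize this article in three to four sentences:\n\n{article_text}"}],
    ).choices[0].message.content

    # Step 2: derive one discussion question from the summary, with tone constraints.
    question = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": ("Based on the summary below, suggest one open-ended discussion question "
                               "for readers. Use the present tense, start with a one-sentence lead-in, "
                               f"and avoid abbreviations.\n\nSummary:\n{summary}")}],
    ).choices[0].message.content

    return question  # an editor still reviews and, if needed, rewrites this before publication
```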
The results were, as expected, modest but still positive: comment views increased by 3.5% overall, and there was an 11.5% increase in views from readers who had not visited the section in the previous month. The overall tone of the comments also changed. Because the question was clearly tied to the topic of the article, comments became more focused and discussions stopped drifting into unrelated territory. Next, the Financial Times plans to test the feature in its news app.
Chatbots for everyone
The FT, through its consulting arm FT Strategies, also helps bring AI solutions further afield, to regional publications around the world. Al-Masry Al-Youm is an Egyptian daily that has been published in print for over 20 years and has a smaller presence online. As part of a joint initiative between FT Strategies and the Google News Initiative, the publication worked on using AI to improve the user experience, make more sustainable use of its content, and extend the life of its archive of millions of pages of material. The program's results were later published in a report describing the cases in detail.
Following The Washington Post's example, the newsroom created a chatbot (in collaboration with a platform that develops AI-based APIs). Using LLM-powered semantic search, it gives clear, meaningful, yet simple answers to users' questions about local or regional news. The answers are based not on open data from the web but on material previously published on the Al-Masry Al-Youm website. The result is the first Arabic-language AI product of its kind, and the newsroom will now assess its potential through detailed measurement of metrics.
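The publisher has not documented its implementation, but the general pattern of answering only from an outlet's own archive is retrieval-augmented generation. A minimal sketch, assuming OpenAI-style embedding and chat APIs, illustrative model names, and a small in-memory archive rather than a real vector index, might look like this:

```python
# Sketch: answer a reader's question using only the publisher's own archive (retrieval-augmented generation).
# Model names are illustrative; a production system would precompute embeddings and use a vector index.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

def answer_from_archive(question: str, articles: list[dict]) -> str:
    """articles: [{"title": ..., "url": ..., "text": ...}, ...] taken from previously published material."""
    doc_vecs = embed([a["text"] for a in articles])  # in practice these are computed once, not per query
    q_vec = embed([question])[0]
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    top = [articles[i] for i in np.argsort(scores)[::-1][:3]]  # three most relevant articles

    context = "\n\n".join(f"{a['title']} ({a['url']})\n{a['text']}" for a in top)
    prompt = ("Answer the reader's question using only the articles below. "
              "If they do not contain the answer, say so.\n\n"
              f"Articles:\n{context}\n\nQuestion: {question}")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```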
As part of the same program, Ruhr Nachrichten, a German newspaper from Dortmund that covers regional topics, created a similar product. As with the Egyptian publication, the goal was to make finding and consuming information easier while extending session length per user. Their German-language, machine-learning-based chatbot searched the publication's content from the past 30 days and generated a short answer with relevant links.
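The report does not describe the mechanics, but restricting the same retrieval step to recent material is a small change. A short sketch, assuming each archive item carries a "published" timestamp, could be:

```python
# Sketch: limit retrieval to the last 30 days so answers stay tied to current coverage.
from datetime import datetime, timedelta, timezone

def recent_articles(articles: list[dict], days: int = 30) -> list[dict]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [a for a in articles if a["published"] >= cutoff]

# The retrieved articles' URLs are then included in the prompt (as in the sketch above),
# so the generated answer can point readers back to the original pieces.
```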
The results were good and even slightly exceeded expectations. During testing, for instance, the chatbot beat the benchmark of 85% of answers matching the query, reaching 91%. There was also feedback about AI hallucinations in the answers, so at this stage the team added a disclaimer and usage guidelines to stay transparent.
This approach can already be considered a full-fledged tool for personalizing news search and a response to the "confrontation" between Big Tech and the media business. Instead of pitting one against the other, such outlets accept that the information landscape and user behavior are changing and treat AI as support rather than an enemy. Keeping information created and verified by human reporters but distributed through AI automation is useful because it personalizes and simplifies the delivery of results for a specific user query. It is also safer than letting artificial intelligence loose as an unsupervised resource, because the information is not lifted from behind a paywall or stripped of attribution to the original source.
An assistant, not a panacea
As mentioned in the section on the FT experiment, AI can act as an assistant for the simplest but most repetitive and time-consuming tasks. Some of them artificial intelligence already handles better than humans, so it is not only a matter of efficiency.
Hilke Schellmann, an Emmy Award-winning investigative journalist and associate professor of journalism at New York University, tested how well various popular AI-based chatbots can summarize meetings.
Together with fellow researchers and journalists, Schellmann tested ChatGPT-4o, Claude Opus 4, Perplexity Pro, and Gemini 2.5 Pro. Each was given the same transcripts of local government meetings from three different cities and states and asked to generate three short (200-word) and three long (300-word) summaries of each meeting. Humans performed the same task so the results and the LLMs' performance could be compared. It turned out that short summaries are better and faster to produce with AI: all the chatbots handled the task well, with ChatGPT-4o performing best. In particular, their texts contained more facts than the human ones and were almost free of hallucinations.
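Schellmann's team worked through the chatbots' consumer interfaces, but the core of such a test is easy to reproduce programmatically. A minimal sketch, assuming an OpenAI-style API, an illustrative model name, and placeholder transcripts, might be:

```python
# Sketch of the summarization benchmark: fixed-length summaries of the same meeting transcripts.
# Model name and transcripts are placeholders; the study compared several chatbots and human writers.
from openai import OpenAI

client = OpenAI()

def summarize(transcript: str, word_limit: int) -> str:
    prompt = (f"Summarize the following local government meeting transcript in about {word_limit} words. "
              f"Include only facts that are stated in the transcript.\n\n{transcript}")
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

transcripts = {"city_a": "...", "city_b": "...", "city_c": "..."}  # meeting transcripts go here

results = {}
for city, text in transcripts.items():
    results[city] = {"short": summarize(text, 200), "long": summarize(text, 300)}

# Each machine summary is then scored against a human-written baseline,
# for example by counting verifiable facts and flagging statements not found in the transcript.
```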
We will likely see such capabilities develop even further in the near future. Companies are actively building AI assistants into their video products to help transcribe video conferences and take notes. Google, for example, will soon launch the Ask Gemini AI assistant in Google Meet. Among other things, it will be able to summarize what individual participants said on a call and generate a recap for anyone who joined the meeting late. The feature will be limited to individual users, though, and may also hallucinate, which Google warns about while asking users to check the content themselves.
However, it is worth resisting complete fascination with machine learning and actually doing what the disclaimers say. Hilke Schellmann's goal was to assess the accuracy of chatbots honestly, and it is far from beyond question. While AI was nearly flawless on short summaries, longer versions produced more hallucinations and irrelevant information. And although the AI finished the task in about a minute, versus the 3-4 hours a person would need, the human summaries contained significantly more facts.
The picture was even worse for AI tools that compile lists of relevant scientific literature for journalists: none of the five resources produced a list that met the benchmark, each returned a different list a few days later for the same prompt, and the examples of scientific papers they selected were poor. The conclusion from the experiment: not every task should be handed to AI, though it is reasonable to expect these problems to be addressed in future updates and new generations of products.
We need to look for improvements, not new things
Scandinavian media analyst and journalist Thomas Baekdal was sharply critical of partnerships between AI companies and publishers in a recent newsletter. It is hard to call them partnerships, he argues, because the AI companies get far more out of the deal, essentially exploiting the media's work. But beyond the offending tech giants, he also pointed to an important nuance: publishers share the blame for the fact that people no longer like news.
They chase greater reach, higher click-through rates and, accordingly, a bigger audience and more revenue, but mostly through workarounds rather than genuine growth. As an illustration, Baekdal drew a detailed parallel with how robot vacuum cleaners have evolved over time, from the simplest models (which were still the best possible at the time of release) to the latest ones with the most advanced navigation available. In the technology world, each product is judged by whether it performs its task better and better than its competitors. That progress is easy to track over time, and if you ask for feedback on a particular model, you can quickly identify the "best product." Publishers should approach new features and innovations in the same way. Will it improve the user experience? Is it really needed? What problem will this or that AI integration solve? These and similar questions (the "5 whys" method works well here) are a must for any publication that wants to use artificial intelligence truly effectively.