
Gina’s view
You go to war with the audience you have, not the audience you might wish to have.
— paraphrased from Donald Rumsfeld
Journalists have a lot of questions about AI: Can it automate the tasks I hate? What are the ethics of using it in reporting and writing? Will I ever be able to trust chatbots? Are Big Tech companies stealing our intellectual property? Am I about to be fired?
All those are good and important questions — certainly to us.
But the bigger question, one that doesn’t get as much ink, isn’t about what happens to those of us who create the content. It’s about how AI will change the needs and habits of the people we serve — our readers, viewers, listeners — and about how the news industry can adapt to those changes.
That’s not a sometime-in-the-future issue. Search traffic is already down, as users increasingly depend on AI-generated summaries in Google and other search engines to get the information they need rather than plow through links and multiple stories on the same event.
But that’s just a tiny taste of what’s coming. Generative AI promises to revolutionize how people interact with information — how they’ll come to it, what they’ll expect from it, and what they’ll do with it. In the process, it’ll upend what we think of as a “story” — not just the words we put on paper but the idea of what might be worthy of coverage. It’ll force us to rethink what we create and who we create it for.
The questions multiply: If readers increasingly come to expect that stories will be created on the fly for them, and personalized to what AI knows of their level of knowledge and interest, what happens to the carefully crafted narratives that journalists pour hours (and days and weeks, sometimes) into writing? Will our value be increasingly measured in the questions we ask, the facts we gather, the insights we have, and the relationships we have with our audience, rather than in our words? If we’re increasingly creating content — facts, analysis, context — that machines will ingest and rewrite, what forms should our output take? Will we have to learn a new type of LLM-centric search engine optimization? What new technical standards will come to dominate this world, and how might they help or hurt the mission of informing the public?
What markets might open up if news no longer has to be a one-size-fits-all story that tries to reach as much of our target audience as possible, and can instead be machine-tailored to niche groups? Could we create tools that can help readers read between the lines, as it were, to understand what parts of articles are facts, what parts are analysis, and what parts are the reporter’s assumptions? Or help them compare how multiple news organizations cover the same event? (It’s not hard to build them.)
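One way to see why such tools are “not hard to build” is that most of the work is prompt construction around an off-the-shelf language model. A minimal, hypothetical sketch follows; the function names are illustrative and the actual model call (to whatever chat-completion API a newsroom uses) is deliberately left out:

```python
# Hypothetical sketch: prompt builders for two reader-facing tools —
# labeling sentences as fact/analysis/assumption, and comparing how
# different outlets cover the same event. The prompts would be sent to
# any chat-completion LLM; that call is omitted here.

def build_labeling_prompt(article_text: str) -> str:
    """Ask a model to label each sentence of an article."""
    return (
        "Label each sentence of the article below as FACT (a verifiable "
        "claim), ANALYSIS (an interpretation of facts), or ASSUMPTION "
        "(an unstated premise of the reporter). Return one label per "
        "sentence, in order.\n\n"
        f"Article:\n{article_text}"
    )

def build_comparison_prompt(articles: dict) -> str:
    """Ask a model to compare several outlets' coverage of one event.

    `articles` maps an outlet name to that outlet's article text.
    """
    sections = "\n\n".join(
        f"--- {outlet} ---\n{text}" for outlet, text in articles.items()
    )
    return (
        "The articles below cover the same event. Note facts reported by "
        "only one outlet, and describe differences in framing and "
        "emphasis.\n\n" + sections
    )
```

The hard part, of course, is not the code but the editorial judgment: deciding whether the model’s labels are reliable enough to put in front of readers.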
It’s not just text. Some news organizations are already using machine-generated illustrations to accompany stories. Others have experimented — more controversially — with using AI to synthesize news anchor reports. And in 2022, Semafor worked with an artist to illustrate and animate an interview using AI tools to create a haunting video interpretation of life in Ukraine under Russian occupation.
And that’s not counting the myriad ways it can ease the work we currently do, from helping edit and proofread stories, to scanning our drafts and looking for related stories on the web, to suggesting and building charts to go with them. All of which we’re already using AI for at Semafor.
None of this is to suggest the coming changes are an unmitigated good, or even a mitigated good. As news gets personalized, the danger is that we each fall into filter bubbles of one: carefully curated versions of reality that play to our biases and never challenge our worldview. The challenge will be to find ways to create shared realities while acknowledging multiple perspectives and interests — something news organizations haven’t exactly excelled at.
And then, of course, there’s the question of how, or whether, the people creating all this information can make a living from doing so. If we’re not being paid in traffic and ad impressions, or if “news” comes to mean an aggregation of information from a host of different news sources, what business model will support that work? Will it come from what we know or who trusts us rather than what we write? Perhaps, as Ben Thompson at Stratechery suggests, the value won’t be in the content we create but in the communities around that information that we foster.
There are a lot of questions, and not many answers. But if the AI revolution is anything like the internet and social media revolutions — and the evidence so far is that this will be much more disruptive than those two — we don’t have a lot of time to figure this out.

Room for Disagreement
Some see good news in this picture. There’s more appetite than ever for trusted individual voices across different mediums, but particularly video. And the core act of reporting — gathering information that isn’t yet available in digital form — is, for now, nearly impossible to duplicate with AI. Meanwhile, live events, whether in person or online, may take on more importance as a way for audiences to connect with newsmakers and understand key issues. (Not coincidentally, by the way, these are key planks of Semafor’s strategy.)

The View From the Copy Desk
This story was read and reviewed by a human editor, whom Semafor also still employs. (The editor did have help from a bot.)

Notable
- Publishers are terrified of “Google Zero”: the day the search behemoth stops sending traffic to news sites.
- The New York Times’ copyright suit against OpenAI continues, even as other publishers have reached agreements with the tech company.
- Perplexity announced a program to share revenues with publishers when it uses content from their news articles in its answers.