In today’s edition, we talk to Tony Stubblebine about how the online publisher Medium is on track to become profitable this year.
February 7, 2024
Technology

Reed Albergotti

Hi, and welcome back to Semafor Tech.

Semafor’s launch of Signals on Monday got a lot of attention in the tech press, making it one of the top ten articles read on Techmeme, a daily barometer of what people in the industry are talking about.

The tech hook was Microsoft’s partnership with Semafor and other media organizations to provide AI tools that can be used to gather information from around the web more efficiently. Most notably, Semafor is using AI to scour local-language publications in countries like China, which are often ignored by reporters in the U.S.

It’s a great example of a shift that is happening. The advent of social media was a weakening force for media organizations; AI, by contrast, is a strengthening one.

Social media turned some journalists into stars and helped juice traffic numbers for almost every major publication. But the targeted advertising business, turbocharged by social media, siphoned money away from high-quality publications, and the traffic turned out to be an empty promise.

AI is different. When people think of AI and news, the first thing that comes to mind is reporters being replaced by bots. While a handful of outlets like CNET and Sports Illustrated have been tempted to try this, those examples are anomalies. AI-generated content is more or less spam, and spam doesn’t replace journalism. If anything, it drives consumers toward trusted publishers.

Medium CEO Tony Stubblebine recently told me that his site is getting flooded with AI-generated content, but it’s quickly being outed by humans. At the same time, Medium has given up on trying to compete with journalism, instead focusing on what could be seen as traditional blogging. Read below for my conversation with him.

Move Fast/Break Things

➚ MOVE FAST: Listening. Uber CEO Dara Khosrowshahi heeded investors’ growing intolerance for the “growth at all costs” mantra of many tech firms. That pivot finally paid off two years later, with the company today reporting its first full-year profit.

➘ BREAK THINGS: Talking. Snap continued to blame the fighting in Gaza for disappointing earnings, even though rivals like Meta saw a jump in quarterly ad revenue. Meanwhile, Adam Neumann’s advisers have told bankrupt WeWork that he wants to buy the company he founded and has the support of hedge fund Third Point, which then told CNBC it had not committed any financing.

Artificial Flavor

The condom company Skyn has a Valentine’s Day ad about human intimacy that uses — you guessed it — AI chatbots to make its point. It depicts an uncommunicative couple at their side-by-side computers. When they leave, two AI chatbots living in the computers start commenting on the couple’s lack of intimacy, and the digital characters grow more fond of each other.

With the Super Bowl coming up, we may see more AI chatbots used in creative ways to sell goods, and we will see companies marketing the technology itself. That tells us two things. First, AI is quickly being woven into the public consciousness in a deep way. Second, it may be a bad omen for AI firms: when tech products are pitched during the Super Bowl, it doesn’t always end well. (See Pets.com, crypto, etc.)

Q&A

Tony Stubblebine became the CEO of Medium in 2022 and was the platform’s largest publishing partner through his Better Humans outlet. He replaced Evan Williams, years after they worked together at Odeo, which later became Twitter. Medium is on track to be profitable this year.

Q: Last year, you decided not to allow AI companies to crawl and use your content for training. Has your opinion changed? One way to look at The New York Times lawsuit is that it’s a contract negotiation that has now spilled into court.

A: It’s a contract negotiation, absolutely. And that’s why it’s been hard to get a coalition going. I think there’s a huge moral criticism of the AI companies, that they put all the content creators in a position where, in order to start the contract negotiation, they had to fight. There was a different way to do it, which was to get all this stuff negotiated before you trained. Our authors feel very deeply that it’s just fundamentally unfair.

There was no consent, no credit, no compensation. And that’s very different from Google, where we’re all happy with the exchange of value: Google crawls our content and sends traffic in exchange. With OpenAI, there’s no value you get back from having them train on your content. It’s natural for people to opt out. There’s no reason for us to be here unless you want to come to the table and negotiate something.

I feel like there’s a competitive advantage coming, where some AI companies are going to do these deals and be able to train on better data. All that stuff can be really valuable to an AI company as they compete amongst themselves.

Q: What should you get?

A: A reason for Medium to be in the conversation is that we actually can represent the UGC [user-generated content] world, because it’s already in our model to pass that money back. Whereas, if Reddit were to do this deal, they’d just keep all the money for themselves. Our negotiation is as a service for the authors. We’ll do the negotiation, we’ll take a cut for lawyers, and send the rest back to the authors.

We see how far away the authors are emotionally from wanting this money, because it’s not that much. What’s been put on the table is pennies per piece. It’s not going to matter. There isn’t a number that people will pay to make it matter. An AI company CEO (and I refuse to name which one, but you could guess) did offer us low single-digit millions of dollars, and our authors shrugged at it. It’s not nearly enough.

Q: Are you seeing AI-generated content on the platform?

A: That’s the worst part of it. It’s not just that there’s no exchange of value. It’s a huge cost now. Spam got cheaper. Every couple of weeks, we test the latest AI detection tools. None of them are good enough for platform use. But the good news is that humans spot this stuff pretty easily.

But even if they’re misidentifying shitty writing as AI writing, it doesn’t matter. The whole point of having humans look at it is to find the stuff worth recommending. And this is one of the side benefits of putting humans in the loop of recommendations again. This is not why we did it — we did it because we think human expertise and human curation are really valuable. But as soon as the AI-generated content started showing up, it was the humans who spotted it immediately. So we have a lot of it on the platform, but for the most part, it stays out of the recommendations because it’s all trash.
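
To put the detection problem in perspective: a back-of-the-envelope calculation, using assumed numbers rather than Medium’s actual figures, shows why even a seemingly accurate detector is a poor fit for platform-scale moderation.

```python
# Rough illustration of why a "99% accurate" AI detector isn't good enough
# for platform use. Both numbers below are assumptions, not Medium's data.
daily_posts = 1_000_000       # hypothetical posts submitted per day
false_positive_rate = 0.01    # detector wrongly flags 1% of human writing

wrongly_flagged = daily_posts * false_positive_rate
print(f"{wrongly_flagged:,.0f} human posts wrongly flagged per day")  # 10,000
```

At that scale, automated flagging alone would punish thousands of legitimate writers every day, which is why human review of what gets recommended matters more than a detector’s headline accuracy.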

Q: Is it just people trying to game the system and make easy money?

A: Yes, but they don’t make easy money. This is one of the ways that the creator economy went wrong. At the top end of the creator economy, it pulled people who were already professional writers out of organizations. That’s great for them. But the majority of it, and the reason we’re able to call it an economy, is that as an industry we’ve pitched a get-rich-quick scheme where anyone can get rich by gaming the system.

There are a lot of problems with that, but one that people don’t talk about enough is that it’s a local maximum. You could grind $100 a month out of Medium with trash. It’s absolutely possible to do that. But why would you bother doing that when you could instead use writing to develop an expertise or mastery that people would pay hundreds or thousands of dollars for?

I had a tech writer complaining to me that their earnings [on Medium] had dropped from $175 to $75 a month. And I was like, you’re writing about emerging AI trends. You have this opportunity to be a recognized subject matter expert on a thing that people will pay you hundreds or thousands of dollars for, and your goal is instead to protect $175 a month. You’re crazy. You’ve completely missed where the value is in writing.


Q: How did you shift from a loss to a profit?

A: We did have a Goldilocks problem. We tried it too high and too low. We spent a lot of money bringing on our own journalists. The impulse was obvious: if you take the logic of a subscription, it’s got to be of a high enough quality that people would pay for it. But when you bring in a bunch of journalists, it goes against what makes blogging great — that you get to hear personal experiences rather than reported experiences. And the economics just didn’t work at all, because it’s expensive to write about things that you don’t already know. That’s why journalists have to be paid a lot to get a high-quality piece up. The reporting is really time intensive.

Q: Do you feel guilty turning a profit in 2024 when everyone is getting laid off?

A: It is very counter-narrative, even just for our own history, to be doing well while everyone else is struggling, including across tech if you’re not in AI. And this is already baked into the current growth trajectory.

We got rid of what we called “owned and operated publications,” and just leaned into what was on the platform, what we were recommending on the platform. The end result of that spending was that we just paid for a lot of clickbait and content that otherwise wouldn’t even exist on the internet. That was exactly the wrong way.

The Goldilocks place is that we had to redo our recommendations in a way that would work for content that people find valuable, and put topic experts in the loop. So they’re out there just spotting people, whether they have an audience or not.

The thing that people find valuable is authentic personal experiences from people who know what they’re talking about. And those people tend not to be in the creator economy. The whole idea that you have to build an audience to be heard is sort of at odds with the people who are living a life worth talking about.

In order to spot those people, you have to create a custom system that’s not gameable in the way that the creator economy tends to game platforms.

Read here for the rest of the conversation, including whether Taylor Swift posts are popular on Medium.  →

Friends of Semafor

Navigate the world of Big Tech with Big Technology. Join independent journalist Alex Kantrowitz as he reports on the untold stories surrounding tech titans like Amazon, Apple, and NVIDIA. In a tech landscape dominated by extreme voices, Big Technology’s nuanced and unbiased reporting is a breath of fresh air. Avoid the extremes – subscribe today and get an exclusive 30% off.

Semafor Signals

Semafor launched in October 2022 with a philosophy of presenting our audience with reliable facts and sophisticated, diverse insights. Our Semaform story structure, which separates facts from analysis, embodies that approach. And you seem to like it!

So we’re announcing the launch of our biggest new product since then: a global, multi-source breaking news feed called Signals. Our journalists, using tools from Microsoft and OpenAI, will offer readers diverse, global insights on the biggest stories in the world as they develop, on our gorgeous site, Semafor.com, as well as on other platforms like this one.

Read more about our attempt to address the problems of fragmented, polarizing breaking news on the internet in a memo from editor-in-chief Ben Smith and executive editor Gina Chua. →

Watchdogs

White House official Elizabeth Kelly was named today as head of the U.S. government’s new AI Safety Institute, which will develop guidelines, evaluate models, and pursue research on the risks and opportunities of the technology. Last week, the Biden administration brought together top officials as part of the AI Council, where they reported they had completed the tasks assigned to them in an executive order last year.

Some of the measures are sweeping, such as using the Defense Production Act to require developers to report safety test results to the Commerce Department, and forcing U.S. cloud companies like Amazon, Microsoft, and Google to disclose overseas customers using large amounts of compute resources to train AI models. Without more specific criteria, that seems like an overreach, and as our colleague Louise Matsakis pointed out in a story last year, it could also be moot given pushes to make these models smaller.

Meanwhile, various U.S. states, China, the EU, and others are also weighing in with their own proposed rules. Companies are becoming more vocal, too, partly to avoid a patchwork of measures. David Zapolsky, Amazon’s senior vice president of global public policy, called for U.S. federal frameworks for safe and responsible innovation, developed in concert with like-minded trading partners such as Australia. So far, though, the U.S. is lagging a bit when it comes to AI regulation, and it remains to be seen whether the latest White House moves can push it forward.

Obsessions

Some users of ChatGPT have been lamenting that, in the name of “safety,” the tool has been chastened so much that it’s become less useful. It’s an issue we’ve covered occasionally in this newsletter. This morning, Dylan Patel, chief analyst at SemiAnalysis, found a way to get ChatGPT to divulge a set of instructions that shows how it works, which Patel calls a “systems prompt.”

Patel, whom we’ve interviewed here before, has become an expert on how GPT-4 works. The prompts limit what ChatGPT is allowed to say and what the image generator DALL-E is allowed to create. For instance, DALL-E is instructed to include a diverse set of ethnicities when depicting people and never to show all people of a certain profession as the same gender. In an apparent attempt to avoid copyright issues, ChatGPT is told: “EXTREMELY IMPORTANT. Do NOT be thorough in the case of lyrics or recipes found online.” The prompt also limits the length of summaries. “When asked to write summaries longer than 100 words, write an 80 word summary,” the instructions say. “Laziness is literally part of the prompt,” Patel wrote on X.
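
For readers unfamiliar with the mechanics: a system prompt is simply a hidden instruction message that the provider prepends to every conversation. Here is a minimal sketch of how one constrains a model through the OpenAI Python SDK; the model name and prompt are illustrative (one line is quoted from the instructions Patel surfaced), not OpenAI’s actual configuration.

```python
# Minimal sketch: a system prompt silently shapes every response.
# The prompt text and model choice here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "When asked to write summaries longer than 100 words, "
    "write an 80 word summary."  # line quoted from the surfaced instructions
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # hidden from the user
        {"role": "user", "content": "Summarize this article in 200 words: ..."},
    ],
)
print(response.choices[0].message.content)  # comes back at roughly 80 words
```

Because the user never sees the system message, behavior like truncated summaries can look like the model simply being “lazy.”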

OpenAI, the company behind ChatGPT, is not alone here. Many companies creating large foundation models have tried to dial them back, fearing users will prompt the models into responses that embarrass the companies or say offensive things. Aravind Srinivas, CEO of Perplexity, posted on X that Google Bard told him that the “effective accelerationist” movement is a “dangerous and harmful ideology.” Srinivas, a former researcher at OpenAI, theorized that the response was due to “reinforcement learning from human feedback,” a technique used to focus and hone large language models.
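
For context, a sketch of what that technique optimizes: reward models in RLHF are typically trained on pairs of responses ranked by human labelers, and the language model is then tuned toward higher-reward outputs. The code below is an illustrative toy version of that pairwise objective, not any lab’s actual training code.

```python
import torch
import torch.nn.functional as F

# Toy sketch of the pairwise (Bradley-Terry style) loss commonly used to
# train RLHF reward models. Human labelers pick the better of two responses;
# the reward model learns to score the chosen one higher than the rejected one.
def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical scalar rewards for a batch of two labeled comparisons.
loss = preference_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
print(loss.item())  # loss shrinks as chosen responses outscore rejected ones
```

Because the rewards come from human raters’ preferences, cautious or hedging answers can end up systematically favored, which is one plausible mechanism for the chastened behavior users complain about.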

I tried to reproduce what Patel did, and got a response that partly matched his, but didn’t go into nearly as much detail. OpenAI did not immediately respond to a request for comment. A Google spokesman said the response was a hallucination and against its policies.

One takeaway is that there is real market demand for a chatbot that will not hold back. If AI is the next great technology platform, then OpenAI is kind of like AOL. People are waiting for Netscape. (This isn’t necessarily a knock on OpenAI; AOL was hugely successful!) It’s also a reminder that there is so much power and capability that has yet to be unlocked in large language models.
