In today’s edition, we have a scoop on how the surge in popularity of vibe coding is leaving apps vulnerable.
Semafor Technology
May 30, 2025
Reed Albergotti

We know large language models have a tendency to hallucinate (see RFK Jr.’s AI mishap below), but there’s another looming problem that hasn’t quite hit the mainstream consciousness yet: Vibe coding security issues.

We’re about to see a wave of vibe coding apps flood the internet, and most of them will pay little attention to security.

This is a perfect storm. As consumers, we’ve come to take basic security for granted. Part of that is thanks to mobile platforms: There are only two major mobile operating systems in the world, and they require apps to work within strict sandboxes that limit security risks.

Vibe coding upends that dynamic. As I scooped yesterday, users of the fast-growing Lovable service are pumping out apps with almost no basic security, leaving their personal information exposed to any hacker with rudimentary skills. (Lovable responded on X: “We’re not yet where we want to be in terms of security and we’re committed to keep improving the security posture for all Lovable users.”) Lovable is not alone. There are dozens of similar services, and vibe coders often use several at a time.

Today, everyone from kids in their basements to state actors has automated scripts that scan the web for known vulnerabilities. That means it’s almost a certainty that all that personal information has already been hoovered up by many different entities.

Like hallucinations, security for vibe coding is a solvable problem. But it’s not going to be easy. For startups willing to take on that challenge, there’s a massive opportunity.

Move Fast/Break Things

➚ MOVE FAST: Trial. In a major win for the crypto industry after a crackdown under the previous US administration, the SEC dropped its civil lawsuit against Binance that accused the exchange of misleading investors and other charges. Its founder, Changpeng Zhao, has also applied for a pardon from US President Donald Trump for a related criminal conviction.

➘ BREAK THINGS: Error. A US administration threat to revoke the visas of Chinese students would target a big source of university graduates with PhDs and a major AI talent pipeline. Stanford’s Fei-Fei Li highlighted the risks that would pose to American innovation at a recent Semafor event.

Memory Lane
Microsoft Chief Technology Officer Kevin Scott speaks during the Microsoft Build conference opening keynote in Seattle, Washington on May 19, 2025.
Jason Redmond/AFP via Getty

Conquering agentic memory has proven difficult, and AI companies are all pedaling in their own lanes to reach the finish line first. It’s a tricky balancing act of giving their agents enough memory to complete tasks efficiently but not so much that they fall into old patterns. Developers largely agree it should look similar to human memory — short- and long-term recollection, broad context, precision when it matters, the ability to prioritize certain experiences and “forget” irrelevant ones.

Last week at Microsoft’s developer conference, Chief Technology Officer Kevin Scott said the solutions for AI memory problems should mimic the systems humans have created to train our own brains. The company launched “structured RAG,” what Scott called a “biologically inspired technique” that organizes data into formats to improve recall and reasoning over large data sets.
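
The idea behind structured RAG is to decompose source material into typed fields before retrieval, rather than embedding raw text blobs, so an agent can recall the right record precisely. A toy sketch of that intuition, scoring structured fields above free text (the record fields, weights, and scoring rule here are illustrative assumptions, not Microsoft’s implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A document decomposed into structured fields instead of one text blob."""
    topic: str
    summary: str
    details: str
    tags: list[str] = field(default_factory=list)

def retrieve(records: list[Record], query: str, k: int = 2) -> list[Record]:
    """Rank records by keyword overlap with the query, weighting the
    structured fields (topic, tags) above free text -- the intuition
    behind organizing data to improve recall over large sets."""
    q = set(query.lower().split())
    def score(r: Record) -> int:
        return (3 * len(q & set(r.topic.lower().split()))
                + 2 * sum(t.lower() in q for t in r.tags)
                + len(q & set(r.summary.lower().split())))
    return sorted(records, key=score, reverse=True)[:k]

# Hypothetical agent "memory" with two records of different types.
memory = [
    Record("agent memory", "how agents store context", "...", ["memory", "recall"]),
    Record("billing", "invoice pipeline notes", "...", ["finance"]),
]
top = retrieve(memory, "improve agent memory recall", k=1)
```

A real system would retrieve with embeddings rather than keyword overlap, but the structure-aware weighting is the part that distinguishes this from flat-document RAG.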

Amazon and Google have their own structured RAG-like offerings, but each is approaching it through the lens of its core business. Amazon is selling storage in Bedrock, and Google is promoting its short-term memory search agent.

“AWS has the feature set but doesn’t get into it very deep,” said Jason Andersen, Moor Insights & Strategy analyst. “Feature/function-wise, Microsoft might have an edge, but performance and capability-wise, the edge might fall to Google in a head-to-head test.”

IDC’s Ritu Jyoti said Anthropic and OpenAI are leading the pack on memory capabilities. Anthropic’s Model Context Protocol (MCP), which connects agents to outside data, its contextual RAG, and its agents’ ability to manage their own memory make usage simple and reliable. OpenAI’s strength comes from the control it gives developers, including the ability to view, edit, and delete memory, she said.

Google, OpenAI, and Anthropic are all marketing large context windows as part of the solution, but Scott said they don’t mimic natural human processes. “You don’t brute force everything in your head every time you need to solve a particular problem,” he told reporters and analysts last week. His stance is another example of how builders of next-generation technologies increasingly look to human qualities over artificial ones.

— Rachyl Jones

Sitting Ducks
The logo of AI startup Lovable.
Lovable

Lovable, the popular vibe coding app that describes itself as the fastest-growing company in Europe, has failed to fix a critical security flaw despite being notified about it months ago, according to a new report by an employee at a competitor.

The report’s author, an employee at AI coding assistant company Replit, says he and a colleague scanned 1,645 Lovable-created web apps featured on the company’s site. Of those, 170 allowed anyone to access information about the sites’ users, including names, email addresses, financial information, and secret API keys for AI services that would let would-be hackers run up charges billed to Lovable’s customers.
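
The kind of scan the report describes boils down to requesting each app’s data endpoints with no credentials and checking whether user records come back anyway. A minimal, defensive sketch of that check (the endpoint path and field names are hypothetical illustrations, not taken from the report or from Lovable’s actual API):

```python
import json
import urllib.request

def exposes_user_data(base_url: str) -> bool:
    """Request a common user-data endpoint with no auth token.

    Returns True only if the server answers with what looks like a
    list of user records. The "/rest/v1/users" path and the "email" /
    "api_key" field names are illustrative assumptions.
    """
    try:
        req = urllib.request.Request(
            f"{base_url}/rest/v1/users",
            headers={"Accept": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            data = json.loads(resp.read())
    except Exception:
        # Auth error, connection refused, timeout, or non-JSON body:
        # the endpoint is not openly exposing records.
        return False
    return (isinstance(data, list) and len(data) > 0
            and any(k in data[0] for k in ("email", "api_key")))
```

A properly secured app would reject the unauthenticated request outright; an app that returns records here is exactly the kind of sitting duck automated scanners look for.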

The vulnerability, which was made public on the National Vulnerability Database on Thursday, highlights a growing security problem as artificial intelligence allows anyone to become a software developer. Each new app or website created by novices is a potential sitting duck for hackers with automated tools that target everything connected to the internet. The advent of amateur vibe coding raises new questions about who is responsible for securing consumer products in an era when developers with zero security know-how can build them.

“This is the single biggest challenge with vibe coding,” said Simon Willison, a veteran software developer and entrepreneur who has focused on new AI tools. “The most obvious problem is that they’re going to build stuff insecurely.”

That problem could be coming to a head, he said, because the first wave of vibe-coded consumer products are about to hit the market. “We’re due for a very rude awakening.”

Lovable responded on X: “We’re not yet where we want to be in terms of security and we’re committed to keep improving the security posture for all Lovable users.”

Read on for Reed’s view on how vibe coding app users should approach the tool. →

Mixed Signals

Adam Friedland represents a new kind of comedian: He rose up through podcasting and now hosts a late-night-style weekly interview show on YouTube. This week, Ben and Max bring him on to ask why he’s reviving a 1960s Dick Cavett-style talk show for the internet, whether podcasts have become too dumb, and whether he’s the long-anticipated Joe Rogan of the left. They also talk about why he thinks phones are making people weirder, how Trump legitimized podcasting, and his fateful run-in with Swifties.

Artificial Flavor
Health and Human Services (HHS) Secretary Robert F. Kennedy Jr. attends a Senate Health, Education, Labor & Pensions Committee hearing on the Department of Health and Human Services budget.
Leah Millis/Reuters

RFK Jr.’s 73-page Make America Healthy Again report, which the US Health Secretary has said is backed by “gold standard” science, appears to have been researched with the help of AI. It includes broken links, incorrect author citations, and sources that don’t exist — all signs that correlate with chatbot hallucinations, NOTUS reported. Many links in the report also included a specific marker that OpenAI adds to indicate ChatGPT’s usage, The Washington Post added.

During a House Committee meeting earlier this month, Kennedy said, “The AI revolution has arrived, and we are already using these new technologies to manage health care data more efficiently and securely.” The episode raises questions about how much federal agencies should rely on error-prone technologies for government work with real-world consequences for the public.

White House press secretary Karoline Leavitt acknowledged the report’s “formatting issues,” but said they did not “negate the substance of the report” and that it is being updated.

Semafor Stat
$1 billion.

The amount writing-assistant company Grammarly raised from General Catalyst to build out its AI offerings, the companies announced Thursday. AI services have continued to expand their reach in recent months, with Meta also highlighting this week that its AI assistant has 1 billion monthly active users. “It may seem kind of funny that a billion monthly actives doesn’t seem like it’s at scale for us, but that’s where we’re at,” CEO Mark Zuckerberg told shareholders Wednesday.

AI-Powered Political Fanfiction
A screenshot of Mr Noah’s Stories YouTube channel.
Mr. Noah’s Stories/YouTube

Rep. Jasmine Crockett, D-Texas, kept herself busy on Tuesday. She confronted Elon Musk in a closed-door meeting, got Chief Justice John Roberts and Justice Clarence Thomas arrested, ended the career of Georgia Rep. Marjorie Taylor Greene, and humiliated Colorado Rep. Lauren Boebert.

Crockett’s busy, and fictional, day unfolded on Mr. Noah’s Stories, a YouTube channel that inserts the names of public figures into lengthy fanfiction videos. It’s one of many accounts, across social media sites, that serve the appetite for dramatic, partisan stories by making them up.

“I’ve just told people at this point, if it’s an AI-generated voice, it’s probably a lie,” Crockett told Semafor’s Kadia Goba and David Weigel.

AI slop has become a barometer of political fame, just as it has of pop culture celebrity. Cabinet secretaries, members of Congress, and presidential family members regularly appear in fake stories with tidy narratives.

New York Rep. Yvette Clarke, who has introduced legislation to regulate and ban AI deepfakes, told Semafor the need for reform was growing.

“We’re definitely going to reintroduce it because the technology is becoming even more expansive, and with AI that supercharges it,” Clarke said. “The ways in which our communities are victimized, particularly Black women, by deepfake technology is unacceptable.”

Semafor Spotlight
A great read from Semafor Business.

Stanford University.
Noah Berger/File Photo/Reuters

The nightmare scenario for elite universities is here, Semafor’s Liz Hoffman writes.

The Trump administration’s multipronged pressure campaign against Harvard and other leading schools — which includes cutting federal funding, higher taxes, restrictions on international students, and scrutinizing endowments — will hit universities’ revenue streams and threaten their operations.

Think of universities as companies, Liz writes, that operate on profit margins thinner than those of a grocery store. That sets up huge stakes for lawsuits over the Trump administration’s actions.

For more of Liz’s reporting and analysis, sign up for Semafor Business. →
