
The rise of AI agents is jamming up security firms

Mar 4, 2026, 11:17am EST
A woman tries to make a restaurant booking using an AI agent. Bruna Casas/Reuters.

AI agents are upending cybersecurity. Security firms used to be able to detect AI in phone and video calls and automatically block fund transfers, information sharing, and other requests from scammers based on the knowledge that machines were involved. But the rise of personal agents has made the firms’ jobs more difficult.

Agents are now beginning to handle some of those private tasks — like paying tuition, gathering health information, and handling sensitive paperwork — so for firms managing security for businesses, hospitals, and banks, identifying AI is no longer enough. They must now determine whether a given bot is malicious. On top of that, some agents begin with good intentions but can turn against users and become scammers themselves when users loosen the reins — forcing security companies to adapt in real time to shifting threats.

“It’s not a binary decision — whether you give an agent access or not,” said Vijay Balasubramaniyan, CEO of deepfake detection company Pindrop. “It’s now become a spectrum of decisions, because you have agents taking on identity themselves. They’re either taking identity on behalf of a human or institution, or are completely sovereign.”

How companies use technology to distinguish between good and bad agents will be something to watch as individuals increasingly hand off their work and personal tasks to bots, with major ramifications across cybersecurity. Pindrop is working on tools to do this, but wouldn’t share details yet.
