Reed’s view
A Waymo robotaxi allegedly ran over a cat in San Francisco this week, and the reaction illustrates the biggest hurdle facing AI. Tech CEOs often tout how they’re building “artificial general intelligence” or “superintelligence,” an almost mythical concept of a god-like technology that will solve all of our problems or cause human extinction, depending on whom you ask.
Customers, for the most part, don’t want superintelligence. They want AI to do tasks that humans can do easily, but would rather not. Driving is one example. Being a good driver doesn’t take superintelligence, or really any kind of intelligence. Waymo robotaxis are astonishingly good at driving compared to humans, but that’s not even remotely good enough. If KitKat, the beloved feline hit on Monday, had been struck by a human driver, the outrage would’ve been relatively tempered. But because it was a robot driving the car, people are calling for Waymo to be taken off the road.
In many situations, AI has to be perfect. If you have a human personal assistant, you’ll accept mistakes every now and then. But if an AI equivalent accidentally buys you a plane ticket to Istanbul instead of Miami, you’re not going to use that product anymore.
This is not a matter of building bigger data centers and hoping that someday AI stops making mistakes. It will take new breakthroughs to build predictability into the general-purpose foundation models that exist today. That may require understanding why these enormous neural networks do what they do, something researchers cannot yet explain.
Human drivers take a staggering 1 million lives every year worldwide. But Waymo could be shut down if it were responsible for even one death. Waymo’s parent company, Alphabet, understands this risk, which is why the mass commercial rollout of its autonomous vehicles has taken so long.
Room for Disagreement
A 2017 report from the RAND Corporation found there is a human cost to waiting until autonomous driving technology is perfect before releasing it. Deploying autonomous vehicles that are only slightly better than human drivers could save hundreds of thousands of lives over 30 years, more than if companies waited until the technology was nearly flawless, researchers found.
Notable
- In May, Waymo recalled more than 1,200 vehicles after repeated collisions with chains, gates, or other “barrier-like objects.”
- AI expert Cassie Kozyrkov calls the tendency to expect perfection from AI the “AI Reliability Paradox,” warning that high-performing systems can become dangerous without safety nets.


