The News
Social media users were duped over the weekend by an apparently AI-generated viral post alleging exploitative labor practices at an unnamed major food delivery company. Hundreds of thousands of users engaged with the content, and some news outlets lent credibility to the allegations, before several journalists revealed that additional evidence provided by the original poster was generated by AI.
We’ve repeatedly warned that AI-generated misinformation will become increasingly difficult to detect as the tools grow more advanced. The post, coupled with fake photos of ousted Venezuelan president Nicolás Maduro that spread over the weekend, made that clear.
Know More
The initial post spread so widely in part because it reinforced a widely held belief (and with good reason) that algorithms are secretly screwing consumers over, prompting some of the largest food delivery companies to go on the defensive.
AI detectors and communications with Uber revealed the scheme, but there will come a time when verifying the authenticity of documents will be much harder for journalists (and lawyers, bankers, law enforcement, government officials, and so on). It’s a growing problem that will likely need to be solved by the same industry building the models in the first place.