A major AI research conference rejected hundreds of papers after revealing that the authors used AI.
The International Conference on Machine Learning requires submitting authors to review other submissions, and use of AI to do so is banned. The organizers set a trap: They distributed papers for review, and included hidden-text watermarks instructing AIs to include telltale phrases. About 2% of authors were caught, and their papers rejected, Nature reported.
A 2025 AI conference found 21% of peer reviews were likely AI-generated, and a third found hundreds of apparently hallucinated citations in papers. The problem is greatest in AI research but widespread elsewhere: One publisher retracted 8,000 fraudulent articles in 2023, as AI tools make generating fake papers easy.




