The question of whether AIs are conscious is becoming more pressing, and a team of researchers has tried to answer it. Consciousness is notoriously difficult to study: how can we determine whether it feels like something to be an AI? Researchers therefore start with more tractable questions — like, “Can an AI think about what it is thinking?” — and hope the answers correlate with the hard question.
The new paper did essentially that, concluding that “no current AI systems are conscious,” though future ones may be.
Scott Alexander, an AI-and-philosophy-obsessed thinker, noted that this is still a bait-and-switch: the researchers have not solved the hard problem of consciousness. But, he said, they do a better job than most, and AI is forcing us to do “philosophy with a deadline.” If AIs become conscious, we may have moral obligations to them. These millennia-old questions, like why we have inner lives at all, suddenly carry real-world stakes. Alexander also notes that whether or not AIs are conscious, people will likely treat them as if they were: DeepMind co-founder Mustafa Suleyman has warned against building “seemingly conscious AI,” which would imitate consciousness without possessing it.