Major AI firms are hiring explosives experts in order to prevent their LLMs from helping users to make dangerous weapons.
Anthropic’s “Policy Manager, Chemical Weapons and High-Yield Explosives” hire will help the company train its Claude AI, while rival OpenAI has a similar role, the BBC reported.
One of the key AI safety concerns, alongside the risk of AI itself becoming dangerous, is the democratization of deadly technologies: Just as AI has lowered the barrier to entry in coding, art, and language translation, among other skills, it could make it worryingly easy to build explosives or “dirty” radiological bombs. Anthropic is also currently in a row with the US government over the use of its chatbot in war.