Albania plans to use ChatGPT to speed up its application to join the European Union by translating thousands of pages of legal documents.
Prime Minister Edi Rama reportedly said this week that the country will partner with OpenAI, the company behind the chatbot, to translate complex EU legal measures into Albanian, detail what changes need to be made to existing local laws, and then analyze the impact of those adjustments. Albania has been trying to join the EU for 14 years.
It’s the latest example of AI’s increasing presence in government globally, as calls grow for more oversight of the technology.
Using AI in this way could backfire for Albania
It’s smart for Rama to stay on top of new developments in AI, but using ChatGPT in this way could backfire, the head of an AI-and-governance research program argued. ChatGPT is known to occasionally produce false information, and is more likely to prolong the accession process, “as government officials may blindly follow the wrong instructions and information,” Medlir Mema wrote in A2, Albania’s CNN affiliate. He also raised concerns about data privacy, and questioned whether Albania is taking a flashy route without focusing on the substance of the reforms needed to join the bloc.
AI can be used to fight government corruption
Rama also announced plans to integrate AI into Albanian government services as part of an anti-corruption drive. The ability of AI programs to rapidly comb through large datasets has made it possible to uncover corruption that was previously all but impossible to detect, a researcher at the U4 Anti-Corruption Resource Centre argued. In Armenia, the government will soon begin using an AI bot to look for fraud in the asset declarations of government officials. A similar project has been running in Brazil since 2016, when a team of data scientists set up Operation Serenata de Amor, an AI platform that has flagged over 600 suspicious instances of government spending.
US racing ahead with integrating AI in day-to-day tasks
AI systems are already in use or will be rolled out in more than 1,200 distinct ways within the U.S. government, from monitoring the border to studying volcanoes, according to a new government watchdog report. The report follows a sweeping executive order that mandated agencies advance their use of AI and build rules around it. It warned that more than half of the agencies haven’t met federal AI requirements, potentially risking U.S. security. It also noted that different parts of U.S. law define AI differently, and given the range of tasks the software can handle, getting government offices to agree on a single definition of AI will be a challenge, Nextgov reported.