Governments across the world are jumping on the latest AI bandwagon, using ChatGPT-like bots as government advisers, city and law enforcement officials, and court staffers.
The public sector has experimented with automation and algorithms in the past, with mixed results. Now, AI experts warn that without ethical guardrails and proper oversight, these new public-facing systems could likewise be abused or fall prey to bias.
Or, maybe it’s just a fad that’ll pass.
“It seems cool and techy and cutting-edge to say that you’re doing this,” said Arthur Holland Michel, a senior fellow at the Carnegie Council for Ethics in International Affairs. But it’s very possible, he said, that these systems introduced in the government sector “will end up just sitting on a shelf, figuratively speaking.”
The View From Romania
Romania made headlines this week when it unveiled what it called the newest adviser to the prime minister’s cabinet: an AI-powered bot called Ion.
The government said Ion was aimed at representing the will of the people by analyzing feedback from Romanian citizens on social media and an online portal, and relaying that to leaders to inform their policy decisions.
The announcement — accompanied by a demonstration in which the Romanian prime minister spoke to the mirror-like robot as it emitted sci-fi-like sound effects — sparked questions about whether Ion would truly be representative of Romania’s population, and whether its output would reflect human biases.
But lawmakers were enthusiastic that the system would help tear down walls between bureaucrats and the people. Romania’s minister of research, Sebastian Burduja, told the Washington Post that the tech would be able to filter out machine-generated content to ensure only human voices are heard.
Holland Michel, however, questioned the long-term practicality and accuracy of the system.
“I would like to believe that no senior policy maker in their right mind would consult a generative AI system as the technology trends today in order to make critical decisions.”
The View From Portugal
The Portuguese government last month announced plans to roll out a “Practical Guide to Access to Justice”: an AI model, developed with Microsoft’s help and built on the underlying tech of ChatGPT, to help citizens with basic legal questions.
The Ministry of Justice plans to implement the chat feature in March. Officials say it will give citizens information on court proceedings, answering questions about documents required for a marriage license or Portuguese citizenship, for example.
A journalist for a local tech news site said it would help reduce the burden on court staffers usually tasked with these requests.
The View From Austin, Texas
On Thursday, the Austin Police Department launched a new AI-based system for residents to report a wide range of non-emergency crimes, including fraud, graffiti, and assault resulting in minor injury.
The AI assistant works over voice, web, and text in multiple languages and “conducts a full interview with the person filing a report and provides key information to the police department,” APD said in a statement.
According to local news reports, some residents have previously waited for weeks for a call back from a police officer to file a non-emergency report.
The View From Costa Rica
On Thursday, Costa Rica announced a partnership with the United Nations to develop AI tools promoting sustainable development and human rights. Part of that focus will be on combatting hate speech.
While the specifics of the new project remain unclear, the United Nations has previously unveiled its own hate speech detection software, dubbed eMonitor+, which has been used in Lebanon, Tunisia, Libya, and Peru.
According to the UN Development Program, the software works in Spanish, French, Arabic, and English, scanning thousands of articles and social media posts to detect violent or hateful speech.
In Peru, the AI model scanned 37,000 posts and identified more than 1,000 instances of hate speech promoted by political actors or related accounts, and broke down the reasons behind it, the Peru.21 newspaper reported.
Room for Disagreement
Some governments are taking a more cautious approach to the chat-based advancements — for now.
In the U.K., multiple government departments have sought clarification on whether they are allowed to use ChatGPT to automate repetitive tasks like writing emails or letters, iNews reported last month.
Civil servants were reportedly cautioned against using the chatbot, which is known to sometimes produce inaccurate information, but use of the tool has not been entirely ruled out.
Experts said that when a government uses an AI tool, it should be held to a different standard than when a private company debuts one.
“When the government rolls something out … it has to work for 100% of the people on day one,” Rumman Chowdhury, a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University, told Semafor.
That’s because past cases have shown that governmental use of AI could cause privacy concerns or lead to bias in which the tech disproportionately targets specific populations. For example, multiple studies have found that algorithms used by American police departments to predict future criminal activity are racially biased.
Chowdhury, who runs a consulting firm that does some audit work for governments on issues of emerging technology, said additional oversight is needed to ensure AI technology isn’t abused.
She questioned whether governments should be using the world as a “testbed” for technology developed by a few privately owned companies.
Then there’s the question of whether the public will even find the new tools useful. Government agencies like the United States Citizenship and Immigration Services have rolled out chatbots before, but as Chowdhury pointed out, “everybody hates interacting with a chatbot for customer service.”
Holland Michel said there may be some instances in which public-facing government services could become automated, especially for limited tasks like information retrieval or document collection.
“After an initial period of fanatic excitement there is a settling-down period where users find the niche use cases that are really the shining examples of what the technology can and should do,” he said. “Often, these use cases are ... much more boring than what one would hope.”