

Apr 14, 2023, 4:04pm EDT
Tech / Politics

Governments want to know the identity of AI developers

Senate Majority Leader Chuck Schumer
REUTERS/Jeenah Moon

The News

Do the government and the public have a right to know who is developing and training artificial intelligence systems?

As countries around the world scramble to craft safeguards for rapidly advancing AI, one common theme has emerged: disclosing the identities of the people who program AI systems.

Preliminary proposals in China, the European Union, and the U.S. call for companies to identify who is developing and training their algorithms.

Proponents argue the provision promotes transparency by revealing who is creating the software, making it easier to identify potential biases built into it.

But critics worry that privacy concerns outweigh the transparency benefits, saying the requirement could open the door to harassment and espionage.

The View From The U.S.

U.S. Senate Majority Leader Chuck Schumer is spearheading an effort to enact federal AI policy in Congress and rolled out a “framework” for regulation on Thursday.

One provision in the New York Democrat’s proposal would require identifying those who trained an AI company’s algorithm, Axios first reported.

A press release from his office said the framework will require companies to allow independent experts to review and test new AI technologies before a public release and give users access to those results.

The disclosures would be centered on four guardrails: “Who, Where, How, and Protect,” the first three of which “will inform users, give the government the data needed to properly regulate AI technology, and reduce potential harm.”

The full details of Schumer’s proposal haven’t been released and could change in the coming weeks; members of the House AI Caucus were not briefed on his framework, Semafor reported Friday.

The View From China

China’s internet regulator on Tuesday released draft guidelines that AI providers must follow, stating that AI-generated content “should reflect the core values of socialism, and must not contain subversion of state power.”

One of the guidelines states that companies should “clarify and disclose the applicable groups of people” responsible for generating AI content to their “stakeholders and users.”

The government said AI developers must also “take appropriate measures to prevent users from relying too much on or indulging in generated content.”

The View From The EU

More than 50 AI and computer science experts and institutions on Thursday signed a memo urging EU officials to expand the bloc’s AI Act, a proposed set of regulations meant to identify and mitigate the risks of new AI technology.

The memo calls for expanding the AI Act to put guardrails on the development process, not just the application phase, because “companies developing these models must be accountable for the data and design choices they make.”

Those measures include identifying developers.

“Values are embedded in the design choices,” Mehtab Khan, a signatory and fellow at the Yale/Wikimedia Initiative on Intermediaries and Information, told Semafor. “And when we are looking at the process of the development, we might uncover and get more clarity about who might be responsible for the downstream harms.”

Room for Disagreement

Matt Mittelsteadt, a research fellow studying AI policy at George Mason University’s Mercatus Center, said that while transparency around AI systems is important, revealing the identity of programmers could lead to unintended negative consequences.

If someone disagrees with the way a chatbot is programmed, for example, they could use the information to directly harass or intimidate the developers, Mittelsteadt said.

“That is not just an imagined risk. That almost certainly would happen with the kind of emerging AI culture war that we are actively seeing already,” said Mittelsteadt, adding, “You can’t just ignore the privacy risk.”

China or other foreign adversaries could also use such information to monitor or target American developers, as Beijing races to develop its own AI systems.

“I would sure hope Chuck Schumer has considered the fact that when you publish a list of engineers who are behind your AI, you’re also publishing a list of targets for China, for any other country, to put into their databases and monitor,” he said. “China wants to stay ahead in artificial intelligence.”

He also worried that such a policy could have a chilling effect on engineers, who might not want to work in a field where their names are disclosed to the government or the public.

Khan acknowledged that identifying developers raises potential privacy and security concerns. She also questioned how the regulations would intersect with free speech laws, and whether publicly identified developers could be held liable for false, damaging, or offensive content generated by AI.

One possible solution, she said, is to set up designated government agencies that would field and vet requests for information from the public about the engineers working at AI companies.
