

Feb 27, 2024, 5:36pm EST
Tech · East Asia

Semafor Signals

China court says AI broke copyright law in apparent world first

Insights from Beijing Lawyer Zhao Duidui, Agence France-Presse, and Forbes

Parents and children play a card game at an Ultraman Card store in Shanghai, China, January 28, 2023.
CFOTO/Future Publishing via Getty Images
The News

A Chinese court found that images generated by an artificial intelligence service infringed the copyright of a popular Japanese superhero character, a Chinese newspaper reported, in what appears to be the first ruling of its kind.

An unnamed plaintiff in the suit who held partial copyright to Ultraman, a science fiction character created by Japanese studio Tsuburaya Productions, sued an AI company after its software created images that closely resembled the character, according to the 21st Century Business Herald. The name of the AI company involved was not disclosed.

The Guangzhou Internet Court found that the images generated by the AI service were “substantially similar” to the Ultraman character – suggesting that the original had been used to train the AI – and awarded 10,000 yuan (about $1,400) in damages, the paper reported. No information about the case was available on the court’s website.

Tsuburaya Productions has previously been involved in several international copyright disputes over the long-running Ultraman franchise, including in China and the United States.

The ruling comes after a Beijing court found in November that artists can copyright material generated by AI, with experts saying the two legal decisions are potentially in conflict.

The latest ruling may also accelerate the debate over AI’s potential to infringe on protected material, as courts around the world take on the complex legal question.

SIGNALS

Guangzhou decision may contradict landmark Beijing ruling last year

Source: Beijing Lawyer Zhao Duidui

A landmark ruling by the Beijing Internet Court last year gave copyright protection to AI-generated content – but the latest case may contradict that decision, according to an anonymous legal blogger on WeChat who goes by the handle Beijing Lawyer Zhao Duidui. In the Beijing case, the court found that generative AI is “merely a tool that assists the plaintiff in his creation” of art, while the Guangzhou ruling determined that AI “participated in the creation of content involved in the case, rather than being purely instrumental,” Zhao wrote. The legal contradictions ultimately show that Chinese intellectual property law is “unfamiliar with the world’s most advanced technological achievements in artificial intelligence,” the blogger said.

Ruling could deter Chinese firms from investing in AI research over legal fears

Sources: Global Times, AsiaIPLaw, AFP

The Guangzhou court’s ruling could have the effect of deterring Chinese artificial intelligence companies from investing in future development for fear the legal risks are too high, Zhou Chengxiong, a director at the Chinese Academy of Sciences, told the Global Times. But while China is starting to toughen its intellectual property laws, it still has loose regulations on publicity rights — that is, using AI to create likenesses of public figures. That loophole means Chinese chatbots can be programmed to have the personalities of celebrities and CEOs, for example. A growing number of young women are “dating” these bots to combat growing loneliness and isolation in the country, with one young woman telling Agence France-Presse that the bot is “better than a real man.”

‘Traceability’ is central to the AI intellectual property debate in the US

Source: Forbes

A New York Times lawsuit brought against OpenAI last year alleging the company used copyrighted material to train its wildly popular bot ChatGPT highlights “why the current paradigm of traceless AI is problematic,” argued Forbes Council Member Lindsey Witmer Collins. Without traceability — that is, identifying where data or content used to train AI originates from — users are unable to trust what generative AI tells them, and “no one can be credited or paid for their contributions,” Collins wrote. A better model, she suggested, would be for AI makers to incorporate traceability into their programming, allowing “an exchange directly between creator and consumer” — similar to Spotify, where artists are typically paid directly when someone listens to their content.
