Feb 9, 2024, 1:44pm EST
tech

As more governments look at AI rules, Accenture tests software to help companies comply


The Scoop

As global regulators increasingly scrutinize artificial intelligence, massive consulting firm Accenture is testing a startup’s technology in what could become a standard method of complying with rules to ensure responsible and safe innovation.

Los Angeles-based EQTY Lab created a new method, employing cryptography and blockchain, to track the origins and characteristics of large language models, and provide transparency on their inner workings, so companies and regulators can easily inspect them. The idea is to automatically examine the model as it is being created, rather than focus on its output.

“What we’re doing is creating certainty” that a model works the way it was intended, said EQTY Lab co-founder Jonathan Dotan. “Right now, we’ve done the first step, which is proving this is possible.”


EQTY Lab’s AI Integrity Suite is being evaluated in Accenture’s AI lab in Brussels to see if the software could be scaled to serve the firm’s thousands of clients, many of which are in the Fortune 100.

The work is being done as countries propose ways to address the promise and risks of AI. On Thursday, the U.S. Commerce Department announced a consortium of more than 200 tech companies, academics and researchers that will advise the new government AI Safety Institute, which will develop “red team” testing standards and other guidelines directed by a White House executive order on AI last year.

“Responsible AI is absolutely critical. It’s on top of everyone’s mind,” said Bryan Rich, senior managing director for Accenture’s Global AI Practice. “But how do you go from talking about responsible AI to actually delivering it?”


Know More

Unlike other forms of machine learning, the mechanics of generative AI models can be murky. By the time many of these models are implemented at companies, they’ve been modified and fine-tuned, sometimes creating layers of complexity that can lead to compliance risks.

The Data Provenance Initiative, for example, found that many implementations of generative AI models contain improperly licensed data that is difficult to track.

EQTY Lab hopes AI developers will use its software to create an immutable fingerprint of all of a model’s components. Its software essentially tracks the different parts of an AI model as it’s being developed, and turns the information into a cryptographic signature stored on a blockchain, making it theoretically impossible to tamper with.
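
To make the general idea concrete, here is a minimal sketch of component fingerprinting, assuming a simple hash-and-sign scheme. It is not EQTY Lab’s actual implementation, and the artifact names and record fields are hypothetical: each artifact that feeds a model is hashed, and a signature over the combined digest means any later change to a component would break the fingerprint.

```python
# Minimal sketch of component fingerprinting (illustrative only; not EQTY Lab's
# actual implementation). Each artifact that feeds a model -- weights, a
# training-data manifest, a config -- is hashed, and a signature over the
# combined digest makes later tampering with any component detectable.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def fingerprint_model(artifacts: dict[str, bytes], signing_key: Ed25519PrivateKey) -> dict:
    """Return a signed manifest of per-component hashes (hypothetical record format)."""
    components = {name: hashlib.sha256(blob).hexdigest() for name, blob in artifacts.items()}
    manifest = json.dumps(components, sort_keys=True).encode()
    return {
        "components": components,
        "signature": signing_key.sign(manifest).hex(),
        # In the scheme the article describes, a record like this would be
        # anchored on a blockchain; here it is simply returned to the caller.
    }


key = Ed25519PrivateKey.generate()
record = fingerprint_model(
    {
        "weights": b"...model weights bytes...",                  # placeholder artifacts
        "training_manifest": b'{"datasets": ["example-corpus"]}',
        "config": b'{"layers": 32, "context_length": 4096}',
    },
    key,
)
print(json.dumps(record, indent=2))
```

If any byte of the weights or the manifest changes, the per-component hash and the signature no longer match, which is what makes a fingerprint like this usable as an audit anchor.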


EQTY said it chose blockchain technology, despite its association with cryptocurrencies, because it was the simplest and easiest method of keeping track of the records. The alternative, Dotan said, would be to use a government agency or a company to handle it, which would add cost and complexity.

When others use an EQTY-registered model and possibly modify it, the technology continues to track additional layers of training or fine-tuning. Even benchmarks that measure a model’s performance, or its levels of potential bias or toxicity, would be recorded.
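
Continuing the sketch above, each downstream step, such as a fine-tuning run or a benchmark result, could be appended as a record that commits to the hash of the previous record, making the chain of custody tamper-evident. Again, this is an illustration with invented field names, not EQTY’s format.

```python
# Illustrative hash-chained provenance log (hypothetical field names).
# Each entry commits to the previous entry's hash, so rewriting or
# reordering earlier history is detectable.
import hashlib
import json
import time


def append_event(log: list[dict], event_type: str, details: dict) -> dict:
    """Append a provenance record that chains to the previous one."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "type": event_type,            # e.g. "fine_tune", "benchmark"
        "details": details,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry


provenance: list[dict] = []
append_event(provenance, "fine_tune", {"dataset": "support-tickets-v2"})          # hypothetical
append_event(provenance, "benchmark", {"suite": "toxicity-eval", "score": 0.91})  # hypothetical
print(json.dumps(provenance, indent=2))
```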

By the time a model is used by a company, it would ideally have an easily visible audit trail showing all of its characteristics, according to the startup.

A cryptographic signature could also be useful to companies that want to ensure they’re getting what they pay for. Companies employing large language models often use third parties to implement the technology. Each day, employees might send hundreds or thousands of prompts querying the LLM. When those queries come back, companies want to know that the model they are paying for was actually used, instead of some cheaper or potentially less secure one. EQTY says its software could automatically check for that, the way a Secure Sockets Layer certificate ensures an encrypted connection between a web server and a browser.
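
A rough sketch of what that check could look like on the caller’s side, assuming the provider signs each answer together with the fingerprint of the model that produced it. The message format and field names here are invented for illustration, not EQTY’s protocol:

```python
# Illustrative end-to-end check that a reply came from the registered model
# (hypothetical message format; not EQTY's actual protocol).
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

EXPECTED_FINGERPRINT = "fingerprint-of-the-model-the-client-pays-for"  # placeholder value

# Provider side: sign the answer together with the serving model's fingerprint.
provider_key = Ed25519PrivateKey.generate()
payload = json.dumps(
    {"model_fingerprint": EXPECTED_FINGERPRINT, "answer": "42"}, sort_keys=True
).encode()
response = {
    "model_fingerprint": EXPECTED_FINGERPRINT,
    "answer": "42",
    "signature": provider_key.sign(payload).hex(),
}


# Client side: refuse the answer unless it is signed over the expected fingerprint.
def verify_response(resp: dict, public_key) -> bool:
    if resp["model_fingerprint"] != EXPECTED_FINGERPRINT:
        return False
    signed = json.dumps(
        {"model_fingerprint": resp["model_fingerprint"], "answer": resp["answer"]},
        sort_keys=True,
    ).encode()
    try:
        public_key.verify(bytes.fromhex(resp["signature"]), signed)
        return True
    except InvalidSignature:
        return False


print(verify_response(response, provider_key.public_key()))  # True for an untampered reply
```

The TLS analogy in the article maps onto the same shape: a signed statement binds an identity (here, the model fingerprint) to the traffic the client actually receives.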

As a proof of concept, EQTY Lab built and trained its own large language model, called ClimateGPT, using its software to track each step.

It plans to open-source its technology, allowing any developer to use it for free, but expects to sell its AI Integrity Suite as an enterprise product.


Reed’s view

The White House AI executive order and other regulations, including those proposed in Europe, make it seem like watchdogs see AI models as the kinds of products that can be inspected and stamped as either safe or unsafe, like a car or a consumer gadget.

In reality, large language models are like a perpetual stew, with ingredients from many places constantly thrown in together.

Also, the idea of using just one model is antiquated: for a single AI product, developers increasingly glom together multiple specially trained models to carry out specific tasks.

We’re likely already approaching a point where it will be difficult and time-consuming for companies to vet every AI model they use.

That’s why, in theory, EQTY’s idea makes sense: A cryptographic signature would allow developers to retain trade secrets while simultaneously offering some transparency into how the models were put together.

For instance, Meta’s Llama 2 model does not disclose the contents of the data that was used to train it. That’s led to tension as the company faces lawsuits alleging it violated copyright law by including protected work in its training data. Let’s say that Meta, in a purely hypothetical scenario, wanted to prove that a specific set of copyrighted work was not included in the data. EQTY says it is developing a way that Meta could prove that without having to divulge the entire training set.


Room for Disagreement

The need for this kind of tracing of AI models stems from hasty government regulation, which may be misguided, argues Columbia law professor Tim Wu, a former Biden administration official: “The temptation to overreact is understandable. No one wants to be the clueless government official in the disaster movie who blithely waves off the early signs of pending cataclysm. The White House is not wrong to want standardized testing of A.I. and independent oversight of catastrophic risk.

“But the truth is that no one knows if any of these world-shattering developments will come to pass. Technological predictions are not like those of climate science, with a relatively limited number of parameters. Tech history is full of confident projections and ‘inevitabilities’ that never happened.”
