Scale AI is teaming up with the Center for Strategic and International Studies, a prominent Washington think tank, to develop and fine-tune AI models for use in international relations strategy and war gaming, the two organizations announced Wednesday.
Scale cited “cyberattacks, coordinated disinformation campaigns and black-box AI developments from foreign adversaries” as focuses for the project. “The time is now for Scale and CSIS to join forces, given the gravity of national security concerns today,” John Brennan, general manager of Scale’s public sector business, said in a statement.
Their work will test the usefulness of generative AI in the complex area of geopolitics. “Artificial intelligence has game-changing potential if applied thoughtfully to foreign policy and national security concerns,” Dr. Ben Jensen, a senior fellow at CSIS, said in a statement.
Scale, founded in 2016 by Alexandr Wang, built its business on the data that underpins machine learning and AI models. It hires droves of people to label and annotate data that can be used in everything from autonomous driving to the training of large language models.
In recent years, Scale has evolved to include fine-tuning large language models for the commercial sector and government, including the U.S. military.
The partnership between Scale and CSIS is a fascinating experiment in the use of large language models.
We’re about to find out what happens when you fine-tune an LLM on a unique dataset: that of a highly respected think tank. It’s kind of like CSIS bringing in a college intern who, in the first couple of days on the job, reads, digests and understands every single thing the organization has ever written.
Will that intern become a seasoned analyst overnight? Probably not right off the bat, and the two organizations are certainly not claiming that. But the intern is likely to be a very good one.
If analysts at CSIS can cut down the time it takes to write reports, especially those concerning important national security and defense issues, that will be a win. But what if the new product surfaces insights that human analysts may not have considered? That could be even more powerful.