The News
A handful of startups are betting that encrypted AI models will attract business customers skittish about the security risks of chats, even as large AI companies offer private models. One of those startups, NEAR AI, recently launched a platform that hosts popular open-weight models and encrypts all prompts and responses, preventing the model providers from viewing those chats, training on them, turning them over in a court proceeding, or accidentally leaking them.
“Some people do not feel very comfortable connecting their email or giving computer access directly to the AI,” said Illia Polosukhin, NEAR co-founder and a co-author of Google’s foundational Attention Is All You Need paper. “We’re limiting how much we can use AI because of a lack of privacy.”

Know More
WhatsApp offers end-to-end encryption for some AI functions, like writing assistance and message summaries. Apple's AI runs on-device when tasks are small enough; for larger computations, the company says data processed through its servers isn't stored or viewed. NEAR's software encrypts a user's prompt locally before sending it, similar to how Signal or Threema work. Inside a secure environment, the Nvidia or Intel hardware NEAR uses automatically decrypts the query, the model runs inference on it, and the response is sent back in encrypted form. It's an interesting alternative that could attract users who want in on AI but don't trust AI companies, as well as the blockchain and crypto firms that love all things encryption.
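The round trip described above, encrypt on the client, decrypt and run inference only inside the secure environment, then re-encrypt the reply, can be sketched with a toy symmetric scheme. This is illustrative only: NEAR's actual system relies on hardware attestation and production ciphers, and every function name below is hypothetical.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream from repeated SHA-256; real systems use AES-GCM or similar."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt and authenticate: nonce || ciphertext || HMAC tag."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then recover the plaintext."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext failed authentication")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

# Shared session key; in practice it would be negotiated via an attested
# handshake with the secure hardware (assumed here).
session_key = secrets.token_bytes(32)

def enclave_inference(blob: bytes) -> bytes:
    """Stand-in for the secure environment: the prompt exists in
    plaintext only inside this function, never on the provider's side."""
    prompt = decrypt(session_key, blob).decode()
    reply = f"[model reply to: {prompt}]"  # placeholder for real model inference
    return encrypt(session_key, reply.encode())

# Client side: encrypt the prompt, send it, decrypt the response.
wire = encrypt(session_key, b"Summarize my inbox")
print(decrypt(session_key, enclave_inference(wire)).decode())
```

The point of the sketch is the trust boundary: anyone who intercepts `wire` sees only ciphertext, and only code holding `session_key` inside the secure environment ever sees the prompt.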
But it’s a hard sell for mainstream companies when other, more familiar privacy options are available, like running smaller models locally. Ultimately, even encrypted AI requires user trust. And in NEAR’s case, users must still trust that Nvidia’s and Intel’s computing technology maintains confidentiality as advertised.
