Why AI Needs Confidential Computing to Break Free

While the timeline marvels at the latest astronomical deal an AI engineer received to switch teams, it’s missing the deeper trend: the entire AI industry is being swallowed whole by companies that have spent decades perfecting surveillance capitalism.
The same firms that turned your searches, messages, and photos into profit centers increasingly control the models running our medical diagnoses, financial systems, and personal assistants.
OpenAI is now required by court order to preserve all ChatGPT logs including "temporary chats" and API requests that would have been deleted. If I understand this correctly, it means data retention policies for apps that use OpenAI API simply cannot be honored. pic.twitter.com/k989iTyzxb
— kepano (@kepano), June 4, 2025
That should terrify us: every prompt you type, every question you ask, every piece of data you process through these AI systems is visible to these companies. Our business strategies, health concerns, personal struggles — all of it flows through their servers, unencrypted as it’s processed. We’re trusting companies that have violated that trust repeatedly.
We need AI that is truly private. And we need it now.
The Core Need: Computation on Encrypted Data
The crypto world taught us to think about trust differently. "Don't trust, verify" became our mantra. But AI has a fundamental problem that makes this nearly impossible.
Today's encryption works great when data is sitting still or moving between servers. Our messages to doctors are encrypted. Our bank transactions are encrypted. But the moment something actually needs to be done with that data, it must be decrypted. Our most sensitive information becomes naked and vulnerable at the exact moment it matters most.
Imagine telling your deepest secrets to a therapist who must shout them to a crowded room before responding. That's essentially what happens when we use AI today.
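To make that gap concrete, here is a toy Python sketch using the `cryptography` package's Fernet recipe. It is illustrative only: the "model call" is a hypothetical placeholder, and the point is simply that a conventional server has to turn your ciphertext back into plaintext before it can do anything useful with it.

```python
# Toy illustration: standard encryption protects data at rest and in transit,
# but an ordinary server must decrypt before it can process anything.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
channel = Fernet(key)

# Client side: the prompt is encrypted before it leaves your device.
ciphertext = channel.encrypt(b"Do I qualify for this loan given my salary history?")

# Server side: to actually run the model, the prompt becomes plaintext again,
# exposed to whoever controls this machine.
plaintext = channel.decrypt(ciphertext)
response = f"answer based on: {plaintext.decode()}"  # hypothetical stand-in for a model call
print(response)
```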
The solution exists, and it's elegant: confidential computing, where data stays encrypted the entire time — even while being processed. This magic happens through Trusted Execution Environments (TEEs): special zones in hardware that create an impenetrable bubble around both the code and the data being run.
TEEs and GPU Privacy: Confidential Compute at Scale
Think of a TEE as a black box inside your computer chip. Even if someone has complete control of the machine — even if they're the cloud provider or system administrator — they can't see inside this box. The data goes in encrypted, gets processed in secret, and comes out encrypted. If you have an iPhone, you interact with TEEs daily when verifying biometric data.
For years, TEEs were limited to CPUs (the general-purpose "brains" of a computer that execute instructions and run programs), making them too slow for serious AI work. Running a large language model on CPU-based TEEs was like trying to stream Netflix on dial-up. But now TEEs run on GPUs, making it possible to run full-scale AI models with hardware-backed privacy and a negligible performance hit.
Here’s how it works (a code sketch of the full flow follows these steps):
- You submit a prompt or data query — immediately encrypted with keys only you control. This encrypted input is sent to a secure compute setup made of two parts: a CPU and a GPU, which both have their own TEEs.
- The CPU uses its TEE to run a locked-down virtual machine and perform attestation — a cryptographic check proving the software environment is clean and trusted.
- Then the encrypted data moves to a GPU’s TEE, where the model processes your query while keeping it hidden from the OS, the cloud provider, and even physical attackers.
- Finally, the result is encrypted and sent back. Only you can decrypt and read it.
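The sketch below walks through that flow in Python. It is illustrative only: `Enclave`, `attestation_report`, and `verify_attestation` are hypothetical stand-ins for a real TEE SDK and hardware quote verification, and Fernet symmetric encryption stands in for the attested key exchange a real deployment would use.

```python
# A minimal, illustrative sketch of the confidential-inference flow described above.
from cryptography.fernet import Fernet

class Enclave:
    """Stand-in for a CPU/GPU TEE: plaintext exists only inside this object."""
    def __init__(self, session_key: bytes):
        self._fernet = Fernet(session_key)  # in practice, established via an attested key exchange

    def attestation_report(self) -> dict:
        # Real TEEs return a signed hardware quote; this is a mocked report.
        return {"measurement": "expected-model-server-hash", "signed": True}

    def infer(self, encrypted_prompt: bytes) -> bytes:
        prompt = self._fernet.decrypt(encrypted_prompt)       # decrypted only inside the TEE
        answer = f"model output for: {prompt.decode()}"       # placeholder for GPU inference
        return self._fernet.encrypt(answer.encode())          # re-encrypted before leaving

def verify_attestation(report: dict) -> bool:
    # Placeholder: a real client checks the quote's signature chain and code measurement.
    return report["signed"] and report["measurement"] == "expected-model-server-hash"

# --- client side ---
session_key = Fernet.generate_key()                        # 1. key only the client (and enclave) holds
enclave = Enclave(session_key)

assert verify_attestation(enclave.attestation_report())    # 2. prove the environment is clean

ciphertext = Fernet(session_key).encrypt(b"summarize my medical history")  # 1. encrypt the prompt
encrypted_result = enclave.infer(ciphertext)                               # 3. processed inside the TEE
print(Fernet(session_key).decrypt(encrypted_result).decode())              # 4. only the client can decrypt
```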
For most real-world AI applications today, TEEs offer the most practical balance of security and usability. But they do require trusting the hardware manufacturers behind these secure enclaves, and security researchers have found exploits in them in the past.
Cryptographic alternatives like zkML promise even stronger guarantees but are dramatically slower and harder to scale. It's like choosing between a sports car with good-but-not-perfect brakes and a tank that maxes out at 5 mph.
TEEs can also be combined with other Privacy Enhancing Technologies (PETs) to add security as an application’s needs require.

Who's Building Private AI with TEEs?
The pioneers are already here, building the infrastructure for a more private future. They include:
- iExec — A decentralized platform for AI and DePIN that lets developers run apps privately and securely using TEEs. Developers can use its toolkit (including an app builder) to create privacy-preserving software in common languages like Rust and JavaScript, and monetize their data, models, or computing resources through its RLC token.
- Oasis — A Layer 1 blockchain for private, onchain computing through "confidential ParaTimes," special zones where data is processed securely using TEEs. These zones can run private AI tasks like trading bots (WT3, for example) or credit scoring, with the results made verifiable onchain.
- Nillion — A privacy-first network called the “blind computer,” which keeps data protected throughout the entire computation process. Nillion does this using a mix of TEEs, secret sharing, and multiparty computation, each of which can be invoked depending on the needs of a particular use case (a generic secret-sharing sketch follows this list). Its architecture separates where data is stored from where it’s processed, so no one party sees the full picture — allowing for private collaboration across AI apps.
- Phala Network — A decentralized computing network that uses TEEs to shield AI tasks from exposure and provide confidential computing to users. You can test a version of its confidential computing network through an instance of DeepSeek. It has also partnered with projects like NearAI, Newton, and Vana; with NearAI, Phala created a Private ML SDK so confidential computing can sit alongside the rest of its toolkits.
- Atoma — A decentralized AI network built on Sui that focuses on private, verifiable model inference using GPU-based TEEs, connecting users with GPU providers through a marketplace. You can try its confidential run of DeepSeek.
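Nillion’s actual protocol is more involved, but the core idea behind secret sharing and multiparty computation fits in a few lines. The sketch below is generic additive secret sharing, not Nillion’s implementation: a value is split into random shares so that no single holder learns anything, yet parties can still add their shares locally and reveal only the final result.

```python
# Generic additive secret-sharing sketch (not any specific network's protocol).
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, parties: int = 3) -> list[int]:
    """Split `value` into `parties` random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

salary, bonus = 90_000, 15_000
salary_shares = share(salary)
bonus_shares = share(bonus)

# Each party adds its own shares locally; nobody ever holds the raw inputs.
sum_shares = [(a + b) % PRIME for a, b in zip(salary_shares, bonus_shares)]
assert reconstruct(sum_shares) == salary + bonus  # only the final sum is revealed
```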
Right now, most people have no idea where their AI prompts go or what happens to them. They can't confirm their data isn't being stored, analyzed, or used to train future models. They're flying blind, trusting promises from companies that have broken those promises before.
When it comes to AI, without privacy, we risk building the most sophisticated surveillance infrastructure in human history.
TEEs offer a real, working path to encrypted inference with minimal tradeoffs. They aren't perfect — no security technology is — but they're the most practical way to push AI toward privacy today. Combined with decentralized infrastructure, they provide scalable, usable solutions available right now.
I highlight this to show that better paths exist. Cryptography gives us tools to reclaim control of this truly transformative technology from those who prioritize growth over user privacy. Supporting these alternatives — or at minimum, understanding what's possible — matters more than we might think.
The infrastructure for private AI isn't some distant dream. It's here, being built by teams who believe your thoughts should remain your own. The question is whether we'll use it. I, for one, will.