Trusted Execution Environments (TEEs) — like Intel SGX, AMD SEV, and ARM TrustZone — are marketed as cutting-edge security features designed to protect sensitive code and data, even from system administrators or attackers with root access. In reality, they're opaque, proprietary black boxes riddled with vulnerabilities, deployed under the pretense of trust without transparency. For anyone serious about actual security, TEEs should not be trusted — and certainly not relied upon.

Over the past decade, major chip manufacturers like Intel and AMD have positioned TEEs as a solution to growing concerns about data breaches and cloud provider trust. TEEs are now a foundational component of "confidential computing" — systems that claim to protect data in use by isolating sensitive operations in secure enclaves.

AMD's whitepaper promotes its TEE technology, AMD SEV, with confident claims:

“Even an administrator with malicious intentions at a cloud data center would not be able to access the data in a hosted VM.”

Cloud providers eagerly jumped on board. Microsoft, Amazon, and Google all market "confidential computing" built on TEE-backed technology, promising that even they — the infrastructure operators — can't access your data.

Amazon:

“AWS confidential computing is always on. There is no mechanism for any AWS operator to access customers' Amazon Elastic Compute Cloud (Amazon EC2) instances within the AWS Nitro System.”

Microsoft:

“Azure provides the broadest support for hardened technologies such as AMD SEV-SNP, Intel Trust Domain Extensions (TDX), and Intel Software Guard Extensions (SGX). All technologies meet our definition of confidential computing, which is to help organizations prevent unauthorized access or modification of code and data while in use.”

Sounds good on paper. But that's just nice words and marketing.

Here's the uncomfortable truth: all major TEEs are built on closed, proprietary hardware and firmware. The microcode, management engines, secure enclaves, and cryptographic operations running on your machine are invisible to you. You can't audit them. You can't verify them. You can't change them. And it's not just the TEEs — the same goes for the CPUs themselves, and nearly every other component in your computer. In cloud environments, the situation is even worse, where you're expected to trust layers of infrastructure you have zero visibility into.

You're asked to trust these companies with long histories of opaque development processes, security mishaps, and collaboration with states and surveillance agencies. You're also asked to trust the cloud provider running the hypervisor, and the hypervisor itself, and the firmware signed by a vendor you can't see.

So when you hear these corporations say, "we promise your data is safe, even from us," what they really mean is: "just trust us, bro."

Even setting aside the trust model, TEEs have failed to deliver on their technical promises. From their very inception, they've been riddled with vulnerabilities — a litany of side-channel attacks, speculative execution exploits, and firmware-level compromises. In practice, these "trusted" environments have resembled Swiss cheese more than secure enclaves: full of holes.

In just the past few years:

And that's just what's been published in academic circles during the last few years. These attacks aren't hypothetical. They demonstrate that even attackers without root privileges — in some cases, just another VM on the same host — can bypass the confidentiality guarantees these TEEs claim to provide.
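The class of leakage these attacks exploit is easiest to see with a toy example outside any TEE: a comparison routine that exits early reveals, through timing alone, how many leading bytes of a secret a guess got right. This is a generic illustration of side-channel thinking, not a TEE exploit; both functions here are written purely for this sketch:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns on the first mismatching byte, so execution time correlates
    # with how many leading bytes match -- a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so timing reveals nothing about where the inputs differ.
    return hmac.compare_digest(a, b)
```

Real TEE side channels observe caches, page faults, or speculative execution rather than wall-clock time, but the principle is the same: secret-dependent behavior that an observer can measure.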

TEEs don't eliminate trust; they centralize it in opaque entities. You're not removing risk, you're transferring it. Real security isn't based on blind trust. It's based on verifiability — the ability to inspect, audit, and control the systems that process your data. That means:

Projects such as Libre-SOC aren't mainstream yet. But they're grounded in a security model that respects the user, not just corporate and state interests or regulatory checkboxes.
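Verifiability can be concrete. If firmware or enclave images are built reproducibly, anyone can check the artifact they actually run against an independently published digest instead of taking a vendor's word. A minimal sketch — the "firmware" bytes and the published digest are purely illustrative stand-ins:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    # Compare the artifact's SHA-256 against a digest published out of band.
    # With reproducible builds, that digest can be recomputed from source,
    # shifting trust from an opaque vendor to auditable code.
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Illustrative bytes standing in for a firmware image:
firmware = b"example firmware image"
published = hashlib.sha256(firmware).hexdigest()  # would come from the vendor

assert verify_artifact(firmware, published)
assert not verify_artifact(firmware + b"tampered", published)
```

This is the opposite of the TEE model: nothing here asks you to trust unauditable silicon, only to recompute a hash anyone can check.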

TEEs were never a silver bullet, but they've been sold like one. Instead of challenging centralized trust models, they've deepened the problem. Instead of offering true privacy, they offer unverified assurances. And instead of moving toward a more secure future, they lock us further into a hardware and software monoculture we can't escape or examine.

So if you're placing your trust in TEEs to protect your data: don't. Reject the black box.

Below is a growing list of academic research exposing the minefield of side channels, speculative execution flaws, and architectural fuckups that plague TEEs. If we missed something worth highlighting, let us know.

2025:

2024:

2023:

2022:

2021:

2020:

2019:

2018:

2017:

2016:

The first paper demonstrating [archived] an attack against Intel SGX was published in May 2015 by Xu et al. It introduced controlled-channel attacks targeting Haven, a system based on [archived] Intel SGX, built with an instruction-accurate SGX emulator. Notably, this research predates the public release of SGX-capable hardware, which arrived in August 2015 with the first SGX-enabled Skylake CPUs.

Other TEE-related vulnerability and exploitation resources: