Confidential Computing in Cloud: TEEs (SGX/TDX/SEV-SNP) and Attestation for Data and Keys in Use

Confidential computing is the practical answer to a very specific question: how do you keep data and cryptographic keys protected while they are being processed, not just while stored or transmitted? In 2026, the most common way to do this in cloud environments is to run the workload inside a trusted execution environment (TEE) and then use remote attestation to prove—cryptographically—that the code, configuration, and hardware state are what you expect before any sensitive material is released.

TEEs in 2026: what SGX, TDX and SEV-SNP actually protect

All TEEs aim to reduce the number of parties that can access “data-in-use”, but they do it at different layers. Intel SGX is an enclave model: you isolate a specific process or component and protect its memory region from the rest of the system. Intel TDX and AMD SEV-SNP are VM-oriented: they aim to protect an entire guest VM from the host hypervisor and from cloud-operator access, including memory inspection and many classes of tampering. In practice, this means SGX can be a good fit for isolating a narrow cryptographic service or a privacy-sensitive algorithm, while TDX and SEV-SNP are the easier path when you want to move an existing VM workload into a confidential VM without rewriting the application.

It’s important to be explicit about threat boundaries. A VM-based TEE does not magically make a workload “invisible to the world”; it narrows who can see or modify guest memory and state. You still need standard security controls inside the guest: patching, access control, network policies, and logging. What changes is that the cloud control plane and host stack are no longer assumed to be trustworthy for confidentiality of guest memory, and the hardware becomes the root for memory encryption and integrity checks.

In 2026, you also have to track lifecycle realities. SGX continues to exist, but parts of its supporting infrastructure have been evolving; for example, Intel has retired older versions of the SGX Provisioning Certification Service (PCS) APIs, which matters if you rely on specific attestation or provisioning flows in production. The lesson is simple: treat “TEE + attestation” as an end-to-end system, not just a CPU feature, and plan upgrades the same way you would plan TLS or certificate lifecycle changes.

Attestation evidence: reports, quotes, endorsements and what they mean

Remote attestation is the mechanism that turns “I ran inside a TEE” into something a verifier can rely on. The TEE produces signed evidence describing its identity and state. Depending on the technology, this evidence might be called a quote (common in Intel ecosystems), an attestation report (common in AMD SEV-SNP documents), or a set of endorsements plus measurements (common in cloud-specific implementations). The core idea is consistent: a verifier validates signatures back to a hardware root of trust, checks a trust status (for example, whether the system is at an acceptable patch/TCB level), and then compares the measured identity to a known-good reference.

In Intel TDX, the guest (trust domain) produces a report which is turned into a signed quote by a quoting component. That quote can be verified off-host by a relying party that trusts the certificate chain and understands how to interpret the measurements. In AMD SEV-SNP, the guest obtains an attestation report which includes launch measurements and other claims, and the report is signed in a way that chains to AMD’s root, with endorsement material made available through AMD key distribution mechanisms used by cloud providers. In both models, the “identity” you verify should include more than “a VM exists”; it should bind to a specific boot chain, firmware/TCB status, and the initial guest state that you are willing to trust.
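
Whatever the vendor-specific packaging, the verifier's job reduces to the same ordered checks. The sketch below shows that skeleton in Python; the field names are illustrative rather than taken from any particular TDX or SEV-SNP SDK, and the signature-chain validation itself is assumed to have been done by whatever quote-verification library you use.

```python
from dataclasses import dataclass

# Illustrative field names only; real evidence parsing depends on the
# TDX/SEV-SNP verification library you use.
@dataclass
class Evidence:
    signature_ok: bool     # quote/report chains back to the hardware vendor's root
    tcb_status: str        # e.g. "UpToDate" vs "OutOfDate" as reported by the verifier stack
    measurement: bytes     # launch measurement of the guest / enclave

ACCEPTED_MEASUREMENTS = {bytes.fromhex("aa" * 48)}   # placeholder reference value
ACCEPTED_TCB_STATUSES = {"UpToDate"}

def accept(ev: Evidence) -> bool:
    # 1. Signature must chain to the hardware root of trust.
    if not ev.signature_ok:
        return False
    # 2. The platform must be at an acceptable patch/TCB level.
    if ev.tcb_status not in ACCEPTED_TCB_STATUSES:
        return False
    # 3. The measured identity must match a known-good reference value.
    return ev.measurement in ACCEPTED_MEASUREMENTS
```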

Cloud providers increasingly add their own endorsement layer because customers want a practical verification path that fits their operations. For example, cloud documentation commonly describes how to retrieve provider-signed endorsements or firmware measurements and validate them against reference values. This is not marketing fluff: it’s a real operational bridge between raw hardware measurements and something you can embed in CI/CD, admission control, and key-release policies.

Remote attestation as a gate for key release and sensitive data access

The most useful pattern in real systems is “attestation-gated secret release”. Instead of storing plaintext secrets inside the VM image or handing them out based on network location, you make a key management service (KMS), HSM-backed vault, or secrets manager release secrets only when it receives valid attestation evidence from the running workload. This turns a TEE into a controllable trust boundary: if the workload is not running in the expected TEE mode, or if the TCB level is below your policy, the secrets simply do not appear.

In 2026, major cloud environments provide building blocks for this. AWS Nitro Enclaves uses a signed attestation document that an enclave can request and attach to calls to external services; those services can validate measurements against an access policy before performing cryptographic operations. This is commonly used to protect high-value keys: the enclave asks AWS KMS to decrypt or generate a data key, and KMS checks the enclave attestation before allowing the operation. Separately, AWS also documents attestation flows for SEV-SNP-based instances, where the attestation report includes a launch measurement that can be used to verify the initial boot code and environment. The two approaches address different use cases, but the same architectural idea applies: secrets are released only to verified code running under verified protections.
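
As a concrete illustration of the Nitro Enclaves pattern, a KMS key policy can restrict Decrypt to callers presenting an attestation document with the expected enclave image hash. The statement below is a sketch expressed as a Python dict; the kms:RecipientAttestation:ImageSha384 condition key follows AWS's documented naming for enclave attestation, but the operator choice, role ARN, and digest are placeholders you should check against current KMS documentation.

```python
import json

# Sketch of a KMS key policy statement gated on Nitro Enclaves attestation.
# All values are placeholders; verify the exact condition key, operator,
# and ARNs against current AWS documentation before use.
statement = {
    "Sid": "DecryptOnlyForVerifiedEnclaveImage",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/enclave-parent-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            "kms:RecipientAttestation:ImageSha384": "<expected-enclave-image-sha384-hex>"
        }
    },
}
print(json.dumps(statement, indent=2))
```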

On Microsoft Azure, confidential VM offerings are closely tied to attestation services and guest attestation designs. A typical approach uses a vTPM-backed chain inside the guest, and a verifier service checks evidence against policy before granting access. Meanwhile, Google Cloud Confidential VM documentation emphasises verification of launch endorsements and firmware-related measurements for instances with AMD SEV-SNP or Intel TDX enabled. When you design for “keys only for verified workloads”, these provider capabilities help you avoid inventing your own brittle verification protocol.

Designing the policy: what you should actually validate before releasing keys

A useful attestation policy is specific, versioned, and tied to your deployment lifecycle. Start with identity: you should be able to match the workload to a known measurement (or a set of accepted measurements) that corresponds to your signed boot artefacts, kernel, initramfs, and the early user space that will handle secrets. If you rely on containers inside a confidential VM, consider whether you need a second-layer measurement or admission check, because “a VM booted correctly” does not automatically mean “my container image is exactly the one I intended”. In practice, many teams combine VM attestation with signed container images and a strict runtime policy.
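
One way to keep such a policy specific and versioned is to treat it as data that both the VM-level and container-level checks consult. The sketch below assumes a home-grown schema; the field names and values are illustrative, not a standard format.

```python
from dataclasses import dataclass

# Illustrative, home-grown policy schema; field names are assumptions.
@dataclass(frozen=True)
class ReleasePolicy:
    version: str
    accepted_vm_measurements: frozenset   # hex digests of approved boot chains
    accepted_image_digests: frozenset     # signed container images allowed inside the CVM

POLICY = ReleasePolicy(
    version="2026-02-r3",
    accepted_vm_measurements=frozenset({"<measurement-of-current-golden-image>"}),
    accepted_image_digests=frozenset({"sha256:<digest-of-signed-app-image>"}),
)

def workload_is_known(vm_measurement: str, image_digest: str, policy: ReleasePolicy) -> bool:
    # Both layers must match: the confidential VM's boot chain *and* the
    # container image actually running inside it.
    return (vm_measurement in policy.accepted_vm_measurements
            and image_digest in policy.accepted_image_digests)
```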

Next, validate freshness and security posture. Attestation evidence often includes a TCB status or security version numbers that tell you whether the system is up to date enough for your risk tolerance. This is not a theoretical concern: TEEs have security advisories, microcode updates, and firmware updates like any other technology. A policy that ignores TCB status is effectively saying, “I trust the environment even if it’s unpatched.” For regulated workloads, that rarely survives an audit. Treat this like certificate validation: it is not enough that a signature chains to a root; you also care about revocation and the current health of the chain.
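
The TCB part of the policy can be as simple as a per-component floor. The sketch below uses component names modelled loosely on the SEV-SNP TCB structure; the numbers are placeholders, and a missing component fails closed.

```python
# Placeholder floors; the component names loosely mirror the SEV-SNP TCB
# structure (boot loader, TEE, SNP firmware, microcode SVNs) and should be
# mapped to whatever your verifier actually parses out of the report.
MINIMUM_TCB = {"boot_loader": 4, "tee": 0, "snp_fw": 22, "microcode": 213}

def tcb_acceptable(reported: dict, minimum: dict = MINIMUM_TCB) -> bool:
    # Every component must be at or above the policy floor; a missing
    # component counts as a failure (fail closed).
    return all(reported.get(name, -1) >= floor for name, floor in minimum.items())
```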

Finally, bind secrets to context. A robust pattern is to have the workload generate an ephemeral public key inside the TEE, include that key (or its hash) inside the attestation evidence (or in attestation user data if supported), and then have the verifier encrypt secrets to that ephemeral key. This ensures that even if the secret payload is intercepted, only the currently running, verified instance can decrypt it. It also makes rotation straightforward: every restart produces a new keypair and therefore a clean cryptographic boundary between deployments.
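
A minimal sketch of that binding, using the Python cryptography package (X25519 key agreement, HKDF, AES-GCM). How the report_data digest actually gets into the quote or report is platform-specific and omitted; only the cryptographic shape is shown.

```python
import os
from hashlib import sha384
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

RAW = dict(encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw)

# Workload side (inside the TEE): ephemeral keypair; the digest of the public
# key is what you would place in the attestation user-data/report-data field.
workload_priv = X25519PrivateKey.generate()
workload_pub = workload_priv.public_key().public_bytes(**RAW)
report_data = sha384(workload_pub).digest()   # bound into the quote/report (mechanism not shown)

def _derive(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"attested-key-release").derive(shared)

# Verifier side: after the evidence checks pass (including recomputing the
# digest above and matching it against report_data), encrypt the secret so
# only this running instance can open it.
def seal_to_workload(workload_pub_bytes: bytes, secret: bytes):
    eph = X25519PrivateKey.generate()
    key = _derive(eph.exchange(X25519PublicKey.from_public_bytes(workload_pub_bytes)))
    nonce = os.urandom(12)
    return eph.public_key().public_bytes(**RAW), nonce, AESGCM(key).encrypt(nonce, secret, None)

# Workload side again: decrypt inside the TEE.
def open_from_verifier(eph_pub: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    key = _derive(workload_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub)))
    return AESGCM(key).decrypt(nonce, ciphertext, None)

eph_pub, nonce, ct = seal_to_workload(workload_pub, b"wrapped-data-key")
assert open_from_verifier(eph_pub, nonce, ct) == b"wrapped-data-key"
```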

Operational reality: limits, pitfalls, and what “secure in use” does not mean

Confidential computing is a major step forward, but it is not a blanket guarantee. The classic misconception is that TEEs remove the need for hardening the guest. They do not. You still have to manage credentials, patch the OS, secure workloads, and handle insider threats within your own organisation. TEEs mainly reduce the risk from a compromised or curious host, from certain classes of hypervisor-level attacks, and from accidental exposure through host-level debugging or snapshotting.

Side channels and misconfiguration remain real. Any serious design in 2026 should assume that side-channel research continues, and that performance/telemetry features can become attack surfaces. You mitigate this by keeping the trusted code base small (especially for enclave-style designs), avoiding unnecessary data-dependent branching in sensitive routines, controlling what the workload exposes via logs and metrics, and using constant-time cryptographic primitives where applicable. For VM-based TEEs, you also need to think about what you expose through network services: a confidential VM that happily returns secrets over an unauthenticated endpoint is still a data breach waiting to happen.
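
One small, concrete instance of the constant-time advice: compare authentication tags or measurement digests with hmac.compare_digest rather than ==, which short-circuits on the first differing byte.

```python
import hmac

# '==' on secret-dependent bytes can leak timing; compare_digest takes time
# independent of where the first mismatch occurs.
def tags_match(expected: bytes, presented: bytes) -> bool:
    return hmac.compare_digest(expected, presented)
```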

Another operational pitfall is treating attestation as a one-time setup task. Evidence formats, certificate chains, and provider verification flows evolve. A good engineering team tests attestation in CI, monitors verification failures, and keeps a controlled “break glass” process for incidents. If attestation fails during an outage, your system should degrade safely. That might mean running in a reduced capability mode without the most sensitive keys, rather than silently falling back to plaintext secrets delivery because “the service must stay up”.

A practical blueprint: end-to-end flow for confidential key use

A workable blueprint looks like this. First, build an immutable artefact (VM image or container bundle) with a minimal secret footprint: no long-lived keys baked into the image. Second, start the workload in a confidential VM or enclave-backed environment that can produce remote attestation evidence. Third, have the workload generate an ephemeral keypair inside the TEE and request an attestation quote/report that includes the ephemeral public key (or a digest) in the user-data field when supported, or bind it at the verifier layer if that is the provider’s pattern.

Then, a verifier service validates the evidence: signature chain, TCB/patch level, expected measurements, and any provider endorsements required. Only if all checks pass does the verifier ask a KMS/HSM to release a wrapped data key, or decrypt a key-encryption key, or provide a short-lived credential. The verifier encrypts the result to the workload’s ephemeral public key and returns it. The workload decrypts it inside the TEE and uses it for envelope encryption (data keys for bulk data, with master keys staying in KMS/HSM). This ensures that the most sensitive key material never exists in plaintext outside a verified environment.
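
The verifier's release step can then be as small as the sketch below, which assumes an AWS KMS backend; the key alias is a placeholder, and sealing the plaintext data key to the workload's ephemeral public key would reuse the kind of routine sketched earlier.

```python
import boto3

# Sketch of the verifier's release step. evidence_ok is the outcome of the
# checks described above (signature chain, TCB, measurements, endorsements).
kms = boto3.client("kms")

def release_data_key(evidence_ok: bool, key_id: str = "alias/confidential-workload") -> dict:
    if not evidence_ok:
        # Fail closed: no valid evidence, no key material.
        raise PermissionError("attestation evidence did not satisfy policy")
    resp = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    # resp["Plaintext"] should be sealed to the workload's ephemeral public key
    # (as in the earlier sketch) before it leaves the verifier; store
    # resp["CiphertextBlob"] alongside the data it protects for later re-wrap.
    return resp
```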

Finally, operationalise rotation and audit. Rotate secrets on deployment, on a schedule, and on policy changes (for example, when you tighten accepted TCB levels). Log verification decisions with enough detail for forensics: which measurement set was accepted, which policy version was used, and which key release occurred. If you later discover a vulnerability that changes your risk posture, you should be able to identify which workloads received keys under the old policy and invalidate them quickly. That is the difference between a security feature and a security programme.