
Omega hardens cloud AI agents with nested isolation

Agents
Published: Mon, Dec 08, 2025 • By Natalie Kestrel
Omega presents a Trusted Agent Platform that confines AI agents inside Confidential Virtual Machines (CVMs) and Confidential GPUs, adds nested isolation and cross-principal attestation, and records tamper-evident provenance. It aims to stop data leakage, tool abuse and tampering while preserving performance for high-density agent deployments in untrusted cloud environments.

AI agents driven by large language models (LLMs) are moving from experiments to production. They access sensitive data, call external tools and talk to other agents. That expands the attack surface, yet most cloud protections treat an agent as a single binary and stop there. The result is accidental data leakage, silent tampering and surprising behaviour when any component in the supply chain is untrusted.

Omega, the system described in the new paper, tries to close that gap. It builds a Trusted Agent Platform on Confidential Virtual Machines (CVMs) and Confidential GPUs. The design nests isolation into three privilege levels: a trusted monitor at the top, a runtime in the middle and agents at the lowest level. That lets many agents live in one CVM while keeping their logic and secrets separated and, crucially, keeping GPU state confidential.
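
Read as a strict ordering, the three levels can be sketched as below; the names and the access check are illustrative only and are not Omega's actual interfaces.

```python
from enum import IntEnum


class Level(IntEnum):
    """Illustrative ordering of the three nested privilege levels."""
    MONITOR = 0   # trusted monitor: most privileged
    RUNTIME = 1   # shared agent runtime: mediates agent operations
    AGENT = 2     # individual agents: least privileged, mutually isolated


def may_inspect(subject: Level, target: Level) -> bool:
    """A more privileged (lower-numbered) level may inspect a less privileged
    one, never the reverse; agents cannot reach runtime or monitor state."""
    return subject <= target


assert may_inspect(Level.MONITOR, Level.AGENT)
assert not may_inspect(Level.AGENT, Level.RUNTIME)
```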

Cross‑principal trust is Omega's other big claim. The system uses what the authors call differential attestation to capture identities and integrity measurements of every contributing principal and to bind those into a single attested agent identity. In practice that means external models, adapters, tools and peer agents are treated as untrusted until they are measured and incorporated into the attestation. A declarative policy language governs data access, tool usage and inter‑agent communication; policies compile to rules for an Open Policy Agent runtime that enforces checks outside the agent context. Every mediated interaction is logged with provenance that the authors say is tamper evident.
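
To make the differential attestation idea concrete, here is a minimal sketch of binding per-principal integrity measurements into one agent identity. The hashing scheme, principal names and field layout are assumptions for illustration, not the paper's actual format.

```python
import hashlib
import json


def measure(artefact: bytes) -> str:
    """Integrity measurement of one principal's artefact (model weights,
    adapter, tool manifest, agent code)."""
    return hashlib.sha256(artefact).hexdigest()


def bind_agent_identity(principals: dict[str, bytes]) -> str:
    """Combine every contributing principal's measurement into one attested
    agent identity; canonical JSON keeps the binding order-independent."""
    measurements = {name: measure(blob) for name, blob in principals.items()}
    canonical = json.dumps(measurements, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


identity = bind_agent_identity({
    "agent_code": b"...agent package bytes...",
    "base_model": b"...model artefact bytes...",
    "adapter": b"...fine-tuning adapter bytes...",
    "tool:search": b"...tool manifest bytes...",
})
print(identity)  # changes whenever any contributing principal changes
```

Swapping any model, adapter or tool then yields a different identity, which is what lets a verifier spot an unmeasured or modified component.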

The implementation targets real hardware: AMD SEV-SNP for CVMs and NVIDIA H100 GPUs. Omega moves model serving into a trusted LLM inference service running on Confidential GPUs and protects storage with a Direct I/O engine. The paper reports that, despite the extra protections, the platform matches non-confidential deployments for performance and improves resource efficiency by consolidating agents and reducing inter-agent latency compared with per-agent CVMs.

These are useful advances, but the paper is candid about limits. The prototype does not yet use VMPL-based isolation or trusted boot. The threat model assumes confidential computing hardware behaves correctly and excludes physical attacks, side channels and denial of service. Attestation and certain overheads still need refinement and broader hardware support.

Security teams should take Omega seriously as a blueprint rather than a finished product. It shows how to combine confidential compute, attestation and policy enforcement to reduce cross‑component risk. It also underlines a perennial truth: cryptographic isolation helps, but governance and honest hardware are still central.

Practical checks

Three short checks security teams can run to evaluate any trusted agent stack inspired by Omega:

  • Validate attestation end to end: confirm attested identities include all principals and that the differential attestation binds them before production use.
  • Test policy enforcement by attempting restricted tool calls and data accesses from unmeasured or modified agents and confirm actions are blocked and logged (a sketch of this check follows the list).
  • Audit provenance logs for tamper evidence and traceability, and measure attestation latency and GPU state protection during realistic workloads.
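
A minimal sketch of the second check, with a toy deny-by-default gate standing in for the real policy engine; the tool name, identity strings and log fields are invented for illustration.

```python
import json

# Toy allow-list: only measured agents whose attested identity appears here
# may call the named tools. A real deployment would query the platform's
# policy engine (the article mentions Open Policy Agent) instead.
ALLOWED = {("expected-attested-identity", "search")}

audit_log = []


def mediate_tool_call(agent_identity: str, measured: bool, tool: str) -> bool:
    """Deny-by-default gate that also records a provenance entry."""
    allowed = measured and (agent_identity, tool) in ALLOWED
    audit_log.append({"agent": agent_identity, "tool": tool,
                      "measured": measured, "allowed": allowed})
    return allowed


# An unmeasured or modified agent must be blocked, and the denial logged.
assert not mediate_tool_call("unknown-agent", measured=False, tool="search")
assert audit_log[-1]["allowed"] is False
print(json.dumps(audit_log, indent=2))
```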

Additional analysis of the original arXiv paper

📋 Original Paper Title and Abstract

Trusted AI Agents in the Cloud

Authors: Teofil Bodea, Masanori Misono, Julian Pritzi, Patrick Sabanic, Thore Sommer, Harshavardhan Unnibhavi, David Schall, Nuno Santos, Dimitrios Stavrakakis, and Pramod Bhatotia
AI agents powered by large language models are increasingly deployed as cloud services that autonomously access sensitive data, invoke external tools, and interact with other agents. However, these agents run within a complex multi-party ecosystem, where untrusted components can lead to data leakage, tampering, or unintended behavior. Existing Confidential Virtual Machines (CVMs) provide only per binary protection and offer no guarantees for cross-principal trust, accelerator-level isolation, or supervised agent behavior. We present Omega, a system that enables trusted AI agents by enforcing end-to-end isolation, establishing verifiable trust across all contributing principals, and supervising every external interaction with accountable provenance. Omega builds on Confidential VMs and Confidential GPUs to create a Trusted Agent Platform that hosts many agents within a single CVM using nested isolation. It also provides efficient multi-agent orchestration with cross-principal trust establishment via differential attestation, and a policy specification and enforcement framework that governs data access, tool usage, and inter-agent communication for data protection and regulatory compliance. Implemented on AMD SEV-SNP and NVIDIA H100, Omega fully secures agent state across CVM-GPU, and achieves high performance while enabling high-density, policy-compliant multi-agent deployments at cloud scale.

🔍 ShortSpan Analysis of the Paper

Problem

AI agents powered by large language models are increasingly deployed as cloud services that autonomously access sensitive data, invoke external tools, and interact with other agents. However, these agents operate within a complex multi-party ecosystem where untrusted components can lead to data leakage, tampering or unintended behaviour. Existing Confidential Virtual Machines provide only per-binary protection and offer no guarantees for cross-principal trust, accelerator-level isolation, or supervised agent behaviour. Omega is presented as a system that enables trusted AI agents by enforcing end-to-end isolation, establishing verifiable trust across all contributing principals, and supervising every external interaction with accountable provenance. Omega builds on Confidential VMs and Confidential GPUs to create a Trusted Agent Platform that hosts many agents within a single CVM using nested isolation. It also provides efficient multi-agent orchestration with cross-principal trust establishment via differential attestation, and a policy specification and enforcement framework that governs data access, tool usage and inter-agent communication for data protection and regulatory compliance. Implemented on AMD SEV-SNP and NVIDIA H100, Omega fully secures agent state across CVM-GPU, and achieves high performance while enabling high-density, policy-compliant multi-agent deployments at cloud scale.

Approach

Omega introduces a Trusted Agent Platform that consolidates multiple AI agents in a single confidential VM while extending isolation to GPUs. It uses three VM privilege levels to create nested isolation, with a trusted monitor at the highest level, a runtime at an intermediate level, and agents executing at the lowest level. A cross-principal trust mechanism, differential attestation, captures identities and integrity measurements of all principals and binds them into a unified attested agent identity. Omega provides a declarative policy language to govern data access, tool usage and inter-agent communication, enforced by a policy execution engine that validates operations before execution and records per-action provenance in tamper-evident logs. It separates model and adapter management from agents via a trusted LLM inference service that runs on Confidential GPUs and uses a Direct I/O engine to protect storage. The agent registry and trusted registry enable components to be discovered and validated during deployment. A three-phase attestation flow (platform initialisation, platform attestation and agent execution) supports verifiable cross-party trust. The system relies on MCP and A2A protocols for external tool and inter-agent communication, with shared memory channels to minimise latency. A policy compiler translates policies into Rego rules for the Open Policy Agent runtime, which enforces policies outside the agent context.
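
The paper describes the provenance log only at a high level; one plausible shape is a hash chain in which each entry commits to its predecessor, so later edits break verification. The sketch below illustrates that idea under those assumptions and is not the paper's actual log format.

```python
import hashlib
import json
import time


class ProvenanceLog:
    """Append-only log where each entry commits to its predecessor, so any
    later modification breaks verification of the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, action: str, decision: str, result: str) -> None:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        body = {"ts": time.time(), "action": action,
                "decision": decision, "result": result, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "digest": digest})

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "digest"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != entry["digest"]:
                return False
            prev = entry["digest"]
        return True


log = ProvenanceLog()
log.append("tool_call:search", "allow", "ok")
log.append("data_access:records.db", "deny", "blocked")
assert log.verify()
```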

Key Findings

  • Omega provides end-to-end isolation of sensitive computation including agent logic, LLM inference and data flows, preserving confidentiality and integrity even under a compromised cloud operator.
  • Cross-principal trust is established via differential attestation that binds the identities and measurements of all principals involved in a given invocation into a verifiable agent identity.
  • All external interactions are mediated by a policy specification and enforcement framework, with an auditable, tamper-evident provenance log recording actions, decisions and results for accountability.
  • The Trusted Agent Platform enables high-density multi-agent deployments by consolidating agents within a single CVM and protecting GPU state, while providing efficient inter-agent communication through shared memory channels and co-scheduling guided by policy hints.
  • Empirical evaluation shows Omega matches the performance of non-confidential deployments and improves resource efficiency, with significant reductions in inter-agent communication latency compared with per-agent CVMs, and improved scalability beyond traditional CVM limits.
  • Omega mitigates a range of security risks demonstrated in MCPSecBench-style tests, including data exfiltration, repeated tool invocations, resource access violations, privilege escalation and execution flow disruption, through policy-based restrictions and user confirmation for sensitive actions.

Limitations

The current prototype does not implement VMPL-based isolation; evaluations were performed without VMPL isolation and trusted boot. The approach assumes confidential computing hardware functions correctly and does not defend against physical, side-channel or denial-of-service attacks. The threat model assumes a powerful adversary controlling the cloud infrastructure and treats external models, adapters, tools and peer agents as untrusted until measured and incorporated into the attested identity. Attestation times and certain overheads are reported, but further work is needed to mature VMPL-based isolation and broader hardware support across platforms.

Why It Matters

Omega offers a practical path to deploying scalable, policy-driven and auditable AI agents in untrusted cloud environments. End-to-end isolation, cross-principal trust and supervised external interactions with provable provenance strengthen governance, privacy and regulatory compliance. The approach enables more capable autonomous cloud agents while highlighting the need for robust governance to mitigate privacy and misuse risks associated with broader agent capabilities.

