On May 4, 2026, AWS open-sourced a project called Trusted Remote Execution, or Rex, which represents a significant advancement in runtime security for agentic AI systems. The tool intercepts every system operation that an AI-generated script attempts and evaluates it against a Cedar policy defined by the host owner, not by the developer or the AI agent itself.
This architectural shift is a major step forward in containing common failure modes such as hallucinated code, prompt injection, and overly eager task interpretation. However, security and compliance leaders must recognize that Rex solves only part of the problem. The data layer—where purpose limitation, data minimization, and auditability are enforced—remains untouched by this release.
What AWS Solved with Rex
Rex uses Rhai, a lightweight embedded scripting language with no built-in operating system access. Every read, write, or open operation is intercepted by a Rex SDK call, which evaluates a Cedar policy before allowing the system call to proceed. If denied, the script receives an ACCESS_DENIED_EXCEPTION and the operation never reaches the kernel. Importantly, the script and the policy are versioned separately, giving the host owner full control over what is permitted.
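Rex itself is built on Rhai scripts and Cedar policies; the shape of the pattern, though, is easy to show in any language. The following Python sketch illustrates the idea of a policy check that runs before any operation reaches the operating system. Every name here (`HostPolicy`, `guarded_read`, `AccessDeniedException`) is hypothetical, chosen for illustration; this is not the Rex SDK API.

```python
class AccessDeniedException(Exception):
    """Raised when the host policy denies an operation (Rex's
    ACCESS_DENIED_EXCEPTION plays the analogous role)."""

class HostPolicy:
    """Stands in for a Cedar policy: the host owner, not the script
    or the agent, decides which (action, resource) pairs are allowed."""
    def __init__(self, allowed):
        self.allowed = set(allowed)

    def is_permitted(self, action, resource):
        return (action, resource) in self.allowed

def guarded_read(policy, path):
    # The check happens BEFORE the system call: a denied read
    # never reaches the kernel.
    if not policy.is_permitted("fs:read", path):
        raise AccessDeniedException(f"fs:read denied for {path}")
    with open(path) as f:
        return f.read()

# The policy is defined and versioned separately from the script.
policy = HostPolicy([("fs:read", "/tmp/allowed.txt")])
try:
    guarded_read(policy, "/etc/shadow")
except AccessDeniedException as e:
    print(e)
```

The key design point, mirrored from Rex, is that the script cannot widen its own permissions: the policy object is supplied by the host, and the script only ever sees the guarded entry points.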
AWS designed Rex to address three specific threats in agentic AI: hallucinated code, prompt injection, and overly eager task interpretation. Each of these is a documented and persistent problem. OpenAI and Anthropic have acknowledged that prompt injection is unlikely to be fully solved. Rex inverts the traditional approach: instead of trying to bound what the agent generates, it bounds what any generated script can actually do on the host. This is a crucial shift in trust architecture.
The runtime pattern is correct and should be adopted. It amounts to a hyperscaler endorsement of two principles: prompts are instructions, not access controls, and an agent's claimed identity must be verified rather than trusted. Security teams can reference a working open-source implementation of this pattern in architecture reviews and vendor security questionnaires.
What AWS Did Not Solve
Rex governs system calls, not data security. This distinction is critical. A Cedar policy can permit reading a customer records file, but it cannot answer questions like: Is this read happening on behalf of a specific human user with proper authorization? Is the agent pulling more data than necessary? Are records subject to deletion requests or legal holds? Is the access logged in a tamper-evident way that will survive model lifecycle changes?
These questions are required by regulations such as GDPR (purpose limitation, data minimization), HIPAA (minimum necessary), and CMMC (access control). Runtime gating alone cannot satisfy these frameworks. Without data-layer controls, an organization can demonstrate that no script exceeded host permissions but still fail to prove that the right person authorized the right data access for the right purpose.
The Numbers Make the Gap Concrete
A 2026 forecast report found that 63% of organizations cannot enforce purpose limitations on AI agents, 60% cannot quickly terminate misbehaving agents, 55% cannot isolate AI systems from broader network access, and 54% cannot validate AI inputs. Rex addresses termination, isolation, and input validation at the runtime layer. However, purpose limitation remains unaddressed because it is a data-semantics control that must be enforced on the data itself.
Only 43% of organizations have a centralized AI data gateway. The remaining 57% run agentic AI through fragmented data-layer controls. Adding Rex to that 57% closes the runtime gap but leaves the data gap unchanged. The Five Eyes advisory on agentic AI from April/May 2026 names five risk categories: privilege, design and configuration, behavior, structural, and accountability. Rex addresses parts of privilege and behavior but not accountability, which requires evidence about who accessed what data, on whose behalf, and for what purpose.
Architecture That Data Security Requires
A defensible architecture for agentic AI must be layered. Runtime controls like Rex enforce what the host permits. Identity controls verify who the agent acts for. Data-layer controls use attribute-based access control evaluated against classification, jurisdiction, consent, and purpose. Each layer addresses a different failure mode, and none substitutes for the others.
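To make the data-layer control concrete, here is a hedged sketch of an attribute-based access decision that evaluates clearance, jurisdiction, consent, and declared purpose together. The types and attribute names (`Record`, `Request`, `authorize`) are hypothetical, not drawn from any particular product; a real deployment would source these attributes from a data catalog and consent store.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    classification: str          # e.g. "pii"
    jurisdiction: str            # e.g. "EU"
    consented_purposes: frozenset  # purposes the data subject agreed to

@dataclass(frozen=True)
class Request:
    user_id: str                 # the human the agent acts on behalf of
    clearance: frozenset         # classifications this user may handle
    jurisdictions: frozenset     # jurisdictions this user may access
    purpose: str                 # declared purpose of this access

def authorize(req: Request, rec: Record) -> bool:
    # All attributes must line up; any single mismatch denies access.
    return (
        rec.classification in req.clearance          # classification check
        and rec.jurisdiction in req.jurisdictions    # jurisdictional check
        and req.purpose in rec.consented_purposes    # purpose limitation
    )

rec = Record("pii", "EU", frozenset({"billing"}))
alice_billing = Request("alice", frozenset({"pii"}), frozenset({"EU"}), "billing")
alice_marketing = Request("alice", frozenset({"pii"}), frozenset({"EU"}), "marketing")
print(authorize(alice_billing, rec))    # permitted
print(authorize(alice_marketing, rec))  # denied: no consent for this purpose
```

Note what a runtime gate like Rex cannot express here: both requests would look identical at the system-call layer (a read of the same file), yet the purpose attribute makes one lawful and the other not.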
The data layer is where every access is authenticated against the human user, where every authorization decision respects classification and consent, and where every operation produces a tamper-evident audit record that outlives the model that initiated it. AWS does not provide that layer in the Rex release. It is the architect's responsibility to build it explicitly.
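One common way to make an audit record tamper-evident is a hash chain, where each entry's digest covers the previous entry's digest, so altering any record invalidates everything after it. The sketch below illustrates that idea only; the function names are hypothetical, and a production system would additionally need signed anchors, durable storage, and key management.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first record

def append_record(log, entry):
    """Append an audit entry whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"entry": entry, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log):
    """Recompute every hash; any edited or reordered record breaks the chain."""
    prev = GENESIS
    for rec in log:
        body = {"entry": rec["entry"], "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, {"user": "alice", "agent": "agent-7",
                    "action": "read", "purpose": "billing"})
append_record(log, {"user": "alice", "agent": "agent-7",
                    "action": "export", "purpose": "billing"})
print(verify_chain(log))               # chain intact
log[0]["entry"]["action"] = "delete"   # tampering with history...
print(verify_chain(log))               # ...is now detectable
```

Because verification depends only on the log itself, the evidence remains checkable after the agent, model, or runtime that produced it has been retired.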
What This Means for Security and Compliance Leaders
The right operational response has three parts. First, adopt the runtime pattern. Rex is open-source under Apache 2.0 and runs on Linux and macOS, so there is no procurement obstacle. Second, do not treat runtime gating as the entire solution. Map current controls against the Five Eyes risk categories and identify where the architecture stops at the kernel and where the data layer is still ungoverned. Third, build the audit trail at the layer that survives model lifecycle changes. The model can be retired, the runtime can be replaced, but the data layer is the only place where evidence outlasts the agent that produced it.
AWS solved part of the problem. Data security—the part that appears in audits, regulatory inquiries, breach notifications, and litigation discovery—requires governance at the data layer. The runtime layer just got easier. The data layer remains the architect's responsibility, and it will determine whether the next agentic AI audit succeeds or fails. As agentic AI expands into hiring, healthcare, supply chains, and customer service, the need for comprehensive data-layer controls becomes even more pressing. Organizations that ignore this gap risk failing compliance and security audits, with significant legal and financial repercussions.
Source: TechRepublic News