A system-level defense framework for AI agents that intercepts and blocks indirect prompt injection attacks using strict instruction provenance and dynamic policy enforcement.
Updated Apr 8, 2026 - Python
Inspired by https://github.com/anthropic-experimental/sandbox-runtime. A Python sandboxing tool that enforces filesystem and network restrictions on arbitrary processes at the OS level, without requiring a container.
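The idea of OS-level, container-free restriction can be illustrated with a minimal sketch using only Python stdlib primitives. This is not the tool's actual mechanism (real sandboxes of this kind layer on namespaces, seccomp filters, or Landlock for filesystem and network isolation); the `run_restricted` helper and its parameters are hypothetical, shown here only to demonstrate applying per-process limits before exec on a POSIX system:

```python
import resource
import subprocess
import sys

def run_restricted(cmd, workdir, max_open_files=64, cpu_seconds=5):
    """Run `cmd` in `workdir` with OS-level resource limits applied
    in the child process just before exec.

    Illustrative only: setrlimit caps resources for the child but does
    not by itself block filesystem paths or network access the way a
    full sandbox runtime would.
    """
    def apply_limits():
        # Cap the number of open file descriptors in the child.
        resource.setrlimit(resource.RLIMIT_NOFILE,
                           (max_open_files, max_open_files))
        # Cap CPU time so a runaway child is killed by the kernel.
        resource.setrlimit(resource.RLIMIT_CPU,
                           (cpu_seconds, cpu_seconds))

    return subprocess.run(cmd, cwd=workdir, preexec_fn=apply_limits,
                          capture_output=True, text=True)

result = run_restricted([sys.executable, "-c", "print('sandboxed')"],
                        workdir="/tmp")
print(result.stdout.strip())
```

Because the limits are set via `preexec_fn`, they apply only to the spawned process and its descendants, leaving the parent unrestricted.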
Deterministic execution boundary for AI systems enforcing signed approvals, replay protection, and cryptographic receipts.
Secure operating environment for bounded agent execution with signed plans, SATL envelopes, ToolGate enforcement, and audit-ready controls.