# agentic-ai-security

Here are 7 public repositories matching this topic...


Formal safety framework for AI agents. Pluggable LLM reasoning constrained by mathematically proven budget, invariant, and termination guarantees. 7 theorems enforced by construction, not by prompting. Includes Bayesian belief tracking, causal dependency graphs, sandboxed attestors, environment reconciliation, and a 155-test adversarial suite.

  • Updated Feb 28, 2026
  • Python
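The "enforced by construction, not by prompting" idea can be illustrated with a minimal sketch: the guarantee lives in the control loop's types and checks, so no prompt wording can bypass it. All names below (`GuardedAgent`, `InvariantViolation`, `BudgetExhausted`) are hypothetical illustrations, not the repository's actual API.

```python
# Hypothetical sketch of constraints enforced by construction:
# the agent loop itself caps steps (budget/termination) and checks an
# invariant after every step, regardless of what the reasoning step does.
from dataclasses import dataclass
from typing import Callable


class InvariantViolation(Exception):
    pass


class BudgetExhausted(Exception):
    pass


@dataclass
class GuardedAgent:
    budget: int                       # hard cap on reasoning steps
    invariant: Callable[[int], bool]  # must hold after every step
    step: Callable[[int], int]        # pluggable "reasoning" step

    def run(self, state: int) -> int:
        for _ in range(self.budget):       # termination guaranteed by the loop bound
            state = self.step(state)
            if not self.invariant(state):  # invariant checked on every transition
                raise InvariantViolation(state)
            if state == 0:                 # illustrative goal condition
                return state
        raise BudgetExhausted(state)


agent = GuardedAgent(budget=10, invariant=lambda s: s >= 0, step=lambda s: s - 1)
print(agent.run(5))  # counts 5 down to 0 within budget -> 0
```

A misbehaving `step` cannot loop forever or drive the state out of bounds; it can only trigger an exception, which is the point of enforcing guarantees structurally rather than via instructions to the model.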
