Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
-
Updated
Feb 3, 2026 - Python
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
DeepTeam is a framework to red team LLMs and LLM systems.
The fastest Trust Layer for AI Agents
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
The Open Source Firewall for LLMs. A self-hosted gateway to secure and control AI applications with powerful guardrails.
A TypeScript library providing a set of guards for LLM (Large Language Model) applications
LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.
LLM prompt injection detection for Go applications
Offical repository for NeurIPS 2025 paper "From Judgment to Interference: Early Stopping LLM Harmful Outputs via Streaming Content Monitoring".
Engineered to help red teams and penetration testers exploit large language model AI solutions vulnerabilities.
Veil Armor is an enterprise-grade security framework for Large Language Models (LLMs) that provides multi-layered protection against prompt injections, jailbreaks, PII leakage, and sophisticated attack vectors.
Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT
OpenClaw plugin for Prisma AIRS from Palo Alto Networks
User prompt attack detection system
Example of running last_layer with FastAPI on vercel
CLI tool for testing production safety controls in LLM/RAG apps - prompt injection, data leakage, hallucinations, cost vulnerabilities
Privacy-first proxy that automatically detects and masks sensitive data before it reaches AI models without compromising latency or SDK capabilities!
A local-first C# reference for intent routing with deterministic guardrails and constrained LLM usage.
The Self-Hosted AI Firewall & Gateway. Drop-in guardrails for LLMs running entirely on CPU. Blocks jailbreaks, enforces policies, and ensures compliance in real-time
Add a description, image, and links to the llm-guardrails topic page so that developers can more easily learn about it.
To associate your repository with the llm-guardrails topic, visit your repo's landing page and select "manage topics."