The Security Toolkit for LLM Interactions
-
Updated
Dec 15, 2025 - Python
The Security Toolkit for LLM Interactions
a security scanner for custom LLM applications
HacxGPT CLI — Open-source command-line interface for unrestricted AI model access with multi-provider support, prompt injection research capabilities, configurable API endpoints, Termux/Linux/Windows compatibility, and Rich terminal UI for security research and red-team evaluation
KawaiiGPT — Open-source LLM gateway accessing DeepSeek, Gemini, and Kimi-K2 through reverse-engineered Pollinations API with no API keys required, built-in prompt injection capabilities for security research, Termux/Linux native support, and Rich console interface
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
Open-source Python, TypeScript, and Go SAST with dead code detection. Finds secrets, exploitable flows, and AI regressions. VS Code extension, GitHub Action, and MCP server for AI agents.
Self-hardening firewall for large language models
Dropbox LLM Security research code and results
Security toolkit for AI agents. Scan your machine for dangerous skills and MCP configs, monitor for supply chain attacks, test prompt injection resistance, and audit live MCP servers for tool poisoning.
Advanced prompt injection defense system for AI agents. Multi-language detection, severity scoring, and security auditing.
Static security scanner for LLM agents — prompt injection, MCP config auditing, taint analysis. 49 rules mapped to OWASP Agentic Top 10 (2026). Works with LangChain, CrewAI, AutoGen.
The Open Source Firewall for LLMs. A self-hosted gateway to secure and control AI applications with powerful guardrails.
AI Security Platform: Defense (61 Rust engines + Micro-Model Swarm) + Offense (39K+ payloads)
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
Code scanner to check for issues in prompts and LLM calls
The Anti-Virus for AI Artifacts & RAG Firewall. A static analysis tool scanning Models and Notebooks for RCE, Datasets and RAG docs for Data Poisoning, PII, and Prompt Injections. Secure your AI Supply Chain.
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."