This repository provides a benchmark for prompt Injection attacks and defenses
-
Updated
Jul 16, 2025 - Python
This repository provides a benchmark for prompt Injection attacks and defenses
Manual Prompt Injection / Red Teaming Tool
LLM Security Platform.
LLM Security Project with Llama Guard
Latest AI Jailbreak Payloads & Exploit Techniques for GPT, QWEN, and all LLM Models
PITT is an open‑source, OWASP‑aligned LLM security scanner that detects prompt injection, data leakage, plugin abuse, and other AI‑specific vulnerabilities. Supports 90+ attack techniques, multiple LLM providers, YAML‑based rules, and generates detailed HTML/JSON reports for developers and security teams.
LLM Security Platform Docs
Client SDK to send LLM interactions to Vibranium Dome
🔍 Discover LLM jailbreaks, prompt injections, and AI vulnerabilities to understand and explore AI security risks in this curated resource.
Prompt Engineering Tool for AI Models with cli prompt or api usage
FRACTURED-SORRY-Bench: This repository contains the code and data for the creating an Automated Multi-shot Jailbreak framework, as described in our paper.
Add a description, image, and links to the prompt-injection-tool topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection-tool topic, visit your repo's landing page and select "manage topics."