The go-to API for detecting and preventing prompt injection attacks.
-
Updated
Apr 28, 2025 - Jupyter Notebook
8000
The go-to API for detecting and preventing prompt injection attacks.
LMTWT is AI security testing framework for evaluating LLM prompt injection vulnerabilities
Proof of concept demonstrating Unicode injection vulnerabilities using invisible characters to manipulate Large Language Models (LLMs) and AI assistants (e.g., Claude, AI Studio) via hidden prompts or data poisoning. Educational/research purposes only.
Add a description, image, and links to the promptinjection topic page so that developers can more easily learn about it.
To associate your repository with the promptinjection topic, visit your repo's landing page and select "manage topics."