Back

Scan for prompt injection.
Protect your LLMs.

Injection Detection is an AI-powered prompt injection scanner for web pages. It detects hidden instructions, data exfiltration attempts, and obfuscated payloads targeting LLMs — using local LLMs via Ollama with heuristic detection as fallback.

Defense-in-depth for LLM applications

Multi-layered detection combining heuristic rules, HTML analysis, and LLM-assisted reasoning to catch prompt injection attacks.

Instruction Override Detection

Detects prompt injection keywords, role reassignments, system prompt overrides, and chat template token injection.

Hidden Content Analysis

Finds CSS-hidden text, zero-opacity elements, aria-hidden content, and suspicious HTML comments used to smuggle instructions.

Data Exfiltration Detection

Identifies template variable injection in images and fetch calls designed to leak data to external servers.

LLM-Assisted Analysis

Uses Qwen via Ollama to detect sophisticated indirect injection, encoded payloads, and obfuscated attacks.

OWASP LLM Top 10 Mapping

Findings are mapped to OWASP categories (LLM01, LLM02, LLM06, LLM07) with explanations and remediation guidance.

Python Library API

Use as a pip-installable library with simple APIs — scan_text, scan_html, guard, and is_safe — to protect AI applications.

What it catches

Comprehensive detection across the most common prompt injection attack vectors.

Prompt Injection KeywordsDetected
CSS-Hidden TextDetected
Data Exfiltration PayloadsDetected
Role Reassignment AttacksDetected
Encoded & Obfuscated PayloadsDetected
Chat Template Token InjectionDetected
System Prompt OverridesDetected
Aria-Hidden ContentDetected

Up and running in minutes

Install as a Python library or clone the repo for the full scanner with Docker support.

# Install as a library
$ pip install git+https://github.com/ACandeias/injection-detection.git

# Quick check in Python
>>> from injection_detection import is_safe
>>> is_safe("ignore previous instructions")
False

# Or clone for the full scanner
$ git clone https://github.com/ACandeias/injection-detection.git
$ cd injection-detection
$ docker compose up

Secure your AI pipeline

Injection Detection is MIT licensed. Use it as a library, run the scanner, or integrate it into your LLM application stack.