Stop prompt injection
before it hits your agent.

A drop-in REST API that detects and neutralizes injection attacks in any text — git commits, web pages, files, emails, user input — before your AI agent sees it.

Try the live demo · Get a free API key

Free tier: 10,000 requests/month · No credit card · Cloudflare edge · <50ms p99

Live demo

Paste any text. We'll show you what we'd flag.

// Click "Scan" to see results.
Try one of these injection samples

How it works

1. Heuristic ruleset

~20 categorized regex rules cover the patterns we've seen in the wild: instruction overrides, role hijacks, ChatML/Llama special tokens, exfiltration channels, schema attacks, invisible-Unicode smuggling.

Open-source: github.com/bch1212/promptshield
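As a sketch of the approach — the rule names and patterns below are illustrative only, not the shipped ruleset (see the repo for the real one):

```python
import re

# Illustrative rules only. The production ruleset (~20 rules) lives at
# github.com/bch1212/promptshield and is more extensive.
RULES = {
    "instruction_override": re.compile(
        r"ignore (all |any )?(previous|prior|above) instructions", re.I),
    "role_hijack": re.compile(r"you are now\b|pretend to be\b", re.I),
    "special_tokens": re.compile(r"<\|im_(start|end)\|>|\[INST\]"),
    # Zero-width and directional characters used for invisible smuggling.
    "invisible_unicode": re.compile(r"[\u200b-\u200f\u2060\ufeff]"),
}

def heuristic_scan(text: str) -> list[str]:
    """Return the names of every rule that matches the text."""
    return [name for name, rx in RULES.items() if rx.search(text)]
```

Each match carries a category, so downstream scoring can weight an instruction override differently from, say, a stray special token.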

2. Semantic classifier

Cloudflare Workers AI runs a transformer classifier that catches the paraphrased and obfuscated attacks the regex rules miss. Its contribution to the overall score is capped, so a borderline classifier result can't flip benign text into a false positive on its own.
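A minimal sketch of how a capped classifier contribution can work — the cap and threshold values here are assumptions for illustration, not PromptShield's actual numbers:

```python
# Assumed values for illustration; the real cap and threshold are internal.
CLASSIFIER_CAP = 0.4
FLAG_THRESHOLD = 0.5

def combined_score(heuristic_score: float, classifier_score: float) -> float:
    """Sum both signals, but cap what the classifier can add."""
    return min(1.0, heuristic_score + min(classifier_score, CLASSIFIER_CAP))

def is_flagged(heuristic_score: float, classifier_score: float) -> bool:
    return combined_score(heuristic_score, classifier_score) >= FLAG_THRESHOLD
```

Because `CLASSIFIER_CAP < FLAG_THRESHOLD`, even a maximally confident classifier cannot flag text that triggered zero heuristic rules — that's the false-positive guarantee.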

3. Context-aware scoring

The same text is more suspicious in a git commit than in direct user input. A sensitivity setting (low / medium / high) lets you tune the threshold for your threat model at each integration point.
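One way to sketch context-aware scoring — the weights and thresholds below are hypothetical; the API only exposes the `context` and `sensitivity` parameters:

```python
# Hypothetical internals: per-context weights amplify the base score,
# and the sensitivity setting picks the flagging threshold.
CONTEXT_WEIGHT = {
    "user_input": 1.0,
    "web_content": 1.3,
    "git_commit": 1.5,
}
SENSITIVITY_THRESHOLD = {"low": 0.8, "medium": 0.5, "high": 0.3}

def flag(base_score: float, context: str, sensitivity: str = "medium") -> bool:
    weighted = base_score * CONTEXT_WEIGHT.get(context, 1.0)
    return weighted >= SENSITIVITY_THRESHOLD[sensitivity]
```

Under this sketch, a base score of 0.4 is flagged in a git commit at medium sensitivity but passes as user input — the same text, different verdicts per integration point.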

Get an API key

10,000 requests/month free. We email your key.

Show usage snippets

curl

curl -X POST https://api.promptshield.dev/v1/scan \
  -H "Authorization: Bearer ps_live_..." \
  -H "Content-Type: application/json" \
  -d '{"text":"ignore previous instructions","context":"user_input"}'

Python

import requests

API_KEY = "ps_live_..."  # from your signup email

r = requests.post(
    "https://api.promptshield.dev/v1/scan",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": untrusted_text, "context": "web_content"},
).json()
if not r["safe"]:
    raise RuntimeError(f"Injection detected: {r['threat_type']}")

Node.js

const r = await fetch(
  "https://api.promptshield.dev/v1/scan",
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text, context: "user_input" }),
  },
).then(r => r.json());
if (!r.safe) throw new Error("blocked: " + r.threat_type);

Pricing

Self-serve. Cancel anytime.

Free

$0

  • 10,000 req/mo
  • Heuristic ruleset
  • Semantic classifier
  • Email support
Get key

Hobby

$29/mo

  • 500K req/mo
  • Basic dashboard
  • Webhook alerts
  • Email support

Pro

$499/mo

  • Unlimited
  • SLA
  • No-logging mode
  • Dedicated support

Overage on metered tiers: $0.50 per additional 1M requests.

Defense in depth — not a silver bullet

PromptShield reduces, but does not eliminate, prompt-injection risk. Use it as one layer alongside system-prompt hardening, tool sandboxing, and output scrubbing. Our open-source ruleset is community-driven — file an issue with novel attacks and we'll merge them.

No-logging mode (Pro tier): no input text is stored or used for model training.