Enterprise-grade AI security integrations for various AI platforms using Zscaler AI Guard Detection as a Service (DAS).
This repository contains integrations that enable runtime AI security by sending AI traffic to Zscaler AI Guard for inspection. AI Guard provides comprehensive protection against:
- Prompt Injection - Malicious attempts to manipulate AI behavior
- Data Loss Prevention (DLP) - Sensitive data exposure (PII, secrets, credentials)
- Toxicity - Harmful or inappropriate content
- Malicious URLs - Links to phishing, malware, or blocked domains
- Gibberish - Encoded or nonsensical content
- Off-Topic - Content outside allowed scope
- Competition - Questions about competitors
AI Guard uses a DAS pattern where each AI application integrates independently:
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ AI App 1 │ │ AI App 2 │ │ AI App 3 │
│ (Claude Code)│ │ (Azure APIM) │ │ (LangChain) │
└──────┬───────┘ └──────┬───────┘ └──────┬───────┘
│ │ │
└─────────────────┼─────────────────┘
▼
┌─────────────┐
│ AI Guard │
│ DAS API │
└─────────────┘
Benefits:
- ✅ No proxy infrastructure required
- ✅ Each app integrates independently
- ✅ No single point of failure
- ✅ Platform-specific optimizations
- ✅ Scales naturally with applications
See ARCHITECTURE.md for detailed explanation.
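In the DAS pattern above, each app embeds a small client and calls it inline: once before a prompt reaches the model, and once after a response comes back. A minimal Python sketch follows; the endpoint URL, payload fields, and response shape are illustrative assumptions, not the real AI Guard API — see the per-platform guides for the actual SDK calls.

```python
import json
from urllib import request

class AIGuardClient:
    """Hypothetical DAS client sketch (endpoint and fields are placeholders)."""

    def __init__(self, api_key, cloud="us1", transport=None):
        self.url = f"https://api.{cloud}.example-aiguard.net/v1/scan"  # placeholder URL
        self.api_key = api_key
        # transport is injectable so the scan call can be stubbed in tests
        self.transport = transport or self._http_post

    def _http_post(self, payload):
        req = request.Request(
            self.url,
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.load(resp)

    def scan(self, content, direction):
        """direction is 'IN' for prompts, 'OUT' for model output."""
        result = self.transport({"content": content, "direction": direction})
        return result.get("action", "ALLOW")

def guarded_prompt(client, prompt):
    """Block the prompt before it ever reaches the model."""
    if client.scan(prompt, "IN") == "BLOCK":
        raise PermissionError("Blocked by AI Guard")
    return prompt
```

Because the scan happens in-process, no proxy sits between the app and the model — which is what makes the pattern scale per application.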
| Platform | Type | Status | Documentation |
|---|---|---|---|
| Claude Code | Hooks (Python) | ✅ Complete | Guide |
| Azure AI Gateway | APIM Policy Fragment | ✅ Complete | Guide |
| Cursor IDE | Hooks (Python) | ✅ Complete | Guide |
| Cline | VS Code hooks (Python) | ✅ Complete | Guide |
| Windsurf | Cascade hooks (Python) | ✅ Complete | Guide |
| GitHub Actions | CI/CD Pipeline (Python) | ✅ Complete | Guide |
| Jenkins | Declarative Pipeline (Python) | ✅ Complete | Guide |
| Google Apigee X | API Proxy | ✅ Complete | Guide |
| Kong Gateway | Lua Plugin | ✅ Complete | Guide |
| LiteLLM | Native Plugin + Python Callback | ✅ Complete | Guide · Official Docs |
| NeMo Guardrails | Library Plugin (Python) | ✅ Complete | Guide |
| Portkey AI Gateway | Native Plugin + SDK Client | ✅ Complete | Guide · Official Docs |
| TrueFoundry | FastAPI Guardrail Server | ✅ Complete | Guide |
| n8n | TypeScript Node | ✅ Complete | Guide |
| LangChain | Callbacks | 🚧 Planned | - |
From the repository root:
| Command | Purpose |
|---|---|
| make help | List all targets |
| make test-compile | Python syntax check for Anthropic, Cursor, Cline, Windsurf hooks and CI scripts (no API) |
| make test-policy-gha | AI Guard scan using github-actions/ (needs AIGUARD_API_KEY in the environment) |
| make test-policy-jenkins | Same scan using Jenkins/declarative-pipeline/ (parity check) |
| make test-cursor / test-cline / test-windsurf | Run local_dev/*/test_*.sh samples (API) |
| make test-all | Compile + both policy scans + hook sample scripts |
Scheduled GitHub Actions: .github/workflows/weekly-integrations.yml runs Mondays at 16:00 UTC (08:00 PST / 09:00 PDT). It runs make test-compile, then two policy scans (the github-actions and Jenkins configs). Set the repository secret AIGUARD_API_KEY, and optionally AIGUARD_CLOUD and AIGUARD_POLICY_ID. For a manual run, use Actions → Weekly integration checks → Run workflow.
For developers using Claude Code CLI:
# 1. Install dependencies
pip install git+https://github.com/zscaler/zscaler-sdk-python.git
# 2. Copy hooks
mkdir -p ~/.claude/hooks/aiguard
cp Anthropic/claude-code-aiguard/hooks/*.py ~/.claude/hooks/aiguard/
# 3. Configure environment
cp Anthropic/claude-code-aiguard/.env.example ~/.claude/hooks/aiguard/.env
# Edit .env with your credentials
# 4. Configure Claude Code
cp Anthropic/claude-code-aiguard/settings.json ~/.claude/settings.json

See Claude Code Guide for details.
For developers using Cursor IDE:
# 1. Install Python dependencies
pip install zscaler-sdk-python python-dotenv
# 2. Copy hooks into your project
mkdir -p .cursor
cp path/to/zguard-ai-integrations/Cursor/.cursor/hooks.json .cursor/hooks.json
cp -r path/to/zguard-ai-integrations/Cursor/hooks Cursor/hooks
# 3. Configure environment
export AIGUARD_API_KEY="your-aiguard-api-key"
# Or copy example.env to .env in your project root
# 4. Restart Cursor

See Cursor Guide for details.
cd Cline && pip install -r requirements.txt
cp Cline/.env.example Cline/.env # or use repo root .env
chmod +x Cline/.clinerules/hooks/UserPromptSubmit Cline/.clinerules/hooks/PreToolUse \
Cline/.clinerules/hooks/PostToolUse Cline/.clinerules/hooks/TaskComplete

See Cline Guide for hook coverage and limitations.
cd Windsurf && pip install -r requirements.txt
# Ensure Windsurf/.env has AIGUARD_API_KEY (.env.example is optional template)
# Open the Windsurf/ folder as the workspace in Windsurf IDE

See Windsurf Guide for pre vs post hook blocking limits.
For CI/CD policy validation before deploying AI applications:
# 1. Add GitHub Secrets
# AIGUARD_API_KEY — Your AI Guard API key
# AIGUARD_CLOUD — Cloud region (optional, default: us1)
# 2. Copy the workflow and scripts into your repo
cp -r path/to/zguard-ai-integrations/github-actions/.github .github
cp -r path/to/zguard-ai-integrations/github-actions/scripts scripts
cp -r path/to/zguard-ai-integrations/github-actions/config config
# 3. Define test cases in config/test-prompts.yaml
# 4. Push — the workflow runs automatically

See GitHub Actions Guide for details.
For policy validation on a Jenkins controller:
# 1. Create credential aiguard-api-key (Secret text) = AI Guard API key
# 2. Copy declarative-pipeline/ into your repo (or use this repo) and point the job at that directory
# 3. Edit config/test-prompts.yaml — same format as GitHub Actions
# 4. Build with Parameters: use FORCE_RUN on first run if no SCM diff is detected

See Jenkins Guide for credential IDs, monitored paths, and optional Vertex deploy stages.
For Azure APIM / AI Gateway deployments:
# 1. Create Named Values in APIM
- AIGUARD-API-KEY (secret)
- AIGUARD-CLOUD (us1/us2/eu1/eu2)
# 2. Create policy fragment
- Name: zscaler-aiguard-scan
- Content: Copy from Microsoft/zscaler-aiguard-scan
# 3. Add to your API policy
<inbound>
<set-variable name="ScanType" value="prompt" />
<include-fragment fragment-id="zscaler-aiguard-scan" />
</inbound>
<outbound>
<set-variable name="ScanType" value="response" />
<include-fragment fragment-id="zscaler-aiguard-scan" />
</outbound>

See Azure AI Gateway Guide for details.
AI Guard provides multiple layers of protection:
User Input
↓
[Layer 1: Prompt Scanning]
↓ Detects: Injection, Toxicity, PII
↓ ALLOW/BLOCK
↓
AI/LLM Processing
↓
[Layer 2: Tool Call Scanning] (if applicable)
↓ Detects: Malicious parameters
↓ ALLOW/BLOCK
↓
External Service Call
↓
[Layer 3: Response Scanning]
↓ Detects: PII leakage, Secrets, Malicious URLs
↓ ALLOW/BLOCK
↓
User Output
User: "Fetch data from https://malicious-site.com"
SCAN 1: User Input
├─ Content: "Fetch data from https://malicious-site.com"
├─ Direction: IN
├─ Result: ALLOW ✅
└─ Reason: Prompt itself is benign
SCAN 2: URL Check
├─ Content: "https://malicious-site.com"
├─ Direction: OUT
├─ Result: BLOCK ❌
└─ Reason: Malicious URL detector triggered
User sees: "Blocked by AI Guard: Malicious URL detected"
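The three layers and the worked example above can be sketched as one pipeline. The detector here is stubbed with a trivial URL blocklist standing in for the DAS API call; note that, as in the trace, the benign prompt passes Layer 1 and the block fires on the outbound tool call.

```python
# Stub detector: stands in for an AI Guard DAS scan call.
# In the example trace, the URL detector triggers on outbound content only.
BLOCKLIST = {"https://malicious-site.com"}

def scan(content, direction):
    if direction == "OUT" and any(url in content for url in BLOCKLIST):
        return "BLOCK"
    return "ALLOW"

def handle_request(prompt, call_llm, call_tool):
    # Layer 1: scan the user prompt (direction IN)
    if scan(prompt, "IN") == "BLOCK":
        return "Blocked by AI Guard: prompt"
    tool_call = call_llm(prompt)  # model decides on a tool call
    # Layer 2: scan tool-call parameters (direction OUT) before execution
    if scan(tool_call, "OUT") == "BLOCK":
        return "Blocked by AI Guard: Malicious URL detected"
    response = call_tool(tool_call)
    # Layer 3: scan the final response before it reaches the user
    if scan(response, "OUT") == "BLOCK":
        return "Blocked by AI Guard: response"
    return response

result = handle_request(
    "Fetch data from https://malicious-site.com",
    call_llm=lambda p: "https://malicious-site.com",
    call_tool=lambda t: "fetched data",
)
# result == "Blocked by AI Guard: Malicious URL detected"
```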
- Log in to the AI Guard Console: https://admin.{cloud}.zseclipse.net
- Navigate to Private AI Apps → App API Keys
- Create or copy your API key
- Note your cloud environment (us1, us2, eu1, eu2)
- Go to Policies in AI Guard Console
- Create or edit a policy
- Enable detectors:
- Prompt Detectors: Toxicity, Injection, PII, Secrets, Gibberish
- Response Detectors: PII, Secrets, Malicious URLs, Data Leakage
- Set detector actions:
- BLOCK - Hard block (stops execution)
- DETECT - Soft block (logs but allows)
- Activate the policy
- Note the Policy ID
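The BLOCK/DETECT distinction above is what an integration hook has to honor: BLOCK stops execution, DETECT logs the hit but lets the content through. A minimal sketch, assuming a hypothetical result shape with per-detector actions (the real API response format may differ):

```python
import logging

log = logging.getLogger("aiguard")

def apply_policy(result):
    """Return True if the content may proceed; log soft (DETECT) hits."""
    blocked = [d["name"] for d in result["detectors"] if d["action"] == "BLOCK"]
    detected = [d["name"] for d in result["detectors"] if d["action"] == "DETECT"]
    if detected:
        # DETECT is a soft block: record the hit but allow the content
        log.warning("DETECT (allowed): %s txn=%s", detected, result["txn"])
    if blocked:
        # BLOCK is a hard block: stop execution
        log.error("BLOCKED: %s txn=%s", blocked, result["txn"])
        return False
    return True
```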
For auto-resolution without specifying policy ID:
- Go to Private AI Apps → Applications
- Create or select an application
- Go to App API Keys → Associate your key with the application
- Assign your policy to the application
Every scan generates a transaction ID for correlation:
Claude Code logs: BLOCKED (txn:abc-123)
↓
AI Guard Console: Search txn:abc-123
↓
View: Full scan details, triggered detectors, content samples
Claude Code: ~/.claude/hooks/aiguard/security.log
[2026-01-30 15:30:00] BLOCKED USER INPUT: severity=CRITICAL policy=policy_760 detectors=[toxicity] (txn:abc123...)
Cursor IDE: Cursor/hooks/aiguard.log
[2026-01-30 15:30:00] BLOCKED USER PROMPT: severity=CRITICAL policy=Default_Policy detectors=[toxicity] (txn:abc123...)
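Log lines in the format shown above can be parsed to pull out the transaction ID for console correlation. A small sketch matching that layout (field names are my own labels):

```python
import re

# Matches e.g.:
# [2026-01-30 15:30:00] BLOCKED USER INPUT: severity=CRITICAL policy=policy_760 detectors=[toxicity] (txn:abc123...)
LOG_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\] (?P<verdict>\w+) (?P<subject>[A-Z ]+): "
    r"severity=(?P<severity>\w+) policy=(?P<policy>\S+) "
    r"detectors=\[(?P<detectors>[^\]]*)\] \(txn:(?P<txn>[^)]+)\)"
)

def parse_log_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = LOG_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["detectors"] = [d.strip() for d in rec["detectors"].split(",") if d.strip()]
    return rec
```

The extracted txn value is what you search for in the AI Guard Console to see the full scan details.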
Azure APIM: Azure Monitor / Application Insights
403 Forbidden responses
AI Guard API call duration
Error rates
- Architecture Overview - DAS pattern and design decisions
- Agentic AI Integration - Multi-agent systems
- Setup Summary - Quick reference
- Claude Code - Detailed installation
- Cursor IDE - Cursor hooks integration
- Cline - Cline VS Code hooks
- Windsurf - Windsurf Cascade hooks
- GitHub Actions - CI/CD policy validation
- Jenkins - Declarative pipeline policy validation
- Azure AI Gateway - Azure APIM policy fragment integration
- Google Apigee - Apigee + Vertex AI proxy
- Kong Gateway - Lua plugin and Konnect callout
- LiteLLM - Native plugin (official docs) + SDK callback
- NeMo Guardrails - NVIDIA NeMo library plugin
- Portkey - Native plugin (official docs) + SDK client
- TrueFoundry - FastAPI guardrail server
- n8n - Workflow automation TypeScript node
# Start Claude Code
claude
# Safe prompt - allowed
"List all my Zscaler applications"
# Toxic prompt - blocked
"I hate my coworker"
# → Blocked by AI Guard: toxicity detector

# Safe request - allowed (200 OK)
curl -X POST "https://your-apim.azure-api.net/llm/chat/completions" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "What is cloud computing?"}]}'
# Toxic content - blocked (403 Forbidden)
curl -X POST "https://your-apim.azure-api.net/llm/chat/completions" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "I hate my neighbor and want to punch him badly"}]}'
# → {"error":"ZSCALER AI GUARD SECURITY ALERT: REQUEST BLOCKED","action":"BLOCK","severity":"CRITICAL",...}

Contributions welcome! Particularly interested in:
- Claude Code hooks
- Azure AI Gateway (APIM policy fragment)
- Cursor IDE integration
- Cline VS Code hooks
- Windsurf Cascade hooks
- GitHub Actions CI/CD pipeline
- Jenkins Declarative Pipeline
- Google Apigee X proxy
- Kong Gateway Lua plugin
- LiteLLM proxy callback
- NVIDIA NeMo Guardrails plugin
- Portkey AI Gateway plugin
- TrueFoundry guardrail server
- n8n workflow automation node
- LangChain middleware
- AWS API Gateway integration
- Report security issues: security@zscaler.com
- API key protection: Never commit .env files
- Credential rotation: Rotate keys every 90 days
- Least privilege: Use dedicated API keys per environment
Part of the Zscaler AI Guard integrations repository.