Quick Start Guide

Get started with Rampart in under 5 minutes

1. Get Your API Key

Sign up and generate your API key from the dashboard:

2. Filter User Input Before Sending to LLM

Check prompts for prompt injection, PII, and other threats

# Python example
import requests

response = requests.post(
    "https://rampart.arunrao.com/api/v1/filter",
    headers={"Authorization": "Bearer rmp_live_xxxxx"},
    json={
        "content": user_input,
        "filters": ["prompt_injection", "pii"],
        "user_id": "user_123"
    }
)

result = response.json()

# Check if content is safe
if not result["is_safe"]:
    print("⚠️ Threat detected:", result["threats"])
    print("Details:", result["prompt_injection"])
else:
    # Safe to send to your LLM
    print("✓ Content is safe - proceed to LLM")
    llm_response = call_your_llm(user_input)

3. Or Use the Secured LLM Proxy (Optional)

Automatic security checks + LLM calls in one request

# Combined filtering + LLM call
response = requests.post(
    "https://rampart.arunrao.com/api/v1/llm/complete",
    headers={"Authorization": "Bearer rmp_live_xxxxx"},
    json={
        "prompt": user_input,
        "model": "gpt-4",
        "provider": "openai",
        "user_id": "user_123"
    }
)

result = response.json()

if result["blocked"]:
    print("⚠️ Request blocked:", result["reason"])
else:
    print("✓ LLM Response:", result["response"])
    print("Security checks:", result["security_checks"])

🎉 That's it! Your LLM calls are now protected

All requests are automatically scanned for prompt injection, data exfiltration, and policy violations.

What's Next?

Configure Policies

Set up custom security policies and compliance templates

Monitor Activity

View traces, security incidents, and cost analytics