Waclaude

LLM Security Guardrails

Comprehensive documentation for integrating Waclaude security guardrails into your LLM applications.

Quick Start

Get started in 5 minutes

1. Installation

# Install the Waclaude Python SDK
pip install waclaude

# Or using npm for Node.js
npm install @obscurelabs/waclaude

2. Authentication

Get your API key from the ObscureLabs dashboard and set it as an environment variable:

export WACLAUDE_API_KEY="your-api-key-here"

3. Basic Usage

Replace your existing LLM calls with Waclaude-protected endpoints:

import waclaude

# Initialize the client
client = waclaude.Client()

# Secure LLM call with guardrails
response = client.complete(
    messages=[{"role": "user", "content": "Help me write a secure function"}],
    model="claude-3-sonnet",
    guardrails=["prompt_injection", "secret_detection"]
)

print(response.content)

Core Features

Comprehensive protection for your AI applications

Prompt Injection Detection

Advanced ML models detect sophisticated prompt injection attempts in real time.

client.complete(
  messages=messages,
  guardrails=["prompt_injection"],
  injection_threshold=0.8
)
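The injection_threshold parameter gates requests on a detector score. As an illustrative sketch (not the Waclaude implementation), assume the detector produces a score in [0, 1] and requests at or above the threshold are blocked:

```python
# Illustrative threshold gate: should_block and the score values here are
# hypothetical, used only to show how injection_threshold behaves.
def should_block(injection_score: float, threshold: float = 0.8) -> bool:
    """Return True when the detector's score meets the block threshold."""
    return injection_score >= threshold

print(should_block(0.12))  # benign prompt, low score: allowed
print(should_block(0.91))  # likely injection, high score: blocked
```

Lowering the threshold catches more borderline prompts at the cost of more false positives; raising it does the opposite.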

Secret Scanning

Continuous monitoring for exposed API keys, passwords, and sensitive data.

client.complete(
  messages=messages,
  guardrails=["secret_detection"],
  redact_secrets=True
)
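With redact_secrets=True, matched secrets are replaced before the text leaves your application. The pass below is a minimal local sketch of that idea; the patterns and placeholder strings are assumptions for illustration, not Waclaude's actual detectors:

```python
import re

# Hypothetical patterns standing in for the service's secret detectors.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"(?i)password\s*=\s*\S+"), "password=[REDACTED]"),
]

def redact_secrets(text: str) -> str:
    """Replace every matched secret with a placeholder."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact_secrets("My key is sk-abcdefghijklmnopqrstuv"))
```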

Content Filtering

Customizable content policies for harmful, biased, or inappropriate content.

client.complete(
  messages=messages,
  guardrails=["content_filter"],
  policy="enterprise"
)

Configuration

Customize Waclaude for your use case

Environment Variables

# Required
WACLAUDE_API_KEY=your-api-key

# Optional
WACLAUDE_ENDPOINT=https://api.obscurelabs.com/waclaude
WACLAUDE_TIMEOUT=30
WACLAUDE_RETRY_ATTEMPTS=3
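A client might resolve these variables roughly as follows, applying the documented defaults when the optional ones are unset. This is a sketch of assumed behavior, not the SDK's internals:

```python
import os

def load_settings() -> dict:
    """Read Waclaude settings from the environment with documented defaults."""
    return {
        # Required: raises KeyError if WACLAUDE_API_KEY is not set.
        "api_key": os.environ["WACLAUDE_API_KEY"],
        # Optional, with the defaults shown above.
        "endpoint": os.environ.get(
            "WACLAUDE_ENDPOINT", "https://api.obscurelabs.com/waclaude"
        ),
        "timeout": int(os.environ.get("WACLAUDE_TIMEOUT", "30")),
        "retry_attempts": int(os.environ.get("WACLAUDE_RETRY_ATTEMPTS", "3")),
    }
```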

Guardrail Configuration

Configure detection thresholds and policies for each guardrail:

config = {
    "prompt_injection": {
        "enabled": True,
        "threshold": 0.7,
        "action": "block"  # or "warn"
    },
    "secret_detection": {
        "enabled": True,
        "patterns": ["api_key", "password", "token"],
        "redact": True
    },
    "content_filter": {
        "enabled": True,
        "policy": "strict",
        "categories": ["harmful", "bias", "nsfw"]
    }
}

client = waclaude.Client(config=config)
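One plausible way such a config is resolved is by layering your overrides on top of per-guardrail defaults, so you only specify the keys you want to change. The merge below is an assumption for illustration; the SDK may resolve config differently:

```python
# Hypothetical defaults mirroring the config shown above.
DEFAULTS = {
    "prompt_injection": {"enabled": True, "threshold": 0.7, "action": "block"},
    "secret_detection": {"enabled": True, "redact": True},
    "content_filter": {"enabled": True, "policy": "strict"},
}

def merge_config(overrides: dict) -> dict:
    """Overlay user overrides on the per-guardrail defaults."""
    merged = {}
    for name, defaults in DEFAULTS.items():
        merged[name] = {**defaults, **overrides.get(name, {})}
    return merged

# Only "action" changes; "threshold" keeps its default of 0.7.
print(merge_config({"prompt_injection": {"action": "warn"}}))
```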

Error Handling

Handle security violations and API errors gracefully:

try:
    response = client.complete(messages=messages)
except waclaude.PromptInjectionError as e:
    print(f"Prompt injection detected: {e.details}")
except waclaude.SecretDetectedError as e:
    print(f"Secret found and redacted: {e.secret_type}")
except waclaude.ContentViolationError as e:
    print(f"Content policy violation: {e.category}")

Need Help?

Our team is here to help you integrate Waclaude successfully. Reach out if you have questions or need assistance.