
# Environment Variables

GuardClaw is configured through environment variables stored in a `.env` file in your working directory. All variables can be set via `guardclaw config set` or by editing `.env` directly.

## Quick reference

```bash
# View all current settings
guardclaw config show

# Set a variable
guardclaw config set <KEY> <VALUE>
```

## LLM Backend

| Variable | Default | Description |
|---|---|---|
| `SAFEGUARD_BACKEND` | `lmstudio` | Active LLM backend for risk scoring |
| `LMSTUDIO_URL` | `http://localhost:1234/v1` | LM Studio / OpenAI-compatible API URL |
| `LMSTUDIO_MODEL` | `auto` | Model name (or `auto` to use whatever is loaded) |
| `LMSTUDIO_API_KEY` | | API key for LM Studio (optional) |
| `LLM_API_KEY` | | Generic LLM API key |
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API URL |
| `OLLAMA_MODEL` | `llama3` | Ollama model name |
| `ANTHROPIC_API_KEY` | | Anthropic API key (for the `anthropic` backend) |
| `OPENROUTER_API_KEY` | | OpenRouter API key |
| `OPENROUTER_MODEL` | | OpenRouter model ID (e.g. `openai/gpt-4o`) |
| `OPENAI_API_KEY` | | OpenAI API key |
| `GEMINI_API_KEY` | | Google Gemini API key |
| `KIMI_API_KEY` | | Kimi (Moonshot) API key |
| `MINIMAX_API_KEY` | | MiniMax API key |

### Backend values

| Value | Description |
|---|---|
| `lmstudio` | Local LLM via LM Studio (recommended) |
| `ollama` | Local LLM via Ollama |
| `anthropic` | Claude API |
| `openrouter` | OpenRouter (400+ models) |
| `minimax` | MiniMax API |
| `built-in` | Apple Silicon MLX models |
| `fallback` | Rule-based only, no LLM |
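For instance, to point risk scoring at a local Ollama instance instead of LM Studio, the corresponding `.env` lines would look like this (the model name is just an example):

```ini
# Use Ollama instead of LM Studio for risk scoring
SAFEGUARD_BACKEND=ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```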

## Cloud Judge

| Variable | Default | Description |
|---|---|---|
| `CLOUD_JUDGE_ENABLED` | `false` | Enable cloud-based escalation |
| `CLOUD_JUDGE_MODE` | `local-only` | Evaluation mode |
| `CLOUD_JUDGE_PROVIDER` | | Cloud provider for escalation |

### Evaluation modes

| Mode | Description |
|---|---|
| `local-only` | All evaluation on the local LLM |
| `mixed` | Local first; risky calls escalate to the cloud |
| `cloud-only` | All evaluation via the cloud API |

### Cloud providers

| Provider | Description |
|---|---|
| `claude` | Anthropic Claude (OAuth or API key) |
| `openai-codex` | OpenAI Codex / ChatGPT (OAuth or API key) |
| `minimax` | MiniMax (OAuth or API key) |
| `kimi` | Kimi / Moonshot (API key) |
| `openrouter` | OpenRouter (API key) |
| `gemini` | Google Gemini (API key) |
| `openai` | OpenAI (API key) |
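Putting the three variables together, a sketch of a mixed-mode setup that escalates to Kimi might look like this (the API key value is a placeholder):

```ini
# Escalate risky calls from the local LLM to Kimi
CLOUD_JUDGE_ENABLED=true
CLOUD_JUDGE_MODE=mixed
CLOUD_JUDGE_PROVIDER=kimi
KIMI_API_KEY=your-moonshot-api-key
```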

## Approval Policy

| Variable | Default | Description |
|---|---|---|
| `GUARDCLAW_APPROVAL_MODE` | `auto` | How to respond to risky tool calls |
| `GUARDCLAW_AUTO_ALLOW_THRESHOLD` | `6` | Scores at or below this are auto-allowed |
| `GUARDCLAW_ASK_THRESHOLD` | `8` | Scores at or below this (in `prompt` mode) trigger user confirmation |
| `GUARDCLAW_AUTO_BLOCK_THRESHOLD` | `9` | Scores at or above this are auto-blocked |

### Approval modes

| Mode | Description |
|---|---|
| `auto` | Score, warn the agent, and flag risky calls (recommended) |
| `prompt` | Pause execution and ask the user for approval |
| `monitor-only` | Score and log only, no intervention |

### Threshold behavior

Risk scores range from 1 to 10. The three thresholds control the decision flow:

```
Score:  1 ──────────── 6 ──────── 8 ── 9 ──────── 10
        │  auto-allow  │   ask    │   auto-block  │
```

- Score ≤ `GUARDCLAW_AUTO_ALLOW_THRESHOLD` (default 6): automatically allowed
- Score above that but ≤ `GUARDCLAW_ASK_THRESHOLD` (default 8): in `prompt` mode, asks the user; in `auto` mode, warns the agent
- Score ≥ `GUARDCLAW_AUTO_BLOCK_THRESHOLD` (default 9): automatically blocked
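The decision flow can be sketched as a small shell function. This is illustrative only; `decide` is not a GuardClaw command, and the defaults match the tables above:

```bash
# Read thresholds from the environment, falling back to the documented defaults.
allow=${GUARDCLAW_AUTO_ALLOW_THRESHOLD:-6}
block=${GUARDCLAW_AUTO_BLOCK_THRESHOLD:-9}

# decide SCORE -> prints allow | ask | block
decide() {
  local score=$1
  if   (( score >= block )); then echo "block"   # auto-blocked
  elif (( score >  allow )); then echo "ask"     # prompt mode asks; auto mode warns
  else                            echo "allow"   # auto-allowed
  fi
}

decide 3   # allow
decide 7   # ask
decide 10  # block
```

Because the block check runs first, any score at or above the auto-block threshold is blocked even though it also exceeds the ask threshold.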

## Connections

| Variable | Default | Description |
|---|---|---|
| `BACKEND` | `auto` | Gateway connection mode |
| `OPENCLAW_TOKEN` | | OpenClaw gateway authentication token |
| `QCLAW_TOKEN` | | Qclaw gateway authentication token |
| `PORT` | `3002` | GuardClaw server port |

### Gateway modes

| Mode | Description |
|---|---|
| `auto` | Connect to any detected gateway |
| `openclaw` | Connect to the OpenClaw gateway only |
| `qclaw` | Connect to the Qclaw gateway only |
| `nanobot` | Connect to the nanobot gateway only |
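For example, to pin GuardClaw to a single gateway instead of auto-detecting, you might set (the token value is a placeholder):

```ini
# Connect only to the Qclaw gateway
BACKEND=qclaw
QCLAW_TOKEN=your-qclaw-token
```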

## Notifications

| Variable | Default | Description |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | | Telegram bot token for alert notifications |
| `TELEGRAM_CHAT_ID` | | Telegram chat ID to send alerts to |
| `DISCORD_WEBHOOK_URL` | | Discord webhook URL for alert notifications |
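A sketch of enabling Telegram alerts; both values are placeholders:

```ini
TELEGRAM_BOT_TOKEN=123456:your-bot-token
TELEGRAM_CHAT_ID=-1001234567890
```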

## .env file location

The `.env` file is loaded from your current working directory when starting GuardClaw. This allows per-project configuration.

```bash
# In project A
cd ~/projects/project-a
guardclaw start
# Reads ~/projects/project-a/.env

# In project B
cd ~/projects/project-b
guardclaw start
# Reads ~/projects/project-b/.env
```

## Example .env

```ini
# LLM Backend
SAFEGUARD_BACKEND=lmstudio
LMSTUDIO_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=auto

# Cloud Judge (mixed mode)
CLOUD_JUDGE_ENABLED=true
CLOUD_JUDGE_MODE=mixed
CLOUD_JUDGE_PROVIDER=claude

# Approval Policy
GUARDCLAW_APPROVAL_MODE=auto
GUARDCLAW_AUTO_ALLOW_THRESHOLD=6
GUARDCLAW_ASK_THRESHOLD=8
GUARDCLAW_AUTO_BLOCK_THRESHOLD=9

# Server
PORT=3002

# Agent Connections
OPENCLAW_TOKEN=eyJ0eXAiOiJKV1Qi...
```

Released under the MIT License.