LLM Backends

GuardClaw supports multiple LLM backends for risk scoring. All cloud backends use the OpenAI-compatible API format.

Quick Comparison

| Backend | Privacy | Speed | Cost | Best for |
|---|---|---|---|---|
| Built-in MLX | 100% local | Fast (Apple Silicon) | Free | Mac users, easiest setup |
| LM Studio | 100% local | Fast | Free | Advanced local model control |
| Ollama | 100% local | Fast | Free | Linux/Docker environments |
| OpenRouter | Cloud | Very fast | Pay-per-use | Best model variety |
| Anthropic Claude | Cloud | Fast | Pay-per-use | Highest accuracy |
| MiniMax | Cloud | Fast | Pay-per-use | Cost-effective cloud option |
| fallback | Local | Instant | Free | No LLM available, rule-only |

Selecting a Backend

```bash
guardclaw config llm
```

The interactive picker shows available options, lets you enter API keys, and tests the connection before saving.
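For scripted or headless setups, the same `guardclaw config set` command shown in the fallback section can write the backend choice directly; the backend value here is an assumption based on the identifiers in the table above, and no connection test is performed:

```bash
# Non-interactive alternative to the picker (sketch).
# "ollama" is assumed to match the backend names in the comparison table.
guardclaw config set SAFEGUARD_BACKEND ollama
```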

Fallback Mode

If no LLM is available, GuardClaw uses deterministic rule-based scoring:

```bash
guardclaw config set SAFEGUARD_BACKEND fallback
```

Fallback mode:

  • ✅ Instant (< 1ms per call)
  • ✅ Zero memory usage
  • ✅ Catches known-dangerous patterns reliably
  • ❌ No context understanding
  • ❌ Pattern-matching only — may miss novel attacks

Use fallback as a last resort or during LLM downtime.
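The trade-offs above follow from how pattern-matching works. A minimal sketch of deterministic rule-based scoring in shell, where the patterns and risk labels are purely illustrative and not GuardClaw's actual rule set:

```bash
# Illustrative sketch of rule-based scoring; these patterns and labels
# are hypothetical, not GuardClaw's real rules.
score_command() {
  case "$1" in
    *'rm -rf /'*|*'mkfs'*)       echo "high" ;;    # known-dangerous patterns
    *'sudo '*|*'curl '*'| sh'*)  echo "medium" ;;  # warrants a closer look
    *)                           echo "low" ;;     # nothing matched
  esac
}

score_command 'rm -rf / --no-preserve-root'   # → high
score_command 'ls -la'                        # → low
```

This is why fallback is instant and deterministic, and also why it cannot flag a novel attack that matches none of its patterns.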

Released under the MIT License.