kodachi-claw
Kodachi hardened AI runtime with embedded anonymity controls
Version: 9.0.1 | Size: 28.5MB | Author: theonlyhennygod
License: Proprietary - Kodachi OS | Website: https://github.com/WMAL/Linux-Kodachi
File Information
| Property | Value |
|---|---|
| Binary Name | kodachi-claw |
| Version | 9.0.1 |
| Build Date | 2026-02-26T08:01:57.598726320Z |
| Rust Version | 1.82.0 |
| File Size | 28.5MB |
Features
- Embedded Arti-first Tor anonymity runtime
- Multi-circuit load balancing across Tor instances
- Isolated per-request circuit assignment
- Single-circuit mode for consistent identity
- Automatic MAC address randomization
- Hostname and timezone randomization
- IP and DNS leak verification
- OPSEC filter (outbound identity leak redaction)
- HMAC-SHA256 tamper-evident audit logging
- Authentication via online-auth integration
- Integrity checking via integrity-check integration
- Permission monitoring via permission-guard integration
- Centralized logging via logs-hook integration
- 12+ AI providers (OpenAI, Anthropic, Gemini, Ollama, OpenRouter, etc.)
- 14 communication channels (Telegram, Discord, Slack, Matrix, etc.)
- Hybrid memory (SQLite FTS5 + vector cosine similarity)
- ChaCha20-Poly1305 encrypted secret store
- Sandbox backends (Landlock, Bubblewrap, Firejail, Docker)
- Cron job scheduler with security allowlists
- Gateway server with rate limiting and idempotency
- Hardware peripherals (STM32, RPi GPIO, robotics)
- Configurable circuit assignment strategies
- Identity restoration on shutdown
- Retry and timeout controls for network operations
- JSON output mode for automation
Security Features
| Category | Description |
|---|---|
| Authentication | In-process Kodachi auth with auto-recovery, device ID verification |
| Encryption | ChaCha20-Poly1305 AEAD, TLS 1.3 via Arti, HMAC-SHA256 webhooks |
| Input validation | Command allowlists, path sanitization, SSRF protection |
| Rate limiting | Sliding-window rate limiting with configurable thresholds |
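The HMAC-SHA256 webhook scheme above can be sketched with stock OpenSSL. This is an illustration of the general technique, not kodachi-claw's actual wire format; the secret, body, and comparison flow here are assumptions:

```shell
# Illustrative HMAC-SHA256 webhook verification (not kodachi-claw source).
secret='example-webhook-secret'     # assumed shared secret
body='{"event":"ping"}'             # raw request body, signed byte-for-byte

# Sender side: compute the signature over the raw body.
sig=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "signature: $sig"

# Receiver side: recompute and compare before trusting the payload.
expected=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
if [ "$sig" = "$expected" ]; then
  echo "webhook verified"
else
  echo "signature mismatch" >&2
fi
```

Any change to the body or secret produces a different 64-hex-character digest, which is what makes the audit log tamper-evident.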
System Requirements
| Requirement | Value |
|---|---|
| OS | Linux (Kodachi OS, Debian-based distributions) |
| Privileges | Root/sudo required for MAC/hostname/timezone randomization |
| Dependencies | macchanger (MAC randomization), ip (network interface control), hostnamectl or hostname (hostname management), timedatectl (timezone management), online-auth (authentication service), integrity-check (file integrity verification), permission-guard (permission monitoring), logs-hook (centralized logging) |
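The external tools in the dependencies row can be checked before first run. A minimal preflight sketch (the dependency names come from the table above; the script itself is not part of kodachi-claw):

```shell
# Report which external dependencies from the System Requirements table
# are present on PATH.
missing=0
for dep in macchanger ip hostnamectl timedatectl online-auth integrity-check permission-guard logs-hook; do
  if command -v "$dep" >/dev/null 2>&1; then
    echo "ok: $dep"
  else
    echo "missing: $dep"
    missing=$((missing + 1))
  fi
done
echo "$missing dependency(ies) missing"
```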
Global Options
| Flag | Description |
|---|---|
| -h, --help | Print help and exit |
| -v, --version | Print version and exit |
| -n, --info | Show detailed program information and exit |
| -e, --examples | Show comprehensive command examples and exit |
| --json | Output startup status as compact JSON |
| --json-pretty | Output startup status as pretty JSON |
| --json-human | Output startup status as human-readable JSON |
| --mode <MODE> | Anonymity runtime mode [default: multi-circuit] [possible values: multi-circuit, isolated, single] |
| --tor-instances <N> | Tor pool size (ignored in single mode) [default: 10] |
| --instance-policy <POLICY> | Instance reuse policy [default: reuse] [possible values: reuse, new, mixed] |
| --instance-prefix <PREFIX> | Instance name prefix [default: kodachi-claw-instance] |
| --access-mode <MODE> | Access mode for execution path [default: system] [possible values: system, gateway] |
| --auth-mode <MODE> | Authentication mode [default: auto] [possible values: auto, required] |
| --skip-mac | Skip MAC randomization |
| --skip-hostname | Skip hostname randomization |
| --skip-timezone | Skip timezone randomization |
| --skip-identity | Skip all identity randomization |
| --skip-tor | Skip embedded Tor startup |
| --skip-ip-check | Skip IP/Tor verification checks [aliases: --skip-verify] |
| --skip-dns-check | Skip DNS verification checks |
| --skip-anonymity | Skip all anonymity bootstrap behavior |
| --skip-integrity-check | Skip integrity check during preflight |
| --skip-permission-check | Skip permission check during preflight |
| --restore-on-exit | Restore MAC/hostname/timezone state on shutdown |
| --auto-recover-internet | Auto-check and recover internet after identity changes and on exit |
| --skip-auto-recover-internet | Disable auto-recover-internet (overrides --auto-recover-internet) |
| -V, --verbose | Enable verbose logging output |
| -q, --quiet | Suppress all non-error output |
| --no-color | Disable colored output |
| --timeout <SECONDS> | Timeout in seconds for network operations [default: 30] |
| --retry <COUNT> | Number of retries for network operations [default: 3] |
| --circuit-strategy <STRATEGY> | Circuit assignment strategy for multi-circuit mode [default: round-robin] [possible values: round-robin, random, least-used, sticky] |
| --skip-all | Skip all anonymity startup phases except OS authentication |
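A hedged sketch of combining these global options in one invocation. Flag names come from the table above; the particular combination, and placing flags after the subcommand (as in this document's other examples), are illustrative:

```shell
# Illustrative hardened startup: least-used circuit balancing, identity
# restored on exit, longer network timeouts.
sudo kodachi-claw agent \
  --mode multi-circuit --tor-instances 10 \
  --circuit-strategy least-used \
  --restore-on-exit --auto-recover-internet \
  --timeout 60 --retry 5
```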
Commands
| Command | Description |
|---|---|
| onboard | Initialize your workspace and configuration |
| agent | Start the AI agent loop |
| gateway | Start the gateway server (webhooks, websockets) |
| daemon | Start long-running autonomous runtime (gateway + channels + heartbeat + scheduler) |
| service | Manage OS service lifecycle (launchd/systemd user service) |
| tor | Control Tor instances via tor-switch |
| doctor | Run diagnostics for daemon/scheduler/channel freshness |
| status | Show system status (full details) |
| cron | Configure and manage scheduled tasks |
| models | Manage provider model catalogs |
| providers | List supported AI providers (includes kodachi-ai local provider when built with --features kodachi-ai) |
| channel | Manage channels (telegram, discord, slack) |
| integrations | Browse 50+ integrations |
| skills | Manage skills (user-defined capabilities) |
| migrate | Migrate data from other agent runtimes |
| auth | Manage provider subscription authentication profiles |
| hardware | Discover and introspect USB hardware |
| peripheral | Manage hardware peripherals (STM32, RPi GPIO, etc.) |
| recover-internet | Check internet connectivity and recover if broken (invokes health-control) |
| help | Print this message or the help of the given subcommand(s) |
Operational Scenarios
Scenario-oriented workflows generated from the binary's built-in examples (kodachi-claw -e --json).
Scenario 1: AI Agent
Start and interact with the AI agent
Step 1: Interactive session with full anonymity
Expected Output: Tor bootstrapped, identity randomized, agent ready
Note: Requires onboarding first: kodachi-claw onboard
Step 2: Single message mode
Expected Output: Response through Tor-routed connection
Step 3: Specific provider/model
Expected Output: Agent session using Anthropic Claude
Step 4: Local offline model
Expected Output: Agent runs with local model, Tor still active for tools
Step 5: Low temperature for deterministic output
Expected Output: Agent runs with temperature 0.2
Step 6: Hardware peripheral attached
Expected Output: Agent with STM32 board attached
Step 7: Use installed Claude Code CLI (no API key)
Expected Output: Agent uses local Claude Code CLI for inference
Note: Requires claude CLI installed. No API key needed
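The agent steps above can be sketched as invocations. The --provider, --model, and --message flags appear in examples elsewhere in this document; the values shown here are illustrative placeholders, not the binary's exact examples:

```shell
# Interactive session with full anonymity (Step 1)
sudo kodachi-claw agent
# Single message mode (Step 2)
sudo kodachi-claw agent --message "summarize this log"
# Specific provider/model (Step 3); substitute a real model name
sudo kodachi-claw agent --provider anthropic --model <model> --message "review this diff"
```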
Scenario 2: Daemon & Gateway
Long-running services and webhook endpoints
Step 1: Full daemon with all channels
Expected Output: Daemon running: all configured channels active
Note: Listens on all configured channels simultaneously
Step 2: Custom gateway port
Expected Output: Gateway + channels + heartbeat + scheduler running
Step 3: Bind to all interfaces
Expected Output: Daemon bound to 0.0.0.0:9090
Step 4: Gateway-only (webhook/WebSocket)
Expected Output: Gateway accepting webhook requests on :9090
Step 5: Install as systemd service
Expected Output: Service installed with auto-restart on failure
Scenario 3: Setup & Onboarding
First-time configuration and channel management
Step 1: Full wizard (9 steps)
Expected Output: Guided 9-step setup wizard
Note: Run this first before using agent or daemon
Step 2: Quick setup
Expected Output: Config created with OpenRouter provider
Step 3: Quick setup with memory backend
Expected Output: Config created with Anthropic + SQLite memory
Step 4: Quick setup with Claude Code CLI (no API key)
Expected Output: Config created with claude-code provider
Note: No API key needed -- Claude Code handles auth internally
Step 5: Reconfigure channels only
Expected Output: Channel configuration updated
Step 6: Bind Telegram identity
Expected Output: Telegram user bound to allowlist
Scenario 4: Status & Diagnostics
System status, health checks, and diagnostics
Step 1: Full status including MAC, hostname, timezone, IP, auth
Expected Output: Complete system status with identity info
Step 2: Basic status without security/identity info
Expected Output: Config and channel status only
Step 3: JSON status for automation
Expected Output: Pretty-printed JSON envelope with status data
Step 4: Run health diagnostics
Expected Output: Diagnostic report for daemon/scheduler/channels
Step 5: Probe model availability
Expected Output: Available models for the specified provider
Scenario 5: Scheduled Tasks
Configure and manage cron-style scheduled tasks
Step 1: List all scheduled tasks
Expected Output: Table of scheduled tasks with status
Step 2: Run every 6 hours
Expected Output: Task added with cron schedule
Step 3: Weekly with timezone
Expected Output: Task scheduled for Monday 9AM ET
Step 4: One-shot at specific time
Expected Output: One-time task scheduled
Step 5: Every 5 minutes
Expected Output: Interval task added (300s)
Step 6: One-shot after 30 minutes
Expected Output: One-time delayed task scheduled
Step 7: Pause/resume tasks
Expected Output: Task paused/resumed
Scenario 6: Models & Providers
Manage AI model catalogs and providers
Step 1: Refresh model catalog from default provider
Expected Output: Model catalog updated
Step 2: Force refresh from specific provider
Expected Output: OpenAI model catalog force-refreshed
Step 3: List all 12+ supported AI providers
Expected Output: Provider table with active marker
Step 4: Check cached model availability
Expected Output: Model availability from cache
Scenario 7: Channel Management
Configure and manage communication channels
Step 1: List configured channels
Expected Output: Channel status table
Step 2: Start all configured channels
Expected Output: All channels listening
Step 3: Health check all channels
Expected Output: Channel health report
Step 4: Add Telegram channel
Expected Output: Telegram channel configured
Step 5: Remove a channel
Expected Output: Discord channel removed
Step 6: Bind Telegram user ID to allowlist
Expected Output: Telegram user ID bound
Scenario 8: Authentication
Manage provider authentication profiles
Step 1: OAuth login
Expected Output: Browser-based OAuth flow started
Step 2: Device code flow
Expected Output: Device code displayed for authorization
Step 3: Paste API key
Expected Output: API key stored securely
Step 4: Interactive token entry
Expected Output: Token stored in encrypted secret store
Step 5: Refresh access token
Expected Output: Token refreshed successfully
Step 6: List all auth profiles
Expected Output: Auth profile table with active markers
Step 7: Show active profile and token expiry
Expected Output: Profile status with expiration info
Step 8: Remove auth profile
Expected Output: Auth profile removed
Scenario 9: Skills Management
Manage user-defined capabilities
Step 1: List installed skills
Expected Output: Installed skills table
Step 2: Install from GitHub
Expected Output: Skill installed and registered
Step 3: Remove installed skill
Expected Output: Skill removed
Scenario 10: Integrations
Browse and manage service integrations
Step 1: Show GitHub integration details
Expected Output: GitHub integration configuration and status
Step 2: Show Jira integration details
Expected Output: Jira integration configuration and status
Scenario 11: Migration
Import data from other agent runtimes
Step 1: Preview migration without writing
Expected Output: Migration preview with changes listed
Step 2: Import from OpenClaw
Expected Output: Data imported from OpenClaw workspace
Scenario 12: Hardware & Peripherals
Discover and manage hardware devices
Step 1: Enumerate USB devices and known boards
Expected Output: Detected hardware devices
Step 2: Introspect specific device
Expected Output: Device capabilities and firmware info
Step 3: Get chip info
Expected Output: Chip specifications and pinout
Step 4: List configured peripherals
Expected Output: Configured peripheral boards
Step 5: Add STM32 board
Expected Output: Peripheral added to config
Step 6: Flash firmware
Expected Output: Firmware flashed to device
Scenario 13: Service Lifecycle
Install and manage as a system service
Step 1: Install as systemd/launchd service
Expected Output: Service unit installed
Step 2: Start the service
Expected Output: Service started
Step 3: Stop the service
Expected Output: Service stopped
Step 4: Check service status
Expected Output: Service running/stopped status
Step 5: Uninstall the service
Expected Output: Service unit removed
Scenario 14: Tor Instance Control
Control all Tor instances directly from kodachi-claw
Step 1: List all known Tor instances
Expected Output: Table of instance tags and status
Step 2: List instances with current Tor exit IP
Expected Output: Instance table with live Tor IP addresses
Step 3: Start every Tor instance
Expected Output: All configured Tor instances started
Step 4: Stop every Tor instance
Expected Output: All configured Tor instances stopped
Step 5: Delete all non-default Tor instances
Expected Output: Custom Tor instances deleted
Step 6: Delete all instances including default
Expected Output: All Tor instances deleted, including default
Note: Use when you want a full Tor instance reset
Step 7: Automation-friendly Tor instance inventory
Expected Output: JSON envelope with instance and IP details
Scenario 15: Anonymity & Tor Modes
Control Tor instances, circuits, and identity randomization
Step 1: 10 parallel circuits
Expected Output: 10 Arti instances bootstrapped, traffic distributed across circuits
Note: Default mode. Each tool/channel gets a different circuit
Step 2: Namespace isolation via oniux
Expected Output: Namespace-isolated agent with embedded Tor
Note: Requires root or CAP_NET_ADMIN
Step 3: Single circuit (low-resource)
Expected Output: Single Arti instance, minimal memory usage
Step 4: Sticky circuit assignment
Expected Output: Sticky circuit assignment per tool/channel
Note: Strategies: round-robin (default), random, least-used, sticky
Step 5: Random assignment across 5 circuits
Expected Output: Random circuit selection per request
Step 6: Restore identity on exit
Expected Output: Identity restored after session ends
Note: Without this flag, spoofed identity persists after exit
Step 7: Selective identity spoofing
Expected Output: Only timezone randomized, Tor still active
Step 8: Gateway access with required auth
Expected Output: Gateway mode with mandatory authentication
Step 9: Check and recover internet connectivity
Expected Output: Internet connectivity is working / Internet recovered successfully
Note: Invokes health-control recover-internet if connectivity is lost
Step 10: Force recovery even if internet appears working
Expected Output: Internet recovered successfully
Note: Skips initial check, goes straight to health-control recovery
Step 11: Check/recover with JSON output
Expected Output: {status: connected, recovery_needed: false, ...}
Note: Returns JSON envelope with connectivity status and recovery details
Step 12: Auto-recover internet after identity changes
Expected Output: Net check after MAC change, recovery on exit
Note: Checks connectivity after MAC randomization and during shutdown
Step 13: Skip flag overrides auto-recover
Expected Output: Agent runs without auto-recovery (skip wins)
Note: --skip-auto-recover-internet takes precedence over --auto-recover-internet
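The mode and identity controls above map onto the documented global flags. A hedged sketch (flag names are from Global Options; the combinations are illustrative):

```shell
# Default: 10 parallel circuits, one per tool/channel
sudo kodachi-claw agent --mode multi-circuit --tor-instances 10
# Namespace isolation via oniux (root or CAP_NET_ADMIN required)
sudo kodachi-claw agent --mode isolated
# Single circuit, low-resource
sudo kodachi-claw agent --mode single
# Pin each tool/channel to a dedicated circuit
sudo kodachi-claw agent --circuit-strategy sticky
# Undo MAC/hostname/timezone spoofing on shutdown
sudo kodachi-claw agent --restore-on-exit
```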
Scenario 16: Skip Controls
Disable specific startup phases for debugging or testing
Step 1: No Tor, no identity changes
Expected Output: Agent runs without Tor, no identity changes
Note: WARNING: No privacy protection. Local testing only
Step 2: Skip all startup phases
Expected Output: Status with no anonymity bootstrap
Step 3: Quick status without Tor
Expected Output: Status report with auth check only
Step 4: Skip verification checks
Expected Output: Tor starts but IP/DNS not verified
Step 5: Skip preflight checks
Expected Output: Agent starts without preflight verification
Scenario 17: Output & Automation
JSON output modes for scripting and CI/CD
Step 1: Compact JSON for scripting
Expected Output: {"status":"success",...}Step 2: Pretty-printed JSON
Expected Output: Formatted JSON with indentationStep 3: Human-annotated JSON
Expected Output: JSON with human-readable annotationsNote
Also: --json (compact), --json-pretty (indented)
Step 4: Verbose logging
Expected Output: Debug-level log outputStep 5: Suppress non-error output
Expected Output: Only error messages shownStep 6: Custom network settings
Expected Output: 60s timeout, 5 retries, fresh instancesNote
Policies: reuse (default), new, mixed
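A script consuming the compact --json envelope might gate on the status field shown in Step 1. The envelope below is a stand-in (only "status" is shown in this document; other fields are unknown):

```shell
# Stand-in for: out=$(sudo kodachi-claw status --json)
out='{"status":"success"}'
# Extract the "status" field with POSIX sed (no jq dependency assumed).
status=$(printf '%s' "$out" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
if [ "$status" = "success" ]; then
  echo "healthy"
else
  echo "degraded: $status" >&2
fi
```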
Scenario 18: AI Gateway Providers
Route requests through AI gateway proxies (Cloudflare, Vercel, custom OpenAI-compatible endpoints)
Step 1: Use Cloudflare AI Gateway
Expected Output: Request routed through gateway.ai.cloudflare.com/v1 over Tor
Note: Set CLOUDFLARE_API_KEY env var or api_key in config. Supports all Cloudflare-hosted models
Step 2: Use Vercel AI Gateway
Expected Output: Request routed through api.vercel.ai over Tor
Note: Set VERCEL_API_KEY env var or api_key in config
Step 3: Any OpenAI-compatible gateway via custom URL
Expected Output: Request sent to your-gateway.example.com/v1/chat/completions over Tor
Note: Works with vLLM, LiteLLM, Azure OpenAI, any /v1/chat/completions endpoint
Step 4: Anthropic-compatible proxy (corporate/self-hosted)
sudo kodachi-claw agent --provider "anthropic-custom:https://llm-proxy.corp.example.com" --message "review PR"
Note: For proxies that speak the Anthropic Messages API instead of OpenAI format
Step 5: Groq ultra-fast inference gateway
Expected Output: Agent response generated with Groq LPU inference
Note: Set GROQ_API_KEY. Ultra-low latency for supported models
Step 6: Together AI inference gateway
sudo kodachi-claw agent --provider together --model meta-llama/Llama-3-70b-chat-hf --message "analyze"
Note: Set TOGETHER_API_KEY. Supports 100+ open models
Step 7: Fireworks AI inference gateway
sudo kodachi-claw agent --provider fireworks --model accounts/fireworks/models/llama-v3-70b-instruct --message "write tests"
Note: Set FIREWORKS_API_KEY. Optimized for fast open-model inference
Step 8: Onboard with a custom AI gateway
Expected Output: Config created with custom gateway as default provider
Note: The custom URL is stored in config.toml as default_provider
Step 9: List all supported AI gateway providers
Expected Output: Table showing 30+ providers including Cloudflare, Vercel, Groq, Together, Fireworks, Mistral, xAI, and more
Note: Use custom: prefix for unlisted OpenAI-compatible gateways
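The custom: and anthropic-custom: provider prefixes shown in this scenario can be sketched as follows; the URLs are placeholders:

```shell
# Any OpenAI-compatible /v1/chat/completions endpoint (Step 3)
sudo kodachi-claw agent \
  --provider "custom:https://your-gateway.example.com" --message "hello"
# Anthropic Messages API proxy (Step 4)
sudo kodachi-claw agent \
  --provider "anthropic-custom:https://llm-proxy.corp.example.com" --message "review PR"
```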
Scenario 19: Local AI (kodachi-ai)
Run with local AI models — no API key required. Build with: cargo build --features kodachi-ai
Step 1: Local AI agent (auto-detects ONNX/GGUF models)
Expected Output: 🦀 Kodachi Claw Interactive Mode (local: ONNX + Qwen 3B)
Note: No API key needed. Auto-detects: Claude CLI > Mistral.rs > Local LLM > ONNX > TF-IDF
Step 2: NLP-routed command execution
Expected Output: NLP classifies intent, matches kodachi service commands, executes via gateway-core
Note: High-confidence NLP matches bypass the AI model for instant command routing
Step 3: Onboard with local AI (zero API keys)
Expected Output: Config created with kodachi-ai provider — fully offline capable
Note: No network needed for AI inference. Tor still used for anonymity
Step 4: List available local AI models (interactive command)
Expected Output: Model list: all-MiniLM-L6-v2.onnx (90MB), kodachi-intent-classifier.onnx (67MB), Qwen2.5-3B (2.1GB)
Note: Run inside agent interactive mode. Calls ai-trainer subprocess
Step 5: Download a GGUF model for local inference (interactive command)
Expected Output: Downloading Qwen2.5-3B... done
Note: Run inside agent interactive mode. Downloads to models directory
Step 6: Retrain NLP embeddings from latest command data (interactive command)
Expected Output: Training embeddings... 458 commands processed, model saved
Note: Run inside agent interactive mode. Calls ai-trainer subprocess
Step 7: Force specific AI tier (ONNX)
Expected Output: Agent using ONNX classifier model only
Note: Available tiers: onnx, mistral, genai, tfidf. Auto-fallback if unavailable
Step 8: Force TF-IDF fallback (no neural models)
Expected Output: Agent using TF-IDF keyword matching only
Note: Fastest but lowest accuracy. Useful for low-resource environments
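The local-AI build described at the top of this scenario can be sketched end to end. The cargo feature flag is documented above; the target path is the standard cargo debug output and is illustrative:

```shell
# Build with the local-AI feature enabled
cargo build --features kodachi-ai
# Zero-API-key onboarding, then run the agent with auto-detected local models
sudo ./target/debug/kodachi-claw onboard
sudo ./target/debug/kodachi-claw agent
```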
Scenario 20: Interactive Mode Commands
Slash commands available inside agent interactive mode
Step 1: Show available interactive commands
Expected Output: List of all slash commands with descriptions
Note: Run inside agent mode prompt
Step 2: Clear conversation history
Expected Output: Conversation history cleared
Note: Resets context for new conversation thread
Step 3: Exit agent mode (alias: /exit)
Expected Output: Agent shutdown initiated
Note: Gracefully stops Tor and restores identity if --restore-on-exit was used
Step 4: Exit agent mode (alias: /quit)
Expected Output: Agent shutdown initiated
Note: Same as /exit
Step 5: List local AI models (kodachi-ai feature)
Expected Output: Local model inventory with sizes and status
Note: Requires --features kodachi-ai build
Step 6: Download AI model (kodachi-ai feature)
Expected Output: Model download progress and completion
Note: Downloads ONNX or GGUF models to local cache
Step 7: Retrain NLP embeddings (kodachi-ai feature)
Expected Output: Training progress and completion
Note: Retrains intent classifier from latest command data
Scenario 21: Kodachi Gateway Integration
Execute Kodachi system services via AI agent (kodachi-ai feature)
Step 1: NLP command routing to routing-switch
Expected Output: NLP detects routing intent, executes routing-switch service via gateway-core
Note: Natural language commands auto-route to appropriate Kodachi services
Step 2: NLP command routing to dns-leak
Expected Output: Executes dns-leak service and returns results
Note: Service execution includes policy checks and risk validation
Step 3: Profile execution via profile-registry
Expected Output: Finds emergency profile, shows steps, executes panic sequence
Note: Workflow profiles available: panic, recovery, security-scan, network-test
Step 4: NLP routing to ip-fetch service
Expected Output: Fetches IP via ip-fetch and displays geolocation
Note: All service calls use recovery-engine for automatic retry on failure
Step 5: NLP routing to health-control network scan
Expected Output: Executes health-control net-check and returns diagnostics
Note: Supports 18+ Kodachi services via kodachi-gateway-core
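Natural-language requests that the NLP router could map to the services named above might look like this; the phrasings are illustrative assumptions, not the binary's documented examples:

```shell
# Routed to dns-leak (Step 2)
sudo kodachi-claw agent --message "check for DNS leaks"
# Routed to ip-fetch (Step 4)
sudo kodachi-claw agent --message "what is my current exit IP?"
# Routed to health-control net-check (Step 5)
sudo kodachi-claw agent --message "run a network health scan"
```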
Scenario 22: Training & Model Management
AI model training and management (kodachi-ai feature)
Step 1: Start agent with auto-learning enabled
Expected Output: Agent learns from user patterns and command sequences
Note: Learning-engine tracks patterns, sequences, and user preferences
Step 2: Manually trigger NLP retraining
Expected Output: Retrains intent classifier from accumulated interaction data
Note: Calls ai-trainer subprocess to rebuild embeddings
Step 3: View local model inventory
Expected Output: Lists ONNX classifiers, GGUF models, embeddings with sizes
Note: Shows model status: available, downloading, missing
Step 4: Refresh model catalog from sources
Expected Output: Updates available model list from Hugging Face and local sources
Note: Checks for model updates and new releases
Scenario 23: Profile & Workflow Discovery
Discover and execute workflow profiles (kodachi-ai feature)
Step 1: Profile-based workflow execution
Expected Output: Finds security-scan profile (92+ profiles available), shows steps, executes sequence
Note: Profiles include: security-scan, network-test, panic-mode, recovery-workflow
Step 2: Emergency profile execution
Expected Output: Executes panic-mode profile: kill network, wipe RAM, shutdown
Note: High-risk profiles require confirmation before execution
Step 3: Recovery profile execution
Expected Output: Executes recovery-workflow: check connectivity, reset interfaces, restore DNS
Note: Recovery profiles use recovery-engine for smart retry and correlation
Scenario 24: Advanced Skip Controls
Fine-grained control over startup phase skipping
Step 1: Skip MAC randomization only
Expected Output: Agent starts with original MAC, hostname and timezone still randomized
Note: Useful when MAC change breaks network connectivity
Step 2: Skip hostname randomization only
Expected Output: Agent starts with original hostname, MAC and timezone still randomized
Note: Useful for preserving network identifiers
Step 3: Skip all identity randomization (MAC, hostname, timezone)
Expected Output: Agent starts with original identity, Tor still active
Note: Tor routing active but no identity spoofing
Step 4: Skip Tor bootstrap only
Expected Output: Agent starts without Tor, identity still randomized
Note: Identity spoofing active but no Tor routing (local testing)
Step 5: Skip all anonymity and security phases
Expected Output: Agent starts immediately with no protection
Note: WARNING: No anonymity, no Tor, no identity changes. Local testing ONLY
Step 6: Combine multiple skip flags
Expected Output: Status with only timezone changed and no Tor
Note: Skip flags are composable for precise control
Step 7: Skip verification checks
Expected Output: Agent starts without post-bootstrap verification
Note: Faster startup but no connectivity/integrity validation
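The skip flags above are all documented in Global Options and compose freely. A hedged sketch of the steps in this scenario:

```shell
sudo kodachi-claw agent --skip-mac                 # keep original MAC only (Step 1)
sudo kodachi-claw agent --skip-hostname            # keep original hostname only (Step 2)
sudo kodachi-claw agent --skip-identity            # keep all identity, Tor still on (Step 3)
sudo kodachi-claw agent --skip-tor                 # spoof identity, no Tor (Step 4)
sudo kodachi-claw agent --skip-anonymity           # no protection at all (Step 5)
sudo kodachi-claw status --skip-mac --skip-hostname --skip-tor   # composed skips (Step 6)
sudo kodachi-claw agent --skip-ip-check --skip-dns-check         # skip verification (Step 7)
```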
Scenario 25: Circuit Strategy Examples
Tor circuit assignment and load balancing strategies
Step 1: Round-robin circuit assignment (default)
Expected Output: Each request uses next circuit in sequence (balanced load)
Note: Best for even load distribution across circuits
Step 2: Random circuit assignment
Expected Output: Each request randomly selects a circuit
Note: Best for unpredictable traffic patterns
Step 3: Least-used circuit assignment
Expected Output: Each request uses circuit with lowest current load
Note: Best for uneven request sizes or long-running requests
Step 4: Sticky circuit assignment (per tool/channel)
Expected Output: Each tool/channel always uses same dedicated circuit
Note: Best for maintaining consistent exit IPs per service
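The round-robin default can be illustrated with a few lines of shell arithmetic. This is a simulation of the assignment pattern only, not kodachi-claw's implementation:

```shell
# Simulate round-robin assignment of 6 requests over a 3-circuit pool
# (as with --circuit-strategy round-robin --tor-instances 3).
assigned=""
N=3
for req in 0 1 2 3 4 5; do
  c=$(( req % N ))                 # next circuit in sequence
  assigned="$assigned$c"
  echo "request $req -> circuit $c"
done
# prints circuits 0,1,2,0,1,2: load spreads evenly across the pool
```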
Scenario 26: Tor Mode Examples
Different Tor runtime modes and instance configurations
Step 1: Multi-circuit mode with single instance
Expected Output: Single Arti instance, all requests through one circuit
Note: Minimum resource mode, no circuit isolation
Step 2: Multi-circuit mode with 20 instances
Expected Output: 20 parallel Arti instances, maximum circuit diversity
Note: Maximum anonymity but high memory usage (each instance ~50MB)
Step 3: Single mode (explicit)
Expected Output: Single Arti instance, optimized for low-resource use
Note: Equivalent to --mode multi-circuit --tor-instances 1
Step 4: Namespace isolation via oniux
Expected Output: Full network namespace isolation, embedded Tor
Note: Requires root or CAP_NET_ADMIN. Strongest isolation
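The four configurations above map onto documented flags as follows (the memory figure is the document's own estimate):

```shell
sudo kodachi-claw agent --mode multi-circuit --tor-instances 1    # minimum resources (Step 1)
sudo kodachi-claw agent --mode multi-circuit --tor-instances 20   # max diversity, ~50MB/instance (Step 2)
sudo kodachi-claw agent --mode single                             # explicit single mode (Step 3)
sudo kodachi-claw agent --mode isolated                           # oniux namespace isolation (Step 4)
```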
Scenario 27: Recovery Examples
Internet connectivity recovery and troubleshooting
Step 1: Check and auto-recover internet
Expected Output: Connectivity check, recovery if needed via health-control
Note: Non-destructive check first, recovery only if needed
Step 2: Force immediate recovery
Expected Output: Skips check, immediately runs health-control recover-internet
Note: Use when connectivity check itself is failing
Step 3: Recovery with JSON output
Expected Output: JSON envelope with connectivity status, recovery actions, results
Note: Parseable output for automation and monitoring
Step 4: Agent with automatic recovery
Expected Output: Agent checks connectivity after MAC change and during shutdown
Note: Auto-recovers network if broken by identity changes
Step 5: Auto-recovery with identity restoration
Expected Output: Recovery after MAC change and during shutdown + identity restore
Note: Ensures connectivity restored before and after session
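The recovery flows above can be sketched with the documented recover-internet command and global flags (the force-recovery flag from Step 2 is not named in this document, so it is omitted here):

```shell
sudo kodachi-claw recover-internet          # check, recover only if broken (Step 1)
sudo kodachi-claw recover-internet --json   # machine-readable envelope (Step 3)
sudo kodachi-claw agent --auto-recover-internet --restore-on-exit   # Steps 4-5
```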
Command Examples (Raw)
AI Agent
Start and interact with the AI agent
Interactive session with full anonymity
Expected Output: Tor bootstrapped, identity randomized, agent readyNote
Requires onboarding first: kodachi-claw onboard
Single message mode
Expected Output: Response through Tor-routed connectionSpecific provider/model
Expected Output: Agent session using Anthropic ClaudeLocal offline model
Expected Output: Agent runs with local model, Tor still active for toolsLow temperature for deterministic output
Expected Output: Agent runs with temperature 0.2Hardware peripheral attached
Expected Output: Agent with STM32 board attachedUse installed Claude Code CLI (no API key)
Expected Output: Agent uses local Claude Code CLI for inferenceNote
Requires claude CLI installed. No API key needed
Daemon & Gateway
Long-running services and webhook endpoints
Full daemon with all channels
Expected Output: Daemon running: all configured channels activeNote
Listens on all configured channels simultaneously
Custom gateway port
Expected Output: Gateway + channels + heartbeat + scheduler runningBind to all interfaces
Expected Output: Daemon bound to 0.0.0.0:9090Gateway-only (webhook/WebSocket)
Expected Output: Gateway accepting webhook requests on :9090Install as systemd service
Expected Output: Service installed with auto-restart on failureSetup & Onboarding
First-time configuration and channel management
Full wizard (9 steps)
Expected Output: Guided 9-step setup wizardNote
Run this first before using agent or daemon
Quick setup
Expected Output: Config created with OpenRouter providerQuick setup with memory backend
Expected Output: Config created with Anthropic + SQLite memoryQuick setup with Claude Code CLI (no API key)
Expected Output: Config created with claude-code providerNote
No API key needed -- Claude Code handles auth internally
Reconfigure channels only
Expected Output: Channel configuration updatedBind Telegram identity
Expected Output: Telegram user bound to allowlistStatus & Diagnostics
System status, health checks, and diagnostics
Full status including MAC, hostname, timezone, IP, auth
Expected Output: Complete system status with identity infoBasic status without security/identity info
Expected Output: Config and channel status onlyJSON status for automation
Expected Output: Pretty-printed JSON envelope with status dataRun health diagnostics
Expected Output: Diagnostic report for daemon/scheduler/channelsProbe model availability
Expected Output: Available models for the specified providerScheduled Tasks
Configure and manage cron-style scheduled tasks
List all scheduled tasks
Expected Output: Table of scheduled tasks with statusRun every 6 hours
Expected Output: Task added with cron scheduleWeekly with timezone
Expected Output: Task scheduled for Monday 9AM ETOne-shot at specific time
Expected Output: One-time task scheduledEvery 5 minutes
Expected Output: Interval task added (300s)One-shot after 30 minutes
Expected Output: One-time delayed task scheduledPause/resume tasks
Expected Output: Task paused/resumedModels & Providers
Manage AI model catalogs and providers
Refresh model catalog from default provider
Expected Output: Model catalog updatedForce refresh from specific provider
Expected Output: OpenAI model catalog force-refreshedList all 12+ supported AI providers
Expected Output: Provider table with active markerCheck cached model availability
Expected Output: Model availability from cacheChannel Management
Configure and manage communication channels
List configured channels
Expected Output: Channel status table

Start all configured channels
Expected Output: All channels listening

Health check all channels
Expected Output: Channel health report

Add Telegram channel
Expected Output: Telegram channel configured

Remove a channel
Expected Output: Discord channel removed

Bind Telegram user ID to allowlist
Expected Output: Telegram user ID bound

Authentication
Manage provider authentication profiles
OAuth login
Expected Output: Browser-based OAuth flow started

Device code flow
Expected Output: Device code displayed for authorization

Paste API key
Expected Output: API key stored securely

Interactive token entry
Expected Output: Token stored in encrypted secret store

Refresh access token
Expected Output: Token refreshed successfully

List all auth profiles
Expected Output: Auth profile table with active markers

Show active profile and token expiry
Expected Output: Profile status with expiration info

Remove auth profile
Expected Output: Auth profile removed

Skills Management
Manage user-defined capabilities
List installed skills
Expected Output: Installed skills table

Install from GitHub
Expected Output: Skill installed and registered

Remove installed skill
Expected Output: Skill removed

Integrations
Browse and manage service integrations
Show GitHub integration details
Expected Output: GitHub integration configuration and status

Show Jira integration details
Expected Output: Jira integration configuration and status

Migration
Import data from other agent runtimes
Preview migration without writing
Expected Output: Migration preview with changes listed

Import from OpenClaw
Expected Output: Data imported from OpenClaw workspace

Hardware & Peripherals
Discover and manage hardware devices
Enumerate USB devices and known boards
Expected Output: Detected hardware devices

Introspect specific device
Expected Output: Device capabilities and firmware info

Get chip info
Expected Output: Chip specifications and pinout

List configured peripherals
Expected Output: Configured peripheral boards

Add STM32 board
Expected Output: Peripheral added to config

Flash firmware
Expected Output: Firmware flashed to device

Service Lifecycle
Install and manage as a system service
Install as systemd/launchd service
Expected Output: Service unit installed

Start the service
Expected Output: Service started

Stop the service
Expected Output: Service stopped

Check service status
Expected Output: Service running/stopped status

Uninstall the service
Expected Output: Service unit removed

Tor Instance Control
Control all Tor instances directly from kodachi-claw
List all known Tor instances
Expected Output: Table of instance tags and status

List instances with current Tor exit IP
Expected Output: Instance table with live Tor IP addresses

Start every Tor instance
Expected Output: All configured Tor instances started

Stop every Tor instance
Expected Output: All configured Tor instances stopped

Delete all non-default Tor instances
Expected Output: Custom Tor instances deleted

Delete all instances including default
Expected Output: All Tor instances deleted, including default
Note: Use when you want a full Tor instance reset

Automation-friendly Tor instance inventory
Expected Output: JSON envelope with instance and IP details

Anonymity & Tor Modes
Control Tor instances, circuits, and identity randomization
10 parallel circuits
Expected Output: 10 Arti instances bootstrapped, traffic distributed across circuits
Note: Default mode. Each tool/channel gets a different circuit

Namespace isolation via oniux
Expected Output: Namespace-isolated agent with embedded Tor
Note: Requires root or CAP_NET_ADMIN

Single circuit (low-resource)
Expected Output: Single Arti instance, minimal memory usage

Sticky circuit assignment
Expected Output: Sticky circuit assignment per tool/channel
Note: Strategies: round-robin (default), random, least-used, sticky

Random assignment across 5 circuits
Expected Output: Random circuit selection per request

Restore identity on exit
Expected Output: Identity restored after session ends
Note: Without this flag, spoofed identity persists after exit

Selective identity spoofing
Expected Output: Only timezone randomized, Tor still active

Gateway access with required auth
Expected Output: Gateway mode with mandatory authentication

Check and recover internet connectivity
Expected Output: Internet connectivity is working / Internet recovered successfully
Note: Invokes health-control recover-internet if connectivity is lost
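The check-then-recover flow above can be sketched in shell. `check_connectivity` is a hypothetical stand-in for the binary's internal probe, not a real kodachi-claw command; here it simulates a broken link.

```shell
# Stand-in for kodachi-claw's internal connectivity probe (hypothetical).
# A real probe might be e.g. `curl -s --max-time 5 https://check.torproject.org`.
check_connectivity() {
  return 1   # simulate: connectivity is down
}

if check_connectivity; then
  echo "Internet connectivity is working"
else
  # At this point kodachi-claw would invoke: health-control recover-internet
  echo "connectivity lost, running recovery"
fi
```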
Force recovery even if internet appears working
Expected Output: Internet recovered successfully
Note: Skips initial check, goes straight to health-control recovery

Check/recover with JSON output
Expected Output: {status: connected, recovery_needed: false, ...}
Note: Returns JSON envelope with connectivity status and recovery details
Auto-recover internet after identity changes
Expected Output: Net check after MAC change, recovery on exit
Note: Checks connectivity after MAC randomization and during shutdown

Skip flag overrides auto-recover
Expected Output: Agent runs without auto-recovery (skip wins)
Note: --skip-auto-recover-internet takes precedence over --auto-recover-internet
Skip Controls
Disable specific startup phases for debugging or testing
No Tor, no identity changes
Expected Output: Agent runs without Tor, no identity changes
Note: WARNING: No privacy protection. Local testing only

Skip all startup phases
Expected Output: Status with no anonymity bootstrap

Quick status without Tor
Expected Output: Status report with auth check only

Skip verification checks
Expected Output: Tor starts but IP/DNS not verified

Skip preflight checks
Expected Output: Agent starts without preflight verification

Output & Automation
JSON output modes for scripting and CI/CD
Compact JSON for scripting
Expected Output: {"status":"success",...}

Pretty-printed JSON
Expected Output: Formatted JSON with indentation

Human-annotated JSON
Expected Output: JSON with human-readable annotations
Note: Also: --json (compact), --json-pretty (indented)
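A script consuming the compact envelope would typically extract the status field before acting on it. The envelope below is a minimal example that assumes only a top-level "status" field, as documented above; other fields are illustrative.

```shell
# Hypothetical compact envelope; real fields beyond "status" may differ.
envelope='{"status":"success","data":{"tor_instances":10}}'

# Extract the status field with python3 (jq works equally well if installed).
status=$(printf '%s' "$envelope" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])')

echo "agent status: $status"
```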
Verbose logging
Expected Output: Debug-level log output

Suppress non-error output
Expected Output: Only error messages shown

Custom network settings
Expected Output: 60s timeout, 5 retries, fresh instances
Note: Policies: reuse (default), new, mixed
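The timeout-and-retry semantics above (60s timeout, 5 retries) can be sketched with a generic wrapper. `retry` is an illustrative helper, not part of kodachi-claw, and the actual flag names are not shown in this section.

```shell
# Generic retry-with-timeout wrapper: run a command under a per-attempt
# timeout, retrying up to a fixed number of times (uses coreutils `timeout`).
retry() {
  attempts=$1; timeout_s=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if timeout "$timeout_s" "$@"; then
      return 0
    fi
    echo "attempt $i failed, retrying" >&2
    i=$((i + 1))
  done
  return 1
}

# 5 retries, 60s timeout each; `true` stands in for a network operation.
retry 5 60 true && echo "succeeded"
```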
AI Gateway Providers
Route requests through AI gateway proxies (Cloudflare, Vercel, custom OpenAI-compatible endpoints)
Use Cloudflare AI Gateway
Expected Output: Request routed through gateway.ai.cloudflare.com/v1 over Tor
Note: Set CLOUDFLARE_API_KEY env var or api_key in config. Supports all Cloudflare-hosted models

Use Vercel AI Gateway
Expected Output: Request routed through api.vercel.ai over Tor
Note: Set VERCEL_API_KEY env var or api_key in config

Any OpenAI-compatible gateway via custom URL
Expected Output: Request sent to your-gateway.example.com/v1/chat/completions over Tor
Note: Works with vLLM, LiteLLM, Azure OpenAI, any /v1/chat/completions endpoint
Anthropic-compatible proxy (corporate/self-hosted)
sudo kodachi-claw agent --provider "anthropic-custom:https://llm-proxy.corp.example.com" --message "review PR"
Note: For proxies that speak the Anthropic Messages API instead of OpenAI format
Groq ultra-fast inference gateway
Expected Output: Agent response generated with Groq LPU inference
Note: Set GROQ_API_KEY. Ultra-low latency for supported models
Together AI inference gateway
sudo kodachi-claw agent --provider together --model meta-llama/Llama-3-70b-chat-hf --message "analyze"
Note: Set TOGETHER_API_KEY. Supports 100+ open models
Fireworks AI inference gateway
sudo kodachi-claw agent --provider fireworks --model accounts/fireworks/models/llama-v3-70b-instruct --message "write tests"
Note: Set FIREWORKS_API_KEY. Optimized for fast open-model inference
Onboard with a custom AI gateway
Expected Output: Config created with custom gateway as default provider
Note: The custom URL is stored in config.toml as default_provider

List all supported AI gateway providers
Expected Output: Table showing 30+ providers including Cloudflare, Vercel, Groq, Together, Fireworks, Mistral, xAI, and more
Note: Use custom: prefix for unlisted OpenAI-compatible gateways
Local AI (kodachi-ai)
Run with local AI models — no API key required. Build with: cargo build --features kodachi-ai
Local AI agent (auto-detects ONNX/GGUF models)
Expected Output: 🦀 Kodachi Claw Interactive Mode (local: ONNX + Qwen 3B)
Note: No API key needed. Auto-detects: Claude CLI > Mistral.rs > Local LLM > ONNX > TF-IDF

NLP-routed command execution
Expected Output: NLP classifies intent, matches kodachi service commands, executes via gateway-core
Note: High-confidence NLP matches bypass the AI model for instant command routing

Onboard with local AI (zero API keys)
Expected Output: Config created with kodachi-ai provider — fully offline capable
Note: No network needed for AI inference. Tor still used for anonymity

List available local AI models (interactive command)
Expected Output: Model list: all-MiniLM-L6-v2.onnx (90MB), kodachi-intent-classifier.onnx (67MB), Qwen2.5-3B (2.1GB)
Note: Run inside agent interactive mode. Calls ai-trainer subprocess

Download a GGUF model for local inference (interactive command)
Expected Output: Downloading Qwen2.5-3B... done
Note: Run inside agent interactive mode. Downloads to models directory

Retrain NLP embeddings from latest command data (interactive command)
Expected Output: Training embeddings... 458 commands processed, model saved
Note: Run inside agent interactive mode. Calls ai-trainer subprocess

Force specific AI tier (ONNX)
Expected Output: Agent using ONNX classifier model only
Note: Available tiers: onnx, mistral, genai, tfidf. Auto-fallback if unavailable
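The auto-fallback order might look like the following sketch. `tier_available` is a hypothetical stand-in for the availability probe the binary performs internally; here it simulates a machine where only the TF-IDF fallback is present.

```shell
# Stand-in availability probe (hypothetical): only TF-IDF is "installed".
tier_available() {
  case $1 in
    tfidf) return 0 ;;
    *)     return 1 ;;
  esac
}

# Walk the documented tier order and take the first available one.
for tier in onnx mistral genai tfidf; do
  if tier_available "$tier"; then
    echo "selected tier: $tier"
    break
  fi
done
```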
Force TF-IDF fallback (no neural models)
Expected Output: Agent using TF-IDF keyword matching only
Note: Fastest but lowest accuracy. Useful for low-resource environments
Interactive Mode Commands
Slash commands available inside agent interactive mode
Show available interactive commands
Expected Output: List of all slash commands with descriptions
Note: Run inside agent mode prompt

Clear conversation history
Expected Output: Conversation history cleared
Note: Resets context for new conversation thread

Exit agent mode (alias: /exit)
Expected Output: Agent shutdown initiated
Note: Gracefully stops Tor and restores identity if --restore-on-exit was used

Exit agent mode (alias: /quit)
Expected Output: Agent shutdown initiated
Note: Same as /quit
List local AI models (kodachi-ai feature)
Expected Output: Local model inventory with sizes and status
Note: Requires --features kodachi-ai build

Download AI model (kodachi-ai feature)
Expected Output: Model download progress and completion
Note: Downloads ONNX or GGUF models to local cache

Retrain NLP embeddings (kodachi-ai feature)
Expected Output: Training progress and completion
Note: Retrains intent classifier from latest command data
Kodachi Gateway Integration
Execute Kodachi system services via AI agent (kodachi-ai feature)
NLP command routing to routing-switch
Expected Output: NLP detects routing intent, executes routing-switch service via gateway-core
Note: Natural language commands auto-route to appropriate Kodachi services

NLP command routing to dns-leak
Expected Output: Executes dns-leak service and returns results
Note: Service execution includes policy checks and risk validation

Profile execution via profile-registry
Expected Output: Finds emergency profile, shows steps, executes panic sequence
Note: Workflow profiles available: panic, recovery, security-scan, network-test

NLP routing to ip-fetch service
Expected Output: Fetches IP via ip-fetch and displays geolocation
Note: All service calls use recovery-engine for automatic retry on failure

NLP routing to health-control network scan
Expected Output: Executes health-control net-check and returns diagnostics
Note: Supports 18+ Kodachi services via kodachi-gateway-core
Training & Model Management
AI model training and management (kodachi-ai feature)
Start agent with auto-learning enabled
Expected Output: Agent learns from user patterns and command sequences
Note: Learning-engine tracks patterns, sequences, and user preferences

Manually trigger NLP retraining
Expected Output: Retrains intent classifier from accumulated interaction data
Note: Calls ai-trainer subprocess to rebuild embeddings

View local model inventory
Expected Output: Lists ONNX classifiers, GGUF models, embeddings with sizes
Note: Shows model status: available, downloading, missing

Refresh model catalog from sources
Expected Output: Updates available model list from Hugging Face and local sources
Note: Checks for model updates and new releases
Profile & Workflow Discovery
Discover and execute workflow profiles (kodachi-ai feature)
Profile-based workflow execution
Expected Output: Finds security-scan profile (92+ profiles available), shows steps, executes sequence
Note: Profiles include: security-scan, network-test, panic-mode, recovery-workflow

Emergency profile execution
Expected Output: Executes panic-mode profile: kill network, wipe RAM, shutdown
Note: High-risk profiles require confirmation before execution

Recovery profile execution
Expected Output: Executes recovery-workflow: check connectivity, reset interfaces, restore DNS
Note: Recovery profiles use recovery-engine for smart retry and correlation
Advanced Skip Controls
Fine-grained control over startup phase skipping
Skip MAC randomization only
Expected Output: Agent starts with original MAC, hostname and timezone still randomized
Note: Useful when MAC change breaks network connectivity

Skip hostname randomization only
Expected Output: Agent starts with original hostname, MAC and timezone still randomized
Note: Useful for preserving network identifiers

Skip all identity randomization (MAC, hostname, timezone)
Expected Output: Agent starts with original identity, Tor still active
Note: Tor routing active but no identity spoofing

Skip Tor bootstrap only
Expected Output: Agent starts without Tor, identity still randomized
Note: Identity spoofing active but no Tor routing (local testing)

Skip all anonymity and security phases
Expected Output: Agent starts immediately with no protection
Note: WARNING: No anonymity, no Tor, no identity changes. Local testing ONLY

Combine multiple skip flags
Expected Output: Status with only timezone changed and no Tor
Note: Skip flags are composable for precise control

Skip verification checks
Expected Output: Agent starts without post-bootstrap verification
Note: Faster startup but no connectivity/integrity validation
Circuit Strategy Examples
Tor circuit assignment and load balancing strategies
Round-robin circuit assignment (default)
Expected Output: Each request uses next circuit in sequence (balanced load)
Note: Best for even load distribution across circuits
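A minimal sketch of round-robin selection, assuming request i maps to circuit i mod N (the circuit count and request IDs are illustrative):

```shell
# Round-robin: cycle through circuits 0..N-1, one per request.
circuits=10
next=0
for request in 1 2 3 4 5; do
  echo "request $request -> circuit $next"
  next=$(( (next + 1) % circuits ))
done
```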
Random circuit assignment
Expected Output: Each request randomly selects a circuit
Note: Best for unpredictable traffic patterns

Least-used circuit assignment
Expected Output: Each request uses circuit with lowest current load
Note: Best for uneven request sizes or long-running requests

Sticky circuit assignment (per tool/channel)
Expected Output: Each tool/channel always uses same dedicated circuit
Note: Best for maintaining consistent exit IPs per service
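Sticky assignment can be sketched as a persistent tool-to-circuit map: each tool keeps the circuit it was first assigned. This sketch uses a bash associative array; the tool names are illustrative.

```shell
# Sticky assignment (bash): first request from a tool pins its circuit.
declare -A assigned
next=0
for tool in telegram discord telegram slack discord; do
  if [ -z "${assigned[$tool]}" ]; then
    assigned[$tool]=$next      # first sighting: pin a fresh circuit
    next=$((next + 1))
  fi
  echo "$tool -> circuit ${assigned[$tool]}"
done
```

Repeated requests from `telegram` and `discord` reuse circuits 0 and 1, giving each service a consistent exit IP.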
Tor Mode Examples
Different Tor runtime modes and instance configurations
Multi-circuit mode with single instance
Expected Output: Single Arti instance, all requests through one circuit
Note: Minimum resource mode, no circuit isolation

Multi-circuit mode with 20 instances
Expected Output: 20 parallel Arti instances, maximum circuit diversity
Note: Maximum anonymity but high memory usage (each instance ~50MB)
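A quick sizing estimate at the stated ~50MB per Arti instance:

```shell
# Rough memory footprint for multi-instance mode: instances * ~50MB each.
instances=20
per_instance_mb=50
echo "approx. memory: $((instances * per_instance_mb)) MB"
```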
Single mode (explicit)
Expected Output: Single Arti instance, optimized for low-resource environments
Note: Equivalent to --mode multi-circuit --tor-instances 1

Namespace isolation via oniux
Expected Output: Full network namespace isolation, embedded Tor
Note: Requires root or CAP_NET_ADMIN. Strongest isolation
Recovery Examples
Internet connectivity recovery and troubleshooting
Check and auto-recover internet
Expected Output: Connectivity check, recovery if needed via health-control
Note: Non-destructive check first, recovery only if needed

Force immediate recovery
Expected Output: Skips check, immediately runs health-control recover-internet
Note: Use when connectivity check itself is failing

Recovery with JSON output
Expected Output: JSON envelope with connectivity status, recovery actions, results
Note: Parseable output for automation and monitoring

Agent with automatic recovery
Expected Output: Agent checks connectivity after MAC change and during shutdown
Note: Auto-recovers network if broken by identity changes

Auto-recovery with identity restoration
Expected Output: Recovery after MAC change and during shutdown + identity restore
Note: Ensures connectivity restored before and after session
Environment Variables
| Variable | Description | Default | Values |
|---|---|---|---|
| RUST_LOG | Set logging level | info | trace, debug, info, warn, error |
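A debug-level run would export the variable before invoking the binary. Note that plain `sudo` strips the caller's environment, so `sudo -E` (or `env RUST_LOG=debug ...`) may be needed to pass it through; the invocation below is commented out as a sketch.

```shell
# RUST_LOG is read from the environment at startup.
export RUST_LOG=debug
# sudo -E kodachi-claw status   # -E preserves RUST_LOG across sudo
echo "RUST_LOG=$RUST_LOG"
```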
Exit Codes
| Code | Description |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | Permission denied |
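A wrapper script can dispatch on these exit codes. `run_cmd` below is a stand-in for a real kodachi-claw invocation; here it simulates exit code 2 (invalid arguments).

```shell
# Stand-in for a kodachi-claw invocation; simulates "invalid arguments".
run_cmd() { return 2; }

run_cmd
case $? in
  0) echo "success" ;;
  1) echo "general error" ;;
  2) echo "invalid arguments" ;;
  3) echo "permission denied" ;;
  *) echo "unknown exit code" ;;
esac
```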