ai-cmd

AI-powered command-line interface for natural language command execution

Version: 9.0.1 | Size: 55.8MB | Author: Warith Al Maawali

License: Proprietary | Website: https://www.digi77.com


File Information

Property Value
Binary Name ai-cmd
Version 9.0.1
Build Date 2026-02-26T08:01:59.031929706Z
Rust Version 1.82.0
File Size 55.8MB

SHA256 Checksum

d9e36d46428a79ac7420abecbf8c0a283e1db01a59f28161b78c5e130d643115
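After downloading, the binary should be verified against the published digest. A minimal sketch of the verification flow follows; a scratch file stands in for the downloaded binary here, so in practice substitute the real `ai-cmd` path and the SHA256 value from the table above.

```shell
# Checksum verification flow (illustrative: a scratch file stands in for
# the real ai-cmd binary; substitute the published digest above).
printf 'demo' > /tmp/ai-cmd-demo
expected=$(sha256sum /tmp/ai-cmd-demo | awk '{print $1}')
actual=$(sha256sum /tmp/ai-cmd-demo | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "checksum OK" || echo "checksum MISMATCH"
```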

Features

Feature Description
Core Advanced functionality for Kodachi OS

Security Features

Feature Description
Input validation All inputs are validated and sanitized
Rate limiting Built-in rate limiting for network operations
Authentication Secure authentication with certificate pinning
Encryption TLS 1.3 for all network communications

System Requirements

Requirement Value
OS Linux (Debian-based)
Privileges root/sudo for system operations
Dependencies OpenSSL, libcurl

Global Options

Flag Description
-h, --help Print help information
-v, --version Print version information
-n, --info Display detailed information
-e, --examples Show usage examples
--json Output in JSON format
--json-pretty Pretty-print JSON output with indentation
--json-human Enhanced JSON output with improved formatting (like jq)
--verbose Enable verbose output
--quiet Suppress non-essential output
--no-color Disable colored output
--config <FILE> Use custom configuration file
--timeout <SECS> Set timeout (default: 30)
--retry <COUNT> Retry attempts (default: 3)
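The `--timeout` and `--retry` flags imply a retry loop around each operation. When scripting around ai-cmd, an equivalent wrapper can be sketched as follows; `run_with_retry` and the use of coreutils `timeout` are illustrative, not part of ai-cmd itself.

```shell
# Sketch: a retry-with-timeout wrapper mirroring the documented
# --timeout/--retry semantics. "$@" stands in for an ai-cmd invocation.
run_with_retry() {
  local retries=$1 timeout_secs=$2; shift 2
  local attempt=1
  while [ "$attempt" -le "$retries" ]; do
    if timeout "$timeout_secs" "$@"; then
      return 0          # command succeeded within the time budget
    fi
    attempt=$((attempt + 1))
  done
  return 1              # all attempts exhausted
}

run_with_retry 3 30 true && echo "ok"
```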

Commands

query

Process a natural language query and execute the matching command

Usage:

ai-cmd query "<natural language query>"

Options:

- --threshold: Confidence threshold for execution
- --dry-run: Preview without execution
- --auto-execute: Auto-execute high confidence matches
- --engine: AI engine tier: auto, tfidf, onnx, onnx-classifier, llm, mistral, genai, claude
- --model-path: Path to GGUF model for local LLM tier
- --stream: Stream response tokens in real-time
- --temperature: Sampling temperature 0.0-2.0 (default: 0.7)
- --model: Model name for GenAI tier (supports local, OpenAI/Codex, Claude, Gemini, OpenRouter routing)
- --tor-proxy: Route cloud providers through Tor proxy
- --use: Use a specific GGUF model file from models/ directory
- --no-gateway, --skip-gateway: Bypass gateway validation and execute directly

Examples:

ai-cmd query "check network connectivity"
ai-cmd query "start tor service" --dry-run
ai-cmd query "get my ip address" --json
ai-cmd query "block internet" --threshold 0.8
ai-cmd query "check tor" --engine onnx
ai-cmd query "check tor" --engine mistral
ai-cmd query "check tor" --engine mistral --json
ai-cmd query "check tor status" --no-gateway --dry-run
ai-cmd query "check tor status" --skip-gateway --dry-run
ai-cmd query "check tor status" --engine genai --model gpt-4o-mini --dry-run
ai-cmd query "check tor status" --engine genai --model claude-sonnet-4-5 --tor-proxy --dry-run

interactive

Start interactive REPL mode for continuous queries

Usage:

ai-cmd interactive

Options:

- --threshold: Confidence threshold for execution
- --engine: AI engine tier: auto, tfidf, onnx, llm, claude
- --model-path: Path to GGUF model for local LLM tier

Examples:

ai-cmd interactive
ai-cmd interactive --threshold 0.8
ai-cmd interactive --engine onnx
ai-cmd interactive --engine claude --threshold 0.7
ai-cmd interactive --engine llm --model-path ./models/phi.gguf
ai-cmd interactive --engine genai

feedback

Submit feedback to improve intent classification

Usage:

ai-cmd feedback "<query>" [OPTIONS]

Options:

- --correct-intent: Specify the correct intent ID
- --correct-command: Specify the correct command
- --comment: Add a comment

Examples:

ai-cmd feedback "check network" --correct-intent network_check
ai-cmd feedback "test internet" --correct-command "health-control net-check"
ai-cmd feedback "my ip" --comment "Should show IP address"

preview

Preview intent classification without execution

Usage:

ai-cmd preview "<query>"

Options:

- --alternatives: Number of alternative matches to show

Examples:

ai-cmd preview "check network"
ai-cmd preview "stop tor" --alternatives 5
ai-cmd preview "block internet" --json

voice

Voice input mode for natural language queries

Usage:

ai-cmd voice [OPTIONS]

Options:

- --continuous: Enable continuous listening mode
- --timeout: Timeout for voice input in seconds
- --provider: STT provider: whisper-cpp, vosk, placeholder, auto
- --voice: TTS voice name
- --speed: Speech speed (words per minute)
- --list-devices: List available audio devices
- --check-deps: Check voice engine dependencies

Examples:

ai-cmd voice
ai-cmd voice --continuous
ai-cmd voice --timeout 60
ai-cmd voice --provider whisper-cpp
ai-cmd voice --voice en-us --speed 200
ai-cmd voice --list-devices
ai-cmd voice --check-deps

suggest

Get proactive command suggestions based on usage patterns

Usage:

ai-cmd suggest [OPTIONS]

Options:

- --limit: Number of suggestions to show
- --proactive: Show proactive suggestions

Examples:

ai-cmd suggest
ai-cmd suggest --limit 10
ai-cmd suggest --proactive

workflow

Execute workflow profiles via natural language queries

Usage:

ai-cmd workflow [OPTIONS]

tiers

List all AI engine tiers and their availability status

Usage:

ai-cmd tiers [--json]

Examples:

ai-cmd tiers
ai-cmd tiers --json

tools

List all callable AI tools with parameter schemas

Usage:

ai-cmd tools [--json]

Examples:

ai-cmd tools
ai-cmd tools --json

providers

List available GenAI providers and model configuration

Usage:

ai-cmd providers [--json]

Examples:

ai-cmd providers
ai-cmd providers --json

model-info

Show loaded AI model details and configuration

Usage:

ai-cmd model-info [--json]

Examples:

ai-cmd model-info
ai-cmd model-info --json

policy

Show current AI policy (intent thresholds, tool allowlist, risk mode)

Usage:

ai-cmd policy [--json]

Examples:

ai-cmd policy
ai-cmd policy --json

export-intents

Export all intents as JSON catalog for shared libraries

Usage:

ai-cmd export-intents [--output FILE] [--json]

Examples:

ai-cmd export-intents
ai-cmd export-intents --output intent-catalog.json
ai-cmd export-intents --json

Operational Scenarios

Scenario-oriented workflows generated from the binary's built-in examples (-e --json).

Scenario 1: Basic Usage

Basic natural language queries (safe mode examples)

Step 1: Check network status using natural language

ai-cmd query "check network connectivity" --dry-run
Expected Output: Dry-run preview of matched command

Step 2: Get IP address query result with JSON output

ai-cmd query "what is my ip address" --dry-run --json
Expected Output: JSON with intent and command preview
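Scripts can consume the `--json` output directly. The response shape below (an `intent`, `confidence`, and `command` field) is an assumption used for illustration, not the documented schema; inspect real output with `--json-pretty` first.

```shell
# Sketch: consuming a --json query response in a script.
# The field names here are assumed, not the documented schema.
response='{"intent":"network_check","confidence":0.84,"command":"health-control net-check"}'
echo "$response" | python3 -c \
  'import json, sys; d = json.load(sys.stdin); print(d["intent"], d["confidence"])'
```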

Scenario 2: Advanced Features

Advanced query options and preview workflows

Step 1: Preview command execution without running

ai-cmd query "start tor service" --dry-run
Expected Output: Dry-run preview of Tor command

Step 2: Show interactive mode usage

ai-cmd interactive --help
Expected Output: Interactive mode options and examples

Step 3: Preview intent matching with alternatives

ai-cmd preview "block internet" --alternatives 5
Expected Output: Intent analysis with alternative matches

Step 4: Use custom confidence threshold with dry-run

ai-cmd query "test dns leak" --threshold 0.8 --dry-run
Expected Output: Dry-run preview gated by threshold

Scenario 3: Feedback System

Feedback command syntax and usage patterns

Step 1: Show feedback syntax for intent correction

ai-cmd feedback "check network" --correct-intent network_check --help
Expected Output: Feedback command usage help

Step 2: Show feedback syntax for command correction

ai-cmd feedback "check tor" --correct-command "tor-switch tor-status" --help
Expected Output: Feedback command usage help

Step 3: Show feedback syntax for comments

ai-cmd feedback "check tor" --comment "Should use tor-status subcommand" --help
Expected Output: Feedback command usage help

Scenario 4: Voice Input

Voice command usage, diagnostics, and device checks

Step 1: Show voice mode usage and options

ai-cmd voice --help
Expected Output: Voice command help

Step 2: Show continuous voice mode usage

ai-cmd voice --continuous --help
Expected Output: Voice command help for continuous mode

Step 3: Show timeout option usage

ai-cmd voice --timeout 10 --help
Expected Output: Voice command help with timeout option

Step 4: Show provider selection usage

ai-cmd voice --provider vosk --help
Expected Output: Voice command help with provider option

Step 5: List available audio input devices

ai-cmd voice --list-devices
Expected Output: Audio device list

Step 6: Check voice dependencies

ai-cmd voice --check-deps
Expected Output: Voice dependency status report

Scenario 5: AI Suggestions

Get command suggestions based on usage patterns

Step 1: Get recent command suggestions

ai-cmd suggest
Expected Output: List of suggested commands

Step 2: Get proactive suggestions

ai-cmd suggest --proactive
Expected Output: Popular or proactive suggestions

Step 3: Limit number of suggestions

ai-cmd suggest --limit 5
Expected Output: Top 5 suggestions

Scenario 6: Interactive Mode

Interactive REPL usage and configuration options

Step 1: Show interactive mode usage

ai-cmd interactive --help
Expected Output: Interactive mode help

Step 2: Show threshold usage in interactive mode

ai-cmd interactive --threshold 0.8 --help
Expected Output: Interactive help with threshold option

Step 3: Show ONNX engine interactive usage

ai-cmd interactive --engine onnx --help
Expected Output: Interactive help with engine option

Step 4: Show Claude interactive syntax

ai-cmd interactive --engine claude --threshold 0.7 --help
Expected Output: Interactive help with Claude engine options

Step 5: Show local LLM interactive syntax

ai-cmd interactive --engine llm --model-path ./models/phi.gguf --help
Expected Output: Interactive help with model-path option

Scenario 7: AI Engine Tiers

Select specific AI engine tiers for query processing

Step 1: Auto-select best available engine tier

ai-cmd query "check tor" --engine auto --dry-run
Expected Output: Dry-run preview using auto-selected tier

Step 2: Use TF-IDF keyword matching tier

ai-cmd query "check tor" --engine tfidf --dry-run
Expected Output: Dry-run preview using tfidf tier

Step 3: Use ONNX semantic matching tier

ai-cmd query "check tor" --engine onnx --dry-run
Expected Output: Dry-run preview using onnx tier

Step 4: Show Mistral tier syntax and options

ai-cmd query "check tor" --engine mistral --help
Expected Output: Query help with mistral engine

Step 5: Show GenAI tier syntax and options

ai-cmd query "check tor" --engine genai --help
Expected Output: Query help with genai engine

Step 6: Show legacy LLM tier syntax

ai-cmd query "check tor" --engine llm --help
Expected Output: Query help with llm engine

Step 7: Show custom model-path syntax

ai-cmd query "check tor" --engine llm --model-path ./models/phi.gguf --help
Expected Output: Query help with model-path option

Step 8: Show Claude tier syntax and options

ai-cmd query "check tor" --engine claude --help
Expected Output: Query help with claude engine

Step 9: ONNX engine with dry-run preview

ai-cmd query "check tor status" --engine onnx --dry-run
Expected Output: Preview matched command without executing

Step 10: ONNX engine dry-run validation

ai-cmd query "test dns leak" --engine onnx --dry-run
Expected Output: Dry-run DNS leak query preview

Step 11: Show all available AI tiers and status

ai-cmd tiers
Expected Output: Tier list with availability indicators
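The fallback implied by `--engine auto` can be pictured as a first-available scan over the tiers. The probe function and tier ordering below are assumptions for illustration; the actual selection logic is internal to ai-cmd.

```shell
# Sketch: an "auto" engine falling back through tiers in preference order.
# available() and the ordering are assumed, not the documented logic.
available() { [ "$1" = "tfidf" ]; }   # pretend only tfidf is installed

for tier in onnx llm tfidf; do
  if available "$tier"; then
    echo "selected: $tier"
    break
  fi
done
```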

Scenario 8: Mistral.rs Local LLM

Mistral tier command syntax and diagnostics

Step 1: Show local mistral query syntax

ai-cmd query "is my system secure?" --engine mistral --help
Expected Output: Query help with mistral options

Step 2: Show mistral streaming syntax

ai-cmd query "check all services" --engine mistral --stream --help
Expected Output: Query help including stream option

Step 3: Show mistral temperature option usage

ai-cmd query "run dns leak test" --engine mistral --temperature 0.3 --help
Expected Output: Query help including temperature option

Step 4: Show loaded model details and tier metadata

ai-cmd model-info --json
Expected Output: JSON model status

Scenario 9: Ollama / GenAI Provider

GenAI tier syntax and provider discovery

Step 1: Show genai query syntax

ai-cmd query "analyze my network" --engine genai --help
Expected Output: Query help with genai options

Step 2: Show genai model override syntax

ai-cmd query "security audit" --engine genai --model llama3:8b --help
Expected Output: Query help with model option

Step 3: List available providers and models

ai-cmd providers --json
Expected Output: JSON provider list

Step 4: Show OpenAI/Codex model syntax with tor-proxy flag

ai-cmd query "detailed analysis" --engine genai --model gpt-4o-mini --tor-proxy --help
Expected Output: Query help including tor-proxy option

Step 5: Show Claude model syntax (direct key or OpenRouter fallback)

ai-cmd query "check tor status" --engine genai --model claude-sonnet-4-5 --dry-run
Expected Output: Dry-run preview using Claude-compatible cloud routing

Step 6: Show Gemini model syntax (direct key or OpenRouter fallback)

ai-cmd query "check dns status" --engine genai --model gemini-2.0-flash --dry-run
Expected Output: Dry-run preview using Gemini-compatible cloud routing

Scenario 10: Tool Calling

Tool-calling queries and tool catalog inspection

Step 1: Tool-calling query in safe dry-run mode

ai-cmd query "what's my tor status and dns config?" --dry-run
Expected Output: Dry-run preview for multi-tool query

Step 2: List all callable tools with JSON metadata

ai-cmd tools --json
Expected Output: JSON tool definitions

Step 3: Show tools grouped by domain

ai-cmd tools
Expected Output: Human-readable tool catalog

Scenario 11: Security-First Routing

Fast-path and fallback-path routing examples

Step 1: Fast-path style query with dry-run

ai-cmd query "check tor status" --dry-run
Expected Output: Dry-run preview for tor status

Step 2: Show slow-path mistral syntax

ai-cmd query "explain tor routing in detail" --engine mistral --help
Expected Output: Query help for mistral explanation path

Step 3: Configuration change flow with safety preview

ai-cmd query "change dns to cloudflare" --dry-run
Expected Output: Dry-run preview of DNS change
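The fast-path/slow-path split above can be sketched as a routing heuristic: short status-style queries go to the fast matcher, open-ended explanation requests to the LLM path. The keyword rule below is an assumption for illustration, not ai-cmd's actual router.

```shell
# Sketch: fast-path vs slow-path routing. The keyword heuristic is assumed.
route() {
  case "$1" in
    *"explain"*|*"in detail"*) echo "slow-path: mistral" ;;   # open-ended
    *)                         echo "fast-path: tfidf/onnx" ;; # status-style
  esac
}

route "check tor status"
route "explain tor routing in detail"
```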

Scenario 12: AI Policy

View and manage AI policy configuration including intent thresholds, tool allowlists, and risk mode.

Step 1: Show current AI policy (thresholds, tools, risk mode)

ai-cmd policy
Expected Output: Policy details with intent thresholds and tool list

Step 2: AI policy as JSON for programmatic use

ai-cmd policy --json
Expected Output: JSON with version, thresholds, allowlist, signature status

Scenario 13: Model Management

Download, list, and select GGUF models for local LLM inference

Step 1: Show syntax for selecting a specific GGUF model file

ai-cmd query "check tor" --use Qwen2.5-3B-Instruct-Q4_K_M.gguf --help
Expected Output: Query help with --use model selector

Note

Use --use without --help when the model file exists in models/

Step 2: Show details of loaded model

ai-cmd model-info --json
Expected Output: Model path, architecture, and capabilities

Step 3: Show all AI tiers including model availability

ai-cmd tiers --json
Expected Output: JSON with tier details and download status

Scenario 14: Linux System Commands

General-purpose Linux system commands via natural language

Step 1: Disk usage request preview

ai-cmd query "how much disk space" --dry-run
Expected Output: Dry-run preview for disk space command

Step 2: Process listing request preview

ai-cmd query "list running processes" --dry-run
Expected Output: Dry-run preview for process listing command

Step 3: Network port check request preview

ai-cmd query "check if port 80 is open" --dry-run
Expected Output: Dry-run preview for port inspection command

Step 4: Safe preview of package update flow

ai-cmd query "update system packages" --dry-run
Expected Output: Dry-run preview of package update

Note

Remove --dry-run to execute after authentication

Step 5: Network diagnostic request preview

ai-cmd query "ping google" --dry-run
Expected Output: Dry-run preview for ping diagnostic

Step 6: Log viewing with explicit TF-IDF engine

ai-cmd query "show system logs" --engine tfidf --dry-run
Expected Output: Dry-run preview using tfidf tier

Step 7: Service management query with strict confidence threshold

ai-cmd query "restart nginx service" --threshold 0.9 --dry-run
Expected Output: Dry-run preview gated by 0.9 threshold

Step 8: Firewall inspection query preview

ai-cmd query "check firewall rules" --dry-run
Expected Output: Dry-run preview for firewall status command

Scenario 15: Confidence Threshold Reference

The --threshold flag controls minimum confidence required for a match. Range: 0.3 to 1.0. Default: 0.6.
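The gate this flag implies is a simple numeric comparison between the match confidence and the threshold. A minimal sketch, assuming execution proceeds only when confidence meets or exceeds the threshold:

```shell
# Sketch: the confidence gate implied by --threshold (assumed semantics).
# Exits 0 when confidence c meets threshold t, non-zero otherwise.
meets_threshold() {
  awk -v c="$1" -v t="$2" 'BEGIN { exit !(c >= t) }'
}

meets_threshold 0.72 0.6 && echo "execute" || echo "ask for confirmation"
```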

Step 1: Minimum threshold for broad matching

ai-cmd query "check tor" --threshold 0.3 --dry-run
Expected Output: Dry-run preview with loose confidence gating

Step 2: Default threshold balance

ai-cmd query "check tor" --threshold 0.6 --dry-run
Expected Output: Dry-run preview with default confidence gating

Step 3: Higher threshold for sensitive operations

ai-cmd query "block internet" --threshold 0.8 --dry-run
Expected Output: Dry-run preview with strict confidence gating

Step 4: Very strict threshold for near-exact matching

ai-cmd query "check tor" --threshold 0.95 --dry-run
Expected Output: Dry-run preview requiring near-exact match

Scenario 16: Workflow Profiles

Execute multi-step workflow profiles for complete operations

Step 1: List all available workflow profiles

ai-cmd workflow --list
Expected Output: Profiles grouped by category (privacy, emergency, network, etc.)

Step 2: Preview privacy workflow profile

ai-cmd workflow "maximum privacy setup" --preview
Expected Output: Workflow details, steps, and services used

Note

Use without --preview to execute after review

Step 3: Preview emergency recovery workflow

ai-cmd workflow "emergency recovery" --preview
Expected Output: Workflow details, steps, and services used

Step 4: Dry-run workflow execution

ai-cmd workflow "setup network" --dry-run
Expected Output: Shows what would be executed without running commands

Step 5: Preview with custom confidence threshold

ai-cmd workflow "tor setup" --threshold 0.7 --preview
Expected Output: Workflow preview if confidence >= 0.7

Step 6: Show workflow registry statistics

ai-cmd workflow --stats
Expected Output: Profile counts, categories, and service usage

Step 7: Preview workflow with custom parameters

ai-cmd workflow "complete anonymity" --param country=us --preview
Expected Output: Workflow with custom parameter values

Note

Pass key=value pairs for profile parameters
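Splitting the key=value pairs passed via `--param` is standard shell parameter expansion; the sketch below shows the parse a caller (or the workflow engine, speculatively) might perform. The parameter names are examples only.

```shell
# Sketch: splitting --param key=value pairs (parameter names are examples).
for pair in country=us exit_node=de; do
  key=${pair%%=*}     # text before the first '='
  value=${pair#*=}    # text after the first '='
  echo "$key -> $value"
done
```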

Scenario 17: Setup & Model Management

Download AI models and manage engine tiers (via ai-trainer)

Step 1: List all AI engine tiers and their availability

ai-cmd tiers
Expected Output: Tier list showing which engines are available

Note

Shows setup hints for unavailable tiers

Step 2: Show active model paths and availability across AI tiers

ai-cmd model-info
Expected Output: Model inventory for embeddings/LLM backends

Note

Useful before running complex queries

Step 3: Get model inventory in JSON format

ai-cmd model-info --json
Expected Output: Structured model metadata

Note

Suitable for automation checks

Step 4: List available provider backends

ai-cmd providers
Expected Output: Provider list and status

Step 5: List providers in JSON format

ai-cmd providers --json
Expected Output: Structured provider status details

Note

Includes fields useful for health checks

Step 6: Show AI tier readiness in JSON format

ai-cmd tiers --json
Expected Output: Tier availability and setup hints as JSON

Scenario 18: Gateway Core Integration

ai-cmd and ai-gateway share the same registry/policy/executor core for consistent behavior

Step 1: Service-style queries validate through the shared gateway core

ai-cmd query "check tor status" --dry-run --json
Expected Output: Dry-run preview with consistent policy decision

Step 2: Bypass gateway validation and execute directly (primary flag)

ai-cmd query "check tor status" --no-gateway --dry-run
Expected Output: Dry-run preview without gateway validation step

Step 3: Same bypass behavior using alias flag

ai-cmd query "check tor status" --skip-gateway --dry-run
Expected Output: Dry-run preview without gateway validation step

Note

--skip-gateway is an alias of --no-gateway

Step 4: Skip gateway for commands with long execution times

ai-cmd query "test dns leak" --skip-gateway --dry-run
Expected Output: Dry-run preview bypassing gateway timeout

Step 5: Display policy guard configuration used by shared gateway core

ai-cmd policy --json
Expected Output: JSON policy summary

Note

Confirms deny/allow behavior used during execution

Step 6: List tool metadata exposed through the shared core

ai-cmd tools --json
Expected Output: JSON tool catalog with safety tiers

Note

Use this to verify command availability before query execution
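The policy guard's allow/deny decision reduces to a membership check against the tool allowlist. A minimal sketch, where the allowlist contents and matching rule are assumptions (the real list comes from `ai-cmd policy --json`):

```shell
# Sketch: an allowlist membership check like the policy guard's.
# The allowlist contents here are assumed, not the shipped policy.
allowlist="tor-status dns-status net-check"

is_allowed() {
  case " $allowlist " in
    *" $1 "*) return 0 ;;   # tool is on the allowlist
    *)        return 1 ;;   # tool is denied
  esac
}

is_allowed tor-status && echo "allowed" || echo "denied"
```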


Environment Variables

Variable Description Default Values
RUST_LOG Set logging level info error
NO_COLOR Disable all colored output when set unset 1

Exit Codes

Code Description
0 Success
1 General error
2 Invalid arguments
3 Permission denied
4 Network error
5 File not found
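Calling scripts can branch on these documented codes after an invocation; a minimal dispatcher:

```shell
# Map the documented ai-cmd exit codes to messages in a calling script.
handle_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "general error" ;;
    2) echo "invalid arguments" ;;
    3) echo "permission denied" ;;
    4) echo "network error" ;;
    5) echo "file not found" ;;
    *) echo "unknown ($1)" ;;
  esac
}

handle_exit 3
```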