ai-learner
AI learning engine for continuous improvement and performance analysis
Version: 9.0.1 | Size: 2.7MB | Author: Warith Al Maawali warith@digi77.com
License: LicenseRef-Kodachi-SAN-1.0 | Website: https://kodachi.cloud
File Information
| Property | Value |
|---|---|
| Binary Name | ai-learner |
| Version | 9.0.1 |
| Build Date | 2026-02-26T08:01:54.443899302Z |
| Rust Version | 1.82.0 |
| File Size | 2.7MB |
SHA256 Checksum
Features
- Feedback aggregation and analysis
- Incremental learning with convergence detection
- Performance tracking and trend analysis
- Multi-format report generation (JSON, Markdown, HTML)
Security Features
| Feature | Description |
|---|---|
| Input validation | All inputs are validated and sanitized |
| Rate limiting | Built-in rate limiting for network operations |
| Authentication | Secure authentication with certificate pinning |
| Encryption | TLS 1.3 for all network communications |
System Requirements
| Requirement | Value |
|---|---|
| OS | Linux (Debian-based) |
| Privileges | root/sudo for system operations |
| Dependencies | OpenSSL, libcurl |
Global Options
| Flag | Description |
|---|---|
| -h, --help | Print help information |
| -v, --version | Print version information |
| -n, --info | Display detailed information |
| -e, --examples | Show usage examples |
| --json | Output in JSON format |
| --json-pretty | Pretty-print JSON output with indentation |
| --json-human | Enhanced JSON output with improved formatting (like jq) |
| --verbose | Enable verbose output |
| --quiet | Suppress non-essential output |
| --no-color | Disable colored output |
| --config <FILE> | Use custom configuration file |
| --timeout <SECS> | Set timeout in seconds (default: 30) |
| --retry <COUNT> | Retry attempts (default: 3) |
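Because `--json`, `--timeout`, and `--retry` apply to every subcommand, automation scripts can centralize them. A minimal sketch, assuming `ai-learner` is on `PATH` (the helper name `learner_cmd` is ours, not part of the tool):

```shell
# Build a full command line from a subcommand plus the documented
# global options, defaulting to --timeout 30 and --retry 3.
learner_cmd() {
    printf 'ai-learner %s --json --timeout %s --retry %s' \
        "$1" "${2:-30}" "${3:-3}"
}

# Invoke only when the binary is actually installed.
if command -v ai-learner >/dev/null 2>&1; then
    eval "$(learner_cmd status 10 5)"
fi
```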
Commands
Analysis Operations
analyze
Analyze model performance and trends
Learning Operations
learn
Run a learning cycle to improve the model based on feedback
Reporting Operations
report
Generate comprehensive performance reports
Status Operations
status
Show ai-learner status, database health, and activity metrics
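The four subcommands all accept the global flags, so a quick smoke test can loop over them. A sketch assuming each subcommand accepts `--json` (the scenarios below suggest it does):

```shell
# The documented subcommands.
SUBCOMMANDS="analyze learn report status"

# smoke_test runs each subcommand with --json; it is a silent no-op
# (exit 0) when ai-learner is not installed.
smoke_test() {
    command -v ai-learner >/dev/null 2>&1 || return 0
    for sub in $SUBCOMMANDS; do
        ai-learner "$sub" --json || echo "WARN: $sub failed" >&2
    done
}
smoke_test
```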
Operational Scenarios
Scenario-oriented workflows generated from the binary's built-in -e --json examples.
Scenario 1: Basic Learning Operations
Run learning cycles and update models based on feedback
Step 1: Run a full learning cycle on all feedback
Expected Output: Learning statistics showing improvements
Note: Processes all feedback since last run
Step 2: Run incremental learning on new feedback only
Expected Output: Quick learning update with delta statistics
Step 3: Get learning results in JSON format
Expected Output: JSON response with detailed learning metrics
Note: Useful for automated processing
Step 4: Learn with a custom learning rate
Expected Output: Learning results with adjusted convergence speed
Note: Lower rates give more stable convergence
Step 5: Learn with a minimum feedback threshold
Expected Output: Learning is skipped if insufficient feedback is available
Note: Ensures statistical significance
Step 6: Incremental learning with JSON output
Expected Output: JSON with incremental learning delta statistics
Note: Combines fast incremental mode with structured output
Step 7: Full-parameter learning with JSON output
Expected Output: JSON with custom-rate and threshold learning metrics
Note: All learning parameters combined for fine-tuned runs
Step 8: Generate a signed AI policy file after learning
Expected Output: Learning cycle runs and ai-policy.json is generated in results/
Note: The policy contains intent thresholds, the tool allowlist, and the risk mode
Step 9: Generate the policy with JSON output
Expected Output: JSON learning results and the signed policy file written
Note: The policy is signed with an SHA-256 HMAC to prevent tampering
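Steps 1-9 can be strung together into a nightly job. The sketch below is hedged: `--incremental` and `--generate-policy` are flag names inferred from the step descriptions above, not switches confirmed by the binary's help output.

```shell
# Nightly learning sketch: incremental pass first, then a policy
# regeneration. Exits quietly when ai-learner is unavailable.
nightly_learn() {
    command -v ai-learner >/dev/null 2>&1 || { echo skipped; return 0; }
    ai-learner learn --incremental --json &&
        ai-learner learn --generate-policy --json
}
nightly_learn
```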
Scenario 2: Performance Analysis
Analyze model accuracy and performance trends
Step 1: Analyze performance over the last week
Expected Output: Accuracy metrics and trend analysis
Step 2: Get accuracy analysis in JSON format
Expected Output: JSON with a per-intent accuracy breakdown
Note: Supported metrics: accuracy, confidence, f1-score
Step 3: Generate learning-curve visualization data
Expected Output: Time-series data showing accuracy improvement
Note: Useful for identifying plateaus
Step 4: Analyze confidence metrics
Expected Output: Confidence score distribution and statistics
Note: Shows prediction certainty levels
Step 5: Analyze the last 30 days as JSON
Expected Output: JSON with monthly performance trends
Note: Useful for monthly reporting
Step 6: Analyze all-time data
Expected Output: Complete historical performance analysis
Note: Shows long-term improvement trends
Step 7: Analyze the F1-score metric with JSON output
Expected Output: JSON with an F1-score breakdown per intent
Note: F1-score balances precision and recall
Step 8: Learning-curve data as JSON
Expected Output: JSON time series of accuracy improvement
Note: Structured data for visualization tools
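A recurring analysis job might cycle through the three metrics listed in Step 2. The `--metric` and `--period` flag names are assumptions drawn from the step descriptions, not documented switches:

```shell
# Pull each supported metric for the last week; no-op without the binary.
analyze_metrics() {
    command -v ai-learner >/dev/null 2>&1 || { echo skipped; return 0; }
    for metric in accuracy confidence f1-score; do
        ai-learner analyze --metric "$metric" --period week --json
    done
}
analyze_metrics
```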
Scenario 3: AI Tier Performance
Analyze learning metrics per AI engine tier (TF-IDF, ONNX, Mistral.rs, GenAI/Ollama, Legacy LLM, Claude)
Step 1: Show the accuracy breakdown across all AI tiers
Expected Output: JSON with per-tier accuracy metrics
Note: Compares tier performance for optimization decisions
Step 2: Weekly tier performance trends
Expected Output: JSON with weekly metrics including new tier data
Note: Tracks mistral.rs and GenAI tier improvement over time
Step 3: ONNX vs. LLM routing breakdown
Expected Output: JSON with fast-path vs. slow-path query statistics
Note: Shows what percentage of queries use the ONNX fast path vs. the LLM
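Tier comparisons are easiest to consume via the documented `--json-pretty` global flag. The `--tiers` flag here is an assumption based on Step 1, not a confirmed switch:

```shell
# Per-tier accuracy snapshot, pretty-printed for manual review.
tier_snapshot() {
    command -v ai-learner >/dev/null 2>&1 || { echo skipped; return 0; }
    ai-learner analyze --tiers --json-pretty
}
tier_snapshot
```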
Scenario 4: Report Generation
Generate comprehensive reports on learning performance
Step 1: Generate a summary report in JSON format
Expected Output: JSON report with all performance metrics
Step 2: Generate a Markdown report
Expected Output: Formatted Markdown document with tables and graphs
Note: Great for documentation
Step 3: Generate an HTML report with visualizations
Expected Output: Interactive HTML report saved to a file
Note: Includes charts and graphs
Step 4: Generate a weekly report
Expected Output: JSON report covering the last 7 days of activity
Note: Useful for weekly reviews
Step 5: Full report in Markdown-as-JSON format
Expected Output: Complete historical report in Markdown format, wrapped in JSON
Note: Combines all-time data with Markdown formatting
Step 6: Generate an HTML report to a file
Expected Output: HTML report with charts saved to results/report.html
Note: File output stays within the execution folder
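Steps 1-3 cover the three output formats, so a single loop can refresh them all. `--format` and `--output` are assumed flag names based on this scenario; the results/ path follows Step 6:

```shell
# Regenerate all three report formats under results/.
gen_reports() {
    command -v ai-learner >/dev/null 2>&1 || { echo skipped; return 0; }
    mkdir -p results
    for fmt in json markdown html; do
        ai-learner report --format "$fmt" --output "results/report.$fmt"
    done
}
gen_reports
```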
Scenario 5: Integration Workflow
ai-learner commands commonly used alongside training/query cycles
Step 1: Check learner health before running learning jobs
Expected Output: JSON status with metrics and last-run timestamps
Note: Recommended first step in automation pipelines
Step 2: Inspect recent feedback trends before retraining
Expected Output: JSON analysis of recent learner performance
Note: Useful for deciding whether to run learn now
Step 3: Generate a weekly Markdown report for training review
Expected Output: JSON envelope containing the Markdown report
Note: Good handoff artifact before ai-trainer operations
Step 4: Run incremental learning with existing feedback
Expected Output: Incremental learning summary and convergence data
Note: Requires authentication
Step 5: Learn and regenerate policy artifacts in one step
Expected Output: JSON with learning results and policy output status
Note: Requires authentication
Step 6: Generate a long-range HTML report for model tuning
Expected Output: HTML report written to results/learning-report.html
Note: Use before planning major trainer updates
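These steps compose into a gate-then-learn-then-report pipeline. Only the `status`, `learn`, and `report` subcommands and the global `--json` flag are documented; `--incremental` is an assumption carried over from Step 4:

```shell
# Integration pipeline sketch: abort early if the health gate fails.
pipeline() {
    command -v ai-learner >/dev/null 2>&1 || { echo skipped; return 0; }
    ai-learner status --json || return 1      # Step 1: health gate
    ai-learner learn --incremental --json &&  # Step 4: incremental learn
        ai-learner report --json              # Step 3: review artifact
}
pipeline
```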
Environment Variables
| Variable | Description | Default | Values |
|---|---|---|---|
| RUST_LOG | Set logging level | info | error, warn, info, debug, trace |
| NO_COLOR | Disable all colored output when set | unset | 1 |
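Both variables can be scoped to a single invocation rather than exported shell-wide:

```shell
# One-off debug run: verbose logs, no ANSI color. The assignments
# apply only to this command, not to the surrounding shell.
debug_run() {
    command -v ai-learner >/dev/null 2>&1 || { echo skipped; return 0; }
    RUST_LOG=debug NO_COLOR=1 ai-learner status
}
debug_run
```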
Exit Codes
| Code | Description |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | Permission denied |
| 4 | Network error |
| 5 | File not found |
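Scripts can branch on these codes instead of parsing error text. A sketch mapping the table above to messages (the helper name `explain_exit` is ours):

```shell
# Translate an ai-learner exit code into a human-readable label,
# following the documented exit-code table.
explain_exit() {
    case "$1" in
        0) echo "success" ;;
        1) echo "general error" ;;
        2) echo "invalid arguments" ;;
        3) echo "permission denied" ;;
        4) echo "network error" ;;
        5) echo "file not found" ;;
        *) echo "unknown exit code: $1" ;;
    esac
}

# Example: run status quietly and report the outcome.
if command -v ai-learner >/dev/null 2>&1; then
    ai-learner status --quiet
    explain_exit $?
fi
```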