Configuration
All environment variables and configuration options for the ReductrAI binary.
Core Settings
| Variable | Description | Default |
|---|---|---|
| `REDUCTRAI_LICENSE` (required) | Your license key | — |
| `REDUCTRAI_PORT` (optional) | Telemetry ingestion port | `8080` |
| `REDUCTRAI_DATA_DIR` (optional) | Storage location | `~/.reductrai` |
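A minimal shell sketch of how these settings resolve, assuming the defaults in the table above; the warning message and echo line are illustrative, not actual agent output:

```shell
# Resolve core settings with their documented defaults.
# REDUCTRAI_LICENSE has no default, so warn when it is missing.
if [ -z "${REDUCTRAI_LICENSE:-}" ]; then
  echo "warning: REDUCTRAI_LICENSE is required" >&2
fi
REDUCTRAI_PORT="${REDUCTRAI_PORT:-8080}"
REDUCTRAI_DATA_DIR="${REDUCTRAI_DATA_DIR:-$HOME/.reductrai}"
echo "listening on :$REDUCTRAI_PORT, storing in $REDUCTRAI_DATA_DIR"
```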
LLM Configuration
Configure which LLM provider to use for AI investigations. See LLM Providers for detailed setup guides.
| Variable | Description | Default |
|---|---|---|
| `LLM_ENDPOINT` (optional) | LLM API endpoint (OpenAI-compatible) | Uses AI Relay |
| `LLM_API_KEY` (optional) | API key for LLM provider | — |
| `LLM_MODEL` (optional) | Model name to use | `llama3.2` |
If `LLM_ENDPOINT` is not set, ReductrAI uses the managed AI Relay (included in your subscription).
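The fallback can be sketched as follows; the function is illustrative, not part of the real CLI, and the endpoint shown is the example Ollama address from this page:

```shell
# Sketch of the endpoint fallback: with no LLM_ENDPOINT, the agent uses
# the managed AI Relay; otherwise it talks to the OpenAI-compatible
# endpoint you point it at, with llama3.2 as the default model.
resolve_llm_target() {
  if [ -z "${LLM_ENDPOINT:-}" ]; then
    echo "managed AI Relay"
  else
    echo "$LLM_ENDPOINT (model: ${LLM_MODEL:-llama3.2})"
  fi
}

unset LLM_ENDPOINT
resolve_llm_target    # managed AI Relay
LLM_ENDPOINT=http://localhost:11434/v1
resolve_llm_target    # http://localhost:11434/v1 (model: llama3.2)
```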
Autonomy Levels
Autonomy level is configured in the dashboard:
| Level | Behavior | Availability |
|---|---|---|
| L1 | AI detects anomalies, human investigates | All tiers |
| L2 | AI finds root cause, human decides & executes | All tiers |
| L3 | AI proposes remediation, human approves (DEFAULT) | All tiers |
| L4 | AI executes known patterns automatically | BUSINESS+ |
See Autonomy Levels for details on each level.
Storage & Retention
Data retention is automatic based on your license tier:
| Tier | Retention | Compression |
|---|---|---|
| FREE | 30 days | 91-95% automatic |
| PRO | 90 days | 91-95% automatic |
| BUSINESS | 180 days | 91-95% automatic |
| ENTERPRISE | Custom (S3, GCS, Azure) | 91-95% automatic |
Data is automatically tiered: HOT (<1 hour) → WARM (compressed) → COLD (archived). Enterprise customers can configure custom archive storage with unlimited retention.
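The tiering rule can be sketched as a simple age check. Only the HOT cutoff (under 1 hour) comes from this page; the 24-hour WARM/COLD boundary below is an assumed value for illustration:

```shell
# Classify a record by age in minutes into the documented tiers.
# HOT < 60 minutes (from the docs); WARM < 1440 minutes (assumed);
# everything older is COLD (archived).
tier_for_age() {
  age_minutes="$1"
  if [ "$age_minutes" -lt 60 ]; then
    echo HOT
  elif [ "$age_minutes" -lt 1440 ]; then   # assumed boundary
    echo WARM
  else
    echo COLD
  fi
}

tier_for_age 30    # HOT
```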
Example Configuration
```bash
# Core (required)
REDUCTRAI_LICENSE=RF-xxx-xxx-xxx

# Optional overrides
REDUCTRAI_PORT=8080
REDUCTRAI_DATA_DIR=~/.reductrai

# Self-hosted LLM (optional - for air-gapped deployments)
LLM_ENDPOINT=http://localhost:11434/v1
LLM_MODEL=llama3.2
```
CLI Commands
```bash
# Start the agent
reductrai start

# Stop the agent
reductrai stop

# Check status
reductrai status

# Query local storage
reductrai query "SELECT * FROM metrics LIMIT 10"

# Show database schema and example queries
reductrai schema

# Show version
reductrai version
```