Configuration

All environment variables and configuration options for the ReductrAI binary.

Core Settings

| Variable | Description | Default |
| --- | --- | --- |
| `REDUCTRAI_LICENSE` | **Required.** Your license key | (none) |
| `REDUCTRAI_PORT` | Optional. Telemetry ingestion port | `8080` |
| `REDUCTRAI_DATA_DIR` | Optional. Storage location | `~/.reductrai` |

LLM Configuration

Configure which LLM provider to use for AI investigations. See LLM Providers for detailed setup guides.

| Variable | Description | Default |
| --- | --- | --- |
| `LLM_ENDPOINT` | Optional. LLM API endpoint (OpenAI-compatible) | Uses AI Relay |
| `LLM_API_KEY` | Optional. API key for LLM provider | (none) |
| `LLM_MODEL` | Optional. Model name to use | `llama3.2` |

If LLM_ENDPOINT is not set, ReductrAI uses the managed AI Relay (included in your subscription).
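"OpenAI-compatible" means the endpoint accepts the standard `/chat/completions` route and request shape. A minimal sketch of the request ReductrAI-style clients would send, reading the same variables as the table above (the sample prompt is illustrative, and the defaults mirror the documented ones):

```python
import json
import os

# Same variables as the table above; defaults are the documented ones.
endpoint = os.environ.get("LLM_ENDPOINT", "http://localhost:11434/v1")
api_key = os.environ.get("LLM_API_KEY", "")
model = os.environ.get("LLM_MODEL", "llama3.2")

# An OpenAI-compatible server exposes /chat/completions under the base URL.
url = endpoint.rstrip("/") + "/chat/completions"

headers = {"Content-Type": "application/json"}
if api_key:
    # API key, when set, travels as a standard Bearer token.
    headers["Authorization"] = f"Bearer {api_key}"

payload = {
    "model": model,
    "messages": [{"role": "user", "content": "ping"}],  # illustrative prompt
}

print(url)
print(json.dumps(payload))
```

Any server that answers this shape (Ollama, vLLM, LM Studio, or OpenAI itself) should work as an `LLM_ENDPOINT`.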

Autonomy Levels

Autonomy level is configured in the dashboard:

| Level | Behavior | Availability |
| --- | --- | --- |
| L1 | AI detects anomalies, human investigates | All tiers |
| L2 | AI finds root cause, human decides & executes | All tiers |
| L3 | AI proposes remediation, human approves (default) | All tiers |
| L4 | AI executes known patterns automatically | BUSINESS+ |

See Autonomy Levels for details on each level.
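The tier gating in the table above can be sketched as a simple lookup. This is a hypothetical illustration, not part of the product API; the level names, behaviors, and tier order come from the tables in this page:

```python
# Hypothetical model of the availability column above.
AUTONOMY_LEVELS = {
    "L1": "FREE",      # all tiers: AI detects, human investigates
    "L2": "FREE",      # all tiers: AI finds root cause
    "L3": "FREE",      # all tiers: AI proposes, human approves (default)
    "L4": "BUSINESS",  # BUSINESS+: AI executes known patterns
}
TIER_ORDER = ["FREE", "PRO", "BUSINESS", "ENTERPRISE"]

def available_levels(tier: str) -> list[str]:
    """Return the autonomy levels a given license tier can enable."""
    rank = TIER_ORDER.index(tier)
    return [level for level, min_tier in AUTONOMY_LEVELS.items()
            if rank >= TIER_ORDER.index(min_tier)]

print(available_levels("PRO"))       # L1 through L3
print(available_levels("BUSINESS"))  # all four levels
```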

Storage & Retention

Data retention is set automatically by your license tier:

| Tier | Retention | Compression |
| --- | --- | --- |
| FREE | 30 days | 91-95% automatic |
| PRO | 90 days | 91-95% automatic |
| BUSINESS | 180 days | 91-95% automatic |
| ENTERPRISE | Custom (S3, GCS, Azure) | 91-95% automatic |

Data is automatically tiered: HOT (<1 hour) → WARM (compressed) → COLD (archived). Enterprise customers can configure custom archive storage with unlimited retention.
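At the quoted 91-95% compression ratio, stored size works out to 5-9% of raw ingest. A quick back-of-the-envelope check (the 100 GB figure is illustrative, not a product benchmark):

```python
raw_gb = 100.0  # illustrative raw daily ingest

# 91% and 95% are the documented bounds of the compression range.
for ratio in (0.91, 0.95):
    stored_gb = raw_gb * (1 - ratio)
    print(f"{ratio:.0%} compression: {raw_gb:.0f} GB raw -> {stored_gb:.1f} GB stored")
```

So 100 GB of raw telemetry lands between roughly 5 and 9 GB on disk.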

Example Configuration

```bash
# Core (required)
REDUCTRAI_LICENSE=RF-xxx-xxx-xxx

# Optional overrides
REDUCTRAI_PORT=8080
REDUCTRAI_DATA_DIR=~/.reductrai

# Self-hosted LLM (optional - for air-gapped deployments)
LLM_ENDPOINT=http://localhost:11434/v1
LLM_MODEL=llama3.2
```

CLI Commands

```bash
# Start the agent
reductrai start

# Stop the agent
reductrai stop

# Check status
reductrai status

# Query local storage
reductrai query "SELECT * FROM metrics LIMIT 10"

# Show database schema and example queries
reductrai schema

# Show version
reductrai version
```