# Air-Gapped Deployment
Deploy ReductrAI with zero network egress. For financial services, healthcare, government, and defense.
Zero data leaves your network. Raw telemetry stays local. AI runs locally. No cloud connection required.
## Architecture Overview
```
+-----------------------------------------------------------------+
|                       YOUR INFRASTRUCTURE                       |
|                                                                 |
|   +------------------+        +------------------+              |
|   |    ReductrAI     |        |    Local LLM     |              |
|   |      Binary      |------->|  (Ollama/vLLM)   |              |
|   |                  |        |                  |              |
|   |  - Ingestion     |        |  - Runs locally  |              |
|   |  - DuckDB        |        |  - CPU or GPU    |              |
|   |  - Detection     |        |  - Any OpenAI-   |              |
|   |  - Dashboard     |        |    compatible    |              |
|   +------------------+        +------------------+              |
|            |                                                    |
|            |  License validated locally (signed JWT)            |
|            |                                                    |
|   +--------v---------+                                          |
|   |    Your Apps     |                                          |
|   |   (telemetry)    |                                          |
|   +------------------+                                          |
|                                                                 |
|  ============================================================   |
|                 NO OUTBOUND NETWORK CONNECTIONS                 |
+-----------------------------------------------------------------+
```
## Requirements
- ReductrAI binary (download once, copy to air-gapped network)
- Local LLM runtime - Ollama, vLLM, LocalAI, or any OpenAI-compatible server
- An open-weight model file (e.g. Llama 3.2, Mistral 7B, Qwen 2.5)
- Enterprise license (contact sales@reductrai.com)
- 8 GB+ RAM for typical 7B models (less for smaller quantized variants)
## Step 1: Obtain License
Contact sales@reductrai.com for an enterprise license. For air-gapped networks, we provide offline license validation.
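Offline validation works because the license is a signed token that can be checked without any call home. As an illustration only (the token below is a demo built in place, and the claim names `exp` and `plan` are assumptions, not ReductrAI's actual license schema), the claims inside a JWT-style license can be inspected locally:

```shell
# Build a demo JWT-shaped token (header.payload.signature) purely for
# illustration; a real ReductrAI license is issued by sales.
header=$(printf '%s' '{"alg":"RS256"}' | base64 -w0)
payload=$(printf '%s' '{"exp":1767225600,"plan":"enterprise"}' | base64 -w0)
token="$header.$payload.placeholder-signature"

# Inspect the claims locally -- no network required
printf '%s' "$token" | cut -d. -f2 | base64 -d
```

The signature itself is verified by the binary against an embedded public key; decoding the payload like this is only useful for checking which plan and expiry a license file carries.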
## Step 2: Download a Local LLM & Model

On an internet-connected machine, install Ollama and pull a model:

```bash
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model (choose based on your hardware)
ollama pull llama3.2

# Models are stored in ~/.ollama/models/
```
### Transfer to Air-Gapped Network

```bash
tar -czf reductrai-bundle.tar.gz \
  ~/.ollama /usr/local/bin/reductrai /usr/local/bin/ollama

# Transfer via USB/approved media to air-gapped machine
# On air-gapped machine:
tar -xzf reductrai-bundle.tar.gz -C /
```
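Removable media can silently corrupt large archives, so it is worth checksumming the bundle on both sides of the gap. A minimal sketch (the bundle here is a stand-in file created in place so the example is self-contained; substitute your real bundle):

```shell
# Stand-in for the real bundle, created only so this example runs as-is
printf 'demo' > reductrai-bundle.tar.gz

# On the connected machine: record a checksum alongside the bundle
sha256sum reductrai-bundle.tar.gz > reductrai-bundle.sha256

# Copy both files across the air gap, then on the target machine:
sha256sum -c reductrai-bundle.sha256   # prints "reductrai-bundle.tar.gz: OK"
```

If the check fails, re-copy the bundle before extracting; a truncated model file will only surface later as cryptic LLM load errors.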
## Step 3: Install ReductrAI Binary

Download the binary on an internet-connected machine, then transfer it to the air-gapped network:

```bash
wget https://github.com/reductrai/reductrai/releases/latest/download/reductrai-linux-amd64

# Transfer to air-gapped machine and install
chmod +x reductrai-linux-amd64
mv reductrai-linux-amd64 /usr/local/bin/reductrai
```
## Step 4: Configure ReductrAI

Set environment variables for air-gapped operation:

```bash
export REDUCTRAI_LICENSE=your-license-key

# Point at your local LLM
export REDUCTRAI_LLM_ENDPOINT=http://localhost:11434/v1
export REDUCTRAI_LLM_MODEL=llama3.2

# Standard settings
export REDUCTRAI_PORT=8080
export REDUCTRAI_DATA_DIR=~/.reductrai
```
As long as `REDUCTRAI_LLM_ENDPOINT` points at an internal address, no traffic leaves your network.
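A quick way to back that up before starting is to check that the configured endpoint resolves to a loopback or private address. A hedged sketch (the pattern list covers only the common loopback and RFC 1918 ranges; extend it for your addressing plan):

```shell
# Value from the configuration step above
export REDUCTRAI_LLM_ENDPOINT=http://localhost:11434/v1

# Extract the hostname from the endpoint URL
host=$(printf '%s' "$REDUCTRAI_LLM_ENDPOINT" | sed -E 's#^https?://##; s#[:/].*##')

# Warn if it does not look like a loopback or private address
case "$host" in
  localhost|127.*|10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)
    echo "OK: $host is internal" ;;
  *)
    echo "WARNING: $host may be external" ;;
esac
```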
## Step 5: Start ReductrAI

```bash
# Start the local LLM runtime
ollama serve &

# Start ReductrAI
reductrai start --local

# Verify
reductrai status
```
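Because `reductrai start` runs right after `ollama serve &`, there is a brief window where the LLM endpoint is not yet answering. One way to order the startup, sketched as a small polling helper (the helper itself is an assumption for illustration, not a built-in ReductrAI feature):

```shell
# Poll an HTTP URL until it answers or a timeout expires
wait_for() {  # usage: wait_for <url> <timeout-seconds>
  url=$1; timeout=${2:-30}; i=0
  while [ "$i" -lt "$timeout" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then return 0; fi
    sleep 1; i=$((i + 1))
  done
  return 1
}

# Example ordering (commented out here):
#   ollama serve &
#   wait_for http://localhost:11434/v1/models 30 && reductrai start --local
```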
## Step 6: Access Dashboard

The dashboard is served directly by the ReductrAI binary. In air-gapped mode, the Copilot dashboard runs on port 8082 with no external cloud connection required.
## Recommended Models for Air-Gapped
| Model | Size | RAM Required | Notes |
|---|---|---|---|
| Llama 3.1 8B (Recommended) | 4.7 GB | 8 GB | Strong reasoning, open weights, runs well on CPU |
| Qwen 2.5 7B | 4.4 GB | 8 GB | Excellent at structured output / tool use |
| Mistral 7B Instruct | 4.1 GB | 8 GB | Fast, permissive license |
| Llama 3.2 3B (quantized) | 2.0 GB | 4 GB | Smallest footprint; acceptable for summarization |
## Security Considerations

- Binary is obfuscated (garble) to protect proprietary code
- License file uses cryptographic signatures and cannot be forged
- All data stays in `~/.reductrai`; audit and control access to that directory
- Dashboard can be restricted to localhost or an internal network
- No telemetry or analytics sent anywhere
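To verify the no-egress property at runtime rather than take it on faith, you can audit established connections on the host. A hedged sketch using `ss` (matching the process by name is an approximation; use your organization's standard network auditing where one exists):

```shell
# List established TCP connections attributed to reductrai; on a correctly
# configured air-gapped host this should show only loopback traffic to the
# local LLM, or nothing at all.
ss -tnp state established 2>/dev/null | grep -i reductrai \
  || echo "no reductrai connections found"
```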
## License Renewal
Enterprise licenses are issued for a fixed term (typically 1 year). Before expiration, contact sales@reductrai.com for renewal.
## Troubleshooting
**License validation failed:** Verify your license key is correct. Contact sales@reductrai.com for enterprise air-gapped licensing.

**LLM not reachable:** Confirm your local LLM (Ollama, vLLM, etc.) is running and listening on the address in `REDUCTRAI_LLM_ENDPOINT`. Test with `curl "$REDUCTRAI_LLM_ENDPOINT/models"`.
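The reachability check can be extended to also confirm the configured model is actually present on the runtime (the endpoint and model values mirror the Step 4 configuration; the check degrades gracefully when the LLM is down):

```shell
# Values from Step 4; adjust to your configuration
export REDUCTRAI_LLM_ENDPOINT=http://localhost:11434/v1
export REDUCTRAI_LLM_MODEL=llama3.2

# /models is part of the OpenAI-compatible API that Ollama and vLLM expose
if curl -sf "$REDUCTRAI_LLM_ENDPOINT/models" | grep -q "$REDUCTRAI_LLM_MODEL"; then
  echo "model available"
else
  echo "model missing or endpoint down: try 'ollama pull $REDUCTRAI_LLM_MODEL'"
fi
```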
## Need Help?
For enterprise and air-gapped deployments, contact our team:
- Email: sales@reductrai.com
- Security: security@reductrai.com