# Local AI Plugin

Fully offline AI capabilities for ElizaOS.
## Features
- 100% offline - No internet connection required after the one-time model download
- Privacy-first - Data never leaves your machine
- No API keys - Zero configuration needed
- Multimodal - Text, embeddings, vision, and speech
## Installation
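The plugin is distributed as a standard ElizaOS package. Assuming the published package name `@elizaos/plugin-local-ai`, installation is a one-liner:

```bash
# With bun (ElizaOS's default package manager)
bun add @elizaos/plugin-local-ai

# Or with npm
npm install @elizaos/plugin-local-ai
```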
### Automatic Activation

Local AI serves as the ultimate fallback when no cloud provider is configured.
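One common wiring (a sketch, not the only option) lists cloud plugins conditionally and keeps the local plugin last, so it only answers when no cloud provider has loaded:

```typescript
// character.ts -- plugin order sketch; cloud plugins load only when their keys exist
export const character = {
  name: "Eliza",
  plugins: [
    ...(process.env.OPENAI_API_KEY ? ["@elizaos/plugin-openai"] : []),
    ...(process.env.ANTHROPIC_API_KEY ? ["@elizaos/plugin-anthropic"] : []),
    "@elizaos/plugin-local-ai", // last: the ultimate fallback
  ],
};
```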
## Configuration
### Environment Variables
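The variable names below are illustrative placeholders, not the plugin's confirmed keys; the point is that model choice and cache location are typically overridable through the environment:

```bash
# Hypothetical variable names -- check the plugin's README for the exact keys
LOCAL_SMALL_MODEL=llama-3.2-1b-q4.gguf   # model used for small/fast text calls
LOCAL_LARGE_MODEL=llama-3.1-8b-q4.gguf   # model used for larger text calls
MODELS_DIR=~/.eliza/models               # where downloaded models are cached
```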
### Character Configuration
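For an agent that should run fully offline, a minimal character file only needs the local plugin. A sketch in the usual character JSON shape:

```json
{
  "name": "OfflineAgent",
  "plugins": ["@elizaos/plugin-local-ai"]
}
```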
## Supported Operations
| Operation | Technology | Notes |
|---|---|---|
| `TEXT_GENERATION` | llama.cpp | Various model sizes |
| `EMBEDDING` | Local transformers | Sentence embeddings |
| `VISION` | Local vision models | Image description |
| `SPEECH` | Whisper + TTS | Transcription & synthesis |
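In agent code, these operations are reached through the runtime's model interface. A sketch using `runtime.useModel` with model type names from recent `@elizaos/core` releases (verify against your installed version):

```typescript
import { ModelType, type IAgentRuntime } from "@elizaos/core";

async function demo(runtime: IAgentRuntime) {
  // TEXT_GENERATION: served locally by llama.cpp
  const text = await runtime.useModel(ModelType.TEXT_SMALL, {
    prompt: "Summarize llama.cpp in one sentence.",
  });

  // EMBEDDING: sentence vector from the local transformer model
  const vector = await runtime.useModel(ModelType.TEXT_EMBEDDING, {
    text: "fully offline embeddings",
  });

  console.log(text, vector.length);
}
```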
## Model Management

The plugin automatically downloads required models on first use.
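Downloads happen once per model and are cached on disk, so after the first run the plugin operates with no network access. Expect the initial download of a medium text model to consume several gigabytes of disk space and a corresponding amount of time.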
## Available Models
### Text Generation
- Small: 1-3B parameter models
- Medium: 7B parameter models
- Large: 13B+ parameter models
### Embeddings
- Sentence transformers
- MiniLM variants
### Vision
- BLIP for image captioning
- CLIP for image understanding
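As a usage sketch, assuming your core version exposes an image-description model type (the exact parameter shape may differ per release):

```typescript
import { ModelType, type IAgentRuntime } from "@elizaos/core";

// Hypothetical invocation sketch: ask the local vision model to caption an image
async function describeImage(runtime: IAgentRuntime, imageUrl: string) {
  return runtime.useModel(ModelType.IMAGE_DESCRIPTION, { imageUrl });
}
```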
## Performance Optimization
### CPU Optimization
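The text path runs on llama.cpp, so the usual llama.cpp levers apply: match the worker thread count to your physical core count and prefer quantized GGUF model builds. Exact knobs depend on the plugin release; as a hypothetical illustration:

```bash
# Hypothetical variable name -- set threads to your physical core count
LLAMACPP_THREADS=8
```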
### Memory Management
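Memory footprint is dominated by model weights plus the context window. Rules of thumb that follow from the hardware table below: pick the smallest model tier that meets your quality bar, choose a lower-bit quantization when RAM is tight, and keep the context size modest, since the KV cache grows with it.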
## Hardware Requirements
| Feature | Minimum RAM | Recommended RAM |
|---|---|---|
| Text (Small) | 4 GB | 8 GB |
| Text (Medium) | 8 GB | 16 GB |
| Embeddings | 2 GB | 4 GB |
| Vision | 4 GB | 8 GB |
| All Features | 16 GB | 32 GB |
## Common Use Cases
1. Development Environment: build and test agents without API keys or usage costs
2. Privacy-Critical Applications: keep all inference and data on the local machine
3. Offline Deployment: run in air-gapped or intermittently connected environments
## Limitations
- Slower inference than cloud APIs
- Limited model selection
- Higher memory usage on the host machine
- Performance is CPU-bound on typical hardware
## Troubleshooting
### Model Download Issues
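If a first-run download fails, a few things to check (general guidance, not plugin-specific diagnostics):

- The initial download is the one step that needs an internet connection; verify connectivity and any proxy settings.
- Confirm there is enough free disk space; medium text models run to several gigabytes.
- Remove any partially downloaded files from the model cache directory and retry, so the plugin fetches a clean copy.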
### Performance Issues
- Use smaller models
- Use quantized model builds
- Reduce the context size
- Add more RAM or CPU cores