Language Model Configuration
Understanding and configuring Language Model plugins in ElizaOS
ElizaOS uses a plugin-based architecture for integrating different Language Model providers. This guide explains how to configure and use LLM plugins, including fallback mechanisms for embeddings and model registration.
Key Concepts
Model Types
ElizaOS supports three types of model operations:
- TEXT_GENERATION - Generating conversational responses
- EMBEDDING - Creating vector embeddings for memory and similarity search
- OBJECT_GENERATION - Structured output generation (JSON/XML)
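As a rough sketch, the three operation types can be pictured as different handlers behind one dispatch function (the `ModelType` union and `useModel` shape here are illustrative stand-ins, not the exact ElizaOS API):

```typescript
// Illustrative stand-ins for the three operation types described above.
type ModelType = "TEXT_GENERATION" | "EMBEDDING" | "OBJECT_GENERATION";

// Hypothetical dispatcher: routes each operation type to its handler.
const handlers: Record<ModelType, (input: string) => unknown> = {
  TEXT_GENERATION: (prompt) => `response to: ${prompt}`,   // conversational reply
  EMBEDDING: () => [0.1, 0.2, 0.3],                        // vector for similarity search
  OBJECT_GENERATION: () => ({ answer: 42 }),               // structured JSON output
};

function useModel(type: ModelType, input: string): unknown {
  return handlers[type](input);
}
```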
Plugin Capabilities
Not all LLM plugins support all model types:
| Plugin | Text Generation | Embeddings | Object Generation |
|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ❌ | ✅ |
| Google GenAI | ✅ | ✅ | ✅ |
| Ollama | ✅ | ✅ | ✅ |
| OpenRouter | ✅ | ❌ | ✅ |
| Local AI | ✅ | ✅ | ✅ |
Plugin Loading Order
The order in which plugins are loaded matters significantly. From the default character configuration:
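The gist of that configuration looks like the following sketch (plugin package names follow the `@elizaos/plugin-*` convention; the exact environment-variable checks are assumptions):

```typescript
// Conditional plugins array: text-only providers first, embedding-capable
// providers after them, Local AI only when no cloud keys are present.
export const character = {
  name: "Eliza",
  plugins: [
    ...(process.env.ANTHROPIC_API_KEY ? ["@elizaos/plugin-anthropic"] : []),
    ...(process.env.OPENROUTER_API_KEY ? ["@elizaos/plugin-openrouter"] : []),
    ...(process.env.OPENAI_API_KEY ? ["@elizaos/plugin-openai"] : []),
    ...(process.env.GOOGLE_GENERATIVE_AI_API_KEY ? ["@elizaos/plugin-google-genai"] : []),
    ...(process.env.OLLAMA_API_ENDPOINT ? ["@elizaos/plugin-ollama"] : []),
    ...(!process.env.OPENAI_API_KEY && !process.env.GOOGLE_GENERATIVE_AI_API_KEY
      ? ["@elizaos/plugin-local-ai"]
      : []),
  ],
};
```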
Understanding the Order
- Text-only plugins first - Anthropic and OpenRouter are loaded first for text generation
- Embedding-capable plugins last - These serve as fallbacks for embedding operations
- Local AI as ultimate fallback - Only loads when no cloud providers are configured
Model Registration
Each LLM plugin registers its models with the runtime during initialization:
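As a sketch, a plugin exposes one handler per model type it supports, and the runtime registers these at init time (the `models` field shape below is an assumption based on this pattern, not the exact plugin interface):

```typescript
// Hypothetical plugin shape: one async handler per supported model type.
const openaiPlugin = {
  name: "openai",
  models: {
    TEXT_GENERATION: async (_runtime: unknown, _params: { prompt: string }) => {
      // ...call the provider's chat completion endpoint...
      return "generated text";
    },
    EMBEDDING: async (_runtime: unknown, _params: { text: string }) => {
      // ...call the provider's embeddings endpoint...
      return [0.0, 0.1];
    },
  },
};
```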
Priority System
Models are selected based on:
- Explicit provider - If specified, uses that provider’s model
- Priority - Higher priority models are preferred
- Registration order - First registered wins for same priority
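The selection rules above can be sketched as a simplified stand-alone registry (not the runtime's actual code):

```typescript
interface Registration {
  provider: string;
  priority: number; // higher wins
  order: number;    // registration sequence; lower wins on ties
}

// Pick a model: explicit provider > priority > registration order.
function selectModel(regs: Registration[], provider?: string): Registration {
  if (provider) {
    const match = regs.find((r) => r.provider === provider);
    if (match) return match;
  }
  return [...regs].sort(
    (a, b) => b.priority - a.priority || a.order - b.order
  )[0];
}
```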
Embedding Fallback Strategy
Since not all plugins support embeddings, ElizaOS uses a fallback strategy:
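One way to picture the fallback: the runtime walks the loaded plugins in order and uses the first one that registered an EMBEDDING handler (a simplified sketch, not the actual implementation):

```typescript
interface PluginInfo {
  name: string;
  supports: string[]; // model types this plugin registered
}

// Return the first loaded plugin that can serve the requested model type.
function resolveProvider(plugins: PluginInfo[], type: string): string | undefined {
  return plugins.find((p) => p.supports.includes(type))?.name;
}
```

With a load order of Anthropic then OpenAI, TEXT_GENERATION resolves to Anthropic while EMBEDDING falls through to OpenAI.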
Common Patterns
Anthropic + OpenAI Fallback
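A character using Anthropic for conversation while OpenAI quietly serves embeddings might list its plugins like this (sketch of a character-file fragment):

```typescript
plugins: [
  "@elizaos/plugin-anthropic", // text + object generation
  "@elizaos/plugin-openai",    // fills the embedding gap
],
```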
OpenRouter + Local Embeddings
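To keep text generation in the cloud but embeddings on your own machine (sketch of a character-file fragment):

```typescript
plugins: [
  "@elizaos/plugin-openrouter", // cloud text generation, no embeddings
  "@elizaos/plugin-ollama",     // local embeddings, also an offline fallback
],
```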
Configuration
Environment Variables
Each plugin requires specific environment variables:
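A .env file covering the common providers might look like the following. The API-key variable names match each plugin's convention; the model-override variable names and the Ollama endpoint are assumptions, so check each plugin's README:

```shell
# OpenAI (text, embeddings, objects)
OPENAI_API_KEY=...
OPENAI_SMALL_MODEL=gpt-4o-mini  # override names are assumptions
OPENAI_LARGE_MODEL=gpt-4o

# Anthropic (text and objects only)
ANTHROPIC_API_KEY=...

# Google GenAI
GOOGLE_GENERATIVE_AI_API_KEY=...

# OpenRouter (text and objects only)
OPENROUTER_API_KEY=...

# Ollama (self-hosted endpoint)
OLLAMA_API_ENDPOINT=http://localhost:11434
```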
Important: The model names shown are examples. You can use any model available from each provider.
Character-Specific Secrets
You can also configure API keys per character:
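For example, a character file can carry its own secrets under settings (a sketch; the nesting follows the settings.secrets convention):

```json
{
  "name": "MyAgent",
  "settings": {
    "secrets": {
      "OPENAI_API_KEY": "sk-...",
      "ANTHROPIC_API_KEY": "sk-ant-..."
    }
  }
}
```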
Available Plugins
Cloud Providers
- OpenAI Plugin - Full-featured with all model types
- Anthropic Plugin - Claude models for text generation
- Google GenAI Plugin - Gemini models
- OpenRouter Plugin - Access to multiple providers
Local/Self-Hosted
- Ollama Plugin - Run models locally with Ollama
- Local AI Plugin - Fully offline operation
Best Practices
1. Always Configure Embeddings
Even if your primary model doesn’t support embeddings, always include a fallback:
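For instance, keep the text-only provider first and append one that registers an EMBEDDING handler (sketch of a character-file fragment):

```typescript
plugins: [
  "@elizaos/plugin-anthropic", // primary: text generation only
  "@elizaos/plugin-openai",    // fallback: provides EMBEDDING support
],
```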
2. Order Matters
Place your preferred providers first, but ensure embedding capability somewhere in the chain.
3. Test Your Configuration
Verify all model types work:
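A small smoke test can exercise each operation type against whatever runtime you have. The sketch below is self-contained, with a stubbed runtime standing in for the real one, and the `useModel` shape is an assumption:

```typescript
type ModelType = "TEXT_GENERATION" | "EMBEDDING" | "OBJECT_GENERATION";

interface RuntimeLike {
  useModel(type: ModelType, input: string): Promise<unknown>;
}

// Call every model type once and report which ones resolve.
async function smokeTest(runtime: RuntimeLike): Promise<Record<ModelType, boolean>> {
  const types: ModelType[] = ["TEXT_GENERATION", "EMBEDDING", "OBJECT_GENERATION"];
  const results = {} as Record<ModelType, boolean>;
  for (const type of types) {
    try {
      results[type] = (await runtime.useModel(type, "ping")) != null;
    } catch {
      results[type] = false; // e.g. "No model found for type EMBEDDING"
    }
  }
  return results;
}

// Stub runtime for demonstration: supports everything except EMBEDDING.
const stub: RuntimeLike = {
  async useModel(type, input) {
    if (type === "EMBEDDING") throw new Error("No model found for type EMBEDDING");
    return `ok: ${input}`;
  },
};
```

Run the same helper against your real runtime: any `false` entry points at a missing plugin in the chain.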
4. Monitor Costs
Different providers have different pricing. Consider:
- Using local models (Ollama) for development
- Mixing providers (e.g., OpenRouter for text, local for embeddings)
- Setting up usage alerts with your providers
Troubleshooting
“No model found for type EMBEDDING”
Your configured plugins don’t support embeddings. Add an embedding-capable plugin:
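For example, keep your text provider first and append an embedding-capable plugin after it (sketch of a character-file fragment):

```typescript
plugins: [
  "@elizaos/plugin-openrouter",
  "@elizaos/plugin-openai", // or "@elizaos/plugin-ollama" for local embeddings
],
```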
“Missing API Key”
Ensure your environment variables are set:
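A quick way to see which provider keys the process can actually read (assuming the key names follow the `*_API_KEY` pattern used above):

```shell
# Print the provider keys currently visible to the agent process
printenv | grep -E '_API_KEY' || echo "no API keys set"
```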
Models Not Loading
Check plugin initialization in logs:
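Starting the agent with verbose logging should show each plugin registering its models; the exact variable name is an assumption here, so check the CLI documentation:

```shell
LOG_LEVEL=debug elizaos start
```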
Migration from v0.x
In ElizaOS v0.x, models were configured directly in character files:
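For example, a v0.x character might have declared:

```json
{
  "name": "MyAgent",
  "modelProvider": "anthropic"
}
```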
The modelProvider field is now ignored. All model configuration happens through plugins.