# Provider Integration Content for Landing Page

Suggested Section: Add this content to https://ruv.io/agentic-flow as a new "Provider Support" section.

## 🔌 Multi-Provider LLM Support
agentic-flow works with multiple LLM providers through an intelligent proxy architecture that automatically converts requests to each provider's format while maintaining full compatibility with the Claude Agent SDK.
### Supported Providers
| Provider | Models | Cost vs Claude | Speed | Setup Required |
|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet, Opus, Haiku | Baseline | Fast | ANTHROPIC_API_KEY |
| Gemini (NEW v1.9.3) | Gemini 2.0 Flash, Pro | 70% cheaper | 2-5x faster | GOOGLE_GEMINI_API_KEY |
| OpenRouter | 100+ models (GPT-4, Llama, DeepSeek) | Varies | Varies | OPENROUTER_API_KEY |
| ONNX | Phi-4 (runs locally) | 100% FREE | Medium | No key needed |
### Quick Provider Examples

```bash
# Anthropic (default, highest quality)
npx agentic-flow --agent coder --task "Build REST API"

# Gemini (fastest, cheapest - NEW in v1.9.3!)
npx agentic-flow --agent coder --task "Build REST API" --provider gemini

# OpenRouter (access 100+ models)
npx agentic-flow --agent coder --task "Build REST API" --provider openrouter

# ONNX (100% free, runs locally)
npx agentic-flow --agent coder --task "Build REST API" --provider onnx
```
### Gemini Provider - Fully Functional (v1.9.3)
The Gemini provider is now production-ready with complete streaming support. Three critical bugs were fixed in v1.9.3:
- ✅ Model Selection - correctly uses Gemini models instead of Claude model names
- ✅ Streaming Responses - full Server-Sent Events (SSE) support via the `alt=sse` query parameter
- ✅ Provider Selection - respects the `--provider` flag and no longer auto-selects the wrong provider
Gemini Benefits:
- ⚡ 2-5x faster response times compared to Anthropic
- 💰 70% cost reduction - great for high-volume tasks
- 🎯 Excellent for code generation, code analysis, refactoring, simple tasks
- ✅ Full streaming support for real-time responses
- 🔄 Zero code changes - drop-in replacement via proxy architecture
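The streaming support mentioned above arrives as Server-Sent Events. A minimal sketch of parsing one SSE chunk into its data payloads (the `data:` prefix follows the SSE standard; the `[DONE]` sentinel is an assumption borrowed from common streaming-API conventions, not a confirmed detail of the proxy):

```typescript
// Parse one SSE chunk into its "data:" payloads, skipping the
// conventional "[DONE]" end-of-stream sentinel.
function parseSseChunk(chunk: string): string[] {
  const payloads: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (trimmed.startsWith("data: ")) {
      const payload = trimmed.slice("data: ".length);
      if (payload !== "[DONE]") payloads.push(payload);
    }
  }
  return payloads;
}
```

A client would call this on each chunk read from the response body and JSON-parse the returned payloads.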
Setup:

```bash
# Get your free Gemini API key from https://ai.google.dev/
export GOOGLE_GEMINI_API_KEY=your_key_here

# Use Gemini with streaming
npx agentic-flow --agent coder --task "Create function" --provider gemini --stream
```
### How Provider Proxies Work
Intelligent Request Conversion:

1. User makes a request in Claude Agent SDK format (Anthropic Messages API)
2. The proxy intercepts it and converts it to the provider-specific format
3. The provider responds in its native format (Gemini, OpenRouter, etc.)
4. The proxy converts the response back to Claude Agent SDK format
5. The user receives a standard response; no code changes needed
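The request-conversion step can be sketched roughly as follows. Field names on both sides follow the public Anthropic Messages and Gemini `generateContent` schemas; the real conversion lives in `src/proxy/anthropic-to-gemini.ts` and may differ, so treat this as an illustration, not the proxy's actual code:

```typescript
// Simplified shapes of the two request formats (text-only).
interface AnthropicRequest {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
}

interface GeminiRequest {
  contents: { role: "user" | "model"; parts: { text: string }[] }[];
  systemInstruction?: { parts: { text: string }[] };
  generationConfig: { maxOutputTokens: number };
}

// Convert an Anthropic-style request into a Gemini-style one:
// "assistant" becomes "model", the system prompt moves to
// systemInstruction, and max_tokens maps to maxOutputTokens.
function toGemini(req: AnthropicRequest): GeminiRequest {
  return {
    contents: req.messages.map((m) => ({
      role: m.role === "assistant" ? "model" : "user",
      parts: [{ text: m.content }],
    })),
    ...(req.system
      ? { systemInstruction: { parts: [{ text: req.system }] } }
      : {}),
    generationConfig: { maxOutputTokens: req.max_tokens },
  };
}
```

The response travels the reverse path: the proxy reshapes Gemini's candidates back into an Anthropic-style message before handing it to the SDK.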
Architecture Benefits:
- ✅ Single codebase works with all providers
- ✅ Provider-agnostic agent development
- ✅ Easy switching between providers via CLI flags
- ✅ Cost optimization - use cheap providers for simple tasks, Claude for complex ones
- ✅ Automatic failover - switch providers if one is down
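The failover idea above can be sketched as a wrapper that tries providers in preference order and moves on when one fails. `withFailover` and its callback are hypothetical helpers for illustration, not part of agentic-flow's API:

```typescript
// Try each provider in order; return the first successful result,
// or rethrow the last error if every provider fails.
async function withFailover<T>(
  providers: string[],
  call: (provider: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await call(provider);
    } catch (err) {
      lastError = err; // provider down or rate-limited: try the next one
    }
  }
  throw lastError;
}
```

In practice the callback would issue the proxied request for the given provider; the wrapper itself stays provider-agnostic.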
### Configuration

Environment Variables:

```bash
# Provider API keys (set only what you need)
export ANTHROPIC_API_KEY=sk-ant-...      # Anthropic Claude
export GOOGLE_GEMINI_API_KEY=AIza...     # Google Gemini
export OPENROUTER_API_KEY=sk-or-...      # OpenRouter

# Model selection (optional)
export COMPLETION_MODEL=claude-sonnet-4-5-20250929  # Anthropic model
export REASONING_MODEL=gemini-2.0-flash-exp         # Gemini model
```
CLI Flags:

```bash
--provider <name>    # anthropic, gemini, openrouter, onnx
--model <model-id>   # Override default model
--stream             # Enable streaming responses
```
### Cost Optimization Strategy

Smart Provider Selection:

```bash
# Simple tasks → Use Gemini (70% cheaper, 2-5x faster)
npx agentic-flow --agent coder --task "Add comments" --provider gemini

# Complex reasoning → Use Claude (highest quality)
npx agentic-flow --agent researcher --task "Analyze architecture" --provider anthropic

# Experimentation → Use ONNX (100% free)
npx agentic-flow --agent tester --task "Generate tests" --provider onnx

# Access specific models → Use OpenRouter
npx agentic-flow --agent coder --task "Optimize code" --provider openrouter --model meta-llama/llama-3.2-3b-instruct
```
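One way to automate this strategy is a small routing heuristic that picks a provider from the task description. The keyword lists below are invented for the example; a real deployment would tune them (or use a classifier) to its own workload:

```typescript
// Route a task to a provider: Claude for complex reasoning,
// ONNX for free experimentation, Gemini as the cheap/fast default.
function pickProvider(task: string): "gemini" | "anthropic" | "onnx" {
  const t = task.toLowerCase();
  if (/(architecture|analyze|design|reason)/.test(t)) return "anthropic";
  if (/(test|experiment|prototype)/.test(t)) return "onnx";
  return "gemini"; // default: cheapest and fastest for routine work
}
```

The chosen name would then be passed through as the `--provider` flag.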
### Production Deployment

High-Volume Applications:

```bash
# Use Gemini for 80% of requests (fast, cheap)
export DEFAULT_PROVIDER=gemini
export GOOGLE_GEMINI_API_KEY=...

# Fall back to Claude for complex tasks
export ANTHROPIC_API_KEY=...

# Run with automatic provider selection
npx agentic-flow --agent coder --task "..."   # Uses Gemini by default
npx agentic-flow --agent architect --task "..." --provider anthropic   # Override for complex tasks
```
Monitoring:
- Track token usage per provider
- Monitor response quality
- Measure cost savings
- Analyze performance metrics
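The first and third monitoring points can be covered by a minimal per-provider usage tracker. The price table here uses placeholder per-million-token rates, not real pricing:

```typescript
// Accumulate token counts and estimated spend per provider.
class UsageTracker {
  private totals = new Map<string, { tokens: number; usd: number }>();

  // pricePerMTok: placeholder USD cost per 1M tokens, keyed by provider.
  constructor(private pricePerMTok: Record<string, number>) {}

  record(provider: string, tokens: number): void {
    const prev = this.totals.get(provider) ?? { tokens: 0, usd: 0 };
    const usd = (tokens / 1_000_000) * (this.pricePerMTok[provider] ?? 0);
    this.totals.set(provider, {
      tokens: prev.tokens + tokens,
      usd: prev.usd + usd,
    });
  }

  costOf(provider: string): number {
    return this.totals.get(provider)?.usd ?? 0;
  }
}
```

Comparing `costOf("gemini")` against `costOf("anthropic")` over a week of traffic is a concrete way to measure the savings claimed above.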
### Why This Matters for Consulting Clients
Cost Savings:
- 70% reduction in LLM costs using Gemini for appropriate tasks
- 100% free local inference with ONNX for development/testing
- Flexible budgeting - scale up/down providers based on needs
Performance:
- 2-5x faster responses with Gemini for code generation
- Real-time streaming for better user experience
- Local inference option eliminates API latency
Flexibility:
- 100+ models available via OpenRouter
- Zero vendor lock-in - switch providers anytime
- Multi-provider strategies - use best tool for each job
### Technical Specifications
Proxy Architecture:
- Located in the `src/proxy/` directory:
  - `anthropic-to-gemini.ts` - Gemini proxy (v1.9.3, fully functional)
  - `anthropic-to-openrouter.ts` - OpenRouter proxy
- Automatic request/response format conversion
- Full MCP tool support via function calling conversion
- Streaming and non-streaming modes
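The "function calling conversion" behind MCP tool support can be sketched as a reshaping of tool definitions between the two APIs. Field names follow the public Anthropic (`input_schema`) and Gemini (`parameters`) schemas; the actual proxy code may handle more cases:

```typescript
// Anthropic tool definition (as sent in the Messages API).
interface AnthropicTool {
  name: string;
  description: string;
  input_schema: object; // JSON Schema for the tool's arguments
}

// Gemini function declaration (as sent in generateContent).
interface GeminiFunctionDeclaration {
  name: string;
  description: string;
  parameters: object; // same JSON Schema, under a different key
}

// The conversion is mostly a field rename: both APIs describe tool
// arguments with JSON Schema, so the schema passes through unchanged.
function toGeminiTool(tool: AnthropicTool): GeminiFunctionDeclaration {
  return {
    name: tool.name,
    description: tool.description,
    parameters: tool.input_schema,
  };
}
```

Tool-use responses travel the reverse path: a Gemini `functionCall` is reshaped into an Anthropic `tool_use` block before the SDK sees it.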
Supported Features:
- ✅ Text generation
- ✅ Streaming responses (SSE)
- ✅ Tool calling / Function calling
- ✅ System prompts
- ✅ Multi-turn conversations
- ✅ Temperature control
- ✅ Token limits
Verified Compatibility:
- Claude Agent SDK
- Claude Code CLI
- MCP (Model Context Protocol) tools
- All 66 specialized agents
- Multi-agent swarm orchestration
### Get Started

```bash
# Install agentic-flow
npm install -g agentic-flow@1.9.3

# Set up your preferred provider
export GOOGLE_GEMINI_API_KEY=your_key_here

# Run your first agent with Gemini
npx agentic-flow --agent coder --task "Write a hello world function" --provider gemini --stream

# See the cost and speed difference yourself!
```
Documentation: https://github.com/ruvnet/agentic-flow#readme
Full Provider Guide: https://github.com/ruvnet/agentic-flow#-provider-support
Latest Release: v1.9.3 - Gemini Provider Fully Functional