# dd0c/route — LLM Cost Router & Optimization Dashboard

Drop-in OpenAI-compatible proxy that routes AI requests to the cheapest capable model.

## Quick Start

```bash
# Start local infra
docker compose up -d

# Run the proxy
OPENAI_API_KEY=sk-your-key cargo run --bin dd0c-proxy

# Test it
curl http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer your-dd0c-key" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'
```

## Architecture

- **Proxy Engine** (Rust/Axum) — <5ms overhead, SSE streaming, async telemetry
- **Router Brain** — Complexity classification, cost-based routing, fallback chains
- **Dashboard API** — REST API for config, analytics, team management
- **TimescaleDB** — Time-series telemetry with continuous aggregates
- **PostgreSQL** — Config, auth, routing rules
- **Redis** — API key cache, rate limiting

## Project Structure

```
src/
  proxy/      — Proxy server (main entry point)
  api/        — Dashboard API server
  worker/     — Background jobs (digests, alerts)
  router/     — Routing logic & complexity classifier
  auth/       — API key validation, JWT, OAuth
  config/     — App configuration
  data/       — Data layer traits (EventQueue, ObjectStore)
  analytics/  — PostHog PLG instrumentation
migrations/   — PostgreSQL + TimescaleDB schemas
tests/        — Unit, integration, E2E tests
infra/        — CDK / deployment configs
```

## Pricing

| Tier | Price | Requests/mo | Retention |
|------|-------|-------------|-----------|
| Free | $0 | 10K | 7 days |
| Pro | $49/mo | 1M | 90 days |
| Enterprise | Custom | Unlimited | 1 year |
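## How Routing Works

The Router Brain's "complexity classification + cost-based routing + fallback chains" can be sketched roughly as follows. This is an illustrative sketch, not the actual `router/` implementation: the names (`Complexity`, `Candidate`, `route`), the word-count classifier, and the example prices are all assumptions made for the example.

```rust
// Illustrative sketch of complexity-based, cost-ordered routing with a
// fallback chain. Not the real dd0c-proxy API; names and prices are invented.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Complexity {
    Simple,   // short factual prompts
    Moderate, // multi-step reasoning
    Complex,  // long-context or code-heavy requests
}

struct Candidate {
    model: &'static str,
    cost_per_1k_tokens: f64,    // USD per 1K input tokens (example figures)
    max_complexity: Complexity, // hardest class this model is trusted with
}

/// Toy heuristic classifier. A real router would use token counts,
/// embeddings, or a small classifier model instead of word counts.
fn classify(prompt: &str) -> Complexity {
    let words = prompt.split_whitespace().count();
    if words < 20 {
        Complexity::Simple
    } else if words < 200 {
        Complexity::Moderate
    } else {
        Complexity::Complex
    }
}

/// Return every model capable of the request's complexity, cheapest first.
/// The head of the list is the primary route; the tail is the fallback chain.
fn route(prompt: &str, candidates: &[Candidate]) -> Vec<&'static str> {
    let needed = classify(prompt);
    let mut capable: Vec<&Candidate> = candidates
        .iter()
        .filter(|c| c.max_complexity >= needed)
        .collect();
    capable.sort_by(|a, b| {
        a.cost_per_1k_tokens
            .partial_cmp(&b.cost_per_1k_tokens)
            .unwrap()
    });
    capable.iter().map(|c| c.model).collect()
}

fn main() {
    let candidates = [
        Candidate {
            model: "gpt-4o",
            cost_per_1k_tokens: 0.0050,
            max_complexity: Complexity::Complex,
        },
        Candidate {
            model: "gpt-4o-mini",
            cost_per_1k_tokens: 0.00015,
            max_complexity: Complexity::Moderate,
        },
    ];
    // A short factual prompt classifies as Simple, so the cheaper model
    // is routed first and the larger one becomes the fallback.
    let chain = route("What is the capital of France?", &candidates);
    println!("{:?}", chain);
}
```

The key design point is that the proxy never drops a request when the cheap model fails: it walks the remaining chain in cost order, so a request only reaches an expensive model if every cheaper capable model has errored or refused.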