Add OpenAI-compatible backend support (Kiro gateway, OpenRouter)
- LLM_BACKEND=openai routes to /v1/chat/completions
- Default: ollama (unchanged)
- For Kiro gateway: LLM_BACKEND=openai OPENAI_URL=http://192.168.86.11:8000 OPENAI_MODEL=claude-haiku-4
- Updated README with new env vars
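The Kiro-gateway configuration from the commit message can be exported before starting the service. A minimal sketch; the values are taken from the commit body, and `OPENAI_API_KEY` is shown with its documented `not-needed` default:

```shell
# Enable the OpenAI-compatible backend (values from the commit message)
export LLM_BACKEND=openai
export OPENAI_URL=http://192.168.86.11:8000
export OPENAI_MODEL=claude-haiku-4
export OPENAI_API_KEY=not-needed  # only matters if the endpoint enforces auth
```

Leaving `LLM_BACKEND` unset keeps the default `ollama` behavior unchanged.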
@@ -125,9 +125,13 @@ dev-intel-poc/
 
 | Env Variable | Default | Description |
 |---|---|---|
+| `LLM_BACKEND` | `ollama` | `ollama` or `openai` (for Kiro gateway, OpenRouter, etc.) |
 | `OLLAMA_URL` | `http://192.168.86.172:11434` | Ollama endpoint |
-| `OLLAMA_MODEL` | `qwen2.5:7b` | Model for doc generation |
+| `OLLAMA_MODEL` | `qwen2.5:7b` | Ollama model |
+| `OPENAI_URL` | `http://192.168.86.11:8000` | OpenAI-compatible endpoint (Kiro gateway) |
+| `OPENAI_MODEL` | `claude-haiku-4` | Model name for OpenAI-compatible API |
+| `OPENAI_API_KEY` | `not-needed` | API key (if required by endpoint) |
 | `TARGET_REPO` | `https://github.com/labstack/echo.git` | Repo to ingest |
-| `MAX_CONCURRENT` | `4` | Parallel Ollama requests |
+| `MAX_CONCURRENT` | `4` | Parallel LLM requests |
 | `DEVINTEL_DB` | `./devintel.db` | SQLite database path |
 | `REPO_DIR` | `./repos/target` | Cloned repo location |
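The backend switch can be sketched as the endpoint selection below. Only the `/v1/chat/completions` path for `LLM_BACKEND=openai` comes from the commit message; the fallback defaults mirror the table above, and treating `OLLAMA_URL` as a bare base URL is an assumption:

```shell
# Sketch of the endpoint selection implied by LLM_BACKEND (not the actual code)
LLM_BACKEND=openai
if [ "$LLM_BACKEND" = "openai" ]; then
  # OpenAI-compatible backends (Kiro gateway, OpenRouter) use the chat completions route
  ENDPOINT="${OPENAI_URL:-http://192.168.86.11:8000}/v1/chat/completions"
else
  # Default path: talk to Ollama directly at its base URL
  ENDPOINT="${OLLAMA_URL:-http://192.168.86.172:11434}"
fi
echo "$ENDPOINT"
```

With `OPENAI_URL` unset, this prints `http://192.168.86.11:8000/v1/chat/completions`, the Kiro gateway endpoint from the commit message.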