Add OpenAI-compatible backend support (Kiro gateway, OpenRouter)

- LLM_BACKEND=openai routes to /v1/chat/completions
- Default: ollama (unchanged)
- For Kiro gateway: LLM_BACKEND=openai OPENAI_URL=http://192.168.86.11:8000 OPENAI_MODEL=claude-haiku-4
- Updated README with new env vars
Jarvis Prime
2026-03-04 04:37:46 +00:00
parent 20253329e4
commit 65e114a5d6
2 changed files with 51 additions and 6 deletions


@@ -125,9 +125,13 @@ dev-intel-poc/
| Env Variable | Default | Description |
|---|---|---|
| `LLM_BACKEND` | `ollama` | `ollama` or `openai` (for Kiro gateway, OpenRouter, etc.) |
| `OLLAMA_URL` | `http://192.168.86.172:11434` | Ollama endpoint |
| `OLLAMA_MODEL` | `qwen2.5:7b` | Model for doc generation |
| `OLLAMA_MODEL` | `qwen2.5:7b` | Ollama model |
| `OPENAI_URL` | `http://192.168.86.11:8000` | OpenAI-compatible endpoint (Kiro gateway) |
| `OPENAI_MODEL` | `claude-haiku-4` | Model name for OpenAI-compatible API |
| `OPENAI_API_KEY` | `not-needed` | API key (if required by endpoint) |
| `TARGET_REPO` | `https://github.com/labstack/echo.git` | Repo to ingest |
| `MAX_CONCURRENT` | `4` | Parallel Ollama requests |
| `MAX_CONCURRENT` | `4` | Parallel LLM requests |
| `DEVINTEL_DB` | `./devintel.db` | SQLite database path |
| `REPO_DIR` | `./repos/target` | Cloned repo location |
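The backend selection the commit describes can be sketched as follows. This is a minimal sketch, not the repo's actual code: `resolve_llm_endpoint` is a hypothetical helper, the defaults mirror the table above, and Ollama's native generate endpoint (`/api/generate`) is assumed for the default backend.

```python
import os

def resolve_llm_endpoint(env=None):
    """Pick (url, model) from env vars, defaulting per the README table.

    LLM_BACKEND=openai routes to the OpenAI-compatible /v1/chat/completions
    path (Kiro gateway, OpenRouter, etc.); anything else falls back to the
    Ollama endpoint. Hypothetical helper for illustration only.
    """
    env = os.environ if env is None else env
    if env.get("LLM_BACKEND", "ollama") == "openai":
        base = env.get("OPENAI_URL", "http://192.168.86.11:8000")
        model = env.get("OPENAI_MODEL", "claude-haiku-4")
        return base.rstrip("/") + "/v1/chat/completions", model
    # Default backend (unchanged): Ollama's native generate endpoint.
    base = env.get("OLLAMA_URL", "http://192.168.86.172:11434")
    model = env.get("OLLAMA_MODEL", "qwen2.5:7b")
    return base.rstrip("/") + "/api/generate", model
```

With `LLM_BACKEND=openai` this yields the Kiro gateway URL from the commit message; with no env vars set it keeps the pre-existing Ollama behavior.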