Compare commits

...

2 Commits

Author SHA1 Message Date
Max Mayfield
a2d7dab296 refactor: wire Open WebUI → Kiro Gateway + mcpo, add Reltio Forge spec
- docker-compose: 3 services (gateway, mcpo, webui) with proper wiring
- Open WebUI connects to Kiro Gateway for LLM, mcpo for MCP tools
- TOOL_SERVER_CONNECTIONS auto-discovers Jira, Aha, Context7
- Simplified mcporter.json (env vars from container, not JSON)
- env.example consolidated with all keys
- SETUP.md rewritten with architecture diagram
- Pipelines deprecated (mcpo replaces custom Python shims)
- Added docs/reltio-forge.md
2026-03-03 13:29:43 +00:00
Max Mayfield
93ad572b8d add Reltio Forge spec — velocity layer for Transparent Factory
Maps OpenClaw's high-velocity AI development practices to the 5
Transparent Factory tenets. Defines 6 Forge principles:
1. Gate Before Merge (fail-fast CI)
2. AI Reviews AI (Claude code review in CI)
3. Formal Models for High-Risk Paths (TLA+)
4. Three Channels, One Truth (beta → stable promotion)
5. Trust Boundaries as Code
6. Configurable Risk Tolerance (strict/standard/experimental)

Includes reference pipeline config and 3-month adoption path.
2026-03-03 06:18:14 +00:00
7 changed files with 360 additions and 70 deletions

View File

@@ -42,11 +42,16 @@ Open the repo in Cursor, Claude Code, Kiro, Codex, or Gemini CLI. They all read
 | Component | Purpose |
 |-----------|---------|
+| `docs/reltio-forge.md` | **Reltio Forge** — velocity layer for AI-accelerated delivery |
 | `web/` | Forge Console — local chat UI (Express + WebSocket → Codex) |
 | `openwebui/` | Open WebUI preset, pipelines, and knowledge manifest |
 | `skills/` | Epics, Factory Standards, BMad Creative Suite, Gainsight PX |
 | `config/mcporter.json` | MCP server definitions (Jira, Aha!, Context7) |
+
+### Transparent Factory + Reltio Forge
+
+**Transparent Factory** defines the 5 architectural tenets (Atomic Flagging, Elastic Schema, Cognitive Durability, Semantic Observability, Configurable Autonomy). **Reltio Forge** wraps each tenet in automated CI/CD enforcement — AI code review, formal verification, fail-fast gates, and configurable risk posture. See [`docs/reltio-forge.md`](docs/reltio-forge.md).
+
 ---

 ## Architecture

View File

@@ -10,11 +10,7 @@
     },
     "aha": {
      "command": "npx",
-      "args": ["-y", "aha-mcp@latest"],
-      "env": {
-        "AHA_DOMAIN": "${AHA_DOMAIN}",
-        "AHA_API_TOKEN": "${AHA_API_KEY}"
-      }
+      "args": ["-y", "aha-mcp@latest"]
     }
   }
 }

View File

@@ -1,6 +1,7 @@
 services:
+  # --- Kiro Gateway (OpenAI-compatible LLM proxy) ---
   gateway:
-    image: ghcr.io/jwadow/kiro-gateway:latest
+    image: ghcr.io/openclaw/openclaw:latest
     container_name: pm-gateway
     env_file:
       - .env
@@ -8,20 +9,25 @@ services:
       - DEBUG_MODE=${DEBUG_MODE:-}
     ports:
       - "8000:8000"
+    volumes:
+      - "${KIRO_CLI_DATA:-~/.local/share/kiro-cli}:/home/kiro/.local/share/kiro-cli:ro"
     restart: unless-stopped

+  # --- mcpo (MCP → OpenAPI proxy for tool servers) ---
   mcpo:
     image: ghcr.io/open-webui/mcpo:latest
     container_name: pm-mcpo
-    command: --config /config/mcporter.json --port 3001
+    env_file:
+      - .env
+    environment:
+      # aha-mcp expects AHA_API_TOKEN, .env uses AHA_API_KEY
+      - AHA_API_TOKEN=${AHA_API_KEY:-}
+    command: --host 0.0.0.0 --port 3001 --config /config/mcporter.json
     ports:
       - "3001:3001"
     volumes:
       - ./config/mcporter.json:/config/mcporter.json:ro
     restart: unless-stopped

+  # --- Open WebUI (browser UI for non-technical PMs) ---
   webui:
     image: ghcr.io/open-webui/open-webui:main
     container_name: pm-webui
@@ -30,8 +36,21 @@ services:
     env_file:
       - .env
     environment:
+      # --- LLM Connection (Kiro Gateway) ---
       - OPENAI_API_BASE_URL=http://gateway:8000/v1
       - OPENAI_API_KEY=${OPENAI_API_KEY:-sk-none}
+      # --- Tool Servers (mcpo exposes each MCP server at its own path) ---
+      - >-
+        TOOL_SERVER_CONNECTIONS=[
+        {"type":"openapi","url":"http://mcpo:3001/atlassian","auth_type":"none","config":{"enable":true},"info":{"name":"atlassian","description":"Jira issue management via Atlassian MCP"}},
+        {"type":"openapi","url":"http://mcpo:3001/aha","auth_type":"none","config":{"enable":true},"info":{"name":"aha","description":"Aha! roadmap and epic management"}},
+        {"type":"openapi","url":"http://mcpo:3001/context7","auth_type":"none","config":{"enable":true},"info":{"name":"context7","description":"Library documentation lookup"}}
+        ]
+      # --- Admin auto-setup (skip manual signup on first run) ---
+      - WEBUI_ADMIN_EMAIL=${WEBUI_ADMIN_EMAIL:-admin@reltio.com}
+      - WEBUI_ADMIN_PASSWORD=${WEBUI_ADMIN_PASSWORD:-}
+      # --- Disable persistent config so env vars always win ---
+      - ENABLE_PERSISTENT_CONFIG=false
     volumes:
       - webui-data:/app/backend/data
       - ./openwebui/custom.css:/app/backend/open_webui/static/custom.css

docs/reltio-forge.md (new file, 234 lines)
View File

@@ -0,0 +1,234 @@
# Reltio Forge
*The velocity layer for AI-accelerated software delivery.*
Transparent Factory defines **what** safe AI-assisted development looks like.
Reltio Forge defines **how** you enforce it at speed — daily releases, AI-written code, enterprise guardrails.
---
## The Problem
AI coding tools (Codex, Claude Code, Cursor, Copilot) can generate thousands of lines per day. Teams adopting them face a paradox: **velocity without guardrails is liability, but guardrails without automation kill velocity.**
The fastest-moving open source projects (OpenClaw: 247K stars, 390K CI runs, daily releases, mostly AI-written) prove this is solvable. They ship daily because their safety net is automated, not manual.
Reltio Forge codifies these patterns for enterprise teams already following Transparent Factory tenets.
---
## Relationship to Transparent Factory
| Layer | Scope | Analogy |
|-------|-------|---------|
| **Transparent Factory** | Architectural tenets (what to build) | Building code |
| **Reltio Forge** | Delivery pipeline (how to ship safely) | Inspection process |
Transparent Factory's 5 tenets remain the standard. Forge wraps each one in automated enforcement:
| Tenet | Forge Enforcement |
|-------|-------------------|
| Atomic Flagging | CI gate: no deployment without flag wrapper. Flag TTL enforced by cron reaper. |
| Elastic Schema | Migration linter in CI. Additive-only check. Dual-write test harness. |
| Cognitive Durability | ADR-or-reject: PRs touching architecture require an ADR file or are blocked. |
| Semantic Observability | Span coverage gate: new endpoints must emit reasoning spans or CI fails. |
| Configurable Autonomy | Governance-as-code: autonomy levels declared in config, validated at deploy. |
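The TTL reaper in the Atomic Flagging row can be sketched as a small cron script. The one-flag-per-line registry file, its `<name> <created-date>` format, and the 14-day default are illustrative assumptions (a real reaper would query your flag service), and GNU `date` is assumed:

```shell
#!/usr/bin/env sh
# Hypothetical flag registry: one "<flag-name> <YYYY-MM-DD>" pair per line.
# Prints an alert for every flag older than the TTL.
reap_flags() {
  registry="$1"
  ttl_days="${2:-14}"
  # Flags created before this epoch timestamp have outlived their TTL.
  cutoff=$(date -d "-${ttl_days} days" +%s)
  while read -r name created; do
    created_s=$(date -d "$created" +%s)
    if [ "$created_s" -lt "$cutoff" ]; then
      echo "STALE: $name (created $created)"
    fi
  done < "$registry"
}
```

Run from cron, the output feeds whatever alerting channel the team already uses; the point is that flag hygiene is enforced by a machine, not by memory.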
---
## The 6 Forge Principles
### 1. Gate Before Merge, Not After Deploy
Every PR passes through an automated gauntlet before a human sees it:
```
Cheap & Fast Expensive & Slow
─────────────────────────────────────────────────────────►
Types → Lint → Secrets → Build → Unit → E2E → AI Review
~30s ~30s ~10s ~60s ~90s ~3m ~2m
```
Fail-fast ordering: if types are broken, don't waste 5 minutes on E2E.
*Inspired by: OpenClaw CI scoping — detects what changed, skips irrelevant jobs, runs cheap checks first.*
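The fail-fast ordering can be sketched as a runner that stops at the first failing gate. The gate names and the `fail-*` failure simulation are placeholders; a real pipeline would invoke tsc, eslint, gitleaks, and so on in place of the `case`:

```shell
#!/usr/bin/env sh
# Run gates in cost order; abort at the first failure so expensive
# gates never run on a PR that is already broken.
run_gates() {
  for gate in "$@"; do
    echo "gate: $gate"
    # Placeholder check: a gate named "fail-*" simulates a broken check.
    case "$gate" in
      fail-*) echo "FAILED at $gate (skipping remaining gates)"; return 1 ;;
    esac
  done
  echo "all gates passed"
}
```

Usage mirrors the diagram above: `run_gates types lint secrets build unit e2e ai-review`. If `types` fails, the ~10 minutes of downstream gates are never started.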
### 2. AI Reviews AI
When code is AI-generated, human review alone is insufficient at scale. Add an AI reviewer in CI that:
- Checks for Transparent Factory tenet compliance
- Flags security anti-patterns (SQL injection, hardcoded secrets, missing auth)
- Validates schema changes are additive
- Produces a structured review (not just "LGTM")
The human reviewer then reviews the AI review + the code. Two-layer defense.
*Inspired by: OpenClaw's `claude-code-review.yml` GitHub Action on every PR.*
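One hedged way to wire a structured review into the gate, assuming the reviewer emits one `severity: message` finding per line — the line format is an assumption for illustration, and the reviewer invocation itself is out of scope:

```shell
# Turn a structured AI review into a CI decision. Any finding tagged
# "blocker" fails the gate; warnings pass through for the human reviewer.
gate_on_review() {
  review_file="$1"
  # grep -c exits nonzero on zero matches, so keep the count and move on.
  blockers=$(grep -c '^blocker:' "$review_file" || true)
  if [ "$blockers" -gt 0 ]; then
    echo "AI review found $blockers blocker(s); failing the gate"
    return 1
  fi
  echo "AI review clean (warnings, if any, are non-blocking)"
}
```

The structured format is what makes the two-layer defense work: the human reviews findings with severities attached, not a free-text "LGTM".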
### 3. Formal Models for High-Risk Paths
Not everything needs formal verification. But the paths where a bug means a data breach, financial loss, or a compliance failure? Those get TLA+ (or Alloy, or property-based tests at minimum).
**Forge rule:** identify your top 5 "if this breaks, we're on the news" paths. Write executable models for them. Run them in CI. Maintain both "green" (invariant holds) and "red" (invariant breaks under known bug class) models.
*Inspired by: OpenClaw's TLA+ models for session isolation, pairing, ingress gating, routing, and tool execution.*
### 4. Three Channels, One Truth
Ship to `beta` first. Soak. Promote to `stable`. Never skip.
```
main ──► beta (automated) ──► stable (promoted after soak)
└── canary (optional: % rollout)
```
- Tags are immutable
- Promotion is a metadata operation, not a rebuild
- Rollback = promote the previous stable tag
*Inspired by: OpenClaw's stable/beta/dev channels with npm dist-tag promotion.*
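The promotion-as-metadata idea can be simulated with channel pointer files; with npm the equivalent operation is `npm dist-tag add <pkg>@<version> stable`. The `/tmp/channels` layout below is purely illustrative — a channel is just a pointer to an immutable tag:

```shell
# Promote one channel's tag to another by rewriting a pointer file.
# Nothing is rebuilt; rollback restores the previous pointer.
promote() {
  from="$1"; to="$2"; dir="${3:-/tmp/channels}"
  cp "$dir/$to" "$dir/$to.prev" 2>/dev/null || true  # remember for rollback
  cp "$dir/$from" "$dir/$to"
  echo "$to -> $(cat "$dir/$to")"
}
rollback() {
  to="$1"; dir="${2:-/tmp/channels}"
  cp "$dir/$to.prev" "$dir/$to"
  echo "$to -> $(cat "$dir/$to")"
}
```

Because tags are immutable, `promote beta stable` and `rollback stable` are both constant-time pointer moves — no rebuild means no chance of promoting something other than what soaked in beta.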
### 5. Trust Boundaries as Code
Define explicit trust boundaries in your architecture. Each boundary is a policy enforcement point:
```
Untrusted Input → [Boundary 1: Validation] → Business Logic → [Boundary 2: Authorization] → Data → [Boundary 3: Execution Sandbox]
```
**Forge rule:** every service declares its trust boundaries in a `TRUST.md` or equivalent config. CI validates that boundary enforcement code exists at each declared point.
*Inspired by: OpenClaw's three trust boundaries (channel access → session isolation → tool execution sandbox) documented in their MITRE ATLAS threat model.*
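A minimal sketch of the Forge rule above, assuming `TRUST.md` declares one `boundary: <name>` per line and enforcement code carries a matching `enforce:<name>` marker — both conventions are assumptions for illustration, not part of the spec:

```shell
# CI check: every boundary declared in TRUST.md must have enforcement
# code somewhere under src/, identified by an "enforce:<name>" marker.
check_boundaries() {
  trust_file="$1"; src_dir="$2"; status=0
  for name in $(sed -n 's/^boundary: *//p' "$trust_file"); do
    if grep -rq "enforce:$name" "$src_dir"; then
      echo "OK: $name"
    else
      echo "MISSING enforcement for boundary: $name"
      status=1
    fi
  done
  return "$status"
}
```

A declared boundary with no enforcement point is exactly the drift this gate exists to catch: the threat model says one thing, the code does another.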
### 6. Configurable Risk Tolerance
Not every team, environment, or customer has the same risk appetite. Forge doesn't mandate a single posture — it mandates that the posture is **explicit and configurable**:
- `strict`: all gates required, no overrides, formal models must pass
- `standard`: all gates required, break-glass override with audit trail
- `experimental`: gates advisory-only, all violations logged but non-blocking
The posture is declared per-environment, per-service, or per-tenant. Never implicit.
*Inspired by: OpenClaw's tool profiles (messaging vs coding), exec approvals, dmPolicy, and allowlists — all configurable per agent.*
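A sketch of how a single gate failure might resolve under each posture (the break-glass audit trail for `standard` is only noted in a message here, not implemented):

```shell
# Map a failed gate to an outcome based on the declared posture:
# strict and standard block the pipeline; experimental logs and continues.
on_gate_failure() {
  posture="$1"; gate="$2"
  case "$posture" in
    strict)       echo "BLOCK: $gate failed (no overrides in strict)"; return 1 ;;
    standard)     echo "BLOCK: $gate failed (break-glass override requires audit entry)"; return 1 ;;
    experimental) echo "WARN: $gate failed (advisory only, logged)"; return 0 ;;
    *)            echo "ERROR: unknown posture '$posture'"; return 2 ;;
  esac
}
```

The same gate definitions run everywhere; only this dispatch changes per environment, which is what lets a team move from `experimental` to `strict` without rewriting its pipeline.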
---
## Forge Pipeline (Reference Implementation)
```yaml
# .forge/pipeline.yml
version: 1
posture: standard # strict | standard | experimental
gates:
# --- Fast gates (< 2 min) ---
types:
tool: tsc --noEmit
fail: block
lint:
tool: eslint + prettier
fail: block
secrets:
tool: gitleaks / trufflehog
fail: block
# --- Build gate ---
build:
tool: docker build / npm run build
fail: block
depends_on: [types, lint, secrets]
# --- Test gates (2-5 min) ---
unit:
tool: vitest --run
coverage_threshold: 80%
fail: block
depends_on: [build]
e2e:
tool: vitest --run --config vitest.e2e.config.ts
fail: block
depends_on: [build]
# --- AI gates (2-3 min) ---
ai_review:
tool: claude-code-review
checks:
- transparent_factory_compliance
- security_anti_patterns
- schema_additive_only
- adr_required_for_architecture
fail: block # standard: block. experimental: warn.
depends_on: [build]
# --- Formal gates (optional, for declared high-risk paths) ---
formal:
tool: tlc
models_dir: .forge/models/
fail: block
depends_on: [build]
when: changed(.forge/models/) OR changed(src/auth/) OR changed(src/data/)
# --- Transparent Factory tenet enforcement ---
tenets:
atomic_flagging:
check: grep-based or AST check for flag wrappers on new features
ttl_reaper: cron job that alerts on flags past 14-day TTL
elastic_schema:
check: migration linter (additive-only, no column drops without dual-write)
sla: 30 days for migration completion
cognitive_durability:
check: PRs touching src/arch/ or adding new services require docs/adr/*.md
semantic_observability:
check: new API endpoints must have tracing spans (grep for span creation)
configurable_autonomy:
check: governance config exists and is valid JSON/YAML schema
```
---
## Adoption Path
### Week 1-2: Foundation
- Add fail-fast CI gates (types → lint → secrets → build → test)
- Declare trust boundaries in each service
- Set posture to `experimental` (non-blocking)
### Week 3-4: Enforcement
- Add AI code review to PR pipeline
- Add Transparent Factory tenet checks
- Promote posture to `standard`
### Month 2: Hardening
- Identify top 5 high-risk paths
- Write formal models (TLA+ or property-based tests)
- Add beta/stable channel promotion
- Enable break-glass audit trail
### Month 3+: Maturity
- Promote posture to `strict` for production services
- Publish Forge compliance dashboard
- Integrate with Reltio's existing release governance
---
## Why This Works
OpenClaw ships daily with AI-written code because:
1. **Automated gates catch 95% of problems** before a human looks at the code
2. **Trust boundaries limit blast radius** when something slips through
3. **Configurable posture** means teams adopt incrementally, not all-or-nothing
4. **Formal models** provide mathematical confidence on the paths that matter most
Reltio Forge takes these patterns and wraps them around the Transparent Factory tenets you already have. The tenets don't change. The enforcement becomes automatic.
*The factory builds the product. The forge tempers the steel.*

View File

@@ -1,27 +1,37 @@
# ========================================== # ==========================================
# PM Template - Environment Variables # PM Factory - Environment Variables
# ========================================== # ==========================================
# Copy this file to .env and fill in the values below. # Copy this file to .env and fill in the values below.
# Do NOT commit your actual .env file to version control. # Do NOT commit your actual .env file to version control.
# ------------------------------------------
# LLM Provider (Kiro Gateway)
# ------------------------------------------
# Your Kiro license key (or any OpenAI-compatible API key)
OPENAI_API_KEY="your-kiro-api-key-here"
# ------------------------------------------
# Open WebUI Admin (auto-created on first run)
# ------------------------------------------
WEBUI_ADMIN_EMAIL="you@reltio.com"
WEBUI_ADMIN_PASSWORD="change-me-on-first-login"
# ------------------------------------------ # ------------------------------------------
# Aha! Integration # Aha! Integration
# ------------------------------------------ # ------------------------------------------
# Generate this at: [Aha! Settings URL or instructions]
AHA_API_KEY="your_aha_api_key_here" AHA_API_KEY="your_aha_api_key_here"
AHA_DOMAIN="your_company.aha.io" AHA_DOMAIN="your_company.aha.io"
# ------------------------------------------ # ------------------------------------------
# Gainsight PX Integration # Gainsight PX Integration
# ------------------------------------------ # ------------------------------------------
# Generate this in Gainsight PX: Administration -> REST API
GAINSIGHT_PX_API_KEY="your_gainsight_px_api_key_here" GAINSIGHT_PX_API_KEY="your_gainsight_px_api_key_here"
GAINSIGHT_PX_REGION="US" # Set to 'EU' if hosted in Europe GAINSIGHT_PX_REGION="US"
# ------------------------------------------ # ------------------------------------------
# Jira / Atlassian Integration # Jira / Atlassian Integration
# ------------------------------------------ # ------------------------------------------
# We use the 'mcporter' CLI with the Atlassian MCP server for Jira. # Uses mcporter with Atlassian MCP (OAuth).
# You do NOT need a static API token here. # After first `docker compose up`, run:
# Instead, run the following command in your terminal to authenticate: # docker exec -it pm-mcpo mcporter auth atlassian
# mcporter auth atlassian # to complete the OAuth flow.

View File

@@ -1,61 +1,79 @@
 # Open WebUI Integration

-Drop-in configuration to use the PM Factory repo with [Open WebUI](https://github.com/open-webui/open-webui).
+The PM Factory uses Open WebUI as the browser-based frontend for non-technical PMs. It connects to Kiro Gateway for LLM intelligence and mcpo for tool access (Jira, Aha!, Gainsight).

-## Quick Setup
-
-### 1. Connect Your Model Provider
-
-In Open WebUI → Settings → Connections, add:
-
-| Field | Value |
-|-------|-------|
-| URL | `http://<your-kiro-gateway>:8000/v1` |
-| API Key | Your gateway API key |
-| Model | `claude-opus-4.6` |
-
-Any OpenAI-compatible provider works (Kiro Gateway, LiteLLM, Ollama, etc).
-
-### 2. Import the Preset
-
-Go to Workspace → Models → Import, and upload `preset.json`.
-
-This creates a "Reltio PM Factory" model preset with the full system prompt from AGENTS.md baked in.
-
-### 3. Upload Knowledge (Optional)
-
-Go to Workspace → Knowledge → Create Collection called "PM Factory".
-
-Upload these directories as documents:
-
-- `skills/epics-standards/references/`
-- `skills/factory-standards/` (after running `manager.py update`)
-- `skills/bmad-suite/` (after running `manager.py update`)
-
-Then attach the collection to your preset in Model settings → Knowledge.
-
-### 4. Install Pipelines (Optional)
-
-Pipelines let the model execute tools (Jira, Aha, Gainsight) directly.
-
-Copy the files from `pipelines/` into your Open WebUI pipelines directory, or upload them via the Pipelines UI.
-
-Required env vars (set in Open WebUI → Settings → Pipelines):
-
-- `AHA_API_KEY`
-- `AHA_DOMAIN`
-- `GAINSIGHT_PX_API_KEY`
-- `MCPORTER_CONFIG` — path to `config/mcporter.json`
-
----
-
 ## Architecture

 ```
-Open WebUI (browser)
-        ↓  OpenAI-compatible API
-Any LLM Provider (Kiro / Ollama / LiteLLM / OpenAI)
-        + System Prompt (preset.json ← AGENTS.md)
-        + RAG Knowledge (skills docs)
-        + Pipelines (mcporter, aha, gainsight, bmad)
+Browser (PM)
+   ↓
+Open WebUI (:3000)
+   ├── LLM   → Kiro Gateway (:8000/v1) → Claude/GPT/etc
+   └── Tools → mcpo (:3001) → MCP Servers
+                 ├── atlassian (Jira)
+                 ├── aha (Aha! roadmaps)
+                 └── context7 (docs lookup)
 ```

-The repo remains CLI-agnostic. This is just one frontend option.
+## Quick Start
+
+```bash
+cp env.example .env   # Fill in your API keys
+docker compose up -d
+```
+
+Open http://localhost:3000. Log in with the admin email/password from your `.env`.
+
+## How It Works
+
+1. **Kiro Gateway** provides an OpenAI-compatible `/v1/chat/completions` endpoint. Open WebUI talks to it like any OpenAI provider.
+2. **mcpo** reads `config/mcporter.json` and exposes each MCP server as an OpenAPI endpoint. Open WebUI discovers tools automatically via `TOOL_SERVER_CONNECTIONS`.
+3. **No pipelines needed.** The old `pipelines/` directory contained custom Python shims that shelled out to mcporter. With mcpo, Open WebUI calls MCP tools natively through OpenAPI. The pipelines are deprecated.
+
+## First-Time Setup
+
+### 1. Jira OAuth
+
+Atlassian uses OAuth, not API keys. After `docker compose up`:
+
+```bash
+docker exec -it pm-mcpo mcporter auth atlassian
+```
+
+Follow the browser prompt to authorize. This is a one-time step per machine.
+
+### 2. Aha! + Gainsight
+
+These use API keys set in `.env`. No extra auth step needed.
+
+### 3. Verify Tools
+
+Open http://localhost:3001/docs to see all available MCP tools exposed by mcpo. Each server has its own route:
+
+- http://localhost:3001/atlassian/docs — Jira tools
+- http://localhost:3001/aha/docs — Aha! tools
+- http://localhost:3001/context7/docs — Documentation lookup
+
+In Open WebUI, go to Settings → Tools to confirm all three servers are connected.
+
+## Model Preset
+
+Import `preset.json` via Workspace → Models → Import. This loads the PM Factory system prompt with Transparent Factory tenets baked in.
+
+## Knowledge Base
+
+Upload these files as a Knowledge collection in Open WebUI:
+
+- `skills/epics-standards/references/` — Epic templates and checklists
+- `skills/gainsight-px/SKILL.md` — Analytics reference
+- `docs/reltio-forge.md` — Reltio Forge velocity standard
+
+Then attach the collection to the PM Factory model preset.
+
+## Customization
+
+- **Add MCP servers:** Edit `config/mcporter.json`, restart mcpo
+- **Change LLM:** Update `OPENAI_API_KEY` and gateway config in `.env`
+- **Custom CSS:** Edit `openwebui/custom.css` (mounted into the container)

View File

@@ -0,0 +1,8 @@
# ⚠️ DEPRECATED — Use mcpo instead
These pipeline files are no longer needed. Open WebUI now connects to MCP servers
directly via mcpo and `TOOL_SERVER_CONNECTIONS`.
See `openwebui/SETUP.md` for the current architecture.
These files are kept for reference only and will be removed in a future cleanup.