diff --git a/.mcp.json b/.mcp.json
new file mode 100644
index 00000000..2f9befb4
--- /dev/null
+++ b/.mcp.json
@@ -0,0 +1,11 @@
+{
+  "mcpServers": {
+    "overlord": {
+      "command": "python3",
+      "args": ["-m", "agent.mcp_overlord"],
+      "env": {
+        "PYTHONPATH": "/home/erik/MosswartOverlord"
+      }
+    }
+  }
+}
diff --git a/CLAUDE.md b/CLAUDE.md
index f6144321..cb4e0e51 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -137,4 +137,40 @@ Real-time equipment optimization engine for building optimal character loadouts
 
 ### WebSocket Endpoints
 - `/ws/position`: Plugin telemetry, inventory, portal, rare events (authenticated)
-- `/ws/live`: Browser client commands and live updates (unauthenticated)
\ No newline at end of file
+- `/ws/live`: Browser client commands and live updates (unauthenticated)
+
+---
+
+## Overlord Assistant Mode
+
+When invoked through the dashboard's chat window (the **🤖 Assistant** button) or through `/api/agent/ask`, you are acting as the **Overlord Assistant** — answering ad-hoc questions for the user about their live multi-account Asheron's Call setup.
+
+**You have MCP tools** (from `.mcp.json`) for live game data. **Always use them** instead of guessing or apologising for not having data:
+
+- `get_live_players` — current online characters with positions/kills/state
+- `get_recent_rares` — rare item finds in the last N hours
+- `query_telemetry_db` — read-only SQL on the telemetry DB for ad-hoc analysis
+- (more tools added over time — call `list_tools` if unsure)
+
+### Behaviour rules
+
+1. **Use tools, don't speculate.** If the user asks "how many chars are online" — call `get_live_players`. Don't say "I'd need to check" — just check.
+2. **Be concise.** The user is glancing at a chat window, not reading a report. 2-5 sentences for most answers. Use markdown tables for tabular data.
+3. **No code unless asked.** This mode is about *operating* the system, not editing it. Don't open files or write code unless the user explicitly asks.
+4. **Real numbers, real names.** Cite actual character names and counts from tools — never make up sample data.
+5. **Read-only.** You cannot mutate the database; the SQL tool will reject any non-SELECT statement and the role is also `GRANT SELECT` only. If a question requires a write, say so.
+6. **Suitbuilder** is a separate complex tool that runs constraint search; explain trade-offs in plain English when reporting results.
+7. **Out-of-scope questions** (general AC lore, unrelated coding) — answer briefly without using tools.
+
+### Available data tables (for `query_telemetry_db`)
+
+- `telemetry_events` (hypertable, 30-day retention) — position/state snapshots every ~2s per character
+- `rare_events` — rare item find log
+- `spawn_events` (hypertable, 7-day retention) — monster spawn observations
+- `portals` — discovered portal coords (1h dedup window)
+- `char_stats`, `rare_stats`, `rare_stats_sessions` — lifetime/session aggregates
+- `character_stats` — latest full stats JSON per character
+- `combat_stats`, `combat_stats_sessions` — combat tracking
+- `server_status` — current Coldeve game-server state (single row)
+
+If asked about something not covered above, look in `db_async.py` for the schema or just try a query and report what you see.
\ No newline at end of file
diff --git a/agent/README.md b/agent/README.md
new file mode 100644
index 00000000..aa875178
--- /dev/null
+++ b/agent/README.md
@@ -0,0 +1,146 @@
+# Overlord Agent
+
+A small host-side Python service that gives Claude Code (running in
+headless mode) access to live Overlord data so it can answer questions
+from the dashboard chat window.
+
+## Why a separate service?
+
+`dereth-tracker` runs in Docker. The `claude` CLI binary at
+`/home/erik/.local/bin/claude` depends on `~/.claude` credentials owned
+by user `erik` on the host. The tracker container can't invoke it.
+
+So this service runs **outside** Docker, listens on `127.0.0.1:8767`,
+and nginx routes `/api/agent/*` to it. It validates the same browser
+session cookie the tracker issues (shared `SECRET_KEY`) and shells out
+to `claude -p` with `cwd=/home/erik/MosswartOverlord`.
+
+## Architecture
+
+```
+Browser ──nginx──┬─► /api/* ──► dereth-tracker (Docker, 8765)
+                 │
+                 └─► /api/agent/* ──► overlord-agent (host, 8767)
+                        │
+                        ├─► subprocess: claude -p ...
+                        │       │
+                        │       └─► MCP stdio ──► mcp_overlord.py
+                        │                │
+                        │                └─► HTTP loopback to tracker
+                        │                └─► asyncpg to dereth-db
+                        │
+                        └─► validates "session" cookie
+```
+
+## Files
+
+| File | What |
+|------|------|
+| `service.py` | FastAPI app (`/agent/health`, `/agent/sessions/new`, `/agent/ask`, `/agent/sessions/{id}/history`) |
+| `auth.py` | Session-cookie validation (mirrors `main.py:1013-1019`) |
+| `claude_wrapper.py` | `asyncio.create_subprocess_exec("claude", "-p", ...)` |
+| `tools.py` | Pure tool implementations (HTTP loopback + read-only DB) |
+| `mcp_overlord.py` | MCP stdio server registering tools for Claude Code |
+| `sql/0001_overlord_agent_ro.sql` | Read-only PG role for the SQL tool |
+| `overlord-agent.service` | systemd unit |
+| `install.sh` | One-shot installer (venv + pip install + systemd) |
+
+## Required env vars (in repo-root `.env`)
+
+```
+SECRET_KEY=
+AGENT_DB_DSN=postgresql://overlord_agent_ro:<password>@127.0.0.1:5432/dereth
+TRACKER_URL=http://127.0.0.1:8765         # optional, this is the default
+CLAUDE_BIN=/home/erik/.local/bin/claude   # optional, this is the default
+CLAUDE_CWD=/home/erik/MosswartOverlord    # optional, this is the default
+CLAUDE_TIMEOUT_S=120                      # optional
+```
+
+## First-time setup on the server
+
+1. **Create the read-only DB role** (one-time):
+   ```bash
+   docker exec -i dereth-db psql -U postgres -d dereth \
+     < /home/erik/MosswartOverlord/agent/sql/0001_overlord_agent_ro.sql
+   docker exec -it dereth-db psql -U postgres -d dereth \
+     -c "ALTER ROLE overlord_agent_ro PASSWORD '<password>';"
+   ```
+2. **Add `AGENT_DB_DSN`** to `/home/erik/MosswartOverlord/.env` with the
+   password you just set.
+3. **Run the installer**:
+   ```bash
+   cd /home/erik/MosswartOverlord
+   bash agent/install.sh
+   ```
+4. **Update nginx**: edit `/etc/nginx/sites-enabled/overlord` to add the
+   `/api/agent/` location (already in `nginx/overlord.conf` in the repo —
+   just `sudo cp` and reload).
+
+## Day-to-day deploy
+
+After editing any agent file:
+
+```bash
+# On dev:
+git push
+
+# On server:
+ssh erik@overlord.snakedesert.se
+cd /home/erik/MosswartOverlord
+git pull
+sudo systemctl restart overlord-agent
+journalctl -u overlord-agent -f   # tail logs
+```
+
+For Python dependency changes:
+
+```bash
+agent/.venv/bin/pip install -r agent/requirements.txt
+sudo systemctl restart overlord-agent
+```
+
+## Smoke tests
+
+```bash
+# 1. Service alive?
+curl http://127.0.0.1:8767/agent/health
+
+# 2. Cookie required?
+curl -X POST http://127.0.0.1:8767/agent/ask \
+  -H 'Content-Type: application/json' \
+  -d '{"session_id":"x","message":"hi"}'
+# ⇒ 401
+
+# 3. Direct claude invocation works?
+echo "hello" | /home/erik/.local/bin/claude -p \
+  --session-id 11111111-1111-1111-1111-111111111111 \
+  --output-format json
+
+# 4. End-to-end via nginx (with cookie):
+curl -X POST https://overlord.snakedesert.se/api/agent/ask \
+  -b 'session=<cookie>' \
+  -H 'Content-Type: application/json' \
+  -d '{"session_id":"<uuid>","message":"How many characters are online?"}'
+```
+
+## Cost / rate-limit notes
+
+- Each `/agent/ask` shells out to `claude -p` once.
+- We use the user's Claude subscription (no API key) — flat-rate, no
+  per-call billing, but subscription-tier rate limits still apply.
+- **Reactive only**: there are no background loops or periodic ticks.
+  Each user message = one Claude turn (which may chain several tool
+  calls internally before producing a final answer).
+- The SQL tool is hard-capped at 10s and 200 rows.
+- `suitbuilder_search` is the only tool that can take minutes; nginx
+  read timeout is 180s for `/api/agent/`.
+
+## Adding a new MCP tool
+
+1. Implement `async def my_tool(...) -> dict` in `tools.py`.
+2. Register it in `mcp_overlord.py` under `TOOL_DEFS`:
+   - description (the agent reads this to decide when to call)
+   - JSON schema for arguments
+   - lambda dispatching to `T.my_tool(...)`
+3. `sudo systemctl restart overlord-agent`. Claude Code re-discovers the
+   tool list on each invocation.
diff --git a/agent/__init__.py b/agent/__init__.py
new file mode 100644
index 00000000..2cbfa0cf
--- /dev/null
+++ b/agent/__init__.py
@@ -0,0 +1,10 @@
+"""Overlord Agent — host-side service that shells out to claude -p.
+
+Runs OUTSIDE the dereth-tracker Docker container because the `claude` CLI
+binary lives at /home/erik/.local/bin/claude on the host and depends on
+~/.claude/ credentials owned by user erik. The container can't invoke it
+directly, so this is a small standalone FastAPI service on port 8767.
+
+nginx routes /api/agent/* to here. The same browser session cookie that
+dereth-tracker validates is reused (shared SECRET_KEY env var).
+"""
diff --git a/agent/auth.py b/agent/auth.py
new file mode 100644
index 00000000..2928bed4
--- /dev/null
+++ b/agent/auth.py
@@ -0,0 +1,51 @@
+"""Session-cookie validation that mirrors main.py.
+
+Re-implements the verify path so this host-side service can authenticate
+the same browser cookie that dereth-tracker issues. Both services must
+share the SECRET_KEY env var.
+"""
+
+from __future__ import annotations
+
+import os
+
+from fastapi import HTTPException, Request, status
+from itsdangerous import BadSignature, SignatureExpired, URLSafeTimedSerializer
+
+# Mirror main.py:996-998
+SECRET_KEY = os.getenv("SECRET_KEY", "change-me-in-production-please")
+SESSION_MAX_AGE = 30 * 24 * 3600  # 30 days
+_serializer = URLSafeTimedSerializer(SECRET_KEY)
+
+
+def verify_session_cookie(token: str) -> dict | None:
+    """Verify and decode a session token. Returns None if invalid/expired.
+
+    Mirrors main.py:1013-1019 byte-for-byte so a cookie issued by the tracker
+    decodes here identically.
+    """
+    try:
+        data = _serializer.loads(token, max_age=SESSION_MAX_AGE)
+        return {"username": data["u"], "is_admin": data["a"]}
+    except (BadSignature, SignatureExpired, KeyError):
+        return None
+
+
+def require_user(request: Request) -> dict:
+    """FastAPI dependency: enforces a valid session cookie.
+
+    Returns the decoded user dict on success; raises 401 otherwise.
+    """
+    token = request.cookies.get("session")
+    if not token:
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail="Not authenticated",
+        )
+    user = verify_session_cookie(token)
+    if not user:
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail="Session invalid or expired",
+        )
+    return user
diff --git a/agent/claude_wrapper.py b/agent/claude_wrapper.py
new file mode 100644
index 00000000..2518c36c
--- /dev/null
+++ b/agent/claude_wrapper.py
@@ -0,0 +1,123 @@
+"""Subprocess wrapper around `claude -p` (Claude Code in headless JSON mode).
+
+Run from cwd=/home/erik/MosswartOverlord so:
+  • Sessions persist at ~/.claude/projects/-home-erik-MosswartOverlord/<session-id>.jsonl
+  • Project-level .mcp.json is auto-loaded
+  • CLAUDE.md in the repo root briefs the agent
+
+The `--session-id` flag both creates a new session (first call) and resumes
+an existing one (subsequent calls), so we don't need separate code paths.
+"""
+
+from __future__ import annotations
+
+import asyncio
+import json
+import logging
+import os
+from dataclasses import dataclass
+from pathlib import Path
+from typing import Any
+
+logger = logging.getLogger(__name__)
+
+# These can be overridden via env vars for non-prod testing.
+CLAUDE_BIN = os.getenv("CLAUDE_BIN", "/home/erik/.local/bin/claude")
+CLAUDE_CWD = os.getenv("CLAUDE_CWD", "/home/erik/MosswartOverlord")
+# Hard cap on how long a single agent turn may take. Claude Code can spin a
+# while when chaining many tool calls; we don't want to leave a zombie
+# subprocess if something gets stuck.
+CLAUDE_TIMEOUT_S = int(os.getenv("CLAUDE_TIMEOUT_S", "120"))
+
+
+@dataclass
+class ClaudeResult:
+    result: str
+    session_id: str
+    duration_ms: int
+    num_turns: int
+    is_error: bool
+    raw: dict[str, Any]
+
+
+class ClaudeError(RuntimeError):
+    """Raised when the claude CLI returns a non-zero exit or unparseable output."""
+
+
+async def ask_claude(message: str, session_id: str) -> ClaudeResult:
+    """Send `message` to `claude -p` resuming session_id; return parsed result.
+
+    Raises ClaudeError on subprocess failure, JSON parse failure, or timeout.
+    """
+    if not Path(CLAUDE_BIN).exists():
+        raise ClaudeError(f"claude binary not found at {CLAUDE_BIN}")
+    if not Path(CLAUDE_CWD).is_dir():
+        raise ClaudeError(f"CLAUDE_CWD does not exist: {CLAUDE_CWD}")
+
+    args = [
+        CLAUDE_BIN,
+        "-p",
+        "--session-id",
+        session_id,
+        "--output-format",
+        "json",
+    ]
+
+    logger.info(
+        "claude exec: session=%s msg_len=%d cwd=%s", session_id, len(message), CLAUDE_CWD
+    )
+
+    proc = await asyncio.create_subprocess_exec(
+        *args,
+        stdin=asyncio.subprocess.PIPE,
+        stdout=asyncio.subprocess.PIPE,
+        stderr=asyncio.subprocess.PIPE,
+        cwd=CLAUDE_CWD,
+    )
+
+    try:
+        stdout, stderr = await asyncio.wait_for(
+            proc.communicate(input=message.encode("utf-8")),
+            timeout=CLAUDE_TIMEOUT_S,
+        )
+    except asyncio.TimeoutError:
+        try:
+            proc.kill()
+        except ProcessLookupError:
+            pass
+        raise ClaudeError(f"claude timed out after {CLAUDE_TIMEOUT_S}s")
+
+    if proc.returncode != 0:
+        raise ClaudeError(
+            f"claude exited {proc.returncode}: {stderr.decode('utf-8', 'replace')[:500]}"
+        )
+
+    raw_text = stdout.decode("utf-8", "replace").strip()
+    if not raw_text:
+        raise ClaudeError("claude produced empty stdout")
+
+    # In --output-format json mode the LAST line is the JSON envelope; some
+    # earlier lines may be progress. Be tolerant.
+    try:
+        envelope = json.loads(raw_text)
+    except json.JSONDecodeError:
+        # Try the last non-empty line
+        last = next(
+            (line for line in reversed(raw_text.splitlines()) if line.strip()),
+            "",
+        )
+        try:
+            envelope = json.loads(last)
+        except json.JSONDecodeError as e:
+            raise ClaudeError(
+                f"claude stdout was not JSON: {raw_text[:500]}"
+            ) from e
+
+    return ClaudeResult(
+        result=envelope.get("result", ""),
+        session_id=envelope.get("session_id", session_id),
+        duration_ms=int(envelope.get("duration_ms", 0)),
+        num_turns=int(envelope.get("num_turns", 0)),
+        is_error=bool(envelope.get("is_error", False)),
+        raw=envelope,
+    )
diff --git a/agent/install.sh b/agent/install.sh
new file mode 100644
index 00000000..f92a2c7a
--- /dev/null
+++ b/agent/install.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+# Install / re-install the Overlord Agent host-side service.
+#
+# Run as user `erik` from /home/erik/MosswartOverlord:
+#   bash agent/install.sh
+#
+# Requires sudo for the systemd parts (you'll be prompted once).
+
+set -euo pipefail
+
+REPO_DIR="/home/erik/MosswartOverlord"
+AGENT_DIR="$REPO_DIR/agent"
+VENV_DIR="$AGENT_DIR/.venv"
+SERVICE_FILE="$AGENT_DIR/overlord-agent.service"
+SYSTEMD_TARGET="/etc/systemd/system/overlord-agent.service"
+
+if [[ "$(pwd)" != "$REPO_DIR" ]]; then
+    echo "Run from $REPO_DIR (currently in $(pwd))" >&2
+    exit 1
+fi
+
+echo "==> Creating/updating venv at $VENV_DIR"
+if [[ ! -d "$VENV_DIR" ]]; then
+    python3 -m venv "$VENV_DIR"
+fi
+"$VENV_DIR/bin/pip" install --quiet --upgrade pip
+"$VENV_DIR/bin/pip" install --quiet -r "$AGENT_DIR/requirements.txt"
+
+echo "==> Installing systemd unit"
+sudo cp "$SERVICE_FILE" "$SYSTEMD_TARGET"
+sudo systemctl daemon-reload
+
+echo "==> Enabling + starting overlord-agent"
+sudo systemctl enable overlord-agent
+sudo systemctl restart overlord-agent
+
+sleep 1
+echo "==> Status:"
+sudo systemctl --no-pager status overlord-agent | head -15
+
+echo ""
+echo "==> Smoke test:"
+curl -s http://127.0.0.1:8767/agent/health | python3 -m json.tool || true
+
+echo ""
+echo "Done. Logs: journalctl -u overlord-agent -f"
diff --git a/agent/mcp_overlord.py b/agent/mcp_overlord.py
new file mode 100644
index 00000000..8ad0f31a
--- /dev/null
+++ b/agent/mcp_overlord.py
@@ -0,0 +1,262 @@
+"""MCP stdio server exposing Overlord data to Claude Code.
+
+Configured via .mcp.json at the repo root, which Claude Code auto-loads
+when invoked with cwd=/home/erik/MosswartOverlord. Tool implementations
+live in tools.py — this file is just MCP protocol plumbing.
+
+Run directly with:
+    python3 /home/erik/MosswartOverlord/agent/mcp_overlord.py
+"""
+
+from __future__ import annotations
+
+import asyncio
+import json
+import logging
+from typing import Any
+
+from mcp.server import Server
+from mcp.server.stdio import stdio_server
+from mcp.types import TextContent, Tool
+
+from . import tools as T
+
+logging.basicConfig(
+    level=logging.INFO,
+    format="%(asctime)s %(levelname)s mcp_overlord: %(message)s",
+)
+logger = logging.getLogger("mcp_overlord")
+
+server: Server = Server("overlord")
+
+
+# ─── Tool registry ──────────────────────────────────────────────────
+#
+# Each entry: name → (description, JSON schema, callable async fn).
+# We register them with @server.list_tools / @server.call_tool below.
+
+TOOL_DEFS: dict[str, dict[str, Any]] = {
+    "get_live_players": {
+        "description": (
+            "Return active characters seen in the last ~30 seconds with their "
+            "current position, kills, KPH, vitae, online time, and VTank state. "
+            "Use this for any 'who is online right now / what is X doing' question."
+        ),
+        "schema": {"type": "object", "properties": {}},
+        "fn": lambda _args: T.get_live_players(),
+    },
+    "get_recent_rares": {
+        "description": (
+            "Return rare item finds from the last N hours, newest first. "
+            "Use for questions about recent drops, who is finding rares, or "
+            "rare-rate analysis. Defaults to 24 hours, max 30 days."
+        ),
+        "schema": {
+            "type": "object",
+            "properties": {
+                "hours": {
+                    "type": "integer",
+                    "minimum": 1,
+                    "maximum": 720,
+                    "default": 24,
+                },
+                "limit": {
+                    "type": "integer",
+                    "minimum": 1,
+                    "maximum": 200,
+                    "default": 100,
+                },
+            },
+        },
+        "fn": lambda args: T.get_recent_rares(
+            hours=int(args.get("hours", 24)),
+            limit=int(args.get("limit", 100)),
+        ),
+    },
+    "query_telemetry_db": {
+        "description": (
+            "Run a read-only SQL query against the telemetry database (TimescaleDB). "
+            "Only SELECT / WITH statements are accepted; any DML or DDL is rejected. "
+            "Useful for questions that aren't covered by the other tools — top-N "
+            "lists, custom aggregations, time-window comparisons. "
+            "Available tables include: telemetry_events (hypertable, 30d retention), "
+            "rare_events, spawn_events (hypertable, 7d retention), portals, "
+            "char_stats, rare_stats, rare_stats_sessions, character_stats, "
+            "combat_stats, combat_stats_sessions, server_status. "
+            "The query has a 10s timeout and returns at most 200 rows."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["sql"],
+            "properties": {
+                "sql": {
+                    "type": "string",
+                    "description": "A single PostgreSQL SELECT or WITH ... SELECT statement.",
+                }
+            },
+        },
+        "fn": lambda args: T.query_telemetry_db(str(args["sql"])),
+    },
+    "get_player_state": {
+        "description": (
+            "Combined snapshot for ONE character: live telemetry (if online) "
+            "+ full character stats (attributes, skills, augmentations). "
+            "Use this for questions like 'what is X doing right now' or 'show me X's stats'."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["character_name"],
+            "properties": {
+                "character_name": {"type": "string"},
+            },
+        },
+        "fn": lambda args: T.get_player_state(str(args["character_name"])),
+    },
+    "get_inventory": {
+        "description": (
+            "Full inventory listing for one character — every item with name, "
+            "icon, container, equipped slot, spells, material, tinkers, etc. "
+            "Large response — prefer get_inventory_search for narrow queries."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["character_name"],
+            "properties": {"character_name": {"type": "string"}},
+        },
+        "fn": lambda args: T.get_inventory(str(args["character_name"])),
+    },
+    "get_inventory_search": {
+        "description": (
+            "Filtered inventory search for one character. Pass filter query "
+            "params as the `filters` object. Common filters: name (substring), "
+            "armor_level_min, armor_level_max, material, item_set, has_spell. "
+            "Returns matching items in the same shape as get_inventory."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["character_name"],
+            "properties": {
+                "character_name": {"type": "string"},
+                "filters": {
+                    "type": "object",
+                    "description": "Query params dict, e.g. {\"name\": \"pearl\", \"armor_level_min\": 500}",
+                },
+            },
+        },
+        "fn": lambda args: T.get_inventory_search(
+            str(args["character_name"]), args.get("filters") or {}
+        ),
+    },
+    "get_combat_stats": {
+        "description": (
+            "Lifetime + session combat stats for one character. Includes total "
+            "damage given/received, per-element offense/defense breakdown, kill "
+            "counts, and aetheria surge counts."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["character_name"],
+            "properties": {"character_name": {"type": "string"}},
+        },
+        "fn": lambda args: T.get_combat_stats(str(args["character_name"])),
+    },
+    "get_equipment_cantrips": {
+        "description": (
+            "Currently-equipped items for a character along with their active "
+            "cantrip/spell state. Useful for 'what is X wearing' or 'is X "
+            "running their suit' questions."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["character_name"],
+            "properties": {"character_name": {"type": "string"}},
+        },
+        "fn": lambda args: T.get_equipment_cantrips(str(args["character_name"])),
+    },
+    "get_quest_status": {
+        "description": (
+            "Active quest timers and progress across ALL characters. Returns "
+            "for each character which quests are READY vs counting down."
+        ),
+        "schema": {"type": "object", "properties": {}},
+        "fn": lambda _args: T.get_quest_status(),
+    },
+    "get_server_health": {
+        "description": (
+            "Current Coldeve game-server status: up/down, latency in ms, "
+            "current player count from TreeStats.net, total uptime. Updated "
+            "every 30 seconds in the background."
+        ),
+        "schema": {"type": "object", "properties": {}},
+        "fn": lambda _args: T.get_server_health(),
+    },
+    "suitbuilder_search": {
+        "description": (
+            "Run a constraint-satisfaction armor optimization across all "
+            "characters' inventories ('mules'). Drives the same suitbuilder "
+            "the /suitbuilder.html page uses. Pass the same params dict the "
+            "page sends — see /suitbuilder.html JS for the schema. The search "
+            "is SSE-streaming on the backend; this tool collects until done "
+            "and returns the final suit(s) plus the last few phase events. "
+            "Can take up to 5 minutes for complex constraints — only call "
+            "when the user explicitly asks for an optimization run."
+        ),
+        "schema": {
+            "type": "object",
+            "required": ["params"],
+            "properties": {
+                "params": {
+                    "type": "object",
+                    "description": "Suitbuilder request body (characters, locked slots, set constraints, etc.)",
+                },
+            },
+        },
+        "fn": lambda args: T.suitbuilder_search(args.get("params") or {}),
+    },
+}
+
+
+# ─── MCP protocol wiring ────────────────────────────────────────────
+
+
+@server.list_tools()
+async def list_tools() -> list[Tool]:
+    return [
+        Tool(name=name, description=defn["description"], inputSchema=defn["schema"])
+        for name, defn in TOOL_DEFS.items()
+    ]
+
+
+@server.call_tool()
+async def call_tool(name: str, arguments: dict[str, Any]) -> list[TextContent]:
+    if name not in TOOL_DEFS:
+        return [TextContent(type="text", text=f"unknown tool: {name}")]
+
+    fn = TOOL_DEFS[name]["fn"]
+    try:
+        result = await fn(arguments or {})
+    except T.SqlNotAllowed as e:
+        return [TextContent(type="text", text=f"REJECTED: {e}")]
+    except Exception as e:  # noqa: BLE001
+        logger.exception("tool %s failed", name)
+        return [TextContent(type="text", text=f"ERROR: {type(e).__name__}: {e}")]
+
+    text = json.dumps(result, default=str, ensure_ascii=False, indent=2)
+    return [TextContent(type="text", text=text)]
+
+
+async def _run() -> None:
+    logger.info("starting MCP stdio server (overlord)")
+    try:
+        async with stdio_server() as (reader, writer):
+            await server.run(reader, writer, server.create_initialization_options())
+    finally:
+        await T.shutdown()
+
+
+def main() -> None:
+    asyncio.run(_run())
+
+
+if __name__ == "__main__":
+    main()
diff --git a/agent/overlord-agent.service b/agent/overlord-agent.service
new file mode 100644
index 00000000..5a026a81
--- /dev/null
+++ b/agent/overlord-agent.service
@@ -0,0 +1,29 @@
+[Unit]
+Description=Overlord Agent (Claude Code shell-out service)
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Type=simple
+User=erik
+Group=erik
+# Working directory MUST be the repo root so:
+#  - claude -p sessions land at ~/.claude/projects/-home-erik-MosswartOverlord/
+#  - .mcp.json is auto-loaded
+WorkingDirectory=/home/erik/MosswartOverlord
+EnvironmentFile=-/home/erik/MosswartOverlord/.env
+# Run inside the venv populated by install.sh.
+ExecStart=/home/erik/MosswartOverlord/agent/.venv/bin/python -m agent.service
+Restart=on-failure
+RestartSec=3
+# Don't tie up the disk with stdout — let journald handle it.
+StandardOutput=journal
+StandardError=journal
+
+# Resource hints — the service is light, but cap so a runaway can't
+# starve the host.
+MemoryLimit=512M
+CPUQuota=200%
+
+[Install]
+WantedBy=multi-user.target
diff --git a/agent/requirements.txt b/agent/requirements.txt
new file mode 100644
index 00000000..c14e3454
--- /dev/null
+++ b/agent/requirements.txt
@@ -0,0 +1,13 @@
+fastapi>=0.110
+uvicorn[standard]>=0.30
+httpx>=0.27
+itsdangerous>=2.2
+pydantic>=2.6
+# MCP server SDK (used by mcp_overlord.py for the stdio MCP server)
+mcp>=1.0
+# SQL safety: parses SQL to enforce read-only on the query_db tool
+sqlglot>=25.0
+# Direct DB access for the read-only query tool and rare_events lookups
+asyncpg>=0.29
+# .env loader
+python-dotenv>=1.0
diff --git a/agent/service.py b/agent/service.py
new file mode 100644
index 00000000..d3fbb1fb
--- /dev/null
+++ b/agent/service.py
@@ -0,0 +1,213 @@
+"""Overlord Agent host-side FastAPI service.
+
+Runs OUTSIDE Docker (host-side) on port 8767.
+
+Endpoints:
+    GET  /agent/health        — liveness check
+    POST /agent/sessions/new  — returns a fresh session UUID
+    POST /agent/ask           — runs claude -p with given session
+    GET  /agent/sessions/{session_id}/history
+                              — replays a session's JSONL on disk
+
+Auth: every endpoint except /health requires the same browser session
+cookie that dereth-tracker issues.
+"""
+
+from __future__ import annotations
+
+import json
+import logging
+import time
+import uuid
+from pathlib import Path
+from typing import Any
+
+from fastapi import Depends, FastAPI, HTTPException
+from fastapi.responses import JSONResponse
+from pydantic import BaseModel, Field
+
+from . import auth
+from .claude_wrapper import CLAUDE_CWD, ClaudeError, ask_claude
+
+logging.basicConfig(
+    level=logging.INFO,
+    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
+)
+logger = logging.getLogger("agent")
+
+app = FastAPI(title="Overlord Agent", version="0.1.0")
+
+
+# ─── Models ──────────────────────────────────────────────────────────
+
+
+class AskRequest(BaseModel):
+    session_id: str = Field(
+        ..., description="Stable per-conversation UUID stored in browser localStorage"
+    )
+    message: str = Field(..., min_length=1, max_length=10_000)
+
+
+class AskResponse(BaseModel):
+    result: str
+    session_id: str
+    duration_ms: int
+    num_turns: int
+    is_error: bool
+
+
+class NewSessionResponse(BaseModel):
+    session_id: str
+
+
+# ─── Helpers ─────────────────────────────────────────────────────────
+
+
+def _encode_cwd(cwd: str) -> str:
+    """Match Claude Code's on-disk encoding for cwd → directory name.
+
+    Claude Code stores sessions at ~/.claude/projects/<encoded-cwd>/<session-id>.jsonl
+    where non-alphanumerics in the cwd are replaced with hyphens.
+ Example: /home/erik/MosswartOverlord β†’ -home-erik-MosswartOverlord + """ + return "".join(c if c.isalnum() else "-" for c in cwd) + + +def _sessions_dir() -> Path: + return Path.home() / ".claude" / "projects" / _encode_cwd(CLAUDE_CWD) + + +# ─── Endpoints ─────────────────────────────────────────────────────── + + +@app.get("/agent/health") +async def health() -> dict: + """Liveness probe β€” no auth, used by deployment scripts.""" + return { + "status": "ok", + "claude_cwd": CLAUDE_CWD, + "sessions_dir_exists": _sessions_dir().exists(), + } + + +@app.post("/agent/sessions/new", response_model=NewSessionResponse) +async def new_session(_user: dict = Depends(auth.require_user)) -> NewSessionResponse: + """Generate a fresh session UUID. Doesn't touch disk β€” claude creates the + JSONL file when the first message lands.""" + return NewSessionResponse(session_id=str(uuid.uuid4())) + + +@app.post("/agent/ask", response_model=AskResponse) +async def agent_ask( + req: AskRequest, user: dict = Depends(auth.require_user) +) -> AskResponse: + """Forward a message to claude -p resuming the given session.""" + started = time.monotonic() + try: + result = await ask_claude(req.message, req.session_id) + except ClaudeError as e: + logger.warning( + "claude failed user=%s session=%s err=%s", user["username"], req.session_id, e + ) + raise HTTPException(status_code=502, detail=str(e)) + + elapsed_ms = int((time.monotonic() - started) * 1000) + logger.info( + "ask user=%s session=%s turns=%d duration_ms=%d (subprocess=%dms)", + user["username"], + result.session_id, + result.num_turns, + elapsed_ms, + result.duration_ms, + ) + + return AskResponse( + result=result.result, + session_id=result.session_id, + duration_ms=result.duration_ms, + num_turns=result.num_turns, + is_error=result.is_error, + ) + + +@app.get("/agent/sessions/{session_id}/history") +async def session_history( + session_id: str, _user: dict = Depends(auth.require_user) +) -> JSONResponse: + """Replay a 
session's JSONL from ~/.claude/projects/.../.jsonl. + + Returns a flat array of {role, text, timestamp} for the chat window. + Returns an empty array if the session file doesn't exist yet. + """ + # UUID sanity check to prevent path traversal β€” claude Code uses uuid4 + try: + uuid.UUID(session_id) + except ValueError: + raise HTTPException(status_code=400, detail="invalid session_id") + + path = _sessions_dir() / f"{session_id}.jsonl" + if not path.is_file(): + return JSONResponse({"messages": []}) + + messages: list[dict[str, Any]] = [] + try: + with path.open("r", encoding="utf-8") as f: + for line in f: + line = line.strip() + if not line: + continue + try: + obj = json.loads(line) + except json.JSONDecodeError: + continue + # Claude Code records turns with type=user / type=assistant. + # Tool-use traffic is verbose; skip it for the chat UI. + msg_type = obj.get("type") + if msg_type not in ("user", "assistant"): + continue + msg = obj.get("message") or {} + content = msg.get("content") + # `content` may be a string or list[{type,text}]. 
+ if isinstance(content, str): + text = content + elif isinstance(content, list): + text = "".join( + part.get("text", "") + for part in content + if isinstance(part, dict) and part.get("type") == "text" + ) + else: + text = "" + if not text: + continue + messages.append( + { + "role": msg_type, + "text": text, + "timestamp": obj.get("timestamp"), + } + ) + except OSError as e: + logger.warning("failed to read session %s: %s", session_id, e) + raise HTTPException(status_code=500, detail="failed to read session") + + return JSONResponse({"messages": messages}) + + +# ─── Entrypoint ────────────────────────────────────────────────────── + + +def main() -> None: + """Run via `python -m agent.service` for local testing.""" + import uvicorn + + uvicorn.run( + "agent.service:app", + host="127.0.0.1", + port=8767, + log_level="info", + ) + + +if __name__ == "__main__": + main() diff --git a/agent/sql/0001_overlord_agent_ro.sql b/agent/sql/0001_overlord_agent_ro.sql new file mode 100644 index 00000000..b87f9cce --- /dev/null +++ b/agent/sql/0001_overlord_agent_ro.sql @@ -0,0 +1,35 @@ +-- Read-only PG role for the Overlord Agent's `query_telemetry_db` MCP tool. +-- +-- This is the second line of defense (the first is the sqlglot parser in +-- agent/tools.py:assert_read_only). Even a parser bypass cannot mutate +-- because this role only has SELECT. +-- +-- Apply on the dereth-db container: +-- docker exec dereth-db psql -U postgres -d dereth -f - < agent/sql/0001_overlord_agent_ro.sql +-- (substitute the password before running, or keep as a placeholder and +-- ALTER ROLE … PASSWORD '…' separately) + +DO $$ +BEGIN + IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'overlord_agent_ro') THEN + CREATE ROLE overlord_agent_ro NOINHERIT LOGIN PASSWORD 'change-me-set-via-alter-role'; + END IF; +END$$; + +GRANT CONNECT ON DATABASE dereth TO overlord_agent_ro; +GRANT USAGE ON SCHEMA public TO overlord_agent_ro; + +-- Grant SELECT on all current public tables. 
+GRANT SELECT ON ALL TABLES IN SCHEMA public TO overlord_agent_ro;
+GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO overlord_agent_ro;
+
+-- And on any future tables created in public.
+ALTER DEFAULT PRIVILEGES IN SCHEMA public
+    GRANT SELECT ON TABLES TO overlord_agent_ro;
+
+-- TimescaleDB-internal schema (chunks live here). Read on hypertable chunks
+-- requires SELECT on _timescaledb_internal too.
+GRANT USAGE ON SCHEMA _timescaledb_internal TO overlord_agent_ro;
+GRANT SELECT ON ALL TABLES IN SCHEMA _timescaledb_internal TO overlord_agent_ro;
+ALTER DEFAULT PRIVILEGES IN SCHEMA _timescaledb_internal
+    GRANT SELECT ON TABLES TO overlord_agent_ro;
diff --git a/agent/tools.py b/agent/tools.py
new file mode 100644
index 00000000..e9e743f8
--- /dev/null
+++ b/agent/tools.py
@@ -0,0 +1,401 @@
+"""Tool implementations exposed to Claude via the MCP server.
+
+These are pure functions β€” the MCP server (mcp_overlord.py) only handles
+the protocol wrapping. Keep tool logic here so it's easy to test in
+isolation and reuse from elsewhere (e.g. /agent/ask shortcuts).
+
+Two flavors of data access:
+  * HTTP loopback to the dereth-tracker container (for endpoints that
+    already exist and have validated logic).
+  * Direct asyncpg to the read-only PG role for ad-hoc queries
+    (rare_events, telemetry, anything not exposed via HTTP).
+"""
+
+from __future__ import annotations
+
+import asyncio
+import json
+import logging
+import os
+from typing import Any
+from urllib.parse import quote
+
+import asyncpg
+import httpx
+import sqlglot
+import sqlglot.errors
+import sqlglot.expressions as exp
+
+logger = logging.getLogger(__name__)
+
+# The dereth-tracker FastAPI app, reachable from the host because Docker
+# port-forwards 127.0.0.1:8765:8765 in docker-compose.yml.
+TRACKER_URL = os.getenv("TRACKER_URL", "http://127.0.0.1:8765")
+
+# Read-only PG role; see deployment plan.
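The `_http()` and `_db()` helpers that follow both use the same lazy-singleton shape: build the expensive resource on first call, then hand back the cached instance. A generic, runnable sketch of that pattern (names here are illustrative, not from this diff):

```python
import asyncio
from typing import Any, Awaitable, Callable


class LazyResource:
    """Create a shared resource on first use, then reuse it afterwards."""

    def __init__(self, factory: Callable[[], Awaitable[Any]]) -> None:
        self._factory = factory
        self._value: Any = None

    async def get(self) -> Any:
        if self._value is None:
            self._value = await self._factory()
        return self._value


async def demo() -> int:
    calls = 0

    async def make_pool() -> object:
        nonlocal calls
        calls += 1
        return object()  # stand-in for e.g. asyncpg.create_pool(...)

    lazy = LazyResource(make_pool)
    first = await lazy.get()
    second = await lazy.get()
    assert first is second  # same instance is reused
    return calls
```

One caveat the module-level version shares with this sketch: without a lock, two concurrent first calls can race and build the resource twice. That is harmless for idempotent factories, but worth an `asyncio.Lock` otherwise.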
+DB_DSN = os.getenv(
+    "AGENT_DB_DSN",
+    "postgresql://overlord_agent_ro@127.0.0.1:5432/dereth",
+)
+
+# Hard caps for the SQL tool to keep the agent honest.
+SQL_TIMEOUT_S = float(os.getenv("AGENT_SQL_TIMEOUT_S", "10"))
+SQL_MAX_ROWS = int(os.getenv("AGENT_SQL_MAX_ROWS", "200"))
+
+
+# ─── HTTP loopback helpers ──────────────────────────────────────────
+
+
+_http_client: httpx.AsyncClient | None = None
+
+
+async def _http() -> httpx.AsyncClient:
+    """Lazily create + reuse a single httpx client (connection pool)."""
+    global _http_client
+    if _http_client is None:
+        _http_client = httpx.AsyncClient(base_url=TRACKER_URL, timeout=30.0)
+    return _http_client
+
+
+async def _get_json(path: str) -> Any:
+    client = await _http()
+    resp = await client.get(path)
+    resp.raise_for_status()
+    return resp.json()
+
+
+# ─── DB helpers ─────────────────────────────────────────────────────
+
+
+_db_pool: asyncpg.Pool | None = None
+
+
+async def _db() -> asyncpg.Pool:
+    global _db_pool
+    if _db_pool is None:
+        _db_pool = await asyncpg.create_pool(
+            DB_DSN, min_size=1, max_size=4, command_timeout=SQL_TIMEOUT_S
+        )
+    return _db_pool
+
+
+# ─── SQL safety ─────────────────────────────────────────────────────
+
+
+_ALLOWED_TOPLEVEL = (exp.Select, exp.With, exp.Union, exp.Subquery)
+
+
+class SqlNotAllowed(ValueError):
+    """Raised when the agent attempts a non-read-only SQL statement."""
+
+
+def assert_read_only(sql: str) -> None:
+    """Parse `sql` and reject anything that isn't a read query.
+
+    Belt-and-suspenders: the PG role is also read-only (GRANT SELECT only),
+    so even a parser bypass can't actually mutate. This is the first line
+    of defense β€” friendlier error messages and faster rejection.
+ """ + try: + statements = sqlglot.parse(sql, read="postgres") + except sqlglot.errors.ParseError as e: + raise SqlNotAllowed(f"SQL parse error: {e}") from e + + if not statements: + raise SqlNotAllowed("empty SQL") + if len(statements) > 1: + raise SqlNotAllowed("only one statement allowed") + + stmt = statements[0] + if not isinstance(stmt, _ALLOWED_TOPLEVEL): + raise SqlNotAllowed( + f"only SELECT / WITH allowed, got {type(stmt).__name__}" + ) + + # Walk the tree and reject any DML/DDL hidden inside (e.g. CTE with + # INSERT β€” yes, postgres allows that). + for node in stmt.walk(): + if isinstance( + node, + ( + exp.Insert, + exp.Update, + exp.Delete, + exp.Drop, + exp.AlterTable, + exp.Create, + exp.TruncateTable, + exp.Merge, + ), + ): + raise SqlNotAllowed( + f"writes/DDL not allowed (found {type(node).__name__})" + ) + + +# ─── Tools ────────────────────────────────────────────────────────── + + +async def get_live_players() -> dict[str, Any]: + """Active characters (telemetry seen in the last ~30s). + + Returns the same shape as `GET /live`: + { "players": [ { character_name, ew, ns, z, kills, ... 
+          } ] }
+    """
+    return await _get_json("/live")
+
+
+async def get_recent_rares(hours: int = 24, limit: int = 100) -> dict[str, Any]:
+    """Rare item finds in the last N hours, newest first."""
+    hours = max(1, min(int(hours), 24 * 30))  # cap at 30 days
+    limit = max(1, min(int(limit), SQL_MAX_ROWS))
+    pool = await _db()
+    rows = await pool.fetch(
+        """
+        SELECT timestamp, character_name, name, ew, ns, z
+        FROM rare_events
+        WHERE timestamp >= NOW() - ($1::int || ' hours')::interval
+        ORDER BY timestamp DESC
+        LIMIT $2
+        """,
+        hours,
+        limit,
+    )
+    return {
+        "hours": hours,
+        "count": len(rows),
+        "rares": [
+            {
+                "timestamp": r["timestamp"].isoformat(),
+                "character_name": r["character_name"],
+                "name": r["name"],
+                "ew": r["ew"],
+                "ns": r["ns"],
+                "z": r["z"],
+            }
+            for r in rows
+        ],
+    }
+
+
+async def query_telemetry_db(sql: str) -> dict[str, Any]:
+    """Run a read-only SQL statement against the telemetry DB.
+
+    The query is parsed and any non-SELECT/WITH statement is rejected.
+    The connection role is also GRANT SELECT only (defense in depth).
+
+    Useful for ad-hoc questions: "top 5 KPH today", "kill count by character
+    yesterday", etc.
+    """
+    assert_read_only(sql)
+    pool = await _db()
+    try:
+        rows = await asyncio.wait_for(pool.fetch(sql), timeout=SQL_TIMEOUT_S)
+    except asyncio.TimeoutError:
+        raise SqlNotAllowed(f"query exceeded {SQL_TIMEOUT_S:.0f}s timeout")
+
+    if len(rows) > SQL_MAX_ROWS:
+        rows = rows[:SQL_MAX_ROWS]
+        truncated = True
+    else:
+        truncated = False
+
+    return {
+        "row_count": len(rows),
+        "truncated": truncated,
+        "rows": [
+            {k: _json_safe(v) for k, v in dict(r).items()} for r in rows
+        ],
+    }
+
+
+def _json_safe(v: Any) -> Any:
+    """Convert datetime / Decimal / etc.
+    to JSON-friendly types."""
+    from datetime import date, datetime, timedelta
+    from decimal import Decimal
+
+    if v is None:
+        return None
+    if isinstance(v, (str, int, float, bool)):
+        return v
+    if isinstance(v, (datetime, date)):
+        return v.isoformat()
+    if isinstance(v, timedelta):
+        return v.total_seconds()
+    if isinstance(v, Decimal):
+        return float(v)
+    if isinstance(v, (list, tuple)):
+        return [_json_safe(x) for x in v]
+    if isinstance(v, dict):
+        return {k: _json_safe(x) for k, x in v.items()}
+    return str(v)
+
+
+# ─── Per-character lookups (HTTP loopback) ──────────────────────────
+
+
+async def get_player_state(character_name: str) -> dict[str, Any]:
+    """Combined snapshot for one character: live telemetry + character stats.
+
+    Returns:
+        {
+          "character_name": str,
+          "telemetry": {...} | None,        # from /live, or None if offline
+          "character_stats": {...} | None,  # from /character-stats/<name>
+          "online": bool,                   # whether telemetry was found in /live
+        }
+    """
+    name = character_name.strip()
+    live = await _get_json("/live")
+    players = live.get("players", []) if isinstance(live, dict) else []
+    telemetry = next(
+        (p for p in players if p.get("character_name") == name), None
+    )
+
+    char_stats: dict[str, Any] | None = None
+    try:
+        client = await _http()
+        resp = await client.get(f"/character-stats/{quote(name, safe='')}")
+        if resp.status_code == 200:
+            char_stats = resp.json()
+    except Exception:
+        char_stats = None
+
+    return {
+        "character_name": name,
+        "online": telemetry is not None,
+        "telemetry": telemetry,
+        "character_stats": char_stats,
+    }
+
+
+async def get_inventory(character_name: str) -> dict[str, Any]:
+    """Full inventory for one character.
+    Items only β€” for filtered queries
+    use get_inventory_search."""
+    client = await _http()
+    resp = await client.get(f"/inventory/{quote(character_name, safe='')}")
+    resp.raise_for_status()
+    return resp.json()
+
+
+async def get_inventory_search(
+    character_name: str, filters: dict[str, Any] | None = None
+) -> dict[str, Any]:
+    """Filtered inventory search. `filters` is a dict of query params, e.g.
+    {"name": "pearl", "armor_level_min": 500}.
+
+    Caller is expected to know the supported filters from the dereth-tracker
+    /inventory/{name}/search route β€” pass through opaquely.
+    """
+    client = await _http()
+    resp = await client.get(
+        f"/inventory/{quote(character_name, safe='')}/search",
+        params=filters or {},
+    )
+    resp.raise_for_status()
+    return resp.json()
+
+
+async def get_combat_stats(character_name: str) -> dict[str, Any]:
+    """Lifetime + session combat stats for one character (per-element split,
+    monster encounters, surge counts)."""
+    client = await _http()
+    resp = await client.get(f"/combat-stats/{quote(character_name, safe='')}")
+    resp.raise_for_status()
+    return resp.json()
+
+
+async def get_equipment_cantrips(character_name: str) -> dict[str, Any]:
+    """Currently-equipped items + their active cantrip/spell state."""
+    client = await _http()
+    resp = await client.get(
+        f"/equipment-cantrip-state/{quote(character_name, safe='')}"
+    )
+    resp.raise_for_status()
+    return resp.json()
+
+
+async def get_quest_status() -> dict[str, Any]:
+    """All characters' active quest timers and progress."""
+    return await _get_json("/quest-status")
+
+
+async def get_server_health() -> dict[str, Any]:
+    """Coldeve server status: up/down, latency, current player count, uptime."""
+    return await _get_json("/server-health")
+
+
+async def suitbuilder_search(
+    params: dict[str, Any], max_phase_events: int = 50
+) -> dict[str, Any]:
+    """Drive a suitbuilder constraint search synchronously.
+
+    The dereth-tracker /inv/suitbuilder/search endpoint is an SSE stream.
+    We collect events until the stream closes, drop intermediate phase
+    chatter (keeping the last N), and return:
+
+        { "final_suits": [...], "phases": [...latest few...] }
+
+    `params` is the JSON body the suitbuilder expects. Call it like the
+    /suitbuilder.html page does.
+    """
+    final: list[dict[str, Any]] = []
+    phases: list[dict[str, Any]] = []
+
+    # Use a fresh long-timeout client for the SSE stream β€” don't tie up the
+    # shared pool for a 5-minute search.
+    async with httpx.AsyncClient(
+        base_url=TRACKER_URL, timeout=httpx.Timeout(300.0, connect=10.0)
+    ) as stream_client:
+        async with stream_client.stream(
+            "POST",
+            "/inv/suitbuilder/search",
+            json=params,
+            headers={"Content-Type": "application/json"},
+        ) as resp:
+            resp.raise_for_status()
+            event_name = "message"
+            data_lines: list[str] = []
+            # httpx's aiter_lines() yields decoded str lines.
+            async for raw_line in resp.aiter_lines():
+                line = raw_line.rstrip("\r")
+                if line.startswith("event:"):
+                    event_name = line[6:].strip()
+                elif line.startswith("data:"):
+                    data_lines.append(line[5:].strip())
+                elif line == "":
+                    # Blank line terminates one SSE event β€” dispatch it.
+                    if data_lines:
+                        try:
+                            payload = json.loads("\n".join(data_lines))
+                        except json.JSONDecodeError:
+                            payload = {"raw": "\n".join(data_lines)}
+                        if event_name in ("result", "final"):
+                            final.append(payload)
+                        elif event_name == "error":
+                            phases.append({"event": "error", "data": payload})
+                        else:
+                            phases.append({"event": event_name, "data": payload})
+                        phases = phases[-max_phase_events:]
+                    data_lines = []
+                    event_name = "message"
+
+    return {
+        "final_suits": final,
+        "phases": phases,
+        "phase_count": len(phases),
+    }
+
+
+# ─── Cleanup ────────────────────────────────────────────────────────
+
+
+async def shutdown() -> None:
+    """Close shared resources.
+    Call from MCP server lifespan / on exit."""
+    global _http_client, _db_pool
+    if _http_client is not None:
+        await _http_client.aclose()
+        _http_client = None
+    if _db_pool is not None:
+        await _db_pool.close()
+        _db_pool = None
diff --git a/frontend/src/api/client.ts b/frontend/src/api/client.ts
index 1a8747e1..2f74a9ff 100644
--- a/frontend/src/api/client.ts
+++ b/frontend/src/api/client.ts
@@ -9,6 +9,26 @@ export async function apiFetch<T>(path: string): Promise<T> {
   return res.json();
 }
 
+/**
+ * POST JSON to an authenticated API endpoint.
+ * Sends `body` as JSON, includes session cookie, parses JSON response.
+ * Throws Error with HTTP status on non-2xx.
+ */
+export async function apiPost<T>(path: string, body: unknown): Promise<T> {
+  const res = await fetch(`${API_BASE}${path}`, {
+    method: 'POST',
+    credentials: 'include',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify(body ?? {}),
+  });
+  if (!res.ok) {
+    let detail = '';
+    try { detail = (await res.json())?.detail ?? ''; } catch { /* ignore */ }
+    throw new Error(`API ${path}: ${res.status}${detail ? ` (${detail})` : ''}`);
+  }
+  return res.json();
+}
+
 export function wsUrl(): string {
   const proto = location.protocol === 'https:' ?
     'wss:' : 'ws:';
   return `${proto}//${location.host}/api/ws/live`;
diff --git a/frontend/src/api/endpoints.ts b/frontend/src/api/endpoints.ts
index 52488544..609d3b1e 100644
--- a/frontend/src/api/endpoints.ts
+++ b/frontend/src/api/endpoints.ts
@@ -1,4 +1,4 @@
-import { apiFetch } from './client';
+import { apiFetch, apiPost } from './client';
 import type { TelemetrySnapshot, CombatStatsMessage, ServerHealth } from '../types';
 
 interface LiveResponse {
@@ -19,3 +19,30 @@ export const getServerHealth = () => apiFetch<ServerHealth>('/server-health');
 export const getTotalRares = () => apiFetch('/total-rares');
 export const getTotalKills = () => apiFetch('/total-kills');
 export const getCharacterStats = (name: string) => apiFetch<Record<string, unknown>>(`/character-stats/${encodeURIComponent(name)}`);
+
+// ─── Agent endpoints (host-side service via /api/agent/*) ──────────────────
+
+export interface AgentAskResponse {
+  result: string;
+  session_id: string;
+  duration_ms: number;
+  num_turns: number;
+  is_error: boolean;
+}
+
+export interface AgentHistoryMessage {
+  role: 'user' | 'assistant';
+  text: string;
+  timestamp?: string;
+}
+
+export const agentAsk = (message: string, sessionId: string) =>
+  apiPost<AgentAskResponse>('/agent/ask', { message, session_id: sessionId });
+
+export const agentNewSession = () =>
+  apiPost<{ session_id: string }>('/agent/sessions/new', {});
+
+export const agentSessionHistory = (sessionId: string) =>
+  apiFetch<{ messages: AgentHistoryMessage[] }>(
+    `/agent/sessions/${encodeURIComponent(sessionId)}/history`,
+  );
diff --git a/frontend/src/components/sidebar/SidebarWindowButtons.tsx b/frontend/src/components/sidebar/SidebarWindowButtons.tsx
index d2f9c251..63cb6234 100644
--- a/frontend/src/components/sidebar/SidebarWindowButtons.tsx
+++ b/frontend/src/components/sidebar/SidebarWindowButtons.tsx
@@ -6,6 +6,8 @@ export const SidebarWindowButtons: React.FC = () => {
   return (
+          openWindow('agent', 'Overlord Assistant')}>πŸ€– Assistant
           openWindow('playerdash', 'Player Dashboard')}>πŸ‘₯ Dashboard
+function newUuid(): string {
+  const r = (n: number) => Math.floor(Math.random() * n);
+  return `${r(0x100000000).toString(16).padStart(8, '0')}-${r(0x10000).toString(16).padStart(4, '0')}-4${r(0x1000).toString(16).padStart(3, '0')}-${(8 + r(4)).toString(16)}${r(0x1000).toString(16).padStart(3, '0')}-${r(0x1000000000000).toString(16).padStart(12, '0')}`;
+}
+
+function loadSessionId(): string {
+  try {
+    const stored = localStorage.getItem(SESSION_KEY);
+    if (stored) return stored;
+  } catch { /* ignore */ }
+  const fresh = newUuid();
+  try { localStorage.setItem(SESSION_KEY, fresh); } catch { /* ignore */ }
+  return fresh;
+}
+
+export const AgentWindow: React.FC = ({ id, zIndex }) => {
+  const [sessionId, setSessionId] = useState(() => loadSessionId());
+  const [messages, setMessages] = useState<ChatMsg[]>([]);
+  const [input, setInput] = useState('');
+  const [loading, setLoading] = useState(false);
+  const [hydrating, setHydrating] = useState(true);
+  const scrollRef = useRef<HTMLDivElement>(null);
+
+  // Rehydrate from server-side session JSONL on mount / session change.
+  useEffect(() => {
+    let cancelled = false;
+    setHydrating(true);
+    agentSessionHistory(sessionId)
+      .then(res => {
+        if (cancelled) return;
+        const msgs: ChatMsg[] = (res.messages ?? []).map((m: AgentHistoryMessage) => ({
+          role: m.role,
+          text: m.text,
+        }));
+        setMessages(msgs);
+      })
+      .catch(() => {
+        if (!cancelled) setMessages([]);
+      })
+      .finally(() => {
+        if (!cancelled) setHydrating(false);
+      });
+    return () => { cancelled = true; };
+  }, [sessionId]);
+
+  // Auto-scroll to bottom on new messages.
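For reference, the `newUuid` fallback above hand-assembles the v4 UUID shape: five hex groups of 8-4-4-4-12 digits, a literal `4` version nibble, and a variant nibble in `8`–`b`. A quick Python check of those structural invariants (illustrative only):

```python
import re
import uuid

# 8-4-4-4-12 lowercase hex, version nibble '4', variant nibble 8/9/a/b.
V4_SHAPE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$"
)


def is_v4_shaped(s: str) -> bool:
    return bool(V4_SHAPE.match(s))
```

Real v4 UUIDs from `uuid.uuid4()` match this shape; a `Math.random`-based fallback like the one above matches the shape too but is not cryptographically random, which is acceptable for a local chat-session key.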
+  useEffect(() => {
+    const el = scrollRef.current;
+    if (el) el.scrollTop = el.scrollHeight;
+  }, [messages.length, loading]);
+
+  const send = useCallback(async () => {
+    const text = input.trim();
+    if (!text || loading) return;
+    setInput('');
+    setMessages(prev => [...prev, { role: 'user', text }]);
+    setLoading(true);
+    try {
+      const res = await agentAsk(text, sessionId);
+      setMessages(prev => [
+        ...prev,
+        { role: res.is_error ? 'error' : 'assistant', text: res.result || '(no response)' },
+      ]);
+    } catch (err) {
+      setMessages(prev => [
+        ...prev,
+        { role: 'error', text: `Request failed: ${String(err)}` },
+      ]);
+    } finally {
+      setLoading(false);
+    }
+  }, [input, loading, sessionId]);
+
+  const newChat = useCallback(async () => {
+    if (loading) return;
+    let fresh = '';
+    try {
+      const res = await agentNewSession();
+      fresh = res.session_id;
+    } catch {
+      fresh = newUuid();
+    }
+    try { localStorage.setItem(SESSION_KEY, fresh); } catch { /* ignore */ }
+    setSessionId(fresh);
+    setMessages([]);
+    setInput('');
+  }, [loading]);
+
+  const onKeyDown = useCallback((e: React.KeyboardEvent) => {
+    if (e.key === 'Enter' && !e.shiftKey) {
+      e.preventDefault();
+      void send();
+    }
+  }, [send]);
+
+  return (
+
+ + {sessionId.slice(0, 8)}… +
+ +
+ {hydrating && messages.length === 0 && ( +
Loading conversation…
+ )} + {!hydrating && messages.length === 0 && ( +
+ Ask anything about the live game state β€” players, kills, inventory, + suitbuilder, recent rares, etc. +
+ )} + {messages.map((m, i) => ( +
+
+ {m.role === 'user' ? 'You' : m.role === 'assistant' ? 'Overlord' : 'Error'} +
+
{m.text}
+
+ ))} + {loading && ( +
+
Overlord
+
Thinking…
+
+ )} +
+ +
{ e.preventDefault(); void send(); }} + > +