feat(agent): Phase 1 — chat-window AI assistant via Claude Code subprocess
Adds an in-dashboard AI assistant that answers questions about live game
state. Designed reactively (no background loops) — every user message in
the chat window or via /api/agent/ask runs one `claude -p` invocation.
Architecture:
- New host-side FastAPI service (agent/) on 127.0.0.1:8767, OUTSIDE the
dereth-tracker Docker container because `claude` and ~/.claude
credentials live on the host.
- nginx routes /api/agent/* to the host service.
- The same browser session cookie the tracker issues authenticates
agent requests (shared SECRET_KEY).
- The agent shells out to `claude -p --session-id <uuid>` with
cwd=/home/erik/MosswartOverlord. Sessions persist as JSONL on disk
via Claude Code's built-in machinery.
- An MCP stdio server (agent/mcp_overlord.py) exposes tools to Claude:
get_live_players, get_recent_rares, query_telemetry_db (read-only,
parsed by sqlglot to reject DML/DDL), get_player_state, get_inventory,
get_inventory_search, get_combat_stats, get_equipment_cantrips,
get_quest_status, get_server_health, suitbuilder_search.
- Read-only PG role (overlord_agent_ro) is the second line of defense
on the SQL tool — even a parser bypass can't mutate.
Frontend:
- AgentWindow.tsx — draggable chat window with localStorage-pinned
session UUID, "New Chat" button, on-mount rehydration from
/agent/sessions/{id}/history (parses Claude Code's JSONL).
- Wired into WindowRenderer + Sidebar (🤖 Assistant button).
Operational:
- systemd unit (overlord-agent.service) + install.sh.
- agent/README.md documents env vars, deploy flow, smoke tests.
- nginx/overlord.conf gets a new /api/agent/ location with 180s timeout.
- CLAUDE.md gets an "Overlord Assistant Mode" section briefing the
agent on which tools to use and how to behave.
NOT YET DEPLOYED — server still needs:
1. Apply agent/sql/0001_overlord_agent_ro.sql + ALTER ROLE password
2. Add AGENT_DB_DSN to /home/erik/MosswartOverlord/.env
3. bash agent/install.sh (creates venv, installs unit, starts service)
4. sudo cp /home/erik/MosswartOverlord/nginx/overlord.conf to
   /etc/nginx/sites-enabled/overlord, then reload nginx
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
parent aeddaf9925
commit 79cf88d3f7
35 changed files with 1763 additions and 25 deletions
146  agent/README.md  Normal file
@@ -0,0 +1,146 @@

# Overlord Agent

A small host-side Python service that gives Claude Code (running in
headless mode) access to live Overlord data so it can answer questions
from the dashboard chat window.

## Why a separate service?

`dereth-tracker` runs in Docker. The `claude` CLI binary at
`/home/erik/.local/bin/claude` depends on `~/.claude` credentials owned
by user `erik` on the host. The tracker container can't invoke it.

So this service runs **outside** Docker, listens on `127.0.0.1:8767`,
and nginx routes `/api/agent/*` to it. It validates the same browser
session cookie the tracker issues (shared `SECRET_KEY`) and shells out
to `claude -p` with `cwd=/home/erik/MosswartOverlord`.

## Architecture

```
Browser ──nginx──┬─► /api/* ───────► dereth-tracker (Docker, 8765)
                 │
                 └─► /api/agent/* ─► overlord-agent (host, 8767)
                                       │
                                       ├─► subprocess: claude -p ...
                                       │     │
                                       │     └─► MCP stdio ──► mcp_overlord.py
                                       │                         │
                                       │                         ├─► HTTP loopback to tracker
                                       │                         └─► asyncpg to dereth-db
                                       │
                                       └─► validates "session" cookie
```

## Files

| File | What |
|------|------|
| `service.py` | FastAPI app (`/agent/health`, `/agent/sessions/new`, `/agent/ask`, `/agent/sessions/{id}/history`) |
| `auth.py` | Session-cookie validation (mirrors `main.py:1013-1019`) |
| `claude_wrapper.py` | `asyncio.create_subprocess_exec("claude", "-p", ...)` |
| `tools.py` | Pure tool implementations (HTTP loopback + read-only DB) |
| `mcp_overlord.py` | MCP stdio server registering tools for Claude Code |
| `sql/0001_overlord_agent_ro.sql` | Read-only PG role for the SQL tool |
| `overlord-agent.service` | systemd unit |
| `install.sh` | One-shot installer (venv + pip install + systemd) |

## Required env vars (in repo-root `.env`)

```
SECRET_KEY=<same value the tracker uses to sign cookies>
AGENT_DB_DSN=postgresql://overlord_agent_ro:<password>@127.0.0.1:5432/dereth
TRACKER_URL=http://127.0.0.1:8765        # optional, this is the default
CLAUDE_BIN=/home/erik/.local/bin/claude  # optional, this is the default
CLAUDE_CWD=/home/erik/MosswartOverlord   # optional, this is the default
CLAUDE_TIMEOUT_S=120                     # optional
```

## First-time setup on the server

1. **Create the read-only DB role** (one-time):

   ```bash
   docker exec -i dereth-db psql -U postgres -d dereth \
     < /home/erik/MosswartOverlord/agent/sql/0001_overlord_agent_ro.sql
   docker exec -it dereth-db psql -U postgres -d dereth \
     -c "ALTER ROLE overlord_agent_ro PASSWORD '<random-password>';"
   ```

2. **Add `AGENT_DB_DSN`** to `/home/erik/MosswartOverlord/.env` with the
   password you just set.

3. **Run the installer**:

   ```bash
   cd /home/erik/MosswartOverlord
   bash agent/install.sh
   ```

4. **Update nginx**: edit `/etc/nginx/sites-enabled/overlord` to add the
   `/api/agent/` location (already in `nginx/overlord.conf` in the repo —
   just `sudo cp` and reload).

## Day-to-day deploy

After editing any agent file:

```bash
# On dev:
git push

# On server:
ssh erik@overlord.snakedesert.se
cd /home/erik/MosswartOverlord
git pull
sudo systemctl restart overlord-agent
journalctl -u overlord-agent -f   # tail logs
```

For Python dependency changes:

```bash
agent/.venv/bin/pip install -r agent/requirements.txt
sudo systemctl restart overlord-agent
```

## Smoke tests

```bash
# 1. Service alive?
curl http://127.0.0.1:8767/agent/health

# 2. Cookie required?
curl -X POST http://127.0.0.1:8767/agent/ask \
  -H 'Content-Type: application/json' \
  -d '{"session_id":"x","message":"hi"}'
# ⇒ 401

# 3. Direct claude invocation works?
echo "hello" | /home/erik/.local/bin/claude -p \
  --session-id 11111111-1111-1111-1111-111111111111 \
  --output-format json

# 4. End-to-end via nginx (with cookie):
curl -X POST https://overlord.snakedesert.se/api/agent/ask \
  -b 'session=<your-session-cookie>' \
  -H 'Content-Type: application/json' \
  -d '{"session_id":"<uuid>","message":"How many characters are online?"}'
```

## Cost / rate-limit notes

- Each `/agent/ask` shells out to `claude -p` once.
- We use the user's Claude subscription (no API key) — flat-rate, no
  per-call billing, but subscription-tier rate limits still apply.
- **Reactive only**: there are no background loops or periodic ticks.
  Each user message = one Claude turn (which may chain several tool
  calls internally before producing a final answer).
- The SQL tool is hard-capped at 10s and 200 rows.
- `suitbuilder_search` is the only tool that can take minutes; nginx
  read timeout is 180s for `/api/agent/`.

## Adding a new MCP tool

1. Implement `async def my_tool(...) -> dict` in `tools.py`.
2. Register it in `mcp_overlord.py` under `TOOL_DEFS`:
   - description (the agent reads this to decide when to call)
   - JSON schema for arguments
   - lambda dispatching to `T.my_tool(...)`
3. `sudo systemctl restart overlord-agent`. Claude Code re-discovers the
   tool list on each invocation.
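The two-step tool registration described in the README can be sketched as follows. `get_top_killers` is a hypothetical example for illustration — it is not a tool in this commit, and the stub body stands in for the real DB query:

```python
# Sketch of the tools.py + TOOL_DEFS registration pattern.
# `get_top_killers` is hypothetical; its body is a stub.
import asyncio
from typing import Any

# Step 1 — tools.py: a pure async implementation returning a dict.
async def get_top_killers(hours: int = 24) -> dict:
    # A real version would query the telemetry DB; stubbed here.
    return {"hours": hours, "rows": [{"char": "Example", "kills": 123}]}

# Step 2 — mcp_overlord.py: register under TOOL_DEFS with a description
# (the agent reads it to decide when to call), a JSON schema for the
# arguments, and a lambda dispatching to the implementation.
TOOL_DEFS: dict[str, dict[str, Any]] = {
    "get_top_killers": {
        "description": "Top killers over the last N hours (default 24).",
        "schema": {
            "type": "object",
            "properties": {"hours": {"type": "integer", "default": 24}},
        },
        "fn": lambda args: get_top_killers(hours=int(args.get("hours", 24))),
    },
}

# The dispatch path mcp_overlord.py's call_tool handler would take:
result = asyncio.run(TOOL_DEFS["get_top_killers"]["fn"]({"hours": 6}))
print(result["hours"])  # → 6
```

Step 3 is just the service restart — the tool list is re-discovered on each `claude -p` invocation, so no other wiring is needed.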
10  agent/__init__.py  Normal file
@@ -0,0 +1,10 @@

"""Overlord Agent — host-side service that shells out to claude -p.

Runs OUTSIDE the dereth-tracker Docker container because the `claude` CLI
binary lives at /home/erik/.local/bin/claude on the host and depends on
~/.claude/ credentials owned by user erik. The container can't invoke it
directly, so this is a small standalone FastAPI service on port 8767.

nginx routes /api/agent/* to here. The same browser session cookie that
dereth-tracker validates is reused (shared SECRET_KEY env var).
"""
51  agent/auth.py  Normal file
@@ -0,0 +1,51 @@

"""Session-cookie validation that mirrors main.py.

Re-implements the verify path so this host-side service can authenticate
the same browser cookie that dereth-tracker issues. Both services must
share the SECRET_KEY env var.
"""

from __future__ import annotations

import os

from fastapi import HTTPException, Request, status
from itsdangerous import BadSignature, SignatureExpired, URLSafeTimedSerializer

# Mirror main.py:996-998
SECRET_KEY = os.getenv("SECRET_KEY", "change-me-in-production-please")
SESSION_MAX_AGE = 30 * 24 * 3600  # 30 days
_serializer = URLSafeTimedSerializer(SECRET_KEY)


def verify_session_cookie(token: str) -> dict | None:
    """Verify and decode a session token. Returns None if invalid/expired.

    Mirrors main.py:1013-1019 byte-for-byte so a cookie issued by the tracker
    decodes here identically.
    """
    try:
        data = _serializer.loads(token, max_age=SESSION_MAX_AGE)
        return {"username": data["u"], "is_admin": data["a"]}
    except (BadSignature, SignatureExpired, KeyError):
        return None


def require_user(request: Request) -> dict:
    """FastAPI dependency: enforces a valid session cookie.

    Returns the decoded user dict on success; raises 401 otherwise.
    """
    token = request.cookies.get("session")
    if not token:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Not authenticated",
        )
    user = verify_session_cookie(token)
    if not user:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Session invalid or expired",
        )
    return user
123  agent/claude_wrapper.py  Normal file
@@ -0,0 +1,123 @@

"""Subprocess wrapper around `claude -p` (Claude Code in headless JSON mode).

Run from cwd=/home/erik/MosswartOverlord so:
  • Sessions persist at ~/.claude/projects/-home-erik-MosswartOverlord/<uuid>.jsonl
  • Project-level .mcp.json is auto-loaded
  • CLAUDE.md in the repo root briefs the agent

The `--session-id` flag both creates a new session (first call) and resumes
an existing one (subsequent calls), so we don't need separate code paths.
"""

from __future__ import annotations

import asyncio
import json
import logging
import os
from dataclasses import dataclass
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)

# These can be overridden via env vars for non-prod testing.
CLAUDE_BIN = os.getenv("CLAUDE_BIN", "/home/erik/.local/bin/claude")
CLAUDE_CWD = os.getenv("CLAUDE_CWD", "/home/erik/MosswartOverlord")
# Hard cap on how long a single agent turn may take. Claude Code can spin a
# while when chaining many tool calls; we don't want to leave a zombie
# subprocess if something gets stuck.
CLAUDE_TIMEOUT_S = int(os.getenv("CLAUDE_TIMEOUT_S", "120"))


@dataclass
class ClaudeResult:
    result: str
    session_id: str
    duration_ms: int
    num_turns: int
    is_error: bool
    raw: dict[str, Any]


class ClaudeError(RuntimeError):
    """Raised when the claude CLI returns a non-zero exit or unparseable output."""


async def ask_claude(message: str, session_id: str) -> ClaudeResult:
    """Send `message` to `claude -p` resuming session_id; return parsed result.

    Raises ClaudeError on subprocess failure, JSON parse failure, or timeout.
    """
    if not Path(CLAUDE_BIN).exists():
        raise ClaudeError(f"claude binary not found at {CLAUDE_BIN}")
    if not Path(CLAUDE_CWD).is_dir():
        raise ClaudeError(f"CLAUDE_CWD does not exist: {CLAUDE_CWD}")

    args = [
        CLAUDE_BIN,
        "-p",
        "--session-id",
        session_id,
        "--output-format",
        "json",
    ]

    logger.info(
        "claude exec: session=%s msg_len=%d cwd=%s", session_id, len(message), CLAUDE_CWD
    )

    proc = await asyncio.create_subprocess_exec(
        *args,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        cwd=CLAUDE_CWD,
    )

    try:
        stdout, stderr = await asyncio.wait_for(
            proc.communicate(input=message.encode("utf-8")),
            timeout=CLAUDE_TIMEOUT_S,
        )
    except asyncio.TimeoutError:
        try:
            proc.kill()
        except ProcessLookupError:
            pass
        raise ClaudeError(f"claude timed out after {CLAUDE_TIMEOUT_S}s")

    if proc.returncode != 0:
        raise ClaudeError(
            f"claude exited {proc.returncode}: {stderr.decode('utf-8', 'replace')[:500]}"
        )

    raw_text = stdout.decode("utf-8", "replace").strip()
    if not raw_text:
        raise ClaudeError("claude produced empty stdout")

    # In --output-format json mode the LAST line is the JSON envelope; some
    # earlier lines may be progress. Be tolerant.
    try:
        envelope = json.loads(raw_text)
    except json.JSONDecodeError:
        # Try the last non-empty line
        last = next(
            (line for line in reversed(raw_text.splitlines()) if line.strip()),
            "",
        )
        try:
            envelope = json.loads(last)
        except json.JSONDecodeError as e:
            raise ClaudeError(
                f"claude stdout was not JSON: {raw_text[:500]}"
            ) from e

    return ClaudeResult(
        result=envelope.get("result", ""),
        session_id=envelope.get("session_id", session_id),
        duration_ms=int(envelope.get("duration_ms", 0)),
        num_turns=int(envelope.get("num_turns", 0)),
        is_error=bool(envelope.get("is_error", False)),
        raw=envelope,
    )
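The tolerant envelope parsing in `ask_claude` (whole stdout first, then the last non-empty line) can be isolated into a small standalone sketch, here with an illustrative `parse_envelope` helper and a made-up noisy transcript:

```python
# Standalone sketch of claude_wrapper.py's two-stage JSON parsing:
# try the full stdout as JSON; if that fails, fall back to the last
# non-empty line, where the JSON envelope lands when earlier lines
# carry progress output.
import json

def parse_envelope(raw_text: str) -> dict:
    raw_text = raw_text.strip()
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        last = next(
            (line for line in reversed(raw_text.splitlines()) if line.strip()),
            "",
        )
        return json.loads(last)  # still raises if stdout held no JSON at all

noisy = 'progress: tool call 1\nprogress: tool call 2\n{"result": "done", "num_turns": 3}'
print(parse_envelope(noisy)["result"])  # → done
```

The same function handles the clean case (pure JSON stdout) through the first branch, so the wrapper needs no mode detection.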
46  agent/install.sh  Normal file
@@ -0,0 +1,46 @@

#!/bin/bash
# Install / re-install the Overlord Agent host-side service.
#
# Run as user `erik` from /home/erik/MosswartOverlord:
#   bash agent/install.sh
#
# Requires sudo for the systemd parts (you'll be prompted once).

set -euo pipefail

REPO_DIR="/home/erik/MosswartOverlord"
AGENT_DIR="$REPO_DIR/agent"
VENV_DIR="$AGENT_DIR/.venv"
SERVICE_FILE="$AGENT_DIR/overlord-agent.service"
SYSTEMD_TARGET="/etc/systemd/system/overlord-agent.service"

if [[ "$(pwd)" != "$REPO_DIR" ]]; then
    echo "Run from $REPO_DIR (currently in $(pwd))" >&2
    exit 1
fi

echo "==> Creating/updating venv at $VENV_DIR"
if [[ ! -d "$VENV_DIR" ]]; then
    python3 -m venv "$VENV_DIR"
fi
"$VENV_DIR/bin/pip" install --quiet --upgrade pip
"$VENV_DIR/bin/pip" install --quiet -r "$AGENT_DIR/requirements.txt"

echo "==> Installing systemd unit"
sudo cp "$SERVICE_FILE" "$SYSTEMD_TARGET"
sudo systemctl daemon-reload

echo "==> Enabling + starting overlord-agent"
sudo systemctl enable overlord-agent
sudo systemctl restart overlord-agent

sleep 1
echo "==> Status:"
sudo systemctl --no-pager status overlord-agent | head -15

echo ""
echo "==> Smoke test:"
curl -s http://127.0.0.1:8767/agent/health | python3 -m json.tool || true

echo ""
echo "Done. Logs: journalctl -u overlord-agent -f"
262  agent/mcp_overlord.py  Normal file
@@ -0,0 +1,262 @@

"""MCP stdio server exposing Overlord data to Claude Code.

Configured via .mcp.json at the repo root, which Claude Code auto-loads
when invoked with cwd=/home/erik/MosswartOverlord. Tool implementations
live in tools.py — this file is just MCP protocol plumbing.

Run directly (as a module, so the relative imports resolve) with:
    cd /home/erik/MosswartOverlord && python3 -m agent.mcp_overlord
"""

from __future__ import annotations

import asyncio
import json
import logging
from typing import Any

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import TextContent, Tool

from . import tools as T

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s mcp_overlord: %(message)s",
)
logger = logging.getLogger("mcp_overlord")

server: Server = Server("overlord")


# ─── Tool registry ──────────────────────────────────────────────────
#
# Each entry: name → (description, JSON schema, callable async fn).
# We register them with @server.list_tools / @server.call_tool below.

TOOL_DEFS: dict[str, dict[str, Any]] = {
    "get_live_players": {
        "description": (
            "Return active characters seen in the last ~30 seconds with their "
            "current position, kills, KPH, vitae, online time, and VTank state. "
            "Use this for any 'who is online right now / what is X doing' question."
        ),
        "schema": {"type": "object", "properties": {}},
        "fn": lambda _args: T.get_live_players(),
    },
    "get_recent_rares": {
        "description": (
            "Return rare item finds from the last N hours, newest first. "
            "Use for questions about recent drops, who is finding rares, or "
            "rare-rate analysis. Defaults to 24 hours, max 30 days."
        ),
        "schema": {
            "type": "object",
            "properties": {
                "hours": {
                    "type": "integer",
                    "minimum": 1,
                    "maximum": 720,
                    "default": 24,
                },
                "limit": {
                    "type": "integer",
                    "minimum": 1,
                    "maximum": 200,
                    "default": 100,
                },
            },
        },
        "fn": lambda args: T.get_recent_rares(
            hours=int(args.get("hours", 24)),
            limit=int(args.get("limit", 100)),
        ),
    },
    "query_telemetry_db": {
        "description": (
            "Run a read-only SQL query against the telemetry database (TimescaleDB). "
            "Only SELECT / WITH statements are accepted; any DML or DDL is rejected. "
            "Useful for questions that aren't covered by the other tools — top-N "
            "lists, custom aggregations, time-window comparisons. "
            "Available tables include: telemetry_events (hypertable, 30d retention), "
            "rare_events, spawn_events (hypertable, 7d retention), portals, "
            "char_stats, rare_stats, rare_stats_sessions, character_stats, "
            "combat_stats, combat_stats_sessions, server_status. "
            "The query has a 10s timeout and returns at most 200 rows."
        ),
        "schema": {
            "type": "object",
            "required": ["sql"],
            "properties": {
                "sql": {
                    "type": "string",
                    "description": "A single PostgreSQL SELECT or WITH ... SELECT statement.",
                }
            },
        },
        "fn": lambda args: T.query_telemetry_db(str(args["sql"])),
    },
    "get_player_state": {
        "description": (
            "Combined snapshot for ONE character: live telemetry (if online) "
            "+ full character stats (attributes, skills, augmentations). "
            "Use this for questions like 'what is X doing right now' or 'show me X's stats'."
        ),
        "schema": {
            "type": "object",
            "required": ["character_name"],
            "properties": {
                "character_name": {"type": "string"},
            },
        },
        "fn": lambda args: T.get_player_state(str(args["character_name"])),
    },
    "get_inventory": {
        "description": (
            "Full inventory listing for one character — every item with name, "
            "icon, container, equipped slot, spells, material, tinkers, etc. "
            "Large response — prefer get_inventory_search for narrow queries."
        ),
        "schema": {
            "type": "object",
            "required": ["character_name"],
            "properties": {"character_name": {"type": "string"}},
        },
        "fn": lambda args: T.get_inventory(str(args["character_name"])),
    },
    "get_inventory_search": {
        "description": (
            "Filtered inventory search for one character. Pass filter query "
            "params as the `filters` object. Common filters: name (substring), "
            "armor_level_min, armor_level_max, material, item_set, has_spell. "
            "Returns matching items in the same shape as get_inventory."
        ),
        "schema": {
            "type": "object",
            "required": ["character_name"],
            "properties": {
                "character_name": {"type": "string"},
                "filters": {
                    "type": "object",
                    "description": "Query params dict, e.g. {\"name\": \"pearl\", \"armor_level_min\": 500}",
                },
            },
        },
        "fn": lambda args: T.get_inventory_search(
            str(args["character_name"]), args.get("filters") or {}
        ),
    },
    "get_combat_stats": {
        "description": (
            "Lifetime + session combat stats for one character. Includes total "
            "damage given/received, per-element offense/defense breakdown, kill "
            "counts, and aetheria surge counts."
        ),
        "schema": {
            "type": "object",
            "required": ["character_name"],
            "properties": {"character_name": {"type": "string"}},
        },
        "fn": lambda args: T.get_combat_stats(str(args["character_name"])),
    },
    "get_equipment_cantrips": {
        "description": (
            "Currently-equipped items for a character along with their active "
            "cantrip/spell state. Useful for 'what is X wearing' or 'is X "
            "running their suit' questions."
        ),
        "schema": {
            "type": "object",
            "required": ["character_name"],
            "properties": {"character_name": {"type": "string"}},
        },
        "fn": lambda args: T.get_equipment_cantrips(str(args["character_name"])),
    },
    "get_quest_status": {
        "description": (
            "Active quest timers and progress across ALL characters. Returns "
            "for each character which quests are READY vs counting down."
        ),
        "schema": {"type": "object", "properties": {}},
        "fn": lambda _args: T.get_quest_status(),
    },
    "get_server_health": {
        "description": (
            "Current Coldeve game-server status: up/down, latency in ms, "
            "current player count from TreeStats.net, total uptime. Updated "
            "every 30 seconds in the background."
        ),
        "schema": {"type": "object", "properties": {}},
        "fn": lambda _args: T.get_server_health(),
    },
    "suitbuilder_search": {
        "description": (
            "Run a constraint-satisfaction armor optimization across all "
            "characters' inventories ('mules'). Drives the same suitbuilder "
            "the /suitbuilder.html page uses. Pass the same params dict the "
            "page sends — see /suitbuilder.html JS for the schema. The search "
            "is SSE-streaming on the backend; this tool collects until done "
            "and returns the final suit(s) plus the last few phase events. "
            "Can take up to 5 minutes for complex constraints — only call "
            "when the user explicitly asks for an optimization run."
        ),
        "schema": {
            "type": "object",
            "required": ["params"],
            "properties": {
                "params": {
                    "type": "object",
                    "description": "Suitbuilder request body (characters, locked slots, set constraints, etc.)",
                },
            },
        },
        "fn": lambda args: T.suitbuilder_search(args.get("params") or {}),
    },
}


# ─── MCP protocol wiring ────────────────────────────────────────────


@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(name=name, description=defn["description"], inputSchema=defn["schema"])
        for name, defn in TOOL_DEFS.items()
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict[str, Any]) -> list[TextContent]:
    if name not in TOOL_DEFS:
        return [TextContent(type="text", text=f"unknown tool: {name}")]

    fn = TOOL_DEFS[name]["fn"]
    try:
        result = await fn(arguments or {})
    except T.SqlNotAllowed as e:
        return [TextContent(type="text", text=f"REJECTED: {e}")]
    except Exception as e:  # noqa: BLE001
        logger.exception("tool %s failed", name)
        return [TextContent(type="text", text=f"ERROR: {type(e).__name__}: {e}")]

    text = json.dumps(result, default=str, ensure_ascii=False, indent=2)
    return [TextContent(type="text", text=text)]


async def _run() -> None:
    logger.info("starting MCP stdio server (overlord)")
    try:
        async with stdio_server() as (reader, writer):
            await server.run(reader, writer, server.create_initialization_options())
    finally:
        await T.shutdown()


def main() -> None:
    asyncio.run(_run())


if __name__ == "__main__":
    main()
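The repo-root `.mcp.json` that wires this server into Claude Code is not part of this diff; a minimal shape it would likely take (the command path and module invocation here are assumptions, not verbatim repo content) is:

```json
{
  "mcpServers": {
    "overlord": {
      "command": "/home/erik/MosswartOverlord/agent/.venv/bin/python",
      "args": ["-m", "agent.mcp_overlord"]
    }
  }
}
```

Running the server as a module (`-m agent.mcp_overlord`) rather than by file path keeps the `from . import tools as T` relative import working.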
29  agent/overlord-agent.service  Normal file
@@ -0,0 +1,29 @@

[Unit]
Description=Overlord Agent (Claude Code shell-out service)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=erik
Group=erik
# Working directory MUST be the repo root so:
#  - claude -p sessions land at ~/.claude/projects/-home-erik-MosswartOverlord/
#  - .mcp.json is auto-loaded
WorkingDirectory=/home/erik/MosswartOverlord
EnvironmentFile=-/home/erik/MosswartOverlord/.env
# Run inside the venv populated by install.sh.
ExecStart=/home/erik/MosswartOverlord/agent/.venv/bin/python -m agent.service
Restart=on-failure
RestartSec=3
# Don't tie up the disk with stdout — let journald handle it.
StandardOutput=journal
StandardError=journal

# Resource hints — the service is light, but cap so a runaway can't
# starve the host.
MemoryLimit=512M
CPUQuota=200%

[Install]
WantedBy=multi-user.target
13  agent/requirements.txt  Normal file
@@ -0,0 +1,13 @@

fastapi>=0.110
uvicorn[standard]>=0.30
httpx>=0.27
itsdangerous>=2.2
pydantic>=2.6
# MCP server SDK (used by mcp_overlord.py for the stdio MCP server)
mcp>=1.0
# SQL safety: parses SQL to enforce read-only on the query_db tool
sqlglot>=25.0
# Direct DB access for the read-only query tool and rare_events lookups
asyncpg>=0.29
# .env loader
python-dotenv>=1.0
213
agent/service.py
Normal file
213
agent/service.py
Normal file
|
|
@ -0,0 +1,213 @@
|
|||
"""Overlord Agent host-side FastAPI service.
|
||||
|
||||
Runs OUTSIDE Docker (host-side) on port 8767.
|
||||
|
||||
Endpoints:
|
||||
GET /agent/health — liveness check
|
||||
POST /agent/sessions/new — returns a fresh session UUID
|
||||
POST /agent/ask — runs claude -p with given session
|
||||
GET /agent/sessions/{session_id}/history
|
||||
— replays a session's JSONL on disk
|
||||
|
||||
Auth: every endpoint except /health requires the same browser session
|
||||
cookie that dereth-tracker issues.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import time
|
||||
import uuid
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from fastapi import Depends, FastAPI, HTTPException
|
||||
from fastapi.responses import JSONResponse
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from . import auth
|
||||
from .claude_wrapper import CLAUDE_CWD, ClaudeError, ask_claude
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s %(levelname)s %(name)s: %(message)s",
|
||||
)
|
||||
logger = logging.getLogger("agent")
|
||||
|
||||
app = FastAPI(title="Overlord Agent", version="0.1.0")
|
||||
|
||||
|
||||
# ─── Models ──────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class AskRequest(BaseModel):
|
||||
session_id: str = Field(
|
||||
..., description="Stable per-conversation UUID stored in browser localStorage"
|
||||
)
|
||||
message: str = Field(..., min_length=1, max_length=10_000)
|
||||
|
||||
|
||||
class AskResponse(BaseModel):
|
||||
result: str
|
||||
session_id: str
|
||||
duration_ms: int
|
||||
num_turns: int
|
||||
is_error: bool
|
||||
|
||||
|
||||
class NewSessionResponse(BaseModel):
|
||||
session_id: str
|
||||
|
||||
|
||||
# ─── Helpers ─────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
def _encode_cwd(cwd: str) -> str:
|
||||
"""Match Claude Code's on-disk encoding for cwd → directory name.
|
||||
|
||||
Claude Code stores sessions at ~/.claude/projects/<encoded-cwd>/<uuid>.jsonl
|
||||
where non-alphanumerics in the cwd are replaced with hyphens.
|
||||
Example: /home/erik/MosswartOverlord → -home-erik-MosswartOverlord
|
||||
"""
|
||||
return "".join(c if c.isalnum() else "-" for c in cwd)
|
||||
|
||||
|
||||
def _sessions_dir() -> Path:
|
||||
return Path.home() / ".claude" / "projects" / _encode_cwd(CLAUDE_CWD)
|
||||
|
||||
|
||||
# ─── Endpoints ───────────────────────────────────────────────────────


@app.get("/agent/health")
async def health() -> dict:
    """Liveness probe — no auth, used by deployment scripts."""
    return {
        "status": "ok",
        "claude_cwd": CLAUDE_CWD,
        "sessions_dir_exists": _sessions_dir().exists(),
    }


@app.post("/agent/sessions/new", response_model=NewSessionResponse)
async def new_session(_user: dict = Depends(auth.require_user)) -> NewSessionResponse:
    """Generate a fresh session UUID. Doesn't touch disk — claude creates the
    JSONL file when the first message lands."""
    return NewSessionResponse(session_id=str(uuid.uuid4()))


@app.post("/agent/ask", response_model=AskResponse)
async def agent_ask(
    req: AskRequest, user: dict = Depends(auth.require_user)
) -> AskResponse:
    """Forward a message to claude -p resuming the given session."""
    started = time.monotonic()
    try:
        result = await ask_claude(req.message, req.session_id)
    except ClaudeError as e:
        logger.warning(
            "claude failed user=%s session=%s err=%s", user["username"], req.session_id, e
        )
        raise HTTPException(status_code=502, detail=str(e)) from e

    elapsed_ms = int((time.monotonic() - started) * 1000)
    logger.info(
        "ask user=%s session=%s turns=%d duration_ms=%d (subprocess=%dms)",
        user["username"],
        result.session_id,
        result.num_turns,
        elapsed_ms,
        result.duration_ms,
    )

    return AskResponse(
        result=result.result,
        session_id=result.session_id,
        duration_ms=result.duration_ms,
        num_turns=result.num_turns,
        is_error=result.is_error,
    )


@app.get("/agent/sessions/{session_id}/history")
async def session_history(
    session_id: str, _user: dict = Depends(auth.require_user)
) -> JSONResponse:
    """Replay a session's JSONL from ~/.claude/projects/.../<id>.jsonl.

    Returns a flat array of {role, text, timestamp} for the chat window.
    Returns an empty array if the session file doesn't exist yet.
    """
    # UUID sanity check to prevent path traversal — Claude Code uses uuid4
    try:
        uuid.UUID(session_id)
    except ValueError:
        raise HTTPException(status_code=400, detail="invalid session_id")

    path = _sessions_dir() / f"{session_id}.jsonl"
    if not path.is_file():
        return JSONResponse({"messages": []})

    messages: list[dict[str, Any]] = []
    try:
        with path.open("r", encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    obj = json.loads(line)
                except json.JSONDecodeError:
                    continue
                # Claude Code records turns with type=user / type=assistant.
                # Tool-use traffic is verbose; skip it for the chat UI.
                msg_type = obj.get("type")
                if msg_type not in ("user", "assistant"):
                    continue
                msg = obj.get("message") or {}
                content = msg.get("content")
                # `content` may be a string or list[{type,text}].
                if isinstance(content, str):
                    text = content
                elif isinstance(content, list):
                    text = "".join(
                        part.get("text", "")
                        for part in content
                        if isinstance(part, dict) and part.get("type") == "text"
                    )
                else:
                    text = ""
                if not text:
                    continue
                messages.append(
                    {
                        "role": msg_type,
                        "text": text,
                        "timestamp": obj.get("timestamp"),
                    }
                )
    except OSError as e:
        logger.warning("failed to read session %s: %s", session_id, e)
        raise HTTPException(status_code=500, detail="failed to read session") from e

    return JSONResponse({"messages": messages})
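
The extraction step can be seen in isolation on one sample line. A sketch; the JSONL schema belongs to Claude Code, so the sample below is illustrative, not authoritative:

```python
import json

# A minimal assistant turn in the shape the endpoint expects: one text part
# plus one tool_use part that the chat UI should not render.
line = (
    '{"type": "assistant", "timestamp": "2025-01-01T12:00:00Z",'
    ' "message": {"content": [{"type": "text", "text": "3 players online."},'
    ' {"type": "tool_use", "name": "get_live_players"}]}}'
)

obj = json.loads(line)
msg_type = obj.get("type")
content = (obj.get("message") or {}).get("content")
# Keep only text parts, as session_history does; tool_use parts drop out.
text = "".join(
    p.get("text", "")
    for p in content
    if isinstance(p, dict) and p.get("type") == "text"
)
print({"role": msg_type, "text": text, "timestamp": obj.get("timestamp")})
# → {'role': 'assistant', 'text': '3 players online.', 'timestamp': '2025-01-01T12:00:00Z'}
```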


# ─── Entrypoint ──────────────────────────────────────────────────────


def main() -> None:
    """Run via `python -m agent.service` for local testing."""
    import uvicorn

    uvicorn.run(
        "agent.service:app",
        host="127.0.0.1",
        port=8767,
        log_level="info",
    )


if __name__ == "__main__":
    main()
35  agent/sql/0001_overlord_agent_ro.sql  Normal file
@@ -0,0 +1,35 @@
-- Read-only PG role for the Overlord Agent's `query_telemetry_db` MCP tool.
--
-- This is the second line of defense (the first is the sqlglot parser in
-- agent/tools.py:assert_read_only). Even a parser bypass cannot mutate
-- because this role only has SELECT.
--
-- Apply on the dereth-db container (-i attaches stdin for the redirect):
--   docker exec -i dereth-db psql -U postgres -d dereth -f - < agent/sql/0001_overlord_agent_ro.sql
-- (substitute the password before running, or keep the placeholder and set it
--  with ALTER ROLE … PASSWORD '…' separately)

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'overlord_agent_ro') THEN
        CREATE ROLE overlord_agent_ro NOINHERIT LOGIN PASSWORD 'change-me-set-via-alter-role';
    END IF;
END$$;

GRANT CONNECT ON DATABASE dereth TO overlord_agent_ro;
GRANT USAGE ON SCHEMA public TO overlord_agent_ro;

-- Grant SELECT on all current public tables.
GRANT SELECT ON ALL TABLES IN SCHEMA public TO overlord_agent_ro;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO overlord_agent_ro;

-- And on any future tables created in public.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO overlord_agent_ro;

-- TimescaleDB-internal schema (chunks live here). Reads on hypertable chunks
-- require SELECT on _timescaledb_internal too.
GRANT USAGE ON SCHEMA _timescaledb_internal TO overlord_agent_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA _timescaledb_internal TO overlord_agent_ro;
ALTER DEFAULT PRIVILEGES IN SCHEMA _timescaledb_internal
    GRANT SELECT ON TABLES TO overlord_agent_ro;

401  agent/tools.py  Normal file
@@ -0,0 +1,401 @@
"""Tool implementations exposed to Claude via the MCP server.

These are pure functions — the MCP server (mcp_overlord.py) only handles
the protocol wrapping. Keep tool logic here so it's easy to test in
isolation and reuse from elsewhere (e.g. /agent/ask shortcuts).

Two flavors of data access:
  * HTTP loopback to the dereth-tracker container (for endpoints that
    already exist and have validated logic).
  * Direct asyncpg to the read-only PG role for ad-hoc queries
    (rare_events, telemetry, anything not exposed via HTTP).
"""

from __future__ import annotations

import asyncio
import json
import logging
import os
from typing import Any
from urllib.parse import quote

import asyncpg
import httpx
import sqlglot
import sqlglot.errors
import sqlglot.expressions as exp

logger = logging.getLogger(__name__)

# The dereth-tracker FastAPI app, reachable from the host because Docker
# port-forwards 127.0.0.1:8765:8765 in docker-compose.yml.
TRACKER_URL = os.getenv("TRACKER_URL", "http://127.0.0.1:8765")

# Read-only PG role; see deployment plan.
DB_DSN = os.getenv(
    "AGENT_DB_DSN",
    "postgresql://overlord_agent_ro@127.0.0.1:5432/dereth",
)

# Hard caps for the SQL tool to keep the agent honest.
SQL_TIMEOUT_S = float(os.getenv("AGENT_SQL_TIMEOUT_S", "10"))
SQL_MAX_ROWS = int(os.getenv("AGENT_SQL_MAX_ROWS", "200"))


# ─── HTTP loopback helpers ──────────────────────────────────────────


_http_client: httpx.AsyncClient | None = None


async def _http() -> httpx.AsyncClient:
    """Lazily create + reuse a single httpx client (connection pool)."""
    global _http_client
    if _http_client is None:
        _http_client = httpx.AsyncClient(base_url=TRACKER_URL, timeout=30.0)
    return _http_client


async def _get_json(path: str) -> Any:
    client = await _http()
    resp = await client.get(path)
    resp.raise_for_status()
    return resp.json()


# ─── DB helpers ─────────────────────────────────────────────────────


_db_pool: asyncpg.Pool | None = None


async def _db() -> asyncpg.Pool:
    global _db_pool
    if _db_pool is None:
        _db_pool = await asyncpg.create_pool(
            DB_DSN, min_size=1, max_size=4, command_timeout=SQL_TIMEOUT_S
        )
    return _db_pool


# ─── SQL safety ─────────────────────────────────────────────────────


_ALLOWED_TOPLEVEL = (exp.Select, exp.With, exp.Union, exp.Subquery)


class SqlNotAllowed(ValueError):
    """Raised when the agent attempts a non-read-only SQL statement."""


def assert_read_only(sql: str) -> None:
    """Parse `sql` and reject anything that isn't a read query.

    Belt-and-suspenders: the PG role is also read-only (GRANT SELECT only),
    so even a parser bypass can't actually mutate. This is the first line
    of defense — friendlier error messages and a faster reject.
    """
    try:
        statements = sqlglot.parse(sql, read="postgres")
    except sqlglot.errors.ParseError as e:
        raise SqlNotAllowed(f"SQL parse error: {e}") from e

    if not statements:
        raise SqlNotAllowed("empty SQL")
    if len(statements) > 1:
        raise SqlNotAllowed("only one statement allowed")

    stmt = statements[0]
    if not isinstance(stmt, _ALLOWED_TOPLEVEL):
        raise SqlNotAllowed(
            f"only SELECT / WITH allowed, got {type(stmt).__name__}"
        )

    # Walk the tree and reject any DML/DDL hidden inside (e.g. a CTE with
    # INSERT — yes, postgres allows that).
    for node in stmt.walk():
        if isinstance(
            node,
            (
                exp.Insert,
                exp.Update,
                exp.Delete,
                exp.Drop,
                exp.AlterTable,
                exp.Create,
                exp.TruncateTable,
                exp.Merge,
            ),
        ):
            raise SqlNotAllowed(
                f"writes/DDL not allowed (found {type(node).__name__})"
            )


# ─── Tools ──────────────────────────────────────────────────────────


async def get_live_players() -> dict[str, Any]:
    """Active characters (telemetry seen in the last ~30s).

    Returns the same shape as `GET /live`:
      { "players": [ { character_name, ew, ns, z, kills, ... } ] }
    """
    return await _get_json("/live")


async def get_recent_rares(hours: int = 24, limit: int = 100) -> dict[str, Any]:
    """Rare item finds in the last N hours, newest first."""
    hours = max(1, min(int(hours), 24 * 30))  # cap at 30 days
    limit = max(1, min(int(limit), SQL_MAX_ROWS))
    pool = await _db()
    rows = await pool.fetch(
        """
        SELECT timestamp, character_name, name, ew, ns, z
        FROM rare_events
        WHERE timestamp >= NOW() - ($1::int || ' hours')::interval
        ORDER BY timestamp DESC
        LIMIT $2
        """,
        hours,
        limit,
    )
    return {
        "hours": hours,
        "count": len(rows),
        "rares": [
            {
                "timestamp": r["timestamp"].isoformat(),
                "character_name": r["character_name"],
                "name": r["name"],
                "ew": r["ew"],
                "ns": r["ns"],
                "z": r["z"],
            }
            for r in rows
        ],
    }


async def query_telemetry_db(sql: str) -> dict[str, Any]:
    """Run a read-only SQL statement against the telemetry DB.

    The query is parsed and any non-SELECT/WITH statement is rejected.
    The connection role is also GRANT SELECT only (defense in depth).

    Useful for ad-hoc questions: "top 5 KPH today", "kill count by character
    yesterday", etc.
    """
    assert_read_only(sql)
    pool = await _db()
    try:
        rows = await asyncio.wait_for(pool.fetch(sql), timeout=SQL_TIMEOUT_S)
    except asyncio.TimeoutError:
        raise SqlNotAllowed(f"query exceeded {SQL_TIMEOUT_S:.0f}s timeout")

    if len(rows) > SQL_MAX_ROWS:
        rows = rows[:SQL_MAX_ROWS]
        truncated = True
    else:
        truncated = False

    return {
        "row_count": len(rows),
        "truncated": truncated,
        "rows": [
            {k: _json_safe(v) for k, v in dict(r).items()} for r in rows
        ],
    }


def _json_safe(v: Any) -> Any:
    """Convert datetime / Decimal / etc. to JSON-friendly types."""
    from datetime import date, datetime, timedelta
    from decimal import Decimal

    if v is None:
        return None
    if isinstance(v, (str, int, float, bool)):
        return v
    if isinstance(v, (datetime, date)):
        return v.isoformat()
    if isinstance(v, timedelta):
        return v.total_seconds()
    if isinstance(v, Decimal):
        return float(v)
    if isinstance(v, (list, tuple)):
        return [_json_safe(x) for x in v]
    if isinstance(v, dict):
        return {k: _json_safe(x) for k, x in v.items()}
    return str(v)
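
asyncpg rows routinely carry datetimes, intervals and Decimals that `json.dumps` rejects, which is why the conversion above exists. A condensed standalone run of the same rules (the row and its column names are made up for the demo, not the real schema):

```python
import json
from datetime import datetime, timedelta
from decimal import Decimal

# Condensed copy of the conversion rules, just enough for this demo.
def json_safe(v):
    if isinstance(v, datetime):
        return v.isoformat()
    if isinstance(v, timedelta):
        return v.total_seconds()
    if isinstance(v, Decimal):
        return float(v)
    return v

# A row shape like asyncpg might return for a kills-per-hour query.
row = {
    "ts": datetime(2025, 1, 1, 12, 0),
    "session": timedelta(hours=2),
    "kph": Decimal("341.5"),
    "character_name": "Mosswart",
}
print(json.dumps({k: json_safe(v) for k, v in row.items()}))
# → {"ts": "2025-01-01T12:00:00", "session": 7200.0, "kph": 341.5, "character_name": "Mosswart"}
```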


# ─── Per-character lookups (HTTP loopback) ──────────────────────────


async def get_player_state(character_name: str) -> dict[str, Any]:
    """Combined snapshot for one character: live telemetry + character stats.

    Returns:
      {
        "character_name": str,
        "online": bool,                   # whether telemetry was found in /live
        "telemetry": {...} | None,        # from /live, or None if offline
        "character_stats": {...} | None,  # from /character-stats/<name>
      }
    """
    name = character_name.strip()
    live = await _get_json("/live")
    players = live.get("players", []) if isinstance(live, dict) else []
    telemetry = next(
        (p for p in players if p.get("character_name") == name), None
    )

    char_stats: dict[str, Any] | None = None
    try:
        client = await _http()
        resp = await client.get(f"/character-stats/{quote(name, safe='')}")
        if resp.status_code == 200:
            char_stats = resp.json()
    except Exception:
        char_stats = None

    return {
        "character_name": name,
        "online": telemetry is not None,
        "telemetry": telemetry,
        "character_stats": char_stats,
    }


async def get_inventory(character_name: str) -> dict[str, Any]:
    """Full inventory for one character. Items only — for filtered queries
    use get_inventory_search."""
    client = await _http()
    resp = await client.get(f"/inventory/{quote(character_name, safe='')}")
    resp.raise_for_status()
    return resp.json()


async def get_inventory_search(
    character_name: str, filters: dict[str, Any] | None = None
) -> dict[str, Any]:
    """Filtered inventory search. `filters` is a dict of query params, e.g.
    {"name": "pearl", "armor_level_min": 500}.

    Caller is expected to know the supported filters from the dereth-tracker
    /inventory/{name}/search route — pass through opaquely.
    """
    client = await _http()
    resp = await client.get(
        f"/inventory/{quote(character_name, safe='')}/search",
        params=filters or {},
    )
    resp.raise_for_status()
    return resp.json()


async def get_combat_stats(character_name: str) -> dict[str, Any]:
    """Lifetime + session combat stats for one character (per-element split,
    monster encounters, surge counts)."""
    client = await _http()
    resp = await client.get(f"/combat-stats/{quote(character_name, safe='')}")
    resp.raise_for_status()
    return resp.json()


async def get_equipment_cantrips(character_name: str) -> dict[str, Any]:
    """Currently-equipped items + their active cantrip/spell state."""
    client = await _http()
    resp = await client.get(
        f"/equipment-cantrip-state/{quote(character_name, safe='')}"
    )
    resp.raise_for_status()
    return resp.json()


async def get_quest_status() -> dict[str, Any]:
    """All characters' active quest timers and progress."""
    return await _get_json("/quest-status")


async def get_server_health() -> dict[str, Any]:
    """Coldeve server status: up/down, latency, current player count, uptime."""
    return await _get_json("/server-health")


async def suitbuilder_search(
    params: dict[str, Any], max_phase_events: int = 50
) -> dict[str, Any]:
    """Drive a suitbuilder constraint search synchronously.

    The dereth-tracker /inv/suitbuilder/search endpoint is an SSE stream.
    We collect events until the stream closes, drop intermediate phase
    chatter (keeping the last N), and return:

      { "final_suits": [...], "phases": [...latest few...] }

    `params` is the JSON body the suitbuilder expects. Call it like the
    /suitbuilder.html page does.
    """
    final: list[dict[str, Any]] = []
    phases: list[dict[str, Any]] = []

    # Use a fresh long-timeout client for the SSE stream — don't tie up the
    # shared pool for a 5-minute search.
    async with httpx.AsyncClient(
        base_url=TRACKER_URL, timeout=httpx.Timeout(300.0, connect=10.0)
    ) as stream_client:
        async with stream_client.stream(
            "POST",
            "/inv/suitbuilder/search",
            json=params,
            headers={"Content-Type": "application/json"},
        ) as resp:
            event_name = "message"
            data_lines: list[str] = []
            # httpx aiter_lines() yields decoded str lines.
            async for raw_line in resp.aiter_lines():
                line = raw_line.rstrip("\r")
                if line.startswith("event:"):
                    event_name = line[6:].strip()
                elif line.startswith("data:"):
                    data_lines.append(line[5:].strip())
                elif line == "":
                    # Blank line ends one SSE event; dispatch it.
                    if data_lines:
                        try:
                            payload = json.loads("\n".join(data_lines))
                        except json.JSONDecodeError:
                            payload = {"raw": "\n".join(data_lines)}
                        if event_name in ("result", "final"):
                            final.append(payload)
                        else:
                            phases.append({"event": event_name, "data": payload})
                        phases = phases[-max_phase_events:]
                    data_lines = []
                    event_name = "message"

    return {
        "final_suits": final,
        "phases": phases[-max_phase_events:],
        "phase_count": len(phases),
    }
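
The event/data accumulation can be exercised against a canned stream. A sketch; the `phase`/`result` event names below are illustrative, since the real stream's vocabulary is defined by dereth-tracker:

```python
import json

# Canned SSE frames: two phase updates, then a final result. In the wire
# format, a blank line terminates each event.
stream = [
    "event: phase",
    'data: {"step": 1}',
    "",
    "event: phase",
    'data: {"step": 2}',
    "",
    "event: result",
    'data: {"suit": ["Helm", "Cuirass"]}',
    "",
]

final, phases = [], []
event_name, data_lines = "message", []
for line in stream:
    if line.startswith("event:"):
        event_name = line[6:].strip()
    elif line.startswith("data:"):
        data_lines.append(line[5:].strip())
    elif line == "":
        # Blank line ends one SSE event; dispatch what we accumulated.
        if data_lines:
            payload = json.loads("\n".join(data_lines))
            (final if event_name in ("result", "final") else phases).append(payload)
        event_name, data_lines = "message", []

print(final)        # → [{'suit': ['Helm', 'Cuirass']}]
print(len(phases))  # → 2
```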


# ─── Cleanup ────────────────────────────────────────────────────────


async def shutdown() -> None:
    """Close shared resources. Call from MCP server lifespan / on exit."""
    global _http_client, _db_pool
    if _http_client is not None:
        await _http_client.aclose()
        _http_client = None
    if _db_pool is not None:
        await _db_pool.close()
        _db_pool = None