
# Dereth Tracker

Dereth Tracker is a real-time telemetry service for the world of Dereth. It collects player data, stores it in a SQLite database, and provides a live map interface along with a sample data generator for testing.

## Table of Contents

- [Overview](#overview)
- [Features](#features)
- [Requirements](#requirements)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Grafana Dashboard UI](#grafana-dashboard-ui)
- [NGINX Proxy Configuration](#nginx-proxy-configuration)
- [Frontend Configuration](#frontend-configuration)
- [Debugging WebSockets](#debugging-websockets)
- [Styling Adjustments](#styling-adjustments)
- [API Reference](#api-reference)
- [Frontend](#frontend)
- [Database Schema](#database-schema)
- [Contributing](#contributing)
- [Roadmap & TODO](#roadmap--todo)
- [Local Development Database](#local-development-database)

## Overview

This project provides:

- A FastAPI backend with endpoints for receiving and querying telemetry data.
- SQLite-based storage for snapshots and live state.
- A live, interactive map built with static HTML, CSS, and JavaScript.
- A sample data generator script (`generate_data.py`) for simulating telemetry snapshots.

## Features

- **WebSocket `/ws/position`**: stream telemetry snapshots (protected by a shared secret).
- **`GET /live`**: fetch active players seen in the last 30 seconds.
- **`GET /history`**: retrieve historical telemetry data with optional time filtering.
- **`GET /debug`**: health-check endpoint.
- **Live Map**: interactive map interface with panning, zooming, and sorting.
- **Sample Data Generator**: `generate_data.py` sends telemetry snapshots over WebSocket for testing.

## Requirements

- Python 3.9 or newer (only if running without Docker)
- pip (only if running without Docker)
- Docker & Docker Compose (recommended)

Python packages (if using a local virtualenv):

- fastapi
- uvicorn
- pydantic
- databases
- asyncpg
- sqlalchemy
- websockets (required for the sample data generator)

## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/dereth-tracker.git
   cd dereth-tracker
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # Windows: venv\Scripts\activate
   ```

3. Install the packages listed under Requirements:

   ```bash
   pip install fastapi uvicorn pydantic databases asyncpg sqlalchemy websockets
   ```


## Configuration

- Update the `SHARED_SECRET` in `main.py` to match your plugin (default: `"your_shared_secret"`).
- The SQLite database file `dereth.db` is created in the project root. To change the path, edit `DB_FILE` in `db.py`.
- To limit the maximum database size, set the environment variable `DB_MAX_SIZE_MB` (default: 2048 MB).
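
For example, to cap the database at 1 GB for a local run (the variable name comes from the note above; the value is just an example):

```bash
export DB_MAX_SIZE_MB=1024
uvicorn main:app --host 0.0.0.0 --port 8000
```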

## Usage

Start the server using Uvicorn:

```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
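
If you prefer the Docker route recommended under Requirements, and assuming the repository ships a Compose file (check the project root), the equivalent is roughly:

```bash
docker compose up --build
```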

## Grafana Dashboard UI

To serve the Grafana UI under `/grafana/` behind NGINX (as referenced below), use a location block like:

```nginx
location /grafana/ {
  proxy_pass         http://127.0.0.1:3000/;
  proxy_http_version 1.1;
  proxy_set_header   Host              $host;
  proxy_set_header   X-Real-IP         $remote_addr;
  proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
  proxy_set_header   X-Forwarded-Proto $scheme;
  # WebSocket support (for live panels)
  proxy_set_header   Upgrade           $http_upgrade;
  proxy_set_header   Connection        "upgrade";
  proxy_cache_bypass $http_upgrade;
}
```

## NGINX Proxy Configuration

If you cannot reassign the existing `/live` and `/trails` routes, you can namespace this service under `/api` (or any other prefix) and configure NGINX accordingly. Be sure to forward the WebSocket upgrade headers so that `/ws/live` and `/ws/position` continue to work. Example:

```nginx
location /api/ {
  proxy_pass         http://127.0.0.1:8765/;
  proxy_http_version 1.1;
  proxy_set_header   Host              $host;
  proxy_set_header   X-Real-IP         $remote_addr;
  proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
  proxy_set_header   X-Forwarded-Proto $scheme;
  # WebSocket support
  proxy_set_header   Upgrade           $http_upgrade;
  proxy_set_header   Connection        "upgrade";
  proxy_cache_bypass $http_upgrade;
}
```

The browser client (`static/script.js`) will then fetch `/api/live/` and `/api/trails/` to reach the service.

- Live Map: `http://localhost:8000/` (or `http://<your-domain>/api/` if behind a prefix)
- Grafana UI: `http://localhost:3000/grafana/` (or `http://<your-domain>/grafana/` if proxied under that path)

## Frontend Configuration

In `static/script.js`, the constant `API_BASE` controls where live/trails data and the WebSocket `/ws/live` are fetched from. By default:

```js
const API_BASE = '/api';
```

Update `API_BASE` if you mount the service under a different path or serve it at the root.

## Debugging WebSockets

- Server logs print every incoming WebSocket frame in `main.py`:
  - `[WS-PLUGIN RX] <client>: <raw-payload>` for plugin messages on `/ws/position`
  - `[WS-LIVE RX] <client>: <parsed-json>` for browser messages on `/ws/live`
- Use these logs to verify messages and troubleshoot handshake failures.
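
For manual testing of the plugin endpoint, a generic WebSocket client such as `wscat` (installed via npm, not part of this project) can exercise the handshake; the secret below is a placeholder:

```bash
npm install -g wscat
wscat -c "ws://localhost:8000/ws/position?secret=your_shared_secret"
```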

## Styling Adjustments

- The chat input bar is fixed at the bottom of the chat window (`.chat-form { position: absolute; bottom: 0; }`).
- Input text and placeholder are white for readability (`.chat-input, .chat-input::placeholder { color: #fff; }`).
- Incoming chat messages are forced white via `.chat-messages div { color: #fff !important; }`.

## API Reference

### WebSocket `/ws/position`

Streams telemetry snapshots over a WebSocket connection. Provide your shared secret either as a query parameter or as a WebSocket header:

```
ws://<host>:<port>/ws/position?secret=<shared_secret>
```

or

```
X-Plugin-Secret: <shared_secret>
```

After connecting, send JSON messages matching the `TelemetrySnapshot` schema. For example:

```json
{
  "type": "telemetry",
  "character_name": "Dunking Rares",
  "char_tag": "moss",
  "session_id": "dunk-20250422-xyz",
  "timestamp": "2025-04-22T13:45:00Z",
  "ew": 123.4,
  "ns": 567.8,
  "z": 10.2,
  "kills": 42,
  "deaths": 1,
  "prismatic_taper_count": 17,
  "vt_state": "Combat",
  "kills_per_hour": "N/A",
  "onlinetime": "00:05:00"
}
```

Each message above is sent as its own JSON object over the WebSocket (one frame per event). When you want to report a rare spawn, send a standalone rare event instead of embedding rare counts in telemetry. For example:

```json
{
  "type": "rare",
  "timestamp": "2025-04-22T13:48:00Z",
  "character_name": "MyCharacter",
  "name": "Golden Gryphon",
  "ew": 150.5,
  "ns": 350.7,
  "z": 5.0,
  "additional_info": "first sighting of the day"
}
```
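
To exercise the endpoint end to end, here is a minimal client sketch along the lines of what `generate_data.py` does (illustrative, not the generator's actual code; host, port, and secret are placeholders):

```python
# Minimal plugin-style client sketch using the `websockets` package from the
# Requirements list. The endpoint, query parameter, and payload fields come
# from the examples above.
import asyncio
import json

import websockets

SECRET = "your_shared_secret"  # must match SHARED_SECRET in main.py
URL = f"ws://localhost:8000/ws/position?secret={SECRET}"

async def main() -> None:
    snapshot = {
        "type": "telemetry",
        "character_name": "Dunking Rares",
        "char_tag": "moss",
        "session_id": "dunk-20250422-xyz",
        "timestamp": "2025-04-22T13:45:00Z",
        "ew": 123.4,
        "ns": 567.8,
        "z": 10.2,
        "kills": 42,
        "deaths": 1,
        "prismatic_taper_count": 17,
        "vt_state": "Combat",
        "kills_per_hour": "N/A",
        "onlinetime": "00:05:00",
    }
    async with websockets.connect(URL) as ws:
        # One JSON object per frame, as described above; rare and chat
        # events go over the same connection.
        await ws.send(json.dumps(snapshot))

asyncio.run(main())
```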

### Chat messages

You can also send chat envelopes over the same WebSocket to display messages in the browser. Fields:

- `type`: must be `"chat"`
- `character_name`: target player name
- `text`: message content
- `color` (optional): CSS color string (e.g. `"#ff8800"`); if sent as an integer (`0xRRGGBB`), it will be converted to hex.

Example chat payload:

```json
{
  "type": "chat",
  "character_name": "MyCharacter",
  "text": "Hello world!",
  "color": "#88f"
}
```
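
The integer-to-hex conversion described above amounts to something like this (a sketch of the rule, not the server's actual code):

```python
def color_to_css(color):
    """Render the chat `color` field: 0xRRGGBB ints become "#rrggbb" strings."""
    if isinstance(color, int):
        return "#{:06x}".format(color)
    return color  # already a CSS color string, pass through

assert color_to_css(0xFF8800) == "#ff8800"
assert color_to_css("#88f") == "#88f"
```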

### Event Payload Formats

For a complete reference of the JSON payloads accepted by the backend (over `/ws/position`), see the file `EVENT_FORMATS.json` in the project root. It contains example schemas for:

- Telemetry events (`type: "telemetry"`)
- Spawn events (`type: "spawn"`)
- Chat events (`type: "chat"`)
- Rare events (`type: "rare"`)

Notes on payload changes:

- Spawn events no longer require the `z` coordinate; if omitted, the server defaults it to `0.0`. Coordinates (`ew`, `ns`, `z`) may be sent as JSON numbers or strings; the backend coerces them to floats, as sketched below.
- Telemetry events no longer include the `latency_ms` field; omit it from your payloads.

Each entry shows all required and optional fields, their types, and example values.
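
The coordinate rules in the first note boil down to something like this (illustrative only; the real logic lives in the backend):

```python
def coerce_coords(payload):
    """Default a missing z to 0.0 and coerce ew/ns/z (numbers or strings) to float."""
    return {
        "ew": float(payload["ew"]),
        "ns": float(payload["ns"]),
        "z": float(payload.get("z", 0.0)),
    }

print(coerce_coords({"ew": "150.5", "ns": 350.7}))  # {'ew': 150.5, 'ns': 350.7, 'z': 0.0}
```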

### GET `/live`

Returns active players seen within the last 30 seconds:

```json
{
  "players": [ { ... } ]
}
```

### GET `/history`

Retrieves historical snapshots, with optional `from` and `to` ISO 8601 timestamps:

```
GET /history?from=2025-04-22T12:00:00Z&to=2025-04-22T13:00:00Z
```

Response:

```json
{
  "data": [ { ... } ]
}
```
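
For example, querying a local instance with curl:

```bash
curl "http://localhost:8000/history?from=2025-04-22T12:00:00Z&to=2025-04-22T13:00:00Z"
```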

## Frontend

- Live Map (`static/index.html`): real-time player positions on a map.

## Database Schema

- `telemetry_log`: stored history of snapshots.
- `live_state`: current snapshot per character (upserted).

## Contributing

Contributions are welcome! Feel free to open issues or submit pull requests.

## Roadmap & TODO

For detailed tasks, migration steps, and future enhancements, see `TODO.md`.

## Local Development Database

This project will migrate from SQLite to PostgreSQL/TimescaleDB. You can configure local development using Docker Compose or connect to an external instance:

1. **PostgreSQL/TimescaleDB via Docker Compose (recommended)**
   - Pros:
     - Reproducible, isolated environment out of the box
     - No need to install Postgres locally
     - Aligns development with production setups
   - Cons:
     - Additional resource usage (memory, CPU)
     - Slightly more complex Docker configuration
2. **External PostgreSQL instance**
   - Pros:
     - Leverages existing infrastructure
     - No Docker overhead
   - Cons:
     - Requires manual setup and the Timescale extension
     - Less portable for new contributors
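
If you just want a throwaway TimescaleDB to develop against before a Compose setup lands, something like this works (image tag and credentials are examples, not project defaults):

```bash
docker run -d --name dereth-timescaledb \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres \
  timescale/timescaledb:latest-pg15
```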