Updated readme

erik 2025-05-26 17:14:52 +00:00
parent 7845570819
commit add24e5c9d


@@ -1,6 +1,6 @@
 # Dereth Tracker
 
-Dereth Tracker is a real-time telemetry service for the world of Dereth. It collects player data, stores it in a SQLite database, and provides a live map interface along with a sample data generator for testing.
+Dereth Tracker is a real-time telemetry service for the world of Dereth. It collects player data, stores it in a PostgreSQL (TimescaleDB) database for efficient time-series storage, and provides a live map interface along with a sample data generator for testing.
 
 ## Table of Contents
 - [Overview](#overview)
@@ -16,11 +16,12 @@ Dereth Tracker is a real-time telemetry service for the world of Dereth. It coll
 ## Overview
 
-This project provides:
+- This project provides:
 - A FastAPI backend with endpoints for receiving and querying telemetry data.
-- SQLite-based storage for snapshots and live state.
+- PostgreSQL/TimescaleDB-based storage for time-series telemetry and per-character stats.
 - A live, interactive map using static HTML, CSS, and JavaScript.
 - A sample data generator script (`generate_data.py`) for simulating telemetry snapshots.
+- A sample data generator script (`generate_data.py`) for simulating telemetry snapshots.
 
 ## Features
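As a quick illustration of the workflow described in the overview above, here is a hypothetical client sketch that pushes one telemetry snapshot to the FastAPI backend. The endpoint path, header name, and exact payload fields are illustrative assumptions, not taken from the repository code.

```python
# Hypothetical example: send one telemetry snapshot to the backend.
# Endpoint path and header name are assumptions for illustration only.
import datetime
import requests

snapshot = {
    "character_name": "ExampleChar",
    "session_id": "abc123",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "ew": 12.3, "ns": -45.6, "z": 0.0,
    "kills": 7, "deaths": 0, "rares_found": 1,
}

resp = requests.post(
    "http://localhost:8000/telemetry",                   # hypothetical endpoint path
    json=snapshot,
    headers={"X-Shared-Secret": "your_shared_secret"},   # hypothetical header name
    timeout=5,
)
resp.raise_for_status()
```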
@@ -66,9 +67,10 @@ Python packages (if using local virtualenv):
 ## Configuration
 
-- Update the `SHARED_SECRET` in `main.py` to match your plugin (default: `"your_shared_secret"`).
-- The SQLite database file `dereth.db` is created in the project root. To change the path, edit `DB_FILE` in `db.py`.
-- To limit the maximum database size, set the environment variable `DB_MAX_SIZE_MB` (default: 2048 MB).
+- Configure the plugin shared secret via the `SHARED_SECRET` environment variable (default in code: `"your_shared_secret"`).
+- The database connection is controlled by the `DATABASE_URL` environment variable (e.g. `postgresql://postgres:password@db:5432/dereth`).
+  By default, when using Docker Compose, a TimescaleDB container is provisioned for you.
+- If you need to tune Timescale or Postgres settings (retention, checkpoint, etc.), set the corresponding `DB_*` environment variables as documented in `docker-compose.yml`.
 
 ## Usage
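A minimal sketch of how the service might pick up these settings, assuming plain environment-variable lookups with the defaults named in the README above; the authoritative code lives in `main.py` and `db.py`.

```python
# Sketch only: read configuration from the environment with README defaults.
import os

SHARED_SECRET = os.environ.get("SHARED_SECRET", "your_shared_secret")
DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "postgresql://postgres:password@db:5432/dereth",  # Docker Compose example from the README
)
```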
@@ -81,18 +83,24 @@ uvicorn main:app --reload --host 0.0.0.0 --port 8000
 # Grafana Dashboard UI
 ```nginx
 location /grafana/ {
+    # Optional: require basic auth on the Grafana UI
+    auth_basic "Restricted";
+    auth_basic_user_file /etc/nginx/.htpasswd;
+
     proxy_pass http://127.0.0.1:3000/;
     proxy_http_version 1.1;
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto $scheme;
+    # Inject Grafana service account token for anonymous panel embeds
+    proxy_set_header Authorization "Bearer <YOUR_SERVICE_ACCOUNT_TOKEN>";
     # WebSocket support (for live panels)
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection "upgrade";
     proxy_cache_bypass $http_upgrade;
 }
 ```
 
 ## NGINX Proxy Configuration
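A small, hypothetical smoke test for the proxy location above: it fetches the proxied Grafana UI with basic-auth credentials and expects an HTTP 200 once the proxy, auth, and token injection are in place. The hostname and credentials are placeholders for your deployment.

```python
# Hypothetical check that the /grafana/ location is reachable through NGINX.
import requests

resp = requests.get(
    "https://example.com/grafana/",            # assumed public hostname
    auth=("dashboard-user", "dashboard-pass"),  # basic auth from /etc/nginx/.htpasswd
    timeout=10,
)
print(resp.status_code)  # expect 200 when the proxy and auth are configured
```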
@@ -248,8 +256,46 @@ Response:
 ## Database Schema
 
-- **telemetry_log**: Stored history of snapshots.
-- **live_state**: Current snapshot per character (upserted).
+This service uses PostgreSQL with the TimescaleDB extension to store telemetry time-series data
+and aggregate character statistics. The primary tables are:
+
+- **telemetry_events** (hypertable):
+  - `id` (PK, serial)
+  - `character_name` (text, indexed)
+  - `char_tag` (text, nullable)
+  - `session_id` (text, indexed)
+  - `timestamp` (timestamptz, indexed)
+  - `ew`, `ns`, `z` (float)
+  - `kills`, `deaths`, `rares_found`, `prismatic_taper_count` (integer)
+  - `kills_per_hour` (float)
+  - `onlinetime`, `vt_state` (text)
+  - Optional metrics: `mem_mb`, `cpu_pct`, `mem_handles`, `latency_ms` (float)
+- **char_stats**:
+  - `character_name` (text, PK)
+  - `total_kills` (integer)
+- **rare_stats**:
+  - `character_name` (text, PK)
+  - `total_rares` (integer)
+- **rare_stats_sessions**:
+  - `character_name`, `session_id` (composite PK)
+  - `session_rares` (integer)
+- **spawn_events**:
+  - `id` (PK, serial)
+  - `character_name` (text)
+  - `mob` (text)
+  - `timestamp` (timestamptz)
+  - `ew`, `ns`, `z` (float)
+- **rare_events**:
+  - `id` (PK, serial)
+  - `character_name` (text)
+  - `name` (text)
+  - `timestamp` (timestamptz)
+  - `ew`, `ns`, `z` (float)
 
 ## Contributing
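For reference, a hedged sketch of how the `telemetry_events` hypertable could be created with `psycopg2`, following the column list above; the authoritative DDL lives in the repository. Note that TimescaleDB requires the time column in any unique constraint, so this sketch uses a composite primary key rather than `id` alone, and the index names are illustrative.

```python
# Sketch only: create the telemetry_events hypertable described in the schema above.
import os
import psycopg2

ddl = """
CREATE TABLE IF NOT EXISTS telemetry_events (
    id                     BIGSERIAL,
    character_name         TEXT NOT NULL,
    char_tag               TEXT,
    session_id             TEXT,
    "timestamp"            TIMESTAMPTZ NOT NULL,
    ew                     DOUBLE PRECISION,
    ns                     DOUBLE PRECISION,
    z                      DOUBLE PRECISION,
    kills                  INTEGER,
    deaths                 INTEGER,
    rares_found            INTEGER,
    prismatic_taper_count  INTEGER,
    kills_per_hour         DOUBLE PRECISION,
    onlinetime             TEXT,
    vt_state               TEXT,
    mem_mb                 DOUBLE PRECISION,
    cpu_pct                DOUBLE PRECISION,
    mem_handles            DOUBLE PRECISION,
    latency_ms             DOUBLE PRECISION,
    -- TimescaleDB needs the time column in unique constraints,
    -- hence a composite primary key in this sketch.
    PRIMARY KEY (id, "timestamp")
);
"""

with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
    with conn.cursor() as cur:
        cur.execute(ddl)
        # Convert the plain table into a hypertable partitioned on "timestamp".
        cur.execute(
            "SELECT create_hypertable('telemetry_events', 'timestamp', if_not_exists => TRUE);"
        )
        # Illustrative secondary indexes on the indexed columns listed above.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_te_character ON telemetry_events (character_name);")
        cur.execute("CREATE INDEX IF NOT EXISTS idx_te_session ON telemetry_events (session_id);")
```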
@@ -259,7 +305,7 @@ Contributions are welcome! Feel free to open issues or submit pull requests.
 For detailed tasks, migration steps, and future enhancements, see [TODO.md](TODO.md).
 
 ### Local Development Database
-This project will migrate from SQLite to PostgreSQL/TimescaleDB. You can configure local development using Docker Compose or connect to an external instance:
+This service uses PostgreSQL with the TimescaleDB extension. You can configure local development using the provided Docker Compose setup or connect to an external instance:
 1. PostgreSQL/TimescaleDB via Docker Compose (recommended):
    - Pros: