fixed rares event

erik 2025-05-22 16:29:05 +00:00
parent c418221575
commit f86ad9a542
6 changed files with 101 additions and 2 deletions


@@ -59,6 +59,11 @@ Root directory:
5. **spawn_events**
- Records individual mob spawn events for heatmapping.
- Columns: `id` (PK), `character_name` (String), `mob` (String), `timestamp` (DateTime), `ew`, `ns`, `z` (Float).
- Coordinates (`ew`, `ns`, `z`) can be sent as JSON numbers or strings and are coerced to floats.
6. **rare_events**
- Records each rare spawn event for future heatmaps and analysis.
- Columns: `id` (PK), `character_name` (String), `name` (String), `timestamp` (DateTime), `ew`, `ns`, `z` (Float).
### Initialization and Migrations
- On startup (`main.py`), `init_db_async()` is called:


@@ -37,7 +37,7 @@
"type": "rare",
"timestamp": "2025-04-22T13:48:00Z",
"character_name": "MyCharacter",
"mob": "Golden Gryphon",
"name": "Golden Gryphon",
"ew": 150.5,
"ns": 350.7,
"z": 5.0


@@ -158,7 +158,7 @@ After connecting, send JSON messages matching the `TelemetrySnapshot` schema. Fo
"type": "rare",
"timestamp": "2025-04-22T13:48:00Z",
"character_name": "MyCharacter",
"mob": "Golden Gryphon",
"name": "Golden Gryphon",
"ew": 150.5,
"ns": 350.7,
"z": 5.0,
@@ -193,6 +193,7 @@ For a complete reference of JSON payloads accepted by the backend (over `/ws/pos
Notes on payload changes:
- Spawn events no longer require the `z` coordinate; if omitted, the server defaults it to 0.0.
- Coordinates (`ew`, `ns`, `z`) may be sent as JSON numbers or strings; the backend will coerce them to floats.
- Telemetry events have removed the `latency_ms` field; please omit it from your payloads.
Each entry shows all required and optional fields, their types, and example values.
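The coercion and defaulting behavior described in the notes above can be sketched in plain Python (the backend itself relies on Pydantic models for this; the helper name here is illustrative):

```python
def coerce_coords(payload):
    """Coerce ew/ns/z to floats, defaulting a missing z to 0.0.

    Dependency-free sketch of the behavior described above; the real
    backend performs the same coercion via its Pydantic models.
    """
    out = dict(payload)
    out["z"] = out.get("z", 0.0)  # z is optional; server defaults it to 0.0
    for key in ("ew", "ns", "z"):
        out[key] = float(out[key])  # accepts JSON numbers or numeric strings
    return out

print(coerce_coords({"ew": "150.5", "ns": 350.7}))
# {'ew': 150.5, 'ns': 350.7, 'z': 0.0}
```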

TODO.md

@@ -0,0 +1,62 @@
## TODO: Migration & Parity Plan
### Detailed Plan
1. [ ] Review Repository for Data Storage and Event Handling
- [ ] Scan for SQLite usage (telemetry, spawns, chat, session data)
- [ ] Identify all event ingestion code paths (WebSocket, HTTP, direct DB inserts)
- [ ] Locate old or deprecated payload handling
2. [ ] Update Database Access Layer to PostgreSQL/TimescaleDB
- [ ] Replace SQLite code with SQLAlchemy models & Alembic migrations
- [ ] Configure TimescaleDB hypertable for telemetry data
- [ ] Create migration for spawn events table
- [ ] Set up `DATABASE_URL` and (optional) local Docker Compose service
3. [ ] Refactor Event Ingestion Endpoints and Logic
- [ ] Modify `/ws/position` to accept new schemas (telemetry, spawn, chat)
- [ ] Persist telemetry and spawn events to PostgreSQL
- [ ] Continue broadcasting chat messages without persisting
4. [ ] Update Data Models and API Response Types
- [ ] Align Pydantic schemas to new event payload structures
- [ ] Update `/live`, `/history`, `/trails` to query Postgres
- [ ] Optionally add `GET /spawns` endpoint for spawn data
5. [ ] Migrate or Clean Historical Data
- [ ] If needed, write script to migrate existing SQLite data to Postgres
- [ ] Otherwise remove old migration and data transformation code
6. [ ] Refactor Frontend to Query and Visualize New Data (deferred)
7. [ ] Add or Update Grafana Dashboards (deferred)
8. [ ] Testing & Verification (deferred)
9. [ ] Documentation & Developer Instructions
- [ ] Update README and docs for PostgreSQL/TimescaleDB setup
10. [ ] Maintenance and Future Enhancements
- [ ] Document data retention and aggregation policies for TimescaleDB
### Phases
### Phase 1: Core Migration & Parity
- [ ] Remove SQLite usage and associated code (`db.py` and direct `sqlite3` calls).
- [ ] Integrate PostgreSQL/TimescaleDB using SQLAlchemy and Alembic for migrations.
- Set up `DATABASE_URL` environment variable for connection.
- (Optional) Add a TimescaleDB service in `docker-compose.yml` for local development.
- [ ] Define SQLAlchemy models and create initial Alembic migration:
- Telemetry table as a TimescaleDB hypertable.
- Spawn events table.
- [ ] Update backend (`main.py`):
- Ingest `telemetry` and new `spawn` messages from `/ws/position` WebSocket.
- Persist telemetry and spawn events to PostgreSQL.
- Continue broadcasting `chat` messages without persisting.
- [ ] Ensure existing endpoints (`/live`, `/history`, `/trails`) operate against the new database.
- [ ] (Optional) Add retrieval endpoint for spawn events (e.g., `GET /spawns`).
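The hypertable step above can be sketched as the raw SQL an Alembic migration would run after creating the table. This is a sketch under assumptions: the table name `telemetry_events` comes from `db_async.py`, while the `timestamp` time column and the migration hook shape are illustrative.

```python
# Sketch of the Alembic upgrade step that turns telemetry_events into a
# TimescaleDB hypertable. Assumes the table and its `timestamp` column
# already exist when this runs.
CREATE_HYPERTABLE_SQL = (
    "SELECT create_hypertable('telemetry_events', 'timestamp', "
    "if_not_exists => TRUE);"
)

def upgrade(op):
    """Illustrative Alembic-style hook; `op` stands in for alembic.op."""
    op.execute(CREATE_HYPERTABLE_SQL)
```

`if_not_exists => TRUE` keeps the migration idempotent if it is re-run against an already-converted table.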
### Phase 2: Frontend & Visualization
- [ ] Update frontend to display spawn events (markers or lists).
- [ ] Expose new telemetry metrics in the UI: `latency_ms`, `mem_mb`, `cpu_pct`, `mem_handles`.
### Phase 3: Dashboards & Monitoring
- [ ] Provision or update Grafana dashboards for:
- Telemetry performance (TimescaleDB queries, hypertable metrics).
- Spawn event heatmaps and trends.
- Rare event heatmaps and trends.
### Phase 4: Documentation & Maintenance
- [ ] Finalize README and developer docs with PostgreSQL setup, migration steps, and usage examples.
- [ ] Document how to add new event types or payload fields, including schema, migrations, and tests.
- [ ] Establish data retention and aggregation policies for TimescaleDB hypertables.
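The retention and aggregation policies mentioned above could look like the SQL below. The 90-day window, the hourly bucket, and the `cpu_pct` aggregate are assumptions for illustration, not decided policy.

```python
# Sketch of TimescaleDB housekeeping for the telemetry hypertable.
RETENTION_SQL = (
    "SELECT add_retention_policy('telemetry_events', INTERVAL '90 days');"
)

# A continuous aggregate keeps hourly rollups cheap to query even as raw
# rows age out under the retention policy.
HOURLY_AGG_SQL = """
CREATE MATERIALIZED VIEW telemetry_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', timestamp) AS bucket,
       character_name,
       avg(cpu_pct) AS avg_cpu_pct
FROM telemetry_events
GROUP BY bucket, character_name;
"""
```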


@@ -72,6 +72,18 @@ spawn_events = Table(
    Column("ns", Float, nullable=False),
    Column("z", Float, nullable=False),
)

# Rare events: record individual rare spawns for future heatmaps
rare_events = Table(
    "rare_events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("character_name", String, nullable=False),
    Column("name", String, nullable=False),
    Column("timestamp", DateTime(timezone=True), nullable=False, index=True),
    Column("ew", Float, nullable=False),
    Column("ns", Float, nullable=False),
    Column("z", Float, nullable=False),
)

async def init_db_async():
    """Create tables and enable TimescaleDB hypertable for telemetry_events."""

main.py

@@ -20,6 +20,7 @@ from db_async import (
    rare_stats,
    rare_stats_sessions,
    spawn_events,
    rare_events,
    init_db_async
)
import asyncio
@@ -69,6 +70,14 @@ class SpawnEvent(BaseModel):
    ns: float
    z: float = 0.0

class RareEvent(BaseModel):
    character_name: str
    name: str
    timestamp: datetime
    ew: float
    ns: float
    z: float = 0.0

@app.on_event("startup")
async def on_startup():
@@ -310,6 +319,16 @@ async def ws_receive_snapshots(
                set_={"session_rares": rare_stats_sessions.c.session_rares + 1},
            )
            await database.execute(stmt_sess)
            # Persist the individual rare event for future analysis
            payload = data.copy()
            payload.pop("type", None)
            try:
                rare_ev = RareEvent.parse_obj(payload)
                await database.execute(
                    rare_events.insert().values(**rare_ev.dict())
                )
            except Exception:
                # A malformed rare payload is skipped; the counters above
                # have already been updated.
                pass
            continue
        # Chat message: broadcast to browser clients only (no DB write)
        if msg_type == "chat":
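With this change in place, a rare-event message sent over `/ws/position` looks like the sketch below (field values are illustrative; the key point is the `mob` → `name` rename — the server pops `type` and persists the remaining fields to `rare_events`):

```python
import json

# Example rare-event payload after the rename; a client serializes this
# and sends it over the /ws/position WebSocket.
rare_msg = {
    "type": "rare",
    "timestamp": "2025-04-22T13:48:00Z",
    "character_name": "MyCharacter",
    "name": "Golden Gryphon",  # previously sent as "mob"
    "ew": 150.5,
    "ns": 350.7,
    "z": 5.0,
}
wire = json.dumps(rare_msg)
print(wire)
```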