TODO: Migration & Parity Plan
Detailed Plan
- Review Repository for Data Storage and Event Handling
- Scan for SQLite usage (telemetry, spawns, chat, session data)
- Identify all event ingestion code paths (WebSocket, HTTP, direct DB inserts)
- Locate old or deprecated payload handling
- Update Database Access Layer to PostgreSQL/TimescaleDB
- Replace SQLite code with SQLAlchemy models & Alembic migrations
- Configure TimescaleDB hypertable for telemetry data
- Create migration for spawn events table
- Set up `DATABASE_URL` and (optional) local Docker Compose service
- Refactor Event Ingestion Endpoints and Logic
- Modify `/ws/position` to accept new schemas (telemetry, spawn, chat)
- Persist telemetry and spawn events to PostgreSQL
- Continue broadcasting chat messages without persisting
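The persist-vs-broadcast split above can be kept out of the WebSocket handler as a small pure function. A sketch, assuming messages carry a `"type"` field with the values named in this plan (`telemetry`, `spawn`, `chat`); the function and field names are assumptions:

```python
# Message types that get written to PostgreSQL.
PERSISTED_TYPES = {"telemetry", "spawn"}


def route_message(msg: dict) -> str:
    """Classify an incoming /ws/position message.

    Returns "persist" for events written to PostgreSQL, "broadcast"
    for chat (relayed to clients but never stored), "ignore" otherwise.
    """
    kind = msg.get("type")
    if kind in PERSISTED_TYPES:
        return "persist"
    if kind == "chat":
        return "broadcast"
    return "ignore"
```

Keeping this decision in one function makes it trivial to unit-test and to extend when a new event type is added.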
- Update Data Models and API Response Types
- Align Pydantic schemas to new event payload structures
- Update `/live`, `/history`, `/trails` to query Postgres
- Optionally add `GET /spawns` endpoint for spawn data
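The Pydantic alignment above might look like the following. The field names for telemetry come from the metrics listed in Phase 2; the identifier field and the spawn payload shape are assumptions to be matched against the real messages:

```python
from datetime import datetime

from pydantic import BaseModel


class TelemetryEvent(BaseModel):
    # Metric fields follow Phase 2; player_id is an assumed identifier.
    player_id: str
    ts: datetime
    latency_ms: float
    mem_mb: float
    cpu_pct: float
    mem_handles: int


class SpawnEvent(BaseModel):
    # Shape is illustrative; align with the actual spawn payload.
    entity_type: str
    x: float
    y: float
    ts: datetime
```

Pydantic coerces ISO-8601 timestamp strings into `datetime` on parse, so WebSocket JSON payloads validate directly.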
- Migrate or Clean Historical Data
- If needed, write script to migrate existing SQLite data to Postgres
- Otherwise remove old migration and data transformation code
- Refactor Frontend to Query and Visualize New Data (deferred)
- Add or Update Grafana Dashboards (deferred)
- Testing & Verification (deferred)
- Documentation & Developer Instructions
- Update README and docs for PostgreSQL/TimescaleDB setup
- Maintenance and Future Enhancements
- Document data retention and aggregation policies for TimescaleDB
Phases
Phase 1: Core Migration & Parity
- Remove SQLite usage and associated code (`db.py` and direct `sqlite3` calls).
- Integrate PostgreSQL/TimescaleDB using SQLAlchemy and Alembic for migrations.
- Set up `DATABASE_URL` environment variable for connection.
- (Optional) Add a TimescaleDB service in `docker-compose.yml` for local development.
- Define SQLAlchemy models and create initial Alembic migration:
- Telemetry table as a TimescaleDB hypertable.
- Spawn events table.
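The two tables above might be modeled as follows. Column names for telemetry follow the Phase 2 metrics; the identifier column, spawn columns, and table names are assumptions. Since SQLAlchemy has no native hypertable support, the conversion happens as raw SQL in the Alembic migration:

```python
from sqlalchemy import Column, DateTime, Float, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Telemetry(Base):
    __tablename__ = "telemetry"
    # TimescaleDB requires the time partitioning column in every unique
    # index, hence the composite primary key on (ts, player_id).
    ts = Column(DateTime, primary_key=True)
    player_id = Column(String, primary_key=True)
    latency_ms = Column(Float)
    mem_mb = Column(Float)
    cpu_pct = Column(Float)
    mem_handles = Column(Integer)


class SpawnEvent(Base):
    __tablename__ = "spawn_events"
    id = Column(Integer, primary_key=True)
    ts = Column(DateTime, index=True)
    entity_type = Column(String)
    x = Column(Float)
    y = Column(Float)


# In the Alembic upgrade(), after the telemetry table is created:
#   op.execute("SELECT create_hypertable('telemetry', 'ts');")
```

`create_hypertable(table, time_column)` is TimescaleDB's standard conversion call; it must run after `op.create_table` in the same migration.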
- Update backend (`main.py`):
  - Ingest `telemetry` and new `spawn` messages from the `/ws/position` WebSocket.
  - Persist telemetry and spawn events to PostgreSQL.
  - Continue broadcasting `chat` messages without persisting.
- Ensure existing endpoints (`/live`, `/history`, `/trails`) operate against the new database.
- (Optional) Add retrieval endpoint for spawn events (e.g., `GET /spawns`).
Phase 2: Frontend & Visualization
- Update frontend to display spawn events (markers or lists).
- Expose new telemetry metrics in the UI:
`latency_ms`, `mem_mb`, `cpu_pct`, `mem_handles`.
Phase 3: Dashboards & Monitoring
- Provision or update Grafana dashboards for:
- Telemetry performance (TimescaleDB queries, hypertable metrics).
- Spawn event heatmaps and trends.
- Rare event heatmaps and trends.
Phase 4: Documentation & Maintenance
- Finalize README and developer docs with PostgreSQL setup, migration steps, and usage examples.
- Document how to add new event types or payload fields, including schema, migrations, and tests.
- Establish data retention and aggregation policies for TimescaleDB hypertables.