## TODO: Migration & Parity Plan

### Detailed Plan

1. [ ] Review Repository for Data Storage and Event Handling
   - [ ] Scan for SQLite usage (telemetry, spawns, chat, session data)
   - [ ] Identify all event ingestion code paths (WebSocket, HTTP, direct DB inserts)
   - [ ] Locate old or deprecated payload handling
2. [ ] Update Database Access Layer to PostgreSQL/TimescaleDB
   - [ ] Replace SQLite code with SQLAlchemy models & Alembic migrations
   - [ ] Configure TimescaleDB hypertable for telemetry data
   - [ ] Create migration for spawn events table
   - [ ] Set up `DATABASE_URL` and (optional) local Docker Compose service
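
The model work in step 2 might look like the following sketch. The table and column names here are illustrative assumptions (the telemetry metric names are taken from Phase 2 of this plan), not the repository's actual schema:

```python
# Sketch of a possible SQLAlchemy model for the telemetry hypertable.
# Table and column names are assumptions, not the repository's schema.
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, Float, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Telemetry(Base):
    __tablename__ = "telemetry"

    # TimescaleDB partitions hypertables on the time column, so it
    # must be part of the primary key.
    ts = Column(DateTime(timezone=True), primary_key=True,
                default=lambda: datetime.now(timezone.utc))
    player_id = Column(String, primary_key=True)
    latency_ms = Column(Float)
    mem_mb = Column(Float)
    cpu_pct = Column(Float)
    mem_handles = Column(Integer)
```

The initial Alembic migration would create this table and then promote it with `op.execute("SELECT create_hypertable('telemetry', 'ts');")` so TimescaleDB chunks it by time.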
3. [ ] Refactor Event Ingestion Endpoints and Logic
   - [ ] Modify `/ws/position` to accept new schemas (telemetry, spawn, chat)
   - [ ] Persist telemetry and spawn events to PostgreSQL
   - [ ] Continue broadcasting chat messages without persisting
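
The routing rule in step 3 (persist telemetry/spawn, broadcast-only for chat) can be sketched as a small dispatcher; the `type` values and callback shapes are assumptions drawn from this plan, not the actual handler code:

```python
# Minimal sketch of per-message routing for the /ws/position handler.
# "type" values and callbacks are assumptions based on this plan.
from typing import Any, Callable, Dict

def route_message(
    msg: Dict[str, Any],
    persist: Callable[[str, Dict[str, Any]], None],
    broadcast: Callable[[Dict[str, Any]], None],
) -> str:
    """Persist telemetry/spawn events; broadcast chat without persisting."""
    kind = msg.get("type")
    if kind in ("telemetry", "spawn"):
        persist(kind, msg)      # write to PostgreSQL
        return "persisted"
    if kind == "chat":
        broadcast(msg)          # relay to connected clients only
        return "broadcast"
    return "ignored"            # old/deprecated payloads are dropped
```

Keeping the dispatch in one function makes it easy to drop deprecated payload handling (step 1) in a single place.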
4. [ ] Update Data Models and API Response Types
   - [ ] Align Pydantic schemas to new event payload structures
   - [ ] Update `/live`, `/history`, `/trails` to query Postgres
   - [ ] Optionally add `GET /spawns` endpoint for spawn data
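
A possible shape for the aligned Pydantic schemas in step 4; the telemetry field names come from the metrics listed in Phase 2, while the spawn fields are illustrative assumptions:

```python
# Sketch of Pydantic schemas for the new event payloads. Telemetry
# fields come from this plan's Phase 2; spawn fields are assumptions.
from datetime import datetime
from typing import Literal, Optional

from pydantic import BaseModel

class TelemetryEvent(BaseModel):
    type: Literal["telemetry"] = "telemetry"
    player_id: str
    ts: datetime
    latency_ms: float
    mem_mb: float
    cpu_pct: float
    mem_handles: int

class SpawnEvent(BaseModel):
    type: Literal["spawn"] = "spawn"
    entity: str                    # assumed: what spawned
    x: float
    y: float
    ts: Optional[datetime] = None  # server can stamp if absent
```

Tagging each model with a `type` literal lets the WebSocket handler validate incoming JSON against the right schema.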
5. [ ] Migrate or Clean Historical Data
   - [ ] If needed, write a script to migrate existing SQLite data to Postgres
   - [ ] Otherwise, remove old migration and data transformation code
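
The read side of a one-off backfill script for step 5 could look like this; the legacy table and column names are assumptions, and the write side would feed each batch to psycopg `executemany` or `COPY` against the new schema:

```python
# Sketch of the read side of a one-off SQLite -> Postgres backfill.
# Legacy table/column names are assumptions about the old schema.
import sqlite3
from typing import Iterator, List, Tuple

def iter_legacy_rows(db_path: str, batch: int = 1000) -> Iterator[List[Tuple]]:
    """Yield batches of rows from the legacy telemetry table."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute("SELECT ts, player_id, latency_ms FROM telemetry")
        while True:
            rows = cur.fetchmany(batch)
            if not rows:
                break
            yield rows
    finally:
        conn.close()
```

Batching keeps memory bounded however large the legacy database is, and the generator shape makes it trivial to wrap each batch in a Postgres transaction.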
6. [ ] Refactor Frontend to Query and Visualize New Data (deferred)
7. [ ] Add or Update Grafana Dashboards (deferred)
8. [ ] Testing & Verification (deferred)
9. [ ] Documentation & Developer Instructions
   - [ ] Update README and docs for PostgreSQL/TimescaleDB setup
10. [ ] Maintenance and Future Enhancements
    - [ ] Document data retention and aggregation policies for TimescaleDB

### Phases

### Phase 1: Core Migration & Parity

- [ ] Remove SQLite usage and associated code (`db.py` and direct `sqlite3` calls).
- [ ] Integrate PostgreSQL/TimescaleDB using SQLAlchemy and Alembic for migrations.
  - Set up `DATABASE_URL` environment variable for connection.
  - (Optional) Add a TimescaleDB service in `docker-compose.yml` for local development.
- [ ] Define SQLAlchemy models and create initial Alembic migration:
  - Telemetry table as a TimescaleDB hypertable.
  - Spawn events table.
- [ ] Update backend (`main.py`):
  - Ingest `telemetry` and new `spawn` messages from the `/ws/position` WebSocket.
  - Persist telemetry and spawn events to PostgreSQL.
  - Continue broadcasting `chat` messages without persisting.
- [ ] Ensure existing endpoints (`/live`, `/history`, `/trails`) operate against the new database.
- [ ] (Optional) Add retrieval endpoint for spawn events (e.g., `GET /spawns`).
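
The query behind the optional `GET /spawns` endpoint can be sketched as a parameterized SQL builder; the filter parameters (`since`, `limit`) and the `spawn_events` table name are illustrative assumptions, and the actual FastAPI route would wrap this in a handler:

```python
# Sketch of the query behind an optional GET /spawns endpoint.
# Parameter names and the spawn_events table name are assumptions.
from typing import Optional, Tuple

def build_spawns_query(since: Optional[str] = None,
                       limit: int = 100) -> Tuple[str, tuple]:
    """Return a parameterized query for recent spawn events."""
    sql = "SELECT ts, entity, x, y FROM spawn_events"
    params: tuple = ()
    if since is not None:
        sql += " WHERE ts >= %s"   # time filter is optional
        params = (since,)
    sql += " ORDER BY ts DESC LIMIT %s"
    return sql, params + (limit,)
```

Building the query with placeholders (rather than string interpolation of values) keeps the endpoint safe from SQL injection regardless of what the route accepts.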

### Phase 2: Frontend & Visualization

- [ ] Update frontend to display spawn events (markers or lists).
- [ ] Expose new telemetry metrics in the UI: `latency_ms`, `mem_mb`, `cpu_pct`, `mem_handles`.
### Phase 3: Dashboards & Monitoring
|
|
* [ ] Provision or update Grafana dashboards for:
|
|
- Telemetry performance (TimescaleDB queries, hypertable metrics).
|
|
- Spawn event heatmaps and trends.
|
|
- Rare event heatmaps and trends.
|
|
|
|

### Phase 4: Documentation & Maintenance

- [ ] Finalize README and developer docs with PostgreSQL setup, migration steps, and usage examples.
- [ ] Document how to add new event types or payload fields, including schema, migrations, and tests.
- [ ] Establish data retention and aggregation policies for TimescaleDB hypertables.
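
Once the retention policy is decided, it could be applied with TimescaleDB's built-in policy functions. A minimal sketch, assuming the `telemetry` hypertable name and placeholder intervals to be settled as part of this item:

```python
# Sketch of the policy DDL a maintenance script or migration might run.
# The hypertable name and intervals are placeholder assumptions.
from typing import List

def policy_statements(table: str = "telemetry",
                      compress_after: str = "7 days",
                      drop_after: str = "30 days") -> List[str]:
    """TimescaleDB compression + retention policies for a hypertable."""
    return [
        # Enable native compression, then compress chunks past the window.
        f"ALTER TABLE {table} SET (timescaledb.compress);",
        f"SELECT add_compression_policy('{table}', INTERVAL '{compress_after}');",
        # Drop raw chunks older than the retention window.
        f"SELECT add_retention_policy('{table}', INTERVAL '{drop_after}');",
    ]
```

Emitting the statements from one function keeps the documented policy and the executed policy in a single place, so the docs task above can point at it directly.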