Split the single lakehouse.duckdb into two files to eliminate the exclusive
write-lock conflict between SQLMesh (pipeline) and the Quart web app (reader):

- lakehouse.duckdb — SQLMesh exclusive (all pipeline layers)
- serving.duckdb — web app reads (serving tables only, atomically swapped)

Changes:

web/src/beanflows/analytics.py
- Replace persistent global _conn with per-thread connections (threading.local)
- Add _get_conn(): opens read_only=True on first call per thread, reopens
  automatically on inode change (~1μs os.stat) to pick up atomic file swaps
- Switch env var from DUCKDB_PATH → SERVING_DUCKDB_PATH
- Add module docstring documenting architecture + DuckLake migration path

web/src/beanflows/app.py
- Startup check: use SERVING_DUCKDB_PATH
- Health check: use _db_path instead of _conn

src/materia/export_serving.py (new)
- Reads all serving.* tables from lakehouse.duckdb (read_only)
- Writes to serving_new.duckdb, then os.rename → serving.duckdb (atomic)
- ~50 lines; runs after each SQLMesh transform

src/materia/pipelines.py
- Add export_serving pipeline entry (uv run python -c ...)

infra/supervisor/supervisor.sh
- Add SERVING_DUCKDB_PATH env var comment
- Add export step: uv run materia pipeline run export_serving

infra/supervisor/materia-supervisor.service
- Add Environment=SERVING_DUCKDB_PATH=/data/materia/serving.duckdb

infra/bootstrap_supervisor.sh
- Add SERVING_DUCKDB_PATH to .env template

web/.env.example + web/docker-compose.yml
- Document both env vars; switch web service to SERVING_DUCKDB_PATH

web/src/beanflows/dashboard/templates/settings.html
- Minor settings page fix from prior session

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
web/docker-compose.yml — 57 lines · 1.2 KiB · YAML
services:
  app:
    build: .
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - ./data:/app/data
      - ./duckdb:/app/duckdb:ro
    env_file: .env
    environment:
      - DATABASE_PATH=/app/data/app.db
      - SERVING_DUCKDB_PATH=/app/duckdb/serving.duckdb
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

  worker:
    build: .
    restart: unless-stopped
    command: python -m beanflows.worker
    volumes:
      - ./data:/app/data
    env_file: .env
    environment:
      - DATABASE_PATH=/app/data/app.db
    depends_on:
      - app

  scheduler:
    build: .
    restart: unless-stopped
    command: python -m beanflows.worker scheduler
    volumes:
      - ./data:/app/data
    env_file: .env
    environment:
      - DATABASE_PATH=/app/data/app.db
    depends_on:
      - app

  # Optional: Litestream for backups
  litestream:
    image: litestream/litestream:latest
    restart: unless-stopped
    command: replicate -config /etc/litestream.yml
    volumes:
      - ./data:/app/data
      - ./litestream.yml:/etc/litestream.yml:ro
    depends_on:
      - app

volumes:
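The read-only ./duckdb mount above only works safely because export_serving.py publishes serving.duckdb with an atomic rename. A minimal sketch of that publish step, under assumptions: `publish_atomically` and the `build` callable are illustrative names, and in the real pipeline `build` would copy the serving.* tables out of lakehouse.duckdb via a read-only DuckDB connection.

```python
import os

def publish_atomically(build, final_path):
    """Write the new artifact under a temporary name, then rename it over
    the live file. os.replace() (POSIX rename) is atomic when both paths
    are on the same filesystem, so readers see either the old file or the
    new one, never a partially written one."""
    tmp_path = final_path + ".new"   # same directory => same filesystem
    build(tmp_path)                  # e.g. export serving.* tables here
    os.replace(tmp_path, final_path) # atomic swap; old inode is orphaned
```

Readers holding the old file open keep a valid handle to the orphaned inode until they reopen, which is exactly what the inode check in analytics.py's `_get_conn` triggers.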