Split the single lakehouse.duckdb into two files to eliminate the exclusive write-lock conflict between SQLMesh (pipeline) and the Quart web app (reader):

- `lakehouse.duckdb` — SQLMesh exclusive (all pipeline layers)
- `serving.duckdb` — web app reads (serving tables only, atomically swapped)

Changes:

web/src/beanflows/analytics.py
- Replace persistent global `_conn` with per-thread connections (`threading.local`)
- Add `_get_conn()`: opens `read_only=True` on first call per thread, reopens automatically on inode change (~1μs `os.stat`) to pick up atomic file swaps
- Switch env var from `DUCKDB_PATH` → `SERVING_DUCKDB_PATH`
- Add module docstring documenting architecture + DuckLake migration path

web/src/beanflows/app.py
- Startup check: use `SERVING_DUCKDB_PATH`
- Health check: use `_db_path` instead of `_conn`

src/materia/export_serving.py (new)
- Reads all `serving.*` tables from `lakehouse.duckdb` (read_only)
- Writes to `serving_new.duckdb`, then `os.rename` → `serving.duckdb` (atomic)
- ~50 lines; runs after each SQLMesh transform

src/materia/pipelines.py
- Add `export_serving` pipeline entry (`uv run python -c ...`)

infra/supervisor/supervisor.sh
- Add `SERVING_DUCKDB_PATH` env var comment
- Add export step: `uv run materia pipeline run export_serving`

infra/supervisor/materia-supervisor.service
- Add `Environment=SERVING_DUCKDB_PATH=/data/materia/serving.duckdb`

infra/bootstrap_supervisor.sh
- Add `SERVING_DUCKDB_PATH` to .env template

web/.env.example + web/docker-compose.yml
- Document both env vars; switch web service to `SERVING_DUCKDB_PATH`

web/src/beanflows/dashboard/templates/settings.html
- Minor settings page fix from prior session

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
# Materia Infrastructure
Single-server local-first setup for BeanFlows.coffee on Hetzner NVMe.
## Architecture

```
Hetzner Server (NVMe)
├── /opt/materia/                   # Git repo, code, uv environment
├── /data/materia/landing/          # Extracted USDA data (year/month subdirs)
├── /data/materia/lakehouse.duckdb  # SQLMesh output database
└── systemd services:
    ├── materia-supervisor          # Pulls git, runs extract + transform daily
    └── materia-backup.timer        # Syncs landing/ to R2 every 6 hours
```
## Data Flow

- Extract: USDA API → `/data/materia/landing/psd/{year}/{month}/{etag}.csv.gzip`
- Transform: SQLMesh reads landing CSVs → writes to `/data/materia/lakehouse.duckdb`
- Backup: rclone syncs `/data/materia/landing/` → R2 `materia-raw/landing/`
- Web: Reads `lakehouse.duckdb` (read-only)
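The landing-path convention in the extract step can be sketched as a small helper. This is a hypothetical illustration, not code from the repo; in particular, the zero-padding of year and month is an assumption.

```python
from pathlib import Path


def landing_path(root: str, dataset: str, year: int, month: int, etag: str) -> Path:
    """Build a landing-zone path like /data/materia/landing/psd/2024/06/<etag>.csv.gzip.

    Hypothetical helper; the real extractor may pad or name segments differently.
    """
    return Path(root) / dataset / f"{year:04d}" / f"{month:02d}" / f"{etag}.csv.gzip"
```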
## Setup

### Prerequisites

- Hetzner server with NVMe storage
- Pulumi ESC configured (`beanflows/prod` environment)
- `GITLAB_READ_TOKEN` and `PULUMI_ACCESS_TOKEN` set
### Bootstrap

```sh
# From local machine or CI:
ssh root@<server_ip> 'bash -s' < infra/bootstrap_supervisor.sh
```

This installs dependencies, clones the repo, creates data directories, and starts the supervisor service.
### R2 Backup

- Install rclone: `apt install rclone`
- Copy and configure: `cp infra/backup/rclone.conf.example /root/.config/rclone/rclone.conf`
- Fill in R2 credentials from Pulumi ESC
- Install systemd units:

```sh
cp infra/backup/materia-backup.service /etc/systemd/system/
cp infra/backup/materia-backup.timer /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now materia-backup.timer
```
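For reference, the unit pair could look like the sketch below. This is an assumption-laden sketch, not the repo's actual unit files: the rclone remote name (`r2`) and the exact timer schedule syntax are guesses consistent with the 6-hour cadence described above.

```ini
# materia-backup.service (sketch)
[Unit]
Description=Sync /data/materia/landing/ to R2

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync /data/materia/landing/ r2:materia-raw/landing/

# materia-backup.timer (sketch)
[Unit]
Description=Run materia-backup every 6 hours

[Timer]
OnBootSec=15min
OnUnitActiveSec=6h

[Install]
WantedBy=timers.target
```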
## Pulumi IaC

Pulumi still manages the Cloudflare R2 buckets and can provision Hetzner instances:

```sh
cd infra
pulumi login
pulumi stack select prod
pulumi up
```
## Monitoring

```sh
# Supervisor status and logs
systemctl status materia-supervisor
journalctl -u materia-supervisor -f

# Backup timer status
systemctl list-timers materia-backup.timer
journalctl -u materia-backup -f
```
## Cost

| Resource | Type | Cost |
|---|---|---|
| Hetzner Server | CCX22 (4 vCPU, 16 GB) | ~€24/mo |
| R2 Storage | Backup (~10 GB) | $0.15/mo |
| R2 Egress | Zero | $0.00 |
| Total | | ~€24/mo + $0.15/mo |