Deeman 9ee7a3d9d3 fix: export_serving — Arrow-based copy, rename to analytics.duckdb
Two bugs fixed, plus a related temp-file change:

1. Cross-connection COPY: DuckDB doesn't support referencing another
   connection's tables as src.serving.table. Replace with Arrow as
   intermediate: src reads to Arrow, dst.register() + CREATE TABLE.

2. Catalog/schema name collision: naming the export file serving.duckdb
   made DuckDB assign catalog name "serving" — same as the schema we
   create inside it. Every serving.table query became ambiguous. Rename
   to analytics.duckdb (catalog "analytics", schema "serving" = no clash).

   SERVING_DUCKDB_PATH values updated: serving.duckdb → analytics.duckdb
   in supervisor, service, bootstrap, dev_run.sh, .env.example, docker-compose.

3. Temp file: use _export.duckdb (not serving.duckdb.tmp) to avoid
   the same catalog collision during the write phase.

Verified: 6 tables exported, serving.* queries work read-only.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-22 12:54:39 +01:00

Materia Infrastructure

Single-server local-first setup for BeanFlows.coffee on Hetzner NVMe.

Architecture

Hetzner Server (NVMe)
├── /opt/materia/              # Git repo, code, uv environment
├── /data/materia/landing/     # Extracted USDA data (year/month subdirs)
├── /data/materia/lakehouse.duckdb  # SQLMesh output database
└── systemd services:
    ├── materia-supervisor     # Pulls git, runs extract + transform daily
    └── materia-backup.timer   # Syncs landing/ to R2 every 6 hours

Data Flow

  1. Extract: USDA API → /data/materia/landing/psd/{year}/{month}/{etag}.csv.gzip
  2. Transform: SQLMesh reads landing CSVs → writes to /data/materia/lakehouse.duckdb
  3. Backup: rclone syncs /data/materia/landing/ → R2 materia-raw/landing/
  4. Web: Reads lakehouse.duckdb (read-only)

Setup

Prerequisites

  • Hetzner server with NVMe storage
  • Pulumi ESC configured (beanflows/prod environment)
  • GITLAB_READ_TOKEN and PULUMI_ACCESS_TOKEN set

Bootstrap

# From local machine or CI:
ssh root@<server_ip> 'bash -s' < infra/bootstrap_supervisor.sh

This installs dependencies, clones the repo, creates data directories, and starts the supervisor service.

R2 Backup

  1. Install rclone: apt install rclone
  2. Copy and configure: cp infra/backup/rclone.conf.example /root/.config/rclone/rclone.conf
  3. Fill in R2 credentials from Pulumi ESC
  4. Install systemd units:
     cp infra/backup/materia-backup.service /etc/systemd/system/
     cp infra/backup/materia-backup.timer /etc/systemd/system/
     systemctl daemon-reload
     systemctl enable --now materia-backup.timer
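The service behind the timer amounts to a one-shot rclone sync. A hypothetical sketch of materia-backup.service — the remote name `r2` is an assumption, and the real unit file ships in infra/backup/:

```ini
[Unit]
Description=Sync landing data to R2

[Service]
Type=oneshot
# "r2" is an assumed remote name from /root/.config/rclone/rclone.conf
ExecStart=/usr/bin/rclone sync /data/materia/landing/ r2:materia-raw/landing/
```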

Pulumi IaC

Pulumi still manages the Cloudflare R2 buckets and can provision Hetzner instances:

cd infra
pulumi login
pulumi stack select prod
pulumi up

Monitoring

# Supervisor status and logs
systemctl status materia-supervisor
journalctl -u materia-supervisor -f

# Backup timer status
systemctl list-timers materia-backup.timer
journalctl -u materia-backup -f

Cost

Resource         Type                    Cost
Hetzner Server   CCX22 (4 vCPU, 16 GB)   ~€24/mo
R2 Storage       Backup (~10 GB)         $0.15/mo
R2 Egress        Zero                    $0.00
Total                                    ~€24/mo (~$26)