docs(claude+infra): expand CLAUDE.md + infra/readme.md for full architecture

CLAUDE.md additions:
- List all 6 extractor packages + extract_core
- Full data flow with all sources + dual-DuckDB
- Foundation-as-ontology: dim_commodity conforms cross-source identifiers
- Two-DuckDB architecture explanation (why not serving.duckdb)
- Extraction pattern: one-package-per-source, state SQLite, adding new source
- Supervisor: croniter scheduling, topological waves, tag-based deploy
- CI/CD: pull-based via git tags, no SSH
- Secrets management: SOPS+age section, file table, server key workflow
- uv workspace management section
- Remove Pulumi ESC references; update env vars table

infra/readme.md:
- Update architecture diagram (add analytics.duckdb, age-key.txt)
- Rewrite setup flow: setup_server.sh → add key to SOPS → bootstrap
- Secrets management section with file table
- Deploy model: pull-based (no SSH/CI credentials)
- Monitoring: add supervisor status + extraction state DB query

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Deeman
2026-02-26 12:04:55 +01:00
parent 95f881827e
commit 518b50d0f5
2 changed files with 223 additions and 59 deletions


Single-server local-first setup for BeanFlows.coffee on Hetzner NVMe.
```
Hetzner Server (NVMe)
├── /opt/materia/ # Git repo (checked out at latest release tag)
├── /opt/materia/age-key.txt # Server age keypair (chmod 600, gitignored)
├── /opt/materia/.env # Decrypted from .env.prod.sops at deploy time
├── /data/materia/landing/ # Extracted raw data (immutable, content-addressed)
├── /data/materia/lakehouse.duckdb # SQLMesh exclusive write
├── /data/materia/analytics.duckdb # Read-only serving copy for web app
└── systemd services:
├── materia-supervisor # Python supervisor: extract → transform → export → deploy
└── materia-backup.timer # rclone: syncs landing/ to R2 every 6 hours
```
## Data Flow
1. **Extract** — Supervisor runs due extractors per `infra/supervisor/workflows.toml`
2. **Transform** — SQLMesh reads landing → writes `lakehouse.duckdb`
3. **Export** — `export_serving` copies `serving.*` → `analytics.duckdb` (atomic rename)
4. **Backup** — rclone syncs `/data/materia/landing/` → R2 `materia-raw/landing/`
5. **Web** — Web app reads `analytics.duckdb` read-only (per-thread connections)
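The export step's atomic handoff can be sketched as below; paths match the layout above, and `export_to_serving` is a hypothetical helper name (the real logic lives in the `export_serving` step):

```shell
# Hypothetical sketch of the export handoff. Staging a copy on the same
# filesystem and then renaming means web readers always see either the old
# or the new analytics.duckdb, never a partially written file.
export_to_serving() {
  local src=$1 dst=$2
  cp "$src" "$dst.tmp"   # stage the copy next to the target (same filesystem)
  mv "$dst.tmp" "$dst"   # rename(2) is atomic on the same filesystem
}
# e.g. export_to_serving /data/materia/lakehouse.duckdb /data/materia/analytics.duckdb
```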
## Setup (new server)
### 1. Run setup_server.sh
```bash
# From local machine or CI:
bash infra/setup_server.sh
```
This creates data directories, installs age, and generates the server age keypair at `/opt/materia/age-key.txt`. It prints the server's age public key.
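The key-handling part might look roughly like this (function name hypothetical; the real script is `infra/setup_server.sh`):

```shell
# Hypothetical sketch: create the server keypair only if missing, and keep
# the private key readable by root alone.
ensure_age_key() {
  local keyfile=$1
  if [ ! -f "$keyfile" ]; then
    age-keygen -o "$keyfile"   # writes the keypair; prints the public key
  fi
  chmod 600 "$keyfile"
  stat -c %a "$keyfile"        # report the final mode
}
# e.g. ensure_age_key /opt/materia/age-key.txt
```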
### 2. Add the server key to SOPS
On your workstation:
```bash
# Add the server public key to .sops.yaml
# Then re-encrypt prod secrets to include the server key:
sops updatekeys .env.prod.sops
git add .sops.yaml .env.prod.sops
git commit -m "chore: add server age key"
git push
```
### 3. Bootstrap the supervisor
```bash
# Requires GITLAB_READ_TOKEN (GitLab project access token, read-only)
export GITLAB_READ_TOKEN=<token>
ssh root@<server_ip> 'bash -s' < infra/bootstrap_supervisor.sh
```
This installs uv + sops + age, clones the repo, decrypts secrets, installs Python dependencies, and starts the supervisor service.
### 4. Set up R2 backup
```bash
apt install rclone
cp infra/backup/rclone.conf.example /root/.config/rclone/rclone.conf
# Fill in R2 credentials from .env.prod.sops (ACCESS_KEY_ID, SECRET_ACCESS_KEY, bucket endpoint)
cp infra/backup/materia-backup.service /etc/systemd/system/
cp infra/backup/materia-backup.timer /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now materia-backup.timer
```
## Secrets management
Secrets are stored as SOPS-encrypted dotenv files in the repo root:
| File | Purpose |
|------|---------|
| `.env.dev.sops` | Dev defaults (safe values, local paths) |
| `.env.prod.sops` | Production secrets |
| `.sops.yaml` | Maps file patterns to age public keys |
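The `.sops.yaml` mapping looks roughly like this (a sketch with placeholder keys; check the actual file in the repo root):

```yaml
# Sketch of a .sops.yaml shape; the age keys below are placeholders.
creation_rules:
  - path_regex: \.env\.prod\.sops$
    age: age1workstationkey...,age1serverkey...   # workstation + server can decrypt
  - path_regex: \.env\.dev\.sops$
    age: age1workstationkey...                    # dev secrets: workstation only
```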
```bash
# Decrypt for local dev
make secrets-decrypt-dev
# Edit prod secrets
make secrets-edit-prod
```
`bootstrap_supervisor.sh` decrypts `.env.prod.sops` → `/opt/materia/.env` during setup.
`web/deploy.sh` re-decrypts on every deploy (so secret rotations take effect automatically).
## Deploy model (pull-based)
No SSH keys or deploy credentials in CI.
1. CI runs tests (`test:cli`, `test:sqlmesh`, `test:web`)
2. On master, CI creates tag `v${CI_PIPELINE_IID}` using built-in `CI_JOB_TOKEN`
3. Supervisor polls for new tags every 60s
4. When a new tag appears: `git checkout --detach <tag>` + `uv sync --all-packages`
5. If `web/` files changed: `./web/deploy.sh` (Docker blue/green + health check)
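The tag-polling step can be sketched like this (function name hypothetical; the real loop lives in the Python supervisor):

```shell
# Hypothetical sketch of step 3: find the newest release tag in a checkout.
latest_release_tag() {
  git -C "$1" fetch --tags --quiet 2>/dev/null || true  # tolerate offline polls
  git -C "$1" tag --list 'v*' --sort=-v:refname | head -n1
}
# Steps 4-5, roughly:
#   git -C /opt/materia checkout --detach "$(latest_release_tag /opt/materia)"
#   (cd /opt/materia && uv sync --all-packages)
```

`--sort=-v:refname` gives a descending version sort, so `v10` correctly outranks `v9`.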
## Monitoring
```bash
systemctl status materia-supervisor
journalctl -u materia-supervisor -f
# Workflow status table
cd /opt/materia && uv run python src/materia/supervisor.py status
# Backup timer status
systemctl list-timers materia-backup.timer
journalctl -u materia-backup -f
# Extraction state DB
sqlite3 /data/materia/landing/.state.sqlite \
"SELECT extractor, status, finished_at FROM extraction_runs ORDER BY run_id DESC LIMIT 20"
```
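For a per-extractor summary, the same state DB supports a grouped query (columns as in the status query above; the rest of the table layout is assumed):

```shell
# Latest run per extractor (schema assumed beyond the columns shown above).
latest_runs() {
  sqlite3 "$1" "SELECT extractor, status, MAX(finished_at)
                FROM extraction_runs GROUP BY extractor ORDER BY extractor"
}
# e.g. latest_runs /data/materia/landing/.state.sqlite
```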
## Pulumi IaC
Still manages Cloudflare R2 buckets:
```bash
cd infra
pulumi login
pulumi stack select prod
pulumi up
```
## Cost