# Materia

A commodity data analytics platform built on a modern data engineering stack. It extracts agricultural commodity data from USDA PSD Online, transforms it through a layered SQL pipeline using SQLMesh, and stores it in DuckDB + Cloudflare R2 for analysis.

## Recent Changes

### Phase 1A: KC=F Coffee Futures Prices

- New `extract/coffee_prices/` package (yfinance): downloads KC=F daily OHLCV, stores as gzip CSV with SHA256-based idempotency
- SQLMesh models: `raw/coffee_prices` → `foundation/fct_coffee_prices` → `serving/coffee_prices` (with 20d/50d SMA, 52-week high/low, daily return %)
- Dashboard: 4 metric cards + dual-line chart (close, 20d MA, 50d MA)
- API: `GET /commodities/<ticker>/prices`

### Phase 1B: Data Methodology Page

- New `/methodology` route with full-page template (`base.html`)
- 6 anchored sections: USDA PSD, CFTC COT, KC=F price, ICE warehouse stocks, data quality model, update schedule table
- "Methodology" link added to marketing footer

### Phase 1C: Automated Pipeline

- `supervisor.sh` updated: runs `extract_cot`, `extract_prices`, and `extract_ice` in sequence before transform
- Webhook failure alerting via the `ALERT_WEBHOOK_URL` env var (ntfy/Slack/Telegram)

### ICE Warehouse Stocks

- New `extract/ice_stocks/` package (niquests): normalizes ICE Report Center CSV to a canonical schema, hash-based idempotency, soft-fail on 404 with guidance
- SQLMesh models: `raw/ice_warehouse_stocks` → `foundation/fct_ice_warehouse_stocks` → `serving/ice_warehouse_stocks` (30d avg, WoW change, 52w drawdown)
- Dashboard: 4 metric cards + line chart (certified bags + 30d avg)
- API: `GET /commodities/<code>/stocks`

### Foundation

- `dim_commodity`: added `ticker` (KC=F) and `ice_stock_report_code` (COFFEE-C) columns
- `macros/__init__.py`: added `prices_glob()` and `ice_stocks_glob()`
- `pipelines.py`: added `extract_prices` and `extract_ice` entries

Co-Authored-By: Claude Sonnet 4.6 `<noreply@anthropic.com>`
## Tech Stack

- Python 3.13 with the `uv` package manager
- SQLMesh for SQL transformation and orchestration
- DuckDB as the analytical database
- Cloudflare R2 (Apache Iceberg) for data storage
- Pulumi ESC for secrets management
- Hetzner Cloud for infrastructure
## Quick Start

### 1. Install uv

uv is our Python package manager, used for faster, more reliable dependency management.

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```
### 2. Install Dependencies

```sh
uv sync
```

This installs Python and all dependencies declared in `pyproject.toml`.
### 3. Set Up Pre-commit Hooks

```sh
pre-commit install
```

This enables automatic linting with ruff on every commit.
### 4. Install Pulumi ESC (for running with secrets)

```sh
# Install the ESC CLI
curl -fsSL https://get.pulumi.com/esc/install.sh | sh

# Log in
esc login
```
## Project Structure

This is a uv workspace with three main packages:

### Extract Layer (`extract/`)

`psdonline`: extracts USDA PSD commodity data.

```sh
# Local development (downloads to a local directory)
uv run extract_psd

# Production (uploads to R2)
esc run beanflows/prod -- uv run extract_psd
```
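The extract packages use hash-based idempotency to avoid re-persisting unchanged downloads. A minimal sketch of that idea; the paths and function names are illustrative, and the real packages additionally handle gzip compression and R2 uploads:

```python
import hashlib
from pathlib import Path


def content_hash(data: bytes) -> str:
    """SHA256 hex digest of the raw payload."""
    return hashlib.sha256(data).hexdigest()


def write_if_changed(dest: Path, data: bytes) -> bool:
    """Persist `data` only if its SHA256 differs from the stored digest.

    Returns True when a write happened, False when the payload was
    identical and the write was skipped.
    """
    marker = dest.parent / (dest.name + ".sha256")
    digest = content_hash(data)
    if marker.exists() and marker.read_text().strip() == digest:
        return False  # same content already on disk; skip
    dest.write_bytes(data)
    marker.write_text(digest)
    return True
```

Identifying payloads by their content rather than by timestamps is what keeps repeated extract runs cheap, and it matches the project's "identify data by content, not metadata" principle.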
### Transform Layer (`transform/sqlmesh_materia/`)

SQLMesh project implementing a 4-layer data architecture (raw → staging → cleaned → serving).

All commands run from the project root with `-p transform/sqlmesh_materia`:

```sh
# Plan against a personal dev environment
esc run beanflows/prod -- uv run sqlmesh -p transform/sqlmesh_materia plan dev_<username>

# Plan against production
esc run beanflows/prod -- uv run sqlmesh -p transform/sqlmesh_materia plan prod

# Run tests (no secrets needed)
uv run sqlmesh -p transform/sqlmesh_materia test

# Format SQL
uv run sqlmesh -p transform/sqlmesh_materia format
```
### Core Package (`src/materia/`)

CLI for managing infrastructure and pipelines (currently minimal).
## Development Workflow

### Adding Dependencies

For the workspace root:

```sh
uv add <package-name>
```

For a specific package:

```sh
uv add --package psdonline <package-name>
```
### Linting and Formatting

```sh
# Check for issues
ruff check .

# Auto-fix issues
ruff check --fix .

# Format code
ruff format .
```
### Running Tests

```sh
# Python tests
uv run pytest tests/ -v --cov=src/materia

# SQLMesh tests
uv run sqlmesh -p transform/sqlmesh_materia test
```
## Secrets Management

All secrets are managed via the Pulumi ESC environment `beanflows/prod`.

Load secrets into the current shell:

```sh
eval $(esc env open beanflows/prod --format shell)
```

Run commands with secrets:

```sh
# Single command
esc run beanflows/prod -- uv run extract_psd

# Multiple commands
esc run beanflows/prod -- bash -c "
  uv run extract_psd
  uv run sqlmesh -p transform/sqlmesh_materia plan prod
"
```
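Processes launched under `esc run` see the injected secrets as ordinary environment variables, so Python code can fail fast when it was started without them. A small sketch; the variable names below are illustrative placeholders, not the actual keys defined in the `beanflows/prod` environment:

```python
import os

# Illustrative names only; the real keys live in the beanflows/prod
# ESC environment definition.
REQUIRED_VARS = ("R2_ACCESS_KEY_ID", "R2_SECRET_ACCESS_KEY")


def require_env(names: tuple[str, ...]) -> dict[str, str]:
    """Return the requested variables, raising early if any are unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(
            f"Missing env vars (did you forget 'esc run'?): {', '.join(missing)}"
        )
    return {n: os.environ[n] for n in names}
```

Failing at startup with a pointer to `esc run` gives a much clearer error than an authentication failure deep inside an R2 upload.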
## Production Architecture

### Git-Based Deployment

- Supervisor (Hetzner CPX11): always-on orchestrator that pulls the latest code every 15 minutes
- Workers (ephemeral): created on demand for each pipeline run, destroyed after completion
- Storage: Cloudflare R2 Data Catalog (Apache Iceberg REST API)
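The supervisor runs the extract entry points in sequence before the transform and posts to a webhook (`ALERT_WEBHOOK_URL`) on failure. A Python sketch of that run-in-order, alert-on-first-failure pattern; the real orchestration lives in `supervisor.sh`, and the function names here are illustrative:

```python
import os
import subprocess
from collections.abc import Callable


def post_alert(message: str) -> None:
    """Stand-in for POSTing to ALERT_WEBHOOK_URL (ntfy/Slack/Telegram)."""
    url = os.environ.get("ALERT_WEBHOOK_URL")
    if url:
        print(f"would POST to {url}: {message}")


def run_pipeline(
    steps: list[list[str]],
    alert: Callable[[str], None] = post_alert,
) -> bool:
    """Run each step in order; stop and alert on the first failure."""
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            alert(
                f"pipeline step failed: {' '.join(cmd)} "
                f"(exit {result.returncode})"
            )
            return False
    return True
```

In the real supervisor the steps would correspond to `extract_cot`, `extract_prices`, and `extract_ice`, followed by the SQLMesh transform; making the alert sender injectable keeps the sequencing logic testable without a live webhook.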
### CI/CD Pipeline

GitLab CI runs on every push to `master`:

- Lint: `ruff check`
- Test: `pytest` + SQLMesh tests
- Deploy: updates supervisor infrastructure and bootstraps if needed

No build artifacts: the supervisor pulls code directly from git.
## Architecture Principles
- Simplicity First - Avoid unnecessary abstractions
- Data-Oriented Design - Identify data by content, not metadata
- Cost Optimization - Ephemeral workers, minimal always-on infrastructure
- Inspectable - Easy to understand, test locally, and debug
## Resources

- Architecture Plans: see `.claude/plans/` for design decisions
- UV Docs: https://docs.astral.sh/uv/
- SQLMesh Docs: https://sqlmesh.readthedocs.io/