Simplify SQLMesh to use single prod gateway with virtual environments

- Remove dev gateway (local DuckDB file no longer needed)
- Single prod gateway connects to R2 Iceberg catalog
- Use virtual environments for dev isolation (e.g., dev_<username>)
- Update CLAUDE.md with new workflow and environment strategy
- Create comprehensive transform/sqlmesh_materia/README.md

Benefits:
- Simpler configuration (one gateway instead of two)
- All environments use same R2 Iceberg catalog
- SQLMesh handles environment isolation automatically
- No need to maintain local 13GB materia_dev.db file
- before_all hooks only run for prod gateway (no conditional logic needed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Commit: d2352c1876 (parent 6536724e00)
Author: Deeman
Date: 2025-10-13 21:47:04 +02:00
3 changed files with 121 additions and 29 deletions


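For context, the simplified single-gateway setup this commit describes might look roughly like the sketch below. Field names follow SQLMesh's DuckDB connection config; everything here is illustrative, not copied from the repo:

```yaml
# transform/sqlmesh_materia/config.yaml (sketch, not the actual file)
gateways:
  prod:
    connection:
      type: duckdb          # in-memory by default; no local .db file
      extensions:
        - httpfs
        - iceberg

default_gateway: prod

model_defaults:
  dialect: duckdb
```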
@@ -55,8 +55,11 @@ SQLMesh project implementing a layered data architecture.
 ```bash
 cd transform/sqlmesh_materia
 # Plan changes (no prompts, auto-apply enabled in config)
-sqlmesh plan
+# Local development (creates virtual environment)
+sqlmesh plan dev_<username>
+# Production
+sqlmesh plan prod
 # Run tests
 sqlmesh test
@@ -76,10 +79,17 @@ sqlmesh ui
 **Configuration:**
 - Config: `transform/sqlmesh_materia/config.yaml`
-- Default gateway: `dev` (uses `materia_dev.db`)
-- Production gateway: `prod` (uses `materia_prod.db`)
+- Single gateway: `prod` (connects to R2 Iceberg catalog)
+- Uses virtual environments for dev isolation (e.g., `dev_deeman`)
+- Production uses the `prod` environment
 - Auto-apply enabled, no interactive prompts
-- DuckDB extensions: zipfs, httpfs, iceberg
+- DuckDB extensions: httpfs, iceberg
+
+**Environment Strategy:**
+- All environments connect to the same R2 Iceberg catalog
+- Dev environments (e.g., `dev_deeman`) are isolated virtual environments
+- SQLMesh manages environment isolation and table versioning
+- No local DuckDB files needed
 
 ### 3. Core Package (`src/materia/`)
 Currently minimal; main logic resides in workspace packages.
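The environment strategy above maps to a simple day-to-day loop. A sketch of the CLI workflow, using `dev_deeman` as the example environment name from the commit (this assumes SQLMesh is installed and the gateway is configured):

```bash
cd transform/sqlmesh_materia

# Plan and apply changes into an isolated virtual environment
sqlmesh plan dev_deeman

# Run the test suite
sqlmesh test

# When the change looks good, plan against prod to promote it
sqlmesh plan prod
```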
@@ -254,10 +264,10 @@ Supervisor: uv run materia pipeline run <pipeline>
 ```
 
 #### 5. Data Storage
-- **Dev**: Local DuckDB file (`materia_dev.db`)
-- **Prod**: DuckDB in-memory + Cloudflare R2 Data Catalog (Iceberg REST API)
+- **All environments**: DuckDB in-memory + Cloudflare R2 Data Catalog (Iceberg REST API)
 - ACID transactions on object storage
 - No persistent database on workers
+- Virtual environments for dev isolation (e.g., `dev_deeman`)
 
 **Execution Flow:**
 1. Supervisor loop wakes up every 15 minutes
@@ -299,14 +309,15 @@ Supervisor: uv run materia pipeline run <pipeline>
 - Leverage SQLMesh's built-in time macros (`@start_ds`, `@end_ds`)
 - Keep raw layer thin, push transformations to staging+
 
-## Database Location
+## Data Storage
 
-- **Dev database:** `materia_dev.db` (13GB, in project root)
-- **Prod database:** `materia_prod.db` (not yet created)
-
-Note: The dev database is large and should not be committed to git (.gitignore already configured).
+All data is stored in Cloudflare R2 Data Catalog (Apache Iceberg) via REST API:
+- **Production environment:** `prod`
+- **Dev environments:** `dev_<username>` (virtual environments)
+- SQLMesh manages environment isolation and table versioning
+- No local database files needed
 
 - We use a monorepo with uv workspaces
 - The pulumi env is called beanflows/prod
 - NEVER hardcode secrets in plaintext
 - Never add ssh keys to the git repo!
+- If there is a simpler, more direct solution and there is no other tradeoff, always choose the simpler solution