Fix cross-connection COPY and catalog/schema name collision in serving export

Three fixes:

1. Cross-connection COPY: DuckDB doesn't support referencing another connection's tables as src.serving.table. Replaced with Arrow as an intermediate: src reads to Arrow, then dst.register() + CREATE TABLE.
2. Catalog/schema name collision: naming the export file serving.duckdb made DuckDB assign the catalog name "serving", the same name as the schema we create inside it, so every serving.table query became ambiguous. Renamed to analytics.duckdb (catalog "analytics", schema "serving", no clash). SERVING_DUCKDB_PATH updated from serving.duckdb to analytics.duckdb in supervisor, service, bootstrap, dev_run.sh, .env.example, and docker-compose.
3. Temp file: use _export.duckdb (not serving.duckdb.tmp) to avoid the same catalog collision during the write phase.

Verified: 6 tables exported; serving.* queries work read-only.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
# App
APP_NAME=BeanFlows
SECRET_KEY=change-me-generate-a-real-secret
BASE_URL=http://localhost:5001
DEBUG=true
ADMIN_EMAILS=admin@beanflows.coffee

# Database
DATABASE_PATH=data/app.db
# DUCKDB_PATH points to the full pipeline DB (lakehouse.duckdb), used by SQLMesh and export_serving.
# SERVING_DUCKDB_PATH points to the serving-only export (analytics.duckdb), used by the web app.
# Run `uv run materia pipeline run export_serving` after each SQLMesh transform to populate it.
DUCKDB_PATH=../local.duckdb
SERVING_DUCKDB_PATH=../analytics.duckdb

# Auth
MAGIC_LINK_EXPIRY_MINUTES=15
SESSION_LIFETIME_DAYS=30

# Email (Resend)
RESEND_API_KEY=
EMAIL_FROM=hello@example.com

# Paddle
PADDLE_API_KEY=
PADDLE_WEBHOOK_SECRET=
PADDLE_PRICE_STARTER=
PADDLE_PRICE_PRO=

# Rate limiting
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_WINDOW=60

# Waitlist (set to true to enable waitlist gate on /auth/signup)
WAITLIST_MODE=false
RESEND_AUDIENCE_WAITLIST=