Fix dashboard error handling, settings billing route, update vision.md

- routes.py: return_exceptions=True on gather, log individual query failures
  with per-result defaults so one bad query doesn't blank the whole page
- settings.html: fix billing.portal → billing.manage (correct blueprint route)
- vision.md: update current state to February 2026, document shipped features
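The gather change in the first bullet can be sketched as follows. This is a standalone illustration, not the actual routes.py code; `dashboard_data` and the query names are hypothetical.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def dashboard_data(queries: dict, defaults: dict) -> dict:
    """Run dashboard queries concurrently. With return_exceptions=True a
    failed query comes back as an exception object instead of cancelling
    the whole batch; we log it and substitute that query's default, so one
    bad query doesn't blank the page."""
    results = await asyncio.gather(*queries.values(), return_exceptions=True)
    data = {}
    for name, result in zip(queries, results):
        if isinstance(result, BaseException):
            logger.warning("dashboard query %r failed: %r", name, result)
            data[name] = defaults[name]
        else:
            data[name] = result
    return data
```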

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
Deeman
2026-02-21 00:02:41 +01:00
parent 88e408b279
commit 4dcf1e7e84
3 changed files with 64 additions and 33 deletions


@@ -89,47 +89,63 @@ We move fast, ship incrementally, and prioritize value over vanity metrics.
- Avoid full table scans
- Pay only for what changed
## Current State (February 2026)
### What's Working
- USDA PSD Online extraction (2006-present, monthly archives)
- 4-layer SQLMesh pipeline (raw → staging → cleaned → serving)
- DuckDB backend (local dev + production lakehouse)
- Incremental-by-time-range models with deduplication
- Development environment with pre-commit hooks, linting, formatting
- **Web app (BeanFlows.coffee)** — Quart + HTMX, deployed via Docker
- Magic-link auth + signup with waitlist flow
- Coffee analytics dashboard: time series, top producers, stock-to-use trend, supply/demand balance, YoY change
- Country comparison view
- User settings + account management
- API key management (create, revoke, prefix display)
- Plan-based access control (free / starter / pro) with 5-year history cap on free tier
- Billing via Paddle (subscriptions + webhooks)
- Admin panel (users, waitlist, feedback, tasks)
- REST API with Bearer token auth, rate limiting (1000 req/hr), CSV export
- Feedback + waitlist capture
- GitLab CI pipeline (lint, test, build), regression tests for billing/auth/API
### What We Have
- Comprehensive commodity supply/demand data (USDA PSD, 2006-present)
- Established naming conventions and data quality patterns
- GitLab CI pipeline (lint, test, build)
- Documentation (CLAUDE.md, layer conventions)
- Full product pipeline: data → DB → API → web dashboard
- Paddle billing integration (Starter + Pro tiers)
- Working waitlist to capture early interest
## Roadmap
### Phase 1: Coffee Market Foundation (In Progress → ~70% done)
**Goal:** Build complete coffee analytics from supply to price
**Data Sources to Integrate:**
- ✅ USDA PSD Online (production, stocks, consumption)
- ⬜ CFTC COT data (trader positioning — weekly, Coffee C futures code 083731)
- ⬜ Coffee futures prices — KC=F via Yahoo Finance / yfinance, or Databento for tick-level
- ⬜ ICO (International Coffee Organization) data — trade volumes, consumption stats
- ⬜ ICE certified warehouse stocks (daily CSV from ICE Report Center — free)
- ⬜ Weather data for growing regions — ECMWF/Open-Meteo (free), Brazil frost alerts
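A COT extraction for the pipeline would mostly be a filter on CFTC's published report files by contract code. A minimal stdlib sketch, assuming the column name `CFTC_Contract_Market_Code` from CFTC's report layout (verify against the actual file before relying on it):

```python
import csv
import io

COFFEE_C = "083731"  # CFTC contract market code for Coffee C futures

def coffee_rows(cot_csv_text: str) -> list[dict]:
    """Return only the Coffee C rows from a COT report CSV."""
    reader = csv.DictReader(io.StringIO(cot_csv_text))
    return [
        row for row in reader
        if row["CFTC_Contract_Market_Code"].strip() == COFFEE_C
    ]
```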
**Features to Build:**
- ⬜ Historical price correlation analysis
- ⬜ Supply/demand balance modeling
- ⬜ Weather impact scoring
- ✅ Web dashboard (supply/demand, stock-to-use trend, YoY, country comparison)
- ✅ REST API with key auth, plan-based access, rate limiting
- ✅ CSV export
- ⬜ CFTC COT integration → trader sentiment indicators
- ⬜ Historical price data → price/supply correlation analysis
- ⬜ Python SDK (`pip install beanflows`) — critical for the quant analyst beachhead
- ⬜ Data methodology documentation page — P0 for trust (see strategy doc)
- ⬜ Parquet export endpoint
- ⬜ Example Jupyter notebooks (show how to pipe data into common models)
**Infrastructure:**
- ⬜ Cloudflare R2 for raw data storage (rclone sync is partly planned)
- ⬜ Automated daily pipeline on Hetzner (SQLMesh prod + cron)
- ⬜ Pipeline monitoring + alerting (failure notifications)
- ⬜ Published SLA for data freshness
### Phase 2: Product Market Fit
**Goal:** Validate with real traders, iterate on feedback
@@ -246,16 +262,28 @@ When making decisions, ask:
If the answer to any of these is "no," reconsider.
## Current Priorities (Q1 2026)
**Goal: Complete Phase 1 "whole product" and start beachhead outreach**
### Immediate (ship first):
1. **CFTC COT data** — extract weekly positioning data (CFTC code 083731), add to SQLMesh pipeline, expose via API. Completes the "USDA + CFTC" V1 promise from the strategy doc.
2. **Coffee futures price (KC=F)** — daily close via yfinance or Databento. Enables price/supply correlation in the dashboard. Core hook for trader interest.
3. **Data methodology page** — transparent docs for every field, every source, lineage. The #1 trust driver per the strategy doc. Required before outreach.
4. **Python SDK** (`pip install beanflows`) — one-line data access for quant analysts. The beachhead segment runs Python; this removes their biggest switching friction.
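One possible shape for the SDK surface, to make "one-line data access" concrete. Everything here is an assumption — the package name is planned but the endpoint paths, parameters, and method names are hypothetical, not a published API:

```python
import json
import urllib.parse
import urllib.request

class BeanFlows:
    """Hypothetical client sketch for the planned beanflows SDK."""

    def __init__(self, api_key: str, base_url: str = "https://beanflows.coffee/api/v1"):
        self.api_key = api_key
        self.base_url = base_url.rstrip("/")

    def _url(self, path: str, **params) -> str:
        query = urllib.parse.urlencode(params)
        return f"{self.base_url}{path}" + (f"?{query}" if query else "")

    def _get(self, path: str, **params) -> dict:
        req = urllib.request.Request(
            self._url(path, **params),
            headers={"Authorization": f"Bearer {self.api_key}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def supply_demand(self, country: str, start_year: int) -> dict:
        # e.g. BeanFlows("key").supply_demand("Brazil", 2015)
        return self._get("/coffee/psd", country=country, from_year=start_year)
```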
### Then (before Series A of customers):
5. **Automated daily pipeline** on Hetzner — cron + SQLMesh prod, with failure alerting
6. **Cloudflare R2** raw data backup + pipeline source
7. **Example Jupyter notebooks** — show before/after vs. manual WASDE workflow
8. **ICE warehouse stocks** — daily certified Arabica/Robusta inventory data (free from ICE Report Center)
### Business (parallel, not blocking):
- Start direct outreach to 20-30 named analysts at mid-size commodity funds
- Weekly "BeanFlows Coffee Data Brief" newsletter (content marketing + credibility signal)
- Identify 12 early beta users willing to give feedback
---
**Last Updated:** February 2026
**Next Review:** End of Q1 2026