Add Phase 1A-C + ICE warehouse stocks: prices, methodology, pipeline automation

Phase 1A — KC=F Coffee Futures Prices:
- New extract/coffee_prices/ package (yfinance): downloads KC=F daily OHLCV,
  stores as gzip CSV with SHA256-based idempotency
- SQLMesh models: raw/coffee_prices → foundation/fct_coffee_prices →
  serving/coffee_prices (with 20d/50d SMA, 52-week high/low, daily return %)
- Dashboard: 4 metric cards + dual-line chart (close, 20d MA, 50d MA)
- API: GET /commodities/<ticker>/prices
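The serving model's window metrics can be illustrated with a small pandas sketch. The real models are SQLMesh SQL, and the column names here (`date`, `close`) are illustrative assumptions, not the project's actual schema:

```python
import pandas as pd

def serving_price_metrics(prices: pd.DataFrame) -> pd.DataFrame:
    """Rolling metrics analogous to serving/coffee_prices.

    Expects one row per trading day with 'date' and 'close' columns,
    sorted ascending. Uses ~252 trading days as one year.
    """
    out = prices.sort_values("date").copy()
    out["sma_20d"] = out["close"].rolling(20, min_periods=1).mean()
    out["sma_50d"] = out["close"].rolling(50, min_periods=1).mean()
    out["high_52w"] = out["close"].rolling(252, min_periods=1).max()
    out["low_52w"] = out["close"].rolling(252, min_periods=1).min()
    # Daily return as a percentage of the prior close
    out["daily_return_pct"] = out["close"].pct_change() * 100
    return out
```

`min_periods=1` mirrors the common warehouse convention of emitting partial-window averages for the earliest rows rather than NULLs; the SQL equivalents may choose differently.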

Phase 1B — Data Methodology Page:
- New /methodology route with full-page template (base.html)
- 6 anchored sections: USDA PSD, CFTC COT, KC=F price, ICE warehouse stocks,
  data quality model, update schedule table
- "Methodology" link added to marketing footer

Phase 1C — Automated Pipeline:
- supervisor.sh updated: runs extract_cot, extract_prices, extract_ice in
  sequence before transform
- Webhook failure alerting via ALERT_WEBHOOK_URL env var (ntfy/Slack/Telegram)
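A minimal sketch of the webhook alerting path. Only the `ALERT_WEBHOOK_URL` variable comes from the commit; the function names and plain-text payload shape are assumptions (ntfy accepts a raw text POST body; Slack/Telegram endpoints would need their own payload formats):

```python
import os
import urllib.request

def build_alert(step: str, error: str) -> str:
    """Plain-text alert body describing which pipeline step failed."""
    return f"pipeline failure in {step}: {error}"

def send_alert(step: str, error: str) -> bool:
    """POST the alert to ALERT_WEBHOOK_URL; no-op if the var is unset."""
    url = os.getenv("ALERT_WEBHOOK_URL")
    if not url:
        return False
    req = urllib.request.Request(
        url,
        data=build_alert(step, error).encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return 200 <= resp.status < 300
```

Treating a missing `ALERT_WEBHOOK_URL` as a silent no-op keeps alerting strictly optional, so local runs work without any webhook configured.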

ICE Warehouse Stocks:
- New extract/ice_stocks/ package (niquests): normalizes ICE Report Center CSV
  to canonical schema, hash-based idempotency, soft-fail on 404 with guidance
- SQLMesh models: raw/ice_warehouse_stocks → foundation/fct_ice_warehouse_stocks
  → serving/ice_warehouse_stocks (30d avg, WoW change, 52w drawdown)
- Dashboard: 4 metric cards + line chart (certified bags + 30d avg)
- API: GET /commodities/<code>/stocks
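The stocks serving metrics follow the same rolling-window pattern; a hedged pandas sketch (the real model is SQLMesh SQL, and the `date`/`certified_bags` column names are illustrative, assuming one report row per trading day):

```python
import pandas as pd

def serving_stock_metrics(stocks: pd.DataFrame) -> pd.DataFrame:
    """Metrics analogous to serving/ice_warehouse_stocks.

    Assumes one row per report day (~5 per week), sorted ascending.
    """
    out = stocks.sort_values("date").copy()
    out["avg_30d"] = out["certified_bags"].rolling(30, min_periods=1).mean()
    # Week-over-week change: compare against the report 5 trading days back
    out["wow_change"] = out["certified_bags"].diff(5)
    # Drawdown from the trailing 52-week (~252 report days) peak, in percent
    peak_52w = out["certified_bags"].rolling(252, min_periods=1).max()
    out["drawdown_52w_pct"] = (out["certified_bags"] / peak_52w - 1.0) * 100
    return out
```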

Foundation:
- dim_commodity: added ticker (KC=F) and ice_stock_report_code (COFFEE-C) columns
- macros/__init__.py: added prices_glob() and ice_stocks_glob()
- pipelines.py: added extract_prices and extract_ice entries

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: Deeman
Date: 2026-02-21 11:41:43 +01:00
Commit: 67c048485b (parent: 2962bf5e3b)
25 changed files with 1350 additions and 6 deletions


@@ -0,0 +1,18 @@
[project]
name = "coffee_prices"
version = "0.1.0"
description = "KC=F Coffee C futures price extractor"
requires-python = ">=3.13"
dependencies = [
    "yfinance>=0.2.55",
]

[project.scripts]
extract_prices = "coffee_prices.execute:extract_coffee_prices"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/coffee_prices"]


@@ -0,0 +1,92 @@
"""Coffee C (KC=F) futures price extraction.
Downloads daily OHLCV data from Yahoo Finance via yfinance and stores as
gzip CSV in the landing directory. Uses SHA256 of CSV bytes as the
idempotency key — skips if a file with the same hash already exists.
Landing path: LANDING_DIR/prices/coffee_kc/{hash8}.csv.gzip
"""
import gzip
import hashlib
import io
import logging
import os
import pathlib
import sys
import yfinance as yf
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
logger = logging.getLogger("Coffee Prices Extractor")
LANDING_DIR = pathlib.Path(os.getenv("LANDING_DIR", "data/landing"))
TICKER = "KC=F"
DEST_SUBDIR = "prices/coffee_kc"
# yfinance raises on network issues; give it enough time for the full history
DOWNLOAD_TIMEOUT_SECONDS = 120
def extract_coffee_prices() -> None:
"""Download KC=F daily OHLCV history and store as gzip CSV.
Idempotent: computes SHA256 of CSV bytes, skips if already on disk.
On first run downloads full history (period='max'). On subsequent runs
the hash matches if no new trading days have closed since last run.
"""
logger.info(f"Downloading {TICKER} daily OHLCV from Yahoo Finance...")
ticker = yf.Ticker(TICKER)
df = ticker.history(period="max", interval="1d", auto_adjust=False, timeout=DOWNLOAD_TIMEOUT_SECONDS)
assert df is not None and len(df) > 0, f"yfinance returned empty DataFrame for {TICKER}"
# Reset index so Date becomes a plain column
df = df.reset_index()
# Keep standard OHLCV columns only; yfinance may return extra columns
keep_cols = [c for c in ["Date", "Open", "High", "Low", "Close", "Adj Close", "Volume"] if c in df.columns]
df = df[keep_cols]
# Normalize Date to ISO string for CSV stability across timezones
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
# Serialize to CSV bytes
csv_buf = io.StringIO()
df.to_csv(csv_buf, index=False)
csv_bytes = csv_buf.getvalue().encode("utf-8")
assert len(csv_bytes) > 0, "CSV serialization produced empty output"
# Hash-based idempotency key (first 8 hex chars of SHA256)
sha256 = hashlib.sha256(csv_bytes).hexdigest()
etag = sha256[:8]
dest_dir = LANDING_DIR / DEST_SUBDIR
local_file = dest_dir / f"{etag}.csv.gzip"
if local_file.exists():
logger.info(f"File {local_file.name} already exists — no new data, skipping")
return
# Compress and write
dest_dir.mkdir(parents=True, exist_ok=True)
compressed = gzip.compress(csv_bytes)
local_file.write_bytes(compressed)
assert local_file.exists(), f"File was not written: {local_file}"
assert local_file.stat().st_size > 0, f"Written file is empty: {local_file}"
logger.info(
f"Stored {local_file} ({local_file.stat().st_size:,} bytes, {len(df):,} rows)"
)
if __name__ == "__main__":
extract_coffee_prices()