refactor migration system: single source of truth via replay

Eliminated dual-maintenance of schema.sql + versioned migrations.
All databases (fresh and existing) now replay migrations in order
starting from 0000_initial_schema.py. Removed _is_fresh_db() and
the fresh-DB fast-path that skipped migration execution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Deeman
Date: 2026-02-19 00:23:28 +01:00
parent 0b8350c770
commit 07c7e61049
6 changed files with 658 additions and 655 deletions


@@ -6,6 +6,92 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [Unreleased]
### Changed
- **Migration system: single source of truth** — eliminated dual-maintenance
of `schema.sql` + versioned migrations; all databases (fresh and existing)
now replay migrations in order starting from `0000_initial_schema.py`;
removed `schema.sql`, `_is_fresh_db()`, and the fresh-DB fast-path that
skipped migration execution; `migrate()` accepts an optional `db_path`
parameter for direct use in tests; test fixtures use cached migration replay
instead of loading `schema.sql` directly; removed fragile `_old_schema_sql()`
test helper and `TestMigration0001` class; template repo updated to match
(deleted `0001_roles_and_billing_customers.py`, projects own their migrations)
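The "cached migration replay" fixture mentioned above could look like the sketch below: run the full replay once into a template database, then hand each test a cheap file copy. This is an illustrative sketch only; `migrated_template`, `fresh_db`, and the module layout are hypothetical names, not the project's actual conftest.

```python
# Sketch (hypothetical names): replay migrations once, copy the file per test.
import shutil
import sqlite3
import tempfile
from pathlib import Path

_template = None  # built once per test session


def migrated_template(migrate):
    """Build the fully migrated template DB on first use, then reuse it."""
    global _template
    if _template is None:
        _template = Path(tempfile.mkdtemp()) / "template.db"
        migrate(db_path=str(_template))  # full replay, paid exactly once
    return _template


def fresh_db(migrate):
    """Per-test connection to a cheap copy of the migrated template."""
    dst = Path(tempfile.mkdtemp()) / "test.db"
    shutil.copy(migrated_template(migrate), dst)
    return sqlite3.connect(dst)
```

Because only the first call pays the replay cost, the test suite stays fast even as the number of migrations grows.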
- **Design system: Bricolage Grotesque + DM Sans** — replaced Inter with
Bricolage Grotesque (display headings) and DM Sans (body text); added
`--font-display` theme variable; headings use display font via
`font-family: var(--font-display)` in base layer; added `--color-forest`
(#064E3B) to theme palette
- **Glass navbar** — replaced opaque white navbar with semi-transparent
`backdrop-filter: blur(14px)` frosted glass effect
- **Landing page: dark hero** — navy background with radial blue glow,
white text, green badge on dark, white ROI calculator card with stronger
shadow; hero section is now full-width outside the container
- **Landing page: journey timeline** — replaced 5 left-border cards with
numbered step track (01-05) with connecting line, active/upcoming states;
5-column CSS grid on desktop, switching to a horizontal layout on mobile
- **Landing page: dark CTA card** — replaced plain white CTA section with
rounded navy card with noise texture and white inverted button
- **Directory card tiers** — pro cards get stronger green left border +
subtle box-shadow glow and 48px logo; featured badges more prominent with
box-shadow; free/unclaimed cards more visibly muted (lower opacity, lighter
border)
- **Supplier dashboard sidebar icons** — added inline SVG icons (chart,
inbox, building, rocket) to sidebar navigation links
- **Supplier dashboard lead cards** — added heat-color left borders
(red/amber/gray by heat score) on `.lf-card`
### Added
- **Admin sidebar navigation** — new `base_admin.html` template with
persistent sidebar (Overview, Leads, Suppliers, Users, Content, System
sections); Heroicons inline SVGs for each nav item; active state via
`{% set admin_page %}` in child templates; mobile: horizontal scroll nav;
all 20 admin templates now extend `base_admin.html`
- **Admin dashboard section labels** — stat card groups labeled "Lead
Funnel" and "Supplier Funnel" with color-coded left borders (blue for
leads, green for suppliers)
### Fixed
- **Hardcoded Inter on supplier unlock button** — `.lf-unlock-btn` used
`font-family: 'Inter'`; changed to `inherit` so it picks up DM Sans
### Changed
- **Admin auth: password → RBAC** — replaced `ADMIN_PASSWORD` env var and
session-based password login with role-based access control; admin access is
now granted via `ADMIN_EMAILS` env var (comma-separated); on login/dev-login,
matching emails auto-receive the `admin` role; removed `/admin/login` and
`/admin/logout` routes, `admin_required` decorator, and `login.html` template;
all admin routes now use `@role_required("admin")` from `auth/routes.py`
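The decorator-based check described above can be sketched as follows. This is a minimal, framework-free illustration, not the real `auth/routes.py` implementation; `get_current_roles` stands in for whatever roles `load_user()` attaches to the request context.

```python
# Sketch of a role_required decorator; get_current_roles() is a stand-in
# for the roles loaded onto the request context by load_user().
import functools


class Forbidden(Exception):
    """Raised when the current user lacks every required role."""


def get_current_roles():
    # Hypothetical accessor; a real app would read g.user's roles here.
    return getattr(get_current_roles, "roles", set())


def role_required(*roles):
    def decorator(view):
        @functools.wraps(view)
        def wrapper(*args, **kwargs):
            if not set(roles) & get_current_roles():
                raise Forbidden(f"requires one of: {roles}")
            return view(*args, **kwargs)
        return wrapper
    return decorator


@role_required("admin")
def admin_dashboard():
    return "ok"
```

In the real app the decorator would return a 403 response rather than raise, but the role-intersection check is the core of it.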
- **Billing: separated billing identity from subscriptions** — new
`billing_customers` table stores `provider_customer_id` (was on
`subscriptions.paddle_customer_id`); subscriptions table renamed
`paddle_subscription_id` → `provider_subscription_id` and dropped `UNIQUE`
constraint on `user_id` (allows multiple subscriptions per user);
`upsert_subscription` now finds existing rows by `provider_subscription_id`
instead of `user_id`; webhook handler calls `upsert_billing_customer()` for
all subscription events
- **Eager-loaded user context** — `load_user()` now JOINs `billing_customers`,
`user_roles`, and latest subscription in a single query; adds `g.subscription`
and `is_admin` template context variable (replaces `session.get('is_admin')`)
### Added
- **RBAC decorators** — `role_required(*roles)`, `subscription_required(plans,
allowed)`, `grant_role()`, `revoke_role()`, `ensure_admin_role()` in
`auth/routes.py`
- **`user_roles` table** — stores user-role pairs with `UNIQUE(user_id, role)`
- **`billing_customers` table** — stores provider customer ID per user
- **`ADMIN_EMAILS` config** — parsed from comma-separated env var in `core.py`
- **Migration 0011** — adds `user_roles` and `billing_customers` tables,
migrates `paddle_customer_id` data, recreates subscriptions table with
`provider_subscription_id` column and no `UNIQUE` on `user_id`
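SQLite cannot drop a `UNIQUE` constraint with `ALTER TABLE`, which is why migration 0011 has to recreate the table. A sketch of the standard rebuild pattern (column list abbreviated; the real migration copies every column):

```python
# Standard SQLite table rebuild: create new shape, copy rows, swap names.
import sqlite3


def drop_unique_on_user_id(conn):
    conn.executescript("""
        CREATE TABLE subscriptions_new (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id INTEGER NOT NULL REFERENCES users(id),  -- UNIQUE removed
            plan TEXT NOT NULL DEFAULT 'free',
            provider_subscription_id TEXT                   -- renamed column
        );
        INSERT INTO subscriptions_new (id, user_id, plan, provider_subscription_id)
            SELECT id, user_id, plan, paddle_subscription_id FROM subscriptions;
        DROP TABLE subscriptions;
        ALTER TABLE subscriptions_new RENAME TO subscriptions;
        CREATE INDEX IF NOT EXISTS idx_subscriptions_user ON subscriptions(user_id);
    """)
```

After the rebuild, inserting a second subscription for the same user no longer violates a constraint.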
### Removed
- `ADMIN_PASSWORD` env var and password-based admin authentication
- `/admin/login` and `/admin/logout` routes
- `admin/templates/admin/login.html` template
- `admin_required` decorator (replaced by `role_required("admin")`)
- `subscription_required` from `billing/routes.py` (replaced by version in
`auth/routes.py` that reads from `g.subscription`)
### Fixed
- **Webhook crash on null `custom_data`** — Paddle sends `"custom_data": null`
on lifecycle events (e.g. `subscription.updated`); `.get("custom_data", {})`
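The pitfall behind this fix: `dict.get` only falls back to its default when the key is *missing*, not when it is present with an explicit `None`. The usual guard is `or {}`:

```python
# A key present with an explicit null defeats dict.get's default.
event = {"event_type": "subscription.updated", "custom_data": None}

broken = event.get("custom_data", {})  # default unused: key exists, value is None
safe = event.get("custom_data") or {}  # {} for both missing and null
```

The `or {}` form yields an empty dict for both the missing-key and explicit-null cases, so downstream `.get()` calls on it are safe.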


@@ -1,26 +1,18 @@
 """
 Sequential migration runner for Padelnomics.
-Manages SQLite schema evolution with two paths: fresh databases get the
-full schema from schema.sql in one shot; existing databases get incremental
-migrations applied in order.
+Manages SQLite schema evolution by replaying migrations in order.
+All databases — fresh and existing — go through the same path.
 Algorithm
 ---------
 1. Connect to the SQLite database (create file if missing).
 2. Set WAL journal mode and enable foreign keys.
-3. Check whether the DB is fresh (no application tables at all).
-4. Execute schema.sql — all statements use CREATE IF NOT EXISTS, so this
-   is a no-op on existing databases that already have the tables.
-5. Discover version files in versions/ matching NNNN_*.py.
-6. Diff discovered versions against the _migrations tracking table.
-7. Choose a path:
-   - **Fresh DB**: record every version as applied *without* executing it,
-     because schema.sql already contains the final schema state.
-   - **Existing DB with pending versions**: import each pending module and
-     call its ``up(conn)`` function, then record it in _migrations.
-   - **Up-to-date DB**: no-op, print confirmation.
-8. Commit the transaction and print a summary with table names.
+3. Create the _migrations tracking table if it doesn't exist.
+4. Discover version files in versions/ matching NNNN_*.py.
+5. Diff discovered versions against the _migrations tracking table.
+6. Run pending migrations in order, record each in _migrations.
+7. Commit the transaction and print a summary with table names.
 Adding a new migration
 ----------------------
@@ -28,15 +20,12 @@ Adding a new migration
    function that receives an *uncommitted* ``sqlite3.Connection``.
    The runner commits after all pending migrations succeed (batch
    atomicity), so do NOT call ``conn.commit()`` inside ``up()``.
-2. Update ``schema.sql`` so it reflects the final state *after* the
-   migration. Fresh databases must end up identical to migrated ones.
-3. Keep both in sync — schema.sql is the single source of truth for
-   what a brand-new database looks like.
+2. Use IF NOT EXISTS / IF EXISTS guards for idempotency.
 Design decisions
 ----------------
-- **schema.sql as source of truth**: Fresh deploys get the full schema
-  instantly without replaying every historical migration.
+- **Single code path**: Fresh and existing databases both replay
+  migrations. 0000_initial_schema.py is the baseline.
 - **Sync sqlite3, not aiosqlite**: Migrations run at startup *before*
   the async event loop, so we use the stdlib sqlite3 module directly.
 - **up(conn) receives an uncommitted connection**: All pending migrations
@@ -72,28 +61,24 @@ def _discover_versions():
     return versions
-def _is_fresh_db(conn):
-    """A DB is fresh if it has no application tables at all."""
-    row = conn.execute(
-        "SELECT 1 FROM sqlite_master WHERE type='table'"
-        " AND name NOT LIKE 'sqlite_%'"
-    ).fetchone()
-    return row is None
-def migrate():
-    db_path = os.getenv("DATABASE_PATH", "data/app.db")
+def migrate(db_path=None):
+    if db_path is None:
+        db_path = os.getenv("DATABASE_PATH", "data/app.db")
     Path(db_path).parent.mkdir(parents=True, exist_ok=True)
     conn = sqlite3.connect(db_path)
     conn.execute("PRAGMA journal_mode=WAL")
     conn.execute("PRAGMA foreign_keys=ON")
-    is_fresh = _is_fresh_db(conn)
-    # schema.sql is always idempotent (CREATE IF NOT EXISTS)
-    schema = (Path(__file__).parent / "schema.sql").read_text()
-    conn.executescript(schema)
+    # Ensure tracking table exists before anything else
+    conn.execute("""
+        CREATE TABLE IF NOT EXISTS _migrations (
+            id INTEGER PRIMARY KEY AUTOINCREMENT,
+            name TEXT UNIQUE NOT NULL,
+            applied_at TEXT NOT NULL DEFAULT (datetime('now'))
+        )
+    """)
+    conn.commit()
     versions = _discover_versions()
     applied = {
@@ -102,16 +87,7 @@ def migrate():
     }
     pending = [v for v in versions if v not in applied]
-    if is_fresh:
-        # Fresh DB — schema.sql already created final schema.
-        # Record all versions as applied without executing them.
-        for name in pending:
-            conn.execute("INSERT INTO _migrations (name) VALUES (?)", (name,))
-        conn.commit()
-        print(f"✓ Fresh database initialised: {db_path}")
-        if pending:
-            print(f"  Recorded {len(pending)} migration(s) as already applied")
-    elif pending:
+    if pending:
         for name in pending:
             print(f"  Applying {name}...")
             mod = importlib.import_module(
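The batch atomicity contract from the docstring (every `up()` runs on one uncommitted connection; one failure discards the whole batch) can be demonstrated in isolation. This is a toy sketch, not the runner itself: it uses explicit `BEGIN`/`COMMIT` with `isolation_level=None` so the transaction boundaries are visible, and the migration functions are made up.

```python
# Toy demonstration of batch atomicity across migrations.
import sqlite3


def up_0001(conn):
    conn.execute("CREATE TABLE a (id INTEGER)")


def up_0002_broken(conn):
    conn.execute("CREATE TABLE b (id INTEGER)")
    raise RuntimeError("migration bug")


def run_batch(conn, ups):
    conn.execute("BEGIN")
    try:
        for up in ups:
            up(conn)              # up() never commits itself
        conn.execute("COMMIT")    # one commit for the whole batch
    except Exception:
        conn.execute("ROLLBACK")  # partial work from the batch is discarded
        raise


conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
```

If `up_0002_broken` fails, the rollback also removes table `a` created by the earlier migration, so the database never ends up between versions.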


@@ -1,417 +0,0 @@
-- Padelnomics Database Schema
-- Run with: python -m padelnomics.migrations.migrate
-- Migration tracking
CREATE TABLE IF NOT EXISTS _migrations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL,
applied_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Users
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
email TEXT UNIQUE NOT NULL,
name TEXT,
created_at TEXT NOT NULL,
updated_at TEXT,
last_login_at TEXT,
deleted_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
CREATE INDEX IF NOT EXISTS idx_users_deleted ON users(deleted_at);
-- Auth Tokens (magic links)
CREATE TABLE IF NOT EXISTS auth_tokens (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
token TEXT UNIQUE NOT NULL,
expires_at TEXT NOT NULL,
used_at TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_auth_tokens_token ON auth_tokens(token);
CREATE INDEX IF NOT EXISTS idx_auth_tokens_user ON auth_tokens(user_id);
-- Subscriptions
CREATE TABLE IF NOT EXISTS subscriptions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL UNIQUE REFERENCES users(id),
plan TEXT NOT NULL DEFAULT 'free',
status TEXT NOT NULL DEFAULT 'free',
paddle_customer_id TEXT,
paddle_subscription_id TEXT,
current_period_end TEXT,
created_at TEXT NOT NULL,
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_subscriptions_user ON subscriptions(user_id);
CREATE INDEX IF NOT EXISTS idx_subscriptions_provider ON subscriptions(paddle_subscription_id);
-- API Keys
CREATE TABLE IF NOT EXISTS api_keys (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
name TEXT NOT NULL,
key_hash TEXT UNIQUE NOT NULL,
key_prefix TEXT NOT NULL,
scopes TEXT DEFAULT 'read',
created_at TEXT NOT NULL,
last_used_at TEXT,
deleted_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_api_keys_hash ON api_keys(key_hash);
CREATE INDEX IF NOT EXISTS idx_api_keys_user ON api_keys(user_id);
-- API Request Log
CREATE TABLE IF NOT EXISTS api_requests (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
endpoint TEXT NOT NULL,
method TEXT NOT NULL,
created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_api_requests_user ON api_requests(user_id);
CREATE INDEX IF NOT EXISTS idx_api_requests_date ON api_requests(created_at);
-- Rate Limits
CREATE TABLE IF NOT EXISTS rate_limits (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT NOT NULL,
timestamp TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_rate_limits_key ON rate_limits(key, timestamp);
-- Background Tasks
CREATE TABLE IF NOT EXISTS tasks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_name TEXT NOT NULL,
payload TEXT,
status TEXT NOT NULL DEFAULT 'pending',
run_at TEXT NOT NULL,
retries INTEGER DEFAULT 0,
error TEXT,
created_at TEXT NOT NULL,
completed_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status, run_at);
-- Scenarios (core domain entity)
CREATE TABLE IF NOT EXISTS scenarios (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
name TEXT NOT NULL DEFAULT 'Untitled Scenario',
state_json TEXT NOT NULL,
location TEXT,
is_default INTEGER DEFAULT 0,
created_at TEXT NOT NULL,
updated_at TEXT,
deleted_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_scenarios_user ON scenarios(user_id);
-- Lead requests (when user wants supplier quotes or financing)
CREATE TABLE IF NOT EXISTS lead_requests (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id),
lead_type TEXT NOT NULL,
scenario_id INTEGER REFERENCES scenarios(id),
location TEXT,
court_count INTEGER,
budget_estimate INTEGER,
message TEXT,
status TEXT DEFAULT 'new',
created_at TEXT NOT NULL,
-- Phase 0: expanded quote qualification fields
facility_type TEXT,
glass_type TEXT,
lighting_type TEXT,
build_context TEXT,
country TEXT,
timeline TEXT,
location_status TEXT,
financing_status TEXT,
wants_financing_help INTEGER DEFAULT 0,
decision_process TEXT,
previous_supplier_contact TEXT,
services_needed TEXT,
additional_info TEXT,
contact_name TEXT,
contact_email TEXT,
contact_phone TEXT,
contact_company TEXT,
stakeholder_type TEXT,
heat_score TEXT DEFAULT 'cool',
verified_at TEXT,
-- Phase 1: credit cost and unlock tracking
credit_cost INTEGER,
unlock_count INTEGER NOT NULL DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_leads_status ON lead_requests(status);
CREATE INDEX IF NOT EXISTS idx_leads_heat ON lead_requests(heat_score);
-- Suppliers directory (seeded with unclaimed listings)
CREATE TABLE IF NOT EXISTS suppliers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
country_code TEXT NOT NULL,
city TEXT,
region TEXT NOT NULL,
website TEXT,
description TEXT,
category TEXT NOT NULL,
contact TEXT,
claimed_at TEXT,
claimed_by INTEGER REFERENCES users(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
tier TEXT NOT NULL DEFAULT 'free',
logo_url TEXT,
is_verified INTEGER NOT NULL DEFAULT 0,
highlight INTEGER NOT NULL DEFAULT 0,
sticky_until TEXT,
sticky_country TEXT,
-- Phase 1: expanded supplier profile and credits
service_categories TEXT,
service_area TEXT,
years_in_business INTEGER,
project_count INTEGER,
short_description TEXT,
long_description TEXT,
contact_name TEXT,
contact_email TEXT,
contact_phone TEXT,
credit_balance INTEGER NOT NULL DEFAULT 0,
monthly_credits INTEGER NOT NULL DEFAULT 0,
last_credit_refill TEXT,
-- Phase 2: editable profile fields
logo_file TEXT,
tagline TEXT
);
CREATE INDEX IF NOT EXISTS idx_suppliers_country ON suppliers(country_code);
CREATE INDEX IF NOT EXISTS idx_suppliers_category ON suppliers(category);
CREATE INDEX IF NOT EXISTS idx_suppliers_slug ON suppliers(slug);
-- FTS5 full-text search for suppliers
CREATE VIRTUAL TABLE IF NOT EXISTS suppliers_fts USING fts5(
name, description, city, country_code, category,
content='suppliers', content_rowid='id'
);
-- Keep FTS in sync with suppliers table
CREATE TRIGGER IF NOT EXISTS suppliers_ai AFTER INSERT ON suppliers BEGIN
INSERT INTO suppliers_fts(rowid, name, description, city, country_code, category)
VALUES (new.id, new.name, new.description, new.city, new.country_code, new.category);
END;
CREATE TRIGGER IF NOT EXISTS suppliers_ad AFTER DELETE ON suppliers BEGIN
INSERT INTO suppliers_fts(suppliers_fts, rowid, name, description, city, country_code, category)
VALUES ('delete', old.id, old.name, old.description, old.city, old.country_code, old.category);
END;
CREATE TRIGGER IF NOT EXISTS suppliers_au AFTER UPDATE ON suppliers BEGIN
INSERT INTO suppliers_fts(suppliers_fts, rowid, name, description, city, country_code, category)
VALUES ('delete', old.id, old.name, old.description, old.city, old.country_code, old.category);
INSERT INTO suppliers_fts(rowid, name, description, city, country_code, category)
VALUES (new.id, new.name, new.description, new.city, new.country_code, new.category);
END;
-- Credit ledger (source of truth for all credit movements)
CREATE TABLE IF NOT EXISTS credit_ledger (
id INTEGER PRIMARY KEY AUTOINCREMENT,
supplier_id INTEGER NOT NULL REFERENCES suppliers(id),
delta INTEGER NOT NULL,
balance_after INTEGER NOT NULL,
event_type TEXT NOT NULL,
reference_id INTEGER,
note TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_credit_ledger_supplier ON credit_ledger(supplier_id);
-- Lead forwards (which supplier unlocked which lead)
CREATE TABLE IF NOT EXISTS lead_forwards (
id INTEGER PRIMARY KEY AUTOINCREMENT,
lead_id INTEGER NOT NULL REFERENCES lead_requests(id),
supplier_id INTEGER NOT NULL REFERENCES suppliers(id),
credit_cost INTEGER NOT NULL,
status TEXT NOT NULL DEFAULT 'sent',
email_sent_at TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(lead_id, supplier_id)
);
CREATE INDEX IF NOT EXISTS idx_lead_forwards_lead ON lead_forwards(lead_id);
CREATE INDEX IF NOT EXISTS idx_lead_forwards_supplier ON lead_forwards(supplier_id);
-- Supplier boost subscriptions/purchases
CREATE TABLE IF NOT EXISTS supplier_boosts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
supplier_id INTEGER NOT NULL REFERENCES suppliers(id),
boost_type TEXT NOT NULL,
paddle_subscription_id TEXT,
status TEXT NOT NULL DEFAULT 'active',
starts_at TEXT NOT NULL,
expires_at TEXT,
metadata TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_supplier_boosts_supplier ON supplier_boosts(supplier_id);
-- Paddle products (price IDs stored in DB, not env vars)
CREATE TABLE IF NOT EXISTS paddle_products (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT NOT NULL UNIQUE,
paddle_product_id TEXT NOT NULL,
paddle_price_id TEXT NOT NULL,
name TEXT NOT NULL,
price_cents INTEGER NOT NULL,
currency TEXT NOT NULL DEFAULT 'EUR',
billing_type TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Business plan PDF exports
CREATE TABLE IF NOT EXISTS business_plan_exports (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
scenario_id INTEGER NOT NULL REFERENCES scenarios(id),
paddle_transaction_id TEXT,
language TEXT NOT NULL DEFAULT 'en',
file_path TEXT,
status TEXT NOT NULL DEFAULT 'pending',
created_at TEXT NOT NULL DEFAULT (datetime('now')),
completed_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_bpe_user ON business_plan_exports(user_id);
CREATE INDEX IF NOT EXISTS idx_bpe_scenario ON business_plan_exports(scenario_id);
-- In-app feedback
CREATE TABLE IF NOT EXISTS feedback (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id),
page_url TEXT,
message TEXT NOT NULL,
is_read INTEGER NOT NULL DEFAULT 0,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- =============================================================================
-- Content / Programmatic SEO
-- =============================================================================
-- Published scenarios (generated financial widgets for articles)
CREATE TABLE IF NOT EXISTS published_scenarios (
id INTEGER PRIMARY KEY AUTOINCREMENT,
slug TEXT UNIQUE NOT NULL,
title TEXT NOT NULL,
subtitle TEXT,
location TEXT NOT NULL,
country TEXT NOT NULL,
venue_type TEXT NOT NULL DEFAULT 'indoor',
ownership TEXT NOT NULL DEFAULT 'rent',
court_config TEXT NOT NULL,
state_json TEXT NOT NULL,
calc_json TEXT NOT NULL,
template_data_id INTEGER REFERENCES template_data(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_pub_scenarios_slug ON published_scenarios(slug);
-- Article templates (content recipes)
CREATE TABLE IF NOT EXISTS article_templates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
content_type TEXT NOT NULL DEFAULT 'calculator',
input_schema TEXT NOT NULL,
url_pattern TEXT NOT NULL,
title_pattern TEXT NOT NULL,
meta_description_pattern TEXT,
body_template TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_article_templates_slug ON article_templates(slug);
-- Template data (per-city/region input rows)
CREATE TABLE IF NOT EXISTS template_data (
id INTEGER PRIMARY KEY AUTOINCREMENT,
template_id INTEGER NOT NULL REFERENCES article_templates(id),
data_json TEXT NOT NULL,
scenario_id INTEGER REFERENCES published_scenarios(id),
article_id INTEGER REFERENCES articles(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_template_data_template ON template_data(template_id);
-- Articles (generated or manual)
CREATE TABLE IF NOT EXISTS articles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
url_path TEXT UNIQUE NOT NULL,
slug TEXT UNIQUE NOT NULL,
title TEXT NOT NULL,
meta_description TEXT,
country TEXT,
region TEXT,
og_image_url TEXT,
status TEXT NOT NULL DEFAULT 'draft',
published_at TEXT,
template_data_id INTEGER REFERENCES template_data(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_articles_url_path ON articles(url_path);
CREATE INDEX IF NOT EXISTS idx_articles_slug ON articles(slug);
CREATE INDEX IF NOT EXISTS idx_articles_status ON articles(status, published_at);
-- FTS5 full-text search for articles
CREATE VIRTUAL TABLE IF NOT EXISTS articles_fts USING fts5(
title, meta_description, country, region,
content='articles', content_rowid='id'
);
-- Keep FTS in sync with articles table
CREATE TRIGGER IF NOT EXISTS articles_ai AFTER INSERT ON articles BEGIN
INSERT INTO articles_fts(rowid, title, meta_description, country, region)
VALUES (new.id, new.title, new.meta_description, new.country, new.region);
END;
CREATE TRIGGER IF NOT EXISTS articles_ad AFTER DELETE ON articles BEGIN
INSERT INTO articles_fts(articles_fts, rowid, title, meta_description, country, region)
VALUES ('delete', old.id, old.title, old.meta_description, old.country, old.region);
END;
CREATE TRIGGER IF NOT EXISTS articles_au AFTER UPDATE ON articles BEGIN
INSERT INTO articles_fts(articles_fts, rowid, title, meta_description, country, region)
VALUES ('delete', old.id, old.title, old.meta_description, old.country, old.region);
INSERT INTO articles_fts(rowid, title, meta_description, country, region)
VALUES (new.id, new.title, new.meta_description, new.country, new.region);
END;
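The trigger trio above is SQLite's external-content FTS5 pattern: with `content='suppliers'`, the index stores no copy of the rows, so every INSERT/UPDATE/DELETE on `suppliers` must be mirrored into `suppliers_fts` by hand. A cut-down, runnable demonstration of the same pattern (two columns instead of five, insert and delete triggers only):

```python
# External-content FTS5: triggers keep the index in sync with the base table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE suppliers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE VIRTUAL TABLE suppliers_fts USING fts5(
        name, city, content='suppliers', content_rowid='id'
    );
    CREATE TRIGGER suppliers_ai AFTER INSERT ON suppliers BEGIN
        INSERT INTO suppliers_fts(rowid, name, city)
        VALUES (new.id, new.name, new.city);
    END;
    CREATE TRIGGER suppliers_ad AFTER DELETE ON suppliers BEGIN
        -- 'delete' command removes the old row from the index
        INSERT INTO suppliers_fts(suppliers_fts, rowid, name, city)
        VALUES ('delete', old.id, old.name, old.city);
    END;
""")
conn.execute("INSERT INTO suppliers (name, city) VALUES ('CourtCo', 'Madrid')")
hits = conn.execute(
    "SELECT name FROM suppliers_fts WHERE suppliers_fts MATCH 'madrid'"
).fetchall()
```

An UPDATE trigger (as in the schema above) combines both: a `'delete'` of the old values followed by an insert of the new ones; skipping the delete step would leave stale entries matchable in the index.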


@@ -0,0 +1,431 @@
"""Initial schema baseline — all tables as of migration 0011."""
def up(conn):
conn.executescript("""
-- Users
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
email TEXT UNIQUE NOT NULL,
name TEXT,
created_at TEXT NOT NULL,
updated_at TEXT,
last_login_at TEXT,
deleted_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
CREATE INDEX IF NOT EXISTS idx_users_deleted ON users(deleted_at);
-- User Roles (RBAC)
CREATE TABLE IF NOT EXISTS user_roles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
role TEXT NOT NULL,
granted_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(user_id, role)
);
CREATE INDEX IF NOT EXISTS idx_user_roles_user ON user_roles(user_id);
CREATE INDEX IF NOT EXISTS idx_user_roles_role ON user_roles(role);
-- Billing Customers (provider customer ID, separate from subscriptions)
CREATE TABLE IF NOT EXISTS billing_customers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL UNIQUE REFERENCES users(id),
provider_customer_id TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_billing_customers_user ON billing_customers(user_id);
CREATE INDEX IF NOT EXISTS idx_billing_customers_provider ON billing_customers(provider_customer_id);
-- Auth Tokens (magic links)
CREATE TABLE IF NOT EXISTS auth_tokens (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
token TEXT UNIQUE NOT NULL,
expires_at TEXT NOT NULL,
used_at TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_auth_tokens_token ON auth_tokens(token);
CREATE INDEX IF NOT EXISTS idx_auth_tokens_user ON auth_tokens(user_id);
-- Subscriptions
CREATE TABLE IF NOT EXISTS subscriptions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
plan TEXT NOT NULL DEFAULT 'free',
status TEXT NOT NULL DEFAULT 'free',
provider_subscription_id TEXT,
current_period_end TEXT,
created_at TEXT NOT NULL,
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_subscriptions_user ON subscriptions(user_id);
CREATE INDEX IF NOT EXISTS idx_subscriptions_provider ON subscriptions(provider_subscription_id);
-- API Keys
CREATE TABLE IF NOT EXISTS api_keys (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
name TEXT NOT NULL,
key_hash TEXT UNIQUE NOT NULL,
key_prefix TEXT NOT NULL,
scopes TEXT DEFAULT 'read',
created_at TEXT NOT NULL,
last_used_at TEXT,
deleted_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_api_keys_hash ON api_keys(key_hash);
CREATE INDEX IF NOT EXISTS idx_api_keys_user ON api_keys(user_id);
-- API Request Log
CREATE TABLE IF NOT EXISTS api_requests (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
endpoint TEXT NOT NULL,
method TEXT NOT NULL,
created_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_api_requests_user ON api_requests(user_id);
CREATE INDEX IF NOT EXISTS idx_api_requests_date ON api_requests(created_at);
-- Rate Limits
CREATE TABLE IF NOT EXISTS rate_limits (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT NOT NULL,
timestamp TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_rate_limits_key ON rate_limits(key, timestamp);
-- Background Tasks
CREATE TABLE IF NOT EXISTS tasks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_name TEXT NOT NULL,
payload TEXT,
status TEXT NOT NULL DEFAULT 'pending',
run_at TEXT NOT NULL,
retries INTEGER DEFAULT 0,
error TEXT,
created_at TEXT NOT NULL,
completed_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status, run_at);
-- Scenarios (core domain entity)
CREATE TABLE IF NOT EXISTS scenarios (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
name TEXT NOT NULL DEFAULT 'Untitled Scenario',
state_json TEXT NOT NULL,
location TEXT,
is_default INTEGER DEFAULT 0,
created_at TEXT NOT NULL,
updated_at TEXT,
deleted_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_scenarios_user ON scenarios(user_id);
-- Lead requests (when user wants supplier quotes or financing)
CREATE TABLE IF NOT EXISTS lead_requests (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id),
lead_type TEXT NOT NULL,
scenario_id INTEGER REFERENCES scenarios(id),
location TEXT,
court_count INTEGER,
budget_estimate INTEGER,
message TEXT,
status TEXT DEFAULT 'new',
created_at TEXT NOT NULL,
-- Phase 0: expanded quote qualification fields
facility_type TEXT,
glass_type TEXT,
lighting_type TEXT,
build_context TEXT,
country TEXT,
timeline TEXT,
location_status TEXT,
financing_status TEXT,
wants_financing_help INTEGER DEFAULT 0,
decision_process TEXT,
previous_supplier_contact TEXT,
services_needed TEXT,
additional_info TEXT,
contact_name TEXT,
contact_email TEXT,
contact_phone TEXT,
contact_company TEXT,
stakeholder_type TEXT,
heat_score TEXT DEFAULT 'cool',
verified_at TEXT,
-- Phase 1: credit cost and unlock tracking
credit_cost INTEGER,
unlock_count INTEGER NOT NULL DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_leads_status ON lead_requests(status);
CREATE INDEX IF NOT EXISTS idx_leads_heat ON lead_requests(heat_score);
-- Suppliers directory (seeded with unclaimed listings)
CREATE TABLE IF NOT EXISTS suppliers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
country_code TEXT NOT NULL,
city TEXT,
region TEXT NOT NULL,
website TEXT,
description TEXT,
category TEXT NOT NULL,
contact TEXT,
claimed_at TEXT,
claimed_by INTEGER REFERENCES users(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
tier TEXT NOT NULL DEFAULT 'free',
logo_url TEXT,
is_verified INTEGER NOT NULL DEFAULT 0,
highlight INTEGER NOT NULL DEFAULT 0,
sticky_until TEXT,
sticky_country TEXT,
-- Phase 1: expanded supplier profile and credits
service_categories TEXT,
service_area TEXT,
years_in_business INTEGER,
project_count INTEGER,
short_description TEXT,
long_description TEXT,
contact_name TEXT,
contact_email TEXT,
contact_phone TEXT,
credit_balance INTEGER NOT NULL DEFAULT 0,
monthly_credits INTEGER NOT NULL DEFAULT 0,
last_credit_refill TEXT,
-- Phase 2: editable profile fields
logo_file TEXT,
tagline TEXT
);
CREATE INDEX IF NOT EXISTS idx_suppliers_country ON suppliers(country_code);
CREATE INDEX IF NOT EXISTS idx_suppliers_category ON suppliers(category);
CREATE INDEX IF NOT EXISTS idx_suppliers_slug ON suppliers(slug);
-- FTS5 full-text search for suppliers
CREATE VIRTUAL TABLE IF NOT EXISTS suppliers_fts USING fts5(
name, description, city, country_code, category,
content='suppliers', content_rowid='id'
);
-- Keep FTS in sync with suppliers table
CREATE TRIGGER IF NOT EXISTS suppliers_ai AFTER INSERT ON suppliers BEGIN
INSERT INTO suppliers_fts(rowid, name, description, city, country_code, category)
VALUES (new.id, new.name, new.description, new.city, new.country_code, new.category);
END;
CREATE TRIGGER IF NOT EXISTS suppliers_ad AFTER DELETE ON suppliers BEGIN
INSERT INTO suppliers_fts(suppliers_fts, rowid, name, description, city, country_code, category)
VALUES ('delete', old.id, old.name, old.description, old.city, old.country_code, old.category);
END;
CREATE TRIGGER IF NOT EXISTS suppliers_au AFTER UPDATE ON suppliers BEGIN
INSERT INTO suppliers_fts(suppliers_fts, rowid, name, description, city, country_code, category)
VALUES ('delete', old.id, old.name, old.description, old.city, old.country_code, old.category);
INSERT INTO suppliers_fts(rowid, name, description, city, country_code, category)
VALUES (new.id, new.name, new.description, new.city, new.country_code, new.category);
END;
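-- Illustrative query (an assumption, not part of the migration): with the
-- external-content FTS5 table above, a search joins back to suppliers by rowid:
--   SELECT s.*
--     FROM suppliers_fts
--     JOIN suppliers s ON s.id = suppliers_fts.rowid
--    WHERE suppliers_fts MATCH 'padel'
--    ORDER BY rank;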
-- Credit ledger (source of truth for all credit movements)
CREATE TABLE IF NOT EXISTS credit_ledger (
id INTEGER PRIMARY KEY AUTOINCREMENT,
supplier_id INTEGER NOT NULL REFERENCES suppliers(id),
delta INTEGER NOT NULL,
balance_after INTEGER NOT NULL,
event_type TEXT NOT NULL,
reference_id INTEGER,
note TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_credit_ledger_supplier ON credit_ledger(supplier_id);
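-- Illustrative write pattern (an assumption, not enforced by this schema):
-- every credit movement appends one row carrying the running balance, e.g.
--   INSERT INTO credit_ledger (supplier_id, delta, balance_after, event_type)
--   VALUES (1, -5, 20, 'lead_unlock');
-- so suppliers.credit_balance should always equal the latest balance_after.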
-- Lead forwards (which supplier unlocked which lead)
CREATE TABLE IF NOT EXISTS lead_forwards (
id INTEGER PRIMARY KEY AUTOINCREMENT,
lead_id INTEGER NOT NULL REFERENCES lead_requests(id),
supplier_id INTEGER NOT NULL REFERENCES suppliers(id),
credit_cost INTEGER NOT NULL,
status TEXT NOT NULL DEFAULT 'sent',
email_sent_at TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(lead_id, supplier_id)
);
CREATE INDEX IF NOT EXISTS idx_lead_forwards_lead ON lead_forwards(lead_id);
CREATE INDEX IF NOT EXISTS idx_lead_forwards_supplier ON lead_forwards(supplier_id);
-- Supplier boost subscriptions/purchases
CREATE TABLE IF NOT EXISTS supplier_boosts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
supplier_id INTEGER NOT NULL REFERENCES suppliers(id),
boost_type TEXT NOT NULL,
paddle_subscription_id TEXT,
status TEXT NOT NULL DEFAULT 'active',
starts_at TEXT NOT NULL,
expires_at TEXT,
metadata TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_supplier_boosts_supplier ON supplier_boosts(supplier_id);
-- Paddle products (price IDs stored in DB, not env vars)
CREATE TABLE IF NOT EXISTS paddle_products (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT NOT NULL UNIQUE,
paddle_product_id TEXT NOT NULL,
paddle_price_id TEXT NOT NULL,
name TEXT NOT NULL,
price_cents INTEGER NOT NULL,
currency TEXT NOT NULL DEFAULT 'EUR',
billing_type TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Business plan PDF exports
CREATE TABLE IF NOT EXISTS business_plan_exports (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
scenario_id INTEGER NOT NULL REFERENCES scenarios(id),
paddle_transaction_id TEXT,
language TEXT NOT NULL DEFAULT 'en',
file_path TEXT,
status TEXT NOT NULL DEFAULT 'pending',
created_at TEXT NOT NULL DEFAULT (datetime('now')),
completed_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_bpe_user ON business_plan_exports(user_id);
CREATE INDEX IF NOT EXISTS idx_bpe_scenario ON business_plan_exports(scenario_id);
-- In-app feedback
CREATE TABLE IF NOT EXISTS feedback (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id),
page_url TEXT,
message TEXT NOT NULL,
is_read INTEGER NOT NULL DEFAULT 0,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- =============================================================================
-- Content / Programmatic SEO
-- =============================================================================
-- Published scenarios (generated financial widgets for articles)
CREATE TABLE IF NOT EXISTS published_scenarios (
id INTEGER PRIMARY KEY AUTOINCREMENT,
slug TEXT UNIQUE NOT NULL,
title TEXT NOT NULL,
subtitle TEXT,
location TEXT NOT NULL,
country TEXT NOT NULL,
venue_type TEXT NOT NULL DEFAULT 'indoor',
ownership TEXT NOT NULL DEFAULT 'rent',
court_config TEXT NOT NULL,
state_json TEXT NOT NULL,
calc_json TEXT NOT NULL,
template_data_id INTEGER REFERENCES template_data(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_pub_scenarios_slug ON published_scenarios(slug);
-- Article templates (content recipes)
CREATE TABLE IF NOT EXISTS article_templates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
content_type TEXT NOT NULL DEFAULT 'calculator',
input_schema TEXT NOT NULL,
url_pattern TEXT NOT NULL,
title_pattern TEXT NOT NULL,
meta_description_pattern TEXT,
body_template TEXT NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_article_templates_slug ON article_templates(slug);
-- Template data (per-city/region input rows)
CREATE TABLE IF NOT EXISTS template_data (
id INTEGER PRIMARY KEY AUTOINCREMENT,
template_id INTEGER NOT NULL REFERENCES article_templates(id),
data_json TEXT NOT NULL,
scenario_id INTEGER REFERENCES published_scenarios(id),
article_id INTEGER REFERENCES articles(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_template_data_template ON template_data(template_id);
-- Articles (generated or manual)
CREATE TABLE IF NOT EXISTS articles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
url_path TEXT UNIQUE NOT NULL,
slug TEXT UNIQUE NOT NULL,
title TEXT NOT NULL,
meta_description TEXT,
country TEXT,
region TEXT,
og_image_url TEXT,
status TEXT NOT NULL DEFAULT 'draft',
published_at TEXT,
template_data_id INTEGER REFERENCES template_data(id),
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT
);
CREATE INDEX IF NOT EXISTS idx_articles_url_path ON articles(url_path);
CREATE INDEX IF NOT EXISTS idx_articles_slug ON articles(slug);
CREATE INDEX IF NOT EXISTS idx_articles_status ON articles(status, published_at);
-- FTS5 full-text search for articles
CREATE VIRTUAL TABLE IF NOT EXISTS articles_fts USING fts5(
title, meta_description, country, region,
content='articles', content_rowid='id'
);
-- Keep FTS in sync with articles table
CREATE TRIGGER IF NOT EXISTS articles_ai AFTER INSERT ON articles BEGIN
INSERT INTO articles_fts(rowid, title, meta_description, country, region)
VALUES (new.id, new.title, new.meta_description, new.country, new.region);
END;
CREATE TRIGGER IF NOT EXISTS articles_ad AFTER DELETE ON articles BEGIN
INSERT INTO articles_fts(articles_fts, rowid, title, meta_description, country, region)
VALUES ('delete', old.id, old.title, old.meta_description, old.country, old.region);
END;
CREATE TRIGGER IF NOT EXISTS articles_au AFTER UPDATE ON articles BEGIN
INSERT INTO articles_fts(articles_fts, rowid, title, meta_description, country, region)
VALUES ('delete', old.id, old.title, old.meta_description, old.country, old.region);
INSERT INTO articles_fts(rowid, title, meta_description, country, region)
VALUES (new.id, new.title, new.meta_description, new.country, new.region);
END;
""")


@@ -3,6 +3,8 @@ Shared test fixtures for the Padelnomics test suite.
 """
 import hashlib
 import hmac
+import sqlite3
+import tempfile
 import time
 from datetime import datetime
 from pathlib import Path
@@ -13,21 +15,44 @@ import pytest
 from padelnomics import core
 from padelnomics.app import create_app
+from padelnomics.migrations.migrate import migrate
 
-SCHEMA_PATH = Path(__file__).parent.parent / "src" / "padelnomics" / "migrations" / "schema.sql"
+_SCHEMA_CACHE = None
+
+
+def _get_schema_ddl():
+    """Run all migrations once against a temp DB and cache the resulting DDL."""
+    global _SCHEMA_CACHE
+    if _SCHEMA_CACHE is not None:
+        return _SCHEMA_CACHE
+    tmp_db = str(Path(tempfile.mkdtemp()) / "schema.db")
+    migrate(tmp_db)
+    tmp_conn = sqlite3.connect(tmp_db)
+    rows = tmp_conn.execute(
+        "SELECT sql FROM sqlite_master"
+        " WHERE sql IS NOT NULL"
+        " AND name NOT LIKE 'sqlite_%'"
+        " AND name NOT LIKE '%_fts_%'"  # FTS5 shadow tables (created by VIRTUAL TABLE)
+        " AND name != '_migrations'"
+        " ORDER BY rowid"
+    ).fetchall()
+    tmp_conn.close()
+    _SCHEMA_CACHE = ";\n".join(r[0] for r in rows) + ";"
+    return _SCHEMA_CACHE
+
 
 # ── Database ─────────────────────────────────────────────────
 
 @pytest.fixture
 async def db():
-    """In-memory SQLite with full schema, patches core._db."""
+    """In-memory SQLite with full schema from replaying migrations."""
+    schema_ddl = _get_schema_ddl()
     conn = await aiosqlite.connect(":memory:")
     conn.row_factory = aiosqlite.Row
     await conn.execute("PRAGMA foreign_keys=ON")
-    schema = SCHEMA_PATH.read_text()
-    await conn.executescript(schema)
+    await conn.executescript(schema_ddl)
     await conn.commit()
 
     original_db = core._db
@@ -91,17 +116,24 @@ def create_subscription(db):
         user_id: int,
         plan: str = "pro",
         status: str = "active",
-        paddle_customer_id: str = "ctm_test123",
-        paddle_subscription_id: str = "sub_test456",
+        provider_customer_id: str = "ctm_test123",
+        provider_subscription_id: str = "sub_test456",
         current_period_end: str = "2025-03-01T00:00:00Z",
     ) -> int:
         now = datetime.utcnow().isoformat()
+        # Create billing_customers record if provider_customer_id given
+        if provider_customer_id:
+            await db.execute(
+                """INSERT OR IGNORE INTO billing_customers
+                   (user_id, provider_customer_id, created_at) VALUES (?, ?, ?)""",
+                (user_id, provider_customer_id, now),
+            )
         async with db.execute(
             """INSERT INTO subscriptions
-               (user_id, plan, status, paddle_customer_id,
-                paddle_subscription_id, current_period_end, created_at, updated_at)
-               VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
-            (user_id, plan, status, paddle_customer_id, paddle_subscription_id,
+               (user_id, plan, status,
+                provider_subscription_id, current_period_end, created_at, updated_at)
+               VALUES (?, ?, ?, ?, ?, ?, ?)""",
+            (user_id, plan, status, provider_subscription_id,
              current_period_end, now, now),
         ) as cursor:
             sub_id = cursor.lastrowid


@@ -13,11 +13,8 @@ from unittest.mock import patch
 import pytest
 
-from padelnomics.migrations.migrate import _discover_versions, _is_fresh_db, migrate
+from padelnomics.migrations.migrate import _discover_versions, migrate
 
-SCHEMA_PATH = (
-    Path(__file__).parent.parent / "src" / "padelnomics" / "migrations" / "schema.sql"
-)
 VERSIONS_DIR = (
     Path(__file__).parent.parent / "src" / "padelnomics" / "migrations" / "versions"
 )
@@ -25,14 +22,6 @@ VERSIONS_DIR = (
 
 # ── Helpers ───────────────────────────────────────────────────
 
-def _old_schema_sql():
-    """Return schema.sql with paddle columns swapped back to lemonsqueezy."""
-    schema = SCHEMA_PATH.read_text()
-    schema = schema.replace("paddle_customer_id", "lemonsqueezy_customer_id")
-    schema = schema.replace("paddle_subscription_id", "lemonsqueezy_subscription_id")
-    return schema
-
 def _table_names(conn):
     """Return sorted list of user-visible table names."""
     rows = conn.execute(
@@ -49,11 +38,6 @@ def _column_names(conn, table):
 
 # ── Fixtures ──────────────────────────────────────────────────
 
-@pytest.fixture
-def schema_sql():
-    return SCHEMA_PATH.read_text()
-
 @pytest.fixture
 def fresh_db_path(tmp_path):
     """Path to a non-existent DB file."""
@@ -62,53 +46,37 @@ def fresh_db_path(tmp_path):
 
 @pytest.fixture
 def existing_db(tmp_path):
-    """DB with old lemonsqueezy column names and no _migrations table."""
+    """DB with 0000 baseline applied (simulates an existing production DB)."""
     db_path = str(tmp_path / "existing.db")
-    schema = _old_schema_sql()
-    # Remove the _migrations table DDL so this DB has no tracking
-    schema = re.sub(
-        r"CREATE TABLE IF NOT EXISTS _migrations\s*\([^)]+\);",
-        "",
-        schema,
-    )
     conn = sqlite3.connect(db_path)
-    conn.executescript(schema)
+    conn.execute("PRAGMA foreign_keys=ON")
+    # Create _migrations table and apply only 0000
+    conn.execute("""
+        CREATE TABLE IF NOT EXISTS _migrations (
+            id INTEGER PRIMARY KEY AUTOINCREMENT,
+            name TEXT UNIQUE NOT NULL,
+            applied_at TEXT NOT NULL DEFAULT (datetime('now'))
+        )
+    """)
+    mod_0000 = importlib.import_module(
+        "padelnomics.migrations.versions.0000_initial_schema"
+    )
+    mod_0000.up(conn)
+    conn.execute(
+        "INSERT INTO _migrations (name) VALUES (?)",
+        ("0000_initial_schema",),
+    )
     conn.commit()
     conn.close()
     return db_path
 
 @pytest.fixture
-def production_db(tmp_path, schema_sql):
-    """DB with current paddle columns but no _migrations records."""
-    db_path = str(tmp_path / "production.db")
-    # Remove the _migrations DDL so it simulates manual migration
-    schema = re.sub(
-        r"CREATE TABLE IF NOT EXISTS _migrations\s*\([^)]+\);",
-        "",
-        schema_sql,
-    )
-    conn = sqlite3.connect(db_path)
-    conn.executescript(schema)
-    conn.commit()
-    conn.close()
-    return db_path
-
-@pytest.fixture
-def up_to_date_db(tmp_path, schema_sql):
-    """DB with final schema and all migrations recorded."""
+def up_to_date_db(tmp_path):
+    """DB with all migrations applied via migrate()."""
     db_path = str(tmp_path / "uptodate.db")
-    conn = sqlite3.connect(db_path)
-    conn.executescript(schema_sql)
-    # Record all discovered versions as applied
-    for f in sorted(VERSIONS_DIR.iterdir()):
-        if re.match(r"^\d{4}_.+\.py$", f.name):
-            conn.execute(
-                "INSERT INTO _migrations (name) VALUES (?)", (f.stem,)
-            )
-    conn.commit()
-    conn.close()
+    migrate(db_path)
     return db_path
@@ -124,9 +92,8 @@ def mock_versions_dir(tmp_path):
 
 class TestFreshDatabase:
-    def test_creates_all_tables(self, fresh_db_path, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", fresh_db_path)
-        migrate()
+    def test_creates_all_tables(self, fresh_db_path):
+        migrate(fresh_db_path)
         conn = sqlite3.connect(fresh_db_path)
         tables = _table_names(conn)
         conn.close()
@@ -135,9 +102,8 @@ class TestFreshDatabase:
         assert "subscriptions" in tables
         assert "scenarios" in tables
 
-    def test_records_all_versions_as_applied(self, fresh_db_path, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", fresh_db_path)
-        migrate()
+    def test_records_all_versions_as_applied(self, fresh_db_path):
+        migrate(fresh_db_path)
         conn = sqlite3.connect(fresh_db_path)
         applied = {
             r[0] for r in conn.execute("SELECT name FROM _migrations").fetchall()
@@ -146,40 +112,40 @@ class TestFreshDatabase:
         versions = _discover_versions()
         assert applied == set(versions)
 
-    def test_does_not_call_import_module(self, fresh_db_path, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", fresh_db_path)
-        with patch("padelnomics.migrations.migrate.importlib.import_module") as mock_imp:
-            migrate()
-        mock_imp.assert_not_called()
-
-    def test_uses_paddle_column_names(self, fresh_db_path, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", fresh_db_path)
-        migrate()
+    def test_uses_provider_column_names(self, fresh_db_path):
+        migrate(fresh_db_path)
         conn = sqlite3.connect(fresh_db_path)
         cols = _column_names(conn, "subscriptions")
         conn.close()
-        assert "paddle_customer_id" in cols
-        assert "paddle_subscription_id" in cols
+        assert "provider_subscription_id" in cols
+        assert "paddle_customer_id" not in cols
         assert "lemonsqueezy_customer_id" not in cols
 
+    def test_creates_rbac_tables(self, fresh_db_path):
+        migrate(fresh_db_path)
+        conn = sqlite3.connect(fresh_db_path)
+        tables = _table_names(conn)
+        conn.close()
+        assert "user_roles" in tables
+        assert "billing_customers" in tables
+
 # ── TestExistingDatabase ──────────────────────────────────────
 
 class TestExistingDatabase:
-    def test_applies_pending_migration(self, existing_db, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", existing_db)
-        migrate()
+    def test_applies_pending_migrations(self, existing_db):
+        migrate(existing_db)
         conn = sqlite3.connect(existing_db)
-        cols = _column_names(conn, "subscriptions")
+        applied = {
+            r[0] for r in conn.execute("SELECT name FROM _migrations").fetchall()
+        }
         conn.close()
-        assert "paddle_customer_id" in cols
-        assert "paddle_subscription_id" in cols
-        assert "lemonsqueezy_customer_id" not in cols
+        versions = _discover_versions()
+        assert applied == set(versions)
 
-    def test_records_migration_with_timestamp(self, existing_db, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", existing_db)
-        migrate()
+    def test_records_migration_with_timestamp(self, existing_db):
+        migrate(existing_db)
         conn = sqlite3.connect(existing_db)
         row = conn.execute(
             "SELECT name, applied_at FROM _migrations WHERE name LIKE '0001%'"
@@ -194,16 +160,14 @@ class TestExistingDatabase:
 
 class TestUpToDateDatabase:
-    def test_noop_when_all_applied(self, up_to_date_db, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", up_to_date_db)
+    def test_noop_when_all_applied(self, up_to_date_db):
         with patch("padelnomics.migrations.migrate.importlib.import_module") as mock_imp:
-            migrate()
+            migrate(up_to_date_db)
         mock_imp.assert_not_called()
 
-    def test_no_duplicate_entries_on_rerun(self, up_to_date_db, monkeypatch):
-        monkeypatch.setenv("DATABASE_PATH", up_to_date_db)
-        migrate()
-        migrate()
+    def test_no_duplicate_entries_on_rerun(self, up_to_date_db):
+        migrate(up_to_date_db)
+        migrate(up_to_date_db)
         conn = sqlite3.connect(up_to_date_db)
         count = conn.execute("SELECT COUNT(*) FROM _migrations").fetchone()[0]
         conn.close()
@@ -214,21 +178,22 @@ class TestUpToDateDatabase:
 
 class TestIdempotentMigration:
-    def test_production_db_paddle_cols_already_exist(
-        self, production_db, monkeypatch
-    ):
-        """Production scenario: paddle columns exist, no _migrations table.
-        0001 runs without error and gets recorded."""
-        monkeypatch.setenv("DATABASE_PATH", production_db)
-        migrate()
-        conn = sqlite3.connect(production_db)
-        cols = _column_names(conn, "subscriptions")
-        applied = {
-            r[0] for r in conn.execute("SELECT name FROM _migrations").fetchall()
-        }
+    def test_migrate_twice_is_idempotent(self, fresh_db_path):
+        """Running migrate() twice produces the same result."""
+        migrate(fresh_db_path)
+        conn = sqlite3.connect(fresh_db_path)
+        tables_first = _table_names(conn)
+        count_first = conn.execute("SELECT COUNT(*) FROM _migrations").fetchone()[0]
         conn.close()
-        assert "paddle_customer_id" in cols
-        assert "0001_rename_ls_to_paddle" in applied
+
+        migrate(fresh_db_path)
+        conn = sqlite3.connect(fresh_db_path)
+        tables_second = _table_names(conn)
+        count_second = conn.execute("SELECT COUNT(*) FROM _migrations").fetchone()[0]
+        conn.close()
+        assert tables_first == tables_second
+        assert count_first == count_second
 
 # ── TestDiscoverVersions ─────────────────────────────────────
@@ -237,8 +202,9 @@ class TestIdempotentMigration:
 
 class TestDiscoverVersions:
     def test_finds_and_sorts_version_files(self):
         versions = _discover_versions()
-        assert len(versions) >= 1
-        assert versions[0] == "0001_rename_ls_to_paddle"
+        assert len(versions) >= 2
+        assert versions[0] == "0000_initial_schema"
+        assert versions[1] == "0001_rename_ls_to_paddle"
 
     def test_ignores_non_matching_files(self, mock_versions_dir, monkeypatch):
         (mock_versions_dir / "__init__.py").write_text("")
@@ -258,69 +224,6 @@ class TestDiscoverVersions:
         assert _discover_versions() == []
 
-# ── TestIsFreshDb ─────────────────────────────────────────────
-
-class TestIsFreshDb:
-    def test_empty_db_is_fresh(self, tmp_path):
-        conn = sqlite3.connect(str(tmp_path / "empty.db"))
-        assert _is_fresh_db(conn) is True
-        conn.close()
-
-    def test_db_with_schema_is_not_fresh(self, tmp_path, schema_sql):
-        conn = sqlite3.connect(str(tmp_path / "full.db"))
-        conn.executescript(schema_sql)
-        assert _is_fresh_db(conn) is False
-        conn.close()
-
-    def test_db_with_single_table_is_not_fresh(self, tmp_path):
-        conn = sqlite3.connect(str(tmp_path / "one.db"))
-        conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
-        assert _is_fresh_db(conn) is False
-        conn.close()
-
-# ── TestMigration0001 ─────────────────────────────────────────
-
-class TestMigration0001:
-    @pytest.fixture
-    def mod_0001(self):
-        return importlib.import_module(
-            "padelnomics.migrations.versions.0001_rename_ls_to_paddle"
-        )
-
-    def test_renames_columns(self, tmp_path, mod_0001):
-        conn = sqlite3.connect(str(tmp_path / "rename.db"))
-        conn.executescript(_old_schema_sql())
-        mod_0001.up(conn)
-        cols = _column_names(conn, "subscriptions")
-        conn.close()
-        assert "paddle_customer_id" in cols
-        assert "paddle_subscription_id" in cols
-        assert "lemonsqueezy_customer_id" not in cols
-
-    def test_idempotent_when_already_renamed(self, tmp_path, schema_sql, mod_0001):
-        conn = sqlite3.connect(str(tmp_path / "idem.db"))
-        conn.executescript(schema_sql)
-        # Should not raise even though columns are already paddle_*
-        mod_0001.up(conn)
-        cols = _column_names(conn, "subscriptions")
-        conn.close()
-        assert "paddle_customer_id" in cols
-
-    def test_recreates_index(self, tmp_path, mod_0001):
-        conn = sqlite3.connect(str(tmp_path / "idx.db"))
-        conn.executescript(_old_schema_sql())
-        mod_0001.up(conn)
-        indexes = conn.execute(
-            "SELECT name FROM sqlite_master WHERE type='index'"
-            " AND name='idx_subscriptions_provider'"
-        ).fetchall()
-        conn.close()
-        assert len(indexes) == 1
-
 # ── TestMigrationOrdering ─────────────────────────────────────

@@ -328,11 +231,6 @@ class TestMigrationOrdering:
     def test_multiple_pending_run_in_order(self, tmp_path, monkeypatch):
         """Mock two version files and verify they run in sorted order."""
         db_path = str(tmp_path / "order.db")
-        # Create a DB with one arbitrary table so it's not "fresh"
-        conn = sqlite3.connect(db_path)
-        conn.execute("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
-        conn.close()
         monkeypatch.setenv("DATABASE_PATH", db_path)
 
         # Create fake version files in a temp versions dir
@@ -364,13 +262,10 @@ class TestMigrationOrdering:
             "padelnomics.migrations.versions.0002_second",
         ]
 
-    def test_migrations_table_created_on_existing_db(
-        self, existing_db, monkeypatch
-    ):
-        """An existing DB without _migrations gets the table after migrate()."""
-        monkeypatch.setenv("DATABASE_PATH", existing_db)
-        migrate()
-        conn = sqlite3.connect(existing_db)
+    def test_migrations_table_created_automatically(self, fresh_db_path):
+        """A fresh DB gets the _migrations table from migrate()."""
+        migrate(fresh_db_path)
+        conn = sqlite3.connect(fresh_db_path)
         tables = _table_names(conn)
         conn.close()
         assert "_migrations" in tables