Compare commits

..

31 Commits

Author SHA1 Message Date
Deeman
2a7eed1576 merge: test suite compression pass (-197 lines)
All checks were successful
CI / test (push) Successful in 51s
CI / tag (push) Successful in 3s
2026-03-02 10:46:01 +01:00
Deeman
162e633c62 refactor(tests): compress admin_client + mock_send_email into conftest
Lift admin_client fixture from 7 duplicate definitions into conftest.py.
Add mock_send_email fixture, replacing 60 inline patch() blocks across
test_emails.py, test_waitlist.py, and test_businessplan.py. Net -197 lines.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 09:40:52 +01:00
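The two lifted fixtures might look roughly like this in `conftest.py`. This is a sketch from the commit message only: the login route, credentials, and the exact patch target behavior are assumptions, not the real implementation.

```python
# conftest.py -- hypothetical sketch of the shared fixtures; the login step
# and route are assumptions based on the commit message.
from unittest.mock import patch

import pytest


@pytest.fixture
def admin_client(client):
    # Assumed: the real fixture authenticates the test client as an admin
    # before handing it to the test (previously duplicated in 7 files).
    client.post("/admin/login", data={"email": "admin@example.com"})
    return client


@pytest.fixture
def mock_send_email():
    # Replaces 60 inline `with patch(...)` blocks: tests now take the
    # fixture as an argument and drop one indentation level.
    with patch("padelnomics.worker.send_email") as mock:
        yield mock
```

A test that previously wrapped its whole body in `with patch(...)` now just declares `def test_signup_sends_email(admin_client, mock_send_email): ...`.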
Deeman
31017457a6 merge: semantic-compression — add compression helpers, macros, and coding philosophy
All checks were successful
CI / test (push) Successful in 50s
CI / tag (push) Successful in 3s
Applies Casey Muratori's semantic compression across all three packages:
- count_where() helper: 30+ COUNT(*) call sites compressed
- _forward_lead(): deduplicates lead forward routes
- 5 SQLMesh macros for country code patterns (7 models)
- skip_if_current() + write_jsonl_atomic() extract helpers
Net: -118 lines (272 added, 390 removed)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 08:00:15 +01:00
Deeman
f93e4fd0d1 chore(changelog): document semantic compression pass
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 07:54:44 +01:00
Deeman
567798ebe1 feat(extract): add skip_if_current() and write_jsonl_atomic() helpers
Task 5/6: Compress repeated patterns in extractors:
- skip_if_current(): cursor check + early-return dict (3 extractors)
- write_jsonl_atomic(): working-file → JSONL → compress (2 extractors)
Applied in gisco, geonames, census_usa, playtomic_tenants.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 07:49:18 +01:00
Deeman
b32b7cd748 merge: unify confirm dialog — pure hx-confirm + form[method=dialog]
Eliminates confirmAction() entirely. One code path: all confirmations
go through showConfirm() called by the htmx:confirm interceptor.
14 template files converted to hx-boost + hx-confirm pattern.
Pipeline endpoints updated to exclude HX-Boosted requests from the
HTMX partial path.

# Conflicts:
#	web/src/padelnomics/admin/templates/admin/affiliate_form.html
#	web/src/padelnomics/admin/templates/admin/affiliate_program_form.html
#	web/src/padelnomics/admin/templates/admin/base_admin.html
#	web/src/padelnomics/admin/templates/admin/partials/affiliate_program_results.html
#	web/src/padelnomics/admin/templates/admin/partials/affiliate_row.html
2026-03-02 07:48:49 +01:00
Deeman
6774254cb0 feat(sqlmesh): add country code macros, apply across models
Task 4/6: Add 5 macros to compress repeated country code patterns:
- @country_name / @country_slug: 20-country CASE in dim_cities, dim_locations
- @normalize_eurostat_country / @normalize_eurostat_nuts: EL→GR, UK→GB
- @infer_country_from_coords: bounding box for 8 markets
Net: +91 lines in macros, -135 lines in models = -44 lines total.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 07:45:52 +01:00
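What a macro like `@country_name` buys can be shown with a plain-Python sketch of the CASE expression it would inline into a model. This is not SQLMesh macro code; the country table here is a small illustrative subset, not the real 20 markets.

```python
# Plain-Python sketch of the expansion a @country_name-style macro performs;
# the mapping below is a hypothetical subset for illustration.
COUNTRY_NAMES = {"DE": "Germany", "ES": "Spain", "SE": "Sweden"}


def country_name_case(column):
    """Render the CASE expression the macro would inline at each call site."""
    whens = " ".join(
        f"WHEN '{code}' THEN '{name}'" for code, name in COUNTRY_NAMES.items()
    )
    return f"CASE {column} {whens} ELSE NULL END"
```

With the full 20-country mapping, that CASE block previously appeared verbatim in both `dim_cities` and `dim_locations`; the macro makes it one definition.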
Deeman
e87a7fc9d6 refactor(admin): extract _forward_lead() from duplicate lead forward routes
Task 3/6: lead_forward and lead_forward_htmx shared ~20 lines of
identical DB logic. Extracted into _forward_lead() that returns an
error string or None. Both routes now call the helper and differ
only in response format (redirect vs HTMX partial).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 07:43:50 +01:00
Deeman
3d7a72ba26 refactor: apply count_where() across remaining web blueprints
Task 2/6 continued: Compress 18 more COUNT(*) call sites across
suppliers, directory, dashboard, public, planner, pseo, and pipeline
routes. -24 lines net.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 07:40:24 +01:00
Deeman
a55501f2ea feat(core): add count_where() helper, compress admin COUNT queries
Task 2/6: Adds count_where(table_where, params) to core.py that
compresses the fetch_one + null-check COUNT(*) pattern. Applied
across admin/routes.py — dashboard stats shrinks from ~75 to ~25
lines, plus 10 more call sites compressed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 07:35:33 +01:00
Deeman
d3626193c5 refactor(admin): unify confirm dialog — pure hx-confirm + form[method=dialog]
Eliminate `confirmAction()` and the duplicate `cloneNode` hack entirely.
One code path: everything goes through `showConfirm()` called by the
`htmx:confirm` interceptor.

Dialog HTML:
- `<form method="dialog">` for native close semantics; button `value`
  becomes `dialog.returnValue` — no manual event listener reassignment.

JS:
- `showConfirm(message)` — Promise-based, listens for `close` once.
- `htmx:confirm` handler calls `showConfirm()` and calls `issueRequest`
  if confirmed. Replaces both the old HTMX handler and `confirmAction()`.

Templates (Padelnomics, 14 files):
- All `onclick=confirmAction(...)` and `onclick=confirm()` removed.
- Form-submit buttons: added `hx-boost="true"` to form + `hx-confirm`
  on the submit button.
- Pure HTMX buttons (pipeline_transform, pipeline_overview): `hx-confirm`
  replaces `onclick=if(!confirm(...))return false;`.

Pipeline routes (pipeline_trigger_extract, pipeline_trigger_transform):
- `is_htmx` now excludes `HX-Boosted: true` requests — boosted form
  POSTs get the normal redirect instead of the inline partial.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 07:35:32 +01:00
Deeman
7ea1f234e8 chore(changelog): document htmx:confirm guard fix
All checks were successful
CI / test (push) Successful in 51s
CI / tag (push) Successful in 2s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:40:07 +01:00
Deeman
c1cf472caf fix(admin): guard htmx:confirm handler against empty question
The handler called evt.preventDefault() unconditionally, so auto-poll
requests (hx-trigger="every 5s", no hx-confirm) caused an empty dialog
to pop up every 5 seconds. Add an early return when evt.detail.question
is falsy so only actual hx-confirm interactions are intercepted.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:39:38 +01:00
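The fix reduces to one early return. A sketch of the decision logic, with the listener wiring and dialog elided (the function name here is illustrative, not from the codebase):

```javascript
// Sketch of the guard: only real hx-confirm interactions carry a question;
// auto-poll requests (hx-trigger="every 5s") fire htmx:confirm without one.
function shouldIntercept(evt) {
  // Falsy question (undefined or "") means do not preventDefault --
  // the request proceeds with no dialog.
  return Boolean(evt.detail && evt.detail.question);
}
```

In the real handler, `evt.preventDefault()` and the dialog are reached only when this check passes.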
Deeman
f9e22a72dd merge: fix CI — update proxy tests for 2-tier design
All checks were successful
CI / test (push) Successful in 54s
CI / tag (push) Successful in 3s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:36:35 +01:00
Deeman
ce466e3f7f test(proxy): update supervisor tests for 2-tier proxy (no Webshare)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:36:30 +01:00
Deeman
563bd1fb2e merge: tiered-proxy-tenants — gisco extractor, proxy fixes, recheck datetime fix
Some checks failed
CI / test (push) Failing after 46s
CI / tag (push) Has been skipped
- feat: GISCO NUTS-2 extractor module (replaces standalone script)
- feat: wire 5 unscheduled extractors into workflows.toml
- fix: add load_dotenv() to _shared.py so .env proxies are picked up
- fix: recheck datetime parsing (HH:MM:SS slot times need start_date prefix)
- fix: graceful 0-venue early return in recheck
- fix(proxy): remove Webshare free tier — DC tier 1, residential tier 2

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:12:17 +01:00
Deeman
b980b8f567 fix(proxy): remove Webshare free tier — DC tier 1, residential tier 2
Free Webshare proxies were timing out and exhausting the circuit breaker
before datacenter/residential proxies got a chance to run.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:12:08 +01:00
Deeman
0733f1c2a1 docs(scratch): rename guide → question bank with full gap analysis
Transforms the raw question bank into an annotated gap analysis document:
- Every section tagged ANSWERED / PARTIAL / GAP
- Summary table of 13 gaps across 3 tiers with impact and feasibility
- Inline actionable notes linking to research files, planner inputs, and backlog

Key findings captured:
- Tier 1 gaps: subsidies/grants, buyer segmentation, indoor-vs-outdoor decision
  framework, OPEX benchmark display
- Tier 2 gaps: booking platform strategy, depreciation/tax shield, legal/regulatory
  checklist (DE), supplier selection framework, staffing plan template
- Tier 3 gaps: zero-court pSEO pages, pre-opening playbook, drive-time isochrones

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 21:30:27 +01:00
Deeman
320777d24c update env vars
All checks were successful
CI / test (push) Successful in 50s
CI / tag (push) Successful in 2s
2026-03-01 21:28:45 +01:00
Deeman
92930ac717 fix(extract): handle 0-result recheck gracefully — skip file write
When all proxy tiers are exhausted and 0 venues are fetched, the working
file is empty and compress_jsonl_atomic asserts non-empty. Return early
with a warning instead of crashing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 21:25:09 +01:00
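The guard described above, as a sketch. `compress_jsonl_atomic` and the logger are stand-ins passed as parameters; the return shape is an assumption.

```python
# Hypothetical sketch of the 0-result early return in the recheck extractor.
def finish_recheck(venues, working_path, compress, log):
    if not venues:
        # All proxy tiers exhausted: the working file is empty, and the
        # compress step would assert. Warn and bail instead of crashing.
        log(f"recheck: 0 venues fetched; skipping write of {working_path}")
        return {"venues": 0, "skipped": True}
    compress(working_path)
    return {"venues": len(venues), "skipped": False}
```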
Deeman
0cfc841c08 merge: fix recheck 0-result crash
All checks were successful
CI / test (push) Successful in 51s
CI / tag (push) Successful in 3s
2026-03-01 21:25:09 +01:00
Deeman
36deaba00e merge: fix recheck slot datetime parsing
All checks were successful
CI / test (push) Successful in 50s
CI / tag (push) Successful in 2s
2026-03-01 19:47:49 +01:00
Deeman
9608b7f601 feat(admin): replace all native confirm() with styled dialog + fix pipeline tabs scrollbar
Some checks failed
CI / tag (push) Has been cancelled
CI / test (push) Has been cancelled
- Add global htmx:confirm handler in base_admin.html that intercepts
  hx-confirm attributes and shows #confirm-dialog instead of window.confirm()
- Convert 4 pipeline HTMX buttons (Run Transform, Run Export, Run Full
  Pipeline, Run extractor) from onclick+confirm() to hx-confirm
- Convert 4 affiliate form/list delete buttons from onclick+confirm()
  to confirmAction() via event.preventDefault()
- Add scrollbar-width:none + ::-webkit-scrollbar{display:none} to
  .pipeline-tabs to suppress spurious horizontal scrollbar

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 19:47:34 +01:00
Deeman
0811b30cbd fix(extract): recheck slot datetime parsing — was silently skipping all slots
start_time is "HH:MM:SS" (time only), not a full ISO datetime. Combine it
with the resource's start_date to get "YYYY-MM-DDTHH:MM:SS" before parsing.
The ValueError was silently caught on every slot → 0 venues found → recheck
never actually ran since it was first deployed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 19:47:01 +01:00
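The fix above amounts to prefixing the date before parsing. A sketch with illustrative names (the real extractor's function and field names may differ):

```python
# Sketch of the slot datetime fix: start_time is "HH:MM:SS" only, so it must
# be combined with the resource's start_date before parsing.
from datetime import datetime


def parse_slot_start(start_date: str, start_time: str) -> datetime:
    # Parsing start_time alone raised ValueError on every slot, and the
    # exception was silently swallowed -- hence "0 venues found".
    return datetime.fromisoformat(f"{start_date}T{start_time}")
```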
Deeman
7d2950928e fix(infra): add R2_ENDPOINT to prod secrets for landing backup
All checks were successful
CI / test (push) Successful in 50s
CI / tag (push) Successful in 3s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 18:41:42 +01:00
Deeman
65e51d2972 fix(infra): switch landing backup to shared r2-landing rclone remote
All checks were successful
CI / test (push) Successful in 52s
CI / tag (push) Successful in 3s
Replace inline LITESTREAM_R2_* credentials in the backup service with
the named [r2-landing] rclone remote and R2_LANDING_* env vars, matching
the beanflows pattern. Add rclone.conf setup to bootstrap_supervisor.sh
so the remote is written from env on each bootstrap run.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 18:36:57 +01:00
Deeman
c5d872ec55 chore(secrets): add R2_LANDING_* vars for landing-zone backup bucket…
All checks were successful
CI / test (push) Successful in 49s
CI / tag (push) Successful in 2s
2026-03-01 17:32:51 +01:00
Deeman
75305935bd merge: GISCO extractor + wire all extractors + load_dotenv fix
All checks were successful
CI / test (push) Successful in 50s
CI / tag (push) Successful in 3s
2026-03-01 17:08:42 +01:00
Deeman
99cb0ac005 chore: remove .gitlab-ci.yml (GitLab now backup-only mirror)
All checks were successful
CI / test (push) Successful in 50s
CI / tag (push) Successful in 2s
CI runs on Gitea only. GitLab is a passive push mirror — no runners,
no tagging, no deploy involvement.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 17:06:09 +01:00
Deeman
a15c32d398 fix(extract): load .env automatically via load_dotenv()
PROXY_URLS_* and other secrets were defined in .env but never loaded,
causing availability to run in slow serial mode (1 req/s) instead of
parallel mode with proxies.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 16:41:59 +01:00
Deeman
97c5846d51 feat(extract): GISCO extractor + wire all unscheduled extractors
- New gisco.py: proper extractor module replacing scripts/download_gisco_nuts.py.
  Writes uncompressed .geojson (ST_Read can't handle .gz). Fixed partition path
  gisco/2024/01/nuts2_boundaries.geojson; cursor tracking skips re-download monthly.
- all.py: import + register gisco in EXTRACTORS (9 independent, 1 dep)
- pyproject.toml: add extract-gisco entry point
- workflows.toml: add census_usa, census_usa_income, eurostat_city_labels,
  ons_uk, gisco — all monthly, no dependencies
- Delete scripts/download_gisco_nuts.py (superseded)

Unblocks: stg_nuts2_boundaries, stg_regional_income, stg_income_usa,
and 4 downstream models (dim_locations, pseo_city_costs_de,
location_opportunity_profile, pseo_country_overview).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 15:49:39 +01:00
62 changed files with 955 additions and 1070 deletions


[SOPS-encrypted env file; values not recoverable. Hunk @@ -3,6 +3,8 @@ adds one encrypted comment line and a GITEA_TOKEN entry. Hunk @@ -71,7 +73,7 @@ updates sops_lastmodified (2026-03-01T13:26:08Z → 2026-03-01T13:34:16Z) and re-computes sops_mac. The age recipient and sops_version 3.12.1 are unchanged.]


[SOPS-encrypted secrets file; values not recoverable. Hunk @@ -42,8 +42,8 @@ changes the encrypted values of PROXY_CONCURRENCY and RECHECK_WINDOW_MINUTES. Hunk @@ -52,13 +52,18 @@ adds one encrypted comment plus R2_LANDING_BUCKET, R2_LANDING_ACCESS_KEY_ID, R2_LANDING_SECRET_ACCESS_KEY, and R2_ENDPOINT. sops_lastmodified updated (2026-03-01T13:25:41Z → 2026-03-01T20:26:09Z); sops_mac re-computed; the three age recipients and sops_version 3.12.1 are unchanged.]


@@ -1,31 +0,0 @@ (deleted: .gitlab-ci.yml)
stages:
  - test
  - tag

test:
  stage: test
  image: python:3.12-slim
  before_script:
    - pip install uv
  script:
    - uv sync
    - uv run pytest web/tests/ -x -q -p no:faulthandler
    - uv run ruff check web/src/ web/tests/
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

tag:
  stage: tag
  image:
    name: alpine/git
    entrypoint: [""]
  script:
    - git tag "v${CI_PIPELINE_IID}"
    - git push "https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "v${CI_PIPELINE_IID}"
  rules:
    - if: $CI_COMMIT_BRANCH == "master"

# Deployment is handled by the on-server supervisor (src/padelnomics/supervisor.py).
# It polls git every 60s, fetches tags, and deploys only when a new passing tag exists.
# No CI secrets needed — zero SSH keys, zero deploy credentials.


@@ -6,12 +6,36 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [Unreleased]
### Changed
- **Semantic compression pass** — applied Casey Muratori's compression workflow (write concrete → observe patterns → compress genuine repetitions) across all three packages. Net result: ~200 lines removed, codebase simpler.
- **`count_where()` helper** (`web/core.py`): compresses the `fetch_one("SELECT COUNT(*) ...") + null-check` pattern. Applied across 30+ call sites in admin, suppliers, directory, dashboard, public, and planner routes. Dashboard stats function shrinks from 75 to 25 lines.
- **`_forward_lead()` helper** (`web/admin/routes.py`): extracts shared DB logic from `lead_forward` and `lead_forward_htmx` — both routes now call the helper and differ only in response format.
- **SQLMesh macros** (`transform/macros/__init__.py`): 5 new macros compress repeated country code patterns across 7 SQL models: `@country_name`, `@country_slug`, `@normalize_eurostat_country`, `@normalize_eurostat_nuts`, `@infer_country_from_coords`.
- **Extract helpers** (`extract/utils.py`): `skip_if_current()` compresses cursor-check + early-return pattern (3 extractors); `write_jsonl_atomic()` compresses working-file → JSONL → compress pattern (2 extractors).
- **Coding philosophy updated** (`~/.claude/coding_philosophy.md`): added `<compression>` section documenting the workflow, the test ("Did this abstraction make the total codebase smaller?"), and distinction from premature DRY.
- **Test suite compression pass** — applied same compression workflow to `web/tests/` (30 files, 13,949 lines). Net result: -197 lines across 11 files.
- **`admin_client` fixture** lifted from 7 duplicate definitions into `conftest.py`.
- **`mock_send_email` fixture** added to `conftest.py`, replacing 60 inline `with patch("padelnomics.worker.send_email", ...)` blocks across `test_emails.py` (51), `test_waitlist.py` (4), `test_businessplan.py` (2). Each refactored test drops one indentation level.
### Fixed
- **Admin: empty confirm dialog on auto-poll** — `htmx:confirm` handler now guards with `if (!evt.detail.question) return` so auto-poll requests (`hx-trigger="every 5s"`, no `hx-confirm` attribute) no longer trigger an empty dialog every 5 seconds.
### Changed
- **Admin: styled confirm dialog for all destructive actions** — replaced all native `window.confirm()` calls with the existing `#confirm-dialog` styled `<dialog>`. A new global `htmx:confirm` handler intercepts HTMX confirmation prompts and shows the dialog; form-submit buttons on affiliate pages were updated to use `confirmAction()`. Affected: pipeline Transform tab (Run Transform, Run Export, Run Full Pipeline), pipeline Overview tab (Run extractor), affiliate product delete, affiliate program delete (both form and list variants).
- **Pipeline tabs: no scrollbar** — added `scrollbar-width: none` and `::-webkit-scrollbar { display: none }` to `.pipeline-tabs` to suppress the spurious horizontal scrollbar on narrow viewports.
### Fixed ### Fixed
- **Stale-tier failures no longer exhaust the next proxy tier** — with parallel workers, threads that fetched a proxy just before tier escalation reported failures after the tier changed, immediately blowing through the new tier's circuit breaker before it ever got tried (Rayobyte was skipped entirely). `record_failure(proxy_url)` now checks which tier the proxy belongs to and ignores the circuit breaker when the proxy is from an already-escalated tier. - **Stale-tier failures no longer exhaust the next proxy tier** — with parallel workers, threads that fetched a proxy just before tier escalation reported failures after the tier changed, immediately blowing through the new tier's circuit breaker before it ever got tried (Rayobyte was skipped entirely). `record_failure(proxy_url)` now checks which tier the proxy belongs to and ignores the circuit breaker when the proxy is from an already-escalated tier.
- **Proxy URL scheme validation in `load_proxy_tiers()`** — URLs in `PROXY_URLS_DATACENTER` / `PROXY_URLS_RESIDENTIAL` that are missing an `http://` or `https://` scheme are now logged as a warning and skipped, rather than being passed through and causing SSL handshake failures or connection errors at request time. Also fixed a missing `http://` prefix in the dev `.env` `PROXY_URLS_DATACENTER` entry. - **Proxy URL scheme validation in `load_proxy_tiers()`** — URLs in `PROXY_URLS_DATACENTER` / `PROXY_URLS_RESIDENTIAL` that are missing an `http://` or `https://` scheme are now logged as a warning and skipped, rather than being passed through and causing SSL handshake failures or connection errors at request time. Also fixed a missing `http://` prefix in the dev `.env` `PROXY_URLS_DATACENTER` entry.
### Changed
- **Unified confirm dialog — pure HTMX `hx-confirm` + `<form method="dialog">`** — eliminated the `confirmAction()` JS function and the duplicate `cloneNode` hack. All confirmation prompts now go through a single `showConfirm()` Promise-based function called by the `htmx:confirm` interceptor. The dialog HTML uses `<form method="dialog">` for native close semantics (`returnValue` is `"ok"` or `"cancel"`), removing the need to clone and replace buttons on every invocation. All 12 Padelnomics call sites converted from `onclick=confirmAction(...)` to `hx-boost="true"` + `hx-confirm="..."` on the submit button. Pipeline trigger endpoints updated to treat `HX-Boosted: true` requests as non-HTMX (returning a redirect rather than an inline partial) so boosted form submissions flow through the normal redirect cycle. Same changes applied to BeanFlows and the quart-saas-boilerplate template.
- `web/src/padelnomics/admin/templates/admin/base_admin.html`: replaced dialog `<div>` with `<form method="dialog">`, replaced `confirmAction()` + inline `htmx:confirm` handler with unified `showConfirm()` + single `htmx:confirm` listener
- `web/src/padelnomics/admin/pipeline_routes.py`: `pipeline_trigger_extract` and `pipeline_trigger_transform` now exclude `HX-Boosted: true` from the HTMX partial path
- 12 templates updated: `pipeline.html`, `partials/pipeline_extractions.html`, `affiliate_form.html`, `affiliate_program_form.html`, `partials/affiliate_program_results.html`, `partials/affiliate_row.html`, `generate_form.html`, `articles.html`, `audience_contacts.html`, `template_detail.html`, `partials/scenario_results.html`
- Same changes mirrored to BeanFlows and quart-saas-boilerplate template
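The `HX-Boosted` carve-out described above reduces to a header predicate; a minimal sketch, with an illustrative helper name rather than the actual route code:

```python
def wants_htmx_partial(headers: dict[str, str]) -> bool:
    """True only for genuine HTMX requests.

    hx-boost'ed form submissions send both HX-Request and HX-Boosted;
    they should get the normal redirect cycle, not an inline partial.
    """
    return (
        headers.get("HX-Request") == "true"
        and headers.get("HX-Boosted") != "true"
    )
```

A trigger endpoint can branch on this predicate: render the partial when it returns True, otherwise issue the usual redirect.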
- **Per-proxy dead tracking in tiered cycler** — `make_tiered_cycler` now accepts a `proxy_failure_limit` parameter (default 3). Individual proxies that hit the limit are marked dead and permanently skipped by `next_proxy()`. If all proxies in the active tier are dead, `next_proxy()` auto-escalates to the next tier without needing the tier-level threshold. `record_failure(proxy_url)` and `record_success(proxy_url)` accept an optional `proxy_url` argument for per-proxy tracking; callers without `proxy_url` are fully backward-compatible. New `dead_proxy_count()` callable exposed for monitoring.
  - `extract/padelnomics_extract/src/padelnomics_extract/proxy.py`: added per-proxy state (`proxy_failure_counts`, `dead_proxies`), updated `next_proxy`/`record_failure`/`record_success`, added `dead_proxy_count`
  - `extract/padelnomics_extract/src/padelnomics_extract/playtomic_tenants.py`: `_fetch_page_via_cycler` passes `proxy_url` to `record_success`/`record_failure`
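The per-proxy dead tracking here, together with the stale-tier guard from the Fixed entry above, can be sketched as a closure-based cycler. This is an assumed shape for illustration, not the actual `proxy.py` implementation:

```python
import itertools


def make_tiered_cycler(tiers, tier_failure_threshold=5, proxy_failure_limit=3):
    state = {"tier": 0, "streak": 0}
    failures: dict[str, int] = {}
    dead: set[str] = set()
    tier_of = {url: i for i, urls in enumerate(tiers) for url in urls}
    cycles = [itertools.cycle(urls) for urls in tiers]

    def _escalate() -> None:
        state["tier"] += 1
        state["streak"] = 0

    def next_proxy():
        # Round-robin within the active tier, skipping dead proxies.
        # If every proxy in the tier is dead, auto-escalate.
        while True:
            tier = state["tier"]
            if any(u not in dead for u in tiers[tier]):
                while True:
                    url = next(cycles[tier])
                    if url not in dead:
                        return url
            if tier >= len(tiers) - 1:
                return None  # all tiers exhausted
            _escalate()

    def record_failure(proxy_url=None):
        if proxy_url is not None:
            if tier_of.get(proxy_url, state["tier"]) < state["tier"]:
                # Stale report from an already-escalated tier: it must not
                # trip the current tier's circuit breaker.
                return
            failures[proxy_url] = failures.get(proxy_url, 0) + 1
            if failures[proxy_url] >= proxy_failure_limit:
                dead.add(proxy_url)
        state["streak"] += 1
        if state["streak"] >= tier_failure_threshold and state["tier"] < len(tiers) - 1:
            _escalate()

    def record_success(proxy_url=None):
        state["streak"] = 0
        if proxy_url is not None:
            failures[proxy_url] = 0

    def dead_proxy_count() -> int:
        return len(dead)

    return next_proxy, record_failure, record_success, dead_proxy_count
```

Omitting `proxy_url` skips the per-proxy bookkeeping entirely, which is what keeps old call sites backward-compatible.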

View File

@@ -21,6 +21,7 @@ extract-census-usa = "padelnomics_extract.census_usa:main"
 extract-census-usa-income = "padelnomics_extract.census_usa_income:main"
 extract-ons-uk = "padelnomics_extract.ons_uk:main"
 extract-geonames = "padelnomics_extract.geonames:main"
+extract-gisco = "padelnomics_extract.gisco:main"

 [build-system]
 requires = ["hatchling"]

View File

@@ -11,9 +11,12 @@ from datetime import UTC, datetime
 from pathlib import Path

 import niquests
+from dotenv import load_dotenv

 from .utils import end_run, open_state_db, start_run

+load_dotenv()

 LANDING_DIR = Path(os.environ.get("LANDING_DIR", "data/landing"))
 HTTP_TIMEOUT_SECONDS = 30

View File

@@ -7,7 +7,7 @@ A graphlib.TopologicalSorter schedules them: tasks with no unmet dependencies
 run immediately in parallel; each completion may unlock new tasks.

 Current dependency graph:
-- All 8 non-availability extractors have no dependencies (run in parallel)
+- All 9 non-availability extractors have no dependencies (run in parallel)
 - playtomic_availability depends on playtomic_tenants (starts as soon as
   tenants finishes, even if other extractors are still running)
 """
@@ -26,6 +26,8 @@ from .eurostat_city_labels import EXTRACTOR_NAME as EUROSTAT_CITY_LABELS_NAME
 from .eurostat_city_labels import extract as extract_eurostat_city_labels
 from .geonames import EXTRACTOR_NAME as GEONAMES_NAME
 from .geonames import extract as extract_geonames
+from .gisco import EXTRACTOR_NAME as GISCO_NAME
+from .gisco import extract as extract_gisco
 from .ons_uk import EXTRACTOR_NAME as ONS_UK_NAME
 from .ons_uk import extract as extract_ons_uk
 from .overpass import EXTRACTOR_NAME as OVERPASS_NAME
@@ -50,6 +52,7 @@ EXTRACTORS: dict[str, tuple] = {
     CENSUS_USA_INCOME_NAME: (extract_census_usa_income, []),
     ONS_UK_NAME: (extract_ons_uk, []),
     GEONAMES_NAME: (extract_geonames, []),
+    GISCO_NAME: (extract_gisco, []),
     TENANTS_NAME: (extract_tenants, []),
     AVAILABILITY_NAME: (extract_availability, [TENANTS_NAME]),
 }

View File

@@ -19,7 +19,7 @@ from pathlib import Path
 import niquests

 from ._shared import HTTP_TIMEOUT_SECONDS, run_extractor, setup_logging
-from .utils import get_last_cursor, landing_path, write_gzip_atomic
+from .utils import landing_path, skip_if_current, write_gzip_atomic

 logger = setup_logging("padelnomics.extract.census_usa")
@@ -73,10 +73,10 @@ def extract(
         return {"files_written": 0, "files_skipped": 1, "bytes_written": 0}

     # Skip if we already have data for this month (annual data, monthly cursor)
-    last_cursor = get_last_cursor(conn, EXTRACTOR_NAME)
-    if last_cursor == year_month:
+    skip = skip_if_current(conn, EXTRACTOR_NAME, year_month)
+    if skip:
         logger.info("already have data for %s — skipping", year_month)
-        return {"files_written": 0, "files_skipped": 1, "bytes_written": 0}
+        return skip

     year, month = year_month.split("/")
     url = f"{ACS_URL}&key={api_key}"

View File

@@ -19,7 +19,6 @@ Output: one JSON object per line, e.g.:
 import gzip
 import io
-import json
 import os
 import sqlite3
 import zipfile
@@ -28,7 +27,7 @@ from pathlib import Path
 import niquests

 from ._shared import HTTP_TIMEOUT_SECONDS, run_extractor, setup_logging
-from .utils import compress_jsonl_atomic, get_last_cursor, landing_path
+from .utils import landing_path, skip_if_current, write_jsonl_atomic

 logger = setup_logging("padelnomics.extract.geonames")
@@ -139,10 +138,10 @@ def extract(
         tmp.rename(dest)
         return {"files_written": 0, "files_skipped": 1, "bytes_written": 0}

-    last_cursor = get_last_cursor(conn, EXTRACTOR_NAME)
-    if last_cursor == year_month:
+    skip = skip_if_current(conn, EXTRACTOR_NAME, year_month)
+    if skip:
         logger.info("already have data for %s — skipping", year_month)
-        return {"files_written": 0, "files_skipped": 1, "bytes_written": 0}
+        return skip

     year, month = year_month.split("/")
@@ -168,11 +167,7 @@ def extract(
     dest_dir = landing_path(landing_dir, "geonames", year, month)
     dest = dest_dir / "cities_global.jsonl.gz"

-    working_path = dest.with_suffix(".working.jsonl")
-    with open(working_path, "w") as f:
-        for row in rows:
-            f.write(json.dumps(row, separators=(",", ":")) + "\n")
-    bytes_written = compress_jsonl_atomic(working_path, dest)
+    bytes_written = write_jsonl_atomic(dest, rows)
     logger.info("written %s bytes compressed", f"{bytes_written:,}")
     return {

View File

@@ -0,0 +1,95 @@
+"""GISCO NUTS-2 boundary GeoJSON extractor.
+
+Downloads NUTS-2 boundary polygons from Eurostat GISCO. The file is stored
+uncompressed because DuckDB's ST_Read cannot read gzipped files.
+
+NUTS classification revises approximately every 7 years (current: 2021).
+The partition path is fixed to the revision year, not the run date, making
+the source version explicit. Cursor tracking still uses year_month to avoid
+re-downloading on every monthly run.
+
+Landing: {LANDING_DIR}/gisco/2024/01/nuts2_boundaries.geojson (~5 MB, uncompressed)
+"""
+
+import sqlite3
+from pathlib import Path
+
+import niquests
+
+from ._shared import HTTP_TIMEOUT_SECONDS, run_extractor, setup_logging
+from .utils import skip_if_current
+
+logger = setup_logging("padelnomics.extract.gisco")
+
+EXTRACTOR_NAME = "gisco"
+
+# NUTS 2021 revision, 20M scale (1:20,000,000), WGS84 (EPSG:4326), LEVL_2 only.
+# 20M resolution gives simplified polygons that are fast for point-in-polygon
+# matching without sacrificing accuracy at the NUTS-2 boundary level.
+GISCO_URL = (
+    "https://gisco-services.ec.europa.eu/distribution/v2/nuts/geojson/"
+    "NUTS_RG_20M_2021_4326_LEVL_2.geojson"
+)
+
+# Fixed partition: NUTS boundaries are a static reference file, not time-series data.
+# The 2024/01 partition reflects when this NUTS 2021 dataset was first ingested.
+DEST_REL = Path("gisco/2024/01/nuts2_boundaries.geojson")
+
+_GISCO_TIMEOUT_SECONDS = HTTP_TIMEOUT_SECONDS * 4  # ~5 MB; generous for slow upstreams
+
+
+def extract(
+    landing_dir: Path,
+    year_month: str,
+    conn: sqlite3.Connection,
+    session: niquests.Session,
+) -> dict:
+    """Download NUTS-2 GeoJSON. Skips if already run this month or file exists."""
+    skip = skip_if_current(conn, EXTRACTOR_NAME, year_month)
+    if skip:
+        logger.info("already ran for %s — skipping", year_month)
+        return skip
+
+    dest = landing_dir / DEST_REL
+    if dest.exists():
+        logger.info("file already exists (skipping download): %s", dest)
+        return {
+            "files_written": 0,
+            "files_skipped": 1,
+            "bytes_written": 0,
+            "cursor_value": year_month,
+        }
+
+    dest.parent.mkdir(parents=True, exist_ok=True)
+    logger.info("GET %s", GISCO_URL)
+    resp = session.get(GISCO_URL, timeout=_GISCO_TIMEOUT_SECONDS)
+    resp.raise_for_status()
+    content = resp.content
+    assert len(content) > 100_000, (
+        f"GeoJSON too small ({len(content)} bytes) — download may have failed"
+    )
+    assert b'"FeatureCollection"' in content, "Response does not look like GeoJSON"
+
+    # Write uncompressed — ST_Read requires a plain file, not .gz
+    tmp = dest.with_suffix(".geojson.tmp")
+    tmp.write_bytes(content)
+    tmp.rename(dest)
+    size_mb = len(content) / 1_000_000
+    logger.info("written %s (%.1f MB)", dest, size_mb)
+    return {
+        "files_written": 1,
+        "files_skipped": 0,
+        "bytes_written": len(content),
+        "cursor_value": year_month,
+    }
+
+
+def main() -> None:
+    run_extractor(EXTRACTOR_NAME, extract)
+
+
+if __name__ == "__main__":
+    main()

View File

@@ -434,8 +434,10 @@ def _find_venues_with_upcoming_slots(
             if not start_time_str:
                 continue
             try:
-                # Parse "2026-02-24T17:00:00" format
-                slot_start = datetime.fromisoformat(start_time_str).replace(tzinfo=UTC)
+                # start_time is "HH:MM:SS"; combine with resource's start_date
+                start_date = resource.get("start_date", "")
+                full_dt = f"{start_date}T{start_time_str}" if start_date else start_time_str
+                slot_start = datetime.fromisoformat(full_dt).replace(tzinfo=UTC)
                 if window_start <= slot_start < window_end:
                     tenant_ids.add(tid)
                     break  # found one upcoming slot, no need to check more
@@ -520,6 +522,10 @@ def extract_recheck(
     dest_dir = landing_path(landing_dir, "playtomic", year, month)
     dest = dest_dir / f"availability_{target_date}_recheck_{recheck_hour:02d}.jsonl.gz"

+    if not venues_data:
+        logger.warning("Recheck fetched 0 venues (%d errors) — skipping file write", venues_errored)
+        return {"files_written": 0, "files_skipped": 0, "bytes_written": 0}
+
     captured_at = datetime.now(UTC).strftime("%Y-%m-%dT%H:%M:%SZ")
     working_path = dest.with_suffix("").with_suffix(".working.jsonl")
     with open(working_path, "w") as f:

View File

@@ -21,7 +21,6 @@ Rate: 1 req / 2 s per IP (see docs/data-sources-inventory.md §1.2).
 Landing: {LANDING_DIR}/playtomic/{year}/{month}/tenants.jsonl.gz
 """

-import json
 import os
 import sqlite3
 import time
@@ -33,7 +32,7 @@ import niquests
 from ._shared import HTTP_TIMEOUT_SECONDS, run_extractor, setup_logging, ua_for_proxy
 from .proxy import load_proxy_tiers, make_tiered_cycler
-from .utils import compress_jsonl_atomic, landing_path
+from .utils import landing_path, write_jsonl_atomic

 logger = setup_logging("padelnomics.extract.playtomic_tenants")
@@ -215,11 +214,7 @@ def extract(
         time.sleep(THROTTLE_SECONDS)

     # Write each tenant as a JSONL line, then compress atomically
-    working_path = dest.with_suffix(".working.jsonl")
-    with open(working_path, "w") as f:
-        for tenant in all_tenants:
-            f.write(json.dumps(tenant, separators=(",", ":")) + "\n")
-    bytes_written = compress_jsonl_atomic(working_path, dest)
+    bytes_written = write_jsonl_atomic(dest, all_tenants)
     logger.info("%d unique venues -> %s", len(all_tenants), dest)
     return {

View File

@@ -3,10 +3,9 @@
 Proxies are configured via environment variables. When unset, all functions
 return None/no-op — extractors fall back to direct requests.

-Three-tier escalation: free → datacenter → residential.
-  Tier 1 (free): WEBSHARE_DOWNLOAD_URL — auto-fetched from Webshare API
-  Tier 2 (datacenter): PROXY_URLS_DATACENTER — comma-separated paid DC proxies
-  Tier 3 (residential): PROXY_URLS_RESIDENTIAL — comma-separated paid residential proxies
+Two-tier escalation: datacenter → residential.
+  Tier 1 (datacenter): PROXY_URLS_DATACENTER — comma-separated paid DC proxies
+  Tier 2 (residential): PROXY_URLS_RESIDENTIAL — comma-separated paid residential proxies

 Tiered circuit breaker:
   Active tier is used until consecutive failures >= threshold, then escalates
@@ -69,22 +68,15 @@ def fetch_webshare_proxies(download_url: str, max_proxies: int = MAX_WEBSHARE_PR
 def load_proxy_tiers() -> list[list[str]]:
-    """Assemble proxy tiers in escalation order: free → datacenter → residential.
+    """Assemble proxy tiers in escalation order: datacenter → residential.

-    Tier 1 (free): fetched from WEBSHARE_DOWNLOAD_URL if set.
-    Tier 2 (datacenter): PROXY_URLS_DATACENTER (comma-separated).
-    Tier 3 (residential): PROXY_URLS_RESIDENTIAL (comma-separated).
+    Tier 1 (datacenter): PROXY_URLS_DATACENTER (comma-separated).
+    Tier 2 (residential): PROXY_URLS_RESIDENTIAL (comma-separated).

     Empty tiers are omitted. Returns [] if no proxies configured anywhere.
     """
     tiers: list[list[str]] = []
-    webshare_url = os.environ.get("WEBSHARE_DOWNLOAD_URL", "").strip()
-    if webshare_url:
-        free_proxies = fetch_webshare_proxies(webshare_url)
-        if free_proxies:
-            tiers.append(free_proxies)
     for var in ("PROXY_URLS_DATACENTER", "PROXY_URLS_RESIDENTIAL"):
         raw = os.environ.get(var, "")
         urls = [u.strip() for u in raw.split(",") if u.strip()]

View File

@@ -101,6 +101,19 @@ def get_last_cursor(conn: sqlite3.Connection, extractor: str) -> str | None:
     return row["cursor_value"] if row else None

+_SKIP_RESULT = {"files_written": 0, "files_skipped": 1, "bytes_written": 0}
+
+
+def skip_if_current(conn: sqlite3.Connection, extractor: str, year_month: str) -> dict | None:
+    """Return an early-exit result dict if this extractor already ran for year_month.
+
+    Returns None when the extractor should proceed with extraction.
+    """
+    if get_last_cursor(conn, extractor) == year_month:
+        return _SKIP_RESULT
+    return None
+
 # ---------------------------------------------------------------------------
 # File I/O helpers
 # ---------------------------------------------------------------------------
@@ -176,6 +189,20 @@ def write_gzip_atomic(path: Path, data: bytes) -> int:
     return len(compressed)

+def write_jsonl_atomic(dest: Path, items: list[dict]) -> int:
+    """Write items as JSONL, then compress atomically to dest (.jsonl.gz).
+
+    Compresses the working-file → JSONL → gzip pattern into one call.
+    Returns compressed bytes written.
+    """
+    assert items, "items must not be empty"
+    working_path = dest.with_suffix(".working.jsonl")
+    with open(working_path, "w") as f:
+        for item in items:
+            f.write(json.dumps(item, separators=(",", ":")) + "\n")
+    return compress_jsonl_atomic(working_path, dest)
+
 def compress_jsonl_atomic(jsonl_path: Path, dest_path: Path) -> int:
     """Compress a JSONL working file to .jsonl.gz atomically, then delete the source.

View File

@@ -54,6 +54,40 @@ chmod 600 "${REPO_DIR}/.env"
 sudo -u "${SERVICE_USER}" bash -c "cd ${REPO_DIR} && ${UV} sync --all-packages"

+# ── rclone config (r2-landing remote) ────────────────────────────────────────
+_env_get() { grep -E "^${1}=" "${REPO_DIR}/.env" 2>/dev/null | head -1 | cut -d= -f2- | tr -d '"'"'" || true; }
+R2_LANDING_KEY=$(_env_get R2_LANDING_ACCESS_KEY_ID)
+R2_LANDING_SECRET=$(_env_get R2_LANDING_SECRET_ACCESS_KEY)
+R2_ENDPOINT=$(_env_get R2_ENDPOINT)
+if [ -n "${R2_LANDING_KEY}" ] && [ -n "${R2_LANDING_SECRET}" ] && [ -n "${R2_ENDPOINT}" ]; then
+  RCLONE_CONF_DIR="/home/${SERVICE_USER}/.config/rclone"
+  RCLONE_CONF="${RCLONE_CONF_DIR}/rclone.conf"
+  sudo -u "${SERVICE_USER}" mkdir -p "${RCLONE_CONF_DIR}"
+  grep -v '^\[r2-landing\]' "${RCLONE_CONF}" 2>/dev/null > "${RCLONE_CONF}.tmp" || true
+  cat >> "${RCLONE_CONF}.tmp" <<EOF
+[r2-landing]
+type = s3
+provider = Cloudflare
+access_key_id = ${R2_LANDING_KEY}
+secret_access_key = ${R2_LANDING_SECRET}
+endpoint = ${R2_ENDPOINT}
+acl = private
+no_check_bucket = true
+EOF
+  mv "${RCLONE_CONF}.tmp" "${RCLONE_CONF}"
+  chown "${SERVICE_USER}:${SERVICE_USER}" "${RCLONE_CONF}"
+  chmod 600 "${RCLONE_CONF}"
+  echo "$(date '+%H:%M:%S') ==> rclone [r2-landing] remote configured."
+else
+  echo "$(date '+%H:%M:%S') ==> R2_LANDING_* not set — skipping rclone config."
+fi
+
 # ── Systemd services ──────────────────────────────────────────────────────────
 cp "${REPO_DIR}/infra/landing-backup/padelnomics-landing-backup.service" /etc/systemd/system/

View File

@@ -7,15 +7,5 @@ Wants=network-online.target
 Type=oneshot
 User=padelnomics_service
 EnvironmentFile=/opt/padelnomics/.env
-Environment=LANDING_DIR=/data/padelnomics/landing
-ExecStart=/usr/bin/rclone sync ${LANDING_DIR} :s3:${LITESTREAM_R2_BUCKET}/padelnomics/landing \
-    --s3-provider Cloudflare \
-    --s3-access-key-id ${LITESTREAM_R2_ACCESS_KEY_ID} \
-    --s3-secret-access-key ${LITESTREAM_R2_SECRET_ACCESS_KEY} \
-    --s3-endpoint https://${LITESTREAM_R2_ENDPOINT} \
-    --s3-no-check-bucket \
-    --exclude ".state.sqlite*"
-StandardOutput=journal
-StandardError=journal
-SyslogIdentifier=padelnomics-landing-backup
+ExecStart=/bin/sh -c 'exec /usr/bin/rclone sync /data/padelnomics/landing/ r2-landing:${R2_LANDING_BUCKET}/padelnomics/ --log-level INFO --exclude ".state.sqlite*"'
+TimeoutStartSec=1800

View File

@@ -39,3 +39,23 @@ module = "padelnomics_extract.playtomic_availability"
 entry = "main_recheck"
 schedule = "0,30 6-23 * * *"
 depends_on = ["playtomic_availability"]
+
+[census_usa]
+module = "padelnomics_extract.census_usa"
+schedule = "monthly"
+
+[census_usa_income]
+module = "padelnomics_extract.census_usa_income"
+schedule = "monthly"
+
+[eurostat_city_labels]
+module = "padelnomics_extract.eurostat_city_labels"
+schedule = "monthly"
+
+[ons_uk]
+module = "padelnomics_extract.ons_uk"
+schedule = "monthly"
+
+[gisco]
+module = "padelnomics_extract.gisco"
+schedule = "monthly"

View File

@@ -1,4 +1,35 @@
# Building a Padel Hall — Complete Guide # Padel Hall — Question Bank & Gap Analysis
> **What this file is**: A structured question bank covering the full universe of questions a padel hall entrepreneur needs to answer — from concept to exit. It is **not** an article for publication.
>
> **Purpose**: Gap analysis — identify which questions Padelnomics already answers (planner, city articles, pipeline data, business plan PDF) and which are unanswered gaps we could fill to improve product value.
>
> **Coverage legend**:
> - `ANSWERED` — fully covered by the planner, city articles, or BP export
> - `PARTIAL` — partially addressed; notable gap or missing depth
> - `GAP` — not addressed at all; actionable opportunity
---
## Gap Analysis Summary
| Tier | Gap | Estimated Impact | Status |
|------|-----|-----------------|--------|
| 1 | Subsidies & grants (Germany) | High | Not in product; data exists in `research/padel-hall-economics.md` |
| 1 | Buyer segmentation (sports club / commercial / hotel / franchise) | High | Not in planner; segmentation table exists in research |
| 1 | Indoor vs outdoor decision framework | High | Planner models both; no comparison table or decision guide |
| 1 | OPEX benchmarks shown inline | Medium-High | Planner has inputs; defaults not visually benchmarked |
| 2 | Booking platform strategy (Playtomic vs Matchi vs custom) | Medium | Zero guidance; we scrape Playtomic so know it well |
| 2 | Depreciation & tax shield | Medium | All calcs pre-tax; Germany: 30% effective, 7yr courts |
| 2 | Legal & regulatory checklist (Germany) | Medium | Only permit cost line; Bauantrag, TA Lärm, GmbH etc. missing |
| 2 | Court supplier selection framework | Medium | Supplier directory exists; no evaluation criteria |
| 2 | Staffing plan template | Medium | BP has narrative field; no structured role × FTE × salary |
| 3 | Zero-court location pages (white-space pSEO) | High data value | `location_opportunity_profile` scores them; none published |
| 3 | Pre-opening / marketing playbook | Low-Medium | Out of scope; static article possible |
| 3 | Catchment area isochrones (drive-time) | Low | Heavy lift; `nearest_padel_court_km` is straight-line only |
| 3 | Trend/fad risk quantification | Low | Inherently speculative |
---
## Table of Contents ## Table of Contents
@@ -16,6 +47,8 @@
 ### Market & Demand

+> **COVERAGE: PARTIAL** — Venue counts, density (venues/100K), Market Score, and Opportunity Score per city are all answered by pipeline data (`location_opportunity_profile`) and surfaced in city articles. Missing: actual player counts, competitor utilization rates, household income / age demographics for the catchment area. No drive-time isochrone analysis (Tier 3 gap).
+
 - How many padel players are in your target area? Is the sport growing locally or are you betting on future adoption?
 - What's the competitive landscape — how many existing courts within a 20–30 minute drive radius? Are they full? What are their peak/off-peak utilization rates?
 - What's the demographic profile of your catchment area (income, age, sports participation)?
@@ -23,6 +56,8 @@
 ### Site & Location

+> **COVERAGE: GAP** — The planner has a rent/land cost input and a `own` toggle for buy vs lease, but there is no guidance on site selection criteria (ceiling height, column spacing, zoning classification, parking ratios). A static article or checklist would cover this. See also Tier 2 gap: legal/regulatory checklist.
+
 - Do you want to build new (greenfield), convert an existing building (warehouse, industrial hall), or add to an existing sports complex?
 - What zoning and building regulations apply? Is a padel hall classified as sports, leisure, commercial?
 - What's the required ceiling height? (Minimum ~8–10m for indoor padel, ideally 10m+)
@@ -30,6 +65,8 @@
 ### Product & Scope

+> **COVERAGE: PARTIAL** — Court count is fully answered (planner supports 1–12 courts, sensitivity analysis included). Ancillary revenue streams (coaching, F&B, pro shop, events, memberships, corporate) are modelled. Indoor vs outdoor is modelled but there is no structured decision framework comparing CAPEX, revenue ceiling, seasonal risk, noise, and permits (Tier 1 gap #3). Quality level / positioning is not addressed.
+
 - How many courts? (Typically 4–8 is the sweet spot for a standalone hall; fewer than 4 struggles with profitability, more than 8 requires very strong demand)
 - Indoor only, outdoor, or hybrid with a retractable/seasonal structure?
 - What ancillary offerings: pro shop, café/bar/lounge, fitness area, changing rooms, padel school/academy?
@@ -37,6 +74,8 @@
 ### Financial

+> **COVERAGE: ANSWERED** — All four questions are directly answered by the planner: equity/debt split, rent/land cost, real peak/off-peak prices per city (from Playtomic via `planner_defaults`), utilization ramp curve (Year 1–5), and breakeven utilization (sensitivity grid).
+
 - What's your total budget, and what's the split between equity and debt?
 - What rental or land purchase cost can you sustain?
 - What are realistic court booking prices in your market?
@@ -45,6 +84,8 @@
 ### Legal & Organizational

+> **COVERAGE: GAP** — Only a permit cost line item exists in CAPEX. No entity guidance (GmbH vs UG vs Verein), no permit checklist, no license types, no insurance guidance. A Germany-first legal/regulatory checklist (Bauantrag, Nutzungsänderung, TA Lärm, Gewerbeerlaubnis, §4 Nr. 22 UStG sports VAT exemption) would be high-value static content (Tier 2 gap #7). Buyer segmentation (sports club vs. commercial) affects entity choice and grant eligibility (Tier 1 gap #2).
+
 - What legal entity will you use?
 - Do you need partners (operational, financial, franchise)?
 - What permits, licenses, and insurance do you need?
@@ -56,6 +97,10 @@
 ### Phase 1: Feasibility & Concept (Month 1–3)

+> **COVERAGE: ANSWERED** — This phase is fully supported. Market research → city articles (venue density, Market Score, Opportunity Score). Concept development → planner inputs. Location scouting → city articles + planner. Preliminary financial model → planner. Go/no-go → planner output (EBITDA, IRR, NPV).
+>
+> Missing: Buyer segmentation (Tier 1 gap #2) — the planner treats all users identically. A "project type" selector (sports club / commercial / hotel / franchise) would adjust CAPEX defaults, grant eligibility, and entity guidance.
+
 1. **Market research**: Survey local players, visit competing facilities, analyze demographics within a 15–20 minute drive radius. Talk to padel coaches and club organizers.
 2. **Concept development**: Define your number of courts, target audience, service level, and ancillary revenue streams.
 3. **Location scouting**: Identify 3–5 candidate sites. Evaluate each on accessibility, visibility, size, ceiling height (if conversion), zoning, and cost.
### Phase 2: Planning & Design (Month 3–6)
> **COVERAGE: PARTIAL** — Detailed financial model (step 9) and financing (step 10) are fully answered by the planner (DSCR, covenants, sensitivity). Court supplier selection (step 8) has a partial answer: a supplier directory exists in the product but there is no evaluation framework (Tier 2 gap #8: origin, price/court, warranty, glass type, installation, lead time). Permit process (step 11) is a gap (Tier 2 gap #7). Site security and architect hiring are operational advice, out of scope.
6. **Secure the site**: Sign a letter of intent or option agreement for purchase or lease.
7. **Hire an architect** experienced in sports facilities. They'll produce floor plans, elevations, structural assessments (for conversions), and MEP (mechanical, electrical, plumbing) layouts.
8. **Padel court supplier selection**: Get quotes from manufacturers (e.g., Mondo, Padelcreations, MejorSet). Courts come as prefabricated modules — coordinate dimensions, drainage, lighting, and glass specifications with your architect.
### Phase 3: Construction / Conversion (Month 6–12)
> **COVERAGE: PARTIAL** — Booking system (step 15) is partially addressed: booking system cost is a planner input, but there is no guidance on platform selection (Playtomic vs Matchi vs custom) despite this being a real decision with revenue and data implications (Tier 2 gap #5). Construction, installation, fit-out, and inspections are operational steps outside Padelnomics' scope.
12. **Tender and contract construction**: Either a general contractor or construction management approach. Key trades: structural/civil, flooring, HVAC (critical for indoor comfort), electrical (LED court lighting to specific lux standards), plumbing.
13. **Install padel courts**: Usually done after the building shell is complete. Courts take 2–4 weeks to install per batch.
14. **Fit-out ancillary areas**: Reception, changing rooms, lounge/bar, pro shop.
### Phase 4: Pre-Opening (Month 10–13)
> **COVERAGE: PARTIAL** — Staffing plan (step 17): the BP export has a `staffing_plan` narrative field, but there is no structured template with role × FTE × salary defaults. Research benchmarks (€9.9–14.2K/month for 2–3 FTE + manager) could pre-fill this based on court count (Tier 2 gap #9). Marketing playbook (step 18): not addressed; could be a static article (Tier 3 gap #11). Soft/grand opening: out of scope.
17. **Hire staff**: Manager, reception, coaches, cleaning, potentially F&B staff.
18. **Marketing launch**: Social media, local partnerships (sports clubs, corporate wellness), opening event, introductory pricing.
19. **Soft opening**: Invite local players, influencers, press for a trial period.
### Phase 5: Operations & Optimization (Ongoing)
> **COVERAGE: PARTIAL** — Utilization monitoring and financial review are covered by the planner model. Upsell streams (coaching, equipment, F&B, memberships) are all revenue line items. Community building and dynamic pricing strategy are not addressed — these are operational, not data-driven, and are out of scope.
21. **Monitor utilization** by court, time slot, and day. Adjust pricing dynamically.
22. **Build community**: Leagues, tournaments, social events, corporate bookings.
23. **Upsell**: Coaching, equipment, food/beverage, memberships.
## Plans You Need to Create
> **COVERAGE: PARTIAL** — Business Plan and Financial Plan are both fully answered (planner + BP PDF export with 15+ narrative sections). Architectural Plans, Marketing Plan, and Legal/Permit Plan are outside the product's scope. Operational Plan is partial: staffing and booking system inputs exist but lack depth (Tier 2 gaps #5, #9).
- **Business Plan** — the master document covering market analysis, concept, operations plan, management team, and financials. This is what banks and investors want to see.
- **Architectural Plans** — floor plans, cross-sections, elevations, structural drawings, MEP plans. Required for permits and construction.
- **Financial Plan** — the core of your business plan. Includes investment budget, funding plan, P&L forecast (3–5 years), cash flow forecast, and sensitivity analysis.
### Investment Budget (CAPEX)
> **COVERAGE: ANSWERED** — The planner covers all 15+ CAPEX line items for both lease (`rent`) and purchase (`own`) scenarios. Subsidies and grants are **not** modelled (Tier 1 gap #1): `research/padel-hall-economics.md` documents Landessportbund grants (35% for sports clubs), KfW 150 loans, and a real example of €258K → €167K net after grant (padel-court.de). A "Fördermittel" (grants) section in the BP or a callout in DE city articles would surface this.
| Item | Estimate |
|---|---|
| Building lease deposit or land | €50,000–€200,000 |
Realistic midpoint for a solid 6-court hall: **~€1.2–1.5M**.
### Revenue Model
> **COVERAGE: ANSWERED** — Court utilization × price per hour is the core model. Real peak/off-peak prices per city are pre-filled via `planner_defaults` from Playtomic data. Ramp curve (Year 1–5 utilization), 6 ancillary streams, and monthly seasonal curve are all modelled.
Core driver: **court utilization × price per hour**.
- 6 courts × 15 bookable hours/day × 365 days = **32,850 court-hours/year** (theoretical max)
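The arithmetic above can be sketched directly. A minimal revenue-driver function; the 55% utilization and €32/h below are illustrative assumptions, not the planner's actual defaults:

```python
# Core revenue driver: court-hours × utilization × average price.
# All numbers are illustrative, not Padelnomics planner defaults.
COURTS = 6
BOOKABLE_HOURS_PER_DAY = 15
DAYS_PER_YEAR = 365

THEORETICAL_MAX = COURTS * BOOKABLE_HOURS_PER_DAY * DAYS_PER_YEAR  # 32,850 court-hours

def annual_court_revenue(utilization: float, avg_price_per_hour: float) -> float:
    """Revenue from court rental only (ancillary streams excluded)."""
    return THEORETICAL_MAX * utilization * avg_price_per_hour

print(THEORETICAL_MAX)                   # 32850
print(annual_court_revenue(0.55, 32.0))  # ~578k: 55% utilization at €32/h average
```

Everything else in the model (ramp curve, seasonality, ancillary streams) layers on top of this one multiplication.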
### Operating Costs (OPEX)
> **COVERAGE: PARTIAL** — All OPEX line items exist as planner inputs. The defaults are reasonable but are not visually benchmarked against market data (Tier 1 gap #4). Research benchmarks from `research/padel-hall-economics.md` §7: electricity €2.5–4.5K/month, staff €9.9–14.2K/month for 2–3 FTE + manager, rent €8–15K/month. Showing "typical range for your market" next to each OPEX input field would improve trust in the defaults.
| Cost Item | Year 1 | Year 2 | Year 3 |
|---|---|---|---|
| Rent / lease | €120k | €123k | €127k |
### Profitability
> **COVERAGE: ANSWERED** — EBITDA, EBITDA margin, debt service, and free cash flow after debt are all computed by the planner for all 60 months.
| Metric | Year 1 | Year 2 | Year 3 |
|---|---|---|---|
| **EBITDA** | €310k | €577k | €759k |
### Key Metrics to Track
> **COVERAGE: ANSWERED** — Payback period, IRR (equity + project), NPV, MOIC, DSCR per year, breakeven utilization, and revenue per available hour are all computed and displayed.
- **Payback period**: Typically 3–5 years for a well-run padel hall
- **ROI on equity**: If you put in €500k equity and generate €300k+ annual free cash flow by year 3, that's a 60%+ cash-on-cash return
- **Breakeven utilization**: Usually around 35–40% — below which you lose money
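The breakeven figure follows from fixed costs and the contribution per court-hour. A minimal sketch; the €380k fixed-cost and €2/h variable-cost inputs are hypothetical:

```python
def breakeven_utilization(annual_fixed_costs: float,
                          court_hours_max: float,
                          avg_price: float,
                          variable_cost_per_hour: float = 0.0) -> float:
    """Utilization at which contribution margin exactly covers fixed costs."""
    contribution_per_hour = avg_price - variable_cost_per_hour
    return annual_fixed_costs / (court_hours_max * contribution_per_hour)

# Hypothetical: €380k fixed costs, 32,850 court-hours, €32/h price, €2/h variable
u = breakeven_utilization(380_000, 32_850, 32.0, 2.0)
print(f"{u:.1%}")  # ≈ 38.6%
```

With these made-up inputs the result lands inside the 35–40% band quoted above.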
### Sensitivity Analysis
> **COVERAGE: ANSWERED** — 12-step utilization sensitivity and 8-step price sensitivity are both shown as grids, each including DSCR values.
Model what happens if utilization is 10% lower than planned, if the average price drops by €5, or if construction costs overrun by 20%. This is what banks want to see — that you survive the downside.
---
## How to Decide Where to Build
> **COVERAGE: PARTIAL overall** — The product answers competition mapping (venue density, Opportunity Score) and rent/cost considerations (planner input). Missing: drive-time catchment analysis (Tier 3 gap #12 — would need isochrone API), accessibility/visibility/building suitability assessment (static checklist possible), growth trajectory (no new-development data), and regulatory environment (Tier 2 gap #7).
>
> **Tier 3 opportunity**: `location_opportunity_profile` scores thousands of GeoNames locations including zero-court towns. Only venues with existing courts get a public article. Generating pSEO pages for top-scoring zero-court locations would surface "build here" recommendations (white-space pages).
1. **Catchment area analysis**: Draw a 15-minute and 30-minute drive-time radius around candidate sites. Analyze population density, household income, age distribution (25–55 is the core padel demographic), and existing sports participation rates.
2. **Competition mapping**: Map every existing padel facility within 30 minutes. Call them, check their booking systems — are courts booked out at peak? If competitors are running at 80%+ utilization, that's a strong signal of unmet demand.
### NPV & IRR
> **COVERAGE: ANSWERED** — Both equity IRR and project IRR are computed. NPV is shown with the WACC input. Hurdle rate is a user input.
Discount your projected free cash flows at your WACC (or required return on equity if all-equity financed) to get a net present value. The IRR tells you whether the project clears your hurdle rate. For a padel hall, you'd typically want an unlevered IRR of 15–25% to justify the risk of a single-asset, operationally intensive business. Compare this against alternative uses of your capital.
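Neither metric needs a finance library. A sketch with made-up cash flows (initial investment, ramping FCF, and a terminal value folded into year 5):

```python
def npv(rate: float, cashflows: list[float]) -> float:
    # cashflows[0] is the t=0 investment (negative), then one flow per year
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows: list[float], lo: float = -0.99, hi: float = 10.0) -> float:
    # Bisection on NPV; assumes a single sign change on [lo, hi]
    while hi - lo > 1e-7:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return (lo + hi) / 2

# Hypothetical project: €1.35M invested, ramping FCF, exit value in year 5
flows = [-1_350_000, 150_000, 450_000, 650_000, 700_000, 700_000 + 2_800_000]
print(npv(0.10, flows))  # positive at a 10% discount rate
print(irr(flows))        # clears a 15-25% hurdle for these made-up flows
```

Bisection is slow but robust for a single project; it avoids the convergence quirks of Newton-style IRR solvers on irregular cash flows.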
### WACC & Cost of Capital
> **COVERAGE: ANSWERED** — WACC is a planner input used in NPV calculations. Debt cost and equity cost are separately configurable.
If you're blending debt and equity, calculate your weighted average cost of capital properly. Bank debt for a sports facility might run 4–7% depending on jurisdiction and collateral. Your equity cost should reflect the illiquidity premium and operational risk — this isn't a passive real estate investment, it's an operating business. A reasonable cost of equity might be 12–20%.
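The blend is a weighted average with the debt leg taken after tax. A sketch using hypothetical amounts within the ranges above:

```python
def wacc(equity: float, debt: float,
         cost_of_equity: float, cost_of_debt: float,
         tax_rate: float = 0.0) -> float:
    """Weighted average cost of capital; debt cost is reduced by the tax shield."""
    v = equity + debt
    return (equity / v) * cost_of_equity + (debt / v) * cost_of_debt * (1 - tax_rate)

# Hypothetical: €500k equity at 15%, €850k bank debt at 5.5%, ~30% tax rate
w = wacc(500_000, 850_000, 0.15, 0.055, tax_rate=0.30)
print(f"{w:.1%}")  # ≈ 8.0%
```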
### Terminal Value
> **COVERAGE: ANSWERED** — Terminal value is computed as EBITDA × exit multiple at the end of the hold period. MOIC and value bridge are displayed.
If you model 5 years of explicit cash flows, you need a terminal value. You can use a perpetuity growth model (FCF year 5 × (1+g) / (WACC − g)) or an exit multiple. For the exit multiple approach, think about what a buyer would pay — likely 4–7x EBITDA for a mature, well-run single-location padel hall, potentially higher if it's part of a multi-site rollout story.
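Both approaches are one-liners. Figures below are hypothetical:

```python
def tv_perpetuity(fcf_final: float, wacc: float, g: float) -> float:
    """Gordon growth: FCF year 5 × (1+g) / (WACC - g); requires WACC > g."""
    return fcf_final * (1 + g) / (wacc - g)

def tv_exit_multiple(ebitda_final: float, multiple: float) -> float:
    return ebitda_final * multiple

print(tv_perpetuity(500_000, 0.10, 0.02))  # ≈ €6.4M at 10% WACC, 2% growth
print(tv_exit_multiple(700_000, 5.5))      # €3.85M at a 5.5x multiple
```

The gap between the two answers is typical; it is worth showing both in the model and explaining which one the exit thesis actually supports.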
### Lease vs. Buy
> **COVERAGE: ANSWERED** — The `own` toggle in the planner changes the entire CAPEX/OPEX structure: land purchase replaces lease deposit, mortgage replaces rent, and property appreciation is modelled in terminal value.
A critical capital allocation decision. Buying the property ties up far more capital but gives you residual asset value and eliminates landlord risk. Leasing preserves capital for operations and expansion but exposes you to rent increases and lease termination risk. Model both scenarios and compare the risk-adjusted NPV. Also consider sale-and-leaseback if you build on owned land.
### Operating Leverage
> **COVERAGE: ANSWERED** — The sensitivity grids explicitly show how a 10% utilization swing affects EBITDA and DSCR.
A padel hall has high fixed costs (rent, staff base, debt service) and relatively low variable costs. This means profitability is extremely sensitive to utilization. Model the operating leverage explicitly — a 10% swing in utilization might cause a 25–30% swing in EBITDA. This is both the opportunity and the risk.
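The amplification is easy to demonstrate numerically; all inputs below are illustrative:

```python
COURT_HOURS_MAX = 32_850  # 6 courts × 15 h/day × 365 days

def ebitda(utilization: float, price: float = 32.0,
           ancillary: float = 200_000, fixed_costs: float = 550_000) -> float:
    """EBITDA with fully fixed costs: revenue swings pass straight through."""
    revenue = COURT_HOURS_MAX * utilization * price + ancillary
    return revenue - fixed_costs

base = ebitda(0.60)          # 60% utilization
down = ebitda(0.54)          # a 10% relative drop in utilization
swing = (base - down) / base
print(f"{swing:.0%}")        # ≈ 22% EBITDA drop from a 10% utilization move
```

With these made-up inputs the amplification is about 2.2x; thinner margins push it toward the 25–30% quoted above.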
### Depreciation & Tax Shield
> **COVERAGE: GAP** — All planner calculations are pre-tax (Tier 2 gap #6). Adding a depreciation schedule and effective tax rate would materially improve the financial model for Germany: 7-year depreciation for courts/equipment, ~30% effective tax rate (15% KSt + 14% GewSt). This would require jurisdiction selection (start with Germany only). Non-trivial but the most common user geography.
Padel courts depreciate over 7–10 years, building fit-out over 10–15 years, equipment over 3–5 years. The depreciation tax shield is meaningful. Interest expense on debt is also tax-deductible. Model your effective tax rate and the present value of these shields — they improve your after-tax returns materially.
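A straight-line schedule and the present value of its shield can be sketched in a few lines. The 8-year court life, 30% effective rate, and 8% discount rate are hypothetical assumptions within the ranges above:

```python
def straight_line(capex: float, life_years: int) -> list[float]:
    """Equal annual depreciation over the asset's useful life."""
    return [capex / life_years] * life_years

def pv_tax_shield(schedule: list[float], tax_rate: float, discount_rate: float) -> float:
    """PV of depreciation × tax rate, each year discounted from year end."""
    return sum(d * tax_rate / (1 + discount_rate) ** (t + 1)
               for t, d in enumerate(schedule))

courts = straight_line(240_000, 8)          # €30k/year of depreciation
shield = pv_tax_shield(courts, 0.30, 0.08)  # ≈ €51.7k of present value
print(round(shield))
```

Repeating this for fit-out and equipment schedules, plus the interest shield on debt, gives the full after-tax picture.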
### Working Capital Cycle
> **COVERAGE: ANSWERED** — Pre-opening cash burn and ramp-up period are modelled in the 60-month cash flow. Working capital reserve is a CAPEX line item.
Padel halls are generally working-capital-light (customers pay at booking or on arrival, you pay suppliers on 30–60 day terms). But model the initial ramp-up period where you're carrying costs before revenue reaches steady state. The pre-opening cash burn and first 6–12 months of sub-breakeven operation is where most of your working capital risk sits.
### Scenario & Sensitivity Analysis
> **COVERAGE: ANSWERED** — Utilization sensitivity (12 steps) and price sensitivity (8 steps) grids are shown, both with DSCR. Bear/base/bull narrative is covered in the BP export.
Model three scenarios (bear/base/bull) varying utilization, pricing, and cost overruns simultaneously. Identify the breakeven utilization rate precisely. A Monte Carlo simulation on the key variables (utilization, average price, construction cost, ramp-up speed) gives you a probability distribution of outcomes rather than a single point estimate.
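A Monte Carlo pass needs only the standard library. The distributions and parameters below are illustrative guesses, not calibrated values:

```python
import random

def simulate_ebitda(n: int = 10_000, seed: int = 42) -> list[float]:
    """Draw EBITDA outcomes from assumed distributions of the key drivers."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        utilization = max(rng.gauss(0.55, 0.08), 0.0)  # mean 55%, sd 8pp
        price = max(rng.gauss(32.0, 3.0), 0.0)         # EUR per court-hour
        fixed_costs = rng.gauss(550_000, 40_000)
        revenue = 32_850 * utilization * price + 200_000  # incl. ancillary
        results.append(revenue - fixed_costs)
    return sorted(results)

sims = simulate_ebitda()
p5, p50 = sims[len(sims) // 20], sims[len(sims) // 2]
print(round(p5), round(p50))  # downside (5th percentile) vs. median EBITDA
```

The 5th-percentile outcome is the number to stress against debt service; a point-estimate model never surfaces it.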
### Exit Strategy & Valuation
> **COVERAGE: ANSWERED** — Hold period, exit EBITDA multiple, terminal value, MOIC, and value bridge are all displayed in the planner.
Think about this upfront. Are you building to hold and cash-flow, or building to sell to a consolidator or franchise operator? The exit multiple depends heavily on whether you've built a transferable business (brand, systems, trained staff, long lease) or an owner-dependent operation. Multi-site operators and franchise groups trade at higher multiples (6–10x EBITDA) than single sites.
### Optionality Value
> **COVERAGE: GAP** — Real option value (second location, franchise, repurposing) is mentioned in the BP narrative but not quantified. Out of scope for the planner; noting as a caveat in the BP export text would be sufficient.
A successful first hall gives you the option to expand — second location, franchise model, or selling the playbook. This real option has value that a static DCF doesn't capture. Similarly, if you own the land/building, you have conversion optionality (the building could be repurposed if padel demand fades).
### Counterparty & Concentration Risk
> **COVERAGE: PARTIAL** — The planner models this implicitly (single-site, single-sport), and DSCR warnings flag over-leverage. No explicit counterparty risk section. Mentioning it in the BP risk narrative would be low-effort coverage.
You're exposed to a single landlord (lease risk), a single location (demand risk), and potentially a single sport (trend risk). A bank or sophisticated investor will flag all three. Mitigants include long lease terms with caps on escalation, diversified revenue streams (F&B, events, coaching), and contractual protections.
### Subsidies & Grants
> **COVERAGE: GAP — Tier 1 priority.** `research/padel-hall-economics.md` documents: Landessportbund grants (up to 35% CAPEX for registered sports clubs), KfW 150 low-interest loans, and a worked example: €258K gross → €167K net CAPEX after grant. The planner has no grants input. Quick wins: (a) add a "Fördermittel" accordion section to DE city articles; (b) add a grant percentage input to the planner CAPEX section (reduces total investment and boosts IRR). Note: grant eligibility depends on buyer type (Tier 1 gap #2) — sports clubs qualify, commercial operators typically do not.
Many municipalities and national sports bodies offer grants or subsidized loans for sports infrastructure. In some European countries, this can cover 10–30% of CAPEX. Factor this into your funding plan — it's essentially free equity that boosts your returns.
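The proposed grant-percentage planner input is a one-line adjustment. The example reproduces the €258K gross to €167K net figure documented in the research notes:

```python
def net_capex_after_grant(gross_capex: float, grant_pct: float) -> float:
    """Grants reduce the investment directly, effectively acting as free equity."""
    return gross_capex * (1 - grant_pct)

# Worked example from the research notes: €258K gross, 35% sports-club grant
net = net_capex_after_grant(258_000, 0.35)
print(round(net))  # 167700, matching the €167K net figure
```

Feeding the reduced CAPEX into the IRR calculation is what makes grants a Tier 1 gap: the same cash flows against a smaller investment lift returns substantially.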
### VAT & Tax Structuring
> **COVERAGE: GAP** — Not modelled. Germany-specific: court rental may qualify for §4 Nr. 22 UStG sports VAT exemption (0% VAT) if operated by a non-commercial entity; commercial operators pay 19% VAT on court rental. F&B is 19% (or 7% eat-in). Getting this wrong materially affects revenue net-of-VAT. Worth a callout in the legal/regulatory article (Tier 2 gap #7).
Depending on your jurisdiction, court rental may be VAT-exempt or reduced-rate (sports exemption), while F&B is standard-rated. This affects pricing strategy and cash flow. The entity structure (single GmbH, holding structure, partnership) has implications for profit extraction, liability, and eventual exit taxation. Worth getting tax advice early.
### Insurance & Business Interruption
> **COVERAGE: PARTIAL** — Insurance is a planner OPEX line item. No guidance on coverage types or BI insurance sizing. Low priority to expand.
Price in comprehensive insurance — property, liability, business interruption. A fire or structural issue that shuts you down for 3 months could be existential without BI coverage. This is a real cost that's often underestimated.
### Covenant Compliance
> **COVERAGE: ANSWERED** — DSCR is computed for each of the 5 years and shown with a warning band. LTV warnings are also displayed.
If you take bank debt, you'll likely face covenants — DSCR (debt service coverage ratio) minimums of 1.2–1.5x, leverage caps, possibly revenue milestones. Model your covenant headroom explicitly. Breaching a covenant in year 1 during ramp-up is a real risk if you've over-leveraged.
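DSCR and headroom are simple ratios. A sketch using the Year 1 EBITDA from the Profitability table (€310k) against a hypothetical €180k annual debt service and a 1.25x covenant:

```python
def dscr(ebitda: float, annual_debt_service: float) -> float:
    """Debt service coverage ratio: EBITDA over interest plus principal."""
    return ebitda / annual_debt_service

def covenant_headroom(ebitda: float, annual_debt_service: float,
                      covenant_min: float = 1.25) -> float:
    """EBITDA cushion before the DSCR covenant is breached."""
    return ebitda - covenant_min * annual_debt_service

ratio = dscr(310_000, 180_000)                # ≈ 1.72x in year 1
cushion = covenant_headroom(310_000, 180_000)
print(round(ratio, 2), round(cushion))        # 1.72 85000
```

Running the same calculation on the 5th-percentile EBITDA from a downside scenario shows whether ramp-up risk alone can trigger a breach.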
### Inflation Sensitivity
> **COVERAGE: ANSWERED** — The planner has separate `revenue_growth_rate` and `opex_growth_rate` inputs, allowing asymmetric inflation scenarios.
Energy costs, staff wages, and maintenance all inflate. Can you pass these through via price increases without killing utilization? Model a scenario where costs inflate at 3–5% but you can only raise prices by 2–3%.
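The asymmetric-inflation scenario is a short loop. Starting revenue and OPEX below are hypothetical:

```python
def margin_path(revenue: float, opex: float,
                revenue_growth: float, opex_growth: float,
                years: int = 5) -> list[float]:
    """EBITDA margin per year when costs inflate faster than prices."""
    path = []
    for _ in range(years):
        path.append((revenue - opex) / revenue)
        revenue *= 1 + revenue_growth
        opex *= 1 + opex_growth
    return path

# Costs inflating at 4%, prices only raised 2.5%
margins = margin_path(900_000, 600_000, 0.025, 0.04)
print([f"{m:.1%}" for m in margins])  # margin erodes every year
```

This mirrors the planner's separate `revenue_growth_rate` and `opex_growth_rate` inputs: set them asymmetrically and watch the margin line.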
### Residual / Liquidation Value
> **COVERAGE: PARTIAL** — Terminal/exit value is modelled (EBITDA multiple). A true liquidation scenario (courts resale, lease termination penalties, building write-off) is not separately modelled. Sufficient for the current product.
In a downside scenario, what are your assets worth? Padel courts have some resale value. Building improvements are largely sunk. If you've leased, your downside is limited to equity invested plus any personal guarantees. If you've bought property, the real estate retains value but may take time to sell. Model the liquidation scenario honestly.
---
### Existential Risks
> **COVERAGE: PARTIAL** — Trend/fad risk is acknowledged in the BP narrative but not quantified (Tier 3 gap #13). FIP/Playtomic data (7,187 new courts globally in 2024, +26% YoY new clubs) exists but long-term quantification is inherently speculative. Force majeure/pandemic risk is not addressed; a reserve fund input (CAPEX working capital) provides partial mitigation modelling.
- **Trend / Fad Risk**: Padel is booming now, but so did squash in the 1980s. You're locking in a 10–15 year investment thesis on a sport that may plateau or decline. The key question is whether padel reaches self-sustaining critical mass in your market or stays a novelty. If utilization drops from 65% to 35% in year 5 because the hype fades, your entire model breaks. This is largely unhedgeable.
- **Force Majeure / Pandemic Risk**: COVID shut down indoor sports facilities for months. Insurance may not cover it. Having enough cash reserves or credit facilities to survive 3–6 months of zero revenue is prudent.
### Construction & Development Risks
> **COVERAGE: PARTIAL** — A contingency/overrun percentage is a planner CAPEX input. Delay cost (carrying costs during construction) is not explicitly modelled.
- **Construction Cost Overruns & Delays**: Sports facility builds routinely overrun by 15–30%. Every month of delay is a month of carrying costs (rent, debt service, staff already hired) with zero revenue. Build a contingency buffer of 15–20% of CAPEX minimum and negotiate fixed-price construction contracts where possible.
### Property & Lease Risks
> **COVERAGE: GAP** — No lease-term inputs or landlord risk guidance. The `own` toggle handles the buy scenario. A callout in the BP template about minimum lease length (15+ years, renewal options) would be useful but is low priority.
- **Landlord Risk**: If you're leasing, you're spending €500k+ fitting out someone else's building. What happens if the landlord sells, goes bankrupt, or refuses to renew? You need a long lease (15+ years), with options to renew, and ideally a step-in right or compensation clause for tenant improvements.
### Competitive Risks
> **COVERAGE: PARTIAL** — City articles show existing venue density and Opportunity Score. The planner does not model a "competitor opens nearby" scenario. A simple sensitivity scenario (utilization drop) is the best proxy available in the current model.
- **Cannibalization from New Entrants**: Your success is visible — full courts, long waitlists. This attracts competitors. Someone opens a new hall 10 minutes away, and your utilization drops from 70% to 50%. There's no real moat in padel besides location, community loyalty, and service quality. Model what happens when a competitor opens nearby in year 3.
### Operational Risks
> **COVERAGE: PARTIAL** — Court maintenance OPEX and maintenance reserve are planner inputs. F&B, staffing, and booking platform risks are not addressed. See Tier 2 gaps #5 (booking platform strategy) and #9 (staffing plan). Seasonality is fully modelled (12-month outdoor seasonal curve; monthly cash flow).
- **Key Person Dependency**: If the whole operation depends on one founder-operator or one star coach who brings all the members, that's a fragility. Illness, burnout, or departure can crater the business.
- **Staff Retention & Labor Market**: Good facility managers, coaches, and front-desk staff with a hospitality mindset are hard to find and keep. Turnover is expensive and disruptive. In tight labor markets, wage pressure can erode margins.
@@ -310,6 +425,8 @@ In a downside scenario, what are your assets worth? Padel courts have some resal
### Financial Risks
> **COVERAGE: PARTIAL** — Energy volatility: energy OPEX is a modelled input with growth rate, but no locking/hedging guidance. Financing environment: debt rate is a planner input; stress-test at +2% is covered by the sensitivity grid indirectly. Personal guarantee and customer concentration: not addressed (out of scope for data-driven product). Inflation pass-through: answered (separate revenue vs OPEX growth rates).
- **Energy Price Volatility**: Indoor padel halls consume significant energy. Energy costs spiking can destroy margins. Consider locking in energy contracts, investing in solar panels, or using LED lighting and efficient HVAC to reduce exposure.
- **Financing Environment**: If interest rates rise between when you plan the project and when you draw down the loan, your debt service costs increase. Lock in rates where possible, or stress-test your model at rates 2% higher than current.
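The +2% stress test can be made concrete with the standard annuity formula; the loan amount, rates, and term below are placeholder assumptions.

```python
# Annual debt service on a fully amortizing loan (standard annuity formula).
def annuity_payment(principal: float, rate: float, years: int) -> float:
    """Annual payment for a loan repaid in equal installments over `years`."""
    if rate == 0:
        return principal / years
    q = (1 + rate) ** years
    return principal * rate * q / (q - 1)

loan = 1_500_000.0  # EUR, illustrative
for rate in (0.05, 0.07):  # current rate vs the +2% stress case
    print(f"{rate:.0%}: {annuity_payment(loan, rate, 15):,.0f} EUR/year")
```

At these assumptions the +2% case adds roughly 20,000 EUR of annual debt service.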
@@ -322,22 +439,32 @@ In a downside scenario, what are your assets worth? Padel courts have some resal
### Regulatory & Legal Risks
> **COVERAGE: GAP — Tier 2 priority.** Noise complaints (TA Lärm), injury liability, and permit risks are all unaddressed. A Germany-first regulatory checklist article would cover: Bauantrag, Nutzungsänderung, TA Lärm compliance, GmbH vs UG formation, Gewerbeerlaubnis, §4 Nr. 22 UStG sports VAT, and Gaststättengesetz (liquor license). High value for Phase 1/2 users who are evaluating feasibility.
- **Noise Complaints**: Padel is loud — the ball hitting glass walls generates significant noise. Neighbors can complain and municipal authorities can impose operating hour restrictions or require expensive sound mitigation. Check local noise ordinances thoroughly before committing.
- **Injury Liability**: Padel involves glass walls, fast-moving balls, and quick lateral movement. Player injuries happen. Proper insurance, waiver systems, and court maintenance protocols are essential.
### Technology & Platform Risks
> **COVERAGE: GAP — Tier 2 priority.** Booking platform dependency is a real decision point for operators (Playtomic commission ~15-20%, data ownership implications, competitor steering risk). We scrape Playtomic and know it intimately. A standalone article "Playtomic vs Matchi vs eigenes System" or a section in the BP template would address this. The booking system commission rate is already a planner input — we could link to a decision guide from there.
- **Booking Platform Dependency**: If you rely on a third-party booking platform like Playtomic, you're giving them access to your customer relationships and paying commission. They could raise fees, change terms, or steer demand to competitors.
### Reputational Risks
> **COVERAGE: GAP** — Not addressed. Out of scope for a data-driven product; this is operational advice.
- **Brand / Reputation Risk**: One viral negative review, a hygiene issue, a safety incident, or a social media complaint can disproportionately hurt a local leisure business.
### Currency & External Risks
> **COVERAGE: GAP** — FX risk from Spanish/Italian manufacturers is not modelled. Minor; most German buyers pay in EUR. Note in BP template as a caveat if importing outside Eurozone.
- **Currency Risk**: Relevant if importing courts or equipment from another currency zone — padel court manufacturers are often Spanish or Italian, so FX moves can affect CAPEX if you're outside the Eurozone.
### Opportunity Cost
> **COVERAGE: PARTIAL** — IRR and NPV implicitly address opportunity cost (you enter the hurdle rate as WACC/cost of equity). No explicit comparison against passive investment alternatives is shown. Sufficient for current product.
The capital, time, and energy you put into this project could go elsewhere. If you could earn 8-10% passively in diversified investments, a padel hall needs to deliver meaningfully more on a risk-adjusted basis to justify the concentration, illiquidity, and personal time commitment.
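The hurdle-rate framing above can be sketched with a plain NPV function; the cash flows and 9% hurdle below are made-up illustrations, not planner output.

```python
# NPV at a hurdle rate that embeds the opportunity cost of capital.
def npv(rate: float, cashflows: list[float]) -> float:
    """Discount cashflows (year 0 first) at the given hurdle rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-1_200_000, 150_000, 220_000, 260_000, 280_000, 300_000]  # illustrative
print(f"NPV at a 9% hurdle: {npv(0.09, flows):,.0f} EUR")
```

These flows sum to +10,000 EUR undiscounted yet are NPV-negative at a 9% hurdle — exactly the "must beat passive returns on a risk-adjusted basis" point.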

View File

@@ -1,81 +0,0 @@
"""Download NUTS-2 boundary GeoJSON from Eurostat GISCO.
One-time (or on NUTS revision) download of NUTS-2 boundary polygons used for
spatial income resolution in dim_locations. Stored uncompressed because DuckDB's
ST_Read function cannot read gzipped files.
NUTS classification changes approximately every 7 years. Current revision: 2021.
Output: {LANDING_DIR}/gisco/2024/01/nuts2_boundaries.geojson (~5MB, uncompressed)
Usage:
uv run python scripts/download_gisco_nuts.py [--landing-dir data/landing]
Idempotent: skips download if the file already exists.
"""
import argparse
import sys
from pathlib import Path
import niquests
# NUTS 2021 revision, 20M scale (1:20,000,000), WGS84 (EPSG:4326), LEVL_2 only.
# 20M resolution gives simplified polygons that are fast for point-in-polygon
# matching without sacrificing accuracy at the NUTS-2 boundary level.
GISCO_URL = (
"https://gisco-services.ec.europa.eu/distribution/v2/nuts/geojson/"
"NUTS_RG_20M_2021_4326_LEVL_2.geojson"
)
# Fixed partition: NUTS boundaries are a static reference file, not time-series data.
# Use the NUTS revision year as the partition to make the source version explicit.
DEST_REL_PATH = "gisco/2024/01/nuts2_boundaries.geojson"
HTTP_TIMEOUT_SECONDS = 120
def download_nuts_boundaries(landing_dir: Path) -> None:
    dest = landing_dir / DEST_REL_PATH
    if dest.exists():
        print(f"Already exists (skipping): {dest}")
        return
    dest.parent.mkdir(parents=True, exist_ok=True)
    print("Downloading NUTS-2 boundaries from GISCO...")
    print(f"  URL: {GISCO_URL}")
    with niquests.Session() as session:
        resp = session.get(GISCO_URL, timeout=HTTP_TIMEOUT_SECONDS)
        resp.raise_for_status()
    content = resp.content
    assert len(content) > 100_000, (
        f"GeoJSON too small ({len(content)} bytes) — download may have failed"
    )
    assert b'"FeatureCollection"' in content, "Response does not look like GeoJSON"
    # Write uncompressed — ST_Read requires a plain file
    tmp = dest.with_suffix(".geojson.tmp")
    tmp.write_bytes(content)
    tmp.rename(dest)
    size_mb = len(content) / 1_000_000
    print(f"  Written: {dest} ({size_mb:.1f} MB)")
    print("Done. Run SQLMesh plan to rebuild stg_nuts2_boundaries.")

def main() -> None:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("--landing-dir", default="data/landing", type=Path)
    args = parser.parse_args()
    if not args.landing_dir.is_dir():
        print(f"Error: landing dir does not exist: {args.landing_dir}", file=sys.stderr)
        sys.exit(1)
    download_nuts_boundaries(args.landing_dir)

if __name__ == "__main__":
    main()
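The tmp-write-then-rename step above is the same idiom behind the `write_jsonl_atomic()` helper named in the commit message; its real signature isn't shown in this compare, so the following is a hypothetical sketch of the pattern only.

```python
import gzip
import json
import os
from pathlib import Path

def write_jsonl_atomic(records, dest: Path) -> None:
    """Write records as gzipped JSONL to a temp file, then publish atomically."""
    tmp = dest.with_name(dest.name + ".tmp")
    with gzip.open(tmp, "wt", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    os.replace(tmp, dest)  # atomic rename: readers never see a partial file

write_jsonl_atomic([{"id": 1}, {"id": 2}], Path("venues.jsonl.gz"))
```

The atomic rename means a crashed extractor leaves only a `.tmp` working file behind, never a truncated output that downstream models would read.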

View File

@@ -16,5 +16,92 @@ def padelnomics_glob(evaluator) -> str:
return f"'{landing_dir}/padelnomics/**/*.csv.gz'"
# Add one macro per landing zone subdirectory you create.
# Pattern: def {source}_glob(evaluator) → f"'{landing_dir}/{source}/**/*.csv.gz'"
# ── Country code helpers ─────────────────────────────────────────────────────
# Shared lookup used by dim_cities and dim_locations.
_COUNTRY_NAMES = {
"DE": "Germany", "ES": "Spain", "GB": "United Kingdom",
"FR": "France", "IT": "Italy", "PT": "Portugal",
"AT": "Austria", "CH": "Switzerland", "NL": "Netherlands",
"BE": "Belgium", "SE": "Sweden", "NO": "Norway",
"DK": "Denmark", "FI": "Finland", "US": "United States",
"AR": "Argentina", "MX": "Mexico", "AE": "UAE",
"AU": "Australia", "IE": "Ireland",
}
def _country_case(col: str) -> str:
"""Build a CASE expression mapping ISO 3166-1 alpha-2 → English name."""
whens = "\n ".join(
f"WHEN '{code}' THEN '{name}'" for code, name in _COUNTRY_NAMES.items()
)
return f"CASE {col}\n {whens}\n ELSE {col}\n END"
@macro()
def country_name(evaluator, code_col) -> str:
"""CASE expression: country code → English name.
Usage in SQL: @country_name(vc.country_code) AS country_name_en
"""
return _country_case(str(code_col))
@macro()
def country_slug(evaluator, code_col) -> str:
"""CASE expression: country code → URL-safe slug (lowercased, spaces → dashes).
Usage in SQL: @country_slug(vc.country_code) AS country_slug
"""
return f"LOWER(REGEXP_REPLACE({_country_case(str(code_col))}, '[^a-zA-Z0-9]+', '-'))"
@macro()
def normalize_eurostat_country(evaluator, code_col) -> str:
"""Normalize Eurostat country codes to ISO 3166-1 alpha-2: EL→GR, UK→GB.
Usage in SQL: @normalize_eurostat_country(geo_code) AS country_code
"""
col = str(code_col)
return f"CASE {col} WHEN 'EL' THEN 'GR' WHEN 'UK' THEN 'GB' ELSE {col} END"
@macro()
def normalize_eurostat_nuts(evaluator, code_col) -> str:
"""Normalize NUTS code prefix: EL→GR, UK→GB, preserving the suffix.
Usage in SQL: @normalize_eurostat_nuts(geo_code) AS nuts_code
"""
col = str(code_col)
return (
f"CASE"
f" WHEN {col} LIKE 'EL%' THEN 'GR' || SUBSTR({col}, 3)"
f" WHEN {col} LIKE 'UK%' THEN 'GB' || SUBSTR({col}, 3)"
f" ELSE {col}"
f" END"
)
@macro()
def infer_country_from_coords(evaluator, lat_col, lon_col) -> str:
"""Infer ISO country code from lat/lon using bounding boxes for 8 European markets.
Usage in SQL:
COALESCE(NULLIF(TRIM(UPPER(country_code)), ''),
@infer_country_from_coords(lat, lon)) AS country_code
"""
lat = str(lat_col)
lon = str(lon_col)
return (
f"CASE"
f" WHEN {lat} BETWEEN 47.27 AND 55.06 AND {lon} BETWEEN 5.87 AND 15.04 THEN 'DE'"
f" WHEN {lat} BETWEEN 35.95 AND 43.79 AND {lon} BETWEEN -9.39 AND 4.33 THEN 'ES'"
f" WHEN {lat} BETWEEN 49.90 AND 60.85 AND {lon} BETWEEN -8.62 AND 1.77 THEN 'GB'"
f" WHEN {lat} BETWEEN 41.36 AND 51.09 AND {lon} BETWEEN -5.14 AND 9.56 THEN 'FR'"
f" WHEN {lat} BETWEEN 45.46 AND 47.80 AND {lon} BETWEEN 5.96 AND 10.49 THEN 'CH'"
f" WHEN {lat} BETWEEN 46.37 AND 49.02 AND {lon} BETWEEN 9.53 AND 17.16 THEN 'AT'"
f" WHEN {lat} BETWEEN 36.35 AND 47.09 AND {lon} BETWEEN 6.62 AND 18.51 THEN 'IT'"
f" WHEN {lat} BETWEEN 37.00 AND 42.15 AND {lon} BETWEEN -9.50 AND -6.19 THEN 'PT'"
f" ELSE NULL"
f" END"
)

View File

@@ -110,55 +110,9 @@ SELECT
vc.city_slug,
vc.city_name,
-- Human-readable country name for pSEO templates and internal linking
CASE vc.country_code
WHEN 'DE' THEN 'Germany'
WHEN 'ES' THEN 'Spain'
WHEN 'GB' THEN 'United Kingdom'
WHEN 'FR' THEN 'France'
WHEN 'IT' THEN 'Italy'
WHEN 'PT' THEN 'Portugal'
WHEN 'AT' THEN 'Austria'
WHEN 'CH' THEN 'Switzerland'
WHEN 'NL' THEN 'Netherlands'
WHEN 'BE' THEN 'Belgium'
WHEN 'SE' THEN 'Sweden'
WHEN 'NO' THEN 'Norway'
WHEN 'DK' THEN 'Denmark'
WHEN 'FI' THEN 'Finland'
WHEN 'US' THEN 'United States'
WHEN 'AR' THEN 'Argentina'
WHEN 'MX' THEN 'Mexico'
WHEN 'AE' THEN 'UAE'
WHEN 'AU' THEN 'Australia'
WHEN 'IE' THEN 'Ireland'
ELSE vc.country_code
END AS country_name_en,
@country_name(vc.country_code) AS country_name_en,
-- URL-safe country slug
LOWER(REGEXP_REPLACE(
CASE vc.country_code
WHEN 'DE' THEN 'Germany'
WHEN 'ES' THEN 'Spain'
WHEN 'GB' THEN 'United Kingdom'
WHEN 'FR' THEN 'France'
WHEN 'IT' THEN 'Italy'
WHEN 'PT' THEN 'Portugal'
WHEN 'AT' THEN 'Austria'
WHEN 'CH' THEN 'Switzerland'
WHEN 'NL' THEN 'Netherlands'
WHEN 'BE' THEN 'Belgium'
WHEN 'SE' THEN 'Sweden'
WHEN 'NO' THEN 'Norway'
WHEN 'DK' THEN 'Denmark'
WHEN 'FI' THEN 'Finland'
WHEN 'US' THEN 'United States'
WHEN 'AR' THEN 'Argentina'
WHEN 'MX' THEN 'Mexico'
WHEN 'AE' THEN 'UAE'
WHEN 'AU' THEN 'Australia'
WHEN 'IE' THEN 'Ireland'
ELSE vc.country_code
END, '[^a-zA-Z0-9]+', '-'
)) AS country_slug,
@country_slug(vc.country_code) AS country_slug,
vc.centroid_lat AS lat,
vc.centroid_lon AS lon,
-- Population cascade: Eurostat EU > US Census > ONS UK > GeoNames string > GeoNames spatial > 0.

View File

@@ -215,55 +215,9 @@ SELECT
l.geoname_id,
l.country_code,
-- Human-readable country name (consistent with dim_cities)
CASE l.country_code
WHEN 'DE' THEN 'Germany'
WHEN 'ES' THEN 'Spain'
WHEN 'GB' THEN 'United Kingdom'
WHEN 'FR' THEN 'France'
WHEN 'IT' THEN 'Italy'
WHEN 'PT' THEN 'Portugal'
WHEN 'AT' THEN 'Austria'
WHEN 'CH' THEN 'Switzerland'
WHEN 'NL' THEN 'Netherlands'
WHEN 'BE' THEN 'Belgium'
WHEN 'SE' THEN 'Sweden'
WHEN 'NO' THEN 'Norway'
WHEN 'DK' THEN 'Denmark'
WHEN 'FI' THEN 'Finland'
WHEN 'US' THEN 'United States'
WHEN 'AR' THEN 'Argentina'
WHEN 'MX' THEN 'Mexico'
WHEN 'AE' THEN 'UAE'
WHEN 'AU' THEN 'Australia'
WHEN 'IE' THEN 'Ireland'
ELSE l.country_code
END AS country_name_en,
@country_name(l.country_code) AS country_name_en,
-- URL-safe country slug
LOWER(REGEXP_REPLACE(
CASE l.country_code
WHEN 'DE' THEN 'Germany'
WHEN 'ES' THEN 'Spain'
WHEN 'GB' THEN 'United Kingdom'
WHEN 'FR' THEN 'France'
WHEN 'IT' THEN 'Italy'
WHEN 'PT' THEN 'Portugal'
WHEN 'AT' THEN 'Austria'
WHEN 'CH' THEN 'Switzerland'
WHEN 'NL' THEN 'Netherlands'
WHEN 'BE' THEN 'Belgium'
WHEN 'SE' THEN 'Sweden'
WHEN 'NO' THEN 'Norway'
WHEN 'DK' THEN 'Denmark'
WHEN 'FI' THEN 'Finland'
WHEN 'US' THEN 'United States'
WHEN 'AR' THEN 'Argentina'
WHEN 'MX' THEN 'Mexico'
WHEN 'AE' THEN 'UAE'
WHEN 'AU' THEN 'Australia'
WHEN 'IE' THEN 'Ireland'
ELSE l.country_code
END, '[^a-zA-Z0-9]+', '-'
)) AS country_slug,
@country_slug(l.country_code) AS country_slug,
l.location_name,
l.location_slug,
l.lat,

View File

@@ -30,11 +30,7 @@ parsed AS (
)
SELECT
-- Normalise to ISO 3166-1 alpha-2: EL→GR, UK→GB
CASE geo_code
WHEN 'EL' THEN 'GR'
WHEN 'UK' THEN 'GB'
ELSE geo_code
END AS country_code,
@normalize_eurostat_country(geo_code) AS country_code,
ref_year,
median_income_pps,
extracted_date

View File

@@ -28,11 +28,7 @@ WITH raw AS (
SELECT
NUTS_ID AS nuts2_code,
-- Normalise country prefix to ISO 3166-1 alpha-2: EL→GR, UK→GB
CASE CNTR_CODE
WHEN 'EL' THEN 'GR'
WHEN 'UK' THEN 'GB'
ELSE CNTR_CODE
END AS country_code,
@normalize_eurostat_country(CNTR_CODE) AS country_code,
NAME_LATN AS region_name,
geom AS geometry,
-- Pre-compute bounding box for efficient spatial pre-filter in dim_locations.

View File

@@ -48,17 +48,8 @@ deduped AS (
with_country AS (
SELECT
osm_id, lat, lon,
COALESCE(NULLIF(TRIM(UPPER(country_code)), ''), CASE
WHEN lat BETWEEN 47.27 AND 55.06 AND lon BETWEEN 5.87 AND 15.04 THEN 'DE'
WHEN lat BETWEEN 35.95 AND 43.79 AND lon BETWEEN -9.39 AND 4.33 THEN 'ES'
WHEN lat BETWEEN 49.90 AND 60.85 AND lon BETWEEN -8.62 AND 1.77 THEN 'GB'
WHEN lat BETWEEN 41.36 AND 51.09 AND lon BETWEEN -5.14 AND 9.56 THEN 'FR'
WHEN lat BETWEEN 45.46 AND 47.80 AND lon BETWEEN 5.96 AND 10.49 THEN 'CH'
WHEN lat BETWEEN 46.37 AND 49.02 AND lon BETWEEN 9.53 AND 17.16 THEN 'AT'
WHEN lat BETWEEN 36.35 AND 47.09 AND lon BETWEEN 6.62 AND 18.51 THEN 'IT'
WHEN lat BETWEEN 37.00 AND 42.15 AND lon BETWEEN -9.50 AND -6.19 THEN 'PT'
ELSE NULL
END) AS country_code,
COALESCE(NULLIF(TRIM(UPPER(country_code)), ''),
@infer_country_from_coords(lat, lon)) AS country_code,
NULLIF(TRIM(name), '') AS name,
NULLIF(TRIM(city_tag), '') AS city,
postcode, operator_name, opening_hours, fee, extracted_date

View File

@@ -30,11 +30,7 @@ parsed AS (
)
SELECT
-- Normalise to ISO 3166-1 alpha-2 prefix: EL→GR, UK→GB
CASE
WHEN geo_code LIKE 'EL%' THEN 'GR' || SUBSTR(geo_code, 3)
WHEN geo_code LIKE 'UK%' THEN 'GB' || SUBSTR(geo_code, 3)
ELSE geo_code
END AS nuts_code,
@normalize_eurostat_nuts(geo_code) AS nuts_code,
-- NUTS level: 3-char = NUTS-1, 4-char = NUTS-2
LENGTH(geo_code) - 2 AS nuts_level,
ref_year,

View File

@@ -54,17 +54,8 @@ deduped AS (
with_country AS (
SELECT
osm_id, lat, lon,
COALESCE(NULLIF(TRIM(UPPER(country_code)), ''), CASE
WHEN lat BETWEEN 47.27 AND 55.06 AND lon BETWEEN 5.87 AND 15.04 THEN 'DE'
WHEN lat BETWEEN 35.95 AND 43.79 AND lon BETWEEN -9.39 AND 4.33 THEN 'ES'
WHEN lat BETWEEN 49.90 AND 60.85 AND lon BETWEEN -8.62 AND 1.77 THEN 'GB'
WHEN lat BETWEEN 41.36 AND 51.09 AND lon BETWEEN -5.14 AND 9.56 THEN 'FR'
WHEN lat BETWEEN 45.46 AND 47.80 AND lon BETWEEN 5.96 AND 10.49 THEN 'CH'
WHEN lat BETWEEN 46.37 AND 49.02 AND lon BETWEEN 9.53 AND 17.16 THEN 'AT'
WHEN lat BETWEEN 36.35 AND 47.09 AND lon BETWEEN 6.62 AND 18.51 THEN 'IT'
WHEN lat BETWEEN 37.00 AND 42.15 AND lon BETWEEN -9.50 AND -6.19 THEN 'PT'
ELSE NULL
END) AS country_code,
COALESCE(NULLIF(TRIM(UPPER(country_code)), ''),
@infer_country_from_coords(lat, lon)) AS country_code,
NULLIF(TRIM(name), '') AS name,
NULLIF(TRIM(city_tag), '') AS city,
extracted_date

View File

@@ -35,7 +35,7 @@ from pathlib import Path
from quart import Blueprint, flash, redirect, render_template, request, url_for
from ..auth.routes import role_required
from ..core import csrf_protect
from ..core import count_where, csrf_protect
logger = logging.getLogger(__name__)
@@ -298,11 +298,8 @@ async def _inject_sidebar_data():
"""Load unread inbox count for the admin sidebar badge."""
from quart import g
from ..core import fetch_one
try:
row = await fetch_one("SELECT COUNT(*) as cnt FROM inbound_emails WHERE is_read = 0")
g.admin_unread_count = row["cnt"] if row else 0
g.admin_unread_count = await count_where("inbound_emails WHERE is_read = 0")
except Exception:
g.admin_unread_count = 0
@@ -780,7 +777,8 @@ async def pipeline_trigger_extract():
else:
await enqueue("run_extraction")
is_htmx = request.headers.get("HX-Request") == "true"
is_htmx = (request.headers.get("HX-Request") == "true"
and request.headers.get("HX-Boosted") != "true")
if is_htmx:
return await _render_overview_partial()
@@ -1005,7 +1003,8 @@ async def pipeline_trigger_transform():
(task_name,),
)
if existing:
is_htmx = request.headers.get("HX-Request") == "true"
is_htmx = (request.headers.get("HX-Request") == "true"
and request.headers.get("HX-Boosted") != "true")
if is_htmx:
return await _render_transform_partial()
await flash(f"A '{step}' task is already queued (task #{existing['id']}).", "warning")
@@ -1013,7 +1012,8 @@ async def pipeline_trigger_transform():
await enqueue(task_name)
is_htmx = request.headers.get("HX-Request") == "true"
is_htmx = (request.headers.get("HX-Request") == "true"
and request.headers.get("HX-Boosted") != "true")
if is_htmx:
return await _render_transform_partial()

View File

@@ -25,7 +25,7 @@ from ..content.health import (
get_template_freshness,
get_template_stats,
)
from ..core import csrf_protect, fetch_all, fetch_one
from ..core import count_where, csrf_protect, fetch_all, fetch_one
bp = Blueprint(
"pseo",
@@ -41,8 +41,7 @@ async def _inject_sidebar_data():
from quart import g
try:
row = await fetch_one("SELECT COUNT(*) as cnt FROM inbound_emails WHERE is_read = 0")
g.admin_unread_count = row["cnt"] if row else 0
g.admin_unread_count = await count_where("inbound_emails WHERE is_read = 0")
except Exception:
g.admin_unread_count = 0
@@ -80,8 +79,7 @@ async def pseo_dashboard():
total_published = sum(r["stats"]["published"] for r in template_rows)
stale_count = sum(1 for f in freshness if f["status"] == "stale")
noindex_row = await fetch_one("SELECT COUNT(*) as cnt FROM articles WHERE noindex = 1")
noindex_count = noindex_row["cnt"] if noindex_row else 0
noindex_count = await count_where("articles WHERE noindex = 1")
# Recent generation jobs — enough for the dashboard summary.
jobs = await fetch_all(

View File

@@ -28,6 +28,7 @@ from ..auth.routes import role_required
from ..core import (
EMAIL_ADDRESSES,
config,
count_where,
csrf_protect,
execute,
fetch_all,
@@ -91,8 +92,7 @@ async def _inject_admin_sidebar_data():
"""Load unread inbox count for sidebar badge on every admin page."""
from quart import g
try:
row = await fetch_one("SELECT COUNT(*) as cnt FROM inbound_emails WHERE is_read = 0")
g.admin_unread_count = row["cnt"] if row else 0
g.admin_unread_count = await count_where("inbound_emails WHERE is_read = 0")
except Exception:
logger.exception("Failed to load admin sidebar unread count")
g.admin_unread_count = 0
@@ -114,76 +114,32 @@ async def get_dashboard_stats() -> dict:
now = utcnow()
today = now.date().isoformat()
week_ago = (now - timedelta(days=7)).strftime("%Y-%m-%d %H:%M:%S")
users_total = await fetch_one("SELECT COUNT(*) as count FROM users WHERE deleted_at IS NULL")
users_today = await fetch_one(
"SELECT COUNT(*) as count FROM users WHERE created_at >= ? AND deleted_at IS NULL",
(today,)
)
users_week = await fetch_one(
"SELECT COUNT(*) as count FROM users WHERE created_at >= ? AND deleted_at IS NULL",
(week_ago,)
)
subs = await fetch_one(
"SELECT COUNT(*) as count FROM subscriptions WHERE status = 'active'"
)
tasks_pending = await fetch_one("SELECT COUNT(*) as count FROM tasks WHERE status = 'pending'")
tasks_failed = await fetch_one("SELECT COUNT(*) as count FROM tasks WHERE status = 'failed'")
# Lead funnel stats
leads_total = await fetch_one(
"SELECT COUNT(*) as count FROM lead_requests WHERE lead_type = 'quote'"
)
leads_new = await fetch_one(
"SELECT COUNT(*) as count FROM lead_requests WHERE status = 'new' AND lead_type = 'quote'"
)
leads_verified = await fetch_one(
"SELECT COUNT(*) as count FROM lead_requests WHERE verified_at IS NOT NULL AND lead_type = 'quote'"
)
leads_unlocked = await fetch_one(
"SELECT COUNT(*) as count FROM lead_requests WHERE unlock_count > 0 AND lead_type = 'quote'"
)
# Planner users
planner_users = await fetch_one(
"SELECT COUNT(DISTINCT user_id) as count FROM scenarios WHERE deleted_at IS NULL"
)
# Supplier stats
suppliers_claimed = await fetch_one(
"SELECT COUNT(*) as count FROM suppliers WHERE claimed_by IS NOT NULL"
)
suppliers_growth = await fetch_one(
"SELECT COUNT(*) as count FROM suppliers WHERE tier = 'growth'"
)
suppliers_pro = await fetch_one(
"SELECT COUNT(*) as count FROM suppliers WHERE tier = 'pro'"
)
total_credits_spent = await fetch_one(
"SELECT COALESCE(SUM(ABS(delta)), 0) as total FROM credit_ledger WHERE delta < 0"
)
leads_unlocked_by_suppliers = await fetch_one(
"SELECT COUNT(*) as count FROM lead_forwards"
)
# Two queries that aren't simple COUNT(*) — keep as fetch_one
planner_row = await fetch_one(
"SELECT COUNT(DISTINCT user_id) AS n FROM scenarios WHERE deleted_at IS NULL"
)
credits_row = await fetch_one(
"SELECT COALESCE(SUM(ABS(delta)), 0) AS n FROM credit_ledger WHERE delta < 0"
)
return {
"users_total": users_total["count"] if users_total else 0,
"users_today": users_today["count"] if users_today else 0,
"users_week": users_week["count"] if users_week else 0,
"active_subscriptions": subs["count"] if subs else 0,
"tasks_pending": tasks_pending["count"] if tasks_pending else 0,
"tasks_failed": tasks_failed["count"] if tasks_failed else 0,
"leads_total": leads_total["count"] if leads_total else 0,
"leads_new": leads_new["count"] if leads_new else 0,
"leads_verified": leads_verified["count"] if leads_verified else 0,
"leads_unlocked": leads_unlocked["count"] if leads_unlocked else 0,
"planner_users": planner_users["count"] if planner_users else 0,
"suppliers_claimed": suppliers_claimed["count"] if suppliers_claimed else 0,
"suppliers_growth": suppliers_growth["count"] if suppliers_growth else 0,
"suppliers_pro": suppliers_pro["count"] if suppliers_pro else 0,
"total_credits_spent": total_credits_spent["total"] if total_credits_spent else 0,
"leads_unlocked_by_suppliers": leads_unlocked_by_suppliers["count"] if leads_unlocked_by_suppliers else 0,
}
return {
"users_total": await count_where("users WHERE deleted_at IS NULL"),
"users_today": await count_where("users WHERE created_at >= ? AND deleted_at IS NULL", (today,)),
"users_week": await count_where("users WHERE created_at >= ? AND deleted_at IS NULL", (week_ago,)),
"active_subscriptions": await count_where("subscriptions WHERE status = 'active'"),
"tasks_pending": await count_where("tasks WHERE status = 'pending'"),
"tasks_failed": await count_where("tasks WHERE status = 'failed'"),
"leads_total": await count_where("lead_requests WHERE lead_type = 'quote'"),
"leads_new": await count_where("lead_requests WHERE status = 'new' AND lead_type = 'quote'"),
"leads_verified": await count_where("lead_requests WHERE verified_at IS NOT NULL AND lead_type = 'quote'"),
"leads_unlocked": await count_where("lead_requests WHERE unlock_count > 0 AND lead_type = 'quote'"),
"planner_users": planner_row["n"] if planner_row else 0,
"suppliers_claimed": await count_where("suppliers WHERE claimed_by IS NOT NULL"),
"suppliers_growth": await count_where("suppliers WHERE tier = 'growth'"),
"suppliers_pro": await count_where("suppliers WHERE tier = 'pro'"),
"total_credits_spent": credits_row["n"] if credits_row else 0,
"leads_unlocked_by_suppliers": await count_where("lead_forwards WHERE 1=1"),
}
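`count_where()` itself does not appear in this compare; the following is a minimal sketch inferred from the call sites above, with a stubbed `fetch_one` standing in for the real `..core` helper, whose implementation may differ.

```python
import asyncio

async def fetch_one(sql: str, params: tuple = ()):
    """Stub standing in for ..core.fetch_one; pretends the DB returned one row."""
    return {"cnt": 42}

async def count_where(clause: str, params: tuple = ()) -> int:
    """COUNT(*) over a 'table WHERE condition' fragment; 0 when no row returns."""
    row = await fetch_one(f"SELECT COUNT(*) AS cnt FROM {clause}", params)
    return row["cnt"] if row else 0

print(asyncio.run(count_where("users WHERE deleted_at IS NULL")))
```

Taking a `table WHERE condition` fragment rather than a full query is what lets one helper absorb 30+ call sites; the `WHERE 1=1` call above suggests the WHERE clause is mandatory.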
@@ -446,10 +402,7 @@ async def get_leads(
params.append(f"-{days} days")
where = " AND ".join(wheres)
count_row = await fetch_one(
f"SELECT COUNT(*) as cnt FROM lead_requests WHERE {where}", tuple(params)
)
total = count_row["cnt"] if count_row else 0
total = await count_where(f"lead_requests WHERE {where}", tuple(params))
offset = (page - 1) * per_page
rows = await fetch_all(
@@ -679,26 +632,14 @@ async def lead_new():
return await render_template("admin/lead_form.html", data={}, statuses=LEAD_STATUSES)
@bp.route("/leads/<int:lead_id>/forward", methods=["POST"])
@role_required("admin")
@csrf_protect
async def lead_forward(lead_id: int):
"""Manually forward a lead to a supplier (no credit cost)."""
form = await request.form
supplier_id = int(form.get("supplier_id", 0))
if not supplier_id:
await flash("Select a supplier.", "error")
return redirect(url_for("admin.lead_detail", lead_id=lead_id))
# Check if already forwarded
async def _forward_lead(lead_id: int, supplier_id: int) -> str | None:
"""Forward a lead to a supplier. Returns error message or None on success."""
existing = await fetch_one(
"SELECT 1 FROM lead_forwards WHERE lead_id = ? AND supplier_id = ?",
(lead_id, supplier_id),
)
if existing:
await flash("Already forwarded to this supplier.", "warning")
return redirect(url_for("admin.lead_detail", lead_id=lead_id))
return "Already forwarded to this supplier."
now = utcnow_iso()
await execute(
@@ -710,15 +651,27 @@ async def lead_forward(lead_id: int):
"UPDATE lead_requests SET unlock_count = unlock_count + 1, status = 'forwarded' WHERE id = ?",
(lead_id,),
)
# Enqueue forward email
from ..worker import enqueue from ..worker import enqueue
await enqueue("send_lead_forward_email", { await enqueue("send_lead_forward_email", {"lead_id": lead_id, "supplier_id": supplier_id})
"lead_id": lead_id, return None
"supplier_id": supplier_id,
})
await flash("Lead forwarded to supplier.", "success")
@bp.route("/leads/<int:lead_id>/forward", methods=["POST"])
@role_required("admin")
@csrf_protect
async def lead_forward(lead_id: int):
"""Manually forward a lead to a supplier (no credit cost)."""
form = await request.form
supplier_id = int(form.get("supplier_id", 0))
if not supplier_id:
await flash("Select a supplier.", "error")
return redirect(url_for("admin.lead_detail", lead_id=lead_id))
error = await _forward_lead(lead_id, supplier_id)
if error:
await flash(error, "warning")
else:
await flash("Lead forwarded to supplier.", "success")
return redirect(url_for("admin.lead_detail", lead_id=lead_id)) return redirect(url_for("admin.lead_detail", lead_id=lead_id))
@@ -751,25 +704,9 @@ async def lead_forward_htmx(lead_id: int):
         return Response("Select a supplier.", status=422)
     supplier_id = int(supplier_id_str)
-    existing = await fetch_one(
-        "SELECT 1 FROM lead_forwards WHERE lead_id = ? AND supplier_id = ?",
-        (lead_id, supplier_id),
-    )
-    if existing:
-        return Response("Already forwarded to this supplier.", status=422)
-    now = utcnow_iso()
-    await execute(
-        """INSERT INTO lead_forwards (lead_id, supplier_id, credit_cost, status, created_at)
-           VALUES (?, ?, 0, 'sent', ?)""",
-        (lead_id, supplier_id, now),
-    )
-    await execute(
-        "UPDATE lead_requests SET unlock_count = unlock_count + 1, status = 'forwarded' WHERE id = ?",
-        (lead_id,),
-    )
-    from ..worker import enqueue
-    await enqueue("send_lead_forward_email", {"lead_id": lead_id, "supplier_id": supplier_id})
+    error = await _forward_lead(lead_id, supplier_id)
+    if error:
+        return Response(error, status=422)
     lead = await get_lead_detail(lead_id)
     return await render_template(
@@ -929,13 +866,10 @@ async def get_suppliers_list(
 async def get_supplier_stats() -> dict:
     """Get aggregate supplier stats for the admin list header."""
-    claimed = await fetch_one("SELECT COUNT(*) as cnt FROM suppliers WHERE claimed_by IS NOT NULL")
-    growth = await fetch_one("SELECT COUNT(*) as cnt FROM suppliers WHERE tier = 'growth'")
-    pro = await fetch_one("SELECT COUNT(*) as cnt FROM suppliers WHERE tier = 'pro'")
     return {
-        "claimed": claimed["cnt"] if claimed else 0,
-        "growth": growth["cnt"] if growth else 0,
-        "pro": pro["cnt"] if pro else 0,
+        "claimed": await count_where("suppliers WHERE claimed_by IS NOT NULL"),
+        "growth": await count_where("suppliers WHERE tier = 'growth'"),
+        "pro": await count_where("suppliers WHERE tier = 'pro'"),
     }
@@ -1017,11 +951,7 @@ async def supplier_detail(supplier_id: int):
         (supplier_id,),
     )
-    enquiry_row = await fetch_one(
-        "SELECT COUNT(*) as cnt FROM supplier_enquiries WHERE supplier_id = ?",
-        (supplier_id,),
-    )
-    enquiry_count = enquiry_row["cnt"] if enquiry_row else 0
+    enquiry_count = await count_where("supplier_enquiries WHERE supplier_id = ?", (supplier_id,))

     # Email activity timeline — correlate by contact_email (no FK)
     timeline = []
@@ -1239,7 +1169,6 @@ _PRODUCT_CATEGORIES = [
 @role_required("admin")
 async def billing_products():
     """Read-only overview of Paddle products, subscriptions, and revenue proxies."""
-    active_subs_row = await fetch_one("SELECT COUNT(*) as cnt FROM subscriptions WHERE status = 'active'")
     mrr_row = await fetch_one(
         """SELECT COALESCE(SUM(
             CASE WHEN pp.key LIKE '%_yearly' THEN pp.price_cents / 12
@@ -1249,14 +1178,12 @@ async def billing_products():
         JOIN paddle_products pp ON s.plan = pp.key
         WHERE s.status = 'active' AND pp.billing_type = 'subscription'"""
     )
-    active_boosts_row = await fetch_one("SELECT COUNT(*) as cnt FROM supplier_boosts WHERE status = 'active'")
-    bp_exports_row = await fetch_one("SELECT COUNT(*) as cnt FROM business_plan_exports WHERE status = 'completed'")
     stats = {
-        "active_subs": (active_subs_row or {}).get("cnt", 0),
+        "active_subs": await count_where("subscriptions WHERE status = 'active'"),
         "mrr_cents": (mrr_row or {}).get("total_cents", 0),
-        "active_boosts": (active_boosts_row or {}).get("cnt", 0),
-        "bp_exports": (bp_exports_row or {}).get("cnt", 0),
+        "active_boosts": await count_where("supplier_boosts WHERE status = 'active'"),
+        "bp_exports": await count_where("business_plan_exports WHERE status = 'completed'"),
     }

     products_rows = await fetch_all("SELECT * FROM paddle_products ORDER BY key")
@@ -1342,23 +1269,18 @@ async def get_email_log(
 async def get_email_stats() -> dict:
     """Aggregate email stats for the list header."""
-    total = await fetch_one("SELECT COUNT(*) as cnt FROM email_log")
-    delivered = await fetch_one("SELECT COUNT(*) as cnt FROM email_log WHERE last_event = 'delivered'")
-    bounced = await fetch_one("SELECT COUNT(*) as cnt FROM email_log WHERE last_event = 'bounced'")
     today = utcnow().date().isoformat()
-    sent_today = await fetch_one("SELECT COUNT(*) as cnt FROM email_log WHERE created_at >= ?", (today,))
     return {
-        "total": total["cnt"] if total else 0,
-        "delivered": delivered["cnt"] if delivered else 0,
-        "bounced": bounced["cnt"] if bounced else 0,
-        "sent_today": sent_today["cnt"] if sent_today else 0,
+        "total": await count_where("email_log WHERE 1=1"),
+        "delivered": await count_where("email_log WHERE last_event = 'delivered'"),
+        "bounced": await count_where("email_log WHERE last_event = 'bounced'"),
+        "sent_today": await count_where("email_log WHERE created_at >= ?", (today,)),
     }

 async def get_unread_count() -> int:
     """Count unread inbound emails."""
-    row = await fetch_one("SELECT COUNT(*) as cnt FROM inbound_emails WHERE is_read = 0")
-    return row["cnt"] if row else 0
+    return await count_where("inbound_emails WHERE is_read = 0")

 @bp.route("/emails")
@@ -1824,11 +1746,7 @@ async def template_detail(slug: str):
     columns = await get_table_columns(config["data_table"])
     sample_rows = await fetch_template_data(config["data_table"], limit=10)
-    # Count generated articles
-    row = await fetch_one(
-        "SELECT COUNT(*) as cnt FROM articles WHERE template_slug = ?", (slug,),
-    )
-    generated_count = row["cnt"] if row else 0
+    generated_count = await count_where("articles WHERE template_slug = ?", (slug,))

     return await render_template(
         "admin/template_detail.html",
@@ -1959,8 +1877,8 @@ async def _query_scenarios(search: str, country: str, venue_type: str) -> tuple[
         f"SELECT * FROM published_scenarios WHERE {where} ORDER BY created_at DESC LIMIT 500",
         tuple(params),
     )
-    total_row = await fetch_one("SELECT COUNT(*) as cnt FROM published_scenarios")
-    return rows, (total_row["cnt"] if total_row else 0)
+    total = await count_where("published_scenarios WHERE 1=1")
+    return rows, total

 @bp.route("/scenarios")
@@ -2927,11 +2845,9 @@ _CSV_IMPORT_LIMIT = 500 # guard against huge uploads
 async def get_follow_up_due_count() -> int:
     """Count pipeline suppliers with follow_up_at <= today."""
-    row = await fetch_one(
-        """SELECT COUNT(*) as cnt FROM suppliers
-           WHERE outreach_status IS NOT NULL AND follow_up_at <= date('now')"""
-    )
-    return row["cnt"] if row else 0
+    return await count_where(
+        "suppliers WHERE outreach_status IS NOT NULL AND follow_up_at <= date('now')"
+    )

 async def get_outreach_pipeline() -> dict:

View File

@@ -226,10 +226,9 @@ document.addEventListener('DOMContentLoaded', function() {
   <a href="{{ url_for('admin.affiliate_products') }}" class="btn-outline">Cancel</a>
 </div>
 {% if editing %}
-<form method="post" action="{{ url_for('admin.affiliate_delete', product_id=product_id) }}" style="margin:0">
+<form method="post" action="{{ url_for('admin.affiliate_delete', product_id=product_id) }}" style="margin:0" hx-boost="true">
   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
   <button type="submit" class="btn-outline"
-          onclick="return confirm('Delete this product? This cannot be undone.')">Delete</button>
+          hx-confirm="Delete this product? This cannot be undone.">Delete</button>
 </form>
 {% endif %}
 </div>

View File

@@ -120,10 +120,9 @@ document.addEventListener('DOMContentLoaded', function() {
   <a href="{{ url_for('admin.affiliate_programs') }}" class="btn-outline">Cancel</a>
 </div>
 {% if editing %}
-<form method="post" action="{{ url_for('admin.affiliate_program_delete', program_id=program_id) }}" style="margin:0">
+<form method="post" action="{{ url_for('admin.affiliate_program_delete', program_id=program_id) }}" style="margin:0" hx-boost="true">
   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
   <button type="submit" class="btn-outline"
-          onclick="return confirm('Delete this program? Blocked if products reference it.')">Delete</button>
+          hx-confirm="Delete this program? Blocked if products reference it.">Delete</button>
 </form>
 {% endif %}
 </div>

View File

@@ -11,9 +11,10 @@
 </div>
 <div class="flex gap-2">
   <a href="{{ url_for('admin.article_new') }}" class="btn btn-sm">New Article</a>
-  <form method="post" action="{{ url_for('admin.rebuild_all') }}" class="m-0" style="display:inline">
+  <form method="post" action="{{ url_for('admin.rebuild_all') }}" class="m-0" style="display:inline" hx-boost="true">
     <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
-    <button type="button" class="btn-outline btn-sm" onclick="confirmAction('Rebuild all articles? This will re-render every article from its template.', this.closest('form'))">Rebuild All</button>
+    <button type="submit" class="btn-outline btn-sm"
+            hx-confirm="Rebuild all articles? This will re-render every article from its template.">Rebuild All</button>
   </form>
 </div>
 </header>

View File

@@ -27,10 +27,11 @@
 <td class="text-sm">{{ c.email if c.email is defined else (c.get('email', '-') if c is mapping else '-') }}</td>
 <td class="mono text-sm">{{ (c.created_at if c.created_at is defined else (c.get('created_at', '-') if c is mapping else '-'))[:16] if c else '-' }}</td>
 <td style="text-align:right">
-  <form method="post" action="{{ url_for('admin.audience_contact_remove', audience_id=audience.audience_id) }}" style="display:inline">
+  <form method="post" action="{{ url_for('admin.audience_contact_remove', audience_id=audience.audience_id) }}" style="display:inline" hx-boost="true">
     <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
     <input type="hidden" name="contact_id" value="{{ c.id if c.id is defined else (c.get('id', '') if c is mapping else '') }}">
-    <button type="button" class="btn-outline btn-sm" style="color:#DC2626" onclick="confirmAction('Remove this contact from the audience?', this.closest('form'))">Remove</button>
+    <button type="submit" class="btn-outline btn-sm" style="color:#DC2626"
+            hx-confirm="Remove this contact from the audience?">Remove</button>
   </form>
 </td>
 </tr>

View File

@@ -228,21 +228,29 @@
 <dialog id="confirm-dialog">
   <p id="confirm-msg"></p>
-  <div class="dialog-actions">
-    <button id="confirm-cancel" class="btn-outline btn-sm">Cancel</button>
-    <button id="confirm-ok" class="btn btn-sm">Confirm</button>
-  </div>
+  <form method="dialog" class="dialog-actions">
+    <button value="cancel" class="btn-outline btn-sm">Cancel</button>
+    <button value="ok" class="btn btn-sm">Confirm</button>
+  </form>
 </dialog>
 <script>
-function confirmAction(message, form) {
+function showConfirm(message) {
   var dialog = document.getElementById('confirm-dialog');
   document.getElementById('confirm-msg').textContent = message;
-  var ok = document.getElementById('confirm-ok');
-  var newOk = ok.cloneNode(true);
-  ok.replaceWith(newOk);
-  newOk.addEventListener('click', function() { dialog.close(); form.submit(); });
-  document.getElementById('confirm-cancel').addEventListener('click', function() { dialog.close(); }, { once: true });
   dialog.showModal();
+  return new Promise(function(resolve) {
+    dialog.addEventListener('close', function() {
+      resolve(dialog.returnValue === 'ok');
+    }, { once: true });
+  });
 }
+document.body.addEventListener('htmx:confirm', function(evt) {
+  if (!evt.detail.question) return;
+  evt.preventDefault();
+  showConfirm(evt.detail.question).then(function(ok) {
+    if (ok) evt.detail.issueRequest(true);
+  });
+});
 </script>
 {% endblock %}

View File

@@ -19,7 +19,7 @@
   <p class="text-slate text-sm">No data rows found. Run the data pipeline to populate <code>{{ config_data.data_table }}</code>.</p>
 </div>
 {% else %}
-<form method="post" class="card">
+<form method="post" class="card" hx-boost="true">
   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
   <div class="mb-4">
@@ -45,7 +45,8 @@
     </p>
   </div>
-  <button type="button" class="btn" style="width: 100%;" onclick="confirmAction('Generate articles? Existing articles will be updated in-place.', this.closest('form'))">
+  <button type="submit" class="btn" style="width: 100%;"
+          hx-confirm="Generate articles? Existing articles will be updated in-place.">
     Generate Articles
   </button>
 </form>

View File

@@ -21,10 +21,9 @@
 </td>
 <td class="text-right" style="white-space:nowrap">
   <a href="{{ url_for('admin.affiliate_program_edit', program_id=prog.id) }}" class="btn-outline btn-sm">Edit</a>
-  <form method="post" action="{{ url_for('admin.affiliate_program_delete', program_id=prog.id) }}" style="display:inline">
+  <form method="post" action="{{ url_for('admin.affiliate_program_delete', program_id=prog.id) }}" style="display:inline" hx-boost="true">
     <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
     <button type="submit" class="btn-outline btn-sm"
-            onclick="return confirm('Delete {{ prog.name }}? This is blocked if products reference it.')">Delete</button>
+            hx-confirm="Delete {{ prog.name }}? This is blocked if products reference it.">Delete</button>
   </form>
 </td>
 </tr>

View File

@@ -20,10 +20,9 @@
 <td class="mono text-right">{{ product.click_count or 0 }}</td>
 <td class="text-right" style="white-space:nowrap">
   <a href="{{ url_for('admin.affiliate_edit', product_id=product.id) }}" class="btn-outline btn-sm">Edit</a>
-  <form method="post" action="{{ url_for('admin.affiliate_delete', product_id=product.id) }}" style="display:inline">
+  <form method="post" action="{{ url_for('admin.affiliate_delete', product_id=product.id) }}" style="display:inline" hx-boost="true">
     <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
     <button type="submit" class="btn-outline btn-sm"
-            onclick="return confirm('Delete {{ product.name }}?')">Delete</button>
+            hx-confirm="Delete {{ product.name }}?">Delete</button>
   </form>
 </td>
 </tr>

View File

@@ -29,10 +29,10 @@
   </div>
 </form>
-<form method="post" action="{{ url_for('pipeline.pipeline_trigger_extract') }}" class="m-0">
+<form method="post" action="{{ url_for('pipeline.pipeline_trigger_extract') }}" class="m-0" hx-boost="true">
   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
-  <button type="button" class="btn-outline btn-sm"
-          onclick="confirmAction('Enqueue a full extraction run? This will run all extractors in the background.', this.closest('form'))">
+  <button type="submit" class="btn-outline btn-sm"
+          hx-confirm="Enqueue a full extraction run? This will run all extractors in the background.">
     Run All Extractors
   </button>
 </form>
@@ -112,11 +112,11 @@
 {% if run.status == 'running' %}
 <form method="post"
       action="{{ url_for('pipeline.pipeline_mark_stale', run_id=run.run_id) }}"
-      class="m-0">
+      class="m-0" hx-boost="true">
   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
-  <button type="button" class="btn-danger btn-sm"
+  <button type="submit" class="btn-danger btn-sm"
           style="padding:2px 8px;font-size:11px"
-          onclick="confirmAction('Mark run #{{ run.run_id }} as failed? Only do this if the process is definitely dead.', this.closest('form'))">
+          hx-confirm="Mark run #{{ run.run_id }} as failed? Only do this if the process is definitely dead.">
     Mark Failed
   </button>
 </form>

View File

@@ -40,7 +40,7 @@
         hx-target="#pipeline-overview-content"
         hx-swap="outerHTML"
         hx-vals='{"extractor": "{{ wf.name }}", "csrf_token": "{{ csrf_token() }}"}'
-        onclick="if (!confirm('Run {{ wf.name }} extractor?')) return false;">Run</button>
+        hx-confirm="Run {{ wf.name }} extractor?">Run</button>
 </div>
 <p class="text-xs text-slate">{{ wf.schedule_label }}</p>
 {% if run %}

View File

@@ -53,7 +53,7 @@
         hx-target="#pipeline-transform-content"
         hx-swap="outerHTML"
         hx-vals='{"step": "transform", "csrf_token": "{{ csrf_token() }}"}'
-        onclick="if (!confirm('Run SQLMesh transform (prod --auto-apply)?')) return false;">
+        hx-confirm="Run SQLMesh transform (prod --auto-apply)?">
     Run Transform
   </button>
 </div>
@@ -107,7 +107,7 @@
         hx-target="#pipeline-transform-content"
         hx-swap="outerHTML"
         hx-vals='{"step": "export", "csrf_token": "{{ csrf_token() }}"}'
-        onclick="if (!confirm('Export serving tables (lakehouse → analytics.duckdb)?')) return false;">
+        hx-confirm="Export serving tables (lakehouse → analytics.duckdb)?">
     Run Export
   </button>
 </div>
@@ -138,7 +138,7 @@
         hx-target="#pipeline-transform-content"
         hx-swap="outerHTML"
         hx-vals='{"step": "pipeline", "csrf_token": "{{ csrf_token() }}"}'
-        onclick="if (!confirm('Run full ELT pipeline (extract → transform → export)?')) return false;">
+        hx-confirm="Run full ELT pipeline (extract → transform → export)?">
     Run Full Pipeline
   </button>
 </div>

View File

@@ -36,9 +36,10 @@
 <a href="{{ url_for('admin.scenario_pdf', scenario_id=s.id, lang='en') }}" class="btn-outline btn-sm">PDF EN</a>
 <a href="{{ url_for('admin.scenario_pdf', scenario_id=s.id, lang='de') }}" class="btn-outline btn-sm">PDF DE</a>
 <a href="{{ url_for('admin.scenario_edit', scenario_id=s.id) }}" class="btn-outline btn-sm">Edit</a>
-<form method="post" action="{{ url_for('admin.scenario_delete', scenario_id=s.id) }}" class="m-0" style="display: inline;">
+<form method="post" action="{{ url_for('admin.scenario_delete', scenario_id=s.id) }}" class="m-0" style="display: inline;" hx-boost="true">
   <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
-  <button type="button" class="btn-outline btn-sm" onclick="confirmAction('Delete this scenario? This cannot be undone.', this.closest('form'))">Delete</button>
+  <button type="submit" class="btn-outline btn-sm"
+          hx-confirm="Delete this scenario? This cannot be undone.">Delete</button>
 </form>
 </td>
 </tr>

View File

@@ -15,8 +15,9 @@
 .pipeline-tabs {
   display: flex; gap: 0; border-bottom: 2px solid #E2E8F0; margin-bottom: 1.5rem;
-  overflow-x: auto; -webkit-overflow-scrolling: touch;
+  overflow-x: auto; -webkit-overflow-scrolling: touch; scrollbar-width: none;
 }
+.pipeline-tabs::-webkit-scrollbar { display: none; }
 .pipeline-tabs button {
   padding: 0.625rem 1.25rem; font-size: 0.8125rem; font-weight: 600;
   color: #64748B; background: none; border: none; cursor: pointer;
@@ -56,11 +57,11 @@
   <p class="text-sm text-slate mt-1">Extraction status, data catalog, and ad-hoc query editor</p>
 </div>
 <div class="flex gap-2">
-  <form method="post" action="{{ url_for('pipeline.pipeline_trigger_transform') }}" class="m-0">
+  <form method="post" action="{{ url_for('pipeline.pipeline_trigger_transform') }}" class="m-0" hx-boost="true">
     <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
     <input type="hidden" name="step" value="pipeline">
-    <button type="button" class="btn btn-sm"
-            onclick="confirmAction('Run full ELT pipeline (extract → transform → export)? This runs in the background.', this.closest('form'))">
+    <button type="submit" class="btn btn-sm"
+            hx-confirm="Run full ELT pipeline (extract → transform → export)? This runs in the background.">
       Run Pipeline
     </button>
   </form>

View File

@@ -13,9 +13,10 @@
 </div>
 <div class="flex gap-2">
   <a href="{{ url_for('admin.template_generate', slug=config_data.slug) }}" class="btn">Generate Articles</a>
-  <form method="post" action="{{ url_for('admin.template_regenerate', slug=config_data.slug) }}" style="display:inline">
+  <form method="post" action="{{ url_for('admin.template_regenerate', slug=config_data.slug) }}" style="display:inline" hx-boost="true">
     <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
-    <button type="button" class="btn-outline" onclick="confirmAction('Regenerate all articles for this template with fresh data? Existing articles will be overwritten.', this.closest('form'))">
+    <button type="submit" class="btn-outline"
+            hx-confirm="Regenerate all articles for this template with fresh data? Existing articles will be overwritten.">
       Regenerate
     </button>
   </form>

View File

@@ -192,6 +192,15 @@ async def fetch_all(sql: str, params: tuple = ()) -> list[dict]:
     return [dict(row) for row in rows]

+async def count_where(table_where: str, params: tuple = ()) -> int:
+    """Count rows matching a condition. Compresses the fetch_one + null-check pattern.
+
+    Usage: await count_where("users WHERE deleted_at IS NULL")
+    """
+    row = await fetch_one(f"SELECT COUNT(*) AS n FROM {table_where}", params)
+    return row["n"] if row else 0
+
 async def execute(sql: str, params: tuple = ()) -> int:
     """Execute SQL and return lastrowid."""
     db = await get_db()
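The behavior of `count_where()` can be checked in isolation. Below is a minimal synchronous analogue using stdlib `sqlite3` (the real helper is async and wraps `fetch_one()`; the table and rows here are made up). Note that `table_where` is interpolated into the SQL via an f-string, so it must always be a trusted fragment — user input goes only through the bound `params`:

```python
import sqlite3

# Synchronous sketch of the async count_where() helper.
def count_where(db: sqlite3.Connection, table_where: str, params: tuple = ()) -> int:
    # table_where is a trusted "table [alias] WHERE ..." fragment; values bind via params.
    row = db.execute(f"SELECT COUNT(*) AS n FROM {table_where}", params).fetchone()
    return row[0] if row else 0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE suppliers (tier TEXT, claimed_by INTEGER)")
db.executemany(
    "INSERT INTO suppliers VALUES (?, ?)",
    [("pro", 1), ("growth", None), ("free", None)],
)

print(count_where(db, "suppliers"))                               # 3
print(count_where(db, "suppliers WHERE tier = ?", ("pro",)))      # 1
print(count_where(db, "suppliers WHERE claimed_by IS NOT NULL"))  # 1
```

Because the fragment starts right after `FROM`, the same helper accepts a bare table name, a `table WHERE ...` clause, or (as the directory code does) a parenthesized subquery.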

View File

@@ -6,7 +6,7 @@ from pathlib import Path
 from quart import Blueprint, flash, g, redirect, render_template, request, url_for

 from ..auth.routes import login_required, update_user
-from ..core import csrf_protect, fetch_one, soft_delete, utcnow_iso
+from ..core import count_where, csrf_protect, fetch_one, soft_delete, utcnow_iso
 from ..i18n import get_translations

 bp = Blueprint(
@@ -18,17 +18,13 @@ bp = Blueprint(
 async def get_user_stats(user_id: int) -> dict:
-    scenarios = await fetch_one(
-        "SELECT COUNT(*) as count FROM scenarios WHERE user_id = ? AND deleted_at IS NULL",
-        (user_id,),
-    )
-    leads = await fetch_one(
-        "SELECT COUNT(*) as count FROM lead_requests WHERE user_id = ?",
-        (user_id,),
-    )
     return {
-        "scenarios": scenarios["count"] if scenarios else 0,
-        "leads": leads["count"] if leads else 0,
+        "scenarios": await count_where(
+            "scenarios WHERE user_id = ? AND deleted_at IS NULL", (user_id,)
+        ),
+        "leads": await count_where(
+            "lead_requests WHERE user_id = ?", (user_id,)
+        ),
     }

View File

@@ -6,7 +6,7 @@ from pathlib import Path
 from quart import Blueprint, g, make_response, redirect, render_template, request, url_for

-from ..core import csrf_protect, execute, fetch_all, fetch_one, utcnow_iso
+from ..core import count_where, csrf_protect, execute, fetch_all, fetch_one, utcnow_iso
 from ..i18n import COUNTRY_LABELS, get_translations

 bp = Blueprint(
@@ -79,11 +79,7 @@ async def _build_directory_query(q, country, category, region, page, per_page=24
     where = " AND ".join(wheres) if wheres else "1=1"

-    count_row = await fetch_one(
-        f"SELECT COUNT(*) as cnt FROM suppliers s WHERE {where}",
-        tuple(params),
-    )
-    total = count_row["cnt"] if count_row else 0
+    total = await count_where(f"suppliers s WHERE {where}", tuple(params))
     offset = (page - 1) * per_page

     # Tier-based ordering: sticky first, then pro > growth > free, then name
@@ -159,16 +155,16 @@ async def index():
         "SELECT category, COUNT(*) as cnt FROM suppliers GROUP BY category ORDER BY cnt DESC"
     )
-    total_suppliers = await fetch_one("SELECT COUNT(*) as cnt FROM suppliers")
-    total_countries = await fetch_one("SELECT COUNT(DISTINCT country_code) as cnt FROM suppliers")
+    total_suppliers = await count_where("suppliers")
+    total_countries = await count_where("(SELECT DISTINCT country_code FROM suppliers)")

     return await render_template(
         "directory.html",
         **ctx,
         country_counts=country_counts,
         category_counts=category_counts,
-        total_suppliers=total_suppliers["cnt"] if total_suppliers else 0,
-        total_countries=total_countries["cnt"] if total_countries else 0,
+        total_suppliers=total_suppliers,
+        total_countries=total_countries,
     )
@@ -204,11 +200,9 @@ async def supplier_detail(slug: str):
     # Enquiry count (Basic+)
     enquiry_count = 0
     if supplier.get("tier") in ("basic", "growth", "pro"):
-        row = await fetch_one(
-            "SELECT COUNT(*) as cnt FROM supplier_enquiries WHERE supplier_id = ?",
-            (supplier["id"],),
-        )
-        enquiry_count = row["cnt"] if row else 0
+        enquiry_count = await count_where(
+            "supplier_enquiries WHERE supplier_id = ?", (supplier["id"],)
+        )

     lang = g.get("lang", "en")
     cat_labels, country_labels, region_labels = get_directory_labels(lang)
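One call site above is worth a second look: `count_where("(SELECT DISTINCT country_code FROM suppliers)")` passes a parenthesized subquery as the "table", so the helper expands it to `SELECT COUNT(*) FROM (SELECT DISTINCT ...)` — a derived table, which SQLite accepts without an alias. A small sketch with made-up rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE suppliers (name TEXT, country_code TEXT)")
db.executemany(
    "INSERT INTO suppliers VALUES (?, ?)",
    [("a", "DE"), ("b", "DE"), ("c", "AT"), ("d", "CH")],
)

# What count_where("(SELECT DISTINCT country_code FROM suppliers)") expands to:
sql = "SELECT COUNT(*) AS n FROM (SELECT DISTINCT country_code FROM suppliers)"
print(db.execute(sql).fetchone()[0])  # 3 distinct country codes across 4 rows
```

This is equivalent to `SELECT COUNT(DISTINCT country_code) FROM suppliers`; routing it through the derived-table form is what lets the shared helper cover the distinct-count case too.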

View File

@@ -12,6 +12,7 @@ from quart import Blueprint, Response, g, jsonify, render_template, request
 from ..auth.routes import login_required
 from ..core import (
     config,
+    count_where,
     csrf_protect,
     execute,
     feature_gate,
@@ -50,11 +51,9 @@ COUNTRY_PRESETS = {

 async def count_scenarios(user_id: int) -> int:
-    row = await fetch_one(
-        "SELECT COUNT(*) as cnt FROM scenarios WHERE user_id = ? AND deleted_at IS NULL",
-        (user_id,),
-    )
-    return row["cnt"] if row else 0
+    return await count_where(
+        "scenarios WHERE user_id = ? AND deleted_at IS NULL", (user_id,)
+    )

 async def get_default_scenario(user_id: int) -> dict | None:

View File

@@ -5,7 +5,7 @@ from pathlib import Path
 from quart import Blueprint, g, render_template, request, session

-from ..core import check_rate_limit, csrf_protect, execute, fetch_all, fetch_one
+from ..core import check_rate_limit, count_where, csrf_protect, execute, fetch_all, fetch_one
 from ..i18n import get_translations

 bp = Blueprint(
@@ -17,13 +17,9 @@ bp = Blueprint(

 async def _supplier_counts():
     """Fetch aggregate supplier stats for landing/marketing pages."""
-    total = await fetch_one("SELECT COUNT(*) as cnt FROM suppliers")
-    countries = await fetch_one(
-        "SELECT COUNT(DISTINCT country_code) as cnt FROM suppliers"
-    )
     return (
-        total["cnt"] if total else 0,
-        countries["cnt"] if countries else 0,
+        await count_where("suppliers"),
+        await count_where("(SELECT DISTINCT country_code FROM suppliers)"),
     )
@@ -75,15 +71,15 @@ async def suppliers():
total_suppliers, total_countries = await _supplier_counts() total_suppliers, total_countries = await _supplier_counts()
# Live stats # Live stats
calc_requests = await fetch_one("SELECT COUNT(*) as cnt FROM scenarios WHERE deleted_at IS NULL") calc_requests = await count_where("scenarios WHERE deleted_at IS NULL")
avg_budget = await fetch_one( avg_budget = await fetch_one(
"SELECT AVG(budget_estimate) as avg FROM lead_requests WHERE budget_estimate > 0 AND lead_type = 'quote'" "SELECT AVG(budget_estimate) as avg FROM lead_requests WHERE budget_estimate > 0 AND lead_type = 'quote'"
) )
active_suppliers = await fetch_one( active_suppliers = await count_where(
"SELECT COUNT(*) as cnt FROM suppliers WHERE tier IN ('growth', 'pro') AND claimed_by IS NOT NULL" "suppliers WHERE tier IN ('growth', 'pro') AND claimed_by IS NOT NULL"
) )
monthly_leads = await fetch_one( monthly_leads = await count_where(
"SELECT COUNT(*) as cnt FROM lead_requests WHERE lead_type = 'quote' AND created_at >= date('now', '-30 days')" "lead_requests WHERE lead_type = 'quote' AND created_at >= date('now', '-30 days')"
) )
# Lead feed preview — 3 recent verified hot/warm leads, anonymized # Lead feed preview — 3 recent verified hot/warm leads, anonymized
@@ -100,10 +96,10 @@ async def suppliers():
"suppliers.html", "suppliers.html",
total_suppliers=total_suppliers, total_suppliers=total_suppliers,
total_countries=total_countries, total_countries=total_countries,
calc_requests=calc_requests["cnt"] if calc_requests else 0, calc_requests=calc_requests,
avg_budget=int(avg_budget["avg"]) if avg_budget and avg_budget["avg"] else 0, avg_budget=int(avg_budget["avg"]) if avg_budget and avg_budget["avg"] else 0,
active_suppliers=active_suppliers["cnt"] if active_suppliers else 0, active_suppliers=active_suppliers,
monthly_leads=monthly_leads["cnt"] if monthly_leads else 0, monthly_leads=monthly_leads,
preview_leads=preview_leads, preview_leads=preview_leads,
) )

View File

@@ -11,6 +11,7 @@ from werkzeug.utils import secure_filename
 from ..core import (
     capture_waitlist_email,
     config,
+    count_where,
     csrf_protect,
     execute,
     feature_gate,
@@ -776,9 +777,8 @@ async def dashboard_overview():
     supplier = g.supplier

     # Leads unlocked count
-    unlocked = await fetch_one(
-        "SELECT COUNT(*) as cnt FROM lead_forwards WHERE supplier_id = ?",
-        (supplier["id"],),
+    leads_unlocked = await count_where(
+        "lead_forwards WHERE supplier_id = ?", (supplier["id"],)
     )

     # New leads matching supplier's area since last login
@@ -787,22 +787,20 @@ async def dashboard_overview():
     new_leads_count = 0
     if service_area:
         placeholders = ",".join("?" * len(service_area))
-        row = await fetch_one(
-            f"""SELECT COUNT(*) as cnt FROM lead_requests
+        new_leads_count = await count_where(
+            f"""lead_requests
             WHERE lead_type = 'quote' AND status = 'new' AND verified_at IS NOT NULL
             AND country IN ({placeholders})
            AND NOT EXISTS (SELECT 1 FROM lead_forwards WHERE lead_id = lead_requests.id AND supplier_id = ?)""",
             (*service_area, supplier["id"]),
         )
-        new_leads_count = row["cnt"] if row else 0
     else:
-        row = await fetch_one(
-            """SELECT COUNT(*) as cnt FROM lead_requests
+        new_leads_count = await count_where(
+            """lead_requests
             WHERE lead_type = 'quote' AND status = 'new' AND verified_at IS NOT NULL
             AND NOT EXISTS (SELECT 1 FROM lead_forwards WHERE lead_id = lead_requests.id AND supplier_id = ?)""",
             (supplier["id"],),
         )
-        new_leads_count = row["cnt"] if row else 0

     # Recent activity (last 10 events from credit_ledger + lead_forwards)
     recent_activity = await fetch_all(
@@ -825,16 +823,14 @@ async def dashboard_overview():
     # Enquiry count for Basic tier
     enquiry_count = 0
     if supplier.get("tier") == "basic":
-        eq_row = await fetch_one(
-            "SELECT COUNT(*) as cnt FROM supplier_enquiries WHERE supplier_id = ?",
-            (supplier["id"],),
+        enquiry_count = await count_where(
+            "supplier_enquiries WHERE supplier_id = ?", (supplier["id"],)
        )
-        enquiry_count = eq_row["cnt"] if eq_row else 0

     return await render_template(
         "suppliers/partials/dashboard_overview.html",
         supplier=supplier,
-        leads_unlocked=unlocked["cnt"] if unlocked else 0,
+        leads_unlocked=leads_unlocked,
         new_leads_count=new_leads_count,
         recent_activity=recent_activity,
         active_boosts=active_boosts,
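The `placeholders = ",".join("?" * len(service_area))` line in the hunk above is the standard dynamic-IN-clause pattern: one `?` per value, so only the *shape* of the clause is string-interpolated while the values themselves stay parameterized. A standalone sketch with a toy sqlite table (table and column names are illustrative, not the app's schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lead_requests (id INTEGER, country TEXT)")
db.executemany("INSERT INTO lead_requests VALUES (?, ?)",
               [(1, "DE"), (2, "ES"), (3, "US")])

service_area = ("DE", "ES", "FR")
# one "?" per value — the values never touch the SQL string
placeholders = ",".join("?" * len(service_area))          # "?,?,?"
sql = f"SELECT COUNT(*) FROM lead_requests WHERE country IN ({placeholders})"
count = db.execute(sql, service_area).fetchone()[0]
print(count)  # 2 (DE and ES match, FR has no rows)
```

This keeps the query injection-safe even though an f-string builds it, because user data only ever flows through the parameter tuple.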

View File

@@ -125,6 +125,32 @@ async def auth_client(app, test_user):
         yield c

+@pytest.fixture
+async def admin_client(app, db):
+    """Test client with an admin user pre-loaded in session."""
+    now = datetime.now(UTC).isoformat()
+    async with db.execute(
+        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
+        ("admin@test.com", "Admin", now),
+    ) as cursor:
+        user_id = cursor.lastrowid
+    await db.execute(
+        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (user_id,)
+    )
+    await db.commit()
+    async with app.test_client() as c:
+        async with c.session_transaction() as sess:
+            sess["user_id"] = user_id
+        yield c
+
+@pytest.fixture
+def mock_send_email():
+    """Patch padelnomics.worker.send_email for the duration of the test."""
+    with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock:
+        yield mock
+
 # ── Subscriptions ────────────────────────────────────────────
 @pytest.fixture
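For reference, the mechanics the `mock_send_email` fixture centralizes: each test previously opened its own `patch(...)` context; with fixture injection the patch lives in one place and tests receive the mock as an argument. A minimal sketch outside pytest, using a hypothetical module and handler (the real `padelnomics.worker` internals are not shown in this diff):

```python
import asyncio
import types
from unittest.mock import AsyncMock, patch

# hypothetical stand-in for the padelnomics.worker module
worker = types.ModuleType("worker")

async def _real_send_email(**kwargs):
    raise RuntimeError("would do network I/O")  # must never run in tests

worker.send_email = _real_send_email

async def handle_send_magic_link(payload):
    # hypothetical handler; the real one also renders the HTML body
    await worker.send_email(to=payload["email"], subject="Sign in")

# what the fixture does once, instead of every test repeating this block:
with patch.object(worker, "send_email", new_callable=AsyncMock) as mock:
    asyncio.run(handle_send_magic_link({"email": "user@example.com"}))
    mock.assert_called_once()
    assert mock.call_args.kwargs["to"] == "user@example.com"
```

`AsyncMock` is what makes `await worker.send_email(...)` work under the patch: calling it returns an awaitable, and the call's kwargs are recorded on `mock.call_args` for the assertions.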

View File

@@ -9,7 +9,6 @@ Covers:
 """
 from datetime import UTC, datetime
 from pathlib import Path
-from unittest.mock import AsyncMock, patch

 import pytest

 from padelnomics.businessplan import generate_business_plan, get_plan_sections
@@ -184,19 +183,18 @@ async def _insert_export(db, user_id: int, scenario_id: int, status: str = "pend
 @requires_weasyprint
 class TestWorkerHandler:
-    async def test_happy_path_generates_pdf_and_updates_status(self, db, scenario):
+    async def test_happy_path_generates_pdf_and_updates_status(self, db, scenario, mock_send_email):
         from padelnomics.worker import handle_generate_business_plan

         export = await _insert_export(db, scenario["user_id"], scenario["id"])
         output_file = None
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_email:
-            await handle_generate_business_plan({
-                "export_id": export["id"],
-                "user_id": scenario["user_id"],
-                "scenario_id": scenario["id"],
-                "language": "en",
-            })
+        await handle_generate_business_plan({
+            "export_id": export["id"],
+            "user_id": scenario["user_id"],
+            "scenario_id": scenario["id"],
+            "language": "en",
+        })

         # Status should be 'ready'
         from padelnomics.core import fetch_one
@@ -214,14 +212,14 @@ class TestWorkerHandler:
             assert output_file.read_bytes()[:4] == b"%PDF"

             # Email should have been sent
-            mock_email.assert_called_once()
-            assert "to" in mock_email.call_args.kwargs
-            assert "subject" in mock_email.call_args.kwargs
+            mock_send_email.assert_called_once()
+            assert "to" in mock_send_email.call_args.kwargs
+            assert "subject" in mock_send_email.call_args.kwargs
         finally:
             if output_file and output_file.exists():
                 output_file.unlink()

-    async def test_marks_failed_on_bad_scenario(self, db, scenario):
+    async def test_marks_failed_on_bad_scenario(self, db, scenario, mock_send_email):
         """Handler marks export failed when user_id doesn't match scenario owner."""
         from padelnomics.worker import handle_generate_business_plan
@@ -229,14 +227,13 @@ class TestWorkerHandler:
         wrong_user_id = scenario["user_id"] + 9999
         export = await _insert_export(db, scenario["user_id"], scenario["id"])

-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock):
-            with pytest.raises(ValueError):
-                await handle_generate_business_plan({
-                    "export_id": export["id"],
-                    "user_id": wrong_user_id,
-                    "scenario_id": scenario["id"],
-                    "language": "en",
-                })
+        with pytest.raises(ValueError):
+            await handle_generate_business_plan({
+                "export_id": export["id"],
+                "user_id": wrong_user_id,
+                "scenario_id": scenario["id"],
+                "language": "en",
+            })

         from padelnomics.core import fetch_one
         row = await fetch_one(

View File

@@ -938,26 +938,6 @@ class TestRouteRegistration:
 # Admin routes (require admin session)
 # ════════════════════════════════════════════════════════════

-@pytest.fixture
-async def admin_client(app, db):
-    """Test client with admin user (has admin role)."""
-    now = utcnow_iso()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("admin@test.com", "Admin", now),
-    ) as cursor:
-        admin_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = admin_id
-        yield c

 class TestAdminTemplates:
     async def test_template_list_requires_admin(self, client):
         resp = await client.get("/admin/templates")

View File

@@ -9,7 +9,6 @@ Admin gallery tests: access control, list page, preview page, error handling.
 """
 import pytest

-from padelnomics.core import utcnow_iso
 from padelnomics.email_templates import EMAIL_TEMPLATE_REGISTRY, render_email_template

 # ── render_email_template() ──────────────────────────────────────────────────
@@ -124,26 +123,6 @@ class TestRenderEmailTemplate:
 # ── Admin gallery routes ──────────────────────────────────────────────────────

-@pytest.fixture
-async def admin_client(app, db):
-    """Test client with a user that has the admin role."""
-    now = utcnow_iso()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("gallery_admin@test.com", "Gallery Admin", now),
-    ) as cursor:
-        admin_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = admin_id
-        yield c

 class TestEmailGalleryRoutes:
     @pytest.mark.asyncio
     async def test_gallery_requires_auth(self, client):

View File

@@ -50,59 +50,51 @@ def _assert_common_design(html: str, lang: str = "en"):
 class TestMagicLink:
     @pytest.mark.asyncio
-    async def test_sends_to_correct_recipient(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "abc123"})
-        kw = _call_kwargs(mock_send)
+    async def test_sends_to_correct_recipient(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "abc123"})
+        kw = _call_kwargs(mock_send_email)
         assert kw["to"] == "user@example.com"

     @pytest.mark.asyncio
-    async def test_subject_contains_app_name(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "abc123"})
-        kw = _call_kwargs(mock_send)
+    async def test_subject_contains_app_name(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "abc123"})
+        kw = _call_kwargs(mock_send_email)
         assert core.config.APP_NAME.lower() in kw["subject"].lower()

     @pytest.mark.asyncio
-    async def test_html_contains_verify_link(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "abc123"})
-        kw = _call_kwargs(mock_send)
+    async def test_html_contains_verify_link(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "abc123"})
+        kw = _call_kwargs(mock_send_email)
         assert "/auth/verify?token=abc123" in kw["html"]

     @pytest.mark.asyncio
-    async def test_html_contains_fallback_link_text(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_html_contains_fallback_link_text(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
+        html = _call_kwargs(mock_send_email)["html"]
         assert "word-break:break-all" in html  # fallback URL block

     @pytest.mark.asyncio
-    async def test_uses_transactional_from_addr(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
-        assert _call_kwargs(mock_send)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
+    async def test_uses_transactional_from_addr(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
+        assert _call_kwargs(mock_send_email)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]

     @pytest.mark.asyncio
-    async def test_preheader_mentions_expiry(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_preheader_mentions_expiry(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
+        html = _call_kwargs(mock_send_email)["html"]
         # preheader is hidden span; should mention minutes
         assert "display:none" in html  # preheader present

     @pytest.mark.asyncio
-    async def test_design_elements_present(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
-        _assert_common_design(_call_kwargs(mock_send)["html"])
+    async def test_design_elements_present(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "tok"})
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])

     @pytest.mark.asyncio
-    async def test_respects_lang_parameter(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_magic_link({"email": "user@example.com", "token": "tok", "lang": "de"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_respects_lang_parameter(self, mock_send_email):
+        await handle_send_magic_link({"email": "user@example.com", "token": "tok", "lang": "de"})
+        html = _call_kwargs(mock_send_email)["html"]
         _assert_common_design(html, lang="de")

 # ── Welcome ──────────────────────────────────────────────────────
@@ -110,59 +102,51 @@ class TestMagicLink:
 class TestWelcome:
     @pytest.mark.asyncio
-    async def test_sends_to_correct_recipient(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com"})
-        assert _call_kwargs(mock_send)["to"] == "new@example.com"
+    async def test_sends_to_correct_recipient(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com"})
+        assert _call_kwargs(mock_send_email)["to"] == "new@example.com"

     @pytest.mark.asyncio
-    async def test_subject_not_empty(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com"})
-        assert len(_call_kwargs(mock_send)["subject"]) > 5
+    async def test_subject_not_empty(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com"})
+        assert len(_call_kwargs(mock_send_email)["subject"]) > 5

     @pytest.mark.asyncio
-    async def test_html_contains_quickstart_links(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_html_contains_quickstart_links(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com"})
+        html = _call_kwargs(mock_send_email)["html"]
         assert "/planner" in html
         assert "/markets" in html
         assert "/leads/quote" in html

     @pytest.mark.asyncio
-    async def test_uses_first_name_when_provided(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com", "name": "Alice Smith"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_uses_first_name_when_provided(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com", "name": "Alice Smith"})
+        html = _call_kwargs(mock_send_email)["html"]
         assert "Alice" in html

     @pytest.mark.asyncio
-    async def test_fallback_greeting_when_no_name(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_fallback_greeting_when_no_name(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com"})
+        html = _call_kwargs(mock_send_email)["html"]
         # Should use "there" as fallback first_name
         assert "there" in html.lower()

     @pytest.mark.asyncio
-    async def test_uses_transactional_from_addr(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com"})
-        assert _call_kwargs(mock_send)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
+    async def test_uses_transactional_from_addr(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com"})
+        assert _call_kwargs(mock_send_email)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]

     @pytest.mark.asyncio
-    async def test_design_elements_present(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com"})
-        _assert_common_design(_call_kwargs(mock_send)["html"])
+    async def test_design_elements_present(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com"})
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])

     @pytest.mark.asyncio
-    async def test_german_welcome(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_welcome({"email": "new@example.com", "lang": "de"})
-        html = _call_kwargs(mock_send)["html"]
+    async def test_german_welcome(self, mock_send_email):
+        await handle_send_welcome({"email": "new@example.com", "lang": "de"})
+        html = _call_kwargs(mock_send_email)["html"]
         _assert_common_design(html, lang="de")

 # ── Quote Verification ───────────────────────────────────────────
@@ -180,57 +164,50 @@ class TestQuoteVerification:
     }

     @pytest.mark.asyncio
-    async def test_sends_to_contact_email(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification(self._BASE_PAYLOAD)
-        assert _call_kwargs(mock_send)["to"] == "lead@example.com"
+    async def test_sends_to_contact_email(self, mock_send_email):
+        await handle_send_quote_verification(self._BASE_PAYLOAD)
+        assert _call_kwargs(mock_send_email)["to"] == "lead@example.com"

     @pytest.mark.asyncio
-    async def test_html_contains_verify_link(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification(self._BASE_PAYLOAD)
-        html = _call_kwargs(mock_send)["html"]
+    async def test_html_contains_verify_link(self, mock_send_email):
+        await handle_send_quote_verification(self._BASE_PAYLOAD)
+        html = _call_kwargs(mock_send_email)["html"]
         assert "token=verify_tok" in html
         assert "lead=lead_tok" in html

     @pytest.mark.asyncio
-    async def test_html_contains_project_recap(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification(self._BASE_PAYLOAD)
-        html = _call_kwargs(mock_send)["html"]
+    async def test_html_contains_project_recap(self, mock_send_email):
+        await handle_send_quote_verification(self._BASE_PAYLOAD)
+        html = _call_kwargs(mock_send_email)["html"]
         assert "6 courts" in html
         assert "Indoor" in html
         assert "Germany" in html

     @pytest.mark.asyncio
-    async def test_uses_first_name_from_contact(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification(self._BASE_PAYLOAD)
-        html = _call_kwargs(mock_send)["html"]
+    async def test_uses_first_name_from_contact(self, mock_send_email):
+        await handle_send_quote_verification(self._BASE_PAYLOAD)
+        html = _call_kwargs(mock_send_email)["html"]
         assert "Bob" in html

     @pytest.mark.asyncio
-    async def test_handles_minimal_payload(self):
+    async def test_handles_minimal_payload(self, mock_send_email):
         """No court_count/facility_type/country — should still send."""
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification({
-                "email": "lead@example.com",
-                "token": "tok",
-                "lead_token": "ltok",
-            })
-        mock_send.assert_called_once()
+        await handle_send_quote_verification({
+            "email": "lead@example.com",
+            "token": "tok",
+            "lead_token": "ltok",
+        })
+        mock_send_email.assert_called_once()

     @pytest.mark.asyncio
-    async def test_uses_transactional_from_addr(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification(self._BASE_PAYLOAD)
-        assert _call_kwargs(mock_send)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
+    async def test_uses_transactional_from_addr(self, mock_send_email):
+        await handle_send_quote_verification(self._BASE_PAYLOAD)
+        assert _call_kwargs(mock_send_email)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]

     @pytest.mark.asyncio
-    async def test_design_elements_present(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_quote_verification(self._BASE_PAYLOAD)
-        _assert_common_design(_call_kwargs(mock_send)["html"])
+    async def test_design_elements_present(self, mock_send_email):
+        await handle_send_quote_verification(self._BASE_PAYLOAD)
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])

 # ── Lead Forward (the money email) ──────────────────────────────
@@ -238,89 +215,71 @@ class TestQuoteVerification:
 class TestLeadForward:
     @pytest.mark.asyncio
-    async def test_sends_to_supplier_email(self, db):
+    async def test_sends_to_supplier_email(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        assert _call_kwargs(mock_send)["to"] == "supplier@test.com"
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        assert _call_kwargs(mock_send_email)["to"] == "supplier@test.com"

     @pytest.mark.asyncio
-    async def test_subject_contains_heat_and_country(self, db):
+    async def test_subject_contains_heat_and_country(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        subject = _call_kwargs(mock_send)["subject"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        subject = _call_kwargs(mock_send_email)["subject"]
         assert "[HOT]" in subject
         assert "Germany" in subject
         assert "4 courts" in subject

     @pytest.mark.asyncio
-    async def test_html_contains_heat_badge(self, db):
+    async def test_html_contains_heat_badge(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        html = _call_kwargs(mock_send)["html"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        html = _call_kwargs(mock_send_email)["html"]
         assert "#DC2626" in html  # HOT badge color
         assert "HOT" in html

     @pytest.mark.asyncio
-    async def test_html_contains_project_brief(self, db):
+    async def test_html_contains_project_brief(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        html = _call_kwargs(mock_send)["html"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        html = _call_kwargs(mock_send_email)["html"]
         assert "Indoor" in html
         assert "Germany" in html

     @pytest.mark.asyncio
-    async def test_html_contains_contact_info(self, db):
+    async def test_html_contains_contact_info(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        html = _call_kwargs(mock_send)["html"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        html = _call_kwargs(mock_send_email)["html"]
         assert "lead@buyer.com" in html
         assert "mailto:lead@buyer.com" in html
         assert "John Doe" in html

     @pytest.mark.asyncio
-    async def test_html_contains_urgency_callout(self, db):
+    async def test_html_contains_urgency_callout(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        html = _call_kwargs(mock_send)["html"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        html = _call_kwargs(mock_send_email)["html"]
         # Urgency callout has yellow background
         assert "#FEF3C7" in html

     @pytest.mark.asyncio
-    async def test_html_contains_direct_reply_cta(self, db):
+    async def test_html_contains_direct_reply_cta(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        html = _call_kwargs(mock_send)["html"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        html = _call_kwargs(mock_send_email)["html"]
         # Direct reply link text should mention the contact email
         assert "lead@buyer.com" in html

     @pytest.mark.asyncio
-    async def test_uses_leads_from_addr(self, db):
+    async def test_uses_leads_from_addr(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-
-        assert _call_kwargs(mock_send)["from_addr"] == core.EMAIL_ADDRESSES["leads"]
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        assert _call_kwargs(mock_send_email)["from_addr"] == core.EMAIL_ADDRESSES["leads"]

     @pytest.mark.asyncio
-    async def test_updates_email_sent_at(self, db):
+    async def test_updates_email_sent_at(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db, create_forward=True)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock):
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})

         async with db.execute(
             "SELECT email_sent_at FROM lead_forwards WHERE lead_id = ? AND supplier_id = ?",
@@ -331,30 +290,24 @@ class TestLeadForward:
             assert row["email_sent_at"] is not None
 
     @pytest.mark.asyncio
-    async def test_skips_when_no_supplier_email(self, db):
+    async def test_skips_when_no_supplier_email(self, db, mock_send_email):
         """No email on supplier record — handler exits without sending."""
         lead_id, supplier_id = await _seed_lead_and_supplier(db, supplier_email="")
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-            mock_send.assert_not_called()
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        mock_send_email.assert_not_called()
 
     @pytest.mark.asyncio
-    async def test_skips_when_lead_not_found(self, db):
+    async def test_skips_when_lead_not_found(self, db, mock_send_email):
         """Non-existent lead_id — handler exits without sending."""
         _, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": 99999, "supplier_id": supplier_id})
-            mock_send.assert_not_called()
+        await handle_send_lead_forward_email({"lead_id": 99999, "supplier_id": supplier_id})
+        mock_send_email.assert_not_called()
 
     @pytest.mark.asyncio
-    async def test_design_elements_present(self, db):
+    async def test_design_elements_present(self, db, mock_send_email):
         lead_id, supplier_id = await _seed_lead_and_supplier(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
-            _assert_common_design(_call_kwargs(mock_send)["html"])
+        await handle_send_lead_forward_email({"lead_id": lead_id, "supplier_id": supplier_id})
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])
 
 # ── Lead Matched Notification ────────────────────────────────────
@@ -362,70 +315,55 @@ class TestLeadForward:
 class TestLeadMatched:
     @pytest.mark.asyncio
-    async def test_sends_to_lead_contact_email(self, db):
+    async def test_sends_to_lead_contact_email(self, db, mock_send_email):
         lead_id = await _seed_lead(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            assert _call_kwargs(mock_send)["to"] == "lead@buyer.com"
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        assert _call_kwargs(mock_send_email)["to"] == "lead@buyer.com"
 
     @pytest.mark.asyncio
-    async def test_subject_contains_first_name(self, db):
+    async def test_subject_contains_first_name(self, db, mock_send_email):
         lead_id = await _seed_lead(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            assert "John" in _call_kwargs(mock_send)["subject"]
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        assert "John" in _call_kwargs(mock_send_email)["subject"]
 
     @pytest.mark.asyncio
-    async def test_html_contains_what_happens_next(self, db):
+    async def test_html_contains_what_happens_next(self, db, mock_send_email):
         lead_id = await _seed_lead(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            html = _call_kwargs(mock_send)["html"]
-            # "What happens next" section and tip callout (blue bg)
-            assert "#F0F9FF" in html  # tip callout background
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        html = _call_kwargs(mock_send_email)["html"]
+        # "What happens next" section and tip callout (blue bg)
+        assert "#F0F9FF" in html  # tip callout background
 
     @pytest.mark.asyncio
-    async def test_html_contains_project_context(self, db):
+    async def test_html_contains_project_context(self, db, mock_send_email):
         lead_id = await _seed_lead(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            html = _call_kwargs(mock_send)["html"]
-            assert "Indoor" in html
-            assert "Germany" in html
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        html = _call_kwargs(mock_send_email)["html"]
+        assert "Indoor" in html
+        assert "Germany" in html
 
     @pytest.mark.asyncio
-    async def test_uses_leads_from_addr(self, db):
+    async def test_uses_leads_from_addr(self, db, mock_send_email):
         lead_id = await _seed_lead(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            assert _call_kwargs(mock_send)["from_addr"] == core.EMAIL_ADDRESSES["leads"]
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        assert _call_kwargs(mock_send_email)["from_addr"] == core.EMAIL_ADDRESSES["leads"]
 
     @pytest.mark.asyncio
-    async def test_skips_when_lead_not_found(self, db):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": 99999})
-            mock_send.assert_not_called()
+    async def test_skips_when_lead_not_found(self, db, mock_send_email):
+        await handle_send_lead_matched_notification({"lead_id": 99999})
+        mock_send_email.assert_not_called()
 
     @pytest.mark.asyncio
-    async def test_skips_when_no_contact_email(self, db):
+    async def test_skips_when_no_contact_email(self, db, mock_send_email):
         lead_id = await _seed_lead(db, contact_email="")
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            mock_send.assert_not_called()
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        mock_send_email.assert_not_called()
 
     @pytest.mark.asyncio
-    async def test_design_elements_present(self, db):
+    async def test_design_elements_present(self, db, mock_send_email):
         lead_id = await _seed_lead(db)
-
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_lead_matched_notification({"lead_id": lead_id})
-            _assert_common_design(_call_kwargs(mock_send)["html"])
+        await handle_send_lead_matched_notification({"lead_id": lead_id})
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])
 
 # ── Supplier Enquiry ─────────────────────────────────────────────
@@ -441,50 +379,43 @@ class TestSupplierEnquiry:
     }
 
     @pytest.mark.asyncio
-    async def test_sends_to_supplier_email(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
-            assert _call_kwargs(mock_send)["to"] == "supplier@corp.com"
+    async def test_sends_to_supplier_email(self, mock_send_email):
+        await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
+        assert _call_kwargs(mock_send_email)["to"] == "supplier@corp.com"
 
     @pytest.mark.asyncio
-    async def test_subject_contains_contact_name(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
-            assert "Alice Smith" in _call_kwargs(mock_send)["subject"]
+    async def test_subject_contains_contact_name(self, mock_send_email):
+        await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
+        assert "Alice Smith" in _call_kwargs(mock_send_email)["subject"]
 
     @pytest.mark.asyncio
-    async def test_html_contains_message(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
-            html = _call_kwargs(mock_send)["html"]
-            assert "4 courts" in html
-            assert "alice@buyer.com" in html
+    async def test_html_contains_message(self, mock_send_email):
+        await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
+        html = _call_kwargs(mock_send_email)["html"]
+        assert "4 courts" in html
+        assert "alice@buyer.com" in html
 
     @pytest.mark.asyncio
-    async def test_html_contains_respond_fast_nudge(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
-            html = _call_kwargs(mock_send)["html"]
-            # The respond-fast nudge line should be present
-            assert "24" in html  # "24 hours" reference
+    async def test_html_contains_respond_fast_nudge(self, mock_send_email):
+        await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
+        html = _call_kwargs(mock_send_email)["html"]
+        # The respond-fast nudge line should be present
+        assert "24" in html  # "24 hours" reference
 
     @pytest.mark.asyncio
-    async def test_skips_when_no_supplier_email(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email({**self._BASE_PAYLOAD, "supplier_email": ""})
-            mock_send.assert_not_called()
+    async def test_skips_when_no_supplier_email(self, mock_send_email):
+        await handle_send_supplier_enquiry_email({**self._BASE_PAYLOAD, "supplier_email": ""})
+        mock_send_email.assert_not_called()
 
     @pytest.mark.asyncio
-    async def test_uses_transactional_from_addr(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
-            assert _call_kwargs(mock_send)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
+    async def test_uses_transactional_from_addr(self, mock_send_email):
+        await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
+        assert _call_kwargs(mock_send_email)["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
 
     @pytest.mark.asyncio
-    async def test_design_elements_present(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
-            _assert_common_design(_call_kwargs(mock_send)["html"])
+    async def test_design_elements_present(self, mock_send_email):
+        await handle_send_supplier_enquiry_email(self._BASE_PAYLOAD)
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])
 
 # ── Waitlist (supplement existing test_waitlist.py) ──────────────
@@ -494,33 +425,29 @@ class TestWaitlistEmails:
     """Verify design & content for waitlist confirmation emails."""
 
     @pytest.mark.asyncio
-    async def test_general_waitlist_has_preheader(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({"email": "u@example.com", "intent": "signup"})
-            html = _call_kwargs(mock_send)["html"]
-            assert "display:none" in html  # preheader span
+    async def test_general_waitlist_has_preheader(self, mock_send_email):
+        await handle_send_waitlist_confirmation({"email": "u@example.com", "intent": "signup"})
+        html = _call_kwargs(mock_send_email)["html"]
+        assert "display:none" in html  # preheader span
 
     @pytest.mark.asyncio
-    async def test_supplier_waitlist_mentions_plan(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({"email": "s@example.com", "intent": "supplier_growth"})
-            kw = _call_kwargs(mock_send)
-            assert "growth" in kw["subject"].lower()
-            assert "supplier" in kw["html"].lower()
+    async def test_supplier_waitlist_mentions_plan(self, mock_send_email):
+        await handle_send_waitlist_confirmation({"email": "s@example.com", "intent": "supplier_growth"})
+        kw = _call_kwargs(mock_send_email)
+        assert "growth" in kw["subject"].lower()
+        assert "supplier" in kw["html"].lower()
 
     @pytest.mark.asyncio
-    async def test_general_waitlist_design_elements(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({"email": "u@example.com", "intent": "signup"})
-            _assert_common_design(_call_kwargs(mock_send)["html"])
+    async def test_general_waitlist_design_elements(self, mock_send_email):
+        await handle_send_waitlist_confirmation({"email": "u@example.com", "intent": "signup"})
+        _assert_common_design(_call_kwargs(mock_send_email)["html"])
 
     @pytest.mark.asyncio
-    async def test_supplier_waitlist_perks_listed(self):
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({"email": "s@example.com", "intent": "supplier_pro"})
-            html = _call_kwargs(mock_send)["html"]
-            # Should have <li> perks
-            assert html.count("<li>") >= 3
+    async def test_supplier_waitlist_perks_listed(self, mock_send_email):
+        await handle_send_waitlist_confirmation({"email": "s@example.com", "intent": "supplier_pro"})
+        html = _call_kwargs(mock_send_email)["html"]
+        # Should have <li> perks
+        assert html.count("<li>") >= 3
 
 # ── DB seed helpers ──────────────────────────────────────────────
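The tests above all take a `mock_send_email` fixture, whose conftest.py definition is not shown in this diff. A minimal sketch of how such a fixture presumably works — patching `padelnomics.worker.send_email` with an `AsyncMock` for the duration of a test — written here as a plain generator with a stand-in module so it runs standalone (the real fixture would carry a `@pytest.fixture` decorator):

```python
import asyncio
import sys
import types
from unittest.mock import AsyncMock, patch

# Stand-in for padelnomics.worker so this sketch runs standalone;
# the real fixture patches the actual module.
pkg = types.ModuleType("padelnomics")
worker = types.ModuleType("padelnomics.worker")
worker.send_email = None  # placeholder; replaced by patch() below
pkg.worker = worker
sys.modules["padelnomics"] = pkg
sys.modules["padelnomics.worker"] = worker

def mock_send_email():
    """Generator form of the presumed conftest fixture: swap send_email
    for an AsyncMock, yield it to the test, unpatch on teardown."""
    with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as m:
        yield m

# Usage, as a test body would exercise it:
gen = mock_send_email()
mock = next(gen)  # fixture setup

async def fake_handler():
    # stands in for the handle_send_* handlers that call send_email
    from padelnomics.worker import send_email
    await send_email(to="u@example.com", subject="hi",
                     html="<p>hi</p>", from_addr="x@y.com")

asyncio.run(fake_handler())
mock.assert_called_once()
assert mock.call_args.kwargs["to"] == "u@example.com"
gen.close()  # fixture teardown (restores the original attribute)
```

With this shape, every test that previously opened its own `with patch(...)` block collapses to one extra parameter, which is exactly the compression the commit message describes.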

View File

@@ -10,7 +10,6 @@ import sqlite3
 from unittest.mock import AsyncMock, patch
 
 import pytest
-from padelnomics.core import utcnow_iso
 from padelnomics.migrations.migrate import migrate
 from padelnomics import core
@@ -25,25 +24,6 @@ def mock_csrf_validation():
     yield
 
-@pytest.fixture
-async def admin_client(app, db):
-    """Test client with an admin-role user session (module-level, follows test_content.py)."""
-    now = utcnow_iso()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("flags_admin@test.com", "Flags Admin", now),
-    ) as cursor:
-        admin_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = admin_id
-            yield c
-
 async def _set_flag(db, name: str, enabled: bool, description: str = ""):
     """Insert or replace a flag in the test DB."""
     await db.execute(

View File
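The deletions above (and the near-identical ones in the files that follow) remove seven copies of the `admin_client` fixture in favor of a single conftest.py definition, which this diff does not show. A runnable sketch of the DB-seeding logic those copies shared — table and column names taken from the deleted code, the rest (function name, schema) assumed for illustration, using synchronous sqlite3 rather than the suite's async driver:

```python
import sqlite3
from datetime import datetime, timezone

# Sketch of the seeding step the seven deleted fixtures all performed:
# insert a user, grant the 'admin' role, commit.
def seed_admin(db: sqlite3.Connection, email: str = "admin@test.com") -> int:
    now = datetime.now(timezone.utc).isoformat()
    cur = db.execute(
        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
        (email, "Admin", now),
    )
    admin_id = cur.lastrowid
    db.execute(
        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
    )
    db.commit()
    return admin_id

# Minimal schema to demonstrate the helper:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT, created_at TEXT)")
db.execute("CREATE TABLE user_roles (user_id INTEGER, role TEXT)")
admin_id = seed_admin(db)
role = db.execute(
    "SELECT role FROM user_roles WHERE user_id = ?", (admin_id,)
).fetchone()[0]
```

The real conftest fixture would additionally open `app.test_client()` and store `admin_id` in the session, as each deleted copy did; only the per-file email strings ("flags_admin@test.com", "pipeline-admin@test.com", …) varied.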

@@ -46,26 +46,6 @@ def _bypass_csrf():
     yield
 
-@pytest.fixture
-async def admin_client(app, db):
-    """Test client with an admin user pre-loaded in session."""
-    now = datetime.now(UTC).isoformat()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("admin@example.com", "Admin User", now),
-    ) as cursor:
-        user_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (user_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = user_id
-            yield c
-
 async def _insert_supplier(
     db,
     name: str = "Test Supplier",
View File

@@ -14,31 +14,10 @@ from unittest.mock import AsyncMock, MagicMock, patch
 import padelnomics.admin.pipeline_routes as pipeline_mod
 import pytest
-from padelnomics.core import utcnow_iso
 
 # ── Fixtures ──────────────────────────────────────────────────────────────────
 
-@pytest.fixture
-async def admin_client(app, db):
-    """Authenticated admin test client."""
-    now = utcnow_iso()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("pipeline-admin@test.com", "Pipeline Admin", now),
-    ) as cursor:
-        admin_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = admin_id
-            yield c
-
 @pytest.fixture
 def state_db_dir():
     """Temp directory with a seeded .state.sqlite for testing."""

View File

@@ -10,7 +10,6 @@ Covers:
 import json
 from unittest.mock import patch
 
-import pytest
 from padelnomics.content.health import (
     check_broken_scenario_refs,
     check_hreflang_orphans,
@@ -27,26 +26,6 @@ from padelnomics import core
 # ── Fixtures ──────────────────────────────────────────────────────────────────
 
-@pytest.fixture
-async def admin_client(app, db):
-    """Authenticated admin test client."""
-    now = utcnow_iso()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("pseo-admin@test.com", "pSEO Admin", now),
-    ) as cursor:
-        admin_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = admin_id
-            yield c
-
 # ── DB helpers ────────────────────────────────────────────────────────────────

View File

@@ -89,26 +89,6 @@ async def articles_data(db, seo_data):
     await db.commit()
 
-@pytest.fixture
-async def admin_client(app, db):
-    """Authenticated admin client."""
-    now = utcnow_iso()
-    async with db.execute(
-        "INSERT INTO users (email, name, created_at) VALUES (?, ?, ?)",
-        ("admin@test.com", "Admin", now),
-    ) as cursor:
-        admin_id = cursor.lastrowid
-    await db.execute(
-        "INSERT INTO user_roles (user_id, role) VALUES (?, 'admin')", (admin_id,)
-    )
-    await db.commit()
-    async with app.test_client() as c:
-        async with c.session_transaction() as sess:
-            sess["user_id"] = admin_id
-            yield c
-
 # ── Query function tests ─────────────────────────────────────
 
 class TestSearchPerformance:

View File

@@ -286,24 +286,20 @@ class TestLoadProxyTiers:
         assert len(tiers) == 1
         assert tiers[0] == ["http://res1:8080"]
 
-    def test_three_tiers_correct_order(self, monkeypatch):
+    def test_two_tiers_correct_order(self, monkeypatch):
         self._clear_proxy_env(monkeypatch)
-        with patch("padelnomics_extract.proxy.fetch_webshare_proxies", return_value=["http://user:pass@1.2.3.4:1080"]):
-            monkeypatch.setenv("WEBSHARE_DOWNLOAD_URL", "http://example.com/list")
-            monkeypatch.setenv("PROXY_URLS_DATACENTER", "http://dc1:8080")
-            monkeypatch.setenv("PROXY_URLS_RESIDENTIAL", "http://res1:8080")
-            tiers = load_proxy_tiers()
-        assert len(tiers) == 3
-        assert tiers[0] == ["http://user:pass@1.2.3.4:1080"]  # free
-        assert tiers[1] == ["http://dc1:8080"]  # datacenter
-        assert tiers[2] == ["http://res1:8080"]  # residential
+        monkeypatch.setenv("PROXY_URLS_DATACENTER", "http://dc1:8080")
+        monkeypatch.setenv("PROXY_URLS_RESIDENTIAL", "http://res1:8080")
+        tiers = load_proxy_tiers()
+        assert len(tiers) == 2
+        assert tiers[0] == ["http://dc1:8080"]  # datacenter (tier 1)
+        assert tiers[1] == ["http://res1:8080"]  # residential (tier 2)
 
-    def test_webshare_fetch_failure_skips_tier(self, monkeypatch):
+    def test_webshare_env_var_is_ignored(self, monkeypatch):
         self._clear_proxy_env(monkeypatch)
-        with patch("padelnomics_extract.proxy.fetch_webshare_proxies", return_value=[]):
-            monkeypatch.setenv("WEBSHARE_DOWNLOAD_URL", "http://example.com/list")
-            monkeypatch.setenv("PROXY_URLS_DATACENTER", "http://dc1:8080")
-            tiers = load_proxy_tiers()
+        monkeypatch.setenv("WEBSHARE_DOWNLOAD_URL", "http://example.com/list")
+        monkeypatch.setenv("PROXY_URLS_DATACENTER", "http://dc1:8080")
+        tiers = load_proxy_tiers()
         assert len(tiers) == 1
         assert tiers[0] == ["http://dc1:8080"]
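These tests describe the post-change contract of `load_proxy_tiers`: datacenter proxies form tier 1, residential tier 2, and `WEBSHARE_DOWNLOAD_URL` no longer contributes a tier. A hypothetical sketch of a loader satisfying that contract — the env-var names come from the tests above, but the function body and comma-separated parsing are assumptions, not the real implementation:

```python
import os

# Hypothetical loader matching the two-tier behavior the tests assert:
# one list per set env var, datacenter before residential, Webshare ignored.
def load_proxy_tiers() -> list[list[str]]:
    tiers: list[list[str]] = []
    for var in ("PROXY_URLS_DATACENTER", "PROXY_URLS_RESIDENTIAL"):
        # assume comma-separated proxy URLs per variable
        urls = [u.strip() for u in os.environ.get(var, "").split(",") if u.strip()]
        if urls:
            tiers.append(urls)
    return tiers

os.environ["WEBSHARE_DOWNLOAD_URL"] = "http://example.com/list"  # has no effect
os.environ["PROXY_URLS_DATACENTER"] = "http://dc1:8080"
os.environ["PROXY_URLS_RESIDENTIAL"] = "http://res1:8080"
tiers = load_proxy_tiers()
```

Unset variables simply yield fewer tiers, which is why the second test above sees a single datacenter tier.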

View File

@@ -188,59 +188,55 @@ class TestWorkerTask:
     """Test send_waitlist_confirmation worker task."""
 
     @pytest.mark.asyncio
-    async def test_sends_entrepreneur_confirmation(self):
+    async def test_sends_entrepreneur_confirmation(self, mock_send_email):
         """Task sends confirmation email for entrepreneur signup."""
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({
-                "email": "entrepreneur@example.com",
-                "intent": "signup",
-            })
-            mock_send.assert_called_once()
-            call_args = mock_send.call_args
-            assert call_args.kwargs["to"] == "entrepreneur@example.com"
-            assert "notify you at launch" in call_args.kwargs["subject"].lower()
-            assert "waitlist" in call_args.kwargs["html"].lower()
+        await handle_send_waitlist_confirmation({
+            "email": "entrepreneur@example.com",
+            "intent": "signup",
+        })
+        mock_send_email.assert_called_once()
+        call_args = mock_send_email.call_args
+        assert call_args.kwargs["to"] == "entrepreneur@example.com"
+        assert "notify you at launch" in call_args.kwargs["subject"].lower()
+        assert "waitlist" in call_args.kwargs["html"].lower()
 
     @pytest.mark.asyncio
-    async def test_sends_supplier_confirmation(self):
+    async def test_sends_supplier_confirmation(self, mock_send_email):
         """Task sends confirmation email for supplier signup."""
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({
-                "email": "supplier@example.com",
-                "intent": "supplier_growth",
-            })
-            mock_send.assert_called_once()
-            call_args = mock_send.call_args
-            assert call_args.kwargs["to"] == "supplier@example.com"
-            assert "growth" in call_args.kwargs["subject"].lower()
-            assert "supplier" in call_args.kwargs["html"].lower()
+        await handle_send_waitlist_confirmation({
+            "email": "supplier@example.com",
+            "intent": "supplier_growth",
+        })
+        mock_send_email.assert_called_once()
+        call_args = mock_send_email.call_args
+        assert call_args.kwargs["to"] == "supplier@example.com"
+        assert "growth" in call_args.kwargs["subject"].lower()
+        assert "supplier" in call_args.kwargs["html"].lower()
 
     @pytest.mark.asyncio
-    async def test_supplier_email_includes_plan_name(self):
+    async def test_supplier_email_includes_plan_name(self, mock_send_email):
         """Supplier confirmation should mention the specific plan."""
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({
-                "email": "supplier@example.com",
-                "intent": "supplier_pro",
-            })
-            call_args = mock_send.call_args
-            html = call_args.kwargs["html"]
-            assert "pro" in html.lower()
+        await handle_send_waitlist_confirmation({
+            "email": "supplier@example.com",
+            "intent": "supplier_pro",
+        })
+        call_args = mock_send_email.call_args
+        html = call_args.kwargs["html"]
+        assert "pro" in html.lower()
 
     @pytest.mark.asyncio
-    async def test_uses_transactional_email_address(self):
+    async def test_uses_transactional_email_address(self, mock_send_email):
         """Task should use transactional sender address."""
-        with patch("padelnomics.worker.send_email", new_callable=AsyncMock) as mock_send:
-            await handle_send_waitlist_confirmation({
-                "email": "test@example.com",
-                "intent": "signup",
-            })
-            call_args = mock_send.call_args
-            assert call_args.kwargs["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
+        await handle_send_waitlist_confirmation({
+            "email": "test@example.com",
+            "intent": "signup",
+        })
+        call_args = mock_send_email.call_args
+        assert call_args.kwargs["from_addr"] == core.EMAIL_ADDRESSES["transactional"]
 
 # ── TestAuthRoutes ────────────────────────────────────────────────