Replace .gitlab/.gitlab-ci.yml with .gitea/workflows/ci.yaml, update
CI_JOB_TOKEN → github.token, CI_PIPELINE_IID → github.run_number, and
update setup instructions to point to git.padelnomics.io deploy keys.
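The resulting workflow might look like the sketch below (job name, runner label, and registry are placeholders; only the `github.token` / `github.run_number` substitutions come from this change):

```yaml
# Sketch of .gitea/workflows/ci.yaml after the migration; Gitea Actions
# uses the GitHub-compatible expression context.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        env:
          # was: CI_JOB_TOKEN
          REGISTRY_TOKEN: ${{ github.token }}
        run: |
          # was: CI_PIPELINE_IID
          echo "build #${{ github.run_number }}"
```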
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- setup_server.sh: add git/curl/ca-certificates apt install, install uv
as the service user, fix the SSH config write (write as root + chown
instead of a sudo heredoc), drop noisy log lines that set -e makes
redundant
- bootstrap_supervisor.sh: remove all tool installs (apt, uv, sops, age) —
setup_server.sh is now the single source of truth; strip to ~45 lines:
age-key check, clone/fetch, tag checkout, decrypt, uv sync, systemd enable
- readme.md: update step 1 and step 3 descriptions
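The SSH config that setup_server.sh writes for the service user might look like the following sketch (user name, key path, and `User git` are assumptions; only the git.padelnomics.io host comes from these commits):

```
Host git.padelnomics.io
  User git
  IdentityFile /home/beanflows_service/.ssh/id_ed25519
  IdentitiesOnly yes
```

Writing this file as root and then chown-ing it avoids the sudo-heredoc pitfall where `~` and `$HOME` expand to root's home.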
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- materia-supervisor.service: User=root → User=beanflows_service,
add PATH so uv (~/.local/bin) is found without a login shell
- setup_server.sh: full rewrite — creates beanflows_service (nologin),
generates SSH deploy key + age keypair as service user at XDG path
(~/.config/sops/age/keys.txt), installs age/sops/rclone as root,
prints both public keys + numbered next-step instructions
- bootstrap_supervisor.sh: full rewrite — removes GITLAB_READ_TOKEN
requirement, clones via SSH as service user, installs uv as service
user, decrypts with SOPS auto-discovery, uv sync as service user,
systemctl as root
- web/deploy.sh: remove self-contained sops/age install + keypair
generation; replace with simple sops check (exit if missing) and
SOPS auto-discovery decrypt (no explicit key file needed)
- infra/readme.md: update architecture diagram for beanflows_service
paths, update setup steps to match new scripts
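The relevant `[Service]` lines after this change might read as follows (ExecStart and the trailing PATH entries are placeholders; `User=beanflows_service` and the `~/.local/bin` PATH prefix come from the commit):

```ini
[Service]
User=beanflows_service
# uv lives in the service user's ~/.local/bin; no login shell runs,
# so PATH must be set explicitly.
Environment=PATH=/home/beanflows_service/.local/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/home/beanflows_service/.local/bin/uv run supervisor
```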
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Remove the distributed R2/Iceberg/SSH pipeline architecture in favor of
local subprocess execution with NVMe storage. Landing data is backed up
to R2 via an rclone timer.
- Strip Iceberg catalog, httpfs, boto3, paramiko, prefect, pyarrow
- Pipelines run via subprocess.run() with bounded timeouts
- Extract writes to {LANDING_DIR}/psd/{year}/{month}/{etag}.csv.gzip
- SQLMesh reads LANDING_DIR variable, writes to DUCKDB_PATH
- Delete unused provider stubs (ovh, scaleway, oracle)
- Add rclone systemd timer for R2 backup every 6h
- Update supervisor to run pipelines with env vars
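A minimal sketch of the supervisor's pipeline invocation, assuming a helper named `run_pipeline` (hypothetical) and illustrative env-var values; only the `subprocess.run()` call with a bounded timeout and the `LANDING_DIR` / `DUCKDB_PATH` variables come from the commit:

```python
import os
import subprocess
import sys

def run_pipeline(cmd, env_overrides, timeout_s=3600):
    """Run one pipeline step locally with a bounded timeout.

    Returns the exit code, or None if the step timed out.
    """
    env = {**os.environ, **env_overrides}
    try:
        result = subprocess.run(
            cmd, env=env, capture_output=True, text=True, timeout=timeout_s
        )
    except subprocess.TimeoutExpired:
        return None  # caller treats a timed-out pipeline as failed
    return result.returncode

# Illustrative invocation; the command is a stand-in for a real pipeline.
rc = run_pipeline(
    [sys.executable, "-c", "print('extract ok')"],
    {"LANDING_DIR": "/mnt/nvme/landing",
     "DUCKDB_PATH": "/mnt/nvme/warehouse.duckdb"},
    timeout_s=60,
)
```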
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>