# photo-uploader-worker
Cloudflare Worker that backs the Photo Uploader app (super-epic #1470,
worker-cutover epic #1592). It owns the four Photo Uploader API routes that
previously lived as Netlify Functions, talks to Cloudflare D1 via a native
binding for photo metadata, and signs PUT URLs against the zmodmedia R2
bucket via aws4fetch. The Worker runs in two environments, preview and
production, each bound to its own D1 database (see wrangler.toml).
After the cutover, Netlify holds zero Cloudflare credentials. All Cloudflare access — D1 reads/writes, R2 signing, deploy auth — terminates at the Worker side or in GitHub Actions secrets.
## Architecture overview
A single Worker exposes four routes (3 POST + 1 GET), routed in src/index.ts:
| Method | Path | Purpose |
|---|---|---|
| POST | /photo-uploader-login | Verify operator password, mint a session cookie |
| POST | /photo-uploader-sign | Return a presigned PUT URL for an R2 original |
| POST | /photo-uploader-commit | Insert/upsert the photo row in D1, fire build hook |
| GET | /photos.json | Read photo rows from D1 for the build pipeline |
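As a hedged illustration of how four routes like these might be dispatched (handler bodies are stubs; the real routing in src/index.ts may be shaped differently):

```typescript
// Hypothetical dispatch table for the four routes. Handler bodies are
// placeholders — the actual handlers live in the Worker source.
type Handler = (req: Request) => Response;

const routes: Record<string, { method: string; handler: Handler }> = {
  "/photo-uploader-login": { method: "POST", handler: () => new Response("login") },
  "/photo-uploader-sign": { method: "POST", handler: () => new Response("sign") },
  "/photo-uploader-commit": { method: "POST", handler: () => new Response("commit") },
  "/photos.json": { method: "GET", handler: () => new Response("photos") },
};

export function route(req: Request): Response {
  const { pathname } = new URL(req.url);
  const entry = routes[pathname];
  if (!entry) return new Response("Not Found", { status: 404 });
  if (req.method !== entry.method) {
    return new Response("Method Not Allowed", { status: 405 });
  }
  return entry.handler(req);
}
```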
Two named envs in wrangler.toml:

- `[env.preview]` → Worker `photo-uploader-preview`, D1 `photos-metadata-preview`
- `[env.production]` → Worker `photo-uploader-prod`, D1 `photos-metadata`
D1 access is via the native binding (env.DB) — there is no REST client,
no CLOUDFLARE_API_TOKEN at runtime, no IDs to leak. Database IDs are baked
into wrangler.toml and only exist in the Worker config.
R2 PUT URLs are signed with aws4fetch.AwsClient using the S3-compatible
endpoint (https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com) and a 5-minute
TTL pinned via X-Amz-Expires=300 on the URL search params before signing.
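A sketch of that signing flow, under stated assumptions: the `Signer` interface stands in for aws4fetch's `AwsClient` (whose `sign` with `{ aws: { signQuery: true } }` resolves to a Request carrying the SigV4 signature in the query string), and account/bucket parameters follow the secrets runbook below. The real handler may differ.

```typescript
// Stand-in for aws4fetch's AwsClient so the sketch has no hard dependency;
// in the Worker this is `new AwsClient({ accessKeyId, secretAccessKey, ... })`.
interface Signer {
  sign(req: Request, init: { aws: { signQuery: boolean } }): Promise<Request>;
}

// Pin the 5-minute TTL on the URL search params before signing, as described above.
export function buildPutUrl(accountId: string, bucket: string, key: string): URL {
  const url = new URL(`https://${accountId}.r2.cloudflarestorage.com/${bucket}/${key}`);
  url.searchParams.set("X-Amz-Expires", "300");
  return url;
}

export async function presignPut(
  signer: Signer,
  accountId: string,
  bucket: string,
  key: string
): Promise<string> {
  const unsigned = buildPutUrl(accountId, bucket, key);
  // signQuery: true asks for a presigned URL rather than Authorization headers.
  const signed = await signer.sign(new Request(unsigned.toString(), { method: "PUT" }), {
    aws: { signQuery: true },
  });
  return signed.url;
}
```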
Bundle size: ~47 KiB total / ~12 KiB gzip after wrangler deploy --dry-run,
well under the 1 MB Workers free-tier cap.
## Local dev

```sh
pnpm --filter photo-uploader-worker dev
# or, from repo root:
pnpm worker:dev
```
This runs wrangler dev --env preview and binds against the preview D1
database. Default port: 8787. The dev server reads Worker secrets from
the deployed preview environment (Cloudflare-hosted), not from local
.env files — so secrets must be configured for --env preview first via
wrangler secret put (see Secrets runbook below). wrangler dev does
not read GitHub Actions secrets.
The Worker is a thin remote API; the Photo Uploader UI itself runs on
localhost:14189 (pnpm photo-uploader:dev) and points at the deployed
preview Worker URL via the Netlify rewrite (or a local override during
development).
## Secrets runbook
There are two distinct kinds of secrets in play. Don’t conflate them.
### 1. Worker runtime secrets — wrangler secret put
These are read by the Worker at request time via the env binding. They live
in Cloudflare, per environment, and must be configured for both
--env preview and --env production before the Worker can serve any real
traffic. New environments start empty.
Run each command twice — once with --env preview, once with
--env production:
```sh
# Operator password for /photo-uploader-login.
wrangler secret put PHOTO_UPLOADER_PASSWORD --env preview

# HMAC key for signing the session cookie. Generate fresh per environment.
# openssl rand -hex 32
wrangler secret put PHOTO_UPLOADER_SESSION_SECRET --env preview

# R2 S3-compatible credentials for aws4fetch — create a scoped R2 token in
# the Cloudflare dashboard (Object Read & Write on the zmodmedia bucket).
wrangler secret put R2_ACCOUNT_ID --env preview
wrangler secret put R2_ACCESS_KEY_ID --env preview
wrangler secret put R2_SECRET_ACCESS_KEY --env preview

# Bucket name. Only set this if you're deviating from the default
# (zmodmedia) — otherwise leave it unbound and let the Worker fall back.
wrangler secret put R2_BUCKET_NAME --env preview

# Netlify build hook URL fired after a successful /photo-uploader-commit.
# Copy from Netlify → Site settings → Build hooks (preview vs prod URLs differ).
wrangler secret put PHOTOS_BUILD_HOOK_URL --env preview

# Shared bearer token the build pipeline presents on /photos.json.
# openssl rand -hex 32
wrangler secret put BUILD_AUTH_TOKEN --env preview

# Shared bearer token the Photo Manager Tauri app presents on the admin
# endpoints (e.g. patching captions / hashtags / product links via the
# Worker). Generate fresh per environment, treat as a long-lived secret.
# openssl rand -hex 32
#
# This MUST be set on BOTH --env preview AND --env production before the
# admin endpoints will serve any request — they hard-fail closed when the
# binding is missing or the presented bearer doesn't match.
#
# Tauri-app pairing: the same value baked into PHOTO_ADMIN_TOKEN here is
# the value compiled into the Photo Manager Tauri app build at build
# time (per-env). Rotating the Worker secret without rebuilding the
# Tauri app with the matching value will lock the operator out of admin
# until the app is rebuilt and reinstalled.
wrangler secret put PHOTO_ADMIN_TOKEN --env preview

# Repeat all of the above with --env production.
```
To inspect what’s currently bound:

```sh
wrangler secret list --env preview
wrangler secret list --env production
```
To rotate a value, run wrangler secret put again with the same name — it
overwrites in place.
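The hard-fail-closed behavior called out above for the bearer-token secrets (BUILD_AUTH_TOKEN, PHOTO_ADMIN_TOKEN) can be sketched like this. Illustrative only — the actual check lives in the Worker source; the constant-time comparison is a conventional precaution, not a confirmed detail of the implementation:

```typescript
// Constant-time string comparison so a mismatch doesn't leak a matching
// prefix via timing. Length differences short-circuit, which is acceptable
// here because token length is not secret.
function timingSafeEqual(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  return diff === 0;
}

// Fail closed: missing secret binding, missing header, or mismatch all deny.
export function checkBearer(
  authHeader: string | null,
  secret: string | undefined
): boolean {
  if (!secret) return false; // binding not configured: deny everything
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  return timingSafeEqual(authHeader.slice("Bearer ".length), secret);
}
```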
### 2. Deploy-time secrets — GitHub Actions
These are NOT Worker runtime secrets. They are only consumed by the deploy workflow so it can talk to the Cloudflare API on your behalf:
- `CLOUDFLARE_API_TOKEN` — a deploy-scoped API token with the permissions Workers Scripts → Edit and D1 → Read. Create it under Cloudflare dashboard → My Profile → API Tokens, scoped to the account that owns the Worker. This token is never read by the Worker at runtime; do not put it through `wrangler secret put`.
- `CLOUDFLARE_ACCOUNT_ID` — the Cloudflare account ID. Already configured as a repository GitHub Actions secret (shared with other Cloudflare workflows).
### 3. Things that look like Worker secrets but are not
- `CLOUDFLARE_API_TOKEN` — deploy-time only (above). Not a Worker runtime secret.
- `CLOUDFLARE_D1_DATABASE_ID`, `CLOUDFLARE_D1_DATABASE_ID_PREVIEW` — these database IDs are baked directly into `wrangler.toml` under `[[env.production.d1_databases]]` / `[[env.preview.d1_databases]]`. The D1 binding (`env.DB`) is wired by Wrangler at deploy time; the Worker never reads the raw IDs. Do not run `wrangler secret put` for these.
- `R2_*` bindings declared via `[[r2_buckets]]` in `wrangler.toml` — if / when we move from S3-compatible signing to a native R2 binding, the bucket binding replaces the per-key/secret credentials. Until then, the credential set above is the source of truth.
### Secret rotation procedure
For secrets that are duplicated on both sides (Worker runtime + Netlify env or Worker runtime + operator-known value), rotation has a brief skew window. Plan the order so the Worker leads:
- `PHOTO_UPLOADER_PASSWORD` — rotate the Worker secret first, then notify authorized uploader users out-of-band. There is no Netlify-side copy. Existing sessions stay valid until their cookie expires (24h TTL).
- `PHOTO_UPLOADER_SESSION_SECRET` — rotate the Worker secret. All existing session cookies become invalid immediately. Users must re-login. No Netlify-side copy.
- `BUILD_AUTH_TOKEN` — Worker secret AND Netlify env var must match. Rotate the Worker secret first, then update the Netlify env var (per context: production / preview). During the brief skew window, the next Netlify build with the old token gets 401 from the Worker and falls back to the filesystem snapshot — non-fatal, but the build won’t pick up new D1 captions until the env var is updated.
- `R2_*` credentials — rotate the R2 token in Cloudflare, then `wrangler secret put` the new values into both Worker envs. Until both are rotated, sign requests in the affected env produce broken PUT URLs. Keep the old token alive briefly to drain in-flight uploads, then revoke.
- `PHOTOS_BUILD_HOOK_URL` — rotate by creating a new Netlify build hook, setting it as the Worker secret, then deleting the old hook in Netlify.
- `PHOTO_ADMIN_TOKEN` — Worker secret AND the value compiled into the Photo Manager Tauri app build must match. Procedure:
  1. Generate the new token: `openssl rand -hex 32`.
  2. `wrangler secret put PHOTO_ADMIN_TOKEN --env preview` and again with `--env production` so both Workers carry the new value.
  3. Rebuild the Photo Manager Tauri app with the new token baked in (per-env builds), then reinstall on operator machines.
  4. Deploy the Workers if a code change rides along; the secret update itself takes effect immediately without a redeploy.

  There is no dual-token acceptance path on the Worker — old tokens are rejected the moment the secret is rotated, so plan rotation during a quiet window and have the rebuilt Tauri app ready before flipping production. Preview can lead so the rebuilt app can be smoke-tested against `--env preview` before touching `--env production`.
## D1 binding setup
The two D1 databases (production + preview) were created during epic #1573
sub-task 1a. Their IDs are baked into wrangler.toml:
```toml
[[env.production.d1_databases]]
binding = "DB"
database_name = "photos-metadata"
database_id = "0941ed32-ffe6-4ad0-9345-111b17c7d497"

[[env.preview.d1_databases]]
binding = "DB"
database_name = "photos-metadata-preview"
database_id = "4ebba8cb-efb0-4f20-bc36-9666ed4662ec"
```
The schema is already applied to both databases (see schema/photos.sql at
the repo root for the canonical definition). The Worker reads/writes via
env.DB.prepare(...).bind(...) — no migration tooling, no ORM.
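Since there is no migration tooling or ORM, queries are plain parameterized SQL. A hedged sketch of the shape an upsert in /photo-uploader-commit might take — table and column names here are illustrative, not the real schema (see schema/photos.sql for that):

```typescript
// Pure query builder, kept separate from the D1 binding so it is testable.
export interface PhotoRow {
  slug: string;
  caption: string;
}

export function buildPhotoUpsert(row: PhotoRow): { sql: string; params: unknown[] } {
  return {
    sql:
      "INSERT INTO photos (slug, caption) VALUES (?1, ?2) " +
      "ON CONFLICT(slug) DO UPDATE SET caption = excluded.caption",
    params: [row.slug, row.caption],
  };
}

// At request time, roughly:
//   const q = buildPhotoUpsert(body);
//   await env.DB.prepare(q.sql).bind(...q.params).run();
```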
If you need to bootstrap a fresh D1 database (e.g., when standing up a new preview environment), use:
```sh
# One-time create (only if the database does not yet exist):
wrangler d1 create photos-metadata-preview

# Apply the schema:
wrangler d1 execute photos-metadata-preview \
  --remote --file=schema/photos.sql
```
Then update the matching database_id in wrangler.toml and commit.
## Schema evolution policy
Future schema changes are append-only at runtime: the Worker should tolerate missing columns gracefully (typed reads, default fallbacks) so a deploy that lands before the schema migration doesn’t break.
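One way to implement that tolerance is to route every column access through a typed accessor with a default, so a Worker deployed ahead of a migration still returns sane rows. A sketch under assumptions — field names like `products_json` are illustrative:

```typescript
// Defensive readers for D1 result rows: missing or wrongly-typed columns
// fall back to a caller-supplied default instead of throwing.
type Row = Record<string, unknown>;

export function readString(row: Row, col: string, fallback = ""): string {
  const v = row[col];
  return typeof v === "string" ? v : fallback;
}

export function readJson<T>(row: Row, col: string, fallback: T): T {
  const v = row[col];
  if (typeof v !== "string") return fallback; // column absent pre-migration
  try {
    return JSON.parse(v) as T;
  } catch {
    return fallback; // malformed value: degrade, don't 500
  }
}
```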
There are two databases that must both be updated, by name, against the
remote Cloudflare D1 instance (--remote is critical — without it,
Wrangler writes to a local in-memory D1 and the cloud copy is untouched):
| Env | D1 database name | Wrangler invocation |
|---|---|---|
| Production | photos-metadata | wrangler d1 execute photos-metadata --remote --file=<path> |
| Preview | photos-metadata-preview | wrangler d1 execute photos-metadata-preview --remote --file=<path> |
### Re-applying the canonical schema
schema/photos.sql uses CREATE TABLE IF NOT EXISTS and
CREATE INDEX IF NOT EXISTS, so re-running it on an existing database is a
no-op for already-migrated columns and indexes. Use this when bootstrapping
a fresh DB or to confirm baseline shape:
```sh
wrangler d1 execute photos-metadata \
  --remote --file=schema/photos.sql
wrangler d1 execute photos-metadata-preview \
  --remote --file=schema/photos.sql
```
Note: because the whole CREATE TABLE is wrapped in IF NOT EXISTS, adding a
new column to that statement only takes effect against fresh databases. For
existing databases, the actual column is added by the matching dated
migration file (next section). Both files stay in lockstep so the canonical
file remains the source of truth for fresh DBs while the migrations carry
incremental diffs forward.
### Incremental migrations
Dated migration files live under schema/migrations/YYYYMMDD-description.sql
and contain just the incremental DDL (e.g. an ALTER TABLE … ADD COLUMN).
Migrations are applied in the order they land on main; do not re-order or
rewrite history.
Apply pattern (replace the filename — run BOTH commands, once per env):
```sh
# Production:
wrangler d1 execute photos-metadata \
  --remote --file=schema/migrations/YYYYMMDD-description.sql

# Preview:
wrangler d1 execute photos-metadata-preview \
  --remote --file=schema/migrations/YYYYMMDD-description.sql
```
Concrete example — applying the products_json column migration:
```sh
wrangler d1 execute photos-metadata \
  --remote --file=schema/migrations/20260427-add-products-json.sql
wrangler d1 execute photos-metadata-preview \
  --remote --file=schema/migrations/20260427-add-products-json.sql
```
Operator workflow:

1. Land the migration file + matching `schema/photos.sql` edit on `main` (or the active base branch).
2. Run the migration against `photos-metadata-preview` first, deploy the Worker to `--env preview`, smoke-test.
3. Run the migration against `photos-metadata` (production), then deploy the Worker to `--env production`.
Because the Worker is written to tolerate missing columns, the order of “apply migration” vs “deploy Worker” within an env is not strict — but running the migration first removes one variable from the rollout.
## R2 bucket CORS
The browser uploads originals directly to the zmodmedia R2 bucket using
the presigned PUT URL minted by /photo-uploader-sign. R2 buckets serve no
CORS by default, so without an explicit allow-list the browser preflight
fails with `No 'Access-Control-Allow-Origin' header is present on the requested resource` and the upload never starts (#1640).
The canonical config lives at r2-cors.json next to wrangler.toml. It
mirrors the uploader allow-list in src/auth.ts (ALLOWED_ORIGIN_PATTERNS)
so the R2 ACL stays in sync with what the Worker accepts.
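A hedged sketch of the shape such an allow-list check might take — the pattern values below are stand-ins, not the real entries in src/auth.ts:

```typescript
// Stand-in patterns; the actual list lives in src/auth.ts as
// ALLOWED_ORIGIN_PATTERNS and must be mirrored in r2-cors.json by hand.
const ALLOWED_ORIGIN_PATTERNS: RegExp[] = [
  /^https:\/\/example\.netlify\.app$/, // hypothetical deployed site origin
  /^http:\/\/localhost:\d+$/,          // local dev UI (e.g. localhost:14189)
];

export function isAllowedOrigin(origin: string | null): boolean {
  if (!origin) return false; // no Origin header: reject
  return ALLOWED_ORIGIN_PATTERNS.some((p) => p.test(origin));
}
```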
Apply / re-apply (idempotent):

```sh
wrangler r2 bucket cors put zmodmedia --file ./r2-cors.json
```

Inspect:

```sh
wrangler r2 bucket cors list zmodmedia
```

Delete (nuclear; only if you intend to lock down R2 to no browser origins):

```sh
wrangler r2 bucket cors delete zmodmedia
```
When you change ALLOWED_ORIGIN_PATTERNS in src/auth.ts, update
r2-cors.json to match in the same PR and re-apply with the put command
above. The two stay in sync by hand — there is no automation.
R2 CORS is a single bucket-level config; it is not environment-scoped
(prod vs preview) the way Worker secrets and D1 bindings are. Both envs sign
URLs against the same zmodmedia bucket, so one CORS config covers both.
## Monitoring and logs

By default, Worker logs land in Cloudflare Dashboard → Workers & Pages → your Worker → Logs. Live tail for the active environment:

```sh
wrangler tail --env preview
wrangler tail --env production
```
For longer retention or routing to an external sink (R2, S3, Datadog, etc.), configure Logpush in the Cloudflare dashboard. Logpush is a follow-up if needed and not in scope for the cutover epic.
The Worker uses console.log / console.error for diagnostic output —
console.log is acceptable here (Workers context, not the website source
which the repo’s CI rule covers).
## Deploy

Manual deploys (Wrangler from your machine):

```sh
pnpm worker:deploy:preview
pnpm worker:deploy:prod
# or:
pnpm --filter photo-uploader-worker run deploy:preview
pnpm --filter photo-uploader-worker run deploy:prod
```
Automated deploys are the normal path — see
.github/workflows/deploy-photo-uploader-worker.yml.
The workflow deploys:

- push to `main` → `--env production`
- push to `base/photo-uploader-workers` or `base/top-page-renewal` → `--env preview`
- `workflow_dispatch` → choose `preview` or `production` from the UI
The workflow is path-filtered to sub-packages/photo-uploader-worker/** so
unrelated commits don’t waste CI minutes. The deploy step uses a SHA-pinned
cloudflare/wrangler-action@v3.15.0. Tests run before the deploy step;
a failing pnpm test blocks the deploy.
## Rollback procedure
The Worker cutover is an atomic group of changes (Wave 3 sub-issues #1597, #1598, #1599). Rollback strategy depends on what failed:
Worker code regression (e.g., a deploy ships a broken handler):

```sh
# Roll back to the previous Worker version via Cloudflare dashboard:
# Workers & Pages → your Worker → Deployments → choose previous → Rollback
# Or via wrangler:
wrangler rollback --env production --message "rolling back broken handler"
```
This restores the previous Worker code without touching D1 data or Netlify.
Cutover-level regression (the rewrite path itself is broken — Netlify is forwarding traffic to a Worker that can’t handle it):
The legacy Netlify functions were deleted in #1599. There is no
two-line revert — restoring service requires reverting the rewrite block
in netlify.toml AND static/_redirects, AND restoring the deleted
handlers from git history. Procedure:
1. Identify the merge commit that brought in #1599 (`Merge photo-uploader-workers/sub7-delete-legacy ...`).
2. `git revert -m 1 <merge-commit-sha>` to bring the deleted handlers back.
3. Revert the rewrite blocks in `netlify.toml` and `static/_redirects` (the merge commits for #1597 and the netlify.toml piece of the cutover).
4. Push to `main`. Netlify redeploys without the rewrite, and the restored functions take over.
This is a deliberately heavy rollback — it exists so the cutover is recoverable but not so easy that it’s the first instinct. Prefer fixing the Worker forward.
D1 data corruption is a separate concern: D1 has no point-in-time
restore on the free tier. Treat the on-disk photo-metadata-db.json
snapshot in git as the recovery source if needed (scripts/seed-d1-photos.mjs
seeds D1 from it).
## Operator runbook (initial cutover)
Reproduced verbatim from epic #1592 for searchability — this is the one-time
procedure to run before merging the base PR into base/top-page-renewal.
1. `pnpm dlx wrangler login` (one-time, per operator machine).
2. Set Worker secrets via `wrangler secret put` for both `--env production` and `--env preview`:
   - `PHOTO_UPLOADER_PASSWORD`
   - `PHOTO_UPLOADER_SESSION_SECRET` (32-byte hex from `openssl rand -hex 32`)
   - `R2_ACCOUNT_ID`, `R2_ACCESS_KEY_ID`, `R2_SECRET_ACCESS_KEY`, `R2_BUCKET_NAME`
   - `PHOTOS_BUILD_HOOK_URL` (existing per-env Netlify build hook URLs)
   - `BUILD_AUTH_TOKEN` (32-byte hex from `openssl rand -hex 32`; same value goes into Netlify env)
3. Trigger the GitHub Actions workflow (`Deploy Photo Uploader Worker`) → both Workers deploy. Confirm both `photo-uploader-preview` and `photo-uploader-prod` show in Cloudflare dashboard with green deploys.
4. Manually verify a sign → PUT → commit flow against preview (use the Photo Uploader UI on `localhost:14189` pointed at preview, or `curl`).
5. Update Netlify env vars:
   - Delete: `CLOUDFLARE_API_TOKEN`, `CLOUDFLARE_ACCOUNT_ID`, `CLOUDFLARE_D1_DATABASE_ID`, `CLOUDFLARE_D1_DATABASE_ID_PREVIEW`
   - Add (per context): `PHOTOS_BUILD_READ_URL` (production Worker URL for `production` context, preview Worker URL for the rest), `BUILD_AUTH_TOKEN` (matches Worker secret)
6. Trigger a Netlify preview build, confirm `pnpm photos:build` resolves through the Worker (build log line: `photos:build: reading from Cloudflare Worker /photos.json`).
7. Merge the base PR into `base/top-page-renewal`.
Close each sub-issue as its implementation merges.
## Reference
- Epic: #1592 — Photo Uploader Workers cutover
- Super-epic: #1470 — Photo Uploader
- Deploy workflow: `.github/workflows/deploy-photo-uploader-worker.yml`
- D1 schema: `schema/photos.sql`
- Wrangler config: `wrangler.toml`
- Netlify rewrite: `netlify.toml` and `static/_redirects` (proxy `/.netlify/functions/photo-uploader-*` to the Worker)
- Build read script: `scripts/build-photos-metadata.mjs` (consumes `/photos.json`, falls back to filesystem snapshot)