# zmod-preorder-worker
Cloudflare Worker that backs the preorder / notify / reservation / keyword-log / email-template API for takazudomodular.com (epic #1798 — zzmod migration, sub-issue #1805).
Replaces the Netlify Functions in `netlify/functions/` over Waves 4–6. During
Waves 3–6 the Worker runs against the isolated `preorder-data-preview` D1 only;
production traffic continues to hit Netlify Blobs until the Wave 7 write-freeze +
cutover (see `__inbox/storage-backend-decision-20260516.md` § Decision 2).
## `keyword_logs` storage decision (locked in Wave 8)
`keyword_logs` uses KV (not D1). The `KEYWORD_LOGS` KV binding is declared on the
Pages project (`functions/api/_shared/types.ts`) and written in `functions/api/search.ts`
(fire-and-forget `kv.put()`). The admin read endpoint is `functions/api/keyword-logs.ts`.

The Wave 2 decision doc `storage-backend-decision-20260516.md` preferred D1 in the
abstract for all data, but Wave 3.2 (#1806) implemented KV with a defensive `if (kv)`
guard that makes the whole log pipeline gracefully optional. KV is the correct choice
for append-only timestamped log data (list-everything + key-per-event maps cleanly to
`KV.list` / `KV.put`). Switching to D1 would require `functions/api/keyword-logs.ts`
and `search.ts` to use `env.DB` instead of `env.KEYWORD_LOGS` — beyond Wave 8 scope.
The decision is locked: KV stays. The Wave 2 doc's D1 `keyword_logs` table definition
is superseded by this decision.
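The guard-plus-fire-and-forget shape can be sketched roughly as follows (a minimal illustration with stubbed types — `logKeyword`, its key format, and its payload shape are assumptions for this sketch, not the real `search.ts` code):

```ts
// Sketch of the optional keyword-log write. The KEYWORD_LOGS binding may be
// absent, so the whole pipeline is guarded and skippable by construction.

// Stub of the single KV method used here (illustrative, not the full KV API).
interface KeywordLogsKV {
  put(key: string, value: string): Promise<void>;
}

// Fire-and-forget: returns immediately; the write rides on waitUntil so the
// response is never blocked on KV.
function logKeyword(
  kv: KeywordLogsKV | undefined,
  query: string,
  waitUntil: (p: Promise<unknown>) => void,
): boolean {
  if (!kv) return false; // defensive guard — logging is gracefully optional
  const key = `${Date.now()}-${Math.random().toString(36).slice(2)}`; // key per event
  waitUntil(kv.put(key, JSON.stringify({ query, at: new Date().toISOString() })));
  return true;
}
```

The key-per-event layout is what makes `KV.list` a natural read path for the admin endpoint.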
## Architecture overview
A single Worker with all routing in `src/index.ts`:
| Method | Path | Purpose | Added in |
|---|---|---|---|
| GET | /healthz | Smoke endpoint | Wave 3 (this PR) |
| … | … | Wave 4 routes mount here | Wave 4 |
Two named envs in `wrangler.toml`:
- `[env.preview]` → Worker `zmod-preorder-preview`, D1 `preorder-data-preview`
- `[env.production]` → Worker `zmod-preorder-prod`, D1 `preorder-data`
D1 access is via the native binding (`env.DB`) — no REST client, no
`CLOUDFLARE_API_TOKEN` at runtime.
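A read through the binding looks roughly like this (a sketch — `getPreorder` and the `preorders` table/columns are hypothetical, typed against a minimal stub of the D1 prepared-statement surface rather than the real schema in `migrations/0001_init.sql`):

```ts
// Minimal stub of the D1 prepared-statement surface used below (illustrative).
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  first<T>(): Promise<T | null>;
}
interface D1Like {
  prepare(sql: string): D1PreparedStatement;
}

// Hypothetical row shape — the real schema lives in migrations/0001_init.sql.
interface PreorderRow {
  id: string;
  email: string;
}

// env.DB is bound at deploy time; no credentials cross the wire at runtime.
async function getPreorder(db: D1Like, id: string): Promise<PreorderRow | null> {
  return db
    .prepare('SELECT id, email FROM preorders WHERE id = ?1')
    .bind(id)
    .first<PreorderRow>();
}
```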
## Local dev
```sh
pnpm --filter zmod-preorder-worker dev
```
Runs `wrangler dev --env preview --port 8788` against the `preorder-data-preview` D1.
Port 8788 is pinned (see `wrangler.toml` `[dev] port = 8788`) to avoid colliding with
`photo-uploader-worker`, which uses port 8787. The dev server reads Worker secrets from
the deployed preview environment (Cloudflare-hosted), not from local `.env` files — so
secrets must be set via `wrangler secret put` first.
## Secrets runbook
### 1. Worker runtime secrets — `wrangler secret put`
Set each secret for both environments before the Worker serves real traffic.
Run each command twice — once with --env preview, once with --env production.
```sh
# Bearer token for admin endpoint authentication (Authorization: Bearer <token>).
# Generate: openssl rand -hex 32
wrangler secret put PREORDER_API_TOKEN --env preview

# Resend transactional email API key.
# Create at https://resend.com/api-keys (scoped to the takazudomodular.com domain).
wrangler secret put RESEND_API_KEY --env preview

# Discord webhook URL for public product notifications (no PII).
# Optional — if not set, Discord notifications are silently skipped.
# Create at: Discord Server Settings → Integrations → Webhooks
wrangler secret put DISCORD_WEBHOOK_URL --env preview

# Slack webhook URL for restock admin notifications (may contain PII).
# Optional — if not set, Slack restock notifications are silently skipped.
# Create at: Slack App → Incoming Webhooks
wrangler secret put SLACK_RESTOCK_WEBHOOK_URL --env preview

# Slack webhook URL for reservation admin notifications (may contain PII).
# Optional — if not set, Slack reservation notifications are silently skipped.
wrangler secret put SLACK_RESERVATION_WEBHOOK_URL --env preview

# Repeat all of the above with --env production.
```
To inspect bound secrets:
```sh
wrangler secret list --env preview
wrangler secret list --env production
```
To rotate: run `wrangler secret put` again with the same name — it overwrites in place.
### 2. Deploy-time secrets — GitHub Actions
These are consumed by the deploy workflow only (not Worker runtime):
- `CLOUDFLARE_API_TOKEN` — a token with Workers Scripts → Edit + D1 → Read permissions. Create under Cloudflare dashboard → My Profile → API Tokens.
- `CLOUDFLARE_ACCOUNT_ID` — the Cloudflare account ID. Already a repository secret.
### 3. Things that are NOT runtime secrets
- `CLOUDFLARE_API_TOKEN` — deploy-time only. Never run `wrangler secret put` for it.
- D1 `database_id` values — baked into `wrangler.toml`. The `env.DB` binding resolves them at deploy time; the Worker never reads raw IDs.
## D1 binding setup
One-time bootstrap (run per environment, once, before the first deploy):
```sh
wrangler d1 create preorder-data
wrangler d1 create preorder-data-preview
```
Capture the `database_id` values from each command's output and paste them into
the matching `[[env.production.d1_databases]]` and `[[env.preview.d1_databases]]`
sections of `wrangler.toml`, then commit.
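The resulting sections should end up looking roughly like this (the `database_id` values below are placeholders, not real IDs — paste the ones from the create output):

```toml
[[env.preview.d1_databases]]
binding = "DB"
database_name = "preorder-data-preview"
database_id = "<preview database_id from wrangler d1 create>"

[[env.production.d1_databases]]
binding = "DB"
database_name = "preorder-data"
database_id = "<production database_id from wrangler d1 create>"
```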
Apply the initial schema to both databases:
```sh
wrangler d1 execute preorder-data --remote --file=migrations/0001_init.sql
wrangler d1 execute preorder-data-preview --remote --file=migrations/0001_init.sql
```
## Schema evolution policy
Append-only, mirroring the photo-uploader-worker convention. Each schema
change lands as a dated file `migrations/YYYYMMDD-description.sql` containing
only the incremental DDL (`ALTER TABLE … ADD COLUMN`, new indexes, etc.).
`0001_init.sql` uses `IF NOT EXISTS` throughout so it can be re-applied to a
fresh database at any time.
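A migration under this policy might look like the following (the file name, table, and column are hypothetical — only the shape is the point):

```sql
-- migrations/20260601-add-preorder-note.sql (hypothetical example)
-- Incremental DDL only; earlier migration files are never edited.
ALTER TABLE preorders ADD COLUMN note TEXT;
CREATE INDEX IF NOT EXISTS idx_preorders_created_at ON preorders (created_at);
```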
Apply migrations to both databases:
```sh
# Production:
wrangler d1 execute preorder-data \
  --remote --file=migrations/YYYYMMDD-description.sql

# Preview:
wrangler d1 execute preorder-data-preview \
  --remote --file=migrations/YYYYMMDD-description.sql
```
Operator workflow: apply to preview first, deploy with `--env preview`, smoke-test,
then apply to production and deploy with `--env production`.
## Deploy
Manual:
```sh
pnpm --filter zmod-preorder-worker deploy:preview
pnpm --filter zmod-preorder-worker deploy:prod
```
Automated via `.github/workflows/deploy-zmod-preorder-worker.yml`:

- push to `main` → `--env production`
- push to `base/zzmod-migration` or `expreview/**` → `--env preview`
- `workflow_dispatch` → choose the env from the UI
Note: the automated workflow uses Blacksmith runners
(blacksmith-2vcpu-ubuntu-2204). These are not available on this repository
yet — the workflow will not run until the bigbang migration. Manual deploys via
the commands above work today.
## Code organisation
```
src/
  index.ts        — central router (Wave 4: add route mounts here only)
  lib/
    env.ts        — Env interface (D1 binding + runtime secrets)     [FROZEN]
    cors.ts       — CORS helpers: public allow-list + admin wildcard [FROZEN]
    response.ts   — JSON response builders                           [FROZEN]
    auth.ts       — Bearer token verification for admin routes       [FROZEN]
  routes/
    healthz.ts    — GET /healthz smoke endpoint (Wave 4 template)
    <name>.ts     — Wave 4 route files land here
tests/
  healthz.test.ts — Vitest tests for /healthz + router (Wave 4 template)
  <name>.test.ts  — Wave 4 test files land here
migrations/
  0001_init.sql   — D1 initial schema (all 5 tables)
```
The files in `src/lib/` are frozen — Wave 4 agents must not edit them.
Route files in `src/routes/` are owned per-topic by Wave 4 agents.
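The frozen `src/lib/cors.ts` is authoritative, but the public allow-list check it exposes can be pictured as follows (a sketch only — the listed origins are assumptions, not the real allow-list):

```ts
// Illustrative sketch — the real allow-list lives in the frozen src/lib/cors.ts.
const PUBLIC_ALLOWED_ORIGINS = new Set<string>([
  'https://takazudomodular.com', // assumption: production site origin
  'http://localhost:8788',       // assumption: pinned local dev port
]);

// Public routes echo the origin back only when it is on the allow-list;
// admin routes use a wildcard instead.
function isPublicOriginAllowed(origin: string | null): boolean {
  return origin !== null && PUBLIC_ALLOWED_ORIGINS.has(origin);
}
```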
## Wave 4 route-file template
This is the exact pattern a Wave 4 agent must follow to add a new route.
### Step 1 — Create `src/routes/<name>.ts`
```ts
/**
 * <METHOD> /<path> — brief description (Wave 4, sub-issue #XXXX).
 */
import type { Env } from '../lib/env';
import { jsonResponse, publicJsonResponse, publicErrorResponse } from '../lib/response';
import { isPublicOriginAllowed } from '../lib/cors';
// For admin routes instead:
// import { verifyBearerToken, adminJsonResponse, adminErrorResponse } from '../lib/auth';

export function myRouteHandler(
  request: Request,
  env: Env,
  _ctx: ExecutionContext,
): Promise<Response> | Response {
  const origin = request.headers.get('origin');

  // Admin route auth check (omit for public routes):
  // const authError = verifyBearerToken(request, env);
  // if (authError) return authError;

  // ... handler logic using env.DB for D1 access ...
  return jsonResponse({ success: true });
}
```
### Step 2 — Mount in `src/index.ts`
In the WAVE 4 EXTENSION POINT block:
```ts
import { myRouteHandler } from './routes/my-route';

// ...
if (method === 'POST' && pathname === '/api/my-route') {
  return myRouteHandler(request, env, ctx);
}
```
### Step 3 — Add `tests/<name>.test.ts`
Mirror the structure of `tests/healthz.test.ts`:
```ts
import { describe, expect, it } from 'vitest';
import worker from '../src/index';
import type { Env } from '../src/lib/env';

function makeEnv(overrides: Partial<Env> = {}): Env {
  return {
    DB: {} as D1Database,
    PREORDER_API_TOKEN: 'test-token',
    RESEND_API_KEY: 'test-resend-key',
    ...overrides,
  };
}

function makeCtx(): ExecutionContext {
  return {
    waitUntil: () => undefined,
    passThroughOnException: () => undefined,
  } as unknown as ExecutionContext;
}

describe('POST /api/my-route', () => {
  it('returns expected response', async () => {
    const req = new Request('https://worker.invalid/api/my-route', { method: 'POST' });
    const res = await worker.fetch(req, makeEnv(), makeCtx());
    expect(res.status).toBe(200);
  });
});
```
Then run:
```sh
pnpm --filter zmod-preorder-worker test
pnpm --filter zmod-preorder-worker typecheck
```
Both must pass before committing.
## Reference
- Epic: #1798 — zzmod migration
- Sub-issue: #1805 — Worker scaffold
- Storage decision: `__inbox/storage-backend-decision-20260516.md`
- D1 schema: `migrations/0001_init.sql`
- Wrangler config: `wrangler.toml`
- Deploy workflow: `.github/workflows/deploy-zmod-preorder-worker.yml`
- Pattern reference: `sub-packages/photo-uploader-worker/` (this package mirrors its structure)