
Path: scripts/CLAUDE.md

scripts/ Directory Guidelines

R2 Scripts

Five scripts manage Cloudflare R2 image storage:

  • upload-images-to-r2.mjs — incremental upload (MD5/ETag comparison)
  • download-images-from-r2.mjs — incremental download
  • check-r2-orphans.mjs — detect R2 images without originals in /imgs/
  • verify-r2-sync.mjs — compare local vs R2 slugs (used in pre-push)
  • convimgs-and-upload.mjs — wrapper: convimgs then upload in one step
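The incremental upload/download scripts skip objects whose content is unchanged. A minimal sketch of the MD5/ETag comparison mentioned above (helper names here are hypothetical; for single-part uploads R2's ETag is the MD5 hex digest in double quotes):

```javascript
// Sketch: decide whether a local file needs re-upload by comparing its
// MD5 hash against the R2 object's ETag. For non-multipart uploads the
// ETag is the MD5 hex digest wrapped in double quotes.
import { createHash } from 'node:crypto';

export function md5Hex(buffer) {
  return createHash('md5').update(buffer).digest('hex');
}

export function isUnchanged(localBuffer, remoteEtag) {
  if (!remoteEtag) return false; // object missing in R2 → upload it
  // R2 returns the ETag quoted, e.g. '"d41d8cd9..."'.
  return remoteEtag.replaceAll('"', '') === md5Hex(localBuffer);
}
```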

R2 Layout

All R2 objects live in the zmodmedia bucket (shared with production image delivery). Two top-level prefixes are in use, plus a third reserved for the Photo Uploader epic (#1473):

| Prefix | Purpose | Shape |
| --- | --- | --- |
| images/p/{slug}/... | Product images (existing pipeline) | *.webp, mercari.png, metadata.json, optional ogp.jpg |
| photos/originals/{YYYY}/{MM}/{slug}.{ext} | Photo Uploader: user-uploaded originals | HEIC/JPEG/PNG preserved as-is |
| photos/variants/{slug}/{400w,800w,1600w}.webp | Photo Uploader: generated multi-res variants | three WebP files per slug |

The photos/ prefix is deliberately separate from images/p/: different record shape (photo-metadata-db.json, not metadata-db.json), different uploader, different lifecycle. See root CLAUDE.md “Photo Pipeline” for the full contract.

OGP Convention

See root CLAUDE.md “Image Pipeline > OGP Image Convention” section for full documentation.

All R2 scripts use @aws-sdk/client-s3 with S3Client (region: 'auto', endpoint: https://{accountId}.r2.cloudflarestorage.com).
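The client configuration can be sketched as follows (the helper name and credential parameter names are assumptions for illustration; the region and endpoint values follow the convention stated above):

```javascript
// Sketch of the config object passed to `new S3Client(...)` in the R2
// scripts. Helper name is hypothetical.
export function r2ClientConfig(accountId, accessKeyId, secretAccessKey) {
  return {
    region: 'auto',
    endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
    credentials: { accessKeyId, secretAccessKey },
  };
}
```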

.env Loading Pattern

R2 scripts load .env manually (no dotenv dependency). Follow this pattern:

// Load .env manually: parse KEY=VALUE lines, strip surrounding quotes,
// and never overwrite env vars that are already set.
import { readFile } from 'node:fs/promises';

// envPath resolution varies per script; a repo-root .env is shown here.
const envPath = new URL('../.env', import.meta.url);
let content = '';
try {
  content = await readFile(envPath, 'utf-8');
} catch (err) {
  if (err.code !== 'ENOENT') throw err; // .env is optional
}
for (const line of content.split('\n')) {
  const trimmed = line.trim();
  if (!trimmed || trimmed.startsWith('#')) continue;
  const eqIndex = trimmed.indexOf('=');
  if (eqIndex === -1) continue;
  const key = trimmed.slice(0, eqIndex).trim();
  let value = trimmed.slice(eqIndex + 1).trim();
  // Strip surrounding quotes ("value" or 'value')
  if (
    (value.startsWith('"') && value.endsWith('"')) ||
    (value.startsWith("'") && value.endsWith("'"))
  ) {
    value = value.slice(1, -1);
  }
  if (!(key in process.env)) {
    process.env[key] = value; // existing environment takes precedence
  }
}

Key rules:

  • Always strip surrounding quotes ("value" or 'value')
  • Never overwrite existing env vars (environment takes precedence over .env)
  • .env is optional — catch and ignore ENOENT

Console Output

  • Use console.error for status messages in CLI scripts that might have their stdout piped
  • process.exit(1) for errors, process.exit(0) for success
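The two rules above can be sketched as a tiny pattern (function names are hypothetical):

```javascript
// Status goes to stderr so `node script.mjs > out.json` captures only data.
export function report(status, message) {
  console.error(`[${status}] ${message}`); // stderr: safe when stdout is piped
}

export function exitCodeFor(ok) {
  return ok ? 0 : 1; // handed to process.exit() by the caller
}
```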

Photos R2 Scripts

Four scripts manage the Photo Uploader epic (#1473) pipeline. They parallel the product-image R2 scripts but target the photos/ prefix tree and the array-shaped photo-metadata-db.json.

  • download-photos-from-r2.mjs — download originals from photos/originals/ into ./photos/originals/{YYYY}/{MM}/ (creates a plain photos/ directory when the Dropbox symlink is absent)
  • build-photos-metadata.mjs — walk ./photos/originals/, emit 400w/800w/1600w.webp under static/photos/variants/{slug}/, and refresh ./photo-metadata-db.json (array, one record per line, sorted by slug). Uses a sidecar cache at ./photos/.build-cache.json so re-runs are no-ops. Never deletes fixture entries whose original is absent from the tree.
  • upload-photo-variants-to-r2.mjs — incremental upload of the three WebP widths under photos/variants/. photo-metadata-db.json stays in git — it is NOT uploaded to R2.
  • check-r2-photo-orphans.mjs — report R2 objects under photos/ with no matching slug in photo-metadata-db.json. Originals are reported as WARN (mid-flight uploads are legitimate); variants are reported as ERROR and exit non-zero.
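The WARN/ERROR split in the orphan check can be sketched as follows (the helper name and exact key patterns are assumptions; the classification rules are as stated above):

```javascript
// Classify an R2 key under photos/ against the set of known slugs from
// photo-metadata-db.json. Orphan originals are WARN (a mid-flight upload
// is legitimate); orphan variants are ERROR.
export function classifyOrphan(key, knownSlugs) {
  const original = key.match(/^photos\/originals\/\d{4}\/\d{2}\/(.+)\.[^.]+$/);
  if (original) return knownSlugs.has(original[1]) ? null : 'WARN';
  const variant = key.match(/^photos\/variants\/([^/]+)\//);
  if (variant) return knownSlugs.has(variant[1]) ? null : 'ERROR';
  return null; // not a photos/ object we track
}
```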

Shared helpers

scripts/lib/photo-helpers.mjs holds the pure slug / orientation / aspect-ratio / variant-width helpers used across the four scripts. Unit tests live at tests/unit/photo-helpers.test.ts.

Layout

photos/
  originals/{YYYY}/{MM}/{slug}.{heic,jpg,jpeg,png}   # local source (symlink to Dropbox)
  .build-cache.json                                   # sidecar cache, gitignored
static/photos/variants/{slug}/{400w,800w,1600w}.webp  # generated variants
photo-metadata-db.json                                # committed, one record per line

R2 mirrors the same shape under the photos/originals/ and photos/variants/ prefixes.

Metadata DB Merge Driver

Two scripts back the custom git merge driver for metadata-db.json (epic #1719):

  • merge-metadata-db.mjs — the merge driver entry point; invoked by git on conflicts, merges metadata-db.json by treating it as a slug-keyed map so disjoint additions auto-resolve.
  • lib/metadata-db-serializer.mjs — shared serializer/deserializer for the one-record-per-line JSON format used by metadata-db.json.
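The map-merge core of the driver can be sketched as follows. This is a simplified three-way merge over plain objects, not the real driver (which also reads and writes the one-record-per-line format via the serializer); the function name is hypothetical:

```javascript
// Slug-keyed three-way merge: additions and edits on disjoint slugs
// auto-resolve; a slug changed differently on both sides conflicts.
export function mergeBySlug(base, ours, theirs) {
  const merged = { ...ours };
  const conflicts = [];
  const slugs = new Set([
    ...Object.keys(base), ...Object.keys(ours), ...Object.keys(theirs),
  ]);
  for (const slug of slugs) {
    const b = JSON.stringify(base[slug]);
    const o = JSON.stringify(ours[slug]);
    const t = JSON.stringify(theirs[slug]);
    if (o === t) continue;       // same on both sides (or both absent)
    if (t === b) continue;       // only ours changed: keep ours
    if (o === b) {               // only theirs changed: take theirs
      if (theirs[slug] === undefined) delete merged[slug];
      else merged[slug] = theirs[slug];
      continue;
    }
    conflicts.push(slug);        // both changed differently
  }
  return { merged, conflicts };
}
```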

Metadata DB Build Script

build-metadata-db.mjs regenerates metadata-db.json from static/images/p/{slug}/metadata.json sidecars. It is non-destructive by default: the committed file is loaded as a baseline, the local tree is scanned, and the output is union(baseline, scan) with the scan winning on overlapping slugs. This protects sparsely-populated worktrees and local clones from silently dropping baseline slugs.

Pass --prune (alias: --strict) for the legacy scan-only behavior. Use only when truly retiring slugs from a fully-populated local static/images/p/.

The script preserves a CI shortcut: when CI=true and static/images/p/ does not exist (Netlify’s clone), it no-ops and leaves the committed file as-is.

Pure helpers (loadBaseline, mergeBaselineWithScan, prunedSlugs) are exported and unit-tested in tests/unit/build-metadata-merge.test.ts.
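The union semantics described above can be sketched with hypothetical signatures for two of those helpers (the exported names come from the script; the exact signatures and record shapes are assumptions):

```javascript
// Non-destructive rebuild: union(baseline, scan) with scan winning on
// overlapping slugs, so baseline-only slugs survive a sparse worktree.
export function mergeBaselineWithScan(baseline, scan) {
  return { ...baseline, ...scan }; // scan wins on overlapping slugs
}

export function prunedSlugs(baseline, scan) {
  // Slugs that --prune (--strict) would drop: in the baseline, not in scan.
  return Object.keys(baseline).filter((slug) => !(slug in scan));
}
```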