Solving the Time-Traveler's Dilemma: Engineering a Deterministic Global Spotlight
How I solved the greedy algorithm problem in daily content scheduling using two-phase orchestration and backward planning.
Building a "Daily Photo" feature sounds simple until you remember that "today" is a relative concept. At any given moment, the world spans nearly 48 hours of timezones, from UTC+14 in Kiritimati to UTC-12 in Baker Island.
When I set out to build the spotlight system for my photography portfolio, I didn't just want to show a random photo every day. I wanted a system that felt intelligent, curating content based on holidays (Christmas, World Photography Day), historical anniversaries ("On This Day"), and aesthetic themes that matched the season.
But I faced two architectural challenges that would make or break the entire system:
Challenge 1: Timezone Chaos. A naive cron job running at midnight local time would create a fragmented history where the "Daily Photo" changed unpredictably depending on who asked and when. What happens when it's already tomorrow in Tokyo but still today in Los Angeles?
Challenge 2: The Greedy Algorithm Trap. If I simply picked the best photo for the earliest timezone, that selection might consume a high-quality candidate that would have been perfect for a special occasion occurring 20 hours later in a different timezone. By the time the algorithm reached the latest timezones, the pool of best photos would be depleted.
The solution? A Deterministic Orchestration system with a twist: it plans the future first.
In this deep dive, I'll break down how I used Two-Phase Planning, the Skeleton-Hydration Pattern, and Distributed Locking to build a self-correcting, globally consistent content feed that serves coherent narratives to users regardless of where they are on the planet.
The core of the system is the Cron Orchestrator, running daily at 10:00 AM UTC. This specific time is strategic: it sits comfortably between the end of the day in the furthest west timezones and the start of the day in the furthest east, allowing me to pre-compute the spotlight for the entire global date window (up to three calendar dates at once) in a single pass.
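To make that window concrete, here's a minimal sketch of how those dates can be enumerated, assuming Luxon (which the hydration code later in this post uses); `globalDateWindow` is a hypothetical name, not the production function:

```typescript
import { DateTime } from "luxon";

// Sketch: enumerate every calendar date that is "today" somewhere on Earth
// at the moment the orchestrator fires. Offsets run from UTC-12 to UTC+14;
// stepping in 15-minute increments covers zones like UTC+5:45.
export function globalDateWindow(now: DateTime = DateTime.utc()): string[] {
  const dates = new Set<string>();
  for (let minutes = -12 * 60; minutes <= 14 * 60; minutes += 15) {
    // Adding the offset to UTC gives that zone's local wall-clock date
    const isoDate = now.plus({ minutes }).toISODate();
    if (isoDate) dates.add(isoDate);
  }
  return Array.from(dates).sort();
}

// At 10:00 UTC this yields three dates: yesterday (far west), today, and
// tomorrow (far east) -- the window the orchestrator plans in one pass.
```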
To solve the greedy algorithm problem, I engineered a two-phase planning strategy that fundamentally changes how the system thinks about time:
First, I scan the entire global date window for Special Occasions like "World Photography Day" or "Christmas." These dates get first priority and can reuse photos regardless of exclusion lists. Why? Because a Christmas photo should appear on Christmas, even if it was shown recently. Context matters more than variety for these special moments.
Here's where the magic happens. For standard showcase days, I don't plan chronologically. Instead, I sort the dates in DESCENDING order: furthest in the future first. By planning Sunday before Saturday, I ensure that the future date gets its optimal pick, and the earlier date must work with the remaining pool.
This backward-planning approach ensures that a high-priority future event isn't cannibalized by a random Tuesday. The system literally travels to the future, makes its selections, then works backward to the present.
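A toy example makes the difference concrete. This sketch uses hypothetical data and helpers, not the production code:

```typescript
// Toy sketch: the same candidate pool planned forward vs. backward.
type Assignment = { date: string; photoId: number };

// Candidates per date, best-first. Photo 1 is the ideal pick for Sunday
// but also scores well for Saturday.
const candidates: Record<string, number[]> = {
  "2025-11-22": [1, 2], // Saturday
  "2025-11-23": [1, 3], // Sunday
};

function plan(dates: string[]): Assignment[] {
  const excluded = new Set<number>();
  return dates.map((date) => {
    // Greedily take the best candidate not yet claimed by another date
    const photoId = candidates[date].find((id) => !excluded.has(id))!;
    excluded.add(photoId);
    return { date, photoId };
  });
}

console.log(plan(["2025-11-22", "2025-11-23"]));
// Forward (greedy): Saturday grabs photo 1, Sunday settles for photo 3
console.log(plan(["2025-11-23", "2025-11-22"]));
// Backward: Sunday secures photo 1 first; Saturday still gets photo 2
```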
The Two-Phase Orchestration Pipeline
The implementation reflects this two-phase philosophy. Notice how Phase 2 explicitly sorts showcaseDates in descending order using b.localeCompare(a). This ensures we process the furthest future date before the immediate present.
As each date is processed, selected photo IDs are added to an exclusion list (excludedIds), preventing duplication and ensuring fair distribution across the entire window.
Here's the orchestrator logic:
```typescript
// ============================================
// PHASE 1: Process Priority Days (Special Occasions)
// ============================================
const priorityDates = datePlans
  .filter((plan) => plan.hasSpecialOccasion)
  .map((plan) => plan.dateISO);

for (const dateISO of priorityDates) {
  try {
    // Check if already cached
    const cached = await getDailyPhotoSkeleton(dateISO);
    if (cached) {
      // Lock cached IDs to prevent reuse in showcase days
      cached.photoIds.forEach((ref) => excludedIds.add(ref.id));
      results.push({
        dateISO,
        success: true,
        type: cached.type,
        selectedIds: cached.photoIds.map((ref) => ref.id),
      });
      continue;
    }

    // Compute fresh selection
    // Note: Special occasions do NOT pass excludedIds - they can reuse photos
    const skeleton = await computeDailyPhotoSelection(dateISO);

    // Cache it
    await setDailyPhotoSkeleton(
      dateISO,
      skeleton,
      getSkeletonTtlSeconds(dateISO),
    );

    // Lock selected IDs for subsequent days
    const selectedIds = skeleton.photoIds.map((ref) => ref.id);
    selectedIds.forEach((id) => excludedIds.add(id));

    results.push({ dateISO, success: true, type: skeleton.type, selectedIds });
  } catch (error) {
    results.push({
      dateISO,
      success: false,
      error: (error as Error).message,
    });
  }
}

// ============================================
// PHASE 2: Process Showcase & On This Day (in Descending Order)
// ============================================
// Sort remaining dates in DESCENDING order (furthest date first)
// This ensures photos planned for future days aren't used on earlier days
const showcaseDates = datePlans
  .filter((plan) => !plan.hasSpecialOccasion)
  .map((plan) => plan.dateISO)
  .sort((a, b) => b.localeCompare(a)); // Descending order - TIME TRAVEL!

for (const dateISO of showcaseDates) {
  try {
    // Check if already cached
    const cached = await getDailyPhotoSkeleton(dateISO);
    if (cached) {
      // Lock cached IDs for earlier dates
      cached.photoIds.forEach((ref) => excludedIds.add(ref.id));
      results.push({
        dateISO,
        success: true,
        type: cached.type,
        selectedIds: cached.photoIds.map((ref) => ref.id),
      });
      continue;
    }

    // Compute fresh selection with exclusion list
    const orchestration: OrchestrationContext = {
      excludedIds: Array.from(excludedIds),
      isOrchestrated: true,
    };
    const skeleton = await computeDailyPhotoSelection(dateISO, orchestration);

    // Cache it
    await setDailyPhotoSkeleton(
      dateISO,
      skeleton,
      getSkeletonTtlSeconds(dateISO),
    );

    // Lock selected IDs for earlier dates in the window
    const selectedIds = skeleton.photoIds.map((ref) => ref.id);
    selectedIds.forEach((id) => excludedIds.add(id));

    results.push({ dateISO, success: true, type: skeleton.type, selectedIds });
  } catch (error) {
    results.push({
      dateISO,
      success: false,
      error: (error as Error).message,
    });
  }
}
```
Once the orchestrator determines which date to process, the actual selection logic kicks in. I use a strict 3-tier system with no fallbacks between tiers to maintain contextual clarity (a dispatch sketch follows the list):
- Tier 1: Special Occasions - Holidays and commemorative events
- Tier 2: On This Day - Historical anniversaries from past years
- Tier 3: Showcase - High-quality aesthetic selections
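The real computeDailyPhotoSelection isn't shown in this post, but the tier dispatch might look roughly like this sketch; the probe helpers are assumed names:

```typescript
// Minimal sketch of strict tier dispatch; hasSpecialOccasion and
// hasOnThisDayPhotos are assumed helper names, not the post's actual API.
type SelectionType = "special-occasion" | "on-this-day" | "showcase";

declare function hasSpecialOccasion(dateISO: string): Promise<boolean>;
declare function hasOnThisDayPhotos(dateISO: string): Promise<boolean>;

// Tiers are probed in priority order, but a date belongs to exactly one
// tier: a thin "On This Day" pool is never padded with Showcase photos.
async function selectTier(dateISO: string): Promise<SelectionType> {
  if (await hasSpecialOccasion(dateISO)) return "special-occasion";
  if (await hasOnThisDayPhotos(dateISO)) return "on-this-day";
  return "showcase";
}
```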
For the Showcase tier, I didn't want purely random photos. I implemented a metric I call Backtrack Distance: a measure of how close a photo's original capture date (month/day) is to the current target date.
This means in November, the system naturally prioritizes photos taken in November of previous years. The result? Snowy photos appear in winter, beach photos in summer, and autumn foliage dominates in fall, all without explicitly hardcoding seasonal rules. The content feels seasonally appropriate through subtle algorithmic nudging.
The selection logic sorts candidates first by selection_count (how often they've been shown), then by backtrack distance as a tiebreaker. Photos that have never been shown AND were taken near the current calendar date rise to the top.
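The sorting code below calls a calculateBacktrackDistance helper that isn't shown in the excerpt. Here's a plausible sketch, assuming Luxon dates and ignoring leap-day edge cases:

```typescript
import { DateTime } from "luxon";

// Plausible sketch of the helper (the real implementation isn't shown):
// circular distance in days between the photo's capture month/day and the
// target date's month/day, ignoring the year entirely.
function calculateBacktrackDistance(takenAt: Date, target: DateTime): number {
  // Project the capture date onto the target's year so only month/day matter
  const projected = DateTime.fromJSDate(takenAt).set({ year: target.year });
  const raw = Math.abs(
    projected.startOf("day").diff(target.startOf("day"), "days").days,
  );
  // Wrap around the year boundary: Dec 28 is 4 days from Jan 1, not 361
  // (leap years are ignored in this sketch)
  return Math.min(raw, 365 - raw);
}
```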
Here's the sorting logic that consumes it:
```typescript
/**
 * Query showcase candidates using selection_count and backtrack distance.
 */
async function queryShowcaseCandidateIds(
  date: DateTime,
  excludedIds: number[] = [],
): Promise<number[]> {
  // Build the WHERE clause
  const whereConditions = [eq(photos.isPublic, true)];
  if (excludedIds.length > 0) {
    whereConditions.push(notInArray(photos.id, excludedIds));
  }

  const rows = await drizzleDb
    .select({
      id: photos.id,
      taken_at: photos.takenAt,
      selection_count: photos.selectionCount,
    })
    .from(photos)
    .where(and(...whereConditions))
    .orderBy(asc(photos.selectionCount), desc(photos.takenAt));

  // Secondary sort: within same selection_count, prioritize by backtrack distance
  return rows
    .sort((a, b) => {
      // First: strictly by selection_count (least shown first)
      if (a.selection_count !== b.selection_count) {
        return a.selection_count - b.selection_count;
      }
      // Second: by backtrack distance (photos taken closer to today first)
      const distanceA = calculateBacktrackDistance(a.taken_at, date);
      const distanceB = calculateBacktrackDistance(b.taken_at, date);
      return distanceA - distanceB; // ASCENDING order - smaller distance first
    })
    .map((row) => row.id);
}
```
Storing full photo objects in Redis would be a mistake. Metadata changes, love counts update, and signed URLs expire. If I cached the entire object for 24 hours, users would see stale metadata, incorrect engagement metrics, and broken image URLs.
I adopted the Skeleton-Hydration Pattern instead. The orchestrator computes and caches only the minimal decision data (the "Skeleton"), which includes:
- Photo IDs (just the numbers)
- Selection type (Special Day, On This Day, Showcase)
- Selection metadata (reason, caption)
When a client requests the daily photo, the API:
- Fetches the lightweight skeleton from Redis (~100 bytes)
- Immediately "hydrates" it with fresh data from Postgres (descriptions, tags, current URLs, love counts)
- Returns the complete, current photo object to the client
This pattern keeps the cache lightweight and ensures strict data consistency. The hydration process is fast (single query by ID) and allows for last-mile personalization like checking if the current user has liked the photo.
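Concretely, here's a sketch of the skeleton shape and the read path. The date, type, and photoIds fields are used by code elsewhere in this post; the exact signatures (including getDailyPhoto) are illustrative assumptions:

```typescript
// Sketch of the skeleton shape and the read path, not the production types.
interface PhotoRef {
  id: number;
  label?: string;
  isFresh?: boolean;
}

interface DailyPhotoSkeleton {
  date: string; // ISO date the selection applies to
  type: "special-occasion" | "on-this-day" | "showcase";
  photoIds: PhotoRef[]; // just IDs: ~100 bytes, and URLs can never go stale
}

declare function getDailyPhotoSkeleton(
  dateISO: string,
): Promise<DailyPhotoSkeleton | null>; // Redis GET
declare function hydrateDailyPhotoSkeleton(
  skeleton: DailyPhotoSkeleton,
): Promise<unknown>; // the real version is shown below

// Read path: cheap Redis fetch, then hydrate fresh data from Postgres
async function getDailyPhoto(dateISO: string) {
  const skeleton = await getDailyPhotoSkeleton(dateISO);
  if (!skeleton) return null; // cache miss: fall through to the compute path
  return hydrateDailyPhotoSkeleton(skeleton);
}
```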
Here's the hydration logic:
```typescript
/**
 * Hydrate a skeleton into a full CachedDailyPhoto with all photo data.
 */
export async function hydrateDailyPhotoSkeleton(
  skeleton: DailyPhotoSkeleton,
): Promise<CachedDailyPhoto> {
  const uniqueIds = Array.from(
    new Set(skeleton.photoIds.map((ref) => ref.id).filter(Boolean)),
  );

  const hydrated = uniqueIds.length ? await hydratePhotos(uniqueIds) : [];

  if (uniqueIds.length && hydrated.length < uniqueIds.length) {
    throw new StaleSkeletonError(
      `Expected ${uniqueIds.length} photos but only found ${hydrated.length}`,
    );
  }

  const orderedPhotos =
    skeleton.photoIds.length > 0
      ? orderPhotosById(
          skeleton.photoIds.map((ref) => ref.id),
          hydrated,
        )
      : hydrated;

  orderedPhotos.forEach((photo, index) => {
    const ref = skeleton.photoIds[index];
    if (!ref) return;
    photo.selectionLabel = ref.label;
    if (ref.isFresh) {
      photo.isFreshSelection = true;
    }
  });

  const referenceDate = DateTime.fromISO(skeleton.date, {
    zone: DAILY_PHOTO_SOURCE_TIMEZONE,
  }).startOf("day");

  const yearMap: Record<number, number> = {};
  orderedPhotos.forEach((photo) => {
    const years = calculateYearsAgo(photo.taken_at, referenceDate);
    if (typeof years === "number") {
      yearMap[photo.id] = years;
    }
  });

  const expiresAt =
    referenceDate.plus({ days: 1 }).toISO() ??
    `${referenceDate.toFormat("yyyy-MM-dd")}T00:00:00Z`;

  // ... (the excerpt ends here; the function presumably assembles and
  // returns the CachedDailyPhoto from orderedPhotos, yearMap, and expiresAt)
}
```
Because this system is distributed across multiple server instances, we face two critical concurrency risks:
Risk 1: Cache Stampede. If the cache expires exactly when thousands of users visit, every request would attempt to run the expensive AI selection logic simultaneously. This would crush the database and waste compute on redundant calculations.
Risk 2: Orchestration Race. If the cron job overlaps with a manual admin override, we could corrupt the exclusion list, leading to duplicate photos or missing selections.
I implemented Distributed Locking using Redis SETNX (Set if Not Exists). Before computing anything, a process must acquire a lock. If it succeeds, it proceeds with the computation. If it fails (lock already held), it enters a polling wait loop until the skeleton appears in the cache.
This guarantees Deterministic Orchestration: only one process computes the selection for a given date, and everyone else waits for that authoritative result. The lock has a TTL (time-to-live) to prevent deadlocks if a process crashes mid-computation.
Here's the lock acquisition logic:
```typescript
/**
 * Acquire a distributed lock for computing daily photo selection.
 * Prevents race conditions when multiple requests hit cache miss simultaneously.
 *
 * @param dateISO - The date to lock
 * @returns Lock key if acquired, null if lock is held by another process
 */
export async function acquireComputeLock(
  dateISO: string,
): Promise<string | null> {
  if (!redisClient) {
    return `no-redis-${dateISO}`; // Allow computation without Redis
  }

  const lockKey = `${LOCK_PREFIX}${dateISO}`;

  try {
    // SETNX with expiry - atomic "set if not exists"
    const acquired = await redisClient.set(lockKey, Date.now().toString(), {
      nx: true,
      ex: LOCK_TTL_SECONDS,
    });

    if (acquired) {
      logCacheEvent("lock_acquired", { dateISO, lockKey });
      return lockKey;
    }

    logCacheEvent("lock_contention", { dateISO, lockKey }, "warn");
    return null;
  } catch (error) {
    logCacheEvent(
      "lock_acquire_error",
      { dateISO, error: (error as Error).message },
      "error",
    );
    return `error-fallback-${dateISO}`; // Allow computation on error
  }
}
```
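The other half of the story, the polling wait loop for processes that lose the race, isn't shown in the excerpt. Here's a minimal sketch, assuming the getDailyPhotoSkeleton cache getter from earlier and hypothetical timing constants:

```typescript
// Minimal sketch of the polling path for a process that loses the lock race;
// waitForSkeleton and the timing constants are assumed names.
type DailyPhotoSkeleton = { date: string; photoIds: { id: number }[] };
declare function getDailyPhotoSkeleton(
  dateISO: string,
): Promise<DailyPhotoSkeleton | null>;

const POLL_INTERVAL_MS = 250; // assumption: tune to typical selection latency
const MAX_WAIT_MS = 10_000; // assumption: bounded by the lock's TTL

const sleep = (ms: number) =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function waitForSkeleton(
  dateISO: string,
): Promise<DailyPhotoSkeleton | null> {
  const deadline = Date.now() + MAX_WAIT_MS;
  while (Date.now() < deadline) {
    // The lock holder writes the skeleton to the cache when it finishes
    const skeleton = await getDailyPhotoSkeleton(dateISO);
    if (skeleton) return skeleton;
    await sleep(POLL_INTERVAL_MS);
  }
  return null; // give up: the caller can retry the lock or fail soft
}
```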
The final piece of the puzzle is handling manual overrides. If I decide to manually feature a specific photo for tomorrow, that changes the availability pool for the days after tomorrow. The carefully orchestrated plan is invalidated.
I built a Cascade Recompute system to handle this butterfly effect. When an override is placed:
- The system identifies all future cached skeletons (dates >= override date)
- It invalidates those cache entries
- It re-runs the orchestration for that window using the same descending-order logic
- The new timeline flows logically from the override point forward
This ensures the exclusion list remains valid and the global timeline heals itself automatically. No photos are accidentally repeated or skipped due to manual intervention.
Here's the cascade logic:
```typescript
export async function cascadeRecompute(triggerDateISO: string): Promise<void> {
  if (!isDailyPhotoCacheEnabled()) {
    return;
  }

  const normalized = normalizeDateISO(triggerDateISO);
  if (!normalized) {
    return;
  }

  // Check in-memory lock
  if (CASCADE_LOCKS.has(normalized)) {
    logRevalidationEvent(
      "cascade_skip",
      { triggerDate: normalized, reason: "already_running" },
      "warn",
    );
    return;
  }

  CASCADE_LOCKS.add(normalized);

  // Acquire global orchestration lock
  const orchestrationLock = await acquireOrchestrationLock(
    `cascade:${normalized}`,
  );
  if (!orchestrationLock) {
    CASCADE_LOCKS.delete(normalized);
    logRevalidationEvent(
      "cascade_skip",
      { triggerDate: normalized, reason: "orchestration_lock_held" },
      "warn",
    );
    return;
  }

  try {
    const keys = await listDailyPhotoCacheKeys();
    if (!keys.length) {
      return;
    }

    const targets = keys
      .map((key) => ({
        key,
        date: extractDateFromCacheKey(key),
      }))
      .filter(
        (entry) => entry.date && entry.date.localeCompare(normalized) >= 0,
      ) as { key: string; date: string }[];

    if (!targets.length) {
      return;
    }

    // Sort by date DESCENDING (furthest date first) - matches Cron Orchestrator strategy
    targets.sort((a, b) => b.date.localeCompare(a.date));

    logRevalidationEvent("cascade_start", {
      triggerDate: normalized,
      targetCount: targets.length,
      dates: targets.map((t) => t.date),
    });

    // ... (the excerpt ends here; the per-target recompute loop follows,
    // sketched below)
  } finally {
    // Assumed cleanup (not shown in the excerpt): the in-memory lock must be
    // released; the distributed lock is released or left to expire via TTL
    CASCADE_LOCKS.delete(normalized);
  }
}
```
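The excerpt stops just before the per-target work. Based on Phase 2 of the cron orchestrator above, the recompute loop plausibly looks like this sketch; deleteDailyPhotoSkeleton is an assumed name, while the other helpers appear earlier in this post:

```typescript
// Assumed reconstruction of the per-target loop (not shown in the excerpt).
// It mirrors Phase 2 of the cron orchestrator: walk the window
// furthest-date-first, accumulating exclusions so that recomputed earlier
// days cannot steal photos already granted to later days.
type Skeleton = { photoIds: { id: number }[] };

declare function deleteDailyPhotoSkeleton(key: string): Promise<void>; // assumed
declare function computeDailyPhotoSelection(
  dateISO: string,
  ctx?: { excludedIds: number[]; isOrchestrated: boolean },
): Promise<Skeleton>;
declare function setDailyPhotoSkeleton(
  dateISO: string,
  skeleton: Skeleton,
  ttlSeconds: number,
): Promise<void>;
declare function getSkeletonTtlSeconds(dateISO: string): number;

async function recomputeTargets(
  targets: { key: string; date: string }[], // already sorted descending
): Promise<void> {
  const excludedIds = new Set<number>();
  for (const target of targets) {
    await deleteDailyPhotoSkeleton(target.key); // drop the stale entry
    const skeleton = await computeDailyPhotoSelection(target.date, {
      excludedIds: Array.from(excludedIds),
      isOrchestrated: true,
    });
    await setDailyPhotoSkeleton(
      target.date,
      skeleton,
      getSkeletonTtlSeconds(target.date),
    );
    skeleton.photoIds.forEach((ref) => excludedIds.add(ref.id));
  }
}
```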
By moving from a simple "pick a photo for today" approach to a globally orchestrated timeline, I solved the fairness issues inherent in greedy scheduling algorithms. The system now thinks across timezones, plans from the future backward, and self-heals when manual interventions occur.
The architectural patterns that made this possible:
- Two-Phase Planning ensures priority events get first pick before the greedy algorithm depletes the pool
- Backward Planning (descending date order) prevents the present from stealing photos meant for future special occasions
- Skeleton-Hydration keeps cache payloads microscopic while ensuring data consistency
- Distributed Locking prevents cache stampedes and guarantees deterministic orchestration
- Cascade Revalidation maintains timeline consistency when manual overrides occur
The result is a system that feels alive and intelligent, serving a coherent narrative to users regardless of where they are on the planet, whether they're watching the sunrise in Kiritimati or the sunset on Baker Island.
For your next scheduling system, consider planning backwards. Sometimes the best way to determine the present is to first secure the future.