
Request Coalescer

Preventing thundering herds with in-flight deduplication

Problem

Build a proxy endpoint that coalesces concurrent identical requests to a slow backend, ensuring only one backend call is made even when many clients request the same resource simultaneously.

Background

When a cache entry expires and many clients request the same resource at once, a naive implementation sends N requests to the backend — a thundering herd or cache stampede. Request coalescing (also called request deduplication or single-flighting) ensures only one request goes to the backend; all other concurrent callers wait for and share the result.

This pattern is critical in CDNs, API gateways, and any service that proxies to a slower backend.
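The in-flight deduplication described above can be sketched as a small helper: a map from key to the shared pending promise. This is a minimal sketch, not part of the contract — the name `coalesce` and the map are illustrative:

```javascript
// Map from key to the in-flight Promise shared by all concurrent callers.
const inflight = new Map();

// Run fetcher(key) at most once per key at a time; concurrent callers for
// the same key get the same promise. The entry is removed when the call
// settles, so the next request triggers a fresh call (coalescing, not caching).
function coalesce(key, fetcher) {
  if (inflight.has(key)) return inflight.get(key);
  const promise = fetcher(key).finally(() => inflight.delete(key));
  inflight.set(key, promise);
  return promise;
}
```

Note that `.finally` removes the entry on both success and failure, so a failed backend call is not replayed to later callers — they start a fresh attempt.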

Requirements

  • Implement GET /api/data/:key — a proxy endpoint that fetches from a simulated slow backend
  • The backend is available at http://localhost:3001/backend/:key (provided by the harness)
  • When multiple concurrent requests arrive for the same key, only one request should be made to the backend
  • All concurrent callers for the same key should receive the same response
  • Different keys should be fetched independently
  • After a request completes, subsequent requests for the same key should call the backend again (no persistent caching — just coalescing in-flight requests)

Your Server

  • Start an HTTP server on the port specified by the PORT environment variable (default: 3000)
  • The harness starts a simulated slow backend on port 3001 before your server starts
  • The backend responds with `{ "key": "...", "value": "...", "timestamp": ... }` after a 500ms delay

API Contract

`GET /health`

Health check endpoint. Return 200 when your server is ready.

`GET /api/data/:key`

Fetch data for the given key from the backend, with request coalescing.

Response:

  • Status: 200
  • Body: The backend's response `{ "key": "...", "value": "...", "timestamp": ... }`
Scenarios (4 visible + hidden)

Your solution will be tested against these scenarios plus a hidden set.

  1. GET /api/data/:key proxies to the backend and returns its response
  2. 10 concurrent requests for the same key result in only 1 backend call
  3. Requests for different keys each make their own backend call
  4. All concurrent callers receive the same response data
