# Circuit Breaker

## Problem
Build an HTTP proxy with a circuit breaker that protects a downstream backend. When the backend starts failing, the circuit should open and reject requests immediately — preventing a cascade of slow, doomed calls. After a cooldown period, it should probe with a single request and either recover or stay open.
## Background
Downstream services fail. Without protection, your service keeps sending requests to a dead backend — each one waiting for a timeout, tying up connections, and potentially cascading the failure upstream. A circuit breaker detects when a backend is unhealthy and fails fast, giving the downstream time to recover.
The pattern has three states:
- Closed — Traffic flows normally. Failures are counted. After a threshold of consecutive failures, the circuit opens.
- Open — All requests are rejected immediately with 503. After a cooldown period, the circuit transitions to half-open.
- Half-Open — A single probe request is allowed through. If it succeeds, the circuit closes (backend is healthy again). If it fails, the circuit re-opens and the cooldown restarts.
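The three states and their transitions can be sketched as a small, framework-free state machine. This is a minimal sketch, not a full solution: the 3-failure threshold and 5-second cooldown come from the requirements below, while the class and method names are illustrative.

```python
import time

class CircuitBreaker:
    """Three-state circuit breaker: closed -> open -> half-open -> closed/open."""

    def __init__(self, threshold=3, cooldown=5.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock        # injectable for deterministic tests
        self.failures = 0         # consecutive failures while closed
        self.opened_at = None     # time the circuit last opened, or None
        self.probing = False      # True while the half-open probe is in flight

    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.cooldown:
            return "half-open"
        return "open"

    def allow(self):
        """Decide whether the next request may reach the backend."""
        s = self.state()
        if s == "open":
            return False
        if s == "half-open":
            if self.probing:
                return False      # only one probe; extras are rejected
            self.probing = True
        return True

    def record_success(self):
        self.failures = 0
        self.opened_at = None     # probe succeeded: close the circuit
        self.probing = False

    def record_failure(self):
        self.probing = False
        if self.state() == "half-open":
            self.opened_at = self.clock()   # probe failed: re-open, restart cooldown
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()
```

Injecting the clock keeps the cooldown transition testable without real 5-second sleeps; the HTTP layer only needs `allow()` before each proxy call and `record_success()`/`record_failure()` after it.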
This is the same pattern Netflix popularized with Hystrix and that every major cloud provider implements in their service mesh.
## Requirements
- Implement `GET /api/resource` that proxies to `http://localhost:3001/backend/resource`
- Implement the circuit breaker state machine:
  - Closed: Forward all requests to the backend. Track consecutive failures. After 3 consecutive failures, transition to Open.
  - Open: Reject immediately with 503. After a 5-second cooldown, transition to Half-Open.
  - Half-Open: Allow one probe request through. If it succeeds, transition to Closed (reset the failure count). If it fails, transition back to Open (restart the cooldown timer).
- A successful backend response resets the consecutive failure count to 0
- Return an `X-Circuit-State` header on every response with the current state: `closed`, `open`, or `half-open`
- When the circuit is open, include a `Retry-After` header with the number of seconds remaining in the cooldown (rounded up)
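The `Retry-After` value above is the remaining cooldown rounded up to whole seconds. One way to compute it (a sketch; `opened_at`, `cooldown`, and `now` are assumed to be the open timestamp, cooldown length, and current time, all in seconds):

```python
import math

def retry_after(opened_at: float, cooldown: float, now: float) -> int:
    """Whole seconds remaining in the cooldown, rounded up, never negative."""
    return max(0, math.ceil(cooldown - (now - opened_at)))
```

Rounding up matters: a client that waits exactly `Retry-After` seconds should never arrive before the cooldown has actually elapsed.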
## Your Server

- Start an HTTP server on the port specified by the `PORT` environment variable (default: `3000`)
- The harness starts a mock backend on port `3001` before your server starts
- The backend returns `{ "data": "ok", "timestamp": ... }` on success, or `500` when it's configured to fail
## API Contract
### `GET /health`
Health check endpoint. Return 200 when your server is ready.
### `GET /api/resource`
Proxy to `http://localhost:3001/backend/resource` through the circuit breaker.
Responses:
| Circuit State | Backend Result   | Status | Body                               | Headers                                            |
| ------------- | ---------------- | ------ | ---------------------------------- | -------------------------------------------------- |
| Closed        | Success          | 200    | Backend response                   | `X-Circuit-State: closed`                          |
| Closed        | Failure          | 502    | `{"error": "backend unavailable"}` | `X-Circuit-State: closed`                          |
| Open          | —                | 503    | `{"error": "circuit open"}`        | `X-Circuit-State: open`, `Retry-After: <seconds>`  |
| Half-Open     | Success          | 200    | Backend response                   | `X-Circuit-State: half-open`                       |
| Half-Open     | Failure          | 502    | `{"error": "backend unavailable"}` | `X-Circuit-State: half-open`                       |
| Half-Open     | (extra requests) | 503    | `{"error": "circuit open"}`        | `X-Circuit-State: half-open`                       |
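The forwarding step behind this contract can be sketched with the standard library. This is one possible reading, not the required implementation: it assumes a 5xx response, a connection error, or a timeout each count as a failure for the breaker, and the helper name is illustrative.

```python
import urllib.request
import urllib.error

BACKEND_URL = "http://localhost:3001/backend/resource"

def call_backend(url=BACKEND_URL, timeout=2.0):
    """Forward one request; return (ok, status, body).

    ok=False marks a circuit-breaker failure: a 5xx status,
    a refused connection, or a timeout.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return (resp.status < 500, resp.status, resp.read())
    except urllib.error.HTTPError as e:
        return (e.code < 500, e.code, e.read())
    except (urllib.error.URLError, OSError):
        return (False, None, b"")
```

On `ok=True` the handler would record a success and relay the backend response; on `ok=False` it records a failure and returns the 502 body from the table above. The explicit `timeout` keeps a dead backend from tying up the handler, which is the whole point of failing fast.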
Your solution will be tested against these scenarios plus a hidden set.
- 01: `GET /api/resource` proxies to the backend and returns its response with `X-Circuit-State: closed`
- 02: After 3 consecutive backend failures, the circuit opens and rejects with 503
- 03: After the 5-second cooldown, the circuit enters half-open and allows one probe request through
- 04: If the half-open probe fails, the circuit re-opens and rejects subsequent requests
## Hints
Click each hint to reveal it. Take your time — try before you peek.