
# Food Delivery API: Event-Driven Order Lifecycle & Live Tracking
I built a backend that treats "place order" as a fast, safe entry point, then moves the heavy work into workers with controlled concurrency. Order state is streamed to clients over Server-Sent Events (SSE) instead of polling, which cuts API chatter and keeps the database off the hot path.
## ⚡ Engineering Challenge: Realtime State Without Polling
Food delivery systems don't break on CRUD; they break on state transitions under concurrency. When hundreds of users place orders at the same time, a naive synchronous flow causes high latency, database pressure, and inconsistent live-tracking behavior.
### Latency: Fast Path vs. Heavy Path
Split responsibilities: the API accepts the order quickly, then publishes an event. Workers handle lifecycle progression and persistence — so the HTTP layer stays responsive.
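A minimal sketch of that fast path, using an in-memory dict and `queue.Queue` as hypothetical stand-ins for the real snapshot store and event bus (names like `place_order` and `ORDER_CREATED` are illustrative, not the project's actual identifiers):

```python
import queue
import uuid

# Hypothetical stand-ins for the real DB and message broker.
ORDERS: dict[str, dict] = {}            # order snapshot store
EVENT_BUS: queue.Queue = queue.Queue()  # stands in for Kafka/RabbitMQ/etc.

def place_order(payload: dict) -> dict:
    """Fast path: persist a minimal snapshot, publish an event, return at once."""
    order_id = str(uuid.uuid4())
    snapshot = {"id": order_id, "state": "CREATED", "items": payload["items"]}
    ORDERS[order_id] = snapshot  # (1) initial write
    EVENT_BUS.put({"type": "ORDER_CREATED", "order_id": order_id})  # (2) publish
    return {"status": 202, "order_id": order_id}  # HTTP layer stays responsive

resp = place_order({"items": ["pad thai"]})
```

Everything slow (kitchen dispatch, driver assignment, notifications) happens downstream of the published event, never inside the request.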
### Realtime Updates via SSE (No Polling)
Replaced client polling with a single persistent SSE stream per order; the client receives state updates the moment they happen (PREPARING → ON_THE_WAY → DELIVERED).
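The SSE wire format itself is simple text framing, which is part of why it is cheaper than polling. A sketch of an encoder for one `text/event-stream` message (event name and payload shape are assumptions):

```python
import json

def sse_frame(event: str, data: dict) -> str:
    """Encode one Server-Sent Events message per the text/event-stream format:
    an `event:` line, a `data:` line, and a blank line terminating the frame."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

frame = sse_frame("order_status", {"order_id": "o-123", "state": "ON_THE_WAY"})
```

The server writes frames like this onto one long-lived HTTP response; the browser's built-in `EventSource` parses them, so no client-side polling loop is needed.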
## 🧠 Baseline vs. Improved Architecture
The baseline ran the whole lifecycle synchronously inside the request handler and had clients poll for status. This worked at low traffic, but under load it amplified DB waits and increased tail latency. It also forced the client to keep polling, which multiplies load exactly when the system is stressed.
In the improved design, the API does only two things: (1) write the initial order snapshot and (2) publish an event. Everything else happens off the request path, with explicit control over worker concurrency.
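"Explicit control over concurrency" can be as simple as a fixed-size worker pool draining the event queue. A stdlib-only sketch (the pool size and event shape are assumed for illustration):

```python
import queue
import threading

EVENTS: queue.Queue = queue.Queue()
RESULTS: list = []
MAX_WORKERS = 4  # explicit concurrency cap (assumed value)

def worker() -> None:
    """Drain events; at most MAX_WORKERS run lifecycle work concurrently."""
    while True:
        event = EVENTS.get()
        if event is None:  # poison pill: shut this worker down
            EVENTS.task_done()
            return
        # Lifecycle progression + persistence would go here.
        RESULTS.append(("processed", event["order_id"]))
        EVENTS.task_done()

threads = [threading.Thread(target=worker) for _ in range(MAX_WORKERS)]
for t in threads:
    t.start()
for i in range(10):
    EVENTS.put({"order_id": f"o-{i}"})
for _ in threads:
    EVENTS.put(None)       # one pill per worker
EVENTS.join()              # block until every event is handled
```

Because the pool size is fixed, a burst of orders queues up instead of fanning out into unbounded DB connections.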
## 🏗️ High-level Architecture
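A rough text sketch of the flow described in this document (component names are illustrative):

```text
Client ──POST /orders──▶ API ──publish──▶ Event Bus ──▶ Workers ──▶ DB
   ▲                                                        │
   └───────────── SSE stream (state updates) ◀──────────────┘
```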
## 🧩 Order Lifecycle Model
The order lifecycle was treated as a state machine. Each transition emits an event that both persists the change and updates the client stream.
| State | Triggered By | Side Effect |
|---|---|---|
| CREATED | Client places order | Publish event → start worker pipeline |
| PREPARING | Kitchen accepts | DB update + SSE broadcast |
| ON_THE_WAY | Driver pickup | DB update + SSE broadcast |
| DELIVERED | Delivery completion | Finalize order + write history log |
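The table above can be encoded as an explicit transition map, so illegal jumps (e.g. DELIVERED back to CREATED) are rejected before any DB write or SSE broadcast. A sketch under the assumption that each state has exactly one legal successor:

```python
# Lifecycle table encoded as an explicit state machine.
ALLOWED: dict[str, set[str]] = {
    "CREATED": {"PREPARING"},
    "PREPARING": {"ON_THE_WAY"},
    "ON_THE_WAY": {"DELIVERED"},
    "DELIVERED": set(),  # terminal state
}

def transition(current: str, target: str) -> str:
    """Validate a lifecycle transition; each legal one is where the
    DB update and SSE broadcast side effects would be triggered."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = transition("CREATED", "PREPARING")
```

Keeping the map in one place means workers, the API, and tests all agree on what a valid lifecycle looks like.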
