Design Stock price alerting system
Problem Statement
Design a stock price alerting system that allows users to set price thresholds for stocks and receive real-time notifications when those conditions are met. Users define rules like "Notify me when AAPL drops below 150 dollars" and expect near-instant, reliable notifications. Think of the alert features in Robinhood, Yahoo Finance, or Google Finance.
The system must ingest high-frequency market data from stock exchanges, perform stateful condition matching at scale, and fan out notifications without melting down on hot symbols. The core challenges are partitioning the matching engine by symbol, handling thundering herds when popular stocks cross widely-set thresholds, ensuring idempotent notification delivery, and separating the matching path from slower external notification dependencies.
Interviewers at Uber ask this to test whether you can design a low-latency streaming system that ingests high-frequency events, performs stateful condition matching, and fans out notifications reliably under traffic skew.
Key Requirements
Functional
- Alert management -- users create, update, pause/resume, and delete price alerts for specific tickers with conditions (above, below, or crossing a threshold)
- Real-time notifications -- users receive near-real-time notifications through their preferred channels (in-app push, email, SMS) when alert conditions are met
- Alert status and history -- users view current alert status and a recent history of triggered notifications
- User preferences -- users set per-channel opt-in, quiet hours, and deduplication settings (one-time vs recurring alerts)
Non-Functional
- Scalability -- handle tens of thousands of ticker price updates per second during market hours, with millions of active alerts
- Reliability -- no alert missed during normal operation; tolerate processing node failures with minimal delay
- Latency -- from price crossing threshold to notification dispatch in under 5 seconds
- Consistency -- at-least-once notification delivery with idempotency to prevent duplicate alerts
What Interviewers Focus On
Based on real interview experiences at Uber and Amazon, these are the areas interviewers probe most deeply:
1. Streaming Price Ingestion and Matching Architecture
Brute-force scanning of all alerts on every tick is O(ticks × alerts) and collapses immediately at market-data rates. Interviewers expect efficient per-symbol matching.
Hints to consider:
- Ingest market data into Kafka partitioned by ticker symbol so all price updates for a symbol go to the same partition
- For each symbol, maintain an in-memory sorted set of alert thresholds (using a balanced BST or Redis sorted set)
- On each price update, binary-search the threshold set to find all alerts that cross the current price
- Only evaluate alerts for the specific symbol that received a price update, not all alerts globally
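The per-symbol matching idea above can be sketched as follows. This is a minimal illustration, not a production matcher: the `SymbolMatcher` class and its method names are hypothetical, and the sorted-list-plus-bisect structure stands in for the balanced BST or Redis sorted set mentioned in the hints.

```python
import bisect
from collections import defaultdict

class SymbolMatcher:
    """Per-symbol sorted threshold lists: each tick only touches alerts
    whose thresholds fall between the previous and current price."""

    def __init__(self):
        # symbol -> sorted list of (threshold, alert_id, direction)
        self.thresholds = defaultdict(list)
        self.last_price = {}

    def add_alert(self, symbol, threshold, alert_id, direction):
        # direction is "above" or "below" the threshold
        bisect.insort(self.thresholds[symbol], (threshold, alert_id, direction))

    def on_price(self, symbol, price):
        prev = self.last_price.get(symbol, price)
        self.last_price[symbol] = price
        lo, hi = sorted((prev, price))
        entries = self.thresholds[symbol]
        # Binary-search the window of thresholds crossed by this tick,
        # instead of scanning every alert for the symbol.
        start = bisect.bisect_left(entries, (lo, "", ""))
        end = bisect.bisect_right(entries, (hi, "\uffff", "\uffff"))
        fired = []
        for threshold, alert_id, direction in entries[start:end]:
            if direction == "below" and price <= threshold:
                fired.append(alert_id)
            elif direction == "above" and price >= threshold:
                fired.append(alert_id)
        return fired
```

The key property is that a tick's cost is proportional to the number of thresholds it actually crosses, not to the total alert count for the symbol.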
2. Hot Symbol and Thundering Herd Mitigation
Popular tickers (AAPL, TSLA) may have millions of alerts. When the price crosses a common threshold, thousands of notifications trigger simultaneously.
Hints to consider:
- Decouple matching from notification delivery: the matcher identifies triggered alerts and publishes to a notification queue
- Rate-limit notification fan-out per symbol to prevent overwhelming downstream notification services
- Use sharded notification workers so fan-out for one symbol does not block other symbols
- Pre-sort alerts by threshold to enable efficient batch triggering rather than individual evaluation
3. Idempotent Alert Triggering
Stream processing retries and consumer rebalancing can cause duplicate triggers. Users should never receive the same alert twice.
Hints to consider:
- Assign a unique trigger_id per (alert_id, price_event_timestamp) combination
- Check trigger_id in Redis before dispatching the notification; skip if already triggered
- For one-time alerts, mark them as triggered atomically with the notification dispatch
- Store triggered state durably so it survives consumer restarts
4. Alert State Management
Alerts have lifecycle states (active, paused, triggered, expired) that must be managed correctly alongside the streaming pipeline.
Hints to consider:
- Store alert definitions in a durable database (DynamoDB, PostgreSQL) as the source of truth
- Load active alerts for each symbol into the stream processor's state on startup and refresh on changes
- Propagate alert CRUD operations to the matcher via a separate Kafka topic or change-data-capture stream
- Handle race conditions between alert updates and price events (e.g., user deletes alert while it is being triggered)
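The CDC-driven state and the delete-vs-trigger race can be sketched with a version check: the matcher applies alert changes in version order and consults the alert's current status before firing. The `AlertStore` name and the tombstone scheme are illustrative assumptions.

```python
class AlertStore:
    """Matcher-side view of alert state, fed by a CDC/CRUD stream."""

    def __init__(self):
        self.alerts = {}  # alert_id -> {"version": int, "status": str}

    def apply_change(self, alert_id, version, status):
        # Ignore out-of-order CDC events older than what we already hold.
        cur = self.alerts.get(alert_id)
        if cur and cur["version"] >= version:
            return
        if status == "deleted":
            # Keep a tombstone so a stale, late-arriving update
            # cannot resurrect a deleted alert.
            self.alerts[alert_id] = {"version": version, "status": "deleted"}
        else:
            self.alerts[alert_id] = {"version": version, "status": status}

    def can_trigger(self, alert_id) -> bool:
        a = self.alerts.get(alert_id)
        return a is not None and a["status"] == "active"
```

Checking `can_trigger` immediately before dispatch closes most of the delete-while-triggering window; the residual race (a delete that has not yet propagated through the CDC stream) is usually accepted as a brief, benign inconsistency.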