Design a Top-K System
Problem Statement
Design a system that efficiently retrieves top-k items (songs, videos, hashtags, etc.) based on user activity or engagement metrics within specified time windows. The system should handle real-time data aggregation and support queries like "top 10 songs in the last 7 days" or "trending hashtags in the past 24 hours."
Top-K Ranking is a real-time leaderboard service built into apps like YouTube (trending videos), Spotify (top songs), Instagram/Twitter (trending hashtags), and Uber Eats (popular restaurants). Users and downstream services request the top K items for a time window and segment (global, region, category), and expect fast, fresh, and stable results.
Interviewers at Uber ask this to test whether you can stream-process high-volume events, maintain time-windowed counts, handle hot keys, precompute results, and serve low-latency reads at scale. Strong answers show a clear separation of write aggregation and read serving, thoughtful handling of late/out-of-order data, and pragmatic tradeoffs between freshness, cost, and correctness.
Key Requirements
Functional
- Time-windowed rankings -- support multiple fixed windows (1 hour, 24 hours, 7 days, 30 days, all-time) with each window updating independently
- Segmented leaderboards -- partition rankings by dimensions like geography, category, or user attributes
- Configurable result size -- allow clients to request top-K where K ranges from 10 to 1000, with stable ordering when scores tie
- Pagination and consistency -- enable page-through for large result sets with consistent snapshots across paginated requests
Non-Functional
- Scalability -- handle 10M+ events per second during peak hours with independent scaling of write throughput and read capacity
- Reliability -- tolerate datacenter failures without losing more than 5 minutes of leaderboard history
- Latency -- p99 read latency under 100 ms for top-K queries; write-to-visibility delay under 10 seconds for non-viral content
- Consistency -- eventual consistency acceptable for leaderboards; rankings should converge within seconds
What Interviewers Focus On
Based on real interview experiences at Uber and Meta, these are the areas interviewers probe most deeply:
1. Write Path and Event Aggregation
Interviewers want to see how you handle millions of incoming score events without creating database hotspots or losing data. They test whether you understand stream processing, event deduplication, and the difference between raw events and aggregated counts.
Hints to consider:
- Use an event streaming platform (Kafka) as a durable buffer between producers and consumers to decouple ingestion from processing
- Partition events by item_id to enable parallel processing while maintaining ordering per item
- Apply tumbling or sliding windows in a stream processor (Flink) to maintain per-window counts with watermarks handling late arrivals
- Batch writes to the storage layer (flush every 5 seconds or 1000 events) to reduce write amplification
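Two of these hints -- keyed partitioning and micro-batched flushes -- can be sketched together. The sketch below is illustrative only: `partition_for` mimics Kafka's key-based partitioner, and `BatchingAggregator` stands in for a stream-processor task that accumulates per-item counts and emits one batched write instead of one write per event (`flushed` is a hypothetical stand-in for the storage layer).

```python
import hashlib
from collections import defaultdict

NUM_PARTITIONS = 16

def partition_for(item_id: str) -> int:
    # Stable hash so every event for one item lands on the same partition,
    # preserving per-item ordering (mirrors Kafka's key-based partitioning).
    digest = hashlib.md5(item_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

class BatchingAggregator:
    """Accumulates per-item counts in memory and flushes them as one
    batched write, reducing write amplification on the storage layer."""

    def __init__(self, flush_threshold: int = 1000):
        self.counts = defaultdict(int)
        self.pending = 0
        self.flush_threshold = flush_threshold
        self.flushed = []  # stand-in for the real storage layer

    def record(self, item_id: str, delta: int = 1) -> None:
        self.counts[item_id] += delta
        self.pending += delta
        if self.pending >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # One aggregated write instead of thousands of raw-event writes.
        if self.counts:
            self.flushed.append(dict(self.counts))
            self.counts.clear()
        self.pending = 0
```

In production the flush trigger would also include a timer (e.g. every 5 seconds) so low-traffic items still become visible promptly.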
2. Handling Hot Keys and Skew
Viral content creates enormous skew where a single item receives 1000x more events than average. Interviewers assess whether you recognize this problem and can propose solutions beyond naive sharding.
Hints to consider:
- Implement split counters where hot keys are sharded into N sub-counters and merged periodically
- Use probabilistic counting (HyperLogLog for unique counts, Count-Min Sketch for frequency) when exact precision is not required
- Apply skew-aware routing where the ingestion layer detects hot items and distributes their events across more partitions dynamically
- Separate the "top 10" (which changes rarely) from the "long tail" using tiered storage with different update frequencies
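The split-counter idea can be shown in a few lines. This is a toy in-memory sketch: writes for a hot item fan out randomly across N sub-counters (so no single shard absorbs all increments), and reads merge the shards. In a real deployment the shards would be separate Redis keys or database rows merged periodically, not a single dict.

```python
import random
from collections import defaultdict

class SplitCounter:
    """Sharded counter for hot keys: each increment hits one of N
    sub-counters chosen at random; the true total is the merge of all N."""

    def __init__(self, num_shards: int = 8):
        self.num_shards = num_shards
        self.shards = defaultdict(int)  # (item_id, shard_index) -> count

    def increment(self, item_id: str, delta: int = 1) -> None:
        # Random shard choice spreads write contention across N counters.
        shard = random.randrange(self.num_shards)
        self.shards[(item_id, shard)] += delta

    def value(self, item_id: str) -> int:
        # A real system merges shards periodically; here we merge on read.
        return sum(count for (iid, _), count in self.shards.items()
                   if iid == item_id)
```

The tradeoff: writes get N-way parallelism, while reads pay an N-way merge -- acceptable because leaderboard reads are served from precomputed results, not per-request merges.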
3. Maintaining and Serving Top-K Efficiently
Computing top-K by scanning all items on every query is a critical mistake. Interviewers look for pre-computation strategies, appropriate data structures, and cache-friendly designs.
Hints to consider:
- Pre-compute top-K lists for each window/segment combination using a stream processor with windowed aggregations
- Store pre-computed rankings in Redis sorted sets (ZADD with scores) or a key-value store with TTL matching the window duration
- Use min-heaps of size K during aggregation to maintain top candidates efficiently
- Implement tiered caching: an application-level cache (30s TTL) in front of Redis (authoritative, ~10s freshness), with batch recomputation as the fallback
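The min-heap hint works as follows: keep a heap of size K ordered by score with the smallest at the root, so deciding whether a new item belongs in the top K and evicting the weakest is O(log K). A minimal sketch using Python's `heapq` (items at the score boundary may tie-break arbitrarily during eviction; the final sort gives stable ordering by item id):

```python
import heapq

def top_k(counts: dict[str, int], k: int) -> list[tuple[int, str]]:
    """Return the top-k (score, item) pairs from an item -> score map,
    using a size-k min-heap so memory stays O(k) regardless of item count."""
    heap: list[tuple[int, str]] = []  # smallest score sits at heap[0]
    for item, score in counts.items():
        if len(heap) < k:
            heapq.heappush(heap, (score, item))
        elif score > heap[0][0]:
            # New item beats the current weakest top-k member: swap it in.
            heapq.heapreplace(heap, (score, item))
    # Highest score first; ties broken lexically by item id for stability.
    return sorted(heap, key=lambda pair: (-pair[0], pair[1]))
```

In the full design this runs inside the stream processor per window/segment, and the resulting list is what gets pushed to Redis for serving -- reads never scan raw counts.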
4. Time Window Management and Freshness
Different time windows have different update patterns and expiration logic. Interviewers probe whether you understand sliding vs tumbling windows and how to balance freshness with cost.
Hints to consider:
- Use tumbling windows for fixed-duration leaderboards (daily resets at midnight) and sliding windows for "last 24 hours" views that update continuously
- Implement window expiration through TTLs on Redis keys or partition-level deletion
- Trade off freshness for cost: update "last hour" every 30 seconds but "last 30 days" only every 5 minutes
- Handle late-arriving events with allowed lateness in stream processing
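A common way to implement a sliding window cheaply is bucketed counting: split the window into fixed-size buckets, route each event into the current bucket, and drop whole buckets as they age out, so expiry costs O(buckets) rather than O(events). A minimal in-memory sketch (real systems would back the buckets with Redis keys carrying TTLs, per the hint above):

```python
from collections import defaultdict, deque

class SlidingWindowCounter:
    """Approximate 'last window_secs seconds' counts using fixed-size
    time buckets; precision is bounded by bucket_secs."""

    def __init__(self, window_secs: int = 3600, bucket_secs: int = 60):
        self.window_secs = window_secs
        self.bucket_secs = bucket_secs
        self.buckets = deque()  # (bucket_start_time, {item_id: count})

    def record(self, item_id: str, now: int) -> None:
        start = now - now % self.bucket_secs  # align to bucket boundary
        if not self.buckets or self.buckets[-1][0] != start:
            self.buckets.append((start, defaultdict(int)))
        self.buckets[-1][1][item_id] += 1
        self._expire(now)

    def count(self, item_id: str, now: int) -> int:
        self._expire(now)
        return sum(bucket[item_id] for _, bucket in self.buckets)

    def _expire(self, now: int) -> None:
        # Drop buckets that lie entirely outside the window -- O(buckets).
        cutoff = now - self.window_secs
        while self.buckets and self.buckets[0][0] + self.bucket_secs <= cutoff:
            self.buckets.popleft()
```

Bucket size is the freshness/cost dial from the hints: a "last hour" window might use 30-second buckets while a "last 30 days" window uses multi-hour buckets.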