For a full example answer with detailed architecture diagrams and deep dives, see our Design Payment System guide. The payment guide covers ledger design, idempotent transaction processing, and saga patterns that directly apply to a deposit and withdrawal system.
Also review the Databases and Message Queues building blocks for background on ACID-compliant storage and event-driven transaction workflows.
Design a transaction processing system that allows users to perform withdrawals and deposits, ensuring data consistency, security, and reliable transaction handling. Think of the ledger engine behind Cash App, Venmo, or a bank's mobile app: it must never lose money, double-spend, or show an incorrect balance.
The core challenge is maintaining correctness under concurrency, failures, and retries. When two requests hit the same account simultaneously -- a deposit and a withdrawal -- the system must guarantee that the resulting balance is accurate and that neither operation is lost or duplicated. You need to reason about atomicity, isolation, idempotency, and auditability, and design a write path that scales horizontally without sacrificing per-account consistency. Interviewers expect you to discuss tradeoffs between latency and strict guarantees, handle hot accounts that receive many concurrent transactions, and ensure durable event-driven integration with downstream systems like notifications and analytics.
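To make the lost-update hazard concrete, here is a minimal in-memory sketch (the `Account` class and its fields are illustrative, not part of any real API). Holding a per-account lock across the read-validate-write sequence plays the same role a row lock does in a database: two concurrent withdrawals can never both observe the same starting balance.

```python
import threading

class Account:
    """Toy in-memory account; a real system would use a database row."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()  # per-account lock, analogous to a row lock

    def withdraw(self, amount):
        # Read, validate, and write inside one critical section so the
        # check-then-act sequence cannot interleave with another withdrawal.
        with self.lock:
            if self.balance < amount:
                return False
            self.balance -= amount
            return True

acct = Account(100)
results = []
threads = [threading.Thread(target=lambda: results.append(acct.withdraw(60)))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one of the two 60-unit withdrawals succeeds; the balance ends at
# 40 and can never go to -20, regardless of thread scheduling.
print(sorted(results), acct.balance)  # → [False, True] 40
```

Without the lock, both threads could read a balance of 100, both pass the funds check, and the account would end at -20: the double-spend interviewers probe for.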
Based on real interview experiences, these are the areas interviewers probe most deeply:
The heart of this system is ensuring that concurrent operations on the same account never result in incorrect balances. Interviewers want to see how you prevent two simultaneous withdrawals from spending the same money.
Hints to consider:
Network failures and client retries are inevitable. Interviewers expect a concrete strategy to prevent duplicate deposits or withdrawals when a request is retried after a timeout.
Hints to consider:
Financial systems require a complete, immutable history of all state changes. Interviewers want to see an append-only design rather than in-place balance mutation.
Hints to consider:
Some accounts (merchants, promotional accounts) may receive thousands of transactions per second, creating a serialization bottleneck. Interviewers assess whether you can scale writes for these cases.
Hints to consider:
Begin by confirming scope and constraints. Ask about the expected transaction volume, the ratio of deposits to withdrawals, and whether cross-account transfers are in scope. Clarify whether the system needs to integrate with external banking rails or operates as a closed-loop wallet. Verify consistency requirements: can users tolerate brief delays in seeing incoming deposits, or must everything be immediately consistent? Ask about regulatory requirements such as transaction limits, KYC verification, and data retention policies.
Sketch the core components: an API Gateway for authentication and rate limiting, a Transaction Service that validates requests and coordinates the processing flow, a Ledger Service that maintains the append-only transaction log in PostgreSQL, a Balance Service that serves materialized balances from a Redis cache backed by the ledger, and a Notification Service for real-time alerts. Introduce Kafka as the event backbone: after a ledger write commits, an event is published for downstream consumers (notifications, analytics, fraud detection). Show the transaction flow: client sends a request with an idempotency key, the Transaction Service validates and acquires a per-account lock, writes to the ledger atomically, updates the balance projection, publishes an event, and returns the result.
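The write path above can be sketched end to end with in-memory stand-ins. This is a hypothetical `TransactionService` whose dicts and list substitute for Redis, PostgreSQL, and the Kafka topic; the point is the ordering (idempotency check, ledger append, balance projection, key store, then event publish), not the storage choices.

```python
import uuid

class TransactionService:
    """Toy orchestration of the write path. The stores and event list are
    stand-ins for Redis, PostgreSQL, and Kafka respectively."""
    def __init__(self):
        self.idempotency = {}   # idempotency key -> transaction id
        self.ledger = []        # append-only ledger entries
        self.balances = {}      # materialized balance projection
        self.events = []        # stand-in for the Kafka topic

    def deposit(self, account, amount, idem_key):
        if idem_key in self.idempotency:           # retried request: return
            return self.idempotency[idem_key]      # the prior result, no new write
        txn_id = str(uuid.uuid4())
        self.ledger.append((txn_id, account, "credit", amount))
        self.balances[account] = self.balances.get(account, 0) + amount
        self.idempotency[idem_key] = txn_id        # in a real DB, all of the
                                                   # above commits atomically
        self.events.append(("deposit", txn_id))    # published only after commit
        return txn_id

svc = TransactionService()
first = svc.deposit("alice", 25, "req-1")
retry = svc.deposit("alice", 25, "req-1")   # client retry after a timeout
# Same transaction id, one ledger entry, one event, balance credited once.
assert first == retry and svc.balances["alice"] == 25 and len(svc.ledger) == 1
```

Publishing the event only after the commit point matters: publishing first could notify a downstream consumer about a transaction that later rolls back.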
Walk through a withdrawal in detail. The client sends a withdrawal request with an idempotency key and amount. The Transaction Service checks the idempotency key in Redis (fast path) and then PostgreSQL (authoritative). If the key is new, it opens a database transaction, locks the account row (SELECT FOR UPDATE), reads the current balance projection, validates sufficient funds, writes an immutable ledger entry (debit), updates the materialized balance atomically within the same transaction, stores the idempotency key with the transaction ID, and commits. On success, an event is published to Kafka for notifications and analytics. If any step fails, the database transaction rolls back and the client can safely retry with the same idempotency key. Discuss how optimistic locking with version numbers provides an alternative to SELECT FOR UPDATE that reduces lock contention for accounts with moderate traffic.
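The withdrawal transaction above can be made runnable with SQLite standing in for PostgreSQL (table names and the `withdraw` helper are illustrative). SQLite has no SELECT FOR UPDATE, so `BEGIN IMMEDIATE` takes the write lock up front, which plays the same serializing role here; the key property shown is that the ledger entry, balance update, and idempotency key commit or roll back as one unit.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE ledger (txn_id INTEGER PRIMARY KEY AUTOINCREMENT,
                         account TEXT, kind TEXT, amount INTEGER);
    CREATE TABLE idempotency (key TEXT PRIMARY KEY, txn_id INTEGER);
    INSERT INTO accounts VALUES ('alice', 100);
""")

def withdraw(account, amount, idem_key):
    conn.execute("BEGIN IMMEDIATE")        # lock before reading the balance
    try:
        row = conn.execute("SELECT txn_id FROM idempotency WHERE key = ?",
                           (idem_key,)).fetchone()
        if row:                            # duplicate request: prior result
            conn.execute("COMMIT")
            return row[0]
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                  (account,)).fetchone()
        if balance < amount:
            conn.execute("ROLLBACK")
            return None                    # insufficient funds
        cur = conn.execute("INSERT INTO ledger (account, kind, amount) "
                           "VALUES (?, 'debit', ?)", (account, amount))
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, account))
        conn.execute("INSERT INTO idempotency VALUES (?, ?)",
                     (idem_key, cur.lastrowid))
        conn.execute("COMMIT")             # ledger, balance, key commit together
        return cur.lastrowid
    except Exception:
        conn.execute("ROLLBACK")
        raise

txn = withdraw("alice", 30, "w-1")
dup = withdraw("alice", 30, "w-1")         # retried request is a no-op
(balance,) = conn.execute(
    "SELECT balance FROM accounts WHERE id = 'alice'").fetchone()
# txn == dup (same transaction id), balance debited exactly once to 70.
```

On PostgreSQL the `BEGIN IMMEDIATE` would instead be a `SELECT ... FOR UPDATE` on the account row inside an ordinary transaction, but the shape of the flow is identical.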
Cover hot account handling: detect accounts exceeding a transaction-rate threshold and dynamically partition their ledger into sub-shards processed by dedicated workers. Discuss disaster recovery: synchronous replication to a standby PostgreSQL instance, point-in-time recovery from WAL archives, and Kafka log replay for rebuilding downstream state. Address monitoring: instrument transaction latency, ledger write throughput, idempotency key hit rate, and lock contention metrics. Mention security: encrypt sensitive fields at rest, enforce TLS for all communication, implement rate limiting per account, and flag anomalous patterns for fraud review. Finally, discuss scaling reads: serve balance queries from Redis with a write-through cache updated on every ledger commit, and use read replicas for transaction history queries.
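One way to sketch the sub-shard idea for hot accounts (the `ShardedBalance` class is a toy assumption, not a standard library): deposits land on a random sub-shard, so N shards allow roughly N concurrent writers instead of serializing every write on a single row, and reads sum the shards. A withdrawal would still need a funds check across shards, which is why sharding suits credit-heavy hot accounts best; that path is omitted here.

```python
import random

class ShardedBalance:
    """Toy sub-sharded balance for a hot account. Each shard would be its
    own ledger partition/row in a real system, handled by a dedicated worker."""
    def __init__(self, shards=8):
        self.shards = [0] * shards

    def deposit(self, amount):
        # Any shard can absorb a credit, so concurrent deposits rarely
        # contend on the same row.
        self.shards[random.randrange(len(self.shards))] += amount

    def balance(self):
        # The true balance is the sum of all sub-shards.
        return sum(self.shards)

hot = ShardedBalance()
for _ in range(1000):
    hot.deposit(1)
print(hot.balance())  # → 1000, regardless of which shards absorbed the writes
```

This is the same pattern as a sharded counter: correctness of the total is preserved because the ledger entries behind each shard still sum to the account's true balance.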