Design a centralized Feature Store that serves both offline model training and online model inference at Reddit scale. The system must:

- Support thousands of features across dozens of product teams.
- Handle 100k+ requests per second for online serving with p99 latency under 5 ms.
- Store petabytes of historical feature data for training.
- Guarantee point-in-time correctness to prevent label leakage.
- Let teams register new features via a declarative API, backfill them over years of historical data, and automatically materialize them to both a low-latency online store (Redis/Memcached) and an offline store (S3/Data Lake) while keeping the two in sync.
- Support real-time streaming updates from Kafka/Kinesis, batch backfills via Spark, feature versioning, and per-feature SLA monitoring.
- Provide discovery/search so that any ML engineer can find and reuse existing features instead of rebuilding them.
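The point-in-time correctness requirement can be made concrete with an "as-of" join: each training label is paired with the latest feature value observed at or before the label's timestamp, so no future information leaks into a training row. A minimal pure-Python sketch follows; the data shapes and function name are illustrative assumptions, not a specific framework's API.

```python
import bisect

def point_in_time_join(labels, feature_log):
    """Pair each label with the feature value valid at label time.

    labels: list of (entity, label_ts) tuples.
    feature_log: dict mapping entity -> list of (feature_ts, value),
        sorted ascending by feature_ts.
    Returns one (entity, label_ts, value) training row per label,
    with value = None when no observation exists at or before label_ts.
    """
    rows = []
    for entity, label_ts in labels:
        history = feature_log.get(entity, [])
        ts_list = [ts for ts, _ in history]
        # Rightmost feature observation with feature_ts <= label_ts.
        i = bisect.bisect_right(ts_list, label_ts) - 1
        value = history[i][1] if i >= 0 else None
        rows.append((entity, label_ts, value))
    return rows

# Hypothetical data: user "u1" had feature values 3, 5, 9 at ts 100, 200, 300.
feature_log = {"u1": [(100, 3), (200, 5), (300, 9)]}
labels = [("u1", 250), ("u1", 50)]
print(point_in_time_join(labels, feature_log))
# [('u1', 250, 5), ('u1', 50, None)] — the ts-300 value never leaks backward
```

At scale this join would run in Spark over the offline store rather than in memory, but the semantics (latest value at or before label time, never after) are the same.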
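The declarative registration and discovery surface can be sketched as a typed feature definition plus a registry service. Everything below is an illustrative assumption (field names, the `FeatureRegistry` class, the tag-based search), not an existing library's API; a real deployment would back this with a durable metadata store.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FeatureDefinition:
    """Hypothetical declarative feature spec a team would submit."""
    name: str            # globally unique, e.g. "user_7d_comment_count"
    entity: str          # join key, e.g. "user_id"
    dtype: str           # "int64", "float32", ...
    version: int         # bump on breaking semantic changes
    source: str          # Kafka topic or warehouse table feeding the feature
    ttl_seconds: int     # staleness bound enforced in the online store
    owner: str           # team accountable for the per-feature SLA
    tags: tuple = field(default_factory=tuple)  # powers discovery/search

class FeatureRegistry:
    """In-memory stand-in for the registration + discovery service."""
    def __init__(self):
        self._features = {}

    def register(self, f: FeatureDefinition) -> None:
        key = (f.name, f.version)
        if key in self._features:
            raise ValueError(f"{f.name} v{f.version} already registered")
        self._features[key] = f

    def search(self, tag: str):
        return [f for f in self._features.values() if tag in f.tags]

registry = FeatureRegistry()
registry.register(FeatureDefinition(
    name="user_7d_comment_count", entity="user_id", dtype="int64",
    version=1, source="kafka://comments", ttl_seconds=86400,
    owner="ml-ranking", tags=("user", "engagement"),
))
print(len(registry.search("engagement")))  # prints 1
```

Keying the registry on `(name, version)` lets a team ship a breaking change as v2 while v1 keeps serving existing models, and the tags give the discovery/search path a concrete index to build on.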