Design a self-service data-ingestion platform that lets non-technical users (analysts, portfolio managers, quants) at a hedge fund configure, launch, and monitor ETL jobs without writing code. The system must support hundreds of concurrent jobs ingesting data from Bloomberg terminals, REST APIs, FTP drops, S3 buckets, and on-prem databases into the fund’s research data lake. Users discover available connectors through a web UI, fill in source credentials and a schedule, pick a destination schema, and click “Ingest”.

The platform automatically infers the source schema, samples the data, and runs user-defined quality rules (null checks, value ranges, referential integrity); on success it loads the data into a partitioned Iceberg table. It must retry transient failures with exponential back-off, quarantine poison records to a dead-letter queue, and page the data owner on hard failures.

Every run must emit lineage and quality metrics (completeness, freshness, drift) that are queryable in the UI. The service must scale to 10k daily jobs and 1 TB/hour peak throughput, and guarantee exactly-once loading despite duplicate source files.
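The retry requirement could be sketched as a full-jitter exponential back-off loop. `TransientError`, the attempt limit, and the delay parameters are illustrative assumptions, not part of the spec:

```python
import random
import time

class TransientError(Exception):
    """Placeholder for retryable source errors (timeouts, 5xx, throttling)."""

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Run `fn`, retrying transient failures with exponential back-off."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts:
                raise  # hard failure: surface it so the data owner gets paged
            # Full jitter: sleep a random amount up to 1s, 2s, 4s, ... capped.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
```

Full jitter keeps hundreds of concurrent jobs from retrying in lock-step against an already-throttled source.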
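One common way to meet the exactly-once guarantee against duplicate source files is content-digest deduplication. This is a minimal sketch under that assumption; the in-memory set stands in for a durable manifest that a real system would update transactionally with the Iceberg commit:

```python
import hashlib

class IdempotentLoader:
    """Skip source files that were already loaded, keyed by content digest."""

    def __init__(self):
        self._loaded = set()  # stand-in for a durable, transactional manifest

    def load(self, file_bytes: bytes, commit_fn) -> bool:
        """Return True if the file was loaded, False if it was a duplicate."""
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest in self._loaded:
            return False  # duplicate file: no-op preserves exactly-once
        commit_fn(file_bytes)     # atomic write-and-commit in the real system
        self._loaded.add(digest)  # recorded only after a successful commit
        return True
```

Recording the digest in the same transaction as the table commit is what makes re-delivered files safe to replay.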
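The user-defined quality rules could be represented declaratively and evaluated over the sampled rows. The rule schema below (`column`, `check`, `min`/`max`) is a hypothetical form; referential-integrity checks would need a lookup against the destination and are omitted:

```python
def run_quality_rules(rows, rules):
    """Evaluate declarative quality rules; return (row_index, column, reason)
    tuples for every violation found in the sample."""
    failures = []
    for i, row in enumerate(rows):
        for rule in rules:
            value = row.get(rule["column"])
            if rule["check"] == "not_null" and value is None:
                failures.append((i, rule["column"], "null"))
            elif rule["check"] == "range" and value is not None:
                if not (rule["min"] <= value <= rule["max"]):
                    failures.append((i, rule["column"], "out_of_range"))
    return failures  # failing rows would be routed to the dead-letter queue
```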
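Two of the per-run quality metrics can be computed directly from the loaded rows; completeness as the non-null fraction per required column and freshness as the lag behind the newest event timestamp. Column names here are assumptions, and drift is omitted because it needs a historical baseline distribution to compare against:

```python
from datetime import datetime, timezone

def run_metrics(rows, required_columns, event_time_column):
    """Compute completeness (non-null fraction per column) and freshness
    (seconds between now and the newest event timestamp) for one run."""
    total = len(rows)
    completeness = {
        col: sum(1 for r in rows if r.get(col) is not None) / total
        for col in required_columns
    }
    latest = max(r[event_time_column] for r in rows)
    freshness_seconds = (datetime.now(timezone.utc) - latest).total_seconds()
    return {"completeness": completeness, "freshness_seconds": freshness_seconds}
```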