Meta doesn't publicly release verbatim interview questions, so no single source provides an exact "Design Harmful Content / Weapon Ad Detection" problem statement with full input/output examples and constraints. The breakdown below is instead compiled from closely related ML system design problems in Meta-style interviews, such as harmful content moderation and illegal item detection in ads and marketplaces.[1][5][10]
Design an end-to-end ML system to detect harmful content, specifically ads or posts promoting weapons (e.g., firearms, explosives) or broader harmful categories such as violence and terrorism, on a platform like Facebook Marketplace or Instagram. The system processes multimodal posts (text, images, videos) at massive scale, flags harmful ones for removal or demotion, and minimizes harm while balancing false positives against false negatives. Typical actions: auto-delete when the model is operating at high precision (e.g., >95%), demote uncertain cases, and route borderline content to human moderators.[5][1]
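The tiered action policy described above can be sketched as a simple threshold rule over the model's harm score. The thresholds and action names below are illustrative assumptions, not Meta's actual values:

```python
# Illustrative sketch of a tiered moderation policy. The thresholds and
# action names are assumptions for this example, not Meta's real settings.

AUTO_DELETE_THRESHOLD = 0.95  # assumed operating point targeting high precision
DEMOTE_THRESHOLD = 0.70       # assumed lower bound of the "uncertain" band

def choose_action(harm_score: float) -> str:
    """Map a model's predicted harm probability to a moderation action."""
    if harm_score >= AUTO_DELETE_THRESHOLD:
        return "delete"            # high confidence: remove automatically
    if harm_score >= DEMOTE_THRESHOLD:
        return "demote_and_review" # uncertain: down-rank and queue for humans
    return "keep"                  # low score: leave the post untouched
```

In practice the thresholds would be calibrated on a held-out set so that the `delete` band actually achieves the target precision, since a raw probability cutoff does not guarantee it.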
No canonical examples exist, but typical interview clarifications yield inputs and outputs like the following:
Inputs:

```json
{"text": "Selling AR-15 rifle, like new, $800 OBO #guns", "images": ["url1.jpg"], "user_id": 12345, "views": 1000}
```

(where `url1.jpg` shows a firearm)

Outputs:

```json
{"is_harmful": true, "confidence": 0.97, "category": "weapon_ad", "action": "delete"}
{"is_harmful": true, "prob_nudity": 0.05, "prob_weapon": 0.92, "prob_violence": 0.1}
```

A benign edge case that should not be flagged:

```json
{"text": "Hunting knife for camping", "images": ["kitchen_knife.jpg"], "is_harmful": false}
```

[2][10][1]