Apple interview questions are typically proprietary and not publicly released, so no exact match exists online for a problem titled "Implement Beam Search" with those specific tags (Machine Learning, NLP, Decoding, Search). Standard beam search implementations in NLP contexts, as described in resources like machine learning tutorials, focus on decoding sequences from probability matrices.[1][2]
Typical Problem Statement
Beam search is used in NLP tasks like machine translation and text generation to find high-probability output sequences from a model's predictions, trading off completeness (exhaustive search) against efficiency (greedy decoding). Given a sequence of probability distributions over a vocabulary (e.g., from a neural decoder) and a beam width k, implement beam search to return the top-k most likely sequences up to a maximum length. Scoring multiplies probabilities along each path, or equivalently sums their logarithms; log space is preferred because products of many small probabilities underflow floating-point precision.[2]
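The underflow point can be demonstrated directly: multiplying many small probabilities collapses to 0.0 in double precision, while the equivalent sum of log-probabilities stays finite. This is a minimal illustration (the 200-step, 0.01-per-step sequence is an arbitrary example, not from the problem statement):

```python
import math

# 200 decoding steps, each with probability 0.01
probs = [0.01] * 200

product = 1.0
for p in probs:
    product *= p  # 1e-400 is below the smallest float64, so this hits 0.0

log_score = sum(math.log(p) for p in probs)  # finite: 200 * ln(0.01)

print(product)    # 0.0 (underflow)
print(log_score)  # approximately -921.03
```

Because log is monotonic, ranking sequences by summed log-probability gives the same order as ranking by the (un-computable) product.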
Core Algorithm Steps
1. Initialize the beam with the start token (score 0).
2. At each step t (up to max_length):
   - Expand each beam candidate by appending every vocabulary token.
   - Compute new scores: previous_score + log(prob[token | previous_sequence]).
   - Prune to the top-k candidates by score.
3. Halt a path when it emits the end-of-sequence token or reaches max length.[1][2]
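The steps above can be sketched as a compact Python implementation. This version assumes the simplest interview setting: the model's output is given up front as a matrix of per-step probability distributions (so scores do not actually condition on the prefix), and end-of-sequence handling is omitted; the function name and structure are illustrative, not a prescribed API.

```python
import math

def beam_search(prob_matrix, k):
    """Return the top-k (sequence, log_prob) pairs.

    prob_matrix: prob_matrix[t][v] = P(token v at step t), rows sum to 1.
    Scores are summed in log space to avoid underflow.
    """
    # Each beam entry is (token_sequence, cumulative_log_prob);
    # start from the empty sequence with score 0 (log of probability 1).
    beams = [([], 0.0)]
    for step_probs in prob_matrix:
        # Expand: extend every kept sequence by every vocabulary token.
        candidates = []
        for seq, score in beams:
            for token, p in enumerate(step_probs):
                candidates.append((seq + [token], score + math.log(p)))
        # Prune: keep only the k highest-scoring candidates.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:k]
    return beams

# Example: 3 decoding steps over a 3-token vocabulary.
probs = [
    [0.1, 0.5, 0.4],
    [0.3, 0.4, 0.3],
    [0.6, 0.2, 0.2],
]
for seq, score in beam_search(probs, k=2):
    print(seq, round(score, 4))
# [1, 1, 0] -2.1203   (path probability 0.5 * 0.4 * 0.6 = 0.12)
# [2, 1, 0] -2.3434   (path probability 0.4 * 0.4 * 0.6 = 0.096)
```

With k = 1 this reduces to greedy decoding; with k equal to the vocabulary size raised to the sequence length it becomes exhaustive search. Each step costs O(k * |V| log(k * |V|)) with the sort shown here, which a heap (`heapq.nlargest`) can tighten in practice.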