Given an array of lowercase English strings, identify all characters that appear in every single string. If a character appears multiple times in every string, include it in the result as many times as its minimum frequency across all strings.
Return the result as an array of characters (each character is a single-letter string). The order of characters in the output does not matter.
Example 1:
Input: words = ["bella", "label", "roller"]
Output: ["e", "l", "l"]
Explanation: 'e' appears once in every word and 'l' appears twice in every word, so the result contains one 'e' and two 'l's.
Example 2:
Input: words = ["cool"]
Output: ["c", "o", "o", "l"]
Explanation: With a single word, all of its characters are returned with their original counts.
Example 3:
Input: words = ["abc", "def", "ghi"]
Output: []
Explanation: No character appears in all three words, resulting in an empty array.
Hint 1: Character Frequency Tracking
Consider counting the frequency of each character in every word. What data structure would help you efficiently track character counts for 26 possible lowercase letters?
Hint 2: Finding the Intersection
The key insight is that a character can only appear in the final result as many times as it appears in the word where it's least frequent. How would you compute the minimum frequency of each character across all words?
Hint 3: Optimization
You can solve this by maintaining a running frequency count that gets updated to the minimum as you process each word. Start with the frequency count of the first word, then iteratively take the element-wise minimum with each subsequent word's frequency.
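The element-wise minimum from Hint 3 can be seen on two small frequency arrays before reading the full solution (a minimal sketch; `letter_counts` is a helper name introduced here for illustration):

```python
def letter_counts(word):
    # 26-slot count array, index 0 = 'a', index 25 = 'z'.
    counts = [0] * 26
    for ch in word:
        counts[ord(ch) - ord('a')] += 1
    return counts

a = letter_counts("bella")   # 'l' counted twice
b = letter_counts("roller")  # 'l' counted twice, no 'b'
running_min = [min(x, y) for x, y in zip(a, b)]

print(running_min[ord('l') - ord('a')])  # 2: both words contain 'l' twice
print(running_min[ord('b') - ord('a')])  # 0: "roller" has no 'b', so it drops out
```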
Full Solution:

```python
def findCommonCharacters(words):
    # Initialize a frequency array from the first word.
    # This represents our current "minimum frequency" for each letter.
    min_freq = [0] * 26
    for char in words[0]:
        min_freq[ord(char) - ord('a')] += 1

    # For each subsequent word, update min_freq to the element-wise minimum.
    for word in words[1:]:
        # Count the frequency of each character in the current word.
        word_freq = [0] * 26
        for char in word:
            word_freq[ord(char) - ord('a')] += 1
        # Update min_freq to contain the minimum count for each letter.
        for i in range(26):
            min_freq[i] = min(min_freq[i], word_freq[i])

    # Build the result array from the final minimum frequencies.
    result = []
    for i in range(26):
        # Add each character min_freq[i] times.
        char = chr(i + ord('a'))
        result.extend([char] * min_freq[i])
    return result
```
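As a quick sanity check, the three examples can be run through the solution (restated compactly here so the snippet is self-contained; note the output is alphabetical, and any order is accepted):

```python
def findCommonCharacters(words):
    # Running element-wise minimum of per-letter counts.
    min_freq = [0] * 26
    for ch in words[0]:
        min_freq[ord(ch) - ord('a')] += 1
    for word in words[1:]:
        freq = [0] * 26
        for ch in word:
            freq[ord(ch) - ord('a')] += 1
        min_freq = [min(m, f) for m, f in zip(min_freq, freq)]
    # Emit each letter as many times as its final minimum count.
    return [chr(i + ord('a')) for i in range(26) for _ in range(min_freq[i])]

print(findCommonCharacters(["bella", "label", "roller"]))  # ['e', 'l', 'l']
print(findCommonCharacters(["cool"]))   # ['c', 'l', 'o', 'o'] (alphabetical order)
print(findCommonCharacters(["abc", "def", "ghi"]))  # []
```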
Explanation:
The solution uses a frequency counting approach where we track the minimum occurrence of each character across all words.
Algorithm Steps:
- Initialize with first word: Create a frequency array of size 26 (for 'a' to 'z') and populate it with character counts from the first word.
- Process remaining words: For each subsequent word, count its character frequencies and update our running minimum frequency array by taking the element-wise minimum.
- Build result: After processing all words, construct the result array by adding each character as many times as indicated by its minimum frequency.
Time Complexity: O(n × m) where n is the number of words and m is the average length of a word. We iterate through each character of each word once.
Space Complexity: O(1) since we use fixed-size arrays of length 26 regardless of input size. The output array size is bounded by the shortest word's length.
Alternative Approach:
You could also use Python's Counter class from the collections module to make the code more concise:
```python
from collections import Counter

def findCommonCharacters(words):
    # Start with the counter of the first word.
    common_count = Counter(words[0])

    # Intersect with each subsequent word's counter.
    for word in words[1:]:
        word_count = Counter(word)
        # Update to minimum counts (intersection).
        common_count = common_count & word_count

    # Convert the counter to a list of characters.
    result = []
    for char, count in common_count.items():
        result.extend([char] * count)
    return result
```
The Counter intersection operator `&` keeps the minimum count for each key, which is exactly what we need.
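For instance, intersecting the counters of the first two example words shows the `&` behavior directly:

```python
from collections import Counter

# & keeps, for each letter, the minimum of the two counts,
# and drops letters missing from either counter entirely.
common = Counter("bella") & Counter("label")
print(common["l"])  # 2: both words contain 'l' twice
print(common["b"])  # 1: each word contains 'b' once

# A letter absent from one side disappears; Counter reports 0 for missing keys.
print((Counter("bella") & Counter("roller"))["b"])  # 0
```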