Smoothing
Unigram Smoothing
The unigram model from the previous section faces a challenge when confronted with words that do not occur in the corpus: it assigns them a probability of 0. One common technique to address this challenge is smoothing, which tackles issues such as zero probabilities, data sparsity, and overfitting that arise when estimating probabilities and building predictive models from limited data.
Laplace smoothing (also known as add-one smoothing) is a simple yet effective technique that avoids zero probabilities and distributes the probability mass more evenly. It adds 1 to the count of every word and recalculates the unigram probabilities:
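Writing $\#(w)$ for the count of a word $w$ in the corpus, $N$ for the total number of word tokens, and $|V|$ for the vocabulary size, the smoothed unigram probability becomes:

$$P(w) = \frac{\#(w) + 1}{N + |V|}$$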
Thus, the probability of any unknown word with Laplace smoothing is calculated as follows:
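$$P(w_{\text{unknown}}) = \frac{0 + 1}{N + |V|} = \frac{1}{N + |V|}$$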
The unigram probability of an unknown word is guaranteed to be lower than that of any known word, whose adjusted count is at least 2.
Note that the sum of all unigram probabilities adjusted by Laplace smoothing is still 1:
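$$\sum_{w \in V} \frac{\#(w) + 1}{N + |V|} = \frac{N + |V|}{N + |V|} = 1$$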
Let us define a function `unigram_smoothing()` that takes a file path and returns a dictionary with words and their probabilities as keys and values, respectively, estimated by Laplace smoothing:
```python
from src.ngram_models import unigram_count, Unigram

UNKNOWN = ''

def unigram_smoothing(filepath: str) -> Unigram:
    counts = unigram_count(filepath)
    total = sum(counts.values()) + len(counts)
    unigrams = {word: (count + 1) / total for word, count in counts.items()}
    unigrams[UNKNOWN] = 1 / total
    return unigrams
```
- L1: Import the `unigram_count()` function from the `src.ngram_models` package.
- L3: Define a constant representing the unknown word.
- L7: Increment the total count by the vocabulary size.
- L8: Increment each unigram count by 1.
- L9: Add the unknown word to the unigrams with a probability of 1 divided by the total count.
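We can numerically verify the earlier claim that the smoothed probabilities of the known words still sum to 1. The snippet below is a minimal sketch that reuses `unigram_smoothing()` and the corpus file introduced in the test that follows:

```python
probs = unigram_smoothing('dat/chronicles_of_narnia.txt')
# Sum the smoothed probabilities of all known words, excluding the unknown entry.
known_mass = sum(p for word, p in probs.items() if word != UNKNOWN)
print(round(known_mass, 6))  # prints 1.0 (up to floating-point rounding)
```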
We then test `unigram_smoothing()` with a text file `dat/chronicles_of_narnia.txt`:
```python
from src.ngram_models import test_unigram

corpus = 'dat/chronicles_of_narnia.txt'
test_unigram(corpus, unigram_smoothing)
```
- L1: Import `test_unigram()` from the `ngram_models` package.
I 0.010225
Aslan 0.001796
Lucy 0.001762
Edmund 0.001369
Narnia 0.001339
Caspian 0.001300
Jill 0.001226
Peter 0.001005
Shasta 0.000902
Digory 0.000899
Eustace 0.000853
Susan 0.000636
Tirian 0.000585
Polly 0.000533
Aravis 0.000523
Bree 0.000479
Puddleglum 0.000479
Scrubb 0.000469
Andrew 0.000396
| Unigram | With Smoothing | W/O Smoothing |
|:--|--:|--:|
| I | 0.010225 | 0.010543 |
| Aslan | 0.001796 | 0.001850 |
| Lucy | 0.001762 | 0.001815 |
| Edmund | 0.001369 | 0.001409 |
| Narnia | 0.001339 | 0.001379 |
| Caspian | 0.001300 | 0.001338 |
| Jill | 0.001226 | 0.001262 |
| Peter | 0.001005 | 0.001034 |
| Shasta | 0.000902 | 0.000928 |
| Digory | 0.000899 | 0.000925 |
| Eustace | 0.000853 | 0.000877 |
| Susan | 0.000636 | 0.000654 |
| Tirian | 0.000585 | 0.000601 |
| Polly | 0.000533 | 0.000547 |
| Aravis | 0.000523 | 0.000537 |
| Bree | 0.000479 | 0.000492 |
| Puddleglum | 0.000479 | 0.000492 |
| Scrubb | 0.000469 | 0.000482 |
| Andrew | 0.000396 | 0.000406 |
Compared to the unigram results without smoothing (see the comparison table above), the probabilities of these top unigrams have slightly decreased.
Q4: When applying Laplace smoothing, do unigram probabilities always decrease? If not, what conditions can cause a unigram's probability to increase?
The unigram probability of any word, including unknown words, can be retrieved by falling back to the `UNKNOWN` key:
```python
def smoothed_unigram(probs: Unigram, word: str) -> float:
    return probs.get(word, probs[UNKNOWN])
```
- L2: Use `get()` to retrieve the probability of the target word from `probs`. If the word is not present, default to the probability of the `UNKNOWN` token.
```python
unigram = unigram_smoothing(corpus)
for word in ['Aslan', 'Jinho']:
    print(f'{word} {smoothed_unigram(unigram, word):.6f}')
```
- L2: Test a known word, 'Aslan', and an unknown word, 'Jinho'.
Aslan 0.001796
Jinho 0.000002
Bigram Smoothing
The bigram model can also be enhanced by applying Laplace smoothing:
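Writing $\#(w_{i-1}, w_i)$ for the count of the bigram $(w_{i-1}, w_i)$ in the corpus, the smoothed bigram probability becomes:

$$P(w_i \mid w_{i-1}) = \frac{\#(w_{i-1}, w_i) + 1}{\#(w_{i-1}) + |V|}$$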
Thus, the probability of an unknown bigram where $w_{i-1}$ is known but $w_i$ is unknown is calculated as follows: