In the textbook, language modeling was defined as the task of predicting the next word in a sequence given the previous words. In this assignment, we will focus on the related problem of predicting the next character in a sequence given the previous characters.
The learning goals of this assignment are to:
- Understand how to compute language model probabilities using maximum likelihood estimation
- Implement basic smoothing, back-off and interpolation.
- Have fun using a language model to probabilistically generate texts.
- Use a set of language models to perform text classification.
Part 1: Unsmoothed Maximum Likelihood Character-Level Language Models
We're going to be starting with some nice, compact code for character-level language models that was written by Yoav Goldberg. Below is Yoav's code for training a language model.
Train a language model
Note: we provide you this code in the Python stub file.
fname is the file to read the characters from. order is the history size to consult. Note that we pad the data with leading ~ characters so that we also learn how to start.
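For reference, here is a sketch along the lines of Yoav's original training function, lightly adapted to Python 3 (the stub's version may differ in small details):

```python
from collections import defaultdict, Counter

def train_char_lm(fname, order=4):
    """Train an order-n character language model from the text in fname."""
    with open(fname, encoding="utf-8") as f:
        data = f.read()
    pad = "~" * order          # leading pad so the model learns how texts start
    data = pad + data
    counts = defaultdict(Counter)
    for i in range(len(data) - order):
        history, char = data[i:i + order], data[i + order]
        counts[history][char] += 1
    def normalize(counter):
        total = float(sum(counter.values()))
        return [(char, cnt / total) for char, cnt in counter.items()]
    return {history: normalize(chars) for history, chars in counts.items()}
```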
Now you can train a language model. First grab some text, like this corpus of Shakespeare:

$ wget http://cs.stanford.edu/people/karpathy/char-rnn/shakespeare_input.txt
Now train the model:
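For example, using the file downloaded above and an order-4 model (the order here is just an illustration):

```python
lm = train_char_lm("shakespeare_input.txt", order=4)
```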
P(hello world)
OK. Now we can look up the probability of the next letter given some history. Here are some example queries:
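With the model trained above, a query is just a dictionary lookup on a 4-character history, for example:

```python
print(lm["hell"])   # list of (character, probability) pairs seen after "hell"
print(lm["worl"])   # and after "worl"
```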
Actually, let's pretty-print the output and sort the letters based on their probabilities.
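One way to do that (the helper name print_probs is just illustrative):

```python
def print_probs(lm, history):
    """Print the next-character distribution for a history, most probable first."""
    for char, prob in sorted(lm[history], key=lambda pair: -pair[1]):
        print(f"{char!r}\t{prob:.3f}")
```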
OK, print again:
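Using the illustrative helper from above:

```python
print_probs(lm, "hell")
```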
This means that hell can be followed by any of these dozen characters: , o . ! - ? i ; ' : s and space, and that the probability of o given hell is 15.7%, $p(o \mid \text{hell}) = 0.157$. The most probable character to see after hell is the space, $p(\text{' '} \mid \text{hell}) = 0.221$.
The distribution of letters that occur after worl is different from the distribution of letters that occur after hell. Here is that distribution:
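Using the illustrative helper again (per the next paragraph, this corpus gives a single entry):

```python
print_probs(lm, "worl")   # in this corpus: 'd' with probability 1.000
```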
What does that mean? It means that in our corpus, the only possible continuation that we observed for worl was the letter d, and we assign 100% of the probability mass to it, $p(d \mid \text{worl}) = 1.0$.
Let's write some Shakespeare!
Generating text with the model is simple. To generate a letter, we will look up the last n characters and then sample a random letter based on the probability distribution for that history. Here's Yoav's code for that:
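The stub contains this function; it looks roughly like the following sketch (adapted to Python 3), which samples from the distribution stored for the last order characters:

```python
from random import random

def generate_letter(lm, history, order):
    """Sample the next character given the last `order` characters of history."""
    history = history[-order:]
    distribution = lm[history]
    x = random()
    for char, prob in distribution:
        x -= prob
        if x <= 0:
            return char
    return distribution[-1][0]     # guard against floating-point rounding
```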
To generate a passage of text, we just seed it with the initial history and run letter generation in a loop, updating the history at each turn. We'll stop generating after a specified number of letters.
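A corresponding generation loop, seeded with the ~ pad (again a sketch; the stub's version may differ slightly):

```python
def generate_text(lm, order, nletters=1000):
    """Generate nletters characters, starting from the all-~ initial history."""
    history = "~" * order
    out = []
    for _ in range(nletters):
        char = generate_letter(lm, history, order)
        history = history[-order:] + char
        out.append(char)
    return "".join(out)
```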
Now, try generating some Shakespeare with different order n-gram models. You should try running the following commands.
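For instance, something along these lines (the exact orders to explore are up to you):

```python
for order in [2, 4, 7]:
    lm = train_char_lm("shakespeare_input.txt", order=order)
    print(f"----- order {order} -----")
    print(generate_text(lm, order, nletters=500))
```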
What do you think? Is it as good as 1000 monkeys working at 1000 typewriters?
What the F?
Try generating a bunch of short passages:
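For example, with an order-4 model:

```python
lm = train_char_lm("shakespeare_input.txt", order=4)
for _ in range(5):
    print(generate_text(lm, 4, nletters=40))
    print("-" * 20)
```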
Do you notice anything? They all start with F! In fact, after we hit a certain order, the first word is always First. Why is that? Is the model trying to be clever? First, generate the word First. Explain what is going on in your writeup.
Part 2: Perplexity, smoothing, back-off and interpolation
In this part of the assignment, you'll adapt Yoav's code in order to implement several of the techniques described in Section 4.2 of the Jurafsky and Martin textbook.
Perplexity
How do we know whether a LM is good? There are two basic approaches:
- Task-based evaluation (also known as extrinsic evaluation), where we use the LM as part of some other task, like automatic speech recognition, spelling correction, or an OCR system that tries to convert a professor's messy handwriting into text.
- Intrinsic evaluation, which tries to directly evaluate the goodness of the language model by seeing how well the probability distributions that it estimates are able to explain some previously unseen test set.
Here's what the textbook says:
For an intrinsic evaluation of a language model we need a test set. As with many of the statistical models in our field, the probabilities of an N-gram model come from the corpus it is trained on, the training set or training corpus. We can then measure the quality of an N-gram model by its performance on some unseen data called the test set or test corpus. We will also sometimes call test sets and other datasets that are not in our training sets held out corpora because we hold them out from the training data.
So if we are given a corpus of text and want to compare two different N-gram models, we divide the data into training and test sets, train the parameters of both models on the training set, and then compare how well the two trained models fit the test set.
But what does it mean to fit the test set? The answer is simple: whichever model assigns a higher probability to the test set is a better model.
We'll implement the most common intrinsic metric for language models: perplexity. The perplexity of a language model on a test set is the inverse probability of the test set, normalized by the number of words. For a test set $W = w_1 w_2 \ldots w_N$:

$$\text{Perplexity}(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}} = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1 \ldots w_{i-1})}}$$
OK, let's implement it. Here's a possible function signature for perplexity. (We might update it during class on Wednesday.) Give it a go.
A couple of things to keep in mind:
- Remember to pad the front of the file
- Numeric underflow is going to be a problem, so consider using logs.
- Perplexity is undefined if the LM assigns any zero probabilities to the test set. In that case your code should return positive infinity, float("inf").
- On your unsmoothed models, you'll definitely get some zero probabilities for the test set. To test your code, you should try computing perplexity on the training set, and you should compute perplexity for your LMs that use smoothing and interpolation.
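Putting the checklist above together, here is a minimal log-space sketch; the stub's actual signature may differ, so treat the names and defaults as assumptions:

```python
import math

def perplexity(test_filename, lm, order=4):
    """Compute the perplexity of a character LM on a test file (minimal sketch)."""
    with open(test_filename, encoding="utf-8") as f:
        data = f.read()
    data = "~" * order + data          # pad the front, as in training
    log_prob, n = 0.0, 0
    for i in range(len(data) - order):
        history, char = data[i:i + order], data[i + order]
        prob = dict(lm.get(history, [])).get(char, 0.0)
        if prob == 0.0:
            return float("inf")        # perplexity is undefined for zero-probability events
        log_prob += math.log(prob)
        n += 1
    return math.exp(-log_prob / n)
```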
In your report:
Discuss the perplexity for text that is similar to and different from Shakespeare's plays. We provide you two dev text files, a New York Times article and several of Shakespeare's sonnets, but feel free to experiment with your own text.
Note: you may want to create a smoothed language model before calculating perplexity; otherwise the unsmoothed model will assign zero probability to unseen events and the perplexity will be infinite.
Laplace Smoothing and Add-k Smoothing
Laplace smoothing is described in section 4.4.1. Laplace smoothing adds one to each count (hence its alternate name, add-one smoothing). Since there are $V$ words in the vocabulary and each one was incremented, we also need to adjust the denominator to take into account the extra $V$ observations:

$$P_{\text{Laplace}}(w_i) = \frac{\text{count}_i + 1}{N + V}$$
A variant of Laplace smoothing is called add-k smoothing or add-epsilon smoothing. This is described in section 4.4.2. Let's change the function definition of train_char_lm so that it takes a new argument, add_k, which specifies how much to add. By default, we'll set it to one, so that it acts like Laplace smoothing:
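A sketch of that change, assuming the vocabulary is simply the set of characters observed in training (other choices are possible):

```python
from collections import defaultdict, Counter

def train_char_lm(fname, order=4, add_k=1):
    """Train an order-n character LM with add-k smoothing over the observed vocabulary."""
    with open(fname, encoding="utf-8") as f:
        data = f.read()
    data = "~" * order + data
    vocab = set(data)                  # assumption: vocabulary = characters seen in training
    counts = defaultdict(Counter)
    for i in range(len(data) - order):
        history, char = data[i:i + order], data[i + order]
        counts[history][char] += 1
    def normalize(counter):
        total = sum(counter.values()) + add_k * len(vocab)
        return [(char, (counter[char] + add_k) / total) for char in vocab]
    return {history: normalize(chars) for history, chars in counts.items()}
```

Note that this only smooths histories that were seen in training; histories that never occurred still have no entry, which is one of the problems interpolation addresses next.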
Interpolation
Next, let's implement interpolation. The idea here is to calculate the higher-order n-gram probabilities while also combining the probabilities from lower-order n-gram models. Like smoothing, this helps us avoid the problem of zeros when we haven't observed the longer sequence in our training data. Here's the math:

$$P_{\text{backoff}}(w_i \mid w_{i-2} w_{i-1}) = \lambda_1 P(w_i \mid w_{i-2} w_{i-1}) + \lambda_2 P(w_i \mid w_{i-1}) + \lambda_3 P(w_i)$$

where $\lambda_1 + \lambda_2 + \lambda_3 = 1$.
Now, write a back-off function:
You should also write a helper function to set the lambdas. Here's a function definition that gives you access to a development set. You can also experiment with setting them manually.
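Here is one possible sketch. The names and conventions (a list of models lms where lms[i] has order i, and a matching list of lambdas that sums to 1) are illustrative; follow whatever the stub actually specifies:

```python
def calculate_prob_with_backoff(char, history, lms, lambdas):
    """Interpolated probability of char given history, mixing models of different orders."""
    prob = 0.0
    for order, (lam, lm) in enumerate(zip(lambdas, lms)):
        hist = history[-order:] if order > 0 else ""
        dist = dict(lm.get(hist, []))
        prob += lam * dist.get(char, 0.0)
    return prob

def set_lambdas(lms, dev_filename):
    """Return one lambda per model; a trivial baseline is uniform weights.
    Tuning them on the development set is left as an exercise."""
    return [1.0 / len(lms)] * len(lms)
```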
In your report:
Experiment with a couple of different lambdas and values of k, and discuss their effects.
Part 3: Text Classification using LMs
Language models can be applied to text classification. If we want to classify a text $D$ into a category $c \in C = \{c_1, \ldots, c_N\}$, we can pick the category $c$ that has the largest posterior probability given the text. That is,

$$c^* = \arg\max_{c \in C} P(c \mid D)$$
Using Bayes' rule, this can be rewritten as:

$$c^* = \arg\max_{c \in C} P(D \mid c)\,P(c)$$
If we assume that all classes are equally likely, then we can just drop the $P(c)$ term:

$$c^* = \arg\max_{c \in C} P(D \mid c)$$
Here $P(D \mid c)$ is the likelihood of $D$ under category $c$, which can be computed by training a language model on all of the texts associated with category $c$. This technique of text classification is drawn from the literature on authorship identification, where the approach is to learn a separate language model for each author by training on a data set from that author. Then, to categorize a new text $D$, they use each language model to calculate the likelihood of $D$ under that model and pick the category that assigns the highest probability to $D$.
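A sketch of that idea for this assignment, where lms maps a label (e.g., a country code) to a character LM trained on that label's texts. The names and the order are assumptions, and the zero-probability handling is deliberately crude; smoothing or interpolation should replace it:

```python
import math

def classify(text, lms, order=3):
    """Return the label whose character LM assigns the highest log-likelihood to text."""
    best_label, best_score = None, float("-inf")
    for label, lm in lms.items():
        padded = "~" * order + text
        score = 0.0
        for i in range(len(padded) - order):
            history, char = padded[i:i + order], padded[i + order]
            prob = dict(lm.get(history, [])).get(char, 0.0)
            score += math.log(prob) if prob > 0 else math.log(1e-10)  # crude floor for unseen events
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```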
Try it! We have provided you training and validation datasets consisting of the names of cities. The task is to predict the country a city is in. The following countries are included in the dataset:
- af Afghanistan
- cn China
- de Germany
- fi Finland
- fr France
- in India
- ir Iran
- pk Pakistan
- za South Africa
Leaderboard
We'll set up a leaderboard for the text classification task. Your job is to configure a set of language models that perform the best on the text classification task. We will use the city names dataset, which you should have already downloaded. The test set has one unlabeled city name per line. Your code should output a file labels.txt with one two-letter country code per line.
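For example, assuming the classifier sketch above, with lms holding one model per country code and cities_test.txt standing in for whatever the test file is actually called:

```python
with open("cities_test.txt", encoding="utf-8") as test_file, open("labels.txt", "w") as out:
    for line in test_file:
        out.write(classify(line.strip(), lms, order=3) + "\n")
```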
In next week's assignment, you will use a recurrent neural network on the same dataset in order to compare performance.
In your report:
Describe the parameters of your final leaderboard model and any experimentation you did before settling on it.
Deliverables
Here are the deliverables that you will need to submit:
- writeup.pdf
- code (.zip). It should be written in Python 3 and include a README.txt briefly explaining how to run it.
- labels.txt predictions for the leaderboard.
Recommended readings
Language Modeling with N-grams. Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd edition draft).
The Unreasonable Effectiveness of Character-level Language Models. Yoav Goldberg. Response to Andrej Karpathy's blog post. 2015.
Language Independent Authorship Attribution using Character Level Language Models. Fuchun Peng, Dale Schuurmans, Vlado Keselj, Shaojun Wang. EACL 2003.
Here are the materials that you should download for this assignment:
- Skeleton Python code.
- Training data for the text classification task.
- Dev data for the text classification task.
- Test file for the leaderboard.
Note: all of this code is included in the provided code stub called language_model.py. No need to copy and paste.