The goal of this lab session is to accelerate the structured perceptron using two methods: Viterbi and beam search. You should use the same data as in Lab 3.
- Implement the Viterbi algorithm to accelerate the argmax operation during both training and testing. Recall that Viterbi is an exact search algorithm, i.e. it should return exactly the same result as the naive implementation, only faster. To ensure this happens, pay attention to random seeds, use of (ordered) dictionaries, etc. Here is an adaptation of Viterbi for the structured perceptron:

      Input: word sequence x = [x_1, ..., x_N], weights w
      Set the score matrix V of size |Y| × N to 0 and the backpointer matrix B of size |Y| × N to 0.
      For each position n (processing the sentence left to right) and each tag y ∈ Y:
          V[y, n] = max_{y′ ∈ Y} V[y′, n−1] + w · φ(y, y′, x)
          B[y, n] = argmax_{y′ ∈ Y} V[y′, n−1] + w · φ(y, y′, x)
      Recover the best tag sequence by following the backpointers in B from the highest-scoring entry in the last column of V.
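  A minimal Python sketch of the recursion above, assuming (as in many Lab 3 implementations) that the weights `w` are a dict mapping feature names to floats and that a hypothetical `phi(y, y_prev, x, n)` returns a sparse feature-count dict for tagging word `x[n]` with `y` after `y_prev`; the first position is scored against a special `"<START>"` tag, which is also an assumption rather than part of the handout:

  ```python
  def score(w, feats):
      """Sparse dot product between a weight dict and a feature-count dict."""
      return sum(w.get(f, 0.0) * v for f, v in feats.items())

  def viterbi_decode(x, tags, w, phi):
      """Return the highest-scoring tag sequence for sentence x under weights w."""
      N = len(x)
      # V[n][y]: best score of any tag prefix ending in tag y at position n.
      # B[n][y]: the previous tag that achieves that best score (backpointer).
      V = [dict() for _ in range(N)]
      B = [dict() for _ in range(N)]

      # Base case: score the first word against a special START tag.
      for y in tags:
          V[0][y] = score(w, phi(y, "<START>", x, 0))
          B[0][y] = None

      # Recursion: the best prefix ending in y at n extends the best prefix at n-1.
      for n in range(1, N):
          for y in tags:
              best_prev, best_score = None, float("-inf")
              for y_prev in tags:
                  s = V[n - 1][y_prev] + score(w, phi(y, y_prev, x, n))
                  if s > best_score:
                      best_prev, best_score = y_prev, s
              V[n][y] = best_score
              B[n][y] = best_prev

      # Backtrace from the best final tag to recover the full sequence.
      y = max(V[N - 1], key=V[N - 1].get)
      path = [y]
      for n in range(N - 1, 0, -1):
          y = B[n][y]
          path.append(y)
      return list(reversed(path))
  ```

  Iterating over `tags` in a fixed order (and breaking ties the same way as your naive decoder) is what makes the two implementations return identical sequences.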
- What speed-up do you get with Viterbi compared to the standard structured perceptron?
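  One simple way to measure this is to decode the same sentences with both implementations and compare wall-clock times; a sketch, where `naive_decode`, `dev_sentences`, `tagset`, `w` and `phi` are hypothetical names standing in for your own Lab 3 objects:

  ```python
  import time

  def time_decoder(decode_fn, sentences, *args, **kwargs):
      """Decode every sentence once and report the wall-clock time taken."""
      start = time.perf_counter()
      predictions = [decode_fn(x, *args, **kwargs) for x in sentences]
      elapsed = time.perf_counter() - start
      print(f"{decode_fn.__name__}: {elapsed:.2f}s for {len(sentences)} sentences")
      return predictions, elapsed

  # Hypothetical usage, with objects taken from your Lab 3 code:
  # time_decoder(naive_decode, dev_sentences, tagset, w, phi)
  # time_decoder(viterbi_decode, dev_sentences, tagset, w, phi)
  ```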
- Implement beam search to accelerate the argmax. Recall that it is an inexact search method, so the results might differ; with a large enough beam size, however, it should achieve the same results as Viterbi and the structured perceptron from Lab 3.
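  A minimal beam-search sketch under the same assumptions as the Viterbi sketch above (it reuses the hypothetical `score` and `phi` helpers); at each position only the `beam_size` highest-scoring partial tag sequences are kept:

  ```python
  def beam_decode(x, tags, w, phi, beam_size=5):
      """Approximate argmax: keep only the beam_size best partial tag sequences per position."""
      N = len(x)
      beam = [(0.0, [])]  # each hypothesis is (score so far, tag sequence so far)
      for n in range(N):
          candidates = []
          for s, seq in beam:
              y_prev = seq[-1] if seq else "<START>"
              for y in tags:
                  candidates.append((s + score(w, phi(y, y_prev, x, n)), seq + [y]))
          # Prune: keep only the top beam_size hypotheses for the next position.
          candidates.sort(key=lambda item: item[0], reverse=True)
          beam = candidates[:beam_size]
      return beam[0][1]  # tag sequence of the highest-scoring surviving hypothesis
  ```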
- Does beam search affect your accuracy? What is the effect of beam size on speed and accuracy compared to the standard structured perceptron and Viterbi? Tip 1: try three different beam sizes and report the results. Tip 2: use a table to summarise the results (accuracy and training time) for the standard perceptron (from Lab 3), Viterbi, and beam search.
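  For the beam-size experiment, a small loop such as the sketch below can produce the numbers for the table; it reuses `time_decoder` and `beam_decode` from the sketches above, and the beam sizes (1, 5, 20) as well as the argument names are only illustrative:

  ```python
  def token_accuracy(predictions, gold):
      """Fraction of tokens whose predicted tag matches the gold tag."""
      correct = sum(p == g
                    for pred_seq, gold_seq in zip(predictions, gold)
                    for p, g in zip(pred_seq, gold_seq))
      total = sum(len(gold_seq) for gold_seq in gold)
      return correct / total

  def compare_beam_sizes(sentences, gold, tags, w, phi, beam_sizes=(1, 5, 20)):
      """Decode with several beam sizes, printing accuracy and decoding time for each."""
      for k in beam_sizes:  # (1, 5, 20) are only example values; pick your own three
          preds, elapsed = time_decoder(beam_decode, sentences, tags, w, phi, beam_size=k)
          acc = token_accuracy(preds, gold)
          print(f"beam size {k}: accuracy = {acc:.3f}, decoding time = {elapsed:.2f}s")
  ```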