We will evaluate your code by executing the script.py file, which will internally call the problem-specific functions. You must submit an assignment report (PDF file) summarizing your findings. In the problem statements below, the portions under the REPORT heading must be discussed in the assignment report.
Data Sets
In this assignment, we still use MNIST. The script file provided to you implements a function called preprocess() that performs the preprocessing steps: it applies feature selection and feature normalization, and divides the dataset into 3 parts: a training set, a validation set, and a testing set.
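For orientation, a call to preprocess() might look like the sketch below; the variable names and return order are assumptions based on the description above, not the authoritative signature from the base code.

```python
# Hypothetical usage of the provided preprocess() function; the return
# signature below is an assumption, so check the base code.
train_data, train_label, validation_data, validation_label, \
    test_data, test_label = preprocess()

# Each *_data array holds the selected, normalized feature vectors (one
# row per image); each *_label array holds the corresponding digits.
```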
Your tasks
- Implement Logistic Regression and give the prediction results.
- Use the Support Vector Machine (SVM) tool sklearn.svm.SVC to perform classification.
- Write a report explaining the experimental results obtained with these 2 methods.
- Extra credit: Implement the gradient descent minimization of multi-class Logistic Regression (using softmax function).
1.1 Problem 1: Implementation of Logistic Regression (40 code + 15 report = 55 points)
You are asked to implement Logistic Regression to classify hand-written digit images into the correct corresponding labels. The data is the same as that used for the second programming assignment. Since the label associated with each digit can take one of 10 possible values (multiple classes), we cannot directly use a binary logistic regression classifier. Instead, we employ the one-vs-all strategy. In particular, you have to build 10 binary classifiers (one for each class) to distinguish a given class from all other classes.
1.1.1 Implement blrObjFunction() function (20 points)
In order to implement Logistic Regression, you have to complete the function blrObjFunction() provided in the base code (script.py). The input of blrObjFunction() consists of 3 parameters:
- X is a data matrix where each row contains a feature vector in the original coordinate system (not including the bias 1 at the beginning of the vector). In other words, X ∈ R^{N×D}, so you have to add the bias to each feature vector inside this function. To guarantee consistency in the code and enable automatic grading, please add the bias at the beginning of the feature vector, not at the end.
- wk is a column vector representing the parameters of Logistic Regression. The size of wk is (D + 1) × 1.
- yk is a column vector representing the labels of the corresponding feature vectors in data matrix X. Each entry in this vector is either 1 or 0, indicating whether the feature vector belongs to class Ck or not (k = 0, 1, ..., K - 1). The size of yk is N × 1, where N is the number of rows of X. The creation of yk is already done in the base code.
Function blrObjFunction() has 2 outputs:
- error is a scalar value which is the result of computing equation (2).
- error_grad is a column vector of size (D + 1) × 1 which represents the gradient of the error function obtained using equation (3).
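A minimal sketch of what blrObjFunction() computes is shown below. It assumes the weights arrive as a flat array (as scipy.optimize.minimize passes them) and that X and yk are supplied through the args tuple; this calling convention is an assumption, so match it to the base code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def blrObjFunction(initialWeights, *args):
    # args is assumed to carry the data matrix and the 0/1 label vector
    X, yk = args
    n_data = X.shape[0]

    # Add the bias 1 at the BEGINNING of each feature vector, as required
    X_bias = np.hstack((np.ones((n_data, 1)), X))

    w = initialWeights.reshape((-1, 1))    # (D + 1) x 1
    theta = sigmoid(X_bias.dot(w))         # posterior P(y = C_k | x), N x 1

    y = yk.reshape((-1, 1))
    # Cross-entropy error, equation (2)
    error = -np.sum(y * np.log(theta) + (1 - y) * np.log(1 - theta))

    # Gradient, equation (3): sum over n of (theta_n - y_n) x_n
    error_grad = X_bias.T.dot(theta - y).flatten()

    return error, error_grad
```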
1.1.2 Implement blrPredict() function
For prediction using Logistic Regression, given the 10 weight vectors of the 10 classes, we need to classify a feature vector into a certain class. In order to do so, given a feature vector x, we compute the posterior probability P(y = Ck|x), and the decision rule is to assign x to the class Ck that maximizes P(y = Ck|x). In particular, you have to complete the function blrPredict(), which returns the predicted label for each feature vector. Concretely, the input of blrPredict() includes 2 parameters:
- Similar to function blrObjFunction(), X is also a data matrix where each row contains a feature vector in the original coordinate system (not including the bias 1 at the beginning of the vector). In other words, X has size N × D. To guarantee consistency in the code and enable automatic grading, please add the bias at the beginning of the feature vector, not at the end.
- W is a matrix where each column is the weight vector (wk) of the classifier for digit k. Concretely, W has size (D + 1) × K, where K = 10 is the number of classifiers.
The output of function blrPredict() is a column vector label of size N × 1.
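A minimal sketch of blrPredict() follows; the argument order (W, X) is an assumption consistent with the description above.

```python
import numpy as np

def blrPredict(W, X):
    # W: (D+1) x K weight matrix, X: N x D data matrix
    n_data = X.shape[0]

    # Add the bias 1 at the beginning, matching blrObjFunction()
    X_bias = np.hstack((np.ones((n_data, 1)), X))

    # Posterior for every class; since the sigmoid is monotonic, taking
    # the argmax of W^T x directly would yield the same labels
    posteriors = 1.0 / (1.0 + np.exp(-X_bias.dot(W)))        # N x K

    # Assign each row to the class with the highest posterior
    label = np.argmax(posteriors, axis=1).reshape((-1, 1))   # N x 1
    return label
```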
1.1.3 Report
In your report, you should train the logistic regressor using the given data X (the preprocessed feature vectors of the MNIST data) with labels y. Record the total error with respect to each category for both the training data and the test data, discuss the results in your report, and explain why there is a difference between the training error and the test error.
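To tabulate the per-category errors the report asks for, a sketch along these lines may help; per_class_error and the array shapes are illustrative, not part of the base code.

```python
import numpy as np

def per_class_error(predicted_label, true_label, n_classes=10):
    # Fraction of misclassified examples within each true class
    predicted_label = predicted_label.flatten()
    true_label = true_label.flatten()
    for k in range(n_classes):
        mask = (true_label == k)
        err = np.mean(predicted_label[mask] != true_label[mask])
        print('class %d: error %.4f' % (k, err))
```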
1.2 For Extra Credit: Multi-class Logistic Regression
In this part, you are asked to implement multi-class Logistic Regression. Traditionally, Logistic Regression is used for binary classification. However, it can also be extended to multi-class classification. With this method, we don't need to build 10 classifiers as before; instead, we build a single classifier that can classify all 10 classes at the same time.
1.2.1 Implement mlrObjFunction() function (10 points)
In order to implement Multi-class Logistic Regression, you have to complete the function mlrObjFunction() provided in the base code (script.py). The input of mlrObjFunction() has the same parameter definitions as above, and the function has 2 outputs with the same definitions as above. You should use the multi-class logistic (softmax) function to regress the probability of each class.
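A minimal sketch of mlrObjFunction() under the same assumed calling convention as blrObjFunction(); the 1-of-K label matrix Y and the flattened (D + 1) × K parameter layout are assumptions.

```python
import numpy as np

def mlrObjFunction(params, *args):
    # args is assumed to carry the data matrix and the 1-of-K label matrix
    X, Y = args
    n_data = X.shape[0]
    n_class = Y.shape[1]

    # Add the bias 1 at the beginning of each feature vector
    X_bias = np.hstack((np.ones((n_data, 1)), X))

    W = params.reshape((X_bias.shape[1], n_class))   # (D+1) x K

    # Softmax posteriors, equation (5); shift by the row max for
    # numerical stability before exponentiating
    scores = X_bias.dot(W)
    scores -= scores.max(axis=1, keepdims=True)
    exp_scores = np.exp(scores)
    theta = exp_scores / exp_scores.sum(axis=1, keepdims=True)  # N x K

    # Cross-entropy error, equation (7)
    error = -np.sum(Y * np.log(theta))

    # Gradient, equation (8), stacked column-wise and flattened
    error_grad = X_bias.T.dot(theta - Y).flatten()

    return error, error_grad
```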
1.2.2 Report
In your report, you should train the logistic regressor using the given data X (the preprocessed feature vectors of the MNIST data) with labels y. Record the total error with respect to each category for both the training data and the test data, discuss the results in your report, and explain why there is a difference between the training error and the test error. Compare the performance of the multi-class strategy with that of the one-vs-all strategy.
1.3 Support Vector Machines
In this part of the assignment, you are asked to use the Support Vector Machine tool sklearn.svm.SVC to perform classification on our data set. The details about the tool are provided here: http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html.
1.3.1 Complete the Support Vector Machine section of script.py
Your task is to fill in the code in the Support Vector Machine section of script.py to learn the SVM model. SVM models are known to scale poorly to large datasets, so please randomly sample 10,000 training samples to learn the SVM models, and compute the prediction accuracy with respect to the training data, validation data, and testing data using the following parameter settings (a sketch of these experiments follows the list):
- Using a linear kernel (all other parameters kept at their defaults).
- Using a radial basis function kernel with gamma set to 1 (all other parameters kept at their defaults).
- Using a radial basis function kernel with gamma kept at its default (all other parameters kept at their defaults).
- Using a radial basis function kernel with default gamma and varying values of C (1, 10, 20, 30, ..., 100); plot the graph of accuracy with respect to the values of C in the report.
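A sketch of these experiments using sklearn.svm.SVC is shown below; the variable names (train_data, validation_data, and so on) are assumed to come from preprocess(), while the SVC calls themselves follow the scikit-learn API.

```python
import numpy as np
from sklearn.svm import SVC

# Randomly sample 10,000 training points (the arrays are assumed to come
# from the preprocessing step in script.py)
idx = np.random.choice(train_data.shape[0], 10000, replace=False)
X_sub, y_sub = train_data[idx], train_label[idx].ravel()

def evaluate(clf):
    clf.fit(X_sub, y_sub)
    print('train      %.4f' % clf.score(X_sub, y_sub))
    print('validation %.4f' % clf.score(validation_data, validation_label.ravel()))
    print('test       %.4f' % clf.score(test_data, test_label.ravel()))

evaluate(SVC(kernel='linear'))        # linear kernel, defaults otherwise
evaluate(SVC(kernel='rbf', gamma=1))  # RBF kernel, gamma = 1
evaluate(SVC(kernel='rbf'))           # RBF kernel, default gamma

# RBF kernel with default gamma and varying C; collect the accuracies
# printed here to plot accuracy versus C for the report
for C in [1] + list(range(10, 101, 10)):
    evaluate(SVC(kernel='rbf', C=C))
```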
After those experiments, choose the best parameter setting, train with the whole training dataset, and report the accuracy on the training, validation, and testing data.
1.3.2 Report
In your report, you should train the SVM using the given data X (the preprocessed feature vectors of the MNIST data) with labels y, and discuss the performance differences between the linear kernel and the radial basis function kernel, as well as between the different gamma settings.
Appendices
A Logistic Regression
Consider x ∈ R^D as an input vector. We want to classify x into the correct class, C1 or C2 (denoted by the random variable y). In Logistic Regression, the posterior probability of class C1 can be written as follows:
P(y = C_1 | x) = \sigma(w^T x + w_0)

where w ∈ R^D is the weight vector and \sigma(\cdot) is the sigmoid function.
For simplicity, we will denote x = [1, x_1, x_2, ..., x_D] and w = [w_0, w_1, w_2, ..., w_D]. With this new notation, the posterior probability of class C1 can be rewritten as follows:

P(y = C_1 | x) = \sigma(w^T x)    (1)
The posterior probability of class C2 is then:

P(y = C_2 | x) = 1 - P(y = C_1 | x)
We now consider the data set {x_1, x_2, ..., x_N} and the corresponding labels {y_1, y_2, ..., y_N}, where

y_i = \begin{cases} 1 & \text{if } x_i \in C_1 \\ 0 & \text{if } x_i \in C_2 \end{cases}

for i = 1, 2, ..., N.
With this data set, the likelihood function can be written as follows:

P(y_1, ..., y_N | w) = \prod_{n=1}^{N} \theta_n^{y_n} (1 - \theta_n)^{1 - y_n}

where \theta_n = \sigma(w^T x_n) for n = 1, 2, ..., N.
We also define the error function by taking the negative logarithm of the likelihood, which gives the cross-entropy error function of the form:

E(w) = -\sum_{n=1}^{N} \left\{ y_n \ln \theta_n + (1 - y_n) \ln(1 - \theta_n) \right\}    (2)
Note that this function is different from the squared loss function that we have used for Neural Networks and Perceptrons.
The gradient of the error function with respect to w can be obtained as follows:

\nabla E(w) = \sum_{n=1}^{N} (\theta_n - y_n) x_n    (3)
Up to this point, we can again use gradient descent to find the optimal weight w that minimizes the error function, with the formula:

w^{new} = w^{old} - \eta \nabla E(w^{old})    (4)

where \eta is the learning rate.
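Rather than hand-coding update (4), the objective above can be handed to an off-the-shelf optimizer. Below is a hedged sketch using scipy.optimize.minimize; the choice of method='CG' and maxiter is illustrative, and train_data and yk are assumed to come from the earlier steps.

```python
import numpy as np
from scipy.optimize import minimize

# train_data and yk are assumed to come from preprocess() and the
# one-vs-all label construction in the base code
initialWeights = np.zeros(train_data.shape[1] + 1)

# jac=True tells the optimizer that blrObjFunction returns the pair
# (error, error_grad), so no numerical differentiation is needed
result = minimize(blrObjFunction, initialWeights, jac=True,
                  args=(train_data, yk), method='CG',
                  options={'maxiter': 100})
w_k = result.x.reshape((-1, 1))   # learned (D + 1) x 1 weight vector
```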
B Multi-Class Logistic Regression
For multi-class Logistic Regression, the posterior probabilities are given by a softmax transformation of linear functions of the feature variables, so that
P(y = C_k | x) = \theta_k(x) = \frac{\exp(w_k^T x)}{\sum_j \exp(w_j^T x)}    (5)
Now we write down the likelihood function. This is most easily done using the 1-of-K coding scheme in which the target vector yn for a feature vector xn belonging to class Ck is a binary vector with all elements zero except for element k, which equals one. The likelihood function is then given by
P(Y | w_1, ..., w_K) = \prod_{n=1}^{N} \prod_{k=1}^{K} P(y = C_k | x_n)^{y_{nk}} = \prod_{n=1}^{N} \prod_{k=1}^{K} \theta_{nk}^{y_{nk}}    (6)

where \theta_{nk} denotes (5) evaluated at x_n, and Y is an N × K matrix (obtained using the 1-of-K encoding) of target variables with elements y_{nk}. Taking the negative logarithm then gives
E(w_1, ..., w_K) = -\ln P(Y | w_1, ..., w_K) = -\sum_{n=1}^{N} \sum_{k=1}^{K} y_{nk} \ln \theta_{nk}    (7)
which is known as the cross-entropy error function for the multi-class classification problem.
We now take the gradient of the error function with respect to one of the parameter vectors w_k. Making use of the result for the derivatives of the softmax function, we obtain:

\frac{\partial E(w_1, ..., w_K)}{\partial w_k} = \sum_{n=1}^{N} (\theta_{nk} - y_{nk}) x_n    (8)
We can then use the following update rule, analogous to (4), to obtain the optimal parameter vectors w_k iteratively:

w_k^{new} = w_k^{old} - \eta \frac{\partial E(w_1, ..., w_K)}{\partial w_k}