ML Homework 3: Gaussian Process for Regression


1 Gaussian Process for Regression

In this exercise, please implement a Gaussian process (GP) for regression. The files gp_x.csv and gp_t.csv contain the input data $x = \{x_1, x_2, \dots, x_{100}\}$, $0 < x_i < 1$, and the corresponding target data $t = \{t_1, t_2, \dots, t_{100}\}$, respectively. Please take the first 50 points as the training set and the rest as the test set. A regression function $y(\cdot)$ is used to express the target value by

$$t_n = y(x_n) + \epsilon_n,$$

where the noise $\epsilon_n$ is Gaussian distributed, $\epsilon_n \sim \mathcal{N}(0, \beta^{-1})$, with $\beta^{-1} = 1$.

  1. Please implement the GP with the exponential-quadratic kernel function given by

$$k(x_n, x_m) = \theta_0 \exp\!\left(-\frac{\theta_1}{2}\,\|x_n - x_m\|^2\right) + \theta_2 + \theta_3\, x_n^{\top} x_m,$$

where the hyperparameters $\theta = \{\theta_0, \theta_1, \theta_2, \theta_3\}$ are fixed. Please use the training set with four different combinations:

  • linear kernel: $\theta = \{0, 0, 0, 1\}$
  • squared exponential kernel: $\theta = \{1, 16, 0, 0\}$
  • exponential-quadratic kernel: $\theta = \{1, 16, 0, 4\}$
  • exponential-quadratic kernel: $\theta = \{1, 64, 32, 0\}$
  2. Please plot the prediction result like Figure 6.8 of the textbook for the training set, but with one standard deviation instead of two and without the green curve. The title of the figure should be the values of the hyperparameters used in the model. The red line shows the mean $m(x)$ of the GP predictive distribution, the pink region corresponds to plus and minus one standard deviation, and the training data points are shown in blue. An example is provided below. (A minimal Python sketch of the prediction, plot, and RMS errors is given after this list.)


  3. Show the corresponding root-mean-square errors

$$E_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\bigl(m(x_n) - t_n\bigr)^2}$$

for both the training and test sets with respect to the four kernels.

  4. Try to tune the hyperparameters by yourself to find the best combination for the dataset. You can tune the hyperparameters by trial and error or use automatic relevance determination (ARD) from Chapter 6.4.4 of the textbook. (If you implement the ARD method, you will get bonus points.) A sketch of tuning by maximizing the log marginal likelihood is also given after this list.
  5. Explain your findings and make some discussion.
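The following is a minimal sketch, not the official solution, of items 1-3: GP prediction with the exponential-quadratic kernel, the plot, and the RMS errors. The single-column layout of gp_x.csv / gp_t.csv and the helper names (kernel, gp_predict) are assumptions.

```python
# A minimal sketch of GP regression with the exponential-quadratic kernel
# k(x_n, x_m) = th0*exp(-th1/2*(x_n-x_m)^2) + th2 + th3*x_n*x_m.
# The single-column layout of gp_x.csv / gp_t.csv is an assumption.
import numpy as np
import matplotlib.pyplot as plt

x = np.loadtxt("gp_x.csv", delimiter=",").ravel()
t = np.loadtxt("gp_t.csv", delimiter=",").ravel()
x_train, t_train = x[:50], t[:50]            # first 50 points = training set
x_test, t_test = x[50:], t[50:]              # remaining 50 points = test set
beta = 1.0                                   # noise precision, beta^{-1} = 1

def kernel(xa, xb, theta):
    th0, th1, th2, th3 = theta
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return th0 * np.exp(-0.5 * th1 * d2) + th2 + th3 * xa[:, None] * xb[None, :]

def gp_predict(x_star, theta):
    """Predictive mean and standard deviation at the points x_star."""
    K = kernel(x_train, x_train, theta)
    C = K + np.eye(len(x_train)) / beta      # C_N = K + beta^{-1} I
    k_star = kernel(x_train, x_star, theta)  # N x M cross-covariances
    c = kernel(x_star, x_star, theta).diagonal() + 1.0 / beta
    mean = k_star.T @ np.linalg.solve(C, t_train)
    var = c - np.einsum("ij,ij->j", k_star, np.linalg.solve(C, k_star))
    return mean, np.sqrt(np.maximum(var, 1e-12))

theta = (1.0, 16.0, 0.0, 4.0)                # one of the four required combinations
xs = np.linspace(x_train.min(), x_train.max(), 200)
m, s = gp_predict(xs, theta)
plt.plot(xs, m, "r-")                                # predictive mean m(x)
plt.fill_between(xs, m - s, m + s, color="pink")     # +/- one standard deviation
plt.scatter(x_train, t_train, c="b", s=10)           # training points
plt.title(str(theta))
plt.show()

# Root-mean-square errors for the training and test sets
m_tr, _ = gp_predict(x_train, theta)
m_te, _ = gp_predict(x_test, theta)
print("train RMSE:", np.sqrt(np.mean((m_tr - t_train) ** 2)))
print("test  RMSE:", np.sqrt(np.mean((m_te - t_test) ** 2)))
```

For item 4, one hedged possibility is to tune the hyperparameters by maximizing the log marginal likelihood $\ln p(t \mid \theta)$ (PRML Section 6.4.3) with a generic optimizer instead of the full ARD gradient derivation. The sketch below reuses kernel, x_train, t_train, and beta from the block above and adds scipy as a dependency not mentioned in the assignment.

```python
# A hedged sketch of hyperparameter tuning by maximizing the log marginal
# likelihood ln p(t|theta); run in the same session as the previous sketch.
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta):
    theta = np.exp(log_theta)                 # log-parameterization keeps theta > 0
    N = len(x_train)
    C = kernel(x_train, x_train, theta) + np.eye(N) / beta
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, t_train)
    return 0.5 * (logdet + t_train @ alpha + N * np.log(2 * np.pi))

# Start from one of the fixed combinations (exact zeros replaced by 1e-3
# so that the log-parameterization is defined).
x0 = np.log([1.0, 16.0, 1e-3, 4.0])
res = minimize(neg_log_marginal_likelihood, x0, method="L-BFGS-B")
print("tuned theta:", np.exp(res.x))
```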

2 Support Vector Machine

The support vector machine (SVM) is a popular method for pattern classification. In this exercise, you will implement an SVM for classification. Here, the Tibetan-MNIST dataset is given in x_train.csv and t_train.csv. Tibetan-MNIST is a dataset of Tibetan digit images. The input data include three categories: Tibetan 0, Tibetan 1, and Tibetan 2. Each example is a 28×28 grayscale image, associated with a label.

Data Description

  • x_train is a 300 × 784 matrix, where each row is the first two scaled principal values of a training image.
  • t_train is a 300 × 1 matrix, which records the classes of the training images; 0, 1, and 2 represent Tibetan 0, Tibetan 1, and Tibetan 2, respectively.

In the training procedure of the SVM, you need to optimize with respect to the Lagrange multipliers $a = \{a_n\}$. Here, we use Sequential Minimal Optimization (SMO) to solve the problem. For details, you can refer to the paper [John Platt, Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines, 1998]. Scikit-learn is a free machine learning library for Python that provides the sklearn.svm module. You are allowed to use the library to calculate the multipliers (coefficients), but not to use its prediction function directly.
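As a hedged illustration of the paragraph above, the sketch below fits sklearn.svm.SVC on one binary sub-problem only to read off the multipliers via its dual_coef_, support_vectors_, and intercept_ attributes; the CSV layout and the choice C = 1.0 are assumptions.

```python
# A minimal sketch of using sklearn.svm only to obtain the Lagrange
# multipliers; the CSV layout and C = 1.0 are assumptions.
import numpy as np
from sklearn.svm import SVC

X = np.loadtxt("x_train.csv", delimiter=",")
t = np.loadtxt("t_train.csv", delimiter=",").astype(int)

# One binary sub-problem (class 0 vs class 1) of a one-versus-one scheme.
mask = (t == 0) | (t == 1)
clf = SVC(kernel="linear", C=1.0).fit(X[mask], t[mask])

a_times_y = clf.dual_coef_.ravel()   # t_n * a_n for each support vector
sv = clf.support_vectors_            # the support vectors themselves
b = clf.intercept_[0]                # bias term
w = a_times_y @ sv                   # weight vector (valid for the linear kernel)
print("support-vector indices:", clf.support_)
```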

In this exercise, you will implement the SVM with two kinds of kernel functions:


  • Linear kernel:

$$k(x_i, x_j) = \phi(x_i)^{\top}\phi(x_j) = x_i^{\top} x_j$$

  • Polynomial (homogeneous) kernel of degree 2, for $x = [x_1, x_2]$:

$$k(x_i, x_j) = \bigl(x_i^{\top} x_j\bigr)^2, \qquad \phi(x) = \bigl(x_1^2,\ \sqrt{2}\,x_1 x_2,\ x_2^2\bigr)^{\top}$$

(A small numerical check of both kernels is sketched below.)
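The helper names below (linear_kernel, poly2_kernel, phi2) are illustrative only, not part of the assignment; the snippet checks that the homogeneous degree-2 kernel equals the inner product of the explicit feature maps.

```python
# Quick check: (xi^T xj)^2 == phi(xi)^T phi(xj) for 2-D inputs x = [x1, x2].
import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj                          # k(xi, xj) = xi^T xj

def poly2_kernel(xi, xj):
    return (xi @ xj) ** 2                   # k(xi, xj) = (xi^T xj)^2

def phi2(x):
    x1, x2 = x                              # explicit feature map for degree 2
    return np.array([x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

xi, xj = np.array([0.3, -1.2]), np.array([0.7, 0.5])
assert np.isclose(poly2_kernel(xi, xj), phi2(xi) @ phi2(xj))
```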

The SVM is a binary classifier, but the application here has three classes. To handle this problem, there are two decision approaches: one is one-versus-the-rest, and the other is one-versus-one.

  1. Analyze the difference between the two decision approaches (one-versus-the-rest and one-versus-one). Decide which one you want to use and explain why you choose this approach.
  2. Use the dataset to build an SVM with the linear kernel to do multi-class classification. Then plot the corresponding decision boundary and support vectors (a sketch of a one-versus-one pipeline follows this list).
  3. Repeat (2) with the polynomial kernel (degree = 2).
  4. Discuss the difference between (2) and (3).
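A hedged sketch of one possible one-versus-one pipeline for item (2): sklearn supplies only the multipliers, while the decision values $y(x) = \sum_n a_n t_n k(x_n, x) + b$ and the class voting are computed by hand. Plotting only the first two columns of x_train.csv as the 2-D features and C = 1.0 are assumptions.

```python
# One-versus-one multi-class SVM with a linear kernel; sklearn is used only
# for the multipliers, prediction and voting are done by hand.
import numpy as np
import matplotlib.pyplot as plt
from itertools import combinations
from sklearn.svm import SVC

X = np.loadtxt("x_train.csv", delimiter=",")[:, :2]   # first two features (assumption)
t = np.loadtxt("t_train.csv", delimiter=",").astype(int)

def linear_kernel(A, B):
    return A @ B.T

# One binary SVM per class pair; keep what manual prediction needs.
pairs, models = list(combinations([0, 1, 2], 2)), []
for ci, cj in pairs:
    mask = (t == ci) | (t == cj)
    clf = SVC(kernel="linear", C=1.0).fit(X[mask], t[mask])
    models.append((clf.dual_coef_.ravel(), clf.support_vectors_, clf.intercept_[0]))

def predict(Xq):
    votes = np.zeros((len(Xq), 3), dtype=int)
    for (ci, cj), (ay, sv, b) in zip(pairs, models):
        y = linear_kernel(Xq, sv) @ ay + b        # decision values of this pair
        votes[np.arange(len(Xq)), np.where(y > 0, cj, ci)] += 1
    return votes.argmax(axis=1)

# Decision regions on a grid, plus the training points and support vectors.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 300),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 300))
Z = predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=t, s=15)
for _, sv, _ in models:
    plt.scatter(sv[:, 0], sv[:, 1], facecolors="none", edgecolors="k", s=60)
plt.show()
```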

Hints

  • In this exercise, we strongly recommend using MATLAB to avoid the tedious preprocessing required in Python.
  • If you use other languages, you are allowed to use a toolbox only for the multipliers (coefficients).
  • You need to implement the whole algorithm yourself except for the multipliers (coefficients).

3 Gaussian Mixture Model

In this exercise, you will implement a Gaussian mixture model (GMM) and apply it to image segmentation. First, use the K-means algorithm to find K central pixels. Second, use the expectation-maximization (EM) algorithm (please refer to textbook pp. 438-439) to optimize the parameters of the model. The input data is hw3_3.jpeg. According to the maximum likelihood, you can decide the color $k$, $k \in \{1, \dots, K\}$, of each pixel $x_n$ of the output image.

  1. Please build a K-means model by minimizing

$$J = \sum_{n=1}^{N}\sum_{k=1}^{K} r_{nk}\,\|x_n - \mu_k\|^2$$

and show the table of the estimated $\mu_k$. (A minimal K-means sketch is given after this list.)

  2. Use the $\mu_k$ calculated by the K-means model as means, and calculate the corresponding variances $\sigma_k^2$ and mixing coefficients $\pi_k$ for initialization of the GMM.

Optimize the model by maximizing the log-likelihood function $\log p(x \mid \pi, \mu, \sigma^2)$ through the EM algorithm. Plot the log-likelihood curve of the GMM. (Please terminate the EM algorithm when the number of iterations reaches 100.) A sketch of the EM updates is given after this list.

  3. Repeat steps (1) and (2) for K = 3, 5, 7, and 10. Please show the resulting images in your report. Some examples are given below.
  4. Please show the graph of the log likelihood at different iterations for K = 3, 5, 7, and 10. Examples are shown below. (This graph is only for your reference.)
  5. Discuss what the crucial factors are that affect the output image and explain why.
  6. The image shown below is your input image, taken by the TA, and it may only be used for Homework 3.
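A minimal K-means sketch for item 1, treating each pixel of hw3_3.jpeg as an RGB vector $x_n$ scaled to [0, 1]; reading the image with Pillow and running a fixed 50 iterations are assumptions.

```python
# K-means on the image pixels, minimizing the distortion J above.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("hw3_3.jpeg"), dtype=float) / 255.0
X = img.reshape(-1, 3)                        # N x 3 pixel vectors x_n

def kmeans(X, K, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), K, replace=False)]              # initial centers mu_k
    for _ in range(n_iter):
        d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # squared distances, N x K
        r = d.argmin(axis=1)                                  # hard assignments r_nk
        J = d[np.arange(len(X)), r].sum()                     # distortion measure J
        mu = np.array([X[r == k].mean(axis=0) if np.any(r == k) else mu[k]
                       for k in range(K)])
    return mu, r, J

mu, r, J = kmeans(X, K=3)                     # K = 3 as an example; repeat for 5, 7, 10
print("table of estimated mu_k:\n", mu)
```

For items 2-4, a hedged EM sketch for a GMM with isotropic (scalar) variances $\sigma_k^2$, initialized from the K-means result above and run in the same session; the isotropic form follows the $\sigma_k^2$ notation of the assignment, and the 1e-6 variance floor is an added numerical safeguard.

```python
# EM for an isotropic GMM, with the log-likelihood recorded per iteration.
import numpy as np
import matplotlib.pyplot as plt

def gmm_em(X, mu, r, n_iter=100):
    N, D = X.shape
    K = len(mu)
    pi = np.array([(r == k).mean() for k in range(K)])            # mixing coefficients pi_k
    var = np.array([((X[r == k] - mu[k]) ** 2).sum(-1).mean() / D + 1e-6
                    for k in range(K)])                           # initial sigma_k^2
    log_lik = []
    for _ in range(n_iter):
        # E-step: responsibilities under isotropic Gaussians (log-sum-exp for stability)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)      # N x K
        log_p = np.log(pi) - 0.5 * D * np.log(2 * np.pi * var) - 0.5 * d2 / var
        m = log_p.max(axis=1, keepdims=True)
        log_norm = m.ravel() + np.log(np.exp(log_p - m).sum(axis=1))
        gamma = np.exp(log_p - log_norm[:, None])
        log_lik.append(log_norm.sum())                            # log p(X | pi, mu, sigma^2)
        # M-step: update means, variances and mixing coefficients
        Nk = gamma.sum(axis=0)
        mu = (gamma.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (gamma * d2).sum(axis=0) / (D * Nk) + 1e-6
        pi = Nk / N
    return mu, var, pi, gamma, log_lik

mu_gmm, var, pi, gamma, ll = gmm_em(X, mu.copy(), r)
plt.plot(ll)                                   # log-likelihood curve over 100 iterations
plt.xlabel("iteration"); plt.ylabel("log likelihood"); plt.show()

# Segmented image: color each pixel with the mean of its most responsible component.
seg = mu_gmm[gamma.argmax(axis=1)].reshape(img.shape)
plt.imshow(seg); plt.axis("off"); plt.show()
```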
