- In this question we consider the hard margin SVM formulation as developed in class and the textbook. Recall that we stated that for this type of optimization problem there is no duality gap; that is, the primal and dual objective functions have the same value at their optimal solutions. Use this fact and other facts developed in class to show that the maximum margin can be calculated from the dual solution. More specifically, show that the maximum margin $\gamma$ satisfies $1/\gamma^2 = \sum_n a_n$, where the $a_n$ are the optimal values of the dual variables.
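As a reminder (a hedged sketch only, assuming the class followed the standard Bishop-style formulation and notation, which is not quoted verbatim in this question), the two optimal objective values being equated are:

```latex
% Hard-margin SVM in standard Bishop-style notation (assumed here, not quoted
% from the course notes); the a_n are the Lagrange multipliers of the dual.
\begin{align}
  \text{primal:}\quad & \min_{w,\,b}\ \tfrac{1}{2}\|w\|^{2}
    \quad\text{s.t.}\quad t_n\bigl(w^{\mathsf T}\phi(x_n)+b\bigr)\ge 1 \ \text{ for all } n,\\[2pt]
  \text{dual:}\quad & \max_{a}\ \widetilde L(a)=\sum_n a_n
    -\tfrac{1}{2}\sum_n\sum_m a_n a_m t_n t_m\, k(x_n,x_m)
    \quad\text{s.t.}\quad a_n\ge 0,\ \sum_n a_n t_n=0.
\end{align}
% Equating the two optimal values (no duality gap) and using
% w = \sum_n a_n t_n \phi(x_n) relates the margin 1/||w|| to \sum_n a_n.
```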
- The soft margin SVM formulation developed in class called for minimizing $C\sum_i \xi_i + \frac{1}{2}\|v\|^2$ subject to the constraints (for all $i$) $1 - \xi_i - t_i\bigl(v^{\mathsf T}\phi(x_i) + v_0\bigr) \le 0$ and $\xi_i \ge 0$.
Consider the alternative formulation QSVM that uses a quadratic penalty on $\xi_i$: minimize $C\sum_i \xi_i^2 + \frac{1}{2}\|v\|^2$ subject to the constraints (for all $i$) $1 - \xi_i - t_i\bigl(v^{\mathsf T}\phi(x_i) + v_0\bigr) \le 0$.
Note that we do not need the constraint $\xi_i \ge 0$ in this formulation, because $\xi_i < 0$ increases the objective function and makes the constraints harder to satisfy. Develop the dual formulation of QSVM: that is, (1) compute the dual objective function, (2) state the dual optimization problem, and (3) argue that QSVM is also a kernel method.
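A possible starting point for part (1), sketched under the assumption that the objective is the quadratic-penalty form written above (the $a_i$ below are the Lagrange multipliers for the margin constraints):

```latex
% Lagrangian for QSVM, a sketch under the assumed objective C*sum_i xi_i^2 + (1/2)||v||^2;
% the a_i >= 0 are the multipliers for the margin constraints.
\begin{align}
  L(v, v_0, \xi, a)
    = \tfrac{1}{2}\|v\|^{2} + C\sum_i \xi_i^{2}
      + \sum_i a_i\Bigl(1-\xi_i-t_i\bigl(v^{\mathsf T}\phi(x_i)+v_0\bigr)\Bigr),
  \qquad a_i \ge 0 .
\end{align}
% Setting the derivatives w.r.t. v, v_0, and xi_i to zero and substituting back
% gives a dual that depends on the data only through k(x_i, x_j) = phi(x_i)^T phi(x_j),
% which is the key observation for part (3).
```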
- In this problem we show that many machine learning problems have solutions that can be expressed through kernels. Consider the standard formulation where $a_i = w^{\mathsf T}\phi(x_i)$ and where we have a loss function $\ell(a_i, t_i)$. For example, we might use the square loss $\ell(a_i, t_i) = (t_i - a_i)^2$, or the classification log loss $\ell(a_i, t_i) = -\bigl(t_i \ln y_i + (1 - t_i)\ln(1 - y_i)\bigr)$ where $y_i = \sigma(a_i)$, etc. But for this question we consider any loss function. In addition, consider any regularization function $R(b)$ that is monotonically increasing, where $b$ is a scalar.
Now consider a machine learning objective of the form $\sum_i \ell(a_i, t_i) + R(w^{\mathsf T} w)$.
Show that the optimal solution can be expressed as $w = \sum_i \alpha_i \phi(x_i)$; that is, it is a weighted sum of the (mapped) training examples. Note that this shows that predictions on a test example $z$ can be computed as $w^{\mathsf T}\phi(z) = \sum_i \alpha_i K(x_i, z)$.
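A minimal numerical sketch of the claim (not part of the question) for one concrete instance: the square loss with $R(b) = \lambda b$, i.e. ridge regression with the identity feature map. The names `X`, `t`, and `lam` below are hypothetical example data introduced only for the illustration.

```python
# Minimal illustration for the assumed instance: square loss + R(b) = lam*b
# (ridge regression) with phi(x) = x.
import numpy as np

rng = np.random.default_rng(0)
N, D = 20, 5
X = rng.normal(size=(N, D))        # row i is phi(x_i)
t = rng.normal(size=N)
lam = 0.5

# Primal optimum of sum_i (t_i - w^T x_i)^2 + lam * w^T w
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ t)

# Dual form: w = sum_i alpha_i * phi(x_i) with alpha = (K + lam*I)^{-1} t, K = X X^T
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(N), t)
w_dual = X.T @ alpha               # weighted sum of training examples

print(np.allclose(w_primal, w_dual))             # True: optimum lies in span{phi(x_i)}

# Prediction on a test point z uses only kernel evaluations K(x_i, z) = x_i^T z
z = rng.normal(size=D)
print(np.isclose(w_primal @ z, alpha @ (X @ z)))  # True
```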
- In lecture we provided a general form for optimizing the hyperparameters of Gaussian Process Regression (GPR). In general there is no closed form and we must use some iterative gradient-based optimization. Consider GPR where one hyperparameter, $\theta_0$, gives the overall scale of the covariance. Specifically, assume the covariance matrix has the form $C(x_n, x_m) = \theta_0\, k(x_n, x_m)$, where $k(\cdot,\cdot)$ is any valid kernel. Show that the marginal likelihood can be optimized w.r.t. $\theta_0$ in closed form and develop the solution for $\theta_0$.
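For reference, a hedged reminder of the quantity to be maximized, written with the scaled covariance substituted (the standard GPR log marginal likelihood; here $K$ is the $N\times N$ matrix with entries $k(x_n, x_m)$, and any noise term is assumed to be absorbed into $k$):

```latex
% GPR log marginal likelihood with C = theta_0 * K, K_{nm} = k(x_n, x_m)
% (standard Gaussian log-density of the target vector t; noise assumed inside k).
\begin{align}
  \ln p(\mathbf t \mid \theta_0)
    &= -\tfrac{1}{2}\ln\bigl|\theta_0 K\bigr|
       -\tfrac{1}{2}\,\mathbf t^{\mathsf T}(\theta_0 K)^{-1}\mathbf t
       -\tfrac{N}{2}\ln(2\pi)\\
    &= -\tfrac{N}{2}\ln\theta_0 -\tfrac{1}{2}\ln|K|
       -\tfrac{1}{2\theta_0}\,\mathbf t^{\mathsf T}K^{-1}\mathbf t
       -\tfrac{N}{2}\ln(2\pi).
\end{align}
% Setting the derivative w.r.t. theta_0 to zero yields the closed-form value
% asked for in the question.
```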