Exercise 1 (10 points)
Show that the ℓ1 norm is a convex function (as is every norm), yet it is not strictly convex. In contrast, show that the squared Euclidean norm is a strictly convex function.
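A quick numeric illustration (not a proof) of the claim, sketched in NumPy: two distinct points on the same face of the ℓ1 ball violate strictness, while the squared Euclidean norm keeps the inequality strict.

```python
import numpy as np

# Two distinct points lying on the same "face" of the l1 ball's cone structure.
x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])
mid = 0.5 * (x + y)

l1 = lambda v: np.sum(np.abs(v))
# Convexity holds, but with equality for x != y, so it is not *strict*:
assert np.isclose(l1(mid), 0.5 * l1(x) + 0.5 * l1(y))

# The squared Euclidean norm is strictly convex: strict inequality for x != y.
sq = lambda v: np.dot(v, v)
assert sq(mid) < 0.5 * sq(x) + 0.5 * sq(y)
```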
Exercise 2 (10 points)
Let the observations resulting from an experiment be x_n, n = 1, 2, ..., N. Assume that they are independent and that they originate from a Gaussian PDF with mean μ and variance σ². Both the mean and the variance are unknown. Prove that the maximum likelihood (ML) estimates of these quantities are given by

μ̂_ML = (1/N) Σ_{n=1}^{N} x_n,    σ̂²_ML = (1/N) Σ_{n=1}^{N} (x_n − μ̂_ML)².
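A Monte Carlo sanity check of the Gaussian ML estimates (sample mean and the *biased* sample variance, dividing by N rather than N−1), sketched in NumPy with arbitrarily chosen true parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, N = 2.0, 1.5, 200_000          # assumed true parameters
x = rng.normal(mu, np.sqrt(sigma2), size=N)

# ML estimates for a Gaussian: sample mean and biased variance (divide by N).
mu_ml = x.mean()
sigma2_ml = np.mean((x - mu_ml) ** 2)

# With many samples both should be close to the true parameters.
assert abs(mu_ml - mu) < 0.05
assert abs(sigma2_ml - sigma2) < 0.05
```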
Exercise 3 (15 points)
For the regression model y = Xθ + η, where the noise vector η = [η_1, ..., η_N]^T comprises samples from a zero mean Gaussian random variable with covariance matrix Σ_η, show that the Fisher information matrix is given by

I(θ) = X^T Σ_η^{-1} X,

where X is the input matrix.
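One way to check the result numerically: for a Gaussian likelihood the Fisher information equals minus the Hessian of the log-likelihood with respect to θ, and since the log-likelihood is quadratic in θ a finite-difference Hessian recovers it essentially exactly. A sketch with randomly generated X, Σ_η, and y:

```python
import numpy as np

rng = np.random.default_rng(1)
N, l = 8, 3
X = rng.normal(size=(N, l))
A = rng.normal(size=(N, N))
Sigma = A @ A.T + N * np.eye(N)        # a valid (positive definite) covariance
Sigma_inv = np.linalg.inv(Sigma)
y = rng.normal(size=N)

def loglik(theta):
    r = y - X @ theta
    return -0.5 * r @ Sigma_inv @ r    # constant terms dropped

# Finite-difference Hessian of the log-likelihood w.r.t. theta.
h = 1e-4
theta0 = np.zeros(l)
H = np.zeros((l, l))
for i in range(l):
    for j in range(l):
        e_i, e_j = np.eye(l)[i], np.eye(l)[j]
        H[i, j] = (loglik(theta0 + h*e_i + h*e_j) - loglik(theta0 + h*e_i)
                   - loglik(theta0 + h*e_j) + loglik(theta0)) / h**2

# Fisher information = minus the Hessian = X^T Sigma^{-1} X.
assert np.allclose(-H, X.T @ Sigma_inv @ X, atol=1e-4)
```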
Exercise 4 (20 points) Consider the regression problem described in one of our labs. Read the same audio file, then add white Gaussian noise at an SNR of 15 dB and randomly hit 10% of the data samples with outliers (set the outlier values to 80% of the maximum value of the data samples).
- Find the reconstructed data samples obtained by support vector regression. Employ the Gaussian kernel with σ = 0.004 and set ε = 0.003 and C = 1. Plot the fitted curve of the reconstructed samples together with the data used for training.
- Repeat step (a) using C = 0.05, 0.1, 0.5, 5, 10.
- Repeat step (a) using different values of the Gaussian kernel parameter σ.
- Repeat step (a) using ε = 0.001, 0.002, 0.01, 0.05, 0.1.
- Comment on the results.
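A minimal sketch of step (a), with two stated assumptions: a synthetic sinc signal stands in for the audio file (which is not available here), and the Gaussian kernel is taken as exp(−‖u−v‖²/σ²) with σ² = 0.004, which maps to scikit-learn's `gamma = 1/0.004`; a different kernel parameterization would change `gamma`.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 400)
clean = np.sinc(4 * t)                 # stand-in for the audio samples

# White Gaussian noise at 15 dB SNR.
p_signal = np.mean(clean ** 2)
noise_std = np.sqrt(p_signal / 10 ** (15 / 10))
y = clean + rng.normal(0, noise_std, size=t.size)

# Hit 10% of the samples with outliers at 80% of the maximum value.
idx = rng.choice(t.size, size=t.size // 10, replace=False)
y[idx] = 0.8 * np.abs(y).max()

# SVR with Gaussian (RBF) kernel; gamma = 1/sigma^2 is an assumed mapping.
svr = SVR(kernel="rbf", gamma=1 / 0.004, C=1.0, epsilon=0.003)
y_fit = svr.fit(t.reshape(-1, 1), y).predict(t.reshape(-1, 1))
```

Plotting `t` against `y` (training data) and `y_fit` (fitted curve) then reproduces the requested figure; the ε-insensitive loss is what limits the pull of the outliers.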
Exercise 5 (15 points)
Show, using Lagrange multipliers, that the ℓ2 minimizer in equation (9.18) from the textbook accepts the closed form solution

θ̂ = X^T (X X^T)^{-1} y.
Now, show that for the system y = Xθ, with X ∈ R^{n×l} and n > l, the least squares solution is given by

θ̂ = (X^T X)^{-1} X^T y.
Exercise 6 (10 points)
Show that the null space of a full rank N × l matrix X is a subspace of dimensionality l − N, for N < l.
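The claim is the rank–nullity theorem specialized to a full-row-rank fat matrix, and it is easy to verify numerically: a random Gaussian N × l matrix has rank N almost surely, so its null space has dimension l − N. A sketch using the singular values:

```python
import numpy as np

rng = np.random.default_rng(3)
N, l = 30, 100                 # N < l
X = rng.normal(size=(N, l))    # full row rank with probability 1

# rank = number of nonzero singular values; dim null(X) = l - rank(X).
s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s.max()))
null_dim = l - rank

assert rank == N
assert null_dim == l - N       # here: 100 - 30 = 70
```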
Exercise 7 (20 points)
Generate in Python a sparse vector θ ∈ R^l, l = 100, with its first five components taking random values drawn from a normal distribution with mean zero and variance one and the rest being equal to zero. Build, also, a sensing matrix X with N = 30 rows having samples normally distributed, with mean zero and variance 1/N, in order to get 30 observations based on the linear regression model y = Xθ. Then perform the following tasks.
- Use a LASSO implementation to reconstruct θ from y and X.
- Repeat the experiment 500 times, with different realizations of X, in order to compute the probability of correct reconstruction (assume the reconstruction is exact when ||y − Xθ̂|| < 10^{-8}).
- Repeat the same experiment (500 times) with matrices of the form
, with probability
X(i,j) = 0, with probability 1
, with probability
for p equal to 1,9,25,36,64 (make sure that each row and each column of X has at least a nonzero component). Give an explanation why the probability of reconstruction falls as p increases (observe that both the sensing matrix and the unknown vector are sparse).