Problem 1
(When You Integrate Out..) Suppose x is a scalar random variable drawn from a univariate Gaussian p(x|σ²) = N(x|0, σ²). The variance itself is drawn from an exponential distribution: p(σ²|λ) = Exp(σ²|λ²/2), where λ > 0. Note that the exponential distribution is defined as Exp(x|λ) = λ exp(−λx). Derive the expression for the marginal distribution of x, i.e., p(x|λ) = ∫ p(x|σ²) p(σ²|λ) dσ², after integrating out σ². What does the marginal distribution p(x|λ) mean?
Plot both p(x|σ²) and p(x|λ) and include the plots in the writeup PDF itself. What difference do you see between the shapes of these two distributions? Note: You don't need to submit the code used to generate the plots; just the plots (appropriately labeled) are fine.
Hint: You will notice that ∫ p(x|σ²) p(σ²|λ) dσ² is a hard integral to compute. However, the solution does have a closed-form expression. One way to get the result is to compute the moment generating function (MGF)[1] of ∫ p(x|σ²) p(σ²|λ) dσ² (note that this is a p.d.f.) and compare the obtained MGF expression with the MGFs of various p.d.f.s given in the table on the following Wikipedia page: https://en.wikipedia.org/wiki/Moment-generating_function, and identify which p.d.f.'s MGF it matches. That will give you the form of the distribution p(x|λ). Specifically, name this distribution and identify its parameters.
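For the plots in this part, the marginal can also be evaluated numerically before (or after) you obtain the closed form. The sketch below is only illustrative and assumes arbitrary values λ = 1 and σ² = 1; it evaluates p(x|λ) by numerical integration over σ² and p(x|σ²) directly.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm
import matplotlib.pyplot as plt

lam, sigma2 = 1.0, 1.0   # assumed values, purely for illustration

def marginal_pdf(x, lam):
    # p(x|lam) = integral over s of N(x|0, s) * Exp(s | lam^2 / 2) ds
    rate = lam ** 2 / 2.0
    integrand = lambda s: norm.pdf(x, loc=0.0, scale=np.sqrt(s)) * rate * np.exp(-rate * s)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

xs = np.linspace(-5, 5, 201)
plt.plot(xs, norm.pdf(xs, 0.0, np.sqrt(sigma2)), label="p(x | sigma^2)")
plt.plot(xs, [marginal_pdf(x, lam) for x in xs], label="p(x | lambda)")
plt.legend(); plt.xlabel("x"); plt.show()
```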
Problem 2
(It Gets Better..) Recall that, for a Bayesian linear regression model with likelihood p(y|x, w) = N(y|w⊤x, β⁻¹) and prior p(w) = N(w|0, λ⁻¹I), the predictive posterior is
N(μ_N⊤x, σ_N²(x)), where μ_N and Σ_N are the mean and covariance matrix of the Gaussian posterior on w, s.t. μ_N = βΣ_N X⊤y and Σ_N = (λI + βX⊤X)⁻¹ (with X and y denoting the stacked training inputs and outputs), and σ_N²(x) = β⁻¹ + x⊤Σ_N x.
Here, we have used the subscript N to denote that the model is learned using N training examples. As the training set size N increases, what happens to the variance of the predictive posterior? Does it increase, decrease, or remain the same? You must also prove your answer formally. You may find the following matrix identity useful:
(M + vv⊤)⁻¹ = M⁻¹ − (M⁻¹v)(v⊤M⁻¹) / (1 + v⊤M⁻¹v),
where M denotes a square matrix and v denotes a column vector.
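As a quick sanity check (not a substitute for the required proof), the identity is easy to verify numerically; the sketch below uses an arbitrary random M and v.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
A = rng.standard_normal((D, D))
M = A @ A.T + D * np.eye(D)      # an arbitrary well-conditioned square matrix
v = rng.standard_normal((D, 1))  # an arbitrary column vector

lhs = np.linalg.inv(M + v @ v.T)
Minv = np.linalg.inv(M)
rhs = Minv - (Minv @ v) @ (v.T @ Minv) / (1.0 + v.T @ Minv @ v)
print(np.allclose(lhs, rhs))     # should print True
```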
Problem 3
(Distribution of Empirical Mean of Gaussian Observations) Consider N scalar-valued observations x_1, …, x_N drawn i.i.d. from N(μ, σ²). Consider their empirical mean x̄ = (1/N) Σ_n x_n. Representing the empirical mean as a linear transformation of a random variable, derive the probability distribution of x̄. Briefly explain why the result makes intuitive sense.
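If you want to sanity-check your derived distribution empirically, a simulation like the following (with arbitrary assumed values of μ, σ, and N) can be compared against your analytical mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N, trials = 2.0, 1.5, 25, 100_000   # arbitrary assumed values
# Draw `trials` datasets of size N and compute the empirical mean of each.
xbar = rng.normal(mu, sigma, size=(trials, N)).mean(axis=1)
print(xbar.mean(), xbar.var())   # compare with the mean/variance you derive
```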
Problem 4
(Benefits of Probabilistic Joint Modeling-1) Consider a dataset of test scores of students from M schools in a district: {x_n^(m)}, n = 1, …, N_m, m = 1, …, M, where N_m denotes the number of students in school m.
Assume the scores of students in school m are drawn independently as x_n^(m) ~ N(μ_m, σ²), where the Gaussian's mean μ_m is unknown and the variance σ² is the same for all schools and known (for simplicity). Assume the means μ_1, …, μ_M of the M Gaussians to also be Gaussian distributed, μ_m ~ N(μ_0, σ_0²), where μ_0 and σ_0² are hyperparameters.
- Assume the hyperparameters μ_0 and σ_0² to be known. Derive the posterior distribution of μ_m and write down the mean and variance of this posterior distribution. Note: While you can derive it the usual way, the derivation will be much more compact if you use the result of Problem 2 and think of each school's data as a single observation (the empirical mean of its observations) having the distribution derived in Problem 3.
- Assume the hyperparameter μ_0 to be unknown (but still keep σ_0² fixed for simplicity). Derive the marginal likelihood and use MLE-II to estimate μ_0 (note again that σ² and σ_0² are known here). Note: Looking at the form/expression of the marginal likelihood, if the MLE-II result looks obvious to you, you may skip the derivation and directly write the result.
- Consider using this MLE-II estimate of μ_0 from part (2) in the posteriors of each μ_m you derived in part (1). Do you see any benefit in using the MLE-II estimate of μ_0 as opposed to using a known value of μ_0?
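If it helps to make the hierarchical model above concrete, the following sketch samples synthetic data from it (all numerical values are hypothetical); such data can be used to numerically check the posteriors and the MLE-II estimate you derive.

```python
import numpy as np

rng = np.random.default_rng(0)
M, mu0, sigma0, sigma = 5, 70.0, 5.0, 10.0          # hypothetical hyperparameters
Nm = rng.integers(20, 50, size=M)                   # students per school
mu = rng.normal(mu0, sigma0, size=M)                # per-school means mu_m
scores = [rng.normal(mu[m], sigma, size=Nm[m]) for m in range(M)]  # x_n^(m)
print([round(s.mean(), 2) for s in scores])         # empirical mean per school
```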
Problem 5
(Benefits of Probabilistic Joint Modeling-2) Suppose we have student data from M schools, where N_m denotes the number of students in school m. The data for each school m = 1, …, M is in the following form: for student n in school m, there is a response variable y_n^(m) (e.g., score in some exam) and a feature vector x_n^(m) ∈ R^D.
Assume a linear regression model for these scores, i.e., y_n^(m) ~ N(w_m⊤ x_n^(m), β⁻¹), where w_m ∈ R^D denotes the regression weight vector for school m, and β is known. Note that this can also be denoted as p(y^(m)|X^(m), w_m) = N(y^(m)|X^(m) w_m, β⁻¹ I_{N_m}), where y^(m) is N_m × 1 and X^(m) is N_m × D. Assume a prior p(w_m) = N(w_m|w_0, λ⁻¹ I_D), with λ assumed to be known and w_0 to be unknown.
Derive the expression for the log of the MLE-II objective for estimating w_0. You do not need to optimize this objective w.r.t. w_0; just writing down the final expression of the objective function is fine. Also state the benefit of this approach, as opposed to fixing w_0 to some value, if our goal is to learn the school-specific weight vectors w_1, …, w_M. (Feel free to make direct use of properties of Gaussian distributions.)
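Whatever form your MLE-II objective takes, its building block is a multivariate Gaussian log-density; a generic way to evaluate one numerically (with placeholder mean and covariance, not the ones you are asked to derive) is sketched below.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
Nm = 6
y_m = rng.standard_normal(Nm)                 # placeholder responses for one school
mean = np.zeros(Nm)                           # placeholder mean (assumption)
cov = np.eye(Nm) + 0.5 * np.ones((Nm, Nm))    # placeholder covariance (assumption)
print(multivariate_normal(mean=mean, cov=cov).logpdf(y_m))
```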
Problem 6 : Programming Assignment
(Bayesian Linear Regression) Consider a toy data set consisting of 10 training examples, with each input x_n as well as the output y_n being a scalar. The data is given below.
x = [2.23,1.30,0.42,0.30,0.33,0.52,0.87,1.80,2.74,3.62]; y = [1.01,0.69,0.66,1.34,1.75,0.98,0.25,1.57,1.65,1.51]
We would like to learn a Bayesian linear regression model using this data, assuming a Gaussian likelihood model for the outputs with fixed noise precision β = 4. However, instead of working with the original scalar-valued inputs, we will map each input x using a degree-k polynomial as φ_k(x) = [1, x, x², …, x^k]⊤. Note that, when using the mapping φ_k, each original input becomes (k + 1)-dimensional. Denote the entire set of mapped inputs as φ_k(X), a 10 × (k + 1) matrix. Consider k = 1, 2, 3, and 4, and learn a Bayesian linear regression model for each case. Assume the following prior on the regression weights: p(w) = N(w|0, I) with w ∈ R^(k+1).
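A minimal setup sketch (not a full solution) is shown below: it builds the degree-k design matrix and computes the Gaussian posterior over w via the standard conjugate formulas, assuming β = 4 and the prior N(w|0, I) stated above, with k = 3 as an example. The plotting, predictive, and model-comparison parts are left to you.

```python
import numpy as np

x = np.array([2.23, 1.30, 0.42, 0.30, 0.33, 0.52, 0.87, 1.80, 2.74, 3.62])
y = np.array([1.01, 0.69, 0.66, 1.34, 1.75, 0.98, 0.25, 1.57, 1.65, 1.51])
beta, k = 4.0, 3                                  # noise precision and (example) degree

Phi = np.vander(x, k + 1, increasing=True)        # 10 x (k+1) matrix [1, x, ..., x^k]
S_inv = np.eye(k + 1) + beta * Phi.T @ Phi        # posterior precision (prior cov = I)
S = np.linalg.inv(S_inv)                          # posterior covariance of w
m = beta * S @ Phi.T @ y                          # posterior mean of w
```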
- For each k, compute the posterior of w and show a plot with 10 random functions drawn from the inferred posterior (show the functions for the input range x ∈ [−4, 4]). Also show the original training examples on the same plot to illustrate how well the functions fit the training data.
- For each k, compute and plot the mean of the posterior predictive p(y|φ_k(x), φ_k(X), y, β) on the interval x ∈ [−4, 4]. On the same plot, also show the predictive posterior mean plus and minus two times the predictive posterior standard deviation (see the evaluation sketch after this list).
- Compute the log marginal likelihood log p(y|φ_k(X), β) of the training data for each of the 4 mappings k = 1, 2, 3, 4. Which of these 4 models seems to explain the data the best?
- Using the MAP estimate w_MAP, compute the log likelihood log p(y|w_MAP, φ_k(X), β) for each k. Which of these 4 models has the highest log likelihood? Is your answer the same as that based on the log marginal likelihood (part 3)? Which of these two criteria (highest log likelihood or highest log marginal likelihood) do you think is more reasonable for selecting the best model, and why?
- For your best model, suppose you could include an additional training input x_0 (along with its output y_0) to improve your learned model using this additional example. Where in the region x ∈ [−4, 4] would you like the chosen x_0 to be? Explain your answer briefly.
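For the predictive-mean/variance plot referenced above, one possible evaluation sketch (repeating the setup so it runs on its own, with the same assumed settings as before) is:

```python
import numpy as np

x = np.array([2.23, 1.30, 0.42, 0.30, 0.33, 0.52, 0.87, 1.80, 2.74, 3.62])
y = np.array([1.01, 0.69, 0.66, 1.34, 1.75, 0.98, 0.25, 1.57, 1.65, 1.51])
beta, k = 4.0, 3
Phi = np.vander(x, k + 1, increasing=True)
S = np.linalg.inv(np.eye(k + 1) + beta * Phi.T @ Phi)   # posterior covariance
m = beta * S @ Phi.T @ y                                # posterior mean

xs = np.linspace(-4, 4, 200)                            # plotting grid
Phis = np.vander(xs, k + 1, increasing=True)
pred_mean = Phis @ m                                    # predictive mean
pred_var = 1.0 / beta + np.einsum('ij,jk,ik->i', Phis, S, Phis)
pred_std = np.sqrt(pred_var)
# Plot pred_mean and pred_mean +/- 2 * pred_std against xs (e.g., with matplotlib).
```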
Your implementation should be in a Python notebook (and should not use an existing implementation of Bayesian linear regression from any library).
Submit the plots as well as the code in a single zip file (named yourrollnumber.zip).
[1] The MGF of a p.d.f. p(x) is defined as M(t) = E[e^(tx)] = ∫ e^(tx) p(x) dx.