Problem 2: Bayesian Optimization
In this section, you will implement Bayesian Optimization using Gaussian Processes and compare the average regret over time for three acquisition functions: GP-greedy, GP-UCB, and GP-Thompson.
Your solution for this section will consist of a single plot, with each curve in that plot described in a subsection below. This lets you earn partial credit if you are not able to implement all three acquisition functions.
We provide you with the squared exponential kernel (also known as the RBF kernel) you will use for this assignment, defined as
$$k(x, x') = \sigma_f^2 \exp\!\left(-\frac{(x - x')^2}{2 l^2}\right).$$
In the function templates we provide, we describe the parameters:
- the parameter l is the length scale of the RBF kernel.
- sigma_f is the vertical variation parameter σ_f of the RBF kernel.
- sigma_y is the standard deviation of the intrinsic noise in y.
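Although the kernel is provided in the template, it can help to see the formula above as code. Below is a minimal numpy sketch, assuming the inputs are 1-D arrays of scalar x-values; the function name rbf_kernel and its defaults are illustrative, not necessarily the template's.

```python
import numpy as np

def rbf_kernel(X1, X2, l=1.0, sigma_f=1.0):
    """Squared exponential (RBF) kernel matrix between two sets of 1-D points.

    X1 has shape (n,), X2 has shape (m,); the result has shape (n, m) with
    entries sigma_f**2 * exp(-(x - x')**2 / (2 * l**2)).
    """
    sq_dists = (X1.reshape(-1, 1) - X2.reshape(1, -1)) ** 2
    return sigma_f ** 2 * np.exp(-sq_dists / (2.0 * l ** 2))
```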
Feel free to experiment with values for these, but do not change the default GP arguments (l, sigma_f, sigma_y) for the final plot. Use the following GP update equations for the posterior:
$$K_y = K + \sigma_y^2 I_N$$
$$p(f_* \mid X_*, X, y) = \mathcal{N}(f_* \mid \mu_*, \Sigma_*)$$
$$\mu_* = K_*^\top K_y^{-1} y$$
$$\Sigma_* = K_{**} - K_*^\top K_y^{-1} K_*$$
Review slide 193 of https://cmudeeprl.github.io/Spring202010403website/assets/lectures/s20_lecture3.pdf for more details. These equations differ from the ones in the slides because here we omit the offset μ in μ_*, since we assume a zero-mean prior.
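If it helps to see the update equations above as code, here is a hedged numpy sketch of the zero-mean posterior over a set of query points. It reuses the rbf_kernel sketch from earlier; the function name gp_posterior and its signature are illustrative, and your template may organize this differently (e.g., using a Cholesky solve instead of an explicit inverse).

```python
def gp_posterior(X_query, X_train, y_train, l=1.0, sigma_f=1.0, sigma_y=0.1):
    """Posterior mean and covariance of a zero-mean GP at X_query, given (X_train, y_train)."""
    K = rbf_kernel(X_train, X_train, l, sigma_f)      # K      : (N, N)
    K_s = rbf_kernel(X_train, X_query, l, sigma_f)    # K_*    : (N, M)
    K_ss = rbf_kernel(X_query, X_query, l, sigma_f)   # K_**   : (M, M)
    K_y = K + sigma_y ** 2 * np.eye(len(X_train))     # K_y = K + sigma_y^2 I_N
    K_y_inv = np.linalg.inv(K_y)
    mu = K_s.T @ K_y_inv @ y_train                    # mu_*    = K_*^T K_y^{-1} y
    cov = K_ss - K_s.T @ K_y_inv @ K_s                # Sigma_* = K_** - K_*^T K_y^{-1} K_*
    return mu, cov
```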
For all questions in this section, you only need to take into account the domain defined by the python constants [MIN_X, MAX_X]. The terms posterior and posterior predictive can be considered synonyms for the purposes of this question.
- The first acquisition function you will implement is the simplest: GP-greedy. This function takes a training dataset, finds the maximum mean value of the posterior distribution, and outputs the x-value at which that maximum is attained. Since the GPs we are dealing with here are 1-dimensional, we recommend using grid search as an effective way to estimate the maximizer.
More specifically, to do grid search on a function f, construct a grid of uniformly spaced points using numpy's linspace function, evaluate f at all such points, and return the grid point with the maximum value. We recommend you use at least 100 such grid points when finding the maximum (see the sketch after this list).
- Next, implement the GP-UCB acquisition function described in class. This function is similar to the greedy one, but you will also need to take into account the standard deviation of each point you evaluate in the grid search. Use the following optimization process to choose the next x_t to evaluate:
$$x_t = \arg\max_x \; \mu(x) + 1.96\,\sigma(x)$$
where μ(x) is the GP's posterior mean at x and σ(x) is the posterior standard deviation at x. Notice that we use the constant 1.96 instead of the time-dependent coefficient β_t^{1/2} described in class. While having a time-varying coefficient is important for some applications, we choose a fixed constant here to make convergence faster (again, see the sketch after this list).
- Finally, implement Thompson sampling using the GP. This involves sampling a function from the GP and then taking the argmax of this sampled function.
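To make the three bullets above concrete, here is a minimal, hedged sketch of all three acquisition functions on a uniform grid. It assumes the rbf_kernel and gp_posterior helpers sketched earlier and the provided MIN_X/MAX_X constants; the function names, grid size, and jitter term are illustrative choices, not the template's API.

```python
def gp_greedy(X_train, y_train, n_grid=100):
    """GP-greedy: return the grid point with the highest posterior mean."""
    grid = np.linspace(MIN_X, MAX_X, n_grid)
    mu, _ = gp_posterior(grid, X_train, y_train)
    return grid[np.argmax(mu)]

def gp_ucb(X_train, y_train, n_grid=100, beta=1.96):
    """GP-UCB: return the grid point maximizing mu(x) + 1.96 * sigma(x)."""
    grid = np.linspace(MIN_X, MAX_X, n_grid)
    mu, cov = gp_posterior(grid, X_train, y_train)
    sigma = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # clip guards against tiny negative variances
    return grid[np.argmax(mu + beta * sigma)]

def gp_thompson(X_train, y_train, n_grid=100):
    """GP-Thompson: sample one function from the posterior on the grid and return its maximizer."""
    grid = np.linspace(MIN_X, MAX_X, n_grid)
    mu, cov = gp_posterior(grid, X_train, y_train)
    jitter = 1e-8 * np.eye(n_grid)  # stabilizes sampling from a near-singular covariance
    f_sample = np.random.multivariate_normal(mu, cov + jitter)
    return grid[np.argmax(f_sample)]
```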
Once you have completed the subsections, plot a graph of the regrets of the acquisition functions you implemented using the provided function bayes_opt(n_trials=200, n_steps=50). This might take a while, so try to debug with a smaller n_trials. Finally, include a short description of the regret curve of each acquisition function and why you believe the regret behaves the way it does for that function (does it reach exactly zero regret? If it doesn't, why not? etc.).
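When describing the curves, it may help to keep the usual definition of instantaneous regret in mind (the provided bayes_opt routine presumably tracks something equivalent, but check its implementation):
$$r_t = f(x^*) - f(x_t), \qquad x^* = \arg\max_{x \in [\mathrm{MIN\_X},\, \mathrm{MAX\_X}]} f(x),$$
with the plotted value at step t being this gap averaged over the independent trials.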
Problem 3: CMA-ES
For this problem, you will be working with the Cartpole (CartPole-v0) environment from OpenAI gym. Please take a look at the implementation to learn more about the state and action spaces: https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py.
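Whatever optimizer you use, each candidate parameter vector ultimately gets scored by the return of one or more episodes in this environment. Below is a hedged sketch of such a fitness function for a simple linear policy; the policy parameterization, the function name evaluate, and the classic gym step API (four return values) are assumptions, and the network in your notebook will likely differ.

```python
import gym
import numpy as np

def evaluate(weights, n_episodes=1):
    """Average episode return of a linear policy on CartPole-v0.

    weights is a flat array of length 4 (one weight per state dimension); the
    action is 1 when the weighted sum of the state is positive, else 0.
    """
    env = gym.make("CartPole-v0")
    total = 0.0
    for _ in range(n_episodes):
        state = env.reset()
        done = False
        while not done:
            action = int(np.dot(weights, state) > 0)
            state, reward, done, _ = env.step(action)
            total += reward
    env.close()
    return total / n_episodes
```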
For Cartpole, we have continuous states, and hence we cannot use the policy-learning methods for finite MDPs. In class, we learned about the covariance matrix adaptation evolution strategy (CMA-ES), a black-box, gradient-free optimization method. In this problem, you will implement CMA-ES to find a policy for the Cartpole environment. Template code for CMA-ES is provided in the Colab notebook. Start by filling in the missing pieces in the CMAES class. For the CMA-ES update equations, follow the hints mentioned in the Colab notebook comments.
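As a rough mental model of the loop you will be completing, the sketch below runs the sample → evaluate → select-elites → refit-mean-and-covariance cycle that CMA-ES is built on. It deliberately omits the step-size control and evolution-path terms of full CMA-ES, so treat it as intuition for the structure rather than the notebook's update rule; all names and constants here are illustrative.

```python
import numpy as np

def simple_es(evaluate, dim, pop_size=50, n_elite=10, n_iters=200):
    """Simplified evolution strategy: refit a Gaussian search distribution to elite samples."""
    mean = np.zeros(dim)
    cov = np.eye(dim)
    best_rewards = []
    for _ in range(n_iters):
        samples = np.random.multivariate_normal(mean, cov, size=pop_size)
        rewards = np.array([evaluate(w) for w in samples])
        elite = samples[np.argsort(rewards)[-n_elite:]]         # keep the top-n_elite candidates
        mean = elite.mean(axis=0)                               # new search mean
        cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(dim)  # refit covariance; jitter keeps it PSD
        best_rewards.append(rewards.max())
        if best_rewards[-1] >= 200:                             # CartPole-v0's maximum return
            break
    return mean, best_rewards
```

A call like simple_es(evaluate, dim=4), using the evaluate sketch above, is one way to sanity-check your fitness function before wiring in the full CMA-ES updates.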
- Run CMA-ES on Cartpole for 200 iterations (or until it reaches a reward of 200), using the provided hyperparameters. Plot the best point reward and average point reward throughout training.
- Run your CMA-ES algorithm with population sizes of 20, 50, and 100. For each population size, run CMA-ES for 200 iterations (or until it reaches a reward of 200) and plot the best point reward at each iteration. In another plot, show the best point reward as a function of the number of weight vectors evaluated (# of iterations × population size).
- (Optional) We provided most of the hyperparameters to you for this problem. However, in practice, significant time is spent tuning hyperparameters. Play around with the dimensionality of your network as well as other parameters of CMA-ES to get a feel for how the algorithm behaves.