Machine Learning 1 TU Berlin, WiSe 2020/21
Error Backpropagation
In this homework, our goal is to test two approaches to implementing backpropagation in neural networks. The neural network we consider is depicted below:
Exercise 1: Implementing backpropagation (20 P)
The following code loads the data and current parameters, applies the neural network forward pass, and computes the error. Pre-activations and activations at each layer are stored in separate variables so that they can be reused for the backward pass.
In [1]: import numpy,utils
# 1. Get the data and parameters
X,T = utils.getdata()
W,B = utils.getparams()
# 2. Run the forward pass
Z1 = X.dot(W[0])+B[0]
A1 = numpy.maximum(0,Z1)
Z2 = A1.dot(W[1])+B[1]
A2 = numpy.maximum(0,Z2)
Z3 = A2.dot(W[2])+B[2]
A3 = numpy.maximum(0,Z3)
Y = A3.dot(W[3])+B[3]
# 3. Compute the error
err = ((Y-T)**2).mean()
Here, you are asked to implement the backward pass and obtain the gradient of the error with respect to the weight and bias parameters.
Task:
Write code that computes the gradient (and format it in the same way as the parameters themselves, i.e. as lists of arrays).
In [2]: # TODO: Replace by your code
import solution
DW,DB = solution.exercise1(W,B,X,Z1,A1,Z2,A2,Z3,A3,Y,T)
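For orientation, one possible hand-written backward pass is sketched below. It is only a sketch, assuming the squared-error loss err = ((Y-T)**2).mean() and the ReLU activations used in the forward pass above; the gradients are collected in lists DW and DB as required.

# Sketch of a manual backward pass (reuses the variables of the cell above)
# Gradient of err = ((Y-T)**2).mean() with respect to the network output Y
DY = 2*(Y-T)/T.size

# Output layer (linear)
DW3 = A3.T.dot(DY); DB3 = DY.sum(axis=0); DA3 = DY.dot(W[3].T)

# Hidden layers (ReLU): multiply by the ReLU derivative at the pre-activations
DZ3 = DA3*(Z3 > 0)
DW2 = A2.T.dot(DZ3); DB2 = DZ3.sum(axis=0); DA2 = DZ3.dot(W[2].T)

DZ2 = DA2*(Z2 > 0)
DW1 = A1.T.dot(DZ2); DB1 = DZ2.sum(axis=0); DA1 = DZ2.dot(W[1].T)

DZ1 = DA1*(Z1 > 0)
DW0 = X.T.dot(DZ1); DB0 = DZ1.sum(axis=0)

# Format the gradients in the same way as the parameters themselves
DW = [DW0, DW1, DW2, DW3]
DB = [DB0, DB1, DB2, DB3]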
To test the implementation, we print the gradient w.r.t. the first parameter in the first layer.
In [3]: print(numpy.linalg.norm(DW[0][0,0]))
1.5422821523392451
Exercise 2: Using Automatic Differentiation (10 P)
Because manual computation of gradients can be tedious and error-prone, it is now more common to use libraries that perform automatic differentiation. In this exercise, we make use of the PyTorch library. You are asked to compute the error of the neural network within that framework, so that the error can then be differentiated automatically.
In [4]: import torch
import torch.nn as nn
# 1. Get the data and parameters
X,T = utils.getdata()
W,B = utils.getparams()
# 2. Convert to PyTorch objects
X = torch.Tensor(X)
T = torch.Tensor(T)
W = [nn.Parameter(torch.Tensor(w)) for w in W]
B = [nn.Parameter(torch.Tensor(b)) for b in B]
Task:
Write code that computes the forward pass and the error in a way that can be differentiated automatically by PyTorch.
In [5]: # TODO: Replace by your code
import solution
err = solution.exercise2(W,B,X,T)
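As a rough sketch, the differentiable forward pass can simply mirror the numpy code of Exercise 1, using only torch operations so that the computation graph needed for automatic differentiation is retained.

# Sketch of a differentiable forward pass (same architecture as in Exercise 1)
A1 = torch.relu(X.matmul(W[0]) + B[0])
A2 = torch.relu(A1.matmul(W[1]) + B[1])
A3 = torch.relu(A2.matmul(W[2]) + B[2])
Y = A3.matmul(W[3]) + B[3]
err = ((Y - T)**2).mean()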
Now that the error has been computed, we can apply automatic differentiation to obtain the gradient with respect to the parameters. As in the first exercise, we print the gradient of the first weight parameter of the first layer.
In [6]: err.backward()
print(numpy.linalg.norm(W[0].grad[0,0]))
1.5422822
Here, we can verify that the gradient values obtained by manual and automatic differentiation agree (up to floating-point precision).
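The agreement can also be checked for the full gradient arrays; a minimal sketch is given below, where the tolerance accounts for PyTorch's float32 arithmetic.

# Compare the manual gradients of Exercise 1 with the PyTorch gradients, layer by layer
for dw, w in zip(DW, W):
    print(numpy.allclose(dw, w.grad.numpy(), atol=1e-4))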