- Multi-Layer Perceptron
In class we discussed the derivation of the backpropagation algorithm for neural networks. In this problem, you will train a Multi-Layer Perceptron (MLP) neural network on the CIFAR10 data set. This is an open-ended implementation problem, but I expect that you implement the MLP with at least two different hidden layer sizes and use regularization. (A minimal training sketch appears after the notes below.)
- Report the classification error on the training and testing data for each configuration of the neural network. For example, you should report the results in the form of a table:
| Configuration | Classification Error (training) | Classification Error (testing) |
| --- | --- | --- |
| 50 HLN + no regularization | 0.234 | 0.253 |
| 50 HLN + L2 regularization | 0.192 | 0.203 |
| 250 HLN + no regularization | 0.134 | 0.153 |
| 250 HLN + L2 regularization | 0.092 | 0.013 |
List all the parameters that you are using (e.g., number of learning rounds, regularization parameters, learning rate, etc.).
- I would suggest using Google's TensorFlow, PyTorch, or Keras to implement the MLP; however, you are free to use whatever library you'd like. If that is the case, here is a link to the data.
- I recommend using a cloud platform such as Google Colab to run the code.
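
As a starting point, here is a minimal PyTorch sketch of one configuration (a single hidden layer with 50 units, optional L2 regularization via `weight_decay`). The hidden-layer size, learning rate, epoch count, and regularization strength below are placeholder values, not required settings; vary them to produce the table above.

```python
# Minimal sketch: one MLP configuration on CIFAR10 with PyTorch/torchvision.
# HIDDEN and WEIGHT_DECAY are the knobs the assignment asks you to vary.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

HIDDEN = 50          # try at least two sizes, e.g. 50 and 250
WEIGHT_DECAY = 1e-4  # 0.0 for "no regularization", > 0 for L2
EPOCHS = 20
LR = 1e-3

transform = T.Compose([T.ToTensor(),
                       T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10("./data", train=False, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256)

# Flatten the 3x32x32 images and pass them through one hidden layer.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(3 * 32 * 32, HIDDEN),
                      nn.ReLU(),
                      nn.Linear(HIDDEN, 10))
opt = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
loss_fn = nn.CrossEntropyLoss()

def error_rate(loader):
    """Fraction of misclassified examples in a data loader."""
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            wrong += (model(x).argmax(dim=1) != y).sum().item()
            total += y.numel()
    return wrong / total

for epoch in range(EPOCHS):
    model.train()
    for x, y in train_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

print("train error:", error_rate(train_loader))
print("test error:", error_rate(test_loader))
```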
- Adaboost [20pts]
Write a class that implements the Adaboost algorithm. Your class should be similar to sklearn's in that it should have a fit and predict method to train and test the classifier, respectively. You should also use the sampling function from Homework #1 to train the weak learning algorithm, which should be a shallow decision tree. The Adaboost class should be compared to sklearn's implementation on datasets from the course Github page.
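
Here is a minimal sketch of the expected fit/predict interface, assuming binary labels in {-1, +1}. The weighted resampling step (`np.random.choice`) stands in for the Homework #1 sampling function; swap in your own version.

```python
# Sketch of binary AdaBoost with decision stumps as the weak learner.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class AdaBoost:
    def __init__(self, n_estimators=50, max_depth=1):
        self.n_estimators = n_estimators
        self.max_depth = max_depth
        self.stumps, self.alphas = [], []

    def fit(self, X, y):
        n = len(y)
        w = np.full(n, 1.0 / n)  # uniform initial example weights
        for _ in range(self.n_estimators):
            # Train the weak learner on a sample drawn according to the weights
            # (stand-in for the Homework #1 sampling function).
            idx = np.random.choice(n, size=n, replace=True, p=w)
            stump = DecisionTreeClassifier(max_depth=self.max_depth)
            stump.fit(X[idx], y[idx])
            pred = stump.predict(X)
            err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            # Re-weight: misclassified points get more weight next round.
            w *= np.exp(-alpha * y * pred)
            w /= w.sum()
            self.stumps.append(stump)
            self.alphas.append(alpha)
        return self

    def predict(self, X):
        # Sign of the weighted vote over all weak learners.
        votes = sum(a * s.predict(X) for a, s in zip(self.alphas, self.stumps))
        return np.sign(votes)
```

For the comparison, train `sklearn.ensemble.AdaBoostClassifier` with the same number of estimators on the same datasets and report both error rates.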
- Recurrent Neural Networks for Language Modeling
Read LSTM: A Search Space Odyssey (https://arxiv.org/abs/1503.04069). One application of an RNN is the ability to model language, which is what your phone does when it predicts the top three words while you're texting. In this problem, you will need to build a language model.
You are encouraged to start out with the code here. While this code will implement a language model, you are required to modify the code to attempt to beat the baseline for the experiments they have implemented. For example, one modification would be to train multiple language models and average, or weight, their outputs to generate language (see the sketch below). Write a couple of paragraphs about what you did and whether the results improve the model over the baseline on Github.
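
As one illustration of the averaging idea, here is a sketch in PyTorch. `LSTMLanguageModel` is a hypothetical stand-in for whichever baseline model you start from; the point is only that the next-word distributions of several independently trained models are averaged at prediction time.

```python
# Sketch: ensemble several LSTM language models by averaging their
# next-word probability distributions. The model class here is a
# placeholder for the baseline code you are modifying.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):              # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                  # logits: (batch, seq_len, vocab)

def ensemble_next_word_probs(models, tokens):
    """Average the next-word distributions of several trained models."""
    with torch.no_grad():
        probs = [torch.softmax(m(tokens)[:, -1, :], dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)   # (batch, vocab)
```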
