CSE676 Project 1: Introduction to Deep Learning Neural Networks


1 Introduction

Many deep learning models have been proposed and implemented for the task of extracting features from a given image for classification. There is a large body of literature discussing new architectures in terms of layer composition and recognition performance. It is therefore an informative introductory project to analyze a few architectures in terms of memory usage and inference time, and to study how computational cost impacts recognition accuracy.

2 Networks

2.1 VGGNet [1]

The VGG network architecture was introduced by Simonyan and Zisserman in their 2014 paper, Very Deep Convolutional Networks for Large-Scale Image Recognition.

This network is characterized by its simplicity, using only 3×3 convolutional layers stacked on top of each other in increasing depth. Reducing volume size is handled by max pooling. Two fully-connected layers, each with 4,096 nodes, are then followed by a softmax classifier.
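To make this concrete, here is a minimal Keras sketch of a VGG-style stack. This is an illustration only, not the assignment's reference implementation; the input shape, filter counts, and number of blocks are assumptions for CIFAR-sized images.

```python
# Minimal sketch of a VGG-style network in Keras (illustration only).
from tensorflow.keras import layers, models

def vgg_block(x, filters, num_convs):
    # Stack 3x3 convolutions, then halve the spatial size with max pooling.
    for _ in range(num_convs):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    return layers.MaxPooling2D((2, 2))(x)

inputs = layers.Input(shape=(32, 32, 3))   # CIFAR-sized input (assumption)
x = vgg_block(inputs, 64, 2)
x = vgg_block(x, 128, 2)
x = layers.Flatten()(x)
x = layers.Dense(4096, activation="relu")(x)   # two FC layers of 4,096 nodes
x = layers.Dense(4096, activation="relu")(x)
outputs = layers.Dense(100, activation="softmax")(x)  # 100 CIFAR-100 classes
model = models.Model(inputs, outputs)
```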

2.2 ResNet [1]

Unlike traditional sequential network architectures such as AlexNet, OverFeat, and VGG, ResNet is instead a form of exotic architecture that relies on micro-architecture modules (also called network-in-network architectures).

The term micro-architecture refers to the set of building blocks used to construct the network. A collection of micro-architecture building blocks (along with your standard CONV, POOL, etc. layers) leads to the macro-architecture (i.e., the end network itself).

First introduced by He et al. in their 2015 paper, Deep Residual Learning for Image Recognition, the ResNet architecture has become a seminal work, demonstrating that very deep networks can be trained through the use of residual modules.
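A minimal sketch of one such micro-architecture building block, the residual (identity) block, is given below in Keras; it assumes the input already has `filters` channels so the shortcut can be added directly, and is an illustration rather than the graded implementation.

```python
# Minimal sketch of a ResNet identity block (illustration only).
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two 3x3 convolutions whose output is added back to the input
    # (the "shortcut"), computing F(x) + x.
    # Assumption: x already has `filters` channels, so the identity
    # shortcut needs no projection.
    shortcut = x
    y = layers.Conv2D(filters, (3, 3), padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, (3, 3), padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])
    return layers.Activation("relu")(y)
```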

2.3 InceptionNet [1]

The Inception micro-architecture was first introduced by Szegedy et al. in their 2014 paper, Going Deeper with Convolutions.

The goal of the Inception module is to act as a multi-level feature extractor by computing 1×1, 3×3, and 5×5 convolutions within the same module of the network; the outputs of these filters are then stacked along the channel dimension before being fed into the next layer in the network.
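As a hedged sketch (not the assignment's reference code), a naive Inception module might look like the following in Keras; the branch filter counts f1, f3, f5, and fp are hypothetical parameters.

```python
# Minimal sketch of a naive Inception module (illustration only):
# parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch,
# concatenated along the channel dimension.
from tensorflow.keras import layers

def inception_module(x, f1, f3, f5, fp):
    branch1 = layers.Conv2D(f1, (1, 1), padding="same", activation="relu")(x)
    branch3 = layers.Conv2D(f3, (3, 3), padding="same", activation="relu")(x)
    branch5 = layers.Conv2D(f5, (5, 5), padding="same", activation="relu")(x)
    pool = layers.MaxPooling2D((3, 3), strides=(1, 1), padding="same")(x)
    pool = layers.Conv2D(fp, (1, 1), padding="same", activation="relu")(pool)
    # Stack all branch outputs along the channel axis.
    return layers.Concatenate(axis=-1)([branch1, branch3, branch5, pool])
```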

The original incarnation of this architecture was called GoogLeNet, but subsequent manifestations have simply been called Inception vN, where N refers to the version number put out by Google.

The Inception V3 architecture included in the Keras core comes from the later publication by Szegedy et al., Rethinking the Inception Architecture for Computer Vision (2015), which proposes updates to the Inception module to further boost ImageNet classification accuracy.

The weights for Inception V3 are smaller than those of both VGG and ResNet, coming in at 96 MB.
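These pretrained weights can be loaded directly through the Keras applications API, for example:

```python
# Load the Inception V3 model with the ImageNet weights shipped
# with Keras (the ~96 MB weights mentioned above).
from tensorflow.keras.applications import InceptionV3

model = InceptionV3(weights="imagenet")
model.summary()
```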

3 Project Requirements

3.1 Implementation and Coding

3.1.1 Task Definition

  1. Implement the VGGNet 16, ResNet 18, and Inception V2 architectures using SGD [2] and Adam [3] optimization, with the following variations in network regularization schemes (a sketch of how these combinations might be enumerated appears after this list):
    • No Regularization
    • Batch Normalization [4]
    • Dropout [5]
  2. Use the CIFAR-100 dataset for training and testing.
  3. Calculate precision, recall, and accuracy to evaluate each of the 18 experiments.
  4. Implement the Early Stopping regularization scheme in all the experiments.
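As a hedged sketch (not the graded solution), the 3 architectures × 2 optimizers × 3 regularization settings = 18 experiments, CIFAR-100 loading, and early stopping might be wired together in Keras as follows; build_model is a hypothetical helper that you would implement per task 1.

```python
# Sketch of the 18-experiment grid with early stopping (illustration only).
import itertools
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # simple normalization

archs = ["vgg16", "resnet18", "inception_v2"]
optimizers = {"sgd": tf.keras.optimizers.SGD,
              "adam": tf.keras.optimizers.Adam}
regularizers = ["none", "batchnorm", "dropout"]

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

for arch, (opt_name, opt_cls), reg in itertools.product(
        archs, optimizers.items(), regularizers):
    print(f"Running experiment: {arch} / {opt_name} / {reg}")
    model = build_model(arch, regularization=reg)   # hypothetical helper
    model.compile(optimizer=opt_cls(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, validation_split=0.1,
              epochs=100, callbacks=[early_stop])
    # Precision and recall on the test set can then be computed from
    # model.predict output, e.g. with sklearn.metrics.precision_score
    # and recall_score (average="macro").
```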

3.2 Report

The report must contain the following points:

  1. An introduction section describing the project in general and why it is useful to do the comparison.
  2. Describe the implementation of the architectures as well as the specifics of the training procedure used.
  3. Explain the networks as best as possible in your own words.
  4. Include plots showing the performance of each model in terms of accuracy and loss over the training epochs (a plotting sketch follows this checklist).
  5. Display a table like the one shown in Figure 1 to compare the above networks under the various combinations, and explain the results in the results section.
Arch                            VGG 16                         ResNet 18                      Inception v2
Optimizer  Setting              Precision  Recall  Accuracy    Precision  Recall  Accuracy    Precision  Recall  Accuracy
SGD        With BatchNorm       1          1       1           2          2       2           3          3       3
           With DropOut         4          4       4           5          5       5           6          6       6
           No Regularization    7          7       7           8          8       8           9          9       9
ADAM       With BatchNorm       10         10      10          11         11      11          12         12      12
           With DropOut         13         13      13          14         14      14          15         15      15
           No Regularization    16         16      16          17         17      17          18         18      18

Figure 1: Sample output table, where the entries 1 through 18 represent the experiment numbers.

  6. A conclusion section.
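For point 4 above, one minimal way to produce the accuracy and loss curves is from the History object returned by Keras' model.fit; this sketch assumes the model, data, and metrics=["accuracy"] setup from the earlier experiment-grid sketch.

```python
# Minimal sketch: accuracy/loss curves from a Keras History object.
import matplotlib.pyplot as plt

history = model.fit(x_train, y_train, validation_split=0.1, epochs=50)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["accuracy"], label="train")
ax1.plot(history.history["val_accuracy"], label="validation")
ax1.set_xlabel("epoch"); ax1.set_ylabel("accuracy"); ax1.legend()
ax2.plot(history.history["loss"], label="train")
ax2.plot(history.history["val_loss"], label="validation")
ax2.set_xlabel("epoch"); ax2.set_ylabel("loss"); ax2.legend()
plt.tight_layout()
plt.savefig("curves.png")
```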

