[SOLVED] deep learning html python graph network GPU 1

$25

File Name: deep_learning_html_python_graph_network_GPU_1.zip
File Size: 423.9 KB

Assignment 4 November 29, 2017
CSE 252A Computer Vision I Fall 2017
Assignment 4
1.2 Problem 1: Install TensorFlow (2 pts)
Follow the directions on https://www.tensorflow.org/install to install TensorFlow on your computer.
Note: You will not need GPU support for this assignment, so don't worry if you don't have one. Furthermore, installing with GPU support is often more difficult to configure, so it is suggested that you install the CPU-only version. However, if you have a GPU and would like to install GPU support, feel free to do so at your own risk :)
Note: On Windows, TensorFlow is only supported in Python 3, so you will need to install Python 3 for this assignment.
Run the following cell to verify your installation.
In: import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
1.3 Problem 2: Downloading CIFAR10 (1 pt)
Download the CIFAR10 dataset from http://www.cs.toronto.edu/~kriz/cifar.html. You will need the python version: http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
Extract the data to ./data/. Once extracted, run the loading cell below to view a few example images. (A scripted alternative to the manual download and extraction is sketched next.)
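If you prefer to script this step, a minimal sketch using only the Python standard library is shown below; it assumes the URL given above and renames the extracted cifar-10-batches-py folder to ./data/ so it matches the loader in the next cell. Adjust the paths if you extract the archive by hand instead.

import os
import tarfile
try:
    from urllib.request import urlretrieve  # Python 3
except ImportError:
    from urllib import urlretrieve  # Python 2

url = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
archive = 'cifar-10-python.tar.gz'

if not os.path.exists('./data'):
    urlretrieve(url, archive)  # download the archive
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall('.')  # extracts to ./cifar-10-batches-py/
    os.rename('cifar-10-batches-py', 'data')  # match the ./data/ path used by getData below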
In [1]: import numpy as np

# unpickles raw data files
def unpickle(file):
    import pickle
    import sys
    with open(file, 'rb') as fo:
        if sys.version_info[0] < 3:
            dict = pickle.load(fo)
        else:
            dict = pickle.load(fo, encoding='bytes')
    return dict

# loads data from a single file
def getBatch(file):
    dict = unpickle(file)
    data = dict[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.asarray(dict[b'labels'], dtype=np.int64)
    return data, labels
# loads all training and testing data
def getData(path='./data/'):
    classes = [s.decode('UTF-8') for s in unpickle(path+'batches.meta')[b'label_names']]
    trainData, trainLabels = [], []
    for i in range(5):
        data, labels = getBatch(path+'data_batch_%d'%(i+1))
        trainData.append(data)
        trainLabels.append(labels)
    trainData = np.concatenate(trainData)
    trainLabels = np.concatenate(trainLabels)
    testData, testLabels = getBatch(path+'test_batch')
    return classes, trainData, trainLabels, testData, testLabels

# training and testing data that will be used in the following problems
classes, trainData, trainLabels, testData, testLabels = getData()
# display some example images
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(14, 6))
for i in range(14):
    plt.subplot(2, 7, i+1)
    plt.imshow(trainData[i])
    plt.title(classes[trainLabels[i]])
plt.show()
print('train shape: ' + str(trainData.shape) + ', ' + str(trainLabels.shape))
print('test shape : ' + str(testData.shape) + ', ' + str(testLabels.shape))
train shape: (50000, 32, 32, 3), (50000,)
test shape : (10000, 32, 32, 3), (10000,)
Below are some helper functions that will be used in the following problems.
In: # a generator for batches of data
# yields data (batchsize, 32, 32, 3) and labels (batchsize)
# if shuffle, it will load batches in a random order
def DataBatch(data, label, batchsize, shuffle=True):
    n = data.shape[0]
    if shuffle:
        index = np.random.permutation(n)
    else:
        index = np.arange(n)
    for i in range(int(np.ceil(n/batchsize))):
        inds = index[i*batchsize : min(n, (i+1)*batchsize)]
        yield data[inds], label[inds]

# tests the accuracy of a classifier
def test(testData, testLabels, classifier):
    batchsize = 50
    correct = 0.
    for data, label in DataBatch(testData, testLabels, batchsize):
        prediction = classifier(data)
        # print(prediction)
        correct += np.sum(prediction == label)
    return correct / testData.shape[0] * 100
# a sample classifier
# given an input it outputs a random class
class RandomClassifier:
    def __init__(self, classes=10):
        self.classes = classes
    def __call__(self, x):
        return np.random.randint(self.classes, size=x.shape[0])

randomClassifier = RandomClassifier()
print('Random classifier accuracy: %f' % test(testData, testLabels, randomClassifier))
1.4 Problem 3: Confusion Matrix (5 pts)
Here you will implement a test script that computes the confusion matrix for a classifier. The matrix should be n x n, where n is the number of classes. Entry M[i,j] should contain the number of times an image of class i was classified as class j. M should be normalized such that each row sums to 1.
Hint: see the function test above for reference.
In: def confusion(testData, testLabels, classifier):
    # your code here
    return M

def VisualizeConfussion(M):
    plt.figure(figsize=(14, 6))
    plt.imshow(M, vmin=0, vmax=1)
    plt.xticks(np.arange(len(classes)), classes, rotation='vertical')
    plt.yticks(np.arange(len(classes)), classes)
    plt.show()

M = confusion(testData, testLabels, randomClassifier)
VisualizeConfussion(M)
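For reference, one possible implementation of the confusion computation is sketched below; treat it as an illustration rather than the required solution. The name confusion_example is hypothetical (so it does not clobber your own function), it reuses the DataBatch generator from the helper code, and the row normalization assumes every class appears at least once in testLabels.

def confusion_example(testData, testLabels, classifier, classes=10):
    M = np.zeros((classes, classes))
    for data, label in DataBatch(testData, testLabels, 50, shuffle=False):
        prediction = classifier(data)
        for t, p in zip(label, prediction):
            M[t, p] += 1  # row = true class, column = predicted class
    M = M / M.sum(axis=1, keepdims=True)  # normalize each row to sum to 1
    return M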
1.5 Problem 4: K-Nearest Neighbors (KNN) (5 pts)
Here you will implement a simple KNN classifier. The distance metric is Euclidean distance in pixel space. k refers to the number of neighbors involved in voting on the class.
Hint: you may want to use: sklearn.neighbors.KNeighborsClassifier
In: from sklearn.neighbors import KNeighborsClassifier

class KNNClassifer:
    def __init__(self, k=3):
        # k is the number of neighbors involved in voting
        # your code here

    def train(self, trainData, trainLabels):
        # your code here

    def __call__(self, x):
        # this method should take a batch of images (batchsize, 32, 32, 3) and return a batch of predictions
        # predictions should be int64 values in the range [0,9] corresponding to the class
        # your code here

# test your classifier with only the first 100 training examples (use this while debugging)
# note: you should get around 10-20% accuracy
knnClassiferX = KNNClassifer()
knnClassiferX.train(trainData[:100], trainLabels[:100])
print('KNN classifier accuracy: %f' % test(testData, testLabels, knnClassiferX))
In: # test your classifier with all the training examples (this may take a while)
# note: you should get around 30% accuracy
knnClassifer = KNNClassifer()
knnClassifer.train(trainData, trainLabels)
print('KNN classifier accuracy: %f' % test(testData, testLabels, knnClassifer))

# display confusion matrix for your KNN classifier with all the training examples
M = confusion(testData, testLabels, knnClassifer)
VisualizeConfussion(M)
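One way to fill in the classifier, following the sklearn hint above, is sketched here as an illustration (KNNClassiferSketch is a hypothetical name so it does not collide with your own class). Images are flattened to 3072-dimensional vectors so that the Euclidean distance is taken in raw pixel space, as the problem statement asks.

class KNNClassiferSketch:
    def __init__(self, k=3):
        # k is the number of neighbors involved in voting
        self.knn = KNeighborsClassifier(n_neighbors=k)
    def train(self, trainData, trainLabels):
        # flatten each 32x32x3 image to a 3072-dimensional vector
        self.knn.fit(trainData.reshape(trainData.shape[0], -1), trainLabels)
    def __call__(self, x):
        # returns int64 class predictions for a batch of images
        return self.knn.predict(x.reshape(x.shape[0], -1)).astype(np.int64)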
1.6 Problem 5: Principal Component Analysis (PCA) + K-Nearest Neighbors (KNN) (5 pts)
Here you will implement a simple KNN classifier in PCA space. You should implement PCA yourself using SVD (you may not use sklearn.decomposition.PCA or any other package that directly implements PCA transformations).
Hint: Don't forget to apply the same normalization at test time.
Note: you should get similar accuracy to above, but it should run faster.
In: from sklearn.decomposition import PCA

class PCAKNNClassifer:
    def __init__(self, components=25, k=3):
        # your code here

    def train(self, trainData, trainLabels):
        # your code here

    def __call__(self, x):
        # your code here

# test your classifier with only the first 100 training examples (use this while debugging)
pcaknnClassiferX = PCAKNNClassifer()
pcaknnClassiferX.train(trainData[:100], trainLabels[:100])
print('PCA-KNN classifier accuracy: %f' % test(testData, testLabels, pcaknnClassiferX))
In: # test your classifier with all the training examples (this may take a few minutes)
pcaknnClassifer = PCAKNNClassifer()
pcaknnClassifer.train(trainData, trainLabels)
print('KNN classifier accuracy: %f' % test(testData, testLabels, pcaknnClassifer))

# display the confusion matrix
M = confusion(testData, testLabels, pcaknnClassifer)
VisualizeConfussion(M)
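Below is a sketch of how the PCA step could be written directly with numpy's SVD (no sklearn.decomposition.PCA), combined with the same sklearn KNN as before. PCAKNNSketch is a hypothetical name, the defaults mirror the skeleton above, and the training mean and projection are reused at test time as the hint requires. Note that a dense SVD of the full 50000x3072 training matrix is slow; an eigendecomposition of the 3072x3072 covariance matrix is a common equivalent shortcut.

class PCAKNNSketch:
    def __init__(self, components=25, k=3):
        self.components = components
        self.knn = KNeighborsClassifier(n_neighbors=k)
    def train(self, trainData, trainLabels):
        X = trainData.reshape(trainData.shape[0], -1).astype(np.float64)
        self.mean = X.mean(axis=0)
        # rows of Vt are the principal directions; keep the top `components`
        U, S, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.proj = Vt[:self.components].T  # (3072, components)
        self.knn.fit((X - self.mean).dot(self.proj), trainLabels)
    def __call__(self, x):
        Xc = x.reshape(x.shape[0], -1).astype(np.float64) - self.mean  # same normalization as training
        return self.knn.predict(Xc.dot(self.proj)).astype(np.int64)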
1.7 Deep learning
Below is some helper code to train your deep networks.
Hint: see https://www.tensorflow.org/get_started/mnist/pros or https://www.tensorflow.org/get_started/mnist/beginners for reference.
In: # base class for your TensorFlow networks. It implements the training loop (train) and prediction (__call__).
# You will need to implement the __init__ function to define the network's structure in the following problems.
class TFClassifier:
    def __init__(self):
        pass

    def train(self, trainData, trainLabels, epochs=1, batchsize=50):
        self.prediction = tf.argmax(self.y, 1)
        self.cross_entropy = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.y_, logits=self.y))
        self.train_step = tf.train.AdamOptimizer(1e-4).minimize(self.cross_entropy)
        self.correct_prediction = tf.equal(self.prediction, self.y_)
        self.accuracy = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32))
        self.sess.run(tf.global_variables_initializer())

        for epoch in range(epochs):
            for i, (data, label) in enumerate(DataBatch(trainData, trainLabels, batchsize)):
                _, acc = self.sess.run([self.train_step, self.accuracy],
                                       feed_dict={self.x: data, self.y_: label})
                if i % 100 == 99:
                    print('%d/%d %d %f' % (epoch, epochs, i, acc))
            print('testing epoch:%d accuracy: %f' % (epoch+1, test(testData, testLabels, self)))

    def __call__(self, x):
        return self.sess.run(self.prediction, feed_dict={self.x: x})

# helper function to get weight variable
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.01)
    return tf.Variable(initial)

# helper function to get bias variable
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# example linear classifier
class LinearClassifer(TFClassifier):
    def __init__(self, classes=10):
        self.sess = tf.Session()
        self.x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # input batch of images
        self.y_ = tf.placeholder(tf.int64, shape=[None])  # input labels

        # model variables
        self.W = weight_variable([32*32*3, classes])
        self.b = bias_variable([classes])

        # linear operation
        self.y = tf.matmul(tf.reshape(self.x, (-1, 32*32*3)), self.W) + self.b

# test the example linear classifier (note: you should get around 20-30% accuracy)
linearClassifer = LinearClassifer()
linearClassifer.train(trainData, trainLabels, epochs=20)

# display confusion matrix
M = confusion(testData, testLabels, linearClassifer)
VisualizeConfussion(M)
1.8 Problem 6: Multi-Layer Perceptron (MLP) (5 pts)
Here you will implement an MLP. The MLP should consist of 3 linear layers (matrix multiplication and bias offset) that map to the following feature dimensions:
32x32x3 -> hidden
hidden -> hidden
hidden -> classes
The first two linear layers should each be followed by a ReLU nonlinearity. The final layer should not have a nonlinearity applied, as we desire the raw logits output (see the documentation for tf.nn.sparse_softmax_cross_entropy_with_logits used in the training).
The final output of the computation graph should be stored in self.y, as that will be used in the training.
Hint: see the example linear classifier. Note: you should get around 50% accuracy.
In: class MLPClassifer(TFClassifier):
    def __init__(self, classes=10, hidden=100):
        self.sess = tf.Session()
        self.x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # input batch of images
        self.y_ = tf.placeholder(tf.int64, shape=[None])  # input labels

        # your code here

# test your MLP classifier (note: you should get around 50% accuracy)
mlpClassifer = MLPClassifer()
mlpClassifer.train(trainData, trainLabels, epochs=20)

# display confusion matrix
M = confusion(testData, testLabels, mlpClassifer)
VisualizeConfussion(M)
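As an illustration of what the graph construction might look like (not necessarily how you must write it), here is a sketch that follows the layer sizes above and the conventions of the helper code: weight_variable, bias_variable, and raw logits stored in self.y. MLPClassiferSketch is a hypothetical name so it does not replace your own MLPClassifer.

class MLPClassiferSketch(TFClassifier):
    def __init__(self, classes=10, hidden=100):
        self.sess = tf.Session()
        self.x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # input batch of images
        self.y_ = tf.placeholder(tf.int64, shape=[None])  # input labels

        flat = tf.reshape(self.x, [-1, 32*32*3])
        # layer 1: 32*32*3 -> hidden, followed by ReLU
        h1 = tf.nn.relu(tf.matmul(flat, weight_variable([32*32*3, hidden])) + bias_variable([hidden]))
        # layer 2: hidden -> hidden, followed by ReLU
        h2 = tf.nn.relu(tf.matmul(h1, weight_variable([hidden, hidden])) + bias_variable([hidden]))
        # layer 3: hidden -> classes, raw logits (no nonlinearity)
        self.y = tf.matmul(h2, weight_variable([hidden, classes])) + bias_variable([classes])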
1.9 Problem 7: Convolutional Neural Network (CNN) (7 pts)
Here you will implement a CNN with the following architecture:
ReLU( Conv(kernel_size=4x4, stride=2, output_features=n) )
ReLU( Conv(kernel_size=4x4, stride=2, output_features=n*2) )
ReLU( Conv(kernel_size=4x4, stride=2, output_features=n*4) )
Linear(output_features=classes)
In: def conv2d(x, W, stride=2):
    return tf.nn.conv2d(x, W, strides=[1, stride, stride, 1], padding='SAME')

class CNNClassifer(TFClassifier):
    def __init__(self, classes=10, n=16):
        self.sess = tf.Session()
        self.x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # input batch of images
        self.y_ = tf.placeholder(tf.int64, shape=[None])  # input labels

        # your code here

# test your CNN classifier (note: you should get around 65% accuracy)
cnnClassifer = CNNClassifer()
cnnClassifer.train(trainData, trainLabels, epochs=20)

# display confusion matrix
M = confusion(testData, testLabels, cnnClassifer)
VisualizeConfussion(M)
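For reference, a sketch of one possible graph for the architecture above is given below, reusing the conv2d, weight_variable and bias_variable helpers defined earlier; CNNClassiferSketch is a hypothetical name. With 4x4 stride-2 convolutions and 'SAME' padding the spatial size halves at each layer (32 -> 16 -> 8 -> 4), so the last feature map is flattened to 4*4*(n*4) values before the final linear layer.

class CNNClassiferSketch(TFClassifier):
    def __init__(self, classes=10, n=16):
        self.sess = tf.Session()
        self.x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])  # input batch of images
        self.y_ = tf.placeholder(tf.int64, shape=[None])  # input labels

        h1 = tf.nn.relu(conv2d(self.x, weight_variable([4, 4, 3, n])) + bias_variable([n]))  # 16x16xn
        h2 = tf.nn.relu(conv2d(h1, weight_variable([4, 4, n, n*2])) + bias_variable([n*2]))  # 8x8x(n*2)
        h3 = tf.nn.relu(conv2d(h2, weight_variable([4, 4, n*2, n*4])) + bias_variable([n*4]))  # 4x4x(n*4)
        flat = tf.reshape(h3, [-1, 4*4*n*4])
        self.y = tf.matmul(flat, weight_variable([4*4*n*4, classes])) + bias_variable([classes])  # raw logits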
1.10 Further reference
To see how state-of-the-art deep networks do on this dataset, see: https://github.com/tensorflow/models/tree/master/research/resnet