Lab 4
With the help of neural networks, artificial intelligence has demonstrated extraordinary abilities in several tasks. The neuron is the most fundamental building block of a neural network, and the perceptron is one of the simplest types of artificial neurons. In this model we have n inputs (usually given as a vector) and their corresponding weights. We multiply each input by its weight, sum the products, and add a bias term. We denote the result as z:
z = ∑_{i=1}^{n} w_i x_i + b = Wᵀ X + b
After that, we apply an activation function, σ, to z and produce an activation a (the output). The activation function for the perceptron is sometimes called a step function:

σ(q) = 1 if q ≥ 0, 0 if q < 0

a = σ(z)
This is the mathematical model of a single neuron, the most fundamental unit of a neural network.
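The single-neuron model above can be sketched in numpy as follows. This is a minimal illustration, not the required submission; the function names `step` and `perceptron` and the sample inputs are our own:

```python
import numpy as np

def step(q):
    """Step activation: 1 if q >= 0, else 0 (elementwise)."""
    return np.where(q >= 0, 1, 0)

def perceptron(X, w, b):
    """Compute a = step(w·x + b) for each row x of X."""
    return step(X @ w + b)

# Example with the parameters used later in the lab: b = 0.5, w = (0.5, -0.5)
w = np.array([0.5, -0.5])
b = 0.5
X = np.array([[1.0, 2.0], [2.0, 1.0]])   # two made-up 2-D samples
print(perceptron(X, w, b))               # → [1 1]
```

For the first sample, z = 0.5·1 − 0.5·2 + 0.5 = 0, and since the step function fires at z ≥ 0 the output is 1.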
1. (4 points)
⚫ Implement a single perceptron with two-dimensional input using numpy.
⚫ With the given neuronal parameters, i.e. b = 0.5, w_1 = 0.5, w_2 = −0.5, compute the outputs in response to the real input data saved in the data.txt file.
⚫ Print the percentage of samples whose outputs are 1.
⚫ Show all the samples in a scatter chart where red and blue points represent the samples with output 1 and 0, respectively. Then draw the decision line w_1 x_1 + w_2 x_2 + b = 0 and save the figure as a PDF file.

2. Do the same tasks as mentioned above for the data presented in Table I. (1 point)

Table I: AND gate
X1   X2   Desired Output
0    0    0
0    1    0
1    0    0
1    1    1
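Evaluating the fixed perceptron from task 1 on the four AND-gate inputs makes the mismatch concrete. A short numpy sketch (the variable names are our own):

```python
import numpy as np

# AND-gate inputs and desired outputs from Table I
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([0, 0, 0, 1])

# Fixed parameters from task 1: b = 0.5, w1 = 0.5, w2 = -0.5
w = np.array([0.5, -0.5])
b = 0.5

y = np.where(X @ w + b >= 0, 1, 0)
print(y)                 # → [1 1 1 1]: the perceptron fires on every input
print((y != d).sum())    # → 3: three of the four predictions disagree with Table I
```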

We can see that some of the predicted outputs differ from their corresponding desired outputs. How can we correctly classify all of our inputs? An idea is to run each sample input through the perceptron and, if the perceptron fires when it shouldn't have, inhibit it; if the perceptron doesn't fire when it should have, excite it. To inhibit or excite, the weights and the bias are changed according to an update rule:
w=w+x(d−y)
b = b +(d − y)
Where d is the desired output, y is the predicted output, and  is the learning rate (a hyperparameter given manually)
With the update rule, we can create a function that keeps applying it until our perceptron classifies all of our inputs correctly. We need to keep iterating through our training data until this happens. One epoch is when our perceptron has seen all of the training data once. Usually, we run our learning algorithm for multiple epochs.
3. (4 points)
⚫ Implement the learning function described above.
⚫ Use the function to make the perceptron classify all of the inputs correctly, and print the weight vector and bias in each epoch.
⚫ Show the learning process of the weights and bias using a line chart and save the chart as a PDF file.
4. Train the perceptron to classify the data set in Table II, show the learning process of the parameters using a line chart, and save the chart as a PDF file. (1 point)

Table II: NOR gate
X1   X2   Desired Output
0    0    1
0    1    0
1    0    0
1    1    0
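Task 4 additionally needs the parameter trajectory for the line chart. A sketch that records (w1, w2, b) after each epoch while training on the NOR data above (starting parameters, learning rate, and the `history` list are our own choices):

```python
import numpy as np

def step(q):
    return np.where(q >= 0, 1, 0)

# NOR-gate data from Table II
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([1, 0, 0, 0])

w, b, eta = np.zeros(2), 0.0, 0.1
history = []                      # (w1, w2, b) after each epoch, for the line chart
for epoch in range(100):
    errors = 0
    for x, target in zip(X, d):
        y = step(w @ x + b)
        if y != target:
            w = w + eta * x * (target - y)
            b = b + eta * (target - y)
            errors += 1
    history.append((w[0], w[1], b))
    if errors == 0:               # converged: all four inputs classified correctly
        break

print(len(history), "epochs, final parameters:", history[-1])
```

The columns of `history` can then be drawn with matplotlib (`plt.plot` one line per parameter) and saved with `plt.savefig(...)` to produce the required PDF chart.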
