Course assignment
ENG5322M: Introduction to computer programming
October 28, 2019
Instructions
You should submit the following through the ENG5322M Moodle submission system by 4pm,
22nd November 2019:
1. Python notebook (.ipynb format) used to solve each of the assignment problems.
This can be a separate notebook for each problem or a single file. You will be marked according to the following criteria:
Program correctness (60): Does your program solve the problem given to you?
Program readability (20): Have you written your program in such a way that it is easy to follow? Have you included appropriate comments?
Testing (20): Have you done appropriate testing to show your program works as intended?
You are given two problems to solve for your project assignment which will be outlined in more detail shortly. The two problems are focussed on:
1. Conjugate gradient method
2. Multivariate least square method
Notebook format
I expect to see you explain your code throughout your notebook, both using text and images, much like I have done in the lectures during this course. As a quick guide, the default type of a cell is code, but if you change it to Markdown (look in the toolbar) and then hit Shift and Enter, you will see it become text in the notebook.
1. Testing: You should write appropriate code that performs tests on your program.

1 Problem 1: Conjugate gradient method
Overview of task: Write a program that solves equations of the form $Ax = b$ using the conjugate gradient method. Use this to solve the equation:

$$\begin{bmatrix} 4 & -1 & 1 \\ -1 & 4 & -2 \\ 1 & -2 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 12 \\ -1 \\ 5 \end{bmatrix} \tag{1}$$

1.1 Derivation
The problem involves finding the vector x that minimizes the scalar function

$$f(x) = \tfrac{1}{2}\, x^T A x - b^T x \tag{2}$$

where the matrix A is symmetric and positive definite. Because $f(x)$ is minimized when its gradient $\nabla f = Ax - b$ is zero, we see that the minimization is equivalent to solving

$$Ax = b \tag{3}$$
Gradient methods solve this minimization problem through iteration, starting with an initial estimate of the vector $x_0$ and then iterating for $k$ computations using

$$x_{k+1} = x_k + \alpha_k s_k \tag{4}$$

The step length $\alpha_k$ is chosen so that $x_{k+1}$ minimizes $f(x_{k+1})$ in the search direction $s_k$, such that $x_{k+1}$ satisfies

$$A(x_k + \alpha_k s_k) = b \tag{5}$$

If we now introduce a residual

$$r_k = b - A x_k \tag{6}$$

then premultiplying both sides by $s_k^T$ and solving for $\alpha_k$ gives

$$\alpha_k = \frac{s_k^T r_k}{s_k^T A s_k} \tag{7}$$
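Spelling that step out: substituting Eq. (4) into Eq. (5) and using the residual defined in Eq. (6),

$$A x_k + \alpha_k A s_k = b \;\;\Rightarrow\;\; \alpha_k A s_k = r_k \;\;\Rightarrow\;\; \alpha_k \, s_k^T A s_k = s_k^T r_k,$$

and dividing by the scalar $s_k^T A s_k$ recovers Eq. (7).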
While intuitively we could choose $s_k = -\nabla f = r_k$, as this is the direction of the largest negative change in $f(x)$ (the method of steepest descent), it converges very slowly. A better choice is the conjugate gradient method, which uses the search direction

$$s_{k+1} = r_{k+1} + \beta_k s_k \tag{8}$$

The constant $\beta_k$ is then chosen so that the two successive search directions are conjugate to each other, i.e.,

$$s_{k+1}^T A s_k = 0 \tag{9}$$

Substituting Eq. (8) into Eq. (9) gives:

$$\beta_k = -\frac{r_{k+1}^T A s_k}{s_k^T A s_k} \tag{10}$$

The algorithm is then as follows:
Here is the outline of the conjugate gradient algorithm:

Choose $x_0$ (any vector will do).
Let $r_0 = b - A x_0$.
Let $s_0 = r_0$ (lacking a previous search direction, choose the direction of steepest descent).
Do with $k = 0, 1, 2, \ldots$:
    $\alpha_k = \dfrac{s_k^T r_k}{s_k^T A s_k}$.
    $x_{k+1} = x_k + \alpha_k s_k$.
    $r_{k+1} = b - A x_{k+1}$.
    If $|r_{k+1}| \le \varepsilon$, exit the loop ($\varepsilon$ is the error tolerance).
    $\beta_k = -\dfrac{r_{k+1}^T A s_k}{s_k^T A s_k}$.
    $s_{k+1} = r_{k+1} + \beta_k s_k$.

Figure 1: Pseudo-algorithm for the conjugate gradient method.

It can be shown that the residual vectors $r_1, r_2, r_3, \ldots$ produced by the algorithm are mutually orthogonal, i.e. $r_i \cdot r_j = 0$ for $i \neq j$. Once the whole set of $n$ residual vectors has been computed, the residual resulting from the next iteration must be a null vector ($r_{n+1} = 0$), indicating that the solution has been obtained. The conjugate gradient algorithm is therefore not an iterative method in the strict sense, because it reaches the exact solution after $n$ computational cycles; in practice, however, convergence is usually achieved in fewer than $n$ iterations.
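To show how these steps fit together, here is a minimal numpy sketch of the pseudo-algorithm in Figure 1 (the function name conj_grad, the starting guess and the cap of n iterations are illustrative choices, not requirements):

import numpy as np

def conj_grad(A, b, x, tol=1.0e-9):
    # Minimal sketch of the conjugate gradient loop for a dense matrix A.
    n = len(b)
    r = b - np.dot(A, x)                       # r0 = b - A x0
    s = r.copy()                               # s0 = r0 (steepest descent direction)
    for k in range(n):                         # at most n cycles in exact arithmetic
        As = np.dot(A, s)
        alpha = np.dot(s, r) / np.dot(s, As)   # Eq. (7)
        x = x + alpha * s                      # Eq. (4)
        r = b - np.dot(A, x)                   # Eq. (6) for the next step
        if np.sqrt(np.dot(r, r)) < tol:        # |r_{k+1}| below the tolerance
            break
        beta = -np.dot(r, As) / np.dot(s, As)  # Eq. (10)
        s = r + beta * s                       # Eq. (8)
    return x, k + 1

A = np.array([[4.0, -1.0, 1.0], [-1.0, 4.0, -2.0], [1.0, -2.0, 4.0]])
b = np.array([12.0, -1.0, 5.0])
x, num_iter = conj_grad(A, b, np.ones(3))

For this symmetric positive-definite 3 x 3 system the loop should terminate within three iterations.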
Please note that, while writing the algorithm, if you want to multiply the transpose of a vector with a regular vector, say $x^T y$, you can use np.dot(x, y) in numpy. While implementing the algorithm, set the tolerance to a very low value (tol = 1.0e-9).
1.2 Hint
The algorithm requires calculating $r$, $\alpha$, and $\beta$, along with the modulus of $r$. If you can work back from the worked example, then you can break down this whole algorithm into very simple matrix-vector multiplications, additions and subtractions that you have to do in a loop, whether it is a for loop or a while loop:

(a) Multiplying a matrix A with x and subtracting the result from a vector b to get r. This can all be done using numpy arrays,

A = np.array([[4, -1, 1], [-1, 4, -2], [1, -2, 4]])
b = np.array([12, -1, 5])
x = np.ones(3)

and then r = b - Ax is nothing but r = b - np.dot(A, x).

(b) You also have to solve for $\alpha$ and $\beta$. If I take $\alpha$, then the numerator is nothing but multiplying the transpose of s with r, which can be done using np.dot. The denominator is the transpose of s with As, which you can again solve using np.dot, provided you precalculate A times s using np.dot:

numerator = np.dot(s, r)
As = np.dot(A, s)
denominator = np.dot(s, As)
alpha = numerator / denominator
You can similarly take hints from the above to solve for $\beta$.
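For instance (a sketch using the variables above, plus a hypothetical r_new holding the updated residual $r_{k+1}$), Eq. (10) becomes:

r_new = b - np.dot(A, x)                   # updated residual r_{k+1}
beta = -np.dot(r_new, As) / np.dot(s, As)  # Eq. (10)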
(c) $x_{k+1} = x_k + \alpha_k s_k$, i.e., calculating the next value of x is nothing but

x = x + alpha*s

or, in short form,

x += alpha*s

We have also covered the += operator in the lectures. You can similarly solve for s.

(d) To calculate $|r|$, you can simply use

rnorm = np.dot(r, r)

and then use an if statement to see if rnorm is less than a tolerance (say 1e-5) to break from the loop.
If I have said not to use a numpy library function, then it means not to use an equivalent function, such as np.linalg.solve, to cheat; we have already covered it in the lectures and it acts as a one-line solution:

def conjugate_gradient_cheating(A, b):
    x = np.linalg.solve(A, b)
    return x

Instead, I want you to implement your own algorithm:

def conjugate_gradient_snazzy(A, b):
    # your algorithm here
    return x
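As a quick sketch of the kind of testing asked for in the marking criteria (assuming the two function names above, to be run once conjugate_gradient_snazzy has been filled in), the one-line solver can serve as a reference result:

A = np.array([[4, -1, 1], [-1, 4, -2], [1, -2, 4]], dtype=float)
b = np.array([12, -1, 5], dtype=float)

x_ref = conjugate_gradient_cheating(A, b)   # reference answer from numpy
x_cg = conjugate_gradient_snazzy(A, b)      # your own implementation
print(np.allclose(x_cg, x_ref))             # should print True
print(np.allclose(np.dot(A, x_cg), b))      # residual check: does A x equal b?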

2 Problem 2: Multivariate least squares method
We want to find the regression coefficients $a_0, a_1, \ldots, a_m$ for the following equation:

$$y_i = a_0 + a_1 x_{1i} + a_2 x_{2i} + a_3 x_{3i} + \ldots + a_m x_{mi} \tag{11}$$

To solve this, let us suppose we take the case $m = 2$ and add an error term $\epsilon_i$ to our equation:

$$y_i + \epsilon_i = a_0 + a_1 x_{1i} + a_2 x_{2i} \tag{12}$$
We can then obtain an objective function S to minimize as:

$$\epsilon_i = a_0 + a_1 x_{1i} + a_2 x_{2i} - y_i \tag{13}$$

$$\epsilon_i^2 = \left(a_0 + a_1 x_{1i} + a_2 x_{2i} - y_i\right)^2 \tag{14}$$

$$S = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} \left(a_0 + a_1 x_{1i} + a_2 x_{2i} - y_i\right)^2 \tag{15}$$
Minimizing S with respect to $a_0$, $a_1$, and $a_2$, we get:

$$\frac{\partial S}{\partial a_0} = \sum_{i=1}^{n} 2\left(a_0 + a_1 x_{1i} + a_2 x_{2i} - y_i\right) \tag{16}$$

$$\frac{\partial S}{\partial a_1} = \sum_{i=1}^{n} 2\left(a_0 + a_1 x_{1i} + a_2 x_{2i} - y_i\right) x_{1i} \tag{17}$$

$$\frac{\partial S}{\partial a_2} = \sum_{i=1}^{n} 2\left(a_0 + a_1 x_{1i} + a_2 x_{2i} - y_i\right) x_{2i} \tag{18}$$
Equating these to zero, we obtain three equations in three unknowns, $a_0$, $a_1$, $a_2$:

$$a_0 + a_1 \langle x_1 \rangle + a_2 \langle x_2 \rangle = \langle y \rangle \tag{19}$$

$$a_0 \langle x_1 \rangle + a_1 \langle x_1 x_1 \rangle + a_2 \langle x_1 x_2 \rangle = \langle x_1 y \rangle \tag{20}$$

$$a_0 \langle x_2 \rangle + a_1 \langle x_1 x_2 \rangle + a_2 \langle x_2 x_2 \rangle = \langle x_2 y \rangle \tag{21}$$

where $\langle \cdot \rangle$ represents an average quantity defined as:

$$\langle x \rangle = \frac{1}{n} \sum_{i=1}^{n} x_i \tag{22}$$

We can represent this in matrix form, in order to use matrix methods, as:

$$\begin{bmatrix} 1 & \langle x_1 \rangle & \langle x_2 \rangle \\ \langle x_1 \rangle & \langle x_1 x_1 \rangle & \langle x_1 x_2 \rangle \\ \langle x_2 \rangle & \langle x_1 x_2 \rangle & \langle x_2 x_2 \rangle \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \langle y \rangle \\ \langle x_1 y \rangle \\ \langle x_2 y \rangle \end{bmatrix} \tag{23}$$
or $X A = Y$. The matrix of unknown coefficients, $A$, is obtained by finding the inverse of $X$, so that $A = X^{-1} Y$.
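As a minimal numpy sketch of Eq. (23) (assuming x1, x2 and y are one-dimensional arrays holding the three data columns; the function name is illustrative):

import numpy as np

def fit_two_predictors(x1, x2, y):
    # Assemble the matrix X and vector Y of Eq. (23) from averages.
    X = np.array([[1.0,         np.mean(x1),    np.mean(x2)],
                  [np.mean(x1), np.mean(x1*x1), np.mean(x1*x2)],
                  [np.mean(x2), np.mean(x1*x2), np.mean(x2*x2)]])
    Y = np.array([np.mean(y), np.mean(x1*y), np.mean(x2*y)])
    return np.linalg.solve(X, Y)   # coefficients [a0, a1, a2]

Here np.linalg.solve is used rather than forming $X^{-1}$ explicitly; the hint below instead keeps running sums, which is equivalent once the $1/n$ factors cancel.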
Extending to m variables, we get:

$$X = \begin{bmatrix} 1 & \langle x_1 \rangle & \langle x_2 \rangle & \ldots & \langle x_m \rangle \\ \langle x_1 \rangle & \langle x_1 x_1 \rangle & \langle x_1 x_2 \rangle & \ldots & \langle x_1 x_m \rangle \\ \langle x_2 \rangle & \langle x_1 x_2 \rangle & \langle x_2 x_2 \rangle & \ldots & \langle x_2 x_m \rangle \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \langle x_m \rangle & \langle x_m x_1 \rangle & \langle x_m x_2 \rangle & \ldots & \langle x_m x_m \rangle \end{bmatrix}, \qquad Y = \begin{bmatrix} \langle y \rangle \\ \langle x_1 y \rangle \\ \langle x_2 y \rangle \\ \vdots \\ \langle x_m y \rangle \end{bmatrix} \tag{24}$$
Overview of task: The assignment is to use the above logic to find the coefficients $a_0, a_1, a_2$ for the data given in Table 1 (also provided as a tab-delimited text file). The exact solution is $y = 2.145 + 3.117 x_1 - 1.486 x_2$.
2.1 Hint
To program this efficiently in Python, we can make use of a trick. Imagine we get the values of x and y from a file; since we are reading one line at a time, rather than calculating the averages we can simply maintain the sums, by taking 1/n out and cancelling it on both sides:
$$\frac{1}{n}\begin{bmatrix} n & \sum_{i=1}^{n} x_{1i} & \sum_{i=1}^{n} x_{2i} \\ \sum_{i=1}^{n} x_{1i} & \sum_{i=1}^{n} x_{1i} x_{1i} & \sum_{i=1}^{n} x_{1i} x_{2i} \\ \sum_{i=1}^{n} x_{2i} & \sum_{i=1}^{n} x_{1i} x_{2i} & \sum_{i=1}^{n} x_{2i} x_{2i} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \frac{1}{n}\begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_{1i} y_i \\ \sum_{i=1}^{n} x_{2i} y_i \end{bmatrix} \tag{25}$$
If you import all the functions from the numpy library (from numpy import *), and if you have a variable m as the number of independent variables, and if you define x = zeros((m+1, m+1)), y = zeros((m+1, 1)), and xi = [0.0]*(m+1), then you can populate both matrices x and y in a nested loop with two equations, x[j,k] += xi[j]*xi[k] and y[j,0] += yi*xi[j], provided you have xi[0] = 1, xi[1] the value of the first predictor (first column), xi[2] the value of the second predictor (second column), and so on; yi is the dependent variable (last column in Table 1). Finally, you can simply convert the numpy arrays x and y to numpy matrices so that you can get the inverse automatically: X = mat(x.copy()), Y = mat(y.copy()), and A = X.I*Y.
Also, the data file provided is tab-delimited. Look up how to split a string using the .split() function.
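Putting the hint together, a minimal sketch of the whole procedure might look like this (the file name data.txt is a placeholder for the tab-delimited file provided with the assignment):

from numpy import zeros, mat

m = 2                                   # number of independent variables
x = zeros((m+1, m+1))                   # left-hand sums of Eq. (25)
y = zeros((m+1, 1))                     # right-hand sums of Eq. (25)

with open('data.txt') as f:             # placeholder name; adjust if the file has a header line
    for line in f:
        if not line.strip():
            continue                    # skip blank lines
        cols = line.split('\t')         # columns: x1, x2, y
        xi = [1.0, float(cols[0]), float(cols[1])]
        yi = float(cols[2])
        for j in range(m+1):
            for k in range(m+1):
                x[j, k] += xi[j] * xi[k]
            y[j, 0] += yi * xi[j]

X, Y = mat(x.copy()), mat(y.copy())
A = X.I * Y                             # column of coefficients a0, a1, a2
print(A)

With the data in Table 1, the resulting coefficients should be close to the exact solution quoted in the overview above.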

Table 1: Data to be fitted

x1      x2      y
0.408   0.58    2.39
0.429   0.51    2.53
0.492   0.53    3.38
0.529   0.6     2.72
0.569   0.58    2.95
0.677   0.64    3.32
0.703   0.61    3.45
0.841   0.66    3.81
0.911   0.73    3.9
0.936   0.72    4.11
0.96    0.57    4.24
0.978   0.84    4.81
0.993   0.69    4.1
1.038   0.75    4.28
1.051   0.76    4.05
1.119   0.85    4.23
1.134   0.75    4.58
1.16    0.85    4.4
1.209   0.72    5.05
1.272   0.82    4.79
1.303   0.86    4.76
1.353   0.98    4.65
1.367   0.9     5.18
1.388   0.71    5.5
1.425   1.04    5.01
1.453   0.9     5.15
1.484   0.77    5.8
1.503   0.93    5.33
1.536   0.81    5.23
1.563   0.83    6.11
1.655   1.2     5.34
1.684   0.96    6.13
1.897   1.24    6.45
