- Consider the linear system Anφ = ρ, where An is an n × n matrix with 2’s on the main diagonal, −1’s directly above and below the main diagonal and 0’s everywhere else. For instance, A5 is
A5 = [  2 −1  0  0  0
       −1  2 −1  0  0
        0 −1  2 −1  0
        0  0 −1  2 −1
        0  0  0 −1  2 ]
This is a discretized version of Poisson's equation (in one dimension, −φ′′(x) = ρ(x)).
This equation appears very often in physical applications. We will discuss discretizations and differential equations, including the origin of this matrix An, later in the class.
Construct the matrix A80 in Matlab. In addition, make the 80 × 1 vector ρ such that the jth entry of ρ is defined according to the formula
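The construction of A80 can be sketched as follows. This is a Python/NumPy version (the assignment itself uses Matlab); the variable names are illustrative:

```python
import numpy as np

# Build the n x n matrix A_n: 2 on the main diagonal,
# -1 on the first sub- and superdiagonals, 0 elsewhere.
n = 80
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```

In Matlab the same matrix can be built with, for instance, `A = 2*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1)`.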
- The Jacobi iteration method for this problem can be written as φk+1 = Mφk + c. (Note that φk means the kth guess for φ and is a vector. It does not mean the kth entry of φ.) Find the eigenvalue of M that has the largest absolute value (or magnitude for complex numbers) and save the absolute value (or magnitude) of this eigenvalue in A1.dat.
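For the Jacobi splitting, M = −D⁻¹(L + U), where D is the diagonal part of A. A Python/NumPy sketch of computing the largest eigenvalue magnitude (the spectral radius of M):

```python
import numpy as np

n = 80
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

D = np.diag(np.diag(A))            # diagonal part of A
M = -np.linalg.solve(D, A - D)     # Jacobi iteration matrix: -D^{-1}(L + U)
spectral_radius = np.max(np.abs(np.linalg.eigvals(M)))
```

For this matrix the spectral radius comes out just below 1, which already hints at slow convergence.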
- Use Jacobi iteration to solve for φ. Your initial guess should be an 80×1 vector of all ones, and you should stop when the infinity norm of φk+1 − φk goes below a tolerance of 10⁻⁴. (Recall that you can calculate the infinity norm of a vector x in Matlab using the command norm(x,Inf).) Save the total number of iterations in A2.dat. The initial guess does not count as an iteration, but every other guess does. Calculate the “true solution” φ using the backslash command, and find the error between your final guess and the true solution. Save the infinity norm of this error in A3.dat.
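The iteration-and-stopping logic can be sketched as below (Python/NumPy; the same loop structure works for any splitting method written as φk+1 = Mφk + c). Since the assignment's formula for ρ is given separately, the sketch uses a placeholder right-hand side of all ones:

```python
import numpy as np

n = 80
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rho = np.ones(n)                 # placeholder; substitute the assignment's formula

D = np.diag(np.diag(A))
M = -np.linalg.solve(D, A - D)   # Jacobi iteration matrix
c = np.linalg.solve(D, rho)

phi = np.ones(n)                 # initial guess: all ones (does not count as an iteration)
tol = 1e-4
iters = 0
while True:
    phi_next = M @ phi + c
    iters += 1
    if np.linalg.norm(phi_next - phi, np.inf) < tol:
        phi = phi_next
        break
    phi = phi_next

phi_true = np.linalg.solve(A, rho)   # NumPy equivalent of Matlab's backslash
error = np.linalg.norm(phi - phi_true, np.inf)
```

Note that a small step-to-step difference does not guarantee a small error against the true solution; comparing the two is exactly the point of saving both quantities.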
- The Gauss-Seidel iteration method for this problem can also be written as φk+1 = Mφk + c. Note that these are not the same M and c as in part (a). Find the eigenvalue of M that has the largest absolute value (or magnitude for complex numbers) and save the absolute value (or magnitude) of this eigenvalue in A4.dat.
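For Gauss-Seidel the iteration matrix is M = −(D + L)⁻¹U. A Python/NumPy sketch, parallel to the Jacobi one:

```python
import numpy as np

n = 80
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

P = np.tril(A)                  # D + L: lower triangle including the diagonal
T = np.triu(A, k=1)             # U: strictly upper triangle
M = -np.linalg.solve(P, T)      # Gauss-Seidel iteration matrix: -(D + L)^{-1} U
spectral_radius = np.max(np.abs(np.linalg.eigvals(M)))
```

The resulting spectral radius is smaller than Jacobi's for this matrix, but still close to 1.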
- Use Gauss-Seidel iteration to solve for φ. Your initial guess and stopping criterion should be the same as in part (b). Save the total number of iterations in A5.dat. Find the error between your final guess and the “true solution” φ and save the infinity norm of this error in A6.dat.
Things to think about: Is the matrix A80 strictly diagonally dominant? Does that matter? Based on the eigenvalues you found in parts (a) and (b), do you think this method is efficient? Which of the two methods would you expect to be faster and why? If you change n, how does it change the eigenvalues of M? What does this say about the speed of the method? Did you find a particularly accurate solution using either of the methods? In practice, would you use one of these methods to solve this problem?
- Any matrix A can be decomposed into a diagonal, upper, and lower part:

  A = [ a11 a12 …  a1n
        a21 a22 …  a2n
         …   …  …   …
        an1 an2 …  ann ]

  D = [ a11  0  …   0
         0  a22 …   0
         …   …  …   …
         0   0  …  ann ]

  U = [ 0  a12  …  a1n
        0   0   …  a2n
        …   …   …  an−1,n
        0   0   …   0 ]

  L = [  0    0   …   0
        a21   0   …   0
         …    …   …   …
        an1   …  an,n−1  0 ]
so that A = D + U + L. These matrices are useful for devising different matrix splitting methods. For instance, the Jacobi method uses A = P + T where P = D and T = U + L. The Gauss-Seidel method uses P = D + L and T = U. Another common matrix splitting method is called successive over-relaxation (SOR). It is exactly the same as the splitting methods we have already seen, except we set

  P = (1/ω)D + L,   T = U + ((ω − 1)/ω)D,

where ω is a constant between 1 and 2. (Notice that if ω = 1 then this is Gauss-Seidel.) We will use this method to solve the system A80φ = ρ from problem 1.
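The D, L, U decomposition described above can be extracted with a couple of built-in triangle/diagonal functions. A small Python/NumPy sketch with a hypothetical 3×3 matrix:

```python
import numpy as np

# Hypothetical 3x3 example to illustrate the splitting A = D + U + L.
A = np.array([[4.0, 1.0, 2.0],
              [3.0, 5.0, 1.0],
              [1.0, 1.0, 6.0]])

D = np.diag(np.diag(A))    # diagonal part
L = np.tril(A, k=-1)       # strictly lower-triangular part
U = np.triu(A, k=1)        # strictly upper-triangular part

print(np.allclose(A, D + U + L))   # the three parts sum back to A
```

The Matlab equivalents are `diag(diag(A))`, `tril(A,-1)`, and `triu(A,1)`.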
- For any given ω, the SOR method can be written as φk+1 = Mφk + c. For ω = 1.2, find the eigenvalue of M that has the largest absolute value (or magnitude for complex numbers) and save the absolute value (or magnitude) of this eigenvalue in A7.dat.
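Assuming the standard SOR splitting P = (1/ω)D + L and T = U + ((ω − 1)/ω)D (which reduces to Gauss-Seidel at ω = 1), the iteration matrix at ω = 1.2 can be sketched in Python/NumPy as:

```python
import numpy as np

n = 80
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

omega = 1.2
P = D / omega + L                    # SOR splitting (assumed standard form)
T = U + (1 - 1 / omega) * D          # chosen so that P + T = A
M = -np.linalg.solve(P, T)           # iteration matrix for phi_{k+1} = M phi_k + c
spectral_radius = np.max(np.abs(np.linalg.eigvals(M)))
```

A quick sanity check is that P + T recovers A exactly.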
- For every value of ω between ω = 1 and ω = 1.99 in increments of 0.01, find the eigenvalue of M that has the largest absolute value (or magnitude for complex numbers). Find the value of ω that corresponds to the smallest of these eigenvalues (in absolute value) and save the value of ω in A8.dat. (You may want to look up the min command in the Matlab help.)
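The sweep over ω can be sketched as follows (Python/NumPy, again assuming the standard SOR splitting):

```python
import numpy as np

n = 80
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

omegas = np.arange(1.0, 2.0, 0.01)        # 1.00, 1.01, ..., 1.99
radii = np.empty_like(omegas)
for i, w in enumerate(omegas):
    P = D / w + L
    T = U + (1 - 1 / w) * D
    M = -np.linalg.solve(P, T)
    radii[i] = np.max(np.abs(np.linalg.eigvals(M)))

best_omega = omegas[np.argmin(radii)]     # omega with the smallest spectral radius
```

In Matlab, `[m, idx] = min(radii)` returns both the minimum and its index, which is the command the hint refers to.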
- Use SOR to solve for φ with the optimal ω that you found in part (b). Your initial guess and stopping criterion should be the same as in problem 1(b) and 1(d). Save the total number of iterations in A9.dat. Find the error between your final guess and the “true solution” φ and save the infinity norm of this error in A10.dat.
Things to think about: Plot the eigenvalues from part (b) versus ω. What do you think would happen if you chose a slightly different value of ω from the one we used in part (c)? How does the eigenvalue you found for A8.dat compare to the ones from problem 1? What does this tell you about how fast the method will converge? Does this match with the number of steps the method took in part (c)? Was this method more or less accurate than the other two? Try plotting the result of each method on the same axes as the “true solution.” Overall, do you think the gains in efficiency and accuracy are worth the time taken to find a good ω?