Problem 1
Suppose $A$ is a lower triangular matrix and let $I$ be the identity matrix. We wish to show that $A^{-1}$ is also lower triangular. Notice that $I = AA^{-1}$. Let $a_{ij}$ and $b_{ij}$ be the elements of $A$ and $A^{-1}$, respectively. Then we can establish the following relationship:
$$a_{11}b_{1j} = \delta_{1j}$$
$$a_{21}b_{1j} + a_{22}b_{2j} = \delta_{2j}$$
$$\vdots$$
$$a_{n1}b_{1j} + \cdots + a_{nn}b_{nj} = \delta_{nj}$$
where $\delta_{ij}$ is the Kronecker delta. This may be rewritten, solving by forward substitution, as
$$b_{ij} = \frac{1}{a_{ii}}\left(\delta_{ij} - \sum_{k=1}^{i-1} a_{ik}b_{kj}\right).$$
Thus it can be seen that whenever the column index exceeds the row index ($j > i$), we have $\delta_{ij} = 0$ and, by induction on $i$, every $b_{kj}$ with $k < i$ is already zero, so $b_{ij} = 0$. Therefore $A^{-1}$ is lower triangular.
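As a sanity check (not part of the proof itself), the forward substitution above is easy to carry out numerically. The sketch below uses a hypothetical helper, lowtri_inverse, written only for this illustration; it builds $A^{-1}$ column by column exactly as in the equations above.

% Sketch: invert a lower triangular A column by column by forward
% substitution (lowtri_inverse is an illustrative name, not assignment code)
function B = lowtri_inverse(A)
    n = size(A, 1);
    B = zeros(n);
    for j = 1:n
        for i = 1:n
            % delta_ij minus the contribution of the entries already computed
            B(i, j) = ((i == j) - A(i, 1:i-1) * B(1:i-1, j)) / A(i, i);
        end
    end
end

For example, with A = tril(randn(5)) + 5*eye(5), the entries of triu(lowtri_inverse(A), 1) come out as exact zeros, matching the argument above.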
Problem 2
Let $\|\cdot\|$ be an induced matrix norm and let $A \in \mathbb{C}^{m \times m}$. By definition, $\rho(A) = \max_i |\lambda_i| = |\lambda_{\max}|$. Consider an eigenvector $x$ for $\lambda_{\max}$; by definition, $Ax = \lambda_{\max}x$. Taking norms gives $\|Ax\| = \|\lambda_{\max}x\| = |\lambda_{\max}|\,\|x\|$. Since an induced norm satisfies $\|Ax\| \le \|A\|\,\|x\|$, this gives $\|A\|\,\|x\| \ge |\lambda_{\max}|\,\|x\|$; dividing by $\|x\| \neq 0$ then gives $\|A\| \ge |\lambda_{\max}| = \rho(A)$, as desired.
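This inequality is also easy to spot-check numerically; the following is a quick illustration of my own (not part of the proof), comparing $\rho(A)$ to a few induced norms of a random complex matrix:

% Spot-check rho(A) <= ||A|| for the induced 1-, 2-, and inf-norms
A = randn(50) + 1i*randn(50);   % arbitrary complex test matrix
rho = max(abs(eig(A)));         % spectral radius
fprintf('rho = %.4f, 1-norm = %.4f, 2-norm = %.4f, inf-norm = %.4f\n', ...
    rho, norm(A, 1), norm(A, 2), norm(A, inf));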
Problem 3
Suppose $A \in \mathbb{C}^{202 \times 202}$ with $\|A\|_2 = 100$ and $\|A\|_F = 101$. From the text, we know $\|A\|_2 = \sigma_1$ and $\|A\|_F = \sqrt{\sigma_1^2 + \sigma_2^2 + \cdots + \sigma_{202}^2}$. Thus, $101^2 = 100^2 + \sigma_2^2 + \cdots + \sigma_{202}^2$ after plugging in values. Rearranging, this becomes $\sigma_2^2 + \cdots + \sigma_{202}^2 = 201$. Since $\sigma_{202}$ is the smallest of these 201 singular values, $\sigma_{202}^2$ can be at most their average, which is 1, so the largest value $\sigma_{202}$ can attain is 1. We arrive at $\kappa(A) = \sigma_1/\sigma_{202} \ge 100$.
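The bound is attained, for instance, by a diagonal matrix with one singular value equal to 100 and the remaining 201 equal to 1. A quick MATLAB check (illustrative only, not required by the problem):

% Diagonal matrix attaining the bound: sigma_1 = 100, the rest equal to 1
A = diag([100, ones(1, 201)]);
norm(A, 2)       % returns 100
norm(A, 'fro')   % returns sqrt(100^2 + 201) = sqrt(10201) = 101
cond(A, 2)       % returns 100, so the bound kappa >= 100 is sharp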
Problem 4
For this problem, I implemented a code Problem4 in MATLAB that generated random matrices of size $m \times m$ with entries of mean 0 and standard deviation $m$. Using MATLAB's built-in norm functions, I obtained the $p$-norms for $p = 1, 2, \infty$ for the random matrices. As my GTID is odd, I computed the ratio between the 2-norm and the infinity-norm as well. Additionally, I computed the average condition number for the matrices. My results are shown below:
(Figure: average one-norm values, and the other computed norms, versus matrix size $m$.)
These results follow intuition, I believe. The one-norm and the infinity-norm are the maximum absolute column sum and the maximum absolute row sum, respectively, so as $m \to \infty$, it follows that $\|A\|_1 \to \infty$ and $\|A\|_\infty \to \infty$. The $\|A\|_2$ results behave in a more linear manner, which makes sense given $\|A\|_2 \ge |\lambda_{\max}|$. Because the two-norm grows linearly and the infinity-norm grows super-linearly, the ratio between the two should converge to 0, as the infinity-norm increases at a faster rate. The condition number of the random matrices overall increases with the size of the matrix, though there are dips and spikes in the results. This still makes sense, as a random matrix tends to become ill-conditioned as its size increases.
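For reference, here is a minimal sketch of the kind of experiment Problem4 performs; the sizes, number of trials, and entry scaling here are illustrative and may differ from my submitted code:

% Sketch: average norms and 2-norm condition numbers of random m-by-m
% matrices over several trials (parameters chosen for illustration)
sizes = [50 100 200 400];
trials = 10;
for m = sizes
    n1 = 0; n2 = 0; ninf = 0; kap = 0;
    for t = 1:trials
        A = randn(m);               % entries with mean 0, unit variance
        n1   = n1   + norm(A, 1);
        n2   = n2   + norm(A, 2);
        ninf = ninf + norm(A, inf);
        kap  = kap  + cond(A, 2);
    end
    fprintf('m = %4d: 1-norm %8.2f, 2-norm %7.2f, inf-norm %8.2f, cond %10.2f\n', ...
        m, n1/trials, n2/trials, ninf/trials, kap/trials);
end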
Problem 5
For this problem, we had to approximate solutions to a differential equation, for parameter values $\epsilon = 0$ and $\epsilon = 2$, using Gaussian elimination with and without partial pivoting. My MATLAB code for this is called Problem5. My results may be seen in the following graphs:
(Figure: approximate and true solutions on $[0, 1]$; vertical axis from $0$ down to about $-2$. Panels: (a) Gaussian elimination for $\epsilon = 0$; (b) Gaussian elimination for $\epsilon = 2$; (c) true solution for $\epsilon = 0$; (d) true solution for $\epsilon = 2$.)
When comparing the true solutions of the differential equation with the approximations, for $\epsilon = 0$ the average relative error was on the scale of $10^{-10}$, while for $\epsilon = 2$ it was on the scale of $10^{-16}$. These are extremely good results, but not surprising, as small step sizes were used, which lead to refined answers. For the various computations, the CPU runtimes are presented below:
Table 1: CPU Runtimes in seconds
Size | $\epsilon = 0$ without pivoting | $\epsilon = 0$ with pivoting | $\epsilon = 2$ without pivoting | $\epsilon = 2$ with pivoting
200  | 0.096323   | 0.264187   | 0.142005   | 0.156812
400  | 0.300473   | 0.552547   | 0.527399   | 0.421511
800  | 2.041219   | 3.074913   | 4.295169   | 2.745189
1000 | 3.963348   | 5.379148   | 8.774305   | 6.098762
2000 | 34.324527  | 40.059037  | 107.276456 | 85.589778
4000 | 398.864609 | 313.789217 | 906.203220 | 859.576564
These CPU times make sense, as both algorithms theoretically require $\mathcal{O}(m^3)$ flops, so the runtime should grow roughly eightfold each time the matrix size doubles.
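For reference, the elimination step looks like the following sketch of Gaussian elimination with partial pivoting (the textbook algorithm; ge_pivot is an illustrative name, not my submitted Problem5 code):

% Solve Ax = b by Gaussian elimination with partial pivoting, then back
% substitution; roughly (2/3)m^3 flops, hence the O(m^3) scaling above
function x = ge_pivot(A, b)
    m = size(A, 1);
    for k = 1:m-1
        [~, p] = max(abs(A(k:m, k)));   % largest pivot in column k
        p = p + k - 1;
        A([k p], :) = A([p k], :);      % swap rows k and p of A and b
        b([k p]) = b([p k]);
        for i = k+1:m
            mult = A(i, k) / A(k, k);   % eliminate entry (i, k)
            A(i, k:m) = A(i, k:m) - mult * A(k, k:m);
            b(i) = b(i) - mult * b(k);
        end
    end
    x = zeros(m, 1);                    % back substitution
    for i = m:-1:1
        x(i) = (b(i) - A(i, i+1:m) * x(i+1:m)) / A(i, i);
    end
end

The version without pivoting simply omits the row swap, saving the column search and data movement at each step at the cost of stability.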