- Implement the *PA* = *LDU* decomposition algorithm discussed in class. Do so yourself (in other words, do not merely use predefined Gaussian elimination code in MATLAB or Python).

Simplifications: (i) You may assume that the matrix *A* is square and invertible. (ii) Do not worry about column interchanges, just row interchanges.

Demonstrate that your implementation works properly on some examples.
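As a point of comparison, a minimal NumPy sketch of such an implementation (assuming the convention that *L* and *U* both have unit diagonals, with the pivots collected in *D*; the function name `pa_ldu` is our own) might look like:

```python
import numpy as np

def pa_ldu(A):
    """PA = LDU via Gaussian elimination with partial pivoting (row swaps only).
    Returns P, L (unit lower), D (diagonal), U (unit upper).
    Assumes A is square and invertible."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # partial pivoting: bring the largest remaining entry in column k into the pivot row
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]   # swap the already-computed multipliers too
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]     # store the multiplier
            A[i, k:] -= L[i, k] * A[k, k:]  # eliminate below the pivot
    d = np.diag(A).copy()
    D = np.diag(d)
    U = A / d[:, None]                      # divide each row by its pivot -> unit diagonal
    return P, L, D, U

# demonstrate on a small example
A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
P, L, D, U = pa_ldu(A)
```

Checking that `P @ A` reproduces `L @ D @ U` on a few random invertible matrices is one way to demonstrate that the implementation works properly.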

- Compute the *PA* = *LDU* decomposition and the SVD decomposition for each of the following matrices. (In fact, here it is enough to let *P* be the identity matrix, so *A* = *LDU*.)

You may use any method or tools you wish to solve this problem, including pre-defined routines from MATLAB or Python, your own code, and/or hand calculations. (Hand calculations may be easiest for computing the *PA* = *LDU* decompositions of these examples. We recommend using pre-defined code to compute the SVD decompositions.) Show how you obtained your solutions, that is, show your work.
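For the SVD side, if NumPy is the tool of choice, `np.linalg.svd` returns the factors directly. A quick sketch (the matrix here is an arbitrary stand-in, not one of the matrices from the problem set):

```python
import numpy as np

A = np.array([[4.0, 0.0], [3.0, -5.0]])  # placeholder matrix, not from the assignment
U, s, Vt = np.linalg.svd(A)              # A = U @ diag(s) @ Vt
```

Verifying that `U @ np.diag(s) @ Vt` reconstructs `A`, and that `U` and `Vt` are orthogonal, is a useful way to show your work.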

- Solve the systems of equations *Ax* = *b* for the values of *A* and *b* given below.

For each system, specify whether the system has zero, one, or many exact solutions. If a system has zero exact solutions, give “the SVD solution” (as defined in class) and explain what this solution means. If a system has a unique exact solution, compute that solution. If a system has more than one exact solution, specify both “the SVD solution” and all solutions, using properties of the SVD decomposition of the matrix *A*, as discussed in class. Show your work, including verifying that your answers are correct.

(a) [matrix *A* and vector *b* not recoverable from the source]

(b) [matrix *A* and vector *b* not recoverable from the source]

(c) [matrix *A* and vector *b* not recoverable from the source]

Explain the difference/similarity of the answers for parts (b) and (c) in terms of the column and null spaces provided by the SVD decomposition of *A*.
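Whichever category a system falls into, a solution of this flavor can be computed uniformly from the SVD. A sketch, assuming "the SVD solution" defined in class coincides with the minimum-norm least-squares solution (the helper name `svd_solution` and the example matrix are ours, since the assignment's matrices did not survive extraction):

```python
import numpy as np

def svd_solution(A, b):
    """Minimum-norm least-squares solution x = V @ Sigma^+ @ U^T @ b,
    built directly from the SVD (equivalent to np.linalg.pinv(A) @ b)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = max(A.shape) * np.finfo(float).eps * s.max()
    s_inv = np.zeros_like(s)
    nz = s > tol                  # invert only the (numerically) nonzero singular values
    s_inv[nz] = 1.0 / s[nz]
    return Vt.T @ (s_inv * (U.T @ b))

# rank-deficient example: x1 + 2*x2 = 3 has infinitely many exact solutions;
# svd_solution returns the one of smallest norm
A = np.array([[1.0, 2.0], [2.0, 4.0]])
b = np.array([3.0, 6.0])
x = svd_solution(A, b)
```

For a system with many exact solutions, the full solution set is this minimum-norm solution plus the null space of *A*, which the SVD hands you as the right singular vectors with zero singular value.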


- Suppose that *u* is an *n*-dimensional column vector of unit length in ℝ^{n}, and let *u*^{T} be its transpose. Then *uu*^{T} is an *n* × *n* matrix. Consider the matrix *A* = *I* − *uu*^{T}.
  - Describe the action of the matrix *A*.
  - Give the eigenvalues of *A*.
  - Describe the null space of *A*.
  - What is *A*^{2}?
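A quick numerical sanity check can guide the answers to all four parts. A NumPy sketch, with *n* = 4 and a random unit vector chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
u = rng.standard_normal(n)
u /= np.linalg.norm(u)              # unit-length column vector
A = np.eye(n) - np.outer(u, u)      # A = I - u u^T
eigs = np.sort(np.linalg.eigvalsh(A))
v = rng.standard_normal(n)          # arbitrary test vector
```

The checks below reflect what the theory predicts: *A* subtracts off the component along *u* (so *Av* = *v* − (*u*·*v*)*u*), the null space is the span of *u*, the eigenvalues are 0 (once) and 1 (with multiplicity *n* − 1), and *A*^{2} = *A*.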

- Describe the action of the matrix [matrix not recoverable from the source].

(As always, show your work.)

- The following problem arises in a large number of robotics and vision problems: Suppose **p**_{1}, …, **p**_{n} are the 3D coordinates of *n* points located on a rigid body in three-space. Suppose further that **q**_{1}, …, **q**_{n} are the 3D coordinates of these same points after the body has been translated and rotated by some unknown amount. Derive an algorithm in which SVD plays a central role for inferring the body's translation and rotation. (You may assume that the coordinate values are precise, not noisy, but see the comment and caution below.) Show that your algorithm works correctly by running it on some examples.

**Comment: **Your algorithm should make use of all the information available. True, in principle you only need three pairs of points – but if you use more points your solution will be more robust, something that might come in handy some day when you need to do this for real with noisy data.

**Caution: **A common mistake is to derive an algorithm that finds the best affine transformation, rather than the best rigid body transformation. Even though you may assume precise coordinate values, imagine how your algorithm would behave with noise. Your algorithm should still produce a rigid body transformation.

**One more comment: **This problem requires some thought. You can probably find a solution on the web or in a vision text book. Try to solve it yourself before looking at any such sources. Spend some time on the problem. It is good practice to develop your analytic skills. Feel free to discuss among yourselves. (As always, cite any sources, including discussions with others.)

Consider the hints below. **Hint #1: **Suppose for a moment that both sets of points have the origin as centroid. Assemble all the points {**p**_{i}} into a matrix *P* and all the points {**q**_{i}} into another matrix *Q*. Now think about the relationship between *P* and *Q*, and also consider matrix products involving *P* and *Q*.

**Hint #2: **There are different approaches to this problem. In one approach, you may find the following fact useful (assuming the dimensions are sensible):

*x*^{T}*R*^{T}*y* = Tr(*Rxy*^{T})*.*

[Here *x* and *y* are column vectors (e.g., 3D vectors) and *R* is a matrix (e.g., a 3×3 rotation matrix). The superscript *T* means *transpose*, so *xy*^{T} is a matrix as well. And Tr is the *trace* operator that adds up the diagonal elements of its square matrix argument.]
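Taken together, the hints point toward what is commonly called the orthogonal Procrustes or Kabsch construction: center both point sets, take the SVD of their cross-covariance, and rebuild a proper rotation from the singular vectors. A hedged NumPy sketch (the function name `rigid_align` and the *n* × 3 point layout are our own conventions, not part of the assignment):

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t with Q_i = R @ P_i + t,
    given corresponding points as n x 3 arrays P and Q (Kabsch method)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    X, Y = P - cp, Q - cq                     # center both point sets
    U, s, Vt = np.linalg.svd(X.T @ Y)         # SVD of the 3x3 cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections:
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # force det(R) = +1, a rigid rotation
    t = cq - R @ cp
    return R, t

# demonstrate on a synthetic example: rotate and translate random points, then recover
rng = np.random.default_rng(0)
G, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
if np.linalg.det(G) < 0:
    G[:, 0] = -G[:, 0]                             # force det = +1 (a proper rotation)
t_true = np.array([1.0, -2.0, 0.5])
P = rng.standard_normal((10, 3))
Q = P @ G.T + t_true
R, t = rigid_align(P, Q)
```

Note how this uses all *n* point pairs at once through the cross-covariance, and how the determinant correction addresses the caution above: without it, noisy data can make the SVD-based estimate a reflection rather than a rigid-body rotation.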
