- Interpolation code.
(a) Implement Newton's interpolation formula.
First, implement a function that constructs the coefficients (divided differences) required for evaluating the Newton form of the interpolation polynomial, that is f[x_0, ..., x_k] for 0 <= k <= n. This function should have the signature divdiff(x,y) and return the output [c] where
x - Given sampling points.
y - Given function values.
c - The coefficients for the Newton interpolation polynomial.
Now implement a function which evaluates Newton's interpolation polynomial at a new set of points xnew. Your function should have the signature interpnewt(c,x,xnew) and return the output [ynew] where
c - Interpolation coefficients computed by divdiff.
x - The same x from divdiff, i.e., the sampling points used to construct the polynomial.
xnew - New points at which to evaluate the interpolating polynomial.
ynew - Values of the interpolating polynomial at xnew.
Important: The variables c, x, y, xnew and ynew should all be one-dimensional numpy arrays (ndarray), with length equal to the number of points/coefficients.
Important: Also, note that the returned value from divdiff is [c] and not c itself; that is, you should put the array c inside a list (which contains the single member c) and have divdiff return it. On the other hand, interpnewt(c,x,xnew) receives c as an input and NOT [c]. Please follow these instructions EXACTLY as they are. After you use divdiff, to use c with interpnewt(c,x,xnew) you can simply unpack c from [c] by using the * operator. Here's an example to illustrate the use of *:
def example(x):
    return x

a = [[1, 2]]
example(a)
Out: [[1, 2]]
example(*a)
Out: [1, 2]
Similarly, the returned value of interpnewt is [ynew] and NOT ynew.
Note: You can use Horner's rule (google it) to make the running time of the evaluation linear in the degree of the interpolation polynomial.
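A minimal sketch of one possible implementation, following the conventions above (one-dimensional numpy arrays; divdiff returns [c]; interpnewt returns [ynew]). The vectorized table update and the Horner-style evaluation are just one way to do it:

import numpy as np

def divdiff(x, y):
    """Divided-difference coefficients c[k] = f[x_0, ..., x_k], k = 0, ..., n."""
    n = len(x)
    c = np.array(y, dtype=float)
    for j in range(1, n):
        # one column of the divided-difference table, updated in place
        c[j:n] = (c[j:n] - c[j-1:n-1]) / (x[j:n] - x[0:n-j])
    return [c]                      # wrapped in a list, as required

def interpnewt(c, x, xnew):
    """Evaluate the Newton-form polynomial at xnew using Horner's rule."""
    n = len(c)
    ynew = np.full_like(np.asarray(xnew, dtype=float), c[-1])
    for k in range(n - 2, -1, -1):
        ynew = ynew * (xnew - x[k]) + c[k]
    return [ynew]                   # wrapped in a list, as required

Since divdiff returns [c], the two functions chain via the * operator described above, e.g. interpnewt(*divdiff(x, y), x, xnew).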
- Given the functions
cos(5x), x in [0, 2]; e^x, x in [0, 10].
- For each function, ESTIMATE the maximal interpolation error numerically, and plot the error (on a logarithmic scale) as a function of the number of points n, for equally spaced points and for Chebyshev points. Explain how you computed the estimates. Choose different values of n so that you get an informative graph. Add the plots to the PDF.
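One way to produce such an estimate, assuming the divdiff/interpnewt sketch above: evaluate the interpolant on a dense grid and take the maximum absolute deviation from the true function. The grid size, the range of n, and the cos(5x) example below are illustrative choices, not requirements:

import numpy as np
import matplotlib.pyplot as plt

def max_interp_error(f, a, b, n, chebyshev=False, m=2001):
    """Estimate max |f - p_n| on [a, b] using n+1 equally spaced or Chebyshev nodes."""
    if chebyshev:
        k = np.arange(n + 1)
        x = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    else:
        x = np.linspace(a, b, n + 1)
    [c] = divdiff(x, f(x))
    xfine = np.linspace(a, b, m)            # dense evaluation grid
    [yfine] = interpnewt(c, x, xfine)
    return np.max(np.abs(f(xfine) - yfine))

ns = np.arange(2, 41)
f = lambda t: np.cos(5 * t)
err_eq = [max_interp_error(f, 0, 2, n) for n in ns]
err_ch = [max_interp_error(f, 0, 2, n, chebyshev=True) for n in ns]
plt.semilogy(ns, err_eq, 'o-', label='equally spaced')
plt.semilogy(ns, err_ch, 's-', label='Chebyshev')
plt.xlabel('n'); plt.ylabel('estimated max error'); plt.legend(); plt.show()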
- For the first two functions, estimate a bound on the error with Chebyshev points from the graph, as a function of n. Compare the error with the analytical bounds derived in class. Write your analysis in the PDF.
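For reference, a commonly used bound for interpolation at n+1 Chebyshev points on [a, b] is the following (whether this matches the exact form derived in class is an assumption):

\max_{x \in [a,b]} |f(x) - p_n(x)|
  \le \frac{(b-a)^{n+1}}{2^{2n+1}\,(n+1)!}
      \max_{\xi \in [a,b]} \left| f^{(n+1)}(\xi) \right|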
- Edge Detection in images: the Sobel operator
Download the image acrop512.png from the moodle. Load the image into a numpy ndarray M in the same way you did with mandril.png from assignment No. 2. Now, theoretically, for each pixel in the image we wish to calculate the central difference in the x direction and save it into a numpy ndarray Dx. In practice, instead of the central difference at pixel (i,j) we will calculate
Dx(i,j) =   1*M(i-1, j-1) - 1*M(i-1, j+1)
          + 2*M(i, j-1)   - 2*M(i, j+1)                    (1)
          + 1*M(i+1, j-1) - 1*M(i+1, j+1)
Note that this is not possible for pixels on the margins of the image, i.e., those of the form [0,j], [511,j], [i,0] and [i,511], so do not compute the central difference there; that is, Dx should be of size 510x510. Do the same for the y direction and save it into a numpy ndarray Dy. Now create a numpy ndarray G(i,j) = sqrt(Dx(i,j)^2 + Dy(i,j)^2), normalize it between 0 and 1 (by dividing all the array's values by the maximal absolute value of the matrix), and use the imshow and imsave functions of the matplotlib library to display and save it.
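A sketch of this computation, assuming the image loads as a 512x512 grayscale array via matplotlib's imread (adjust the loading step to match what you did in assignment No. 2):

import numpy as np
import matplotlib.pyplot as plt

M = plt.imread('acrop512.png').astype(float)
if M.ndim == 3:                       # if the file loads as RGB, reduce to one channel
    M = M.mean(axis=2)

# Sobel-weighted differences on the 510x510 interior, as in equation (1)
Dx = (    M[:-2, :-2] - M[:-2, 2:]
     + 2 * M[1:-1, :-2] - 2 * M[1:-1, 2:]
     +     M[2:, :-2] - M[2:, 2:])
Dy = (    M[:-2, :-2] - M[2:, :-2]
     + 2 * M[:-2, 1:-1] - 2 * M[2:, 1:-1]
     +     M[:-2, 2:] - M[2:, 2:])

G = np.sqrt(Dx**2 + Dy**2)
G /= np.abs(G).max()                  # normalize to [0, 1]

plt.imshow(G, cmap='gray')
plt.imsave('gradient_norm.png', G, cmap='gray')
plt.show()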
- Add the resulting image to your PDF.
- Explain why the resulting image G is a visualization of the norm of the original image's gradient.
- What do we see in the image? Give a mathematical explanation to what we see.
- Differentiate the functions cos x, e^x, and x at x = 0.1, 1.0, and 30, using single-precision forward-difference and central-difference approximations.
- For each function and point, write to the PDF the derivative and its error E(h) as a function of h. Use the values h = 10^-1, 10^-2, ..., 10^-16. (One way to set up these computations in both precisions is sketched after this list.)
- Plot log10 |E(h)| versus log10 h, and check whether the number of decimal places obtained agrees with the estimates from class. Add the plots to the PDF.
- See if you can identify the truncation error at large h and the roundoff error at small h in your plot. Explain how this is seen in the graph. Do the slopes agree with the predictions from class?
- Repeat your analysis in double precision and compare to the single precision results.
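A sketch of one way to tabulate and plot E(h) for the forward and central differences in both precisions; cos x at x = 0.1 is used here purely as an example, and the dtype handling is one possible approach:

import numpy as np
import matplotlib.pyplot as plt

def diff_errors(f, dfdx, x0, dtype):
    """h values and |E(h)| for forward/central differences at x0 in the given precision."""
    h = dtype(10.0) ** (-np.arange(1, 17, dtype=dtype))
    x = dtype(x0)
    fwd = (f(x + h) - f(x)) / h                 # forward difference
    cen = (f(x + h) - f(x - h)) / (2 * h)       # central difference
    exact = dfdx(x0)
    return h, np.abs(fwd - exact), np.abs(cen - exact)

x0 = 0.1
for dtype, label in [(np.float32, 'single'), (np.float64, 'double')]:
    h, e_fwd, e_cen = diff_errors(np.cos, lambda t: -np.sin(t), x0, dtype)
    plt.loglog(h, e_fwd, 'o-', label='forward (' + label + ')')
    plt.loglog(h, e_cen, 's-', label='central (' + label + ')')
plt.xlabel('h'); plt.ylabel('|E(h)|'); plt.legend(); plt.show()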