(a) Let ''p(x)'' be an ''n-1''-th order polynomial. Derive a formula for the ''m x n'' matrix ''A'' that
maps the ''n''-vector ''px = (p(x1), ..., p(xn))'' to the ''m''-vector ''py = (p(y1), ..., p(ym))''.
(Read ''py'' as a vector marked with a subscript ''y''.) We can interpret this as forming a
polynomial interpolant ''p(x)'' to the "data" ''px'' and evaluating the interpolating polynomial
at the points ''{y1,...,ym}'' to get ''py = (p(y1), ..., p(ym))''.

An ''n-1''-th order polynomial ''p(x) = c0 + c1 x + ... + c{n-1} x^{n-1}'' is defined by ''n''
Assuming that ''Vy'' is so truncated, we have ''py = Vy c = Vy Vx^{-1} px'',
so the linear map from ''px'' to ''py'' is ''A = Vy Vx^{-1}''.

(b) Matlab code to compute this matrix is
<code>
X = vander(linspace(-1, 1, n));
Y = vander(linspace(-1, 1, m));
Y = Y(:,(m-n+1):m);  % argh, matlab defines Vandermonde with reversed column order
A = Y*inv(X);
</code>
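As a cross-check, here is a NumPy sketch of the same construction (''numpy.vander'', like Matlab's ''vander'', orders columns by decreasing powers, so the ''N=n'' argument does the truncation for us); the sanity check at the end is mine, not part of the assignment:

```python
import numpy as np

n, m = 11, 21
x = np.linspace(-1, 1, n)
y = np.linspace(-1, 1, m)

X = np.vander(x)          # n x n Vandermonde matrix, decreasing powers
Y = np.vander(y, N=n)     # m x n, truncated to the n columns matching X
A = Y @ np.linalg.inv(X)  # the interpolation matrix Vy Vx^{-1}

# Sanity check: A maps samples of any polynomial of degree <= n-1
# at the x points to its samples at the y points.
p = lambda t: t**3 - 2*t + 1
assert np.allclose(A @ p(x), p(y))
```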
{{:unh2010:iam931:hw3:lebesgueconst.png?400}}

(b) The inf-norm of ''A'' is plotted as a function of ''n'' above. %%(c)%% The inf-norm condition
number of interpolating the constant function 1 from ''n'' to ''m'' equispaced points is the
condition number of the matrix-vector multiplication ''A px'',

<latex>
\kappa = \frac{\|A\| \, \|p_x\|}{\|A \, p_x\|} = \|A\|
</latex>

The simplification occurs because both ''px = (1 1 1 1 ...)'' and ''A px'' have unit inf-norms.
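This simplification is easy to check numerically; a NumPy sketch for the ''m x n = 21 x 11'' case (the bounds asserted below come from the value read off the plot, roughly 24.7):

```python
import numpy as np

n, m = 11, 21
X = np.vander(np.linspace(-1, 1, n))
Y = np.vander(np.linspace(-1, 1, m), N=n)
A = Y @ np.linalg.inv(X)

px = np.ones(n)                       # the constant function 1, sampled at x
kappa = np.linalg.norm(A, np.inf)     # condition number of computing A px

# The interpolant of constant data is the constant polynomial, so A px
# also has unit inf-norm (up to rounding) and kappa reduces to ||A||_inf.
assert abs(np.linalg.norm(px, np.inf) - 1) < 1e-12
assert abs(np.linalg.norm(A @ px, np.inf) - 1) < 1e-9
```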
+ | |||
+ | {{:unh2010:iam931:hw3:polyinterp.png?400}} {{:unh2010:iam931:hw3:polyinterp1.png?400}} | ||
+ | |||
+ | (d) On the left is a reproduction of fig 11.1 using ''A'' for ''m x n = 21 x 11'' and | ||
+ | ''px = [0 0 0 1 1 1 0 0 0 0 0]''. The interpolant looks the same as in the book (good!) and the | ||
+ | inf-norm (max) amplification for this value of ''px'' appears to be about 4. This is somewhat smaller | ||
+ | than 24.7, the inf-norm condition number of interpolating the constant function 1, | ||
+ | but that's ok. The condition number of interpolation is the sup over the space of data ''px''; the particular ''px'' above does not achieve the sup. On the right is a similar plot for ''px = [1 1 1 1 1 1 1 1 1 1 1]'' , though on the | ||
+ | vertical axis we plot ''p(x)-1'' so that small deviations from unity are visible. I would expect the | ||
+ | errors in the interpolating polynomial to be ''O(kappa(A) epsilon_machine) = 24 * 2^-52 = 5e-15'' --that is, we think of the constant function 1 and the matrix multiplication as being perturbed by machine-epsilon rounding errors, which are then amplified by a factor ''kappa(A)''. But the errors are two orders of magnitude larger! Why is this? | ||
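The two experiments can be sketched in NumPy as follows (the amplification bound asserted at the end is the general inequality ''||A px|| <= ||A|| ||px||'', not the specific value 4 read off the plot):

```python
import numpy as np

n, m = 11, 21
X = np.vander(np.linspace(-1, 1, n))
Y = np.vander(np.linspace(-1, 1, m), N=n)
A = Y @ np.linalg.inv(X)

# Left plot: the hat-shaped data of Fig. 11.1, sampled on the finer y grid.
px = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0.0])
py = A @ px
amplification = np.linalg.norm(py, np.inf) / np.linalg.norm(px, np.inf)
assert 1 < amplification <= np.linalg.norm(A, np.inf)

# Right plot: constant data; err is the deviation p(x)-1 caused by rounding.
err = np.linalg.norm(A @ np.ones(n) - 1, np.inf)
```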
====== ex 15.1 e,f ======
Note that the structure of the driver program and eleftwards/rightwards functions ends up
calculating the same partial sums many times over, but I'm optimizing for my time, not CPU time.
- | |||
====== ex 16.2 ======

{{:unh2010:iam931:svderrs.png?400}} {{:unh2010:iam931:qrerrs.png?400}}
(b) These plots show (left) the normwise errors of a numerically computed SVD of randomly
constructed matrices ''A'' with known SVD factors (with the signs of the columns of ''U'' and ''V''
fixed), as a function of the condition number of ''A'', and (right) the same for the QR
decomposition. For the SVD, ''USV' = A'', the errors in the computed unitary factors ''U'' and
''V'' appear to be bounded above linearly in the condition number, suggesting that the accuracy of
the SVD computation follows the error scaling law for backward stable algorithms (Trefethen eqn
15.1) --even though on dimensionality grounds the computation of ''f: A -> U,S,V'' cannot be
backward stable! Also, the singular values are computed to nearly machine precision, even when the
condition number is O(10^18).

The error scalings of the computed ''Q,R'' factors of the QR decomposition are similar to those of
the SVD, though they are an order of magnitude or so worse. Also, whereas the errors in the SVD
factors sometimes fall well below the apparent upper bound, the errors in ''Q'' and ''R'' are
almost always within an order of magnitude of linear-in-condition-number growth.
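A minimal NumPy sketch of one trial of this experiment (the matrix size, the prescribed condition number 1e8, and the well-separated logspace singular values are my choices, not necessarily those used for the plots):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50

# A random matrix with known SVD factors: unitary U, V from QR of random
# matrices, singular values prescribed to give condition number 1e8.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = np.logspace(0, -8, m)
A = U @ np.diag(s) @ V.T

U2, s2, V2h = np.linalg.svd(A)
V2 = V2h.T

# Fix the sign ambiguity of the SVD: flip each computed column pair
# (U2[:,j], V2[:,j]) so that it matches the known factors.
signs = np.sign(np.sum(U * U2, axis=0))
U2 *= signs
V2 *= signs

err_U = np.linalg.norm(U2 - U)   # error in a unitary factor: grows with kappa
err_s = np.linalg.norm(s2 - s)   # error in singular values: near machine eps
```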