====== ex 12.2 ======
  
Let ''x = {x1,...,xn}'' and ''y = {y1,...,ym}'' be n and m equispaced gridpoints on [-1, 1], respectively.
  
(a) Let ''p(x)'' be a polynomial of order ''n-1''. Derive a formula for the ''m x n'' matrix ''A'' that
maps the ''n''-vector ''px = (p(x1), ..., p(xn))'' to the ''m''-vector ''py = (p(y1), ..., p(ym))''.
(Read ''px'' and ''py'' as vectors marked with subscripts ''x'' and ''y''.) We can interpret this as
forming a polynomial interpolant ''p(x)'' to the "data" ''px'' and evaluating the interpolating
polynomial at the points ''{y1,...,ym}'' to get ''py''.
  
  
An order-''n-1'' polynomial ''p(x) = c0 + c1 x + ... + c{n-1} x^{n-1}'' is defined by its ''n''
coefficients ''c = (c0, c1, ..., c{n-1})''. Evaluating the polynomial at a given set of gridpoints
is equivalent to multiplying ''c'' by the Vandermonde matrix for those gridpoints, as defined in
example 1.1; for the gridpoints ''x'' this reads ''px = Vx c'' (read ''Vx'' as a matrix with an
''x'' subscript). Conversely, computing the coefficients from the data ''px'' is equivalent to
solving that system, ''c = Vx^{-1} px''.

Now, to evaluate the interpolating polynomial at the gridpoints ''{y1,...,ym}'', we would like
to multiply ''c'' by the Vandermonde matrix for the points ''y'', i.e. ''py = Vy c''. However,
''Vy'' will be an ''m x m'' matrix, whereas ''c'' is ''n''-dimensional. Let's assume
''m > n''. We can either define ''c_j = 0'' for ''j = n, ..., m-1'' and compute ''py = Vy c'', or,
equivalently, remove the last ''m-n'' columns of ''Vy'' so that it is ''m x n'' and then compute
''py = Vy c''.

Assuming that ''Vy'' is so truncated, we have ''py = Vy c = Vy Vx^{-1} px'',
so the linear map from ''px'' to ''py'' is ''A = Vy Vx^{-1}''.

(b) Matlab code to compute this matrix is

<code>
function A = ex12_2a(n)
% Interpolation matrix mapping values of a degree n-1 polynomial on n
% equispaced points in [-1,1] to its values on m = 2n-1 equispaced points.
m = 2*n-1;

X = vander(linspace(-1, 1, n));
Y = vander(linspace(-1, 1, m));
Y = Y(:,(m-n+1):m);  % argh, matlab defines Vandermonde with reversed column order

A = Y*inv(X);
</code>
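
As a quick sanity check (my addition, not part of the original writeup): since the interpolant of any polynomial of degree less than ''n'' is that polynomial itself, ''A'' should map its samples on the ''x'' grid to its exact samples on the ''y'' grid:

<code>
% Hypothetical check of ex12_2a: A reproduces low-degree polynomials exactly.
n = 11;  m = 2*n-1;
A = ex12_2a(n);
x = linspace(-1, 1, n)';
y = linspace(-1, 1, m)';
p = @(t) 3*t.^4 - 2*t + 1;       % an arbitrary degree-4 test polynomial
err = norm(A*p(x) - p(y), inf)   % should be within a few machine epsilons
</code>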
{{:unh2010:iam931:hw3:lebesgueconst.png?400}}

(b) The inf-norm of ''A'' is plotted as a function of ''n'' on the left. %%(c)%% The inf-norm condition
number of interpolating the constant function 1 from ''n'' to ''m'' equispaced points is
the condition number of the matrix-vector multiplication ''A px'':

<latex>
\kappa = \frac{\|A\| \, \|p_x\|}{\|A p_x\|} = \|A\|
</latex>

The simplification occurs because both ''px = (1 1 1 1 ...)'' and ''A px'' have unit inf-norms: the
polynomial interpolant of the constant function 1 is 1 itself, so ''A px = (1 1 1 ...)'' up to rounding errors.
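
For reference, the left plot could be generated by a script along these lines (a sketch assuming the ''ex12_2a'' function above; the range of ''n'' is illustrative):

<code>
% Sketch: inf-norm of the interpolation matrix A as a function of n.
nn = 2:30;
normA = zeros(size(nn));
for i = 1:length(nn)
    normA(i) = norm(ex12_2a(nn(i)), inf);
end
semilogy(nn, normA, 'o-');
xlabel('n'); ylabel('inf-norm of A');
</code>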

{{:unh2010:iam931:hw3:polyinterp.png?400}} {{:unh2010:iam931:hw3:polyinterp1.png?400}}

(d) On the left is a reproduction of fig 11.1, using ''A'' for ''m x n = 21 x 11'' and
''px = [0 0 0 1 1 1 0 0 0 0 0]''. The interpolant looks the same as in the book (good!), and the
inf-norm (max) amplification for this value of ''px'' appears to be about 4. This is somewhat smaller
than 24.7, the inf-norm condition number of interpolating the constant function 1, but that's ok:
the condition number of interpolation is the sup over the space of data ''px'', and the particular
''px'' above does not achieve the sup. On the right is a similar plot for ''px = [1 1 1 1 1 1 1 1 1 1 1]'',
though on the vertical axis we plot ''p(x)-1'' so that small deviations from unity are visible.
I would expect the errors in the interpolating polynomial to be
''O(kappa(A) epsilon_machine) = 24 * 2^-52 = 5e-15'' --that is, we think of the constant function 1
and the matrix multiplication as being perturbed by machine-epsilon rounding errors, which are then
amplified by a factor ''kappa(A)''. But the errors are two orders of magnitude larger! Why is this?
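
These plots could be generated along these lines (again a sketch using ''ex12_2a''):

<code>
% Sketch: interpolate data px from n=11 to m=21 equispaced points.
n = 11;  m = 2*n-1;
A = ex12_2a(n);
x = linspace(-1, 1, n)';
y = linspace(-1, 1, m)';
px = [0 0 0 1 1 1 0 0 0 0 0]';
plot(x, px, 'o', y, A*px, '.-');          % left-hand plot
% right-hand plot: px = ones(n,1); plot(y, A*px - 1)
</code>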
  
====== ex 15.1 e,f ======
Note that the structure of the driver program and eleftwards/rightwards functions ends up
calculating the same partial sums many times over, but I'm optimizing for my time, not CPU time.

====== ex 16.2 ======

{{:unh2010:iam931:svderrs.png?400}} {{:unh2010:iam931:qrerrs.png?400}}
  
(b) These plots show (left) the normwise errors of a numerically computed SVD of randomly constructed
matrices A with known SVD factors (with the signs of the columns of U and V fixed), as a function of
the condition number of A, and (right) ditto for the QR decomposition. For the SVD, USV' = A, the
errors in the computed unitary factors U and V appear to be bounded above linearly in the condition
number, suggesting that the accuracy of the SVD computation follows the error scaling law for backward
stable algorithms (Trefethen eqn 15.1) --even though on dimensionality grounds the computation of
f: A -> U,S,V cannot be backward stable! Also, the singular values are computed to nearly machine
precision, even when the condition number is O(10^18).
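
A minimal sketch of one sample of the SVD experiment (my reconstruction, not the actual script used; the matrix size and condition number are illustrative):

<code>
% Construct a matrix with known SVD factors and compare against svd(A).
m = 50;  kond = 1e10;
[U,R] = qr(randn(m));               % random orthogonal U (discard R)
[V,R] = qr(randn(m));               % random orthogonal V
s = logspace(0, -log10(kond), m)';  % singular values from 1 down to 1/kond
A = U*diag(s)*V';
[U2,S2,V2] = svd(A);
for j = 1:m                         % fix the sign ambiguity columnwise
    if U(:,j)'*U2(:,j) < 0
        U2(:,j) = -U2(:,j);  V2(:,j) = -V2(:,j);
    end
end
errU = norm(U2-U), errS = norm(diag(S2)-s), errV = norm(V2-V)
</code>

Sweeping ''kond'' over a range of values and recording the errors would give a scatter like the one plotted above.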
  
The error scalings of the computed Q,R factors of the QR decomposition are similar to those of the
SVD, though they are an order of magnitude or so worse. Also, whereas the errors in the SVD factors
are sometimes spread well below the apparent upper bound, the errors in Q and R are almost always
within an order of magnitude of the linear-in-condition-number bound.
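
The QR experiment can be set up analogously (again a reconstruction; here the sign ambiguity is fixed by making the diagonal of ''R'' positive):

<code>
% Construct a matrix with known QR factors and compare against qr(A).
m = 50;
[Q,R] = qr(randn(m));
D = diag(sign(diag(R)));    % flip signs so that diag(R) > 0
Q = Q*D;  R = D*R;          % A = Q*R is unchanged since D^2 = I
A = Q*R;
[Q2,R2] = qr(A);
D2 = diag(sign(diag(R2)));  % apply the same sign convention to the output
Q2 = Q2*D2;  R2 = D2*R2;
errQ = norm(Q2-Q), errR = norm(R2-R), condA = cond(A)
</code>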