
I Preconditioning for linear equations

  1. Nov 25, 2016 #1
    I understand that for many iterative methods, convergence rates can be shown to depend on the condition number of the coefficient matrix $A$ in the linear equation
    $$Ax = y.$$
    Therefore, if a preconditioner satisfies
    $$P \approx A,$$
    then by solving the transformed linear equation
    $$(AP^{-1})(Px) = y,$$
    the new coefficient matrix $AP^{-1}$ will have more favorable spectral properties, and hence better convergence can be achieved.
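    To make the dependence on the condition number concrete: for the conjugate gradient method applied to a symmetric positive definite $A$, the classical error bound is
    $$\| x_k - x_\ast \|_A \le 2 \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{k} \| x_0 - x_\ast \|_A,$$
    where $\kappa(A)$ is the condition number, so driving $\kappa$ toward 1 directly accelerates convergence.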

    Besides approximating A, a good preconditioner should have an inverse that is cheap to apply. Thus, preconditioners are often sought within a restricted class of structured matrices. Typical examples are the incomplete Cholesky and incomplete LU factorizations of the matrix A.
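    As a concrete illustration (a minimal SciPy sketch; the test matrix, drop tolerance, and solver settings are made up for the demo), an incomplete LU factorization can be wrapped as a preconditioner for GMRES, and its effect shows up directly in the iteration count:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy test problem: a 2-D Laplacian, which is moderately ill-conditioned.
    n = 40
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
    b = np.ones(A.shape[0])

    # Incomplete LU factorization P ~ A: applying P^{-1} costs only two
    # sparse triangular solves.
    ilu = spla.spilu(A, drop_tol=1e-4)
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)

    # Count GMRES inner iterations with and without the preconditioner.
    iters = {"plain": 0, "ilu": 0}
    def counter(key):
        def cb(res_norm):
            iters[key] += 1
        return cb

    x_plain, _ = spla.gmres(A, b, callback=counter("plain"), callback_type="pr_norm")
    x_ilu, _ = spla.gmres(A, b, M=M, callback=counter("ilu"), callback_type="pr_norm")

    print(iters)
    ```

    On this problem the ILU-preconditioned solve needs far fewer iterations than the plain one, at the cost of one sparse factorization up front.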

    My question is: why do we want P to approximate A? Or, put more directly, why do we formulate the search for a preconditioner as
    $$\min_{P} \left\| AP^{-1} - I \right\|_F,$$
    where $\|\cdot\|_F$ denotes the Frobenius norm? The identity is not the only matrix with a condition number of 1; would it not be better to pose the problem as
    $$\min_{P,\,Q} \left\| AP^{-1} - Q \right\|_F,$$
    with Q constrained to be orthogonal? Given a structural restriction on P, I imagine this could lead to better preconditioning than the previous formulation. Yet I have not come across any such examples.
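    To illustrate the point numerically (a toy construction, not a practical preconditioner): any orthogonal $Q$ has condition number exactly 1, so if $AP^{-1}$ lands near some orthogonal matrix, the preconditioned system is well-conditioned even when $P$ is nowhere near $A$. Here $A$ is built as $QP$ with a diagonal $P$ purely for the demonstration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100

    # An arbitrary orthogonal Q and a diagonal P (trivially invertible).
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    P = np.diag(rng.uniform(1.0, 1000.0, n))

    # Construct A = Q P. Then A P^{-1} = Q, so the preconditioned matrix
    # has condition number 1 even though P does not approximate A at all.
    A = Q @ P
    cond_A = np.linalg.cond(A)
    cond_pre = np.linalg.cond(A @ np.linalg.inv(P))

    print(cond_A, cond_pre)  # cond_A is large; cond_pre is essentially 1
    ```

    Of course, here $Q$ is known by construction; in practice the catch is that the orthogonality constraint on $Q$ makes the joint minimization much harder than comparing against the fixed target $I$.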
  3. Nov 30, 2016 #2
    Thanks for the thread! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post? The more details the better.
