Estimating singular values from QR decomposition

Estimating the largest and smallest singular values of a matrix A from its QR decomposition is feasible: because Q is orthogonal, the singular values of R are identical to those of A. The absolute value of the product of the diagonal entries of R equals the product of the singular values, but caution is needed for ill-conditioned matrices, where forming A^T A for eigenvalue computations is unreliable. The power method can be employed to find the largest eigenvalue, while the smallest can be derived from the inverse, assuming full rank. The discussion emphasizes the need for a numerically stable approach to estimate singular values in such cases and confirms that A and R share identical singular values.
vibe3
I have a matrix for which I know its QR decomposition: ##A = QR##. I want to estimate the largest and smallest singular values of ##A## (##\sigma_1## and ##\sigma_n##); however, in my application it is too expensive to compute the full SVD of ##A##.

Is it possible to estimate the largest/smallest singular values from the QR decomposition? The only result I've been able to find so far is
$$\left| \prod_i r_{ii} \right| = \prod_i \sigma_i$$
where ##r_{ii}## are the diagonal entries of ##R##. I'm not sure if this implies that the singular values of ##R## are the same as the singular values of ##A##. If that's true, it might be possible and less expensive for my application to compute SVD(##R##) rather than SVD(##A##).
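[As a sanity check, the determinant-product identity above is easy to verify numerically. A quick sketch using NumPy on a random square test matrix (illustration only; the identity holds for any matrix, since both sides equal ##|\det A|## in the square case):]

```python
import numpy as np

# Numerical check of |prod_i r_ii| = prod_i sigma_i on a random
# square test matrix (illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

Q, R = np.linalg.qr(A)
sigma = np.linalg.svd(A, compute_uv=False)

lhs = abs(np.prod(np.diag(R)))
rhs = np.prod(sigma)
print(np.isclose(lhs, rhs))  # both sides equal |det(A)|, so this prints True
```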
 
The singular values of ##A## are the nonzero eigenvalues of ## A^H A ## or ## A A^H ##. The largest eigenvalue of a matrix can usually be computed using the power method with just a few iterations. The smallest eigenvalue of a full-rank matrix is the largest eigenvalue of the inverse, so again the power method can work; if you have the QR, the inverse is cheap to compute. If not full rank ... I'm not sure what clever methods there are.

jason
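[The power-method suggestion above can be sketched working directly on ##R## (whose singular values equal those of ##A##, as established downthread). A hedged sketch using NumPy and SciPy; the function names `largest_singular_value` and `smallest_singular_value` are illustrative, and ##R## is assumed full rank with a nonzero diagonal. The smallest singular value comes from inverse iteration, applying ##(R^T R)^{-1} = R^{-1} R^{-T}## via two triangular solves per step, which avoids ever forming ##A^T A## or an explicit inverse:]

```python
import numpy as np
from scipy.linalg import solve_triangular

def largest_singular_value(R, iters=200, seed=0):
    """Power method on R^T R; its dominant eigenvalue is sigma_1(R)^2 = sigma_1(A)^2."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(R.shape[1])
    for _ in range(iters):
        x = R.T @ (R @ x)          # multiply by R^T R without forming it
        x /= np.linalg.norm(x)
    # Rayleigh quotient x^T (R^T R) x approximates sigma_1^2
    return np.sqrt(x @ (R.T @ (R @ x)))

def smallest_singular_value(R, iters=200, seed=0):
    """Inverse power method: dominant eigenvalue of (R^T R)^{-1} is 1/sigma_n^2."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(R.shape[1])
    for _ in range(iters):
        y = solve_triangular(R, x, trans='T')  # y = R^{-T} x
        x = solve_triangular(R, y)             # x = R^{-1} y
        x /= np.linalg.norm(x)
    y = solve_triangular(R, x, trans='T')
    y = solve_triangular(R, y)
    return 1.0 / np.sqrt(x @ y)                # x^T (R^T R)^{-1} x ~ 1/sigma_n^2
```

Convergence per iteration goes like the square of the singular-value ratio, so a few iterations suffice when the extreme singular values are well separated; badly clustered spectra need more.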
 
The problem is that my matrix ##A## is very ill-conditioned, so computing eigenvalues from ##A^T A## is unreliable (hence the reason I'm computing the QR decomposition instead). I'm looking for a numerically stable way to estimate the singular values from the QR decomposition for ill-conditioned matrices.

Suppose we have ##A = QR = U \Sigma V^T##, and suppose the SVD of ##R## is ##R = U_r \Sigma_r V_r^T##.

Because ##Q## is orthogonal, ##A = (Q U_r) \Sigma_r V_r^T##, which implies ##U = Q U_r## and ##V = V_r##. This would then mean that ##\Sigma = \Sigma_r##, and so the singular values of ##R## and ##A## are the same. Is this correct logic?
 
jasonRF said:
The singular values of ##A## are the nonzero eigenvalues of ## A^H A ## or ## A A^H ##. The largest eigenvalue of a matrix can usually be computed using the power method with just a few iterations. The smallest eigenvalue of a full-rank matrix is the largest eigenvalue of the inverse, so again the power method can work; if you have the QR, the inverse is cheap to compute. If not full rank ... I'm not sure what clever methods there are.

jason
Did you mean to write that the singular values of ##A## are the square roots of the eigenvalues of ##A^H A##?
 
vibe3 said:
The problem is that my matrix ##A## is very ill-conditioned, so computing eigenvalues from ##A^T A## is unreliable (hence the reason I'm computing the QR decomposition instead). I'm looking for a numerically stable way to estimate the singular values from the QR decomposition for ill-conditioned matrices.

Suppose we have ##A = QR = U \Sigma V^T##, and suppose the SVD of ##R## is ##R = U_r \Sigma_r V_r^T##.

Because ##Q## is orthogonal, ##A = (Q U_r) \Sigma_r V_r^T##, which implies ##U = Q U_r## and ##V = V_r##. This would then mean that ##\Sigma = \Sigma_r##, and so the singular values of ##R## and ##A## are the same. Is this correct logic?

Yes, this is mathematically sound. ##A## and ##R## would have the same singular values.
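[The equivalence confirmed here is also easy to check numerically. A small sketch using NumPy; a tall random matrix with the reduced QR is used to illustrate that the result is not limited to square ##A##, since ##Q## having orthonormal columns is all the argument needs:]

```python
import numpy as np

# Numerical confirmation that A and R share singular values.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))

Q, R = np.linalg.qr(A)   # reduced QR: Q is 8x4 with orthonormal columns, R is 4x4
s_A = np.linalg.svd(A, compute_uv=False)
s_R = np.linalg.svd(R, compute_uv=False)
print(np.allclose(s_A, s_R))  # True
```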
 
rs1n said:
Did you mean to write that the singular values of ##A## are the square roots of the eigenvalues of ##A^H A##?
oops. yep.
 