Graduate Numerically Calculating Eigenvalues

  • Thread starter: member 428835
  • Tags: Eigenvalues

Summary:
The discussion revolves around solving the eigenvalue problem for matrices A and B using numerical methods in Mathematica and MATLAB. The user reports discrepancies in eigenvalue results, suggesting potential numerical errors or sensitivity due to the conditioning of matrix B. It is noted that both software tools are generally robust for eigenvalue calculations, and the user is encouraged to verify the singular values of the matrices to assess conditioning. The conversation emphasizes the importance of checking the setup and calculations, as well as understanding that eigenvectors may differ by a multiplicative factor. Ultimately, the user is advised to ensure the accuracy of their matrices and calculations.
member 428835
Hi PF!

I am trying to solve the eigenvalue problem ##Ax = \lambda Bx## where I have numerical entries for the square matrices ##A## and ##B##. I solve this by taking $$Ax = \lambda Bx\implies\\
B^{-1}Ax- \lambda Ix=0\implies\\
(B^{-1}A-\lambda I)x=0$$
where I then use a built-in function in Mathematica and MATLAB (I'm using both to double-check my work) to compute the eigenvalues/eigenvectors of the matrix ##B^{-1}A##. But my answers are off by about an order of magnitude and have a sign error. Is there a more accurate way to compute the eigenvalues?
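As a sketch of that reduction in Python/NumPy (with made-up 2x2 matrices, not the actual ones from this problem): `np.linalg.solve` forms ##B^{-1}A## without explicitly inverting ##B##, which is generally the more stable choice when ##B## is poorly conditioned.

```python
import numpy as np

# Made-up symmetric example matrices (placeholders, not the actual A and B)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 0.0], [0.0, 2.0]])

# Two equivalent reductions to a standard eigenvalue problem
M_inv = np.linalg.inv(B) @ A      # forms B^{-1} explicitly
M_solve = np.linalg.solve(B, A)   # solves B M = A without forming B^{-1}

w1 = np.sort(np.linalg.eigvals(M_inv).real)
w2 = np.sort(np.linalg.eigvals(M_solve).real)
print(w1, w2)  # both ≈ [0.3876, 1.6124]
```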

Thanks!
 
How big are these matrices? It is possible that they are very badly conditioned such that computer round-off is causing problems, but if both Mathematica and Matlab are giving the same answer, then maybe the solution you are comparing it to is wrong. If you plug the solution you get back into the matrix equation, does it satisfy the equality?
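A quick residual check in Python (again with placeholder matrices, not the actual ones from this problem) might look like:

```python
import numpy as np

# Placeholder matrices standing in for the actual A and B
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 0.0], [0.0, 2.0]])

w, V = np.linalg.eig(np.linalg.solve(B, A))

# A small residual for each eigenpair means it satisfies A x = lambda B x
for lam, x in zip(w, V.T):
    residual = np.linalg.norm(A @ x - lam * (B @ x))
    print(f"lambda = {lam:.4f}, residual = {residual:.1e}")
```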
 
Hey joshmccraney! ;)

If Mathematica and Matlab give significantly different results, my first thought is that there was a mistake in using them somewhere.
Can you rule that out?

Otherwise it would suggest that the system is sensitive to numerical errors, which could typically happen if ##B## is close to being singular.
In particular that would also mean that we have eigenvalues close to zero.
Do you have those?

And, as NFuller mentioned, it would indeed be good to verify that the solution satisfies the original equation to figure out what is going on.
 
What's the largest singular value (##\sigma_1##) and the smallest singular value (##\sigma_n##) for ##\mathbf A## and ##\mathbf B##?

I.e. suppose you populate this table:

##
\begin{bmatrix}
& \mathbf A & \mathbf B\\
\sigma_1 & & \\
\sigma_n & &
\end{bmatrix}
##
 
NFuller said:
How big are these matrices? If you plug the solution you get back into the matrix equation, does it satisfy the equality?
The matrices are 3x3. Yes, plugging the solution back in satisfies the eigenvalue problem, but a professor once told me that if there is a numerical mistake, simply plugging the eigenvalues/eigenvectors back into the problem won't help identify it.

I like Serena said:
If Mathematica and Matlab give significantly different results, my first thought is that there was a mistake in using them somewhere.
Can you rule that out?

Otherwise it would suggest that the system is sensitive to numerical errors, which could typically happen if ##B## is close to being singular.
In particular that would also mean that we have eigenvalues close to zero.
Do you have those?
The eigenvalues are the same in both programs. Two eigenvalues are ##O(1)## and the third is ##O(10^{-2})##.
 
StoneTemplePython said:
What's the largest singular value (##\sigma_1##) and the smallest singular value (##\sigma_n##) for ##\mathbf A## and ##\mathbf B##?

I.e. suppose you populate this table:

##
\begin{bmatrix}
& \mathbf A & \mathbf B\\
\sigma_1 & & \\
\sigma_n & &
\end{bmatrix}
##
I'm unsure what ##\sigma## is and how to compute it. If you can describe it to me or direct me to a link I'll find it for you. I tried googling it but nothing definite came up.
 
joshmccraney said:
I'm unsure what ##\sigma## is and how to compute it. If you can describe it to me or direct me to a link I'll find it for you. I tried googling it but nothing definite came up.

Look into Singular Value Decomposition. For instance these links.
- - - -

https://en.wikipedia.org/wiki/Singular_value_decomposition

https://math.mit.edu/~gs/linearalgebra/linearalgebra5_7-1.pdf
https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/positive-definite-matrices-and-applications/singular-value-decomposition/MIT18_06SCF11_Ses3.5sum.pdf

- - - -
If you agree to measure discrepancies/changes in the 2-norm, you can use singular values to quantify how ill-conditioned a matrix is. Singular values (##\sigma##) also give bounds on the spectral radius (the eigenvalues), can be used to compute various matrix norms, and so on.

Calculating these things is standard fare for a numerical library. I don't use Matlab or Mathematica, but getting the singular values should be pretty simple.

For example, in Python, you'd use a command like:

Python:
import numpy as np

A = np.random.random((2, 2))
U, sigma, Vt = np.linalg.svd(A)
print("this has the singular values for A", sigma)
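Once you have the singular values, the 2-norm condition number is just the ratio of the largest to the smallest; as a sketch (assuming NumPy):

```python
import numpy as np

A = np.random.random((3, 3))
sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending order

# 2-norm condition number: ratio of largest to smallest singular value
cond = sigma[0] / sigma[-1]
print("condition number of A:", cond)
```

NumPy also exposes this directly as `np.linalg.cond(A, 2)`.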
 
joshmccraney said:
The matrices are 3x3. Yes, plugging the solution back in satisfies the eigenvalue problem, but a professor once told me that if there is a numerical mistake, simply plugging the eigenvalues/eigenvectors back into the problem won't help identify it.

The eigenvalues are the same in both programs. Two eigenvalues are ##O(1)## and the third is ##O(10^{-2})##.

How about posting those matrices and your results with discrepancies?
3x3 matrices should be small enough for us to process.
 
joshmccraney said:
Yes, plugging the solution satisfies the eigenvalue problem
Then the solution is correct.
joshmccraney said:
I spoke to a professor once and he said if there is a numerical mistake simply plugging in the eigenvalues/vectors into the problem to verify won't help identify the problem.
It's true that if you wrote your own numerical solver, it might sometimes give the right answer and sometimes not, so getting the correct solution once does not validate an algorithm. In this case, however, you are using algorithms written by the developers of Mathematica and Matlab. Those algorithms are correct, and if they are giving the correct solution to the equation, then I'm not sure what the issue is.

The only other thing I can think of is that if you are directly comparing two eigenvectors, the one you calculated (call it ##\mathbf{v}##) and the one from a solution sheet (call it ##\mathbf{u}##), they are allowed to be off by a multiplicative factor such that
$$\mathbf{v}=\alpha\mathbf{u}$$
Is this the case?
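As a sketch of that comparison in Python (with a hypothetical eigenvector `u` standing in for a solution-sheet vector), you can normalize both vectors and fix an overall sign before comparing:

```python
import numpy as np

# Hypothetical eigenvector u (e.g. from a solution sheet) and v = alpha * u
u = np.array([1.0, 2.0, 2.0])
v = -3.0 * u

def canonical(x):
    """Normalize x and make its first nonzero entry positive."""
    x = x / np.linalg.norm(x)
    i = np.flatnonzero(np.abs(x) > 1e-12)[0]
    return x * np.sign(x[i])

print(np.allclose(canonical(u), canonical(v)))  # True: same eigenvector up to scale
```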
 
Ok, sorry for taking so long! The ratio of the largest singular value to the smallest for ##A## is 3 and for ##B## is 22. The matrices are $$
A =\begin{bmatrix}
-1.0231 &0.5571 & 0.9796\\
0.5571 & -0.5749 & 0.2227\\
0.9796 & 0.2227 & -0.2982
\end{bmatrix}\\
B =
\begin{bmatrix}
10.3513 &3.7790 & 6.7384\\
3.7790 &2.3295 & 2.5858\\
6.7384 & 2.5858 & 5.4928
\end{bmatrix}
$$

I should say, I am calculating the numeric values of these matrices from a very complicated set of equations. Since my math there looks correct, I was asking how robust typical built-in eigenvalue algorithms are. It is possible my matrices, and hence the eigenvalues, are wrong; I just thought of troubleshooting this since I've triple-checked everything else.
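For what it's worth, the posted matrices can be fed straight into a solver; a sketch in Python/NumPy (reducing to the standard problem via `np.linalg.solve` and printing each eigenpair's residual in ##Ax = \lambda Bx##):

```python
import numpy as np

# The matrices posted above
A = np.array([[-1.0231,  0.5571,  0.9796],
              [ 0.5571, -0.5749,  0.2227],
              [ 0.9796,  0.2227, -0.2982]])
B = np.array([[10.3513,  3.7790,  6.7384],
              [ 3.7790,  2.3295,  2.5858],
              [ 6.7384,  2.5858,  5.4928]])

w, V = np.linalg.eig(np.linalg.solve(B, A))
for lam, x in zip(w, V.T):
    print(lam, np.linalg.norm(A @ x - lam * (B @ x)))
```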
 
joshmccraney said:
I was asking you all how robust typical built-in algorithms are for computing eigenvalues.
They are very robust.
joshmccraney said:
It is possible my matrices, and hence the eigenvalues, are wrong. I just thought of troubleshooting this since I've triple checked everything else.
I think it would be more likely that you've set something up wrong rather than there being a problem with the algorithms.
 
Thanks for your response!
 
joshmccraney said:
Ok, sorry for taking so long! The ratio of the largest singular value to the smallest for ##A## is 3 and for ##B## is 22. The matrices are $$
A =\begin{bmatrix}
-1.0231 &0.5571 & 0.9796\\
0.5571 & -0.5749 & 0.2227\\
0.9796 & 0.2227 & -0.2982
\end{bmatrix}\\
B =
\begin{bmatrix}
10.3513 &3.7790 & 6.7384\\
3.7790 &2.3295 & 2.5858\\
6.7384 & 2.5858 & 5.4928
\end{bmatrix}
$$

I should say, I am calculating the numeric values of these matrices from a very complicated set of equations. Since my math there looks correct, I was asking how robust typical built-in eigenvalue algorithms are. It is possible my matrices, and hence the eigenvalues, are wrong; I just thought of troubleshooting this since I've triple-checked everything else.

I'm a bit confused now. What is your problem exactly?
I haven't checked your eigenvalues, but didn't you say there was a discrepancy between the two math programs?
What's the discrepancy?
 
