Operating Time for Matrix Eigenvalue and Eigenvectors

Discussion Overview

The discussion revolves around the computational complexity of finding eigenvalues and eigenvectors of a matrix using Jordan matrix form. Participants seek to analyze the operating time, specifically in terms of Big-O notation, and explore methods for improving performance.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that calculating a general determinant or eigenvalues could take O(n!) time, where n is the rank of the matrix.
  • Others argue that using row operations to reduce the matrix to triangular form can lead to a more efficient O(n³) time complexity for calculating the determinant and eigenvalues.
  • There is a question about the definition of n in the context of Big-O notation, with some suggesting it corresponds to r (the rank of the matrix).
  • Participants discuss potential performance improvements, noting that optimizations may depend on the specific structure of the matrix.
  • One participant mentions that if the size of the matrix is known in advance, optimized routines could be created for better performance.
  • Another participant requests clarification on how suggested methods might affect the operation time for generating eigenvalues and eigenvectors.
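The gap between the two proposed complexities can be illustrated with a quick operation count (a rough sketch; constants and lower-order terms are ignored):

```python
import math

# Rough growth comparison for an n x n determinant:
# cofactor (Leibniz-style) expansion grows like n!,
# while row reduction to triangular form grows like n^3.
for n in (5, 10, 15):
    print(n, math.factorial(n), n**3)
```

Already at n = 15, the factorial count exceeds the cubic count by roughly nine orders of magnitude, which is why the distinction matters in the discussion below.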

Areas of Agreement / Disagreement

There is no consensus on the computational complexity of finding eigenvalues and eigenvectors, with participants presenting competing views on the efficiency of different methods and their respective Big-O notations.

Contextual Notes

Participants reference various methods for calculating determinants and eigenvalues, highlighting the differences in efficiency between inductive techniques and more optimal methods. The discussion includes assumptions about matrix structure that may affect performance but does not resolve these points.

Who May Find This Useful

This discussion may be useful for students and professionals interested in computational mathematics, numerical analysis, and those seeking to understand the complexities involved in matrix operations.

Cylab
Hello

Assume a matrix A (r*r), and calculate its eigenvalues and eigenvectors using the Jordan matrix form.
How should I find the operating time O(?) for it?

Will you please provide some analysis for it? Like how it should be calculated, etc.

Thanks
 
Cylab said:
Hello

Assume a matrix A (r*r), and calculate its eigenvalues and eigenvectors using the Jordan matrix form.
How should I find the operating time O(?) for it?

Will you please provide some analysis for it? Like how it should be calculated, etc.

Thanks

If you need to find a general determinant (or at least an expression for the eigenvalues), then this will be O(n!) (by O I mean Big-O) where n is the rank of the matrix. (I got this from wikipedia in case you are interested).

In terms of finding the eigenvalues, you will probably use a root finding algorithm, especially if you can't guarantee small enough n. The time for this varies depending on the algorithm and the function in question.

In terms of calculating the Big-O for the determinant, you can use the inductive definition to help you prove the bound. For the other parts I can't help you.
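As an illustration of the inductive (cofactor) calculation being described, here is a minimal Python sketch (not code from the thread): the recursion makes n calls on (n-1)*(n-1) minors, which is where the O(n!) growth comes from.

```python
def det_cofactor(m):
    """Determinant by cofactor expansion along the first row.

    The recursion solves n subproblems of size n-1, so the running
    time satisfies T(n) = n * T(n-1) + O(n^2), which is O(n!).
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: drop row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))  # -> -2
```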
 
Thanks Chiro for your analysis..
BTW, what is n in O(n!) ? Is it r*r = n?

Will there be any way of increasing performance by reducing Operation time O()?
 
Cylab said:
Thanks Chiro for your analysis..
BTW, what is n in O(n!) ? Is it r*r = n?

Will there be any way of increasing performance by reducing Operation time O()?

The O(n!) is the Big-O notation for computational complexity. The n is your r (the rank of the matrix).

As for increasing performance, the answer is "it depends".

If you know the size of your matrix in advance, you can create an optimized hard-coded routine for this. This kind of thing happens in many applications, including video games, rendering software, and scientific software.

Apart from that, you might be able to do further optimizations if the matrix in question has a particular structure. For example, if it's lower or upper triangular, the determinant is just the product of the diagonal elements, and that is a lot quicker than using the general formula. So if you "knew" things in advance, you could use this to your advantage.
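The triangular case mentioned above reduces to a single pass over the diagonal (a minimal sketch):

```python
def det_triangular(m):
    # For an upper or lower triangular matrix the determinant is
    # just the product of the diagonal entries: O(n) work instead
    # of the general-purpose algorithms discussed in this thread.
    d = 1
    for i in range(len(m)):
        d *= m[i][i]
    return d

print(det_triangular([[2, 5], [0, 3]]))  # -> 6
```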
 
chiro said:
If you need to find a general determinant (or at least an expression for the eigenvalues), then this will be O(n!) (by O I mean Big-O) where n is the rank of the matrix. (I got this from wikipedia in case you are interested).

You didn't give a page reference to where you found that in Wikipedia, but it doesn't look right.

The time to calculate a determinant by summing up all the permutations might well be O(n!), but that is a very inefficient way to do it. If you use row operations to reduce the matrix to an upper (or lower) triangular form with the same determinant, the determinant of the triangular matrix is just the product of the diagonal terms, and the whole process will take O(n³) operations.

Efficient algorithms to calculate all the eigenvalues and eigenvectors of a matrix also take O(n³) operations.
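A minimal sketch of the row-reduction approach described here (this is an illustration, not code from the thread; partial pivoting is added for numerical safety):

```python
def det_gauss(a):
    """Determinant via row reduction to upper triangular form.

    Elimination costs O(n^3); the determinant of the resulting
    triangular matrix is the product of its diagonal entries,
    with a sign flip for every row swap.
    """
    a = [row[:] for row in a]  # work on a copy of the matrix
    n = len(a)
    det = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k up.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0.0:
            return 0.0  # singular matrix
        if p != k:
            a[k], a[p] = a[p], a[k]
            det = -det
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
        det *= a[k][k]
    return det

print(det_gauss([[1.0, 2.0], [3.0, 4.0]]))  # -> -2.0
```

The triple loop (k, i, j) is what gives the O(n³) operation count AlephZero refers to.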
 
Can you tell me how your suggested methods affect the matrix's performance in generating eigenvalues and eigenvectors, or simply the operation time?
 
AlephZero said:
You didn't give a page reference to where you found that in Wikipedia, but it doesn't look right.

The time to calculate a determinant by summing up all the permutations might well be O(n!), but that is a very inefficient way to do it. If you use row operations to reduce the matrix to an upper (or lower) triangular form with the same determinant, the determinant of the triangular matrix is just the product of the diagonal terms, and the whole process will take O(n³) operations.

Efficient algorithms to calculate all the eigenvalues and eigenvectors of a matrix also take O(n³) operations.

Thanks for your attention. Please let me give an example.
E.g. a 3*3 matrix => it has to have 3 eigenvectors (right?): v_1, v_2, v_3, but the eigenvalues can be just λ_1, or λ_1 and λ_2, or λ_1, λ_2 and λ_3. So why is the operation time O(n³) with n = 3? Could you please give me a calculation hint? Thanks.
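For what it's worth, the point in this question can be checked numerically (using NumPy here, an assumption on my part since the thread names no tool): an n*n matrix always has n eigenvalues counted with multiplicity, and the O(n³) bound is in the matrix size n, regardless of how many of those eigenvalues are distinct.

```python
import numpy as np

# A 3x3 matrix with eigenvalue 2 of multiplicity two, and 5 once.
A = np.diag([2.0, 2.0, 5.0])
vals, vecs = np.linalg.eig(A)

print(sorted(vals))  # three eigenvalues, counted with multiplicity
print(vecs.shape)    # (3, 3): one eigenvector per column
```

The eigensolver does the same O(n³) work whether the three eigenvalues are distinct or repeated; multiplicity changes the answer, not the cost.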
 
AlephZero said:
You didn't give a page reference to where you found that in Wikipedia, but it doesn't look right.

The time to calculate a determinant by summing up all the permutations might well be O(n!), but that is a very inefficient way to do it. If you use row operations to reduce the matrix to an upper (or lower) triangular form with the same determinant, the determinant of the triangular matrix is just the product of the diagonal terms, and the whole process will take O(n³) operations.

Efficient algorithms to calculate all the eigenvalues and eigenvectors of a matrix also take O(n³) operations.

Yeah, I didn't look at the optimal methods, only the standard inductive-based calculation:
http://en.wikipedia.org/wiki/Determinant (there you can find a comparison of optimal with inductive techniques, where it compares O(n³) with O(n!)).

If the OP is interested in references for the techniques AlephZero is describing, see the link above for more information.
 
