Operating Time for Matrix Eigenvalue and Eigenvectors

In summary: expanding a general determinant by cofactors takes O(n!) time, but reducing the matrix to triangular form with row operations cuts this to O(n^3), and efficient algorithms for computing all the eigenvalues and eigenvectors of an n×n matrix also run in O(n^3).
  • #1
Cylab
Hello

Assume a matrix A (r×r), whose eigenvalues and eigenvectors are computed using the Jordan normal form.
How should I find the operating time O(?) for this?

Could you please provide some analysis of it, e.g. how it should be calculated?

Thanks
 
  • #2
Cylab said:
Hello

Assume a matrix A (r×r), whose eigenvalues and eigenvectors are computed using the Jordan normal form.
How should I find the operating time O(?) for this?

Could you please provide some analysis of it, e.g. how it should be calculated?

Thanks

If you need to find a general determinant (or at least an expression for the eigenvalues) by cofactor expansion, then this will be O(n!) (by O I mean Big-O), where n is the order of the matrix. (I got this from Wikipedia, in case you are interested.)

In terms of finding the eigenvalues, you will probably use a root-finding algorithm, especially if you can't guarantee a small enough n. The time for this varies depending on the algorithm and the function in question.

In terms of calculating the Big-O for the determinant, you can use the inductive (cofactor-expansion) definition to help you prove the Big-O for that. For the other parts I can't help you.
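
To make that growth concrete, here is a minimal sketch (my own, not code from the thread) of the cofactor-expansion determinant in Python: each call on an n×n matrix recurses into n minors of size (n-1)×(n-1), which is exactly where the n! term comes from.

Code:
# Hedged sketch: Laplace (cofactor) expansion along the first row.
# Each call on an n x n matrix makes n recursive calls on (n-1) x (n-1)
# minors, so the running time grows like n! -- fine for tiny n only.
def det_cofactor(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        sign = -1.0 if j % 2 else 1.0
        total += sign * A[0][j] * det_cofactor(minor)
    return total

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(det_cofactor(A))  # 8.0 for this 3x3 example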
 
  • #3
Thanks, Chiro, for your analysis.
BTW, what is n in O(n!)? Is it n = r*r?

Is there any way of increasing performance by reducing the operation time O()?
 
  • #4
Cylab said:
Thanks, Chiro, for your analysis.
BTW, what is n in O(n!)? Is it n = r*r?

Is there any way of increasing performance by reducing the operation time O()?

The O(n!) is Big-O notation for computational complexity. The n is your r (the order of the matrix), not r*r.

As for increasing performance, the answer is "it depends".

If you know the size of your matrix in advance, you can create an optimized hard-coded routine for it. This kind of thing happens in many applications, including video games, rendering software, and scientific software.

Apart from that, you might be able to do further optimizations if the matrix in question has a particular structure. For example, if it is lower or upper triangular, the determinant is just the product of the diagonal elements, which is a lot quicker than using the general formula. So if you "knew" things in advance, you could use this to your advantage.
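
As a hedged illustration of that triangular shortcut (my own sketch, not code from the thread), the determinant of a triangular matrix is just the product of its diagonal, an O(n) loop:

Code:
import numpy as np

# Sketch: for a triangular matrix the determinant is the product of the
# diagonal entries -- an O(n) loop instead of a general determinant routine.
def det_triangular(T):
    T = np.asarray(T, dtype=float)
    return float(np.prod(np.diag(T)))

U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 0.5]])
print(det_triangular(U))   # 3.0
print(np.linalg.det(U))    # the general routine agrees, up to rounding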
 
  • #5
chiro said:
If you need to find a general determinant (or at least an expression for the eigenvalues) by cofactor expansion, then this will be O(n!) (by O I mean Big-O), where n is the order of the matrix. (I got this from Wikipedia, in case you are interested.)

You didn't give a page reference to where you found that in Wikipedia, but it doesn't look right.

The time to calculate a determinant by summing up all the permutations might well be O(n!), but that is a very inefficient way to do it. If you use row operations to reduce the matrix to an upper (or lower) triangular form with the same determinant, the determinant of the triangular matrix is just the product of the diagonal terms, and the whole process will take O(n^3) operations.
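
A hedged sketch (mine, not AlephZero's code) of that row-reduction approach: eliminate below each pivot, track row swaps, and multiply the diagonal of the resulting triangular matrix. Working over each pivot column, each row below it, and each entry in that row is what gives the O(n^3) cost.

Code:
import numpy as np

# Sketch: Gaussian elimination with partial pivoting.  Two explicit loops
# plus a vectorized row update do O(n^3) arithmetic in total; the determinant
# is the product of the pivots times (-1)^(number of row swaps).
def det_by_elimination(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + int(np.argmax(np.abs(U[k:, k])))   # partial pivoting
        if U[p, k] == 0.0:
            return 0.0                             # singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign
        for i in range(k + 1, n):
            factor = U[i, k] / U[k, k]
            U[i, k:] -= factor * U[k, k:]
    return sign * float(np.prod(np.diag(U)))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_by_elimination(A))   # 8.0
print(np.linalg.det(A))        # same value from the library routine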

Efficient algorithms to calculate all the eigenvalues and vectors of a matrix also take O(n^3) operations.
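
For completeness, a small usage sketch (my own, assuming NumPy's LAPACK-backed solver) of computing all eigenpairs; such routines reduce the matrix to Hessenberg form and run QR iteration, with total cost on the order of n^3 operations:

Code:
import numpy as np

# Sketch: all eigenvalues and eigenvectors from a LAPACK-backed routine.
# The reduction to Hessenberg form plus QR iteration costs O(n^3) overall.
n = 4
A = np.random.default_rng(0).standard_normal((n, n))
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)
# Each column of `eigenvectors` satisfies A v = lambda v (up to rounding):
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))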
 
  • #6
Can you tell me how the things you suggested affect the performance of generating a matrix's eigenvalues and eigenvectors, or simply the operation time?
 
  • #7
AlephZero said:
You didn't give a page reference to where you found that in Wikipedia, but it doesn't look right.

The time to calculate a determinant by summing up all the permutations might well be O(n!), but that is a very inefficient way to do it. If you use row operations to reduce the matrix to an upper (or lower) triangular form with the same determinant, the determinant of the triangular matrix is just the product of the diagonal terms, and the whole process will take O(n^3) operations.

Efficient algorithms to calculate all the eigenvalues and vectors of a matrix also take O(n^3) operations.

Thanks for your attention. Please let me give an example.
E.g. for a 3×3 matrix there have to be 3 eigenvectors (right?): v_1, v_2, v_3, but the eigenvalues can be λ_1 alone, or λ_1 and λ_2, or λ_1, λ_2 and λ_3. So why is the operation time O(n^3) with n = 3? Could you please give me a calculation hint? Thanks.
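
As a concrete, hedged illustration of that 3×3 case (my own example, not part of the original exchange): even with a repeated eigenvalue the solver returns three eigenpairs, and the n in O(n^3) is the matrix dimension (3 here), not the number of distinct eigenvalues.

Code:
import numpy as np

# Sketch: a 3x3 matrix with eigenvalues 2, 2 and 5 -- only two distinct
# values, but the solver still returns three eigenvalue/eigenvector pairs.
# The n in O(n^3) is the dimension of the matrix (3 here).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # [2. 2. 5.]
print(eigenvectors)   # columns are v_1, v_2, v_3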
 
  • #8
AlephZero said:
You didn't give a page reference to where you found that in Wikipedia, but it doesn't look right.

The time to calculate a determinant by summing up all the permutations might well be O(n!), but that is a very inefficient way to do it. If you use row operations to reduce the matrix to an upper (or lower) triangular form with the same determinant, the determinant of the triangular matrix is just the product of the diagonal terms, and the whole process will take O(n^3) operations.

Efficient algorithms to calculate all the eigenvalues and vectors of a matrix also take O(n^3) operations.

Yeah, I didn't look at the optimal methods, only the standard inductive (cofactor-expansion) calculation:
http://en.wikipedia.org/wiki/Determinant (it includes a comparison of the optimal and inductive techniques, i.e. O(n^3) versus O(n!)).

If the OP is interested in references to the techniques AlephZero is describing, the link above has more information.
 

1. What is "Operating Time for Matrix"?

"Operating Time for Matrix" is a measure used in matrix operations to determine the efficiency and speed of a particular algorithm or method for solving a matrix problem. It refers to the amount of time it takes for a computer or other device to complete a specific matrix operation or set of operations.

2. How is "Operating Time for Matrix" typically measured?

The most common unit of measurement for "Operating Time for Matrix" is seconds. However, it can also be measured in milliseconds, microseconds, or even nanoseconds, depending on the complexity and speed of the algorithm being used.
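
As a hedged sketch of how such a timing might be taken in practice (my own example; the FAQ does not prescribe a tool), one can wrap the eigensolver call in a wall-clock timer:

Code:
import time
import numpy as np

# Sketch: measuring the wall-clock "operating time" of one eigendecomposition.
n = 500
A = np.random.default_rng(1).standard_normal((n, n))

start = time.perf_counter()
np.linalg.eig(A)
elapsed = time.perf_counter() - start
print(f"eig on a {n}x{n} matrix took {elapsed:.3f} s ({elapsed * 1e3:.1f} ms)")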

3. What factors can affect the "Operating Time for Matrix"?

The "Operating Time for Matrix" can be affected by a variety of factors, including the size of the matrix, the complexity of the algorithm being used, the processing speed of the device, and any other tasks or processes running on the device at the same time.

4. Why is "Operating Time for Matrix" important in scientific research?

The "Operating Time for Matrix" is important in scientific research because it can help researchers determine the most efficient and effective methods for solving matrix problems. It can also be used to compare different algorithms and identify areas for improvement.

5. Are there any limitations to using "Operating Time for Matrix" as a measure of efficiency?

Yes, there are some limitations to using "Operating Time for Matrix" as a measure of efficiency. For example, it may not take into account the complexity of the underlying algorithm or the quality of the data being used. Additionally, it may not be a reliable measure for extremely large or small matrices, as the processing time may be affected by the limitations of the device being used.
