Proof that (x^t)Ax>0 if Eigenvalues of Matrix A > 0

In summary, the conversation discusses whether a matrix A whose eigenvalues are all greater than 0 must satisfy (x^t)Ax > 0 for every nonzero vector x. The discussion considers the case where A is diagonalizable and the condition that (A+A*)/2 be positive definite. It is then pointed out that the claim is not true in general, using a specific non-symmetric matrix with positive eigenvalues as a counterexample.
  • #1
megaman
I am looking for a rather simple proof that if the matrix A has eigenvalues > 0, then (x^t)Ax > 0 for any nonzero vector x.

My first thought was that if the eigenvalues are bigger than 0, then (x^t)Ax = (x^t)"eigenvalue"x = "eigenvalue"(x^t)x > 0, since x is nonzero and the eigenvalue is bigger than 0.

Is this proof good enough? I am a little unsure, because I think I have not proven it for all x, just for the eigenvectors.
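The eigenvector step can be checked numerically. A minimal numpy sketch, with an arbitrary symmetric matrix chosen only for illustration:

[code]
import numpy as np

# Illustrative symmetric matrix with positive eigenvalues (my own choice, not from the thread).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eigh(A)  # eigh handles symmetric matrices
lam = eigvals[0]                      # an eigenvalue of A
v = eigvecs[:, 0]                     # the corresponding eigenvector

# For an eigenvector, x^t A x = lambda * x^t x.
print(v @ A @ v)        # left-hand side
print(lam * (v @ v))    # right-hand side; the two values agree
[/code]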
 
  • #2
I'm not sure what you mean by '(x^t)"eigenvalue"x'. Which eigenvalue?

It is true that if A is diagonalizable, then it is similar to a diagonal matrix having the eigenvalues on the diagonal.

For example, if A itself is
[tex]A= \begin{bmatrix}a_1 & 0 \\ 0 & a_2\end{bmatrix}[/tex]
where [itex]a_1[/itex] and [itex]a_2[/itex] are both positive, we have
[tex]x^*Ax= \begin{bmatrix}x_1 & x_2 \end{bmatrix}\begin{bmatrix}a_1 & 0 \\ 0 & a_2\end{bmatrix} \begin{bmatrix}x_1 \\ x_2\end{bmatrix}[/tex]
[tex]= a_1x_1^2+ a_2x_2^2[/tex]
which is positive because it is the sum of two positive numbers (or one positive number and 0).
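A quick numerical check of that diagonal computation, as a minimal numpy sketch (the diagonal entries and the vector are arbitrary choices):

[code]
import numpy as np

# Diagonal matrix with positive entries, as in the 2x2 example above (values are arbitrary).
a1, a2 = 4.0, 7.0
A = np.diag([a1, a2])

x = np.array([1.5, -2.0])             # any nonzero real vector
print(x @ A @ x)                      # x^t A x
print(a1 * x[0]**2 + a2 * x[1]**2)    # a1*x1^2 + a2*x2^2, the same value
[/code]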

But not all matrices are diagonalizable.
 
  • #3
You'll have a hard time proving this statement because it's not true.

Consider the following matrix:

[tex]
A= \begin{bmatrix}9 & 5.5 \\ 1 & 1\end{bmatrix}[/tex]

The eigenvalues of A are both positive (verify).

Let x = (1, -3.1)^T

Then
[tex] x^*Ax= \begin{bmatrix} 1 & -3.1 \end{bmatrix}\begin{bmatrix}9 & 5.5 \\ 1 & 1\end{bmatrix} \begin{bmatrix}1 \\ -3.1\end{bmatrix} = -1.54
[/tex]

The relationship you describe is true a) if A is symmetric or b) if (A+A*)/2 is positive definite.
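Here is a quick numerical check of this counterexample, as a minimal numpy sketch (the matrix and vector are the ones given above; the symmetric-part check at the end is an extra illustration):

[code]
import numpy as np

A = np.array([[9.0, 5.5],
              [1.0, 1.0]])

print(np.linalg.eigvals(A))   # roughly 9.64 and 0.36: both eigenvalues are positive

x = np.array([1.0, -3.1])
print(x @ A @ x)              # -1.54: x^t A x is negative despite the positive eigenvalues

# The sign of x^t A x is governed by the symmetric part (A + A^t)/2:
S = (A + A.T) / 2
print(np.linalg.eigvalsh(S))  # one eigenvalue is negative, so S is not positive definite
[/code]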
 
  • #4
hgfalling said:
You'll have a hard time proving this statement because it's not true.

Well done.
 
  • #5


Your proof is on the right track for eigenvectors, but it does not yet cover every x. You have correctly shown that if x is an eigenvector of A with eigenvalue λ > 0, then (x^t)Ax = λ(x^t)x > 0, since λ and (x^t)x are both positive.

To extend this to all vectors, you would like to write an arbitrary x as a linear combination of eigenvectors. That already requires A to be diagonalizable, so that its eigenvectors form a basis of the space.

So suppose A is diagonalizable and let x be any nonzero vector. Write x = c1v1 + c2v2 + ... + cnvn, where v1, ..., vn are eigenvectors of A with eigenvalues λ1, ..., λn > 0. Substituting this into (x^t)Ax gives

(x^t)Ax = (c1v1 + ... + cnvn)^t A (c1v1 + ... + cnvn) = sum over i, j of ci cj λj (vi^t vj)

The diagonal terms (i = j) are ci^2 λi (vi^t vi), which are nonnegative, but the cross terms ci cj λj (vi^t vj) do not vanish in general. They only disappear when the eigenvectors are mutually orthogonal, and that is exactly what can fail for a non-symmetric matrix; hgfalling's counterexample above shows the sum can indeed come out negative.

If A is symmetric, however, the Spectral Theorem gives an orthonormal basis of eigenvectors, so vi^t vj = 0 for i ≠ j and

(x^t)Ax = c1^2 λ1 + c2^2 λ2 + ... + cn^2 λn

which is a sum of nonnegative terms with at least one strictly positive, hence positive.

In summary, your proof was correct for eigenvectors, and it extends to every nonzero x when A is symmetric (a symmetric matrix with positive eigenvalues is positive definite), but not for a general matrix with positive eigenvalues.
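A minimal numpy sketch of the symmetric case (the matrix and the vector below are arbitrary illustrative choices):

[code]
import numpy as np

# Symmetric matrix with positive eigenvalues (arbitrary illustrative choice).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigvals, V = np.linalg.eigh(A)   # columns of V form an orthonormal basis of eigenvectors

x = np.array([2.0, -1.0])
c = V.T @ x                      # coordinates of x in the eigenvector basis

# With orthonormal eigenvectors the cross terms vanish, so
# x^t A x reduces to the sum of c_i^2 * lambda_i, a sum of positive terms.
print(x @ A @ x)
print(np.sum(c**2 * eigvals))    # same value
[/code]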
 

1. What is the significance of the eigenvalues of a matrix in determining if (x^t)Ax>0?

The eigenvalues of a matrix give the scaling factors of the corresponding eigenvectors under the transformation represented by the matrix. For a symmetric matrix A, (x^t)Ax > 0 holds for every nonzero x exactly when all eigenvalues of A are greater than 0; this is the definition of positive definiteness. For a non-symmetric matrix, positive eigenvalues alone do not guarantee (x^t)Ax > 0, as the counterexample in this thread shows.

2. How do you prove that (x^t)Ax>0 if all eigenvalues of matrix A are positive?

If A is a real symmetric matrix, we can use the Spectral Theorem, which states that A has real eigenvalues and an orthonormal set of eigenvectors. We can therefore write A = UDU^t, where U is the orthogonal matrix whose columns are eigenvectors of A and D is the diagonal matrix of eigenvalues. Then (x^t)Ax = (x^t)UDU^t x = (U^t x)^t D (U^t x), which is a sum of terms of the form λ_i y_i^2 with y = U^t x. Since every λ_i > 0 and y is nonzero whenever x is nonzero, this sum is positive, proving (x^t)Ax > 0. Without the symmetry assumption this argument does not go through.
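A minimal numpy sketch of this diagonalization argument (the matrix and vector are arbitrary illustrative choices):

[code]
import numpy as np

# Arbitrary real symmetric matrix with positive eigenvalues.
A = np.array([[5.0, 2.0],
              [2.0, 2.0]])

eigvals, U = np.linalg.eigh(A)   # A = U D U^t with U orthogonal
D = np.diag(eigvals)

x = np.array([1.0, 4.0])
y = U.T @ x                      # change to eigenvector coordinates

# x^t A x = (U^t x)^t D (U^t x) = sum of lambda_i * y_i^2 > 0
print(x @ A @ x)
print(y @ D @ y)                 # same value
[/code]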

3. Can (x^t)Ax be greater than 0 if matrix A has negative eigenvalues?

If A is symmetric and has a negative eigenvalue λ with eigenvector v, then (v^t)Av = λ(v^t)v < 0, so (x^t)Ax > 0 cannot hold for all nonzero x, and A is not positive definite. The quadratic form can still be positive for some particular choices of x, just not for every x.
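A minimal numpy sketch of this situation (the indefinite matrix and the test vectors are arbitrary illustrative choices):

[code]
import numpy as np

# Symmetric matrix with one positive and one negative eigenvalue (illustrative choice).
A = np.array([[1.0, 0.0],
              [0.0, -2.0]])

v = np.array([0.0, 1.0])   # eigenvector for the negative eigenvalue
print(v @ A @ v)           # -2.0 < 0, so A is not positive definite

x = np.array([1.0, 0.5])
print(x @ A @ x)           # 0.5 > 0: the form can still be positive for some vectors
[/code]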

4. Are there any exceptions to the rule that (x^t)Ax>0 if all eigenvalues of A are positive?

Yes, there are caveats. If A is singular, it has 0 as an eigenvalue, so the hypothesis "all eigenvalues positive" is not met, and (x^t)Ax = 0 for the corresponding eigenvector, so strict positivity fails. More importantly, if A is not symmetric, the Spectral Theorem cannot be applied and positive eigenvalues do not guarantee (x^t)Ax > 0, as the 2x2 counterexample earlier in this thread demonstrates.
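A minimal numpy sketch of the singular case (the matrix is an arbitrary illustrative choice):

[code]
import numpy as np

# Singular symmetric matrix: one eigenvalue is 0 (illustrative choice).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.eigvalsh(A))   # eigenvalues 0 and 2

v = np.array([1.0, -1.0])      # eigenvector for the zero eigenvalue
print(v @ A @ v)               # 0.0: strict positivity fails even with no negative eigenvalue
[/code]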

5. How is the relationship between (x^t)Ax and the eigenvalues of A useful in real-world applications?

The relationship between (x^t)Ax and the eigenvalues of A is useful in many areas of science, particularly in linear algebra and optimization. In linear algebra, it is how positive definiteness of a symmetric matrix is characterized. In optimization, the eigenvalues of the Hessian matrix are used to classify critical points of a function as minima, maxima, or saddle points. It is also used in physics and engineering to analyze the stability of dynamical systems.
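As a small illustration of the optimization use, a minimal numpy sketch with a hypothetical quadratic function of my own choosing:

[code]
import numpy as np

# Hessian of f(x, y) = x^2 + x*y + 2*y^2 (constant for a quadratic function).
H = np.array([[2.0, 1.0],
              [1.0, 4.0]])

# All eigenvalues positive -> H is positive definite -> the critical point at the
# origin is a minimum of f.
print(np.linalg.eigvalsh(H))
[/code]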
