Prove Symmetric Matrices Thm: A=0 or Skew Symmetric

  • Context: Graduate 
  • Thread starter: blue_m
  • Tags: Symmetric
SUMMARY

This discussion focuses on proving two theorems about symmetric matrices. The first states that a symmetric matrix A satisfies x^T A x = 0 for all x in R^n if and only if A = 0. The second asserts that x^T A x = 0 for all x in R^n if and only if A is skew symmetric. The suggested proof of the second theorem uses the property A^T = -A together with the fact that the scalar x^T A x equals its own transpose; the first is handled by analyzing the quadratic polynomial formed by the expression.

PREREQUISITES
  • Understanding of symmetric matrices and their properties
  • Familiarity with quadratic forms and their implications
  • Knowledge of skew symmetric matrices and their characteristics
  • Basic linear algebra concepts, including diagonalization and Jordan blocks
NEXT STEPS
  • Study the properties of symmetric and skew symmetric matrices in depth
  • Learn about quadratic forms and their applications in linear algebra
  • Explore the concept of diagonalization and Jordan canonical form
  • Investigate the implications of matrix transposition and its effects on matrix properties
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in advanced matrix theory and its applications in various fields.

blue_m
I need to prove the following.
1. A is a symmetric matrix and [itex]x^TAx=0[/itex] for all [itex]x \in \mathbb{R}^n[/itex] if and only if [itex]A=0[/itex].
2. [itex]x^TAx=0[/itex] for all [itex]x \in \mathbb{R}^n[/itex] if and only if [itex]A[/itex] is skew symmetric.
 
I can suggest, for the second one, that you transpose both sides: since [itex]x^TAx[/itex] is a scalar, it equals its own transpose, so [itex]x^TAx = (x^TAx)^T = x^TA^Tx[/itex]. If [itex]A[/itex] is skew symmetric, use the [itex]A^T = -A[/itex] property to get
[itex]x^TAx = -(x^TAx)[/itex],
so [itex]2\,x^TAx = 0[/itex] and the quadratic form vanishes for every [itex]x[/itex]. For the converse, note that [itex]x^TAx = x^T \tfrac{1}{2}(A+A^T)\, x[/itex], so the symmetric part of [itex]A[/itex] has an identically zero quadratic form; by the first theorem it is the zero matrix, which means [itex]A^T = -A[/itex].
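Written out, the skew-symmetric direction is a two-line computation; the only facts used are that a 1×1 matrix equals its own transpose and that [itex]A^T = -A[/itex]:

```latex
% A 1x1 matrix equals its own transpose:
x^{\mathsf T} A x = \bigl(x^{\mathsf T} A x\bigr)^{\mathsf T}
                  = x^{\mathsf T} A^{\mathsf T} x
                  = -\,x^{\mathsf T} A x \quad\text{(using } A^{\mathsf T} = -A\text{)},

% hence
2\, x^{\mathsf T} A x = 0
\quad\Longrightarrow\quad
x^{\mathsf T} A x = 0 \quad\text{for all } x \in \mathbb{R}^n.
```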
 
For the first one, sufficiency is obvious. For necessity, write out the explicit quadratic polynomial [itex]x^TAx = \sum_{i,j} a_{ij} x_i x_j[/itex] and plug in suitable values of [itex]x[/itex]; the only polynomial that vanishes for every [itex]x[/itex] is the zero polynomial, which forces every coefficient to vanish. Alternatively, write [itex]A = V\Lambda V^{-1}[/itex] in Jordan form, where [itex]\Lambda[/itex] may a priori contain Jordan blocks of size [itex]k \times k[/itex], and use the structure of those blocks; since [itex]A[/itex] is symmetric, the spectral theorem in fact makes [itex]\Lambda[/itex] diagonal.
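The "plug in values of x" route can be made concrete with standard basis vectors (a sketch; [itex]e_i[/itex] denotes the i-th standard basis vector of [itex]\mathbb{R}^n[/itex]):

```latex
% x = e_i isolates a diagonal entry:
e_i^{\mathsf T} A\, e_i = a_{ii} = 0.

% x = e_i + e_j (with i \neq j) then isolates an off-diagonal pair:
(e_i + e_j)^{\mathsf T} A\, (e_i + e_j)
    = a_{ii} + a_{ij} + a_{ji} + a_{jj}
    = a_{ij} + a_{ji} = 0.

% Symmetry gives a_{ij} = a_{ji}, hence 2 a_{ij} = 0 and A = 0.
```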
 
