The subtle difference between matrices and linear operators

SUMMARY

The discussion clarifies the relationship between symmetric matrices and linear operators, specifically highlighting that symmetric matrices can be viewed as linear operators with respect to an orthonormal basis. The proof of diagonalizability for symmetric matrices relies on the Spectral theorem, confirming that if a matrix is self-adjoint, it is diagonalizable. The conversation also addresses the distinction between matrices and linear operators, emphasizing that while they are linked, they represent different classes of objects in linear algebra.

PREREQUISITES
  • Understanding of symmetric matrices and their properties
  • Familiarity with linear operators and transformations
  • Knowledge of the Spectral theorem
  • Basic concepts of vector spaces and basis representation
NEXT STEPS
  • Study the Spectral theorem in detail
  • Explore the concept of linear transformations and their matrix representations
  • Learn about inner products and their role in defining self-adjoint operators
  • Investigate the differences between linear operators and matrices in various contexts
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in the theoretical foundations of matrices and linear transformations.

1230wc
For example, if I were to prove that all symmetric matrices are diagonalizable, may I say "view symmetric matrix A as the matrix of a linear operator T wrt an orthonormal basis. Then T is self-adjoint, so it is diagonalizable by the spectral theorem. Hence, A is diagonalizable as well."

Is it a little awkward to specify a basis in the proof? Are linear operators and matrices technically two different classes of objects that may be linked by some "matrix representation function" wrt a basis? Thanks!
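The argument in the question can be checked numerically: the spectral theorem says a real symmetric matrix factors as A = QDQ^T with Q orthogonal and D diagonal. A minimal sketch with numpy (the matrix A below is just a hypothetical example):

```python
import numpy as np

# A symmetric matrix (hypothetical example)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is designed for symmetric/Hermitian matrices: it returns real
# eigenvalues and an orthonormal set of eigenvectors as the columns of Q
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)

# Spectral theorem: A = Q D Q^T, with Q orthogonal (Q^T Q = I)
assert np.allclose(Q @ D @ Q.T, A)
assert np.allclose(Q.T @ Q, np.eye(2))
```

The columns of Q are exactly the orthonormal basis with respect to which the operator represented by A is diagonal.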
 
Let L be a linear transformation from vector space U to vector space V. If [itex]\{u_1, u_2, ..., u_n\}[/itex] is a basis for U and [itex]\{v_1, v_2, ..., v_m\}[/itex] is a basis for V, apply L to each [itex]u_i[/itex] in turn and write the result as a linear combination of the basis for V. The coefficients give the ith column of the matrix representation of L. (The bases do not have to be orthonormal. You just have to have some inner product defined on the space to talk about self-adjoint.)

But to go the other way, you don't have to say "view matrix A as a linear transformation". An m by n matrix is itself a linear transformation from vector space [itex]R^n[/itex] to [itex]R^m[/itex].
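The column-by-column construction described above can be sketched in numpy. The map L below is a hypothetical example from R^2 to R^3; with n = dim U = 2 and m = dim V = 3, the resulting matrix is m by n:

```python
import numpy as np

# A linear map L: R^2 -> R^3 (hypothetical example): L(x, y) = (x + y, 2x, y)
def L(u):
    x, y = u
    return np.array([x + y, 2 * x, y])

# Apply L to each standard basis vector of the domain; each image,
# written in the codomain's standard basis, is the ith column.
basis_U = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M = np.column_stack([L(u) for u in basis_U])  # 3x2 matrix: R^2 -> R^3

# The matrix reproduces the map on an arbitrary vector
v = np.array([3.0, -1.0])
assert np.allclose(M @ v, L(v))
```

Note the dimensions: the matrix of a map from an n-dimensional space to an m-dimensional space is m by n, so it acts on vectors in R^n and produces vectors in R^m.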
 
