The subtle difference between matrices and linear operators

Summary: Symmetric matrices can be viewed as matrices of linear operators with respect to an orthonormal basis, which lets the spectral theorem be applied to prove they are diagonalizable. Specifying a basis in such a proof can feel awkward, but it makes the link between the matrix and the operator explicit. Linear operators and matrices are distinct kinds of objects, connected by a matrix-representation map that depends on a choice of bases: applying a linear transformation to each basis vector of the domain and expressing the results as linear combinations of the codomain basis yields the transformation's matrix. Conversely, a matrix can itself be regarded as a linear transformation between coordinate spaces, so no extra "viewing" step is strictly needed in that direction.
1230wc
For example, if I were to prove that all symmetric matrices are diagonalizable, may I say "view the symmetric matrix A as the matrix of a linear operator T wrt an orthonormal basis; then T is self-adjoint, hence diagonalizable by the spectral theorem, and so A is diagonalizable as well"?

Is it a little awkward to specify a basis in the proof? Are linear operators and matrices technically two different classes of objects that may be linked by some "matrix representation function" wrt a basis? Thanks!
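This is not a proof, of course, but here is a quick numerical sanity check of that chain of reasoning using NumPy's `eigh`; the matrix A below is just made-up example data:

[CODE=python]
import numpy as np

# A hypothetical symmetric matrix, used only as example data.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is NumPy's routine for symmetric/Hermitian matrices: it returns real
# eigenvalues and a matrix Q whose columns are orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)

# Spectral theorem in coordinates: Q^T A Q is diagonal, so A is
# diagonalized by an orthonormal basis of eigenvectors.
assert np.allclose(Q.T @ A @ Q, np.diag(eigvals))
assert np.allclose(Q.T @ Q, np.eye(3))  # Q is orthogonal
[/CODE]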
 
Let L be a linear transformation from vector space U to vector space V. Suppose ##\{u_1, u_2, \dots, u_n\}## is a basis for U and ##\{v_1, v_2, \dots, v_m\}## is a basis for V. Apply L to each ##u_i## in turn and write the result as a linear combination of the V basis; the coefficients give the i-th column of the matrix representation of L. (The bases do not have to be orthonormal. You just have to have some inner product defined on the space to talk about self-adjoint.)

But to go the other way, you don't have to say "view matrix A as a linear transformation". An m by n matrix is a linear transformation from ##\mathbb{R}^n## to ##\mathbb{R}^m##.
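For what it's worth, here is a minimal NumPy sketch of that recipe, assuming the basis vectors are handed to us as coordinate arrays in some ambient space; the function name matrix_of and the example map are made up purely for illustration:

[CODE=python]
import numpy as np

def matrix_of(L, U_basis, V_basis):
    """Matrix of the linear map L relative to the given bases.

    Column i holds the coordinates of L(u_i) in the V basis,
    exactly as described above.
    """
    V_mat = np.column_stack(V_basis)       # codomain basis vectors as columns
    cols = [np.linalg.solve(V_mat, L(u))   # coordinates of L(u_i) in the V basis
            for u in U_basis]
    return np.column_stack(cols)

# Example: L(x, y) = (x + y, x - y, 2y), with the standard bases of R^2 and R^3.
L = lambda u: np.array([u[0] + u[1], u[0] - u[1], 2.0 * u[1]])
U_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
V_basis = [np.eye(3)[:, j] for j in range(3)]

M = matrix_of(L, U_basis, V_basis)         # a 3-by-2 matrix
assert np.allclose(M @ np.array([2.0, 5.0]), L(np.array([2.0, 5.0])))
[/CODE]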
 
