When is a linear operator T on a finite-dimensional vector space diagonalizable?

jeff1evesque
Definitions: A linear operator T on a finite-dimensional vector space V is called diagonalizable if there is an ordered basis B for V such that [T]_B is a diagonal matrix. A square matrix A is called diagonalizable if L_A is diagonalizable.

We want to determine when a linear operator T on a finite-dimensional vector space V is diagonalizable and, if so, how to obtain an ordered basis B = {v_1, v_2, ... , v_n} for V such that [T]_B is a diagonal matrix. Note that, if D = [T]_B is a diagonal matrix, then for each vector v_j in B, we have

T(v_j) = \sum_{i=1}^{n} D_{ij} v_i = D_{jj} v_j = \lambda_j v_j

where \lambda_j = D_{jj}.

Questions: Could someone explain the following:
1. T(v_j) = \sum_{i=1}^{n} D_{ij} v_i

2. And maybe touch upon the other two equality relations in the line above.



Thanks,


JL
 


jeff1evesque said:
Definitions: A linear operator T on a finite-dimensional vector space V is called diagonalizable if there is an ordered basis B for V such that [T]_B is a diagonal matrix. A square matrix A is called diagonalizable if L_A is diagonalizable.

We want to determine when a linear operator T on a finite-dimensional vector space V is diagonalizable and, if so, how to obtain an ordered basis B = {v_1, v_2, ... , v_n} for V such that [T]_B is a diagonal matrix. Note that, if D = [T]_B is a diagonal matrix, then for each vector v_j in B, we have

T(v_j) = \sum_{i=1}^{n} D_{ij} v_i = D_{jj} v_j = \lambda_j v_j

where \lambda_j = D_{jj}.

Questions: Could someone explain the following:
1. T(v_j) = \sum_{i=1}^{n} D_{ij} v_i

2. And maybe touch upon the other two equality relations in the line above.

T is a linear operator mapping from V to V. So for each element v_j of the basis, you can represent T(v_j) uniquely as a linear combination of \{v_1,\ldots,v_n\}. The coefficients of that linear combination are precisely the entries of the j-th column of [T]_B.

T(v_j) = \sum_{i=1}^{n} D_{ij} v_i

expresses exactly what I wrote above: on the right hand side we are holding j (the column index) fixed, and summing over i (the row index).
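
To make the indexing concrete, here is a small 2×2 illustration (my own example, not from the original post). If

[T]_B = \begin{pmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{pmatrix}

then reading down the first column gives T(v_1) = D_{11} v_1 + D_{21} v_2, and reading down the second column gives T(v_2) = D_{12} v_1 + D_{22} v_2.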

Now, if the matrix is diagonal, at most one of the D_{ij}'s in a given column is nonzero, namely the diagonal entry D_{jj}, which is why the next equality holds:

\sum_{i=1}^{n} D_{ij} v_i = D_{jj}v_j

So now you have the fact that

T(v_j) = D_{jj} v_j

Since v_j \neq 0 (because it is an element of a basis), this equality says precisely that D_{jj} is an eigenvalue of T, and v_j is a corresponding eigenvector. Thus the suggestive \lambda_j in place of D_{jj} in the last equality.
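
As a concrete (hypothetical) diagonal case: if

[T]_B = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}

then T(v_1) = 3 v_1 and T(v_2) = v_2, so \lambda_1 = 3 and \lambda_2 = 1 are eigenvalues of T, with v_1 and v_2 as the corresponding eigenvectors.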
 


A linear operator L (and its associated matrix) is diagonalizable if and only if there is a basis for the space consisting entirely of eigenvectors of L. In that case we can 'diagonalize' L via D = P^{-1}LP, where P is the matrix having the eigenvectors of L as columns and D is the diagonal matrix having the corresponding eigenvalues of L on its main diagonal.
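
For a worked example (numbers of my own choosing, not from the thread): take

L = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},

which has eigenvalues 3 and 1 with eigenvectors (1, 1)^T and (1, -1)^T. Putting the eigenvectors in the columns of P gives

P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad P^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},

and indeed

P^{-1}LP = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix} = D.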
 


Thanks for the clarification. Now I have to try to commit this to memory.

JL
 
