A linear operator T on a finite-dimensional vector space


Discussion Overview

The discussion revolves around the concept of diagonalizability of linear operators on finite-dimensional vector spaces. Participants explore definitions, properties, and implications of diagonalization, including the relationship between linear operators and their matrix representations in specific bases.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • JL defines a linear operator T and states that it is diagonalizable if there exists an ordered basis B such that [T]_B is a diagonal matrix.
  • JL poses questions regarding the expression T(v_j) = Σ_{i=1}^{n} D_{ij} v_i and seeks clarification on the equality relations presented.
  • Another participant explains that T(v_j) can be represented as a linear combination of basis vectors, with coefficients corresponding to the j'th column of the matrix representation [T]_B.
  • This participant notes that in a diagonal matrix, only one entry in each column is non-zero, leading to the conclusion that T(v_j) = D_{jj}v_j, indicating that D_{jj} is an eigenvalue of T with v_j as the corresponding eigenvector.
  • A further contribution states that a linear operator is diagonalizable if there exists a basis consisting entirely of its eigenvectors, and describes the process of diagonalization using a transformation matrix P.

Areas of Agreement / Disagreement

Participants generally agree on the definitions and implications of diagonalizability, but there is no explicit consensus on the broader implications or applications of these concepts, as the discussion remains exploratory.

Contextual Notes

The discussion does not resolve the potential complexities involved in diagonalization, such as the conditions under which a linear operator may not be diagonalizable or the implications of having multiple eigenvalues.

Who May Find This Useful

Readers interested in linear algebra, particularly those studying properties of linear operators and their matrix representations, may find this discussion beneficial.

jeff1evesque
Definitions: A linear operator T on a finite-dimensional vector space V is called diagonalizable if there is an ordered basis B for V such that [tex][T]_B[/tex] is a diagonal matrix. A square matrix A is called diagonalizable if [tex]L_A[/tex] is diagonalizable.

We want to determine when a linear operator T on a finite-dimensional vector space V is diagonalizable and, if so, how to obtain an ordered basis B = [tex]\{v_1, v_2, \ldots, v_n\}[/tex] for V such that [tex][T]_B[/tex] is a diagonal matrix. Note that, if D = [tex][T]_B[/tex] is a diagonal matrix, then for each vector [tex]v_j[/tex] in B, we have

[tex]T(v_j) = \sum_{i=1}^{n} D_{ij} v_i = D_{jj} v_j = \lambda_j v_j,[/tex]
where [tex]\lambda_j = D_{jj}[/tex].

Questions: Could someone explain the following:
1. [tex]T(v_j) = \sum_{i=1}^{n} D_{ij} v_i[/tex]

2. And maybe touch upon the other two equality relations in the line above.



Thanks,


JL
 


jeff1evesque said:
Definitions: A linear operator T on a finite-dimensional vector space V is called diagonalizable if there is an ordered basis B for V such that [tex][T]_B[/tex] is a diagonal matrix. A square matrix A is called diagonalizable if [tex]L_A[/tex] is diagonalizable.

We want to determine when a linear operator T on a finite-dimensional vector space V is diagonalizable and, if so, how to obtain an ordered basis B = [tex]\{v_1, v_2, \ldots, v_n\}[/tex] for V such that [tex][T]_B[/tex] is a diagonal matrix. Note that, if D = [tex][T]_B[/tex] is a diagonal matrix, then for each vector [tex]v_j[/tex] in B, we have

[tex]T(v_j) = \sum_{i=1}^{n} D_{ij} v_i = D_{jj} v_j = \lambda_j v_j,[/tex]
where [tex]\lambda_j = D_{jj}[/tex].

Questions: Could someone explain the following:
1. [tex]T(v_j) = \sum_{i=1}^{n} D_{ij} v_i[/tex]

2. And maybe touch upon the other two equality relations in the line above.

T is a linear operator mapping from V to V. So for each element [tex]v_j[/tex] of the basis, you can represent [tex]T(v_j)[/tex] uniquely as a linear combination of [tex]\{v_1,\ldots,v_n\}[/tex]. The coefficients of that linear combination are precisely the elements of the j'th column of [tex][T]_B[/tex].

[tex]T(v_j) = \sum_{i=1}^{n} D_{ij} v_i[/tex]

expresses exactly what I wrote above: on the right hand side we are holding j (the column index) fixed, and summing over i (the row index).
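As a quick numeric sanity check (an illustrative sketch, not from the thread; the matrix values are arbitrary): multiplying a matrix by the j'th standard basis vector picks out exactly its j'th column, which is what the sum above expresses in coordinates.

```python
import numpy as np

# A hypothetical 3x3 matrix standing in for [T]_B (any values work)
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

j = 1                # pick the second basis vector (0-indexed)
e_j = np.zeros(3)
e_j[j] = 1.0         # coordinate vector of v_j relative to the basis B

# T(v_j) in coordinates: M @ e_j = sum_i M[i, j] * e_i = j'th column of M
assert np.allclose(M @ e_j, M[:, j])
```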

Now, if the matrix is diagonal, only one of the [tex]D_{ij}[/tex]'s in a given column is nonzero, which is why the next equality is true:

[tex]\sum_{i=1}^{n} D_{ij} v_i = D_{jj}v_j[/tex]

So now you have the fact that

[tex]T(v_j) = D_{jj} v_j[/tex]

Since [tex]v_j \neq 0[/tex] (because it is an element of a basis), this equality says precisely that [tex]D_{jj}[/tex] is an eigenvalue of T, and [tex]v_j[/tex] is a corresponding eigenvector. Thus the suggestive [tex]\lambda_j[/tex] in place of [tex]D_{jj}[/tex] in the last equality.
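The diagonal case can be sketched numerically as well (the eigenvalues 2, -1, 5 here are arbitrary choices for illustration): applying a diagonal matrix to each coordinate basis vector just scales it by the corresponding diagonal entry.

```python
import numpy as np

# A hypothetical diagonal [T]_B with eigenvalues 2, -1, 5 on the diagonal
D = np.diag([2.0, -1.0, 5.0])

for j in range(3):
    e_j = np.zeros(3)
    e_j[j] = 1.0     # coordinate vector of v_j relative to the basis B
    # T(v_j) = D_{jj} v_j: D scales v_j by its eigenvalue, nothing more
    assert np.allclose(D @ e_j, D[j, j] * e_j)
```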
 


A linear operator, L, (and its associated matrix) is diagonalizable if and only if there is a basis for the space consisting entirely of eigenvectors of L. In that case we can 'diagonalize' L by [tex]D = P^{-1}LP[/tex], where P is the matrix having the eigenvectors of L as columns and D is the diagonal matrix having the eigenvalues of L on its main diagonal.
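A minimal sketch of that recipe with NumPy (the matrix A below is an arbitrary diagonalizable example chosen for illustration): `np.linalg.eig` returns the eigenvalues together with a matrix P whose columns are eigenvectors, and conjugating by P recovers the diagonal matrix.

```python
import numpy as np

# An arbitrary diagonalizable matrix (its eigenvalues 2 and 5 are distinct,
# which guarantees a basis of eigenvectors exists)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P     # D = P^{-1} A P

# D is diagonal, with the eigenvalues of A on its main diagonal
assert np.allclose(D, np.diag(eigvals))
```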
 


Thanks for the clarification. Now I have to try to commit this to memory.

JL
 
