What is the equation for representing a linear operator in terms of a matrix?

Summary:
Every linear operator A can be represented by a matrix A_{ij}, where the action of A on a basis vector e_i is expressed as A(e_i) = ∑_j A_{ji} e_j. The discussion emphasizes that A(e_i) is a vector formed by a linear combination of basis vectors, with coefficients corresponding to the operator's action on e_i. The use of the Einstein summation convention simplifies notation by eliminating explicit summation symbols. The relationship A_{ij} = (A u_j)_i illustrates how the operator's matrix representation relates to the components of vectors in their respective bases. This framework provides a clear understanding of how linear operators can be expressed in matrix form.
Favicon:
I'm working through a proof that every linear operator, A, can be represented by a matrix, A_{ij}. So far I've got

Let \textbf{p}=\sum_{i}p_{i}\widehat{\textbf{e}}_{i}
A(\textbf{p}) = \sum_{i}p_{i}A(\widehat{\textbf{e}}_{i})

which is fine. Then it says that A(\widehat{\textbf{e}}_{i}) is a vector, given by:

A(e_{i}) = \sum_{j}A_{j}(p_{i})e_{j} = \sum_{j}A_{ji}e_{j}.

The fact that it's a vector is fine with me, but I can't get my head around the equation for it. Why does the operator acting on one of the basis vectors depend on p_{i}? Surely the basis vectors are independent of p_{i}, and so should be any operation acting on them.
 
Indeed, they don't.
I would write it like
A(\hat e_i) = \sum_j (A_i)_j \hat e_j
where A_i is some vector of coefficients.
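
To make that concrete (my example, not from the original reply): take A to be rotation by an angle \theta in the plane. Then

A(\hat e_1) = \cos\theta \, \hat e_1 + \sin\theta \, \hat e_2
A(\hat e_2) = -\sin\theta \, \hat e_1 + \cos\theta \, \hat e_2

so the coefficient vectors are A_1 = (\cos\theta, \sin\theta) and A_2 = (-\sin\theta, \cos\theta), and nothing about them involves the components p_i of any particular vector.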
 
This is how I do this thing: Suppose A:U\rightarrow V is linear, and that \{u_j\} is a basis for U, and \{v_i\} is a basis for V. Consider the equation y=Ax, and expand in basis vectors.

y=y_i v_i

Ax=A(x_j u_j)=x_j Au_j= x_j (Au_j)_i v_i

I'm using the Einstein summation convention: any index that appears exactly twice is implicitly summed over, so we can drop the summation sigmas (and since the operator is linear, it wouldn't matter whether the sigma sits to the left or the right of the operator). Now define A_{ij}=(Au_j)_i. The above implies that

y_i=x_j(Au_j)_i=A_{ij}x_j

Note that this can be interpreted as a matrix equation in component form. y_i is the ith component of y in the basis \{v_i\}. x_j is the jth component of x in the basis \{u_j\}. A_{ij} is row i, column j, of the matrix of A in the pair of bases \{u_j\}, \{v_i\}.
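
Here's a quick numerical sketch of that recipe (my own illustration, with a made-up map A and the standard bases, using NumPy): column j of the matrix is just A applied to the j-th basis vector, written out in components.

```python
import numpy as np

# A hypothetical linear map A : R^2 -> R^3, defined pointwise rather
# than by a matrix, purely to illustrate the construction above.
def A(x):
    x1, x2 = x
    return np.array([x1 + 2 * x2, 3 * x1, x1 - x2])

# Standard basis {u_j} of R^2 (the columns of the identity matrix).
u = np.eye(2)

# A_{ij} = (A u_j)_i: column j of the matrix holds the components of
# A u_j in the standard basis of R^3 (which are just its entries here).
M = np.column_stack([A(u[:, j]) for j in range(2)])

# Check the component equation y_i = A_{ij} x_j against applying A directly.
x = np.array([2.0, -1.0])
assert np.allclose(M @ x, A(x))
print(M)  # [[ 1.  2.]
          #  [ 3.  0.]
          #  [ 1. -1.]]
```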

Favicon said:
A(e_{i}) = \sum_{j}A_{j}(p_{i})e_{j} = \sum_{j}A_{ji}e_{j}
This one should be

Ae_{i} = \sum_{j}(Ae_{i})_j e_{j} = \sum_{j}A_{ji}e_{j}

Note that the first step is just to express the vector Ae_i as a linear combination of basis vectors, and that (Ae_i)_j is just what I call the jth component.
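
To spell out the index placement with a two-dimensional example (mine, not from the thread): if Ae_1 = a e_1 + c e_2 and Ae_2 = b e_1 + d e_2, then (Ae_1)_1 = a, (Ae_1)_2 = c, and so on, giving A_{11} = a, A_{21} = c, A_{12} = b, A_{22} = d. In other words the components of Ae_i fill column i of

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}

and p_i never enters.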
 