Apparent fallacy in linear operator theory

neelakash
Butkov's book presents the theory of linear operators this way:

Suppose a linear operator ##\alpha## transforms a basis vector ##\hat{e}_i## into some vector ##\hat{a}_i##. That is, we have

##\alpha\hat{e}_i=\hat{a}_i## ...(A)

Now each vector ##\hat{a}_i## can be represented by its coordinates with respect to the basis ##\{\hat{e}_1,\hat{e}_2,\dots,\hat{e}_N\}##:

##\hat{a}_i=\sum_j a_{ji}\hat{e}_j##, where ##i,j=1,2,\dots,N## ...(B)

Notice that in the last equation we have written a row vector ##(\hat{a}_1\ \dots\ \hat{a}_N)## as a row vector ##(\hat{e}_1\ \dots\ \hat{e}_N)## times a matrix ##A##.

Now, with the help of the transformation matrix ##a_{ji}##, we can find the coordinates of ##y=\alpha x## from the coordinates of ##x##:

##y=\alpha x=\alpha\sum_i x_i\hat{e}_i=\sum_i x_i\hat{a}_i## ...(C)

Employing the definition of ##\hat{a}_i## as in (B), we obtain

##y=\sum_i x_i\sum_j a_{ji}\hat{e}_j=\sum_j\left[\sum_i a_{ji}x_i\right]\hat{e}_j## (in the last expression the outer summation is over ##j##) ...(D)

From this we can identify ##y=\sum_j y_j\hat{e}_j## ...(E)

where ##y_j=\sum_i a_{ji}x_i## ...(F)

The last equation shows that ##y## and ##x## are column vectors. If they were row vectors, the indices of ##a## would have been interchanged.

But our very first assumption was that ##\hat{a}_i## sits in a row vector, and ##y## is a linear combination of the ##\hat{a}_i##. Thus ##y## should be a row vector!

Can anyone please help me see where the fallacy is?

-Neel.
 
There is a subtle but important difference between a vector and the matrix of components that represents the vector. Incidentally, the same can be said about a linear operator and the matrix of components representing the operator. The vector you express in (B) is a linear combination of the basis vectors listed in the line above (B). The matrix representing this vector is usually written as an N×1 array, or column vector. Does this help at all?
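To make the index bookkeeping concrete, here is a small numerical sketch (NumPy; the 2×2 matrix and the coordinate values are made-up illustrations, not anything from Butkov). The columns of ##A## hold the coordinates of the images ##\hat{a}_i=\alpha\hat{e}_i##, so (B) is a statement about a row of basis *symbols*, while (F) says the *coordinates* transform as a column, ##y=Ax##:

```python
import numpy as np

# Illustrative components a_ji: column i holds the coordinates of
# alpha(e_i) in the basis {e_1, e_2}, as in equation (B).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

x = np.array([5.0, 6.0])  # coordinates of x

# Equation (F): y_j = sum_i a_ji x_i, i.e. the coordinate column y = A x.
y = A @ x
print(y)  # [17. 39.]

# The "row" statement (B) concerns the basis vectors themselves:
# (a_1 a_2) = (e_1 e_2) A.  In coordinates (e_i is the i-th standard
# column), this just says the columns of A are the coordinate columns
# of the a_i:
e = np.eye(2)   # columns are the coordinates of e_1, e_2
a = e @ A       # columns are the coordinates of a_1, a_2
assert np.allclose(a, A)

# So the a_i form a row only as formal symbols; each individual vector,
# including y = sum_i x_i a_i, is still represented by a coordinate column:
y_alt = x[0] * a[:, 0] + x[1] * a[:, 1]
assert np.allclose(y_alt, y)
```

The point of the sketch: arranging the ##\hat{a}_i## in a row in (B) says nothing about how any single vector is represented; each ##\hat{a}_i##, and hence ##y##, still has a column of coordinates.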
 
