What Happens if Coefficients of Kronecker Delta are Zero?

jonjacson
Well, I don't understand this equality:

$$A^j_i \, A^k_j = \delta^k_i$$

It is true because it is the result of a calculation. But assuming it is true, what happens if one of the ##A^j_i## is zero?

Then it would seem that no matter what the value of ##A^k_j## is, the equality must be false, because 0 * (any number) = 0, but supposedly the result should be 1 when ##i = k##, because of the delta.

So the point is: what happens if the coefficients are zero? Is the equality really true?
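
A minimal numpy sketch of the summation hiding in this equality (my own illustration, with a made-up matrix): the repeated index ##j## is summed over, in the usual Einstein convention, so one zero factor kills a single term of the sum, not the whole left-hand side.

```python
import numpy as np

# Made-up 2x2 coefficients containing zeros (illustration only)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

i, k = 0, 0
# The repeated index j is summed (Einstein convention): each term can
# vanish individually while the total still equals delta^k_i
lhs = sum(A[j, i] * A[k, j] for j in range(2))
print(lhs)  # 1.0, matching delta^k_i = 1 for i = k
```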
 
What is ##A^i_j##? Are these entries of a matrix? What is the matrix?
 
I think there are some constraints on the ##A_{ij}##, in that ##A_{ij} = -1/A_{ji}## or something like that, right?

Anyway I found this reference:

http://books.google.com/books?id=O2...d=0CB0Q6AEwAA#v=onepage&q=Aji*Akj=δki&f=false

which indicates that the ##A_{ij}## are not the same matrix as in your formula; instead, the two factors are inverses of each other.

As an aside: I cut and pasted your formula (markup and all) into Google and it found this reference - just amazing.
 
I was making a post but you found my original source XD. So you see the expression: A is simply the matrix of coefficients that change one basis to another.

What would happen if one of those coefficients were zero? Would the equality with the Kronecker delta still hold?
 
Speed-of-light issues: my response to wisvuze somehow got in first.
 
The ##A^i_{j'}## matrix cannot give you a zero vector if you input a basis element, since it represents a map that changes one basis into another. And the bases have the same dimension (they are bases for the same space).
 
wisvuze said:
The ##A^i_{j'}## matrix cannot give you a zero vector if you input a basis element, since it represents a map that changes one basis into another. And the bases have the same dimension (they are bases for the same space).

They could be zero if it was only a scaling transform, right? (i.e. ##A^i_j = 0## where ##i \neq j##)
 
I was working out an extraordinarily simple example:

##e_1 = (1,0)##; ##e_2 = (0,1)##

##e_1' = (2,0)##; ##e_2' = (0,2)##

I see that ##\{e_i'\}## is not orthonormal, since ##e_1' \cdot e_1' = 4##, which is neither 1 nor 0, so the products ##e_i' \cdot e_j'## do not reproduce a Kronecker delta.

I am going to work through another numerical example and then I will post the result.

*Remark: I am tired of studying this topic without numerical examples to fix the ideas; it's really frustrating :(.
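
For what it's worth, a quick numpy sketch of exactly this example (assuming the convention ##e_j' = A^i_j \, e_i##, so the coefficients form the diagonal matrix diag(2, 2)): the off-diagonal coefficients are zero, yet the product with the inverse coefficients still gives the Kronecker delta.

```python
import numpy as np

# e1' = 2*e1 and e2' = 2*e2, so with e_j' = A[i, j] e_i the coefficients
# form a diagonal matrix whose off-diagonal entries are zero
A = np.array([[2.0, 0.0],
              [0.0, 2.0]])
B = np.linalg.inv(A)  # the inverse coefficients, diag(1/2, 1/2)

# B @ A is the identity matrix: the zero coefficients in A do not
# break the Kronecker delta identity, because B supplies the 1/2 factors
print(B @ A)
```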
 
jedishrfu said:
They could be zero if it was only a scaling transform, right? (i.e. ##A^i_j = 0## where ##i \neq j##)

By scaling transform, I assume you mean a diagonal matrix? Anyway, these are specific matrices (ones that take you from the ##\{e_j'\}## basis to the ##\{e_i\}## basis). The matrix is determined by the relations ##e_j' = a_1 e_1 + \dots + a_n e_n## (and the ##a_i##'s then form a column of the matrix). Whatever this relation is (it could be via scaling), the resulting images of the ##e_j'##'s must be linearly independent and spanning, since the result is another basis for the vector space V. We cannot have a zero vector in this image (so a scaling matrix with a 0 factor on the diagonal couldn't be a change-of-basis matrix from V to V).
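
A small numpy illustration of that last parenthetical (my own sketch, not from the thread): a scaling matrix with a 0 factor on the diagonal is singular, so it sends a basis vector to the zero vector and cannot be a change of basis.

```python
import numpy as np

# A "scaling" with a zero factor on the diagonal
S = np.array([[2.0, 0.0],
              [0.0, 0.0]])

e2 = np.array([0.0, 1.0])
print(S @ e2)            # [0. 0.]: a basis vector is sent to the zero vector
print(np.linalg.det(S))  # 0.0: S is singular, so it cannot map a basis to a basis
```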
 
Recall that the definition of matrix multiplication is ##(AB)^i_j=A^i_k B^k_j## (when the row indices are written upstairs and the column indices downstairs). So the equality ##A^j_i A^k_j=\delta^k_i## is just saying that the matrix A satisfies ##A^2=I##. If this isn't immediately obvious, first note that the left-hand side can also be written as ##A^k_j A^j_i##, which according to the definition of matrix multiplication is equal to ##(A^2)^k_i##.

There are lots of examples of matrices that satisfy this condition, e.g. the 2×2 matrix
$$\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}$$
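
A one-line numpy check of this example (just a sketch for concreteness): note that the matrix has zeros on the diagonal, yet its square still reproduces the Kronecker delta.

```python
import numpy as np

# The 2x2 swap matrix from the post above
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A @ A)  # identity matrix: A^2 = I even though A contains zeros
```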
 
In the book's notation, ##A^i_{j'}## and ##A^{i'}_j## are the row i, column j components of two different matrices. (Zoom in if you find the primes hard to see). I would prefer a notation that uses different symbols instead of just primes. For example, suppose that ##\{e_i\}## and ##\{f_i\}## are two orthonormal bases for a vector space V. Since ##\{e_i\}## is a basis, every ##f_i## is a linear combination of the ##e_i##. $$f_i=A^j_i \,e_j.\qquad\text{(3.14)}$$ Since ##\{f_i\}## is a basis, every ##e_i## is a linear combination of the ##f_i##. $$e_i=B^j_i\, f_j\qquad\text{(3.15)}$$ Now use (3.14) in (3.15). $$e_i=B^j_i\, f_j=B^j_i\, A^k_j e_k.\qquad\text{(3.16)}$$ Since ##\{e_i\}## is linearly independent, this implies that $$B^j_i\, A^k_j=\delta^k_i.\qquad\text{(3.17)}$$ If we let A denote the matrix with ##A^k_j## on row k, column j, for all k and j, and B the matrix with ##B^j_i## on row j, column i, for all j and i, then (3.17) says that AB=I, or equivalently, that ##B=A^{-1}##.
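
A numerical sketch of (3.14)-(3.17), using a made-up pair of orthonormal bases for ##\mathbb{R}^2## (##\{e_i\}## the standard basis and ##\{f_i\}## its rotation by 90 degrees, chosen so that the matrix A has zeros on its diagonal, tying back to the original question):

```python
import numpy as np

# f_i = A[j, i] e_j with e the standard basis of R^2 and f its 90-degree
# rotation: f1 = e2 and f2 = -e1, so column i of A holds the e-components of f_i
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.linalg.inv(A)  # e_i = B[j, i] f_j, i.e. B = A^{-1}

delta = np.einsum('ji,kj->ki', B, A)  # B^j_i A^k_j, the left side of (3.17)
print(delta)                          # identity matrix, as (3.17) asserts
```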
 