Unitary Matrices and Their Entry Values Proof

  • Thread starter: RJLiberator
  • Tags: Matrices, Proof
SUMMARY

The discussion focuses on proving that the absolute value of each entry in a unitary matrix \( A \) satisfies \( |A_{ij}| \leq 1 \). A unitary matrix is defined by the equation \( A^\dagger A = I \), where \( A^\dagger \) is the Hermitian conjugate. The proof involves analyzing the diagonal elements of the identity matrix, leading to the conclusion that the sum of the squares of the absolute values of the entries in each column must equal 1, thus establishing that each entry's absolute value is bounded by 1.

PREREQUISITES
  • Understanding of unitary matrices and their properties
  • Familiarity with Hermitian conjugates and complex numbers
  • Knowledge of matrix multiplication and summation notation
  • Basic concepts of linear algebra, particularly eigenvalues and eigenvectors
NEXT STEPS
  • Study the properties of unitary matrices in more depth
  • Learn about the implications of the spectral theorem for unitary matrices
  • Explore the relationship between unitary matrices and quantum mechanics
  • Practice proofs involving matrix norms and their properties
USEFUL FOR

Mathematics students, particularly those studying linear algebra, quantum mechanics, or anyone interested in the properties of unitary matrices and their applications in various fields.

RJLiberator

Homework Statement



Show that ##|A_{ij}| \leq 1## for every entry ##A_{ij}## of a unitary matrix ##A##.

Homework Equations



A matrix ##A## is unitary when ##A^\dagger A = I##,
where ##\dagger## denotes the Hermitian conjugate: transpose the matrix and take the complex conjugate of each entry,
and ##I## is the identity matrix.
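As a quick numerical sanity check of this definition (not part of the original thread), here is a small NumPy sketch. The matrix is an arbitrary example built via QR decomposition, whose Q factor is always unitary:

```python
import numpy as np

# Build an example unitary matrix via QR decomposition of a random
# complex matrix: the Q factor of a QR decomposition is unitary.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)

# Check the defining property A† A = I, where † is the conjugate transpose.
print(np.allclose(Q.conj().T @ Q, np.eye(4)))  # True
```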

The Attempt at a Solution


I'm having a hard time starting this one out.
It seems to make sense to me, as we need to find a way to make them equal the identity matrix.

So we have something like:
##[A_{ij}^{*T}][A_{ij}]##
##= [A_{ji}^*][A_{ij}]##

I'm not quite sure where to go in any direction, how I can get the necessary conditions applied to this proof.

Any point of guidance may help.
 
The index notation for an identity matrix ##I## is ##I_{ij} = \delta_{ij}##. So, for a unitary matrix ##A##, ##\sum_j A_{ij}A_{kj}^* = \delta_{ik}##; what can you conclude from this relation?
 
Write the entries of the resultant matrix ##I = A^\dagger A## as ##\delta_{ij} = \sum_k \left[A^\dagger\right]_{ik}\cdot \left[A\right]_{kj}##.

Here I used that ##\left[ I\right]_{ij} = \delta_{ij}##.
Now you want to get rid of the Hermitian conjugation, using what you said already.

Does that help?

Edit: Oops too slow
 
Well, let's see.

So we have the notation for the identity matrix.
We use the summation formula for the components.

We can apply the definition of the hermitian conjugation to the sum.

We see that ##\sum_k A_{ki}^* A_{kj}## = the delta element of the identity matrix.

Since we are looking at the ij'th component, the delta element = 0.

##\sum_k A_{ki}^* A_{kj} = 0##
##A_{kj}^* = 0## ??
 
You want to look at the elements ##\delta_{ii}=1##.

The zeroes (##\delta_{ij}\text{ with } i\neq j##) are quite useless in this case.
 
hm, I see.

##A_{kj}^* = 1##

HM.

This reminds me of the form ##e^{-i\theta}## used to represent a complex number.
Since ##A_{kj}## is the kj'th element of the matrix, it is a number, a complex number that can be represented by ##e^{-i\theta}## in this case due to conjugation.

This means ##A_{kj}^* = 1 = \cos\theta - i\sin\theta##.
Since we have no imaginary part here, ##\sin\theta = 0##, and ##\cos\theta## is bounded by 1.

Is this the correct way to go about this proof? Albeit, needs a bit of polishing?
 
What you get is the following expression:
##1 = \sum_k a^*_{ki}\cdot a_{ki} = \sum_k |a_{ki}|^2##

Here I used that the modulus squared of a complex number ##c## is given by ##|c|^2 = c^* c##.
So we know that the sum above equals one and that the terms are non-negative.
Can you finish this reasoning?
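This column-sum identity can be checked numerically; the following NumPy sketch (not part of the original thread) uses an arbitrary example unitary matrix obtained from a QR decomposition:

```python
import numpy as np

# Arbitrary example unitary matrix: the Q factor of a QR decomposition.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, _ = np.linalg.qr(M)

# Each diagonal entry of A†A is sum_k |a_ki|^2, which must equal 1.
for i in range(3):
    col_norm_sq = np.sum(np.abs(A[:, i]) ** 2)
    print(np.isclose(col_norm_sq, 1.0))  # True for every column
```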
 
Hm, what's throwing me off is the indexing. How did both of them become ##{}_{ki}##? I can see the transpose definition made ##a_{ik}## into ##a_{ki}##, but why does the other index go from ##a_{kj}## to ##a_{ki}##?

If ##1 = \sum_k |a_{ki}|^2##,

then it's clear that, since the sum equals 1 and every term is non-negative, each term must be at most 1.
 
Because we are looking at elements on the diagonal.
We started from ##\delta_{ii}##. Naturally this should be adopted in the sum as well.

The conclusion is right by the way.
 
  • #10
Okay, I just wrote it down. I am starting to understand it more clearly now. It still is the most foreign one on this assignment, but let's see.

1. Write down as entries of the matrix.
##\delta_{ij} = \sum_k \left[A^\dagger\right]_{ik}\cdot \left[A\right]_{kj}##.

2. Next, note that we are looking at the interesting elements of the matrix, the diagonals where delta is not equal to 0, but instead equal to 1 of the identity matrix.
##\delta_{ii} = \sum_k a^*_{ki}\cdot a_{ki}##

3. Note that ##\delta_{ii} = 1## and that we can apply a property of complex conjugates, so that we see the square of the absolute value of the ##a_{ki}## component.
##1 = \sum_k |a_{ki}|^2##

4. Now I use reasoning to simply state: we are adding together the squared absolute values of the components, this sum must equal one, and every term is non-negative, so each ##|a_{ki}|## must be less than or equal to 1.

Walouh.

A pretty smooth proof.
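The bound from step 4 can also be verified numerically; this sketch (an addition for illustration, with an arbitrary example unitary matrix) checks that no entry's modulus exceeds 1:

```python
import numpy as np

# Arbitrary example unitary matrix from a QR decomposition.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A, _ = np.linalg.qr(M)

# Since each column satisfies sum_k |a_ki|^2 = 1 and every term is
# non-negative, no single entry's modulus can exceed 1.
print(np.max(np.abs(A)) <= 1 + 1e-12)  # True
```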

What threw me off was the indexing of the summation. I understand the delta summation part well. I'm still foggy on the summation process. So we are summing from k=1 to k=i? The rows? The columns?
 
  • #11
From k=1 to k=n. Here n is the number of rows and columns our square matrix has.

With regards to the confusion, are you using the Einstein summation convention?
Because we don't use it here.

The sum expression is how matrix multiplication is defined. You can try it for small matrices if you still think it's fishy.
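Trying it for small matrices, as suggested, might look like this NumPy sketch (the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# (AB)_{ij} = sum_k A_{ik} B_{kj}, with k running over 1..n.
n = 2
C = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

# The explicit sum agrees with NumPy's built-in matrix product.
print(np.array_equal(C, A @ B))  # True
```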
 
  • #12
JorisL said:
From k=1 to k=n. Here n is the number of rows and columns our square matrix has.

With regards to the confusion, are you using the Einstein summation convention?
Because we don't use it here.

The sum expression is how matrix multiplication is defined. You can try it for small matrices if you still think it's fishy.

Oh, no, I am using the same summation convention as you are, I believe.

From k=1 to k=n. Here n is the number of rows and columns our square matrix has.

This helped me out.
I think I simply need more practice with summation definitions of matrix elements.

Thank you kindly for your helping hand here.
 
