# Unitary operators

1. Mar 19, 2014

### Hazzattack

Hi everyone, I was hoping that someone might be able to tell me if what I'm doing is legit.

Firstly, I start by saying that a unitary transform can be made between two sets of operators (this is defined in this specific way);

$b_{n} = \Sigma_{m} U_{mn} a_{m}$ (1)

Now this is the bit I'm not sure about: if I want to invert this to solve for $a_{m}$, is the following acceptable?

$\Sigma_{n} U^{\dagger}_{nm} b_{n} = \Sigma_{n} \Sigma_{m} U_{mn} U^{\dagger}_{nm} a_{m}$

where $\Sigma_{n} \Sigma_{m} U_{mn} U^{\dagger}_{nm} = (UU^{\dagger})_{nn} = \delta_{nn} = 1$

If this is entirely wrong, some pointers on how to isolate $a_{m}$ would be appreciated.

In essence what I'm asking is given (1), what is the inverse transformation of it?

Extra information: the components of $U_{mn}$ are real.

Last edited: Mar 19, 2014
2. Mar 19, 2014

### Hazzattack

In this paper - http://arxiv.org/abs/1006.4507

It seems to suggest that the inverse transform would be;

$\Sigma_{n} U_{nm} b_{n} = a_{m}$

I'm not entirely sure how, specifically the sum index is what is confusing me.

3. Mar 19, 2014

### UltrafastPED

The summation calls for summing over all values of n for each index value of m.

Thus you end up with a collection of items a[m].

4. Mar 19, 2014

### Hazzattack

Yeah, I understand what the summation means. But I don't see how it follows from starting with (1), mathematically I mean. Could you possibly elaborate (by example)?

Thanks.

5. Mar 19, 2014

### strangerep

This is where you start going wrong. On the lhs, m is a free index, but on the rhs it was already a summation index from the previous eqn.

Try using $$\Sigma_n U_{kn}^\dagger b_n ~=~ \cdots$$instead.

BTW, you've got some of your indices round the wrong way. E.g., it's less error-prone to write
$$b_n ~=~ \Sigma_m U_{nm} a_m$$because then it matches how "matrix $\times$ column vector" works.

You might also want to learn the summation convention to reduce the clutter.

Oh, and you don't need quite so many itex pairs in your equations. I adjusted the quoted part above to illustrate.

6. Mar 19, 2014

### wotanub

Why don't you think about it in terms of transformations on basis vectors? I'm going to appeal to your training in linear algebra...

Let's say that you know some vector $A$ is related to another vector $B$ by some transformation $T$.

$\vec{B}=T\vec{A}$

Now let's assume that $T$ is linear and has an inverse. Multiply on the left by its inverse:

$T^{-1} T \vec{A} = T^{-1}\vec{B}$

$\mathbb{1} \vec{A} = T^{-1}\vec{B}$, where $\mathbb{1}$ is the unit operator

$\vec{A} = T^{-1}\vec{B}$

We have now inverted the equation in a basis-free way.
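This basis-free inversion is easy to check numerically; here is a minimal sketch, assuming numpy and a random (almost surely invertible) $T$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))          # a generic invertible transformation
A = rng.normal(size=4)

B = T @ A                            # B = T A
A_recovered = np.linalg.inv(T) @ B   # A = T^{-1} B

print(np.allclose(A, A_recovered))
```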

Now consider that we could express the vectors and the transformation in a specific basis and evaluate the transformation acting on the vector the same way we do matrix multiplication. This is what you wrote down in your post; basically, consider $a_n$ the components of $\vec{A}$ and $b_n$ the components of $\vec{B}$, for example.

So the only problem is finding the inverse of the transformation. But you know your transformation is unitary, so the inverse is the Hermitian conjugate $U^{-1} = U^{\dagger}$ so you could immediately write down

$a_{n} = \sum_{m} U_{nm} b_{m}$
$b_{n} = \sum_{m} (U^{\dagger})_{nm} a_{m}$

I hope this explanation is clear enough.

In case you really wanted to do it with a specific basis you'd do as strangerep suggested...
$a_{n} = \sum_{n}U_{nm}b_{m}$
$\sum_{k}(U^{\dagger})_{kn}a_{n} = \sum_{k}\sum_{n}(U^{\dagger})_{kn}U_{nm}b_{m}$
$\sum_{k}(U^{\dagger})_{kn}a_{n} = \sum_{k}(U^{\dagger}U)_{km}b_{m}$ ← If this step seems fishy, I could attempt to explain in a later post
$\sum_{k}(U^{\dagger})_{kn}a_{n} = \sum_{k}δ_{km}b_{m}$ (the Kronecker delta gives exactly the entries of the matrix form of $\mathbb{1} = diag(1, 1, 1, \cdots, 1)$)
$\sum_{k}(U^{\dagger})_{kn}a_{n} = b_{k}$

But you could just relabel the indices in the last equation.
$b_{n} = \sum_{m} (U^{\dagger})_{nm} a_{m}$

I didn't really want to outright tell you everything since it might be homework, but I think it's important to understand that it's "only" linear algebra.

Last edited: Mar 19, 2014
7. Mar 20, 2014

### Hazzattack

Firstly, I want to thank both of you for taking the time to respond to me in the depth that you did. Both posts were very helpful.

Secondly, I'm working on this paper as part of my dissertation, hence why I'm desperately trying to get to grips with all the notation. Could either (or both) of you perhaps suggest some online material/literature that may help me get up to speed on all the index notation, as I believe this is currently my largest source of errors? (Just to put the paper into context: they are using orthogonal polynomials because of their obvious benefits, and I'm trying to replicate the method using a matrix representation.)

Finally, I can follow everything you did, Wotanub, except for this line:

$\sum_{k}(U^{\dagger})_{kn}a_{n} = b_{k}$

Why does the sum over k get dropped here (on the RHS)? Is it because every entry along the diagonal will equal 1?

Also, the index of b changes to k, but on your first line it has the index m. What does this actually mean? I would have thought you want the index to remain as m in order for it to be an inverse transformation (or are you free to change this index without any consequences?).

I think what would be helpful is if I could find something with strict definitions in this summation notation. As I'm sure it's the actual notation that is confusing me the most.

Added: Do you think you could explain the whole free index thing? I'm a little confused when I'm supposed to use this.

Thanks again!

Last edited: Mar 20, 2014
8. Mar 20, 2014

### Hazzattack

Also... When you take the Hermitian conjugate of a matrix do you just flip the indices (since that is what a transpose does), such that;

$(U^{\dagger})_{nm} = U_{mn}$

Since in this specific case all entries of the unitary operator are real.

9. Mar 20, 2014

### Hazzattack

I thought I should elaborate as you guys took time to respond and it may be obvious to you where I'm explicitly going wrong.

so in the paper they implement a transform such that;

$b_{n} = \int dk\, h(k)\, \pi_{n}(k)\, a(k)$ (where $\pi_{n}(k)$ is a normalised $n$th monic orthogonal polynomial, and both this and $h(k)$ are real)

I have said this can be equivalently written;

$b_{n} = \Sigma_{k} h_{k} \pi_{nk} a_{k}$ (1)

In this case $U^{\dagger}_{nk} = h_{k} \pi_{nk}$

Therefore, we can write (1) as such;

$b_{n} = \Sigma_{k} U^{\dagger}_{nk} a_{k}$ (hence the first question I initially asked)

Now, it states in the paper that the inverse transformation can be written;

$a(k) = \Sigma_{n=0}^{\infty} h(k) \pi_{n}(k) b_{n}$

Thus, I would have expected that in 'my' notation this should read;

$a_{k} = \Sigma_{n} h_{k} \pi_{kn} b_{n} = \Sigma_{n} U_{kn} b_{n} ??$

This is why I was asking the questions about the unitary operator as I can't seem to understand how to get it to that from starting from expression (1). I assume it's because I'm struggling with the index notation.
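As an aside on why a matrix built from orthogonal polynomials and weights can come out orthogonal at all: if the integral is discretized with Gaussian quadrature, the matrix $U$ with entries $\sqrt{w_k}\,p_n(x_k)$ (orthonormal polynomials $p_n$, nodes $x_k$, weights $w_k$) is exactly orthogonal, since $N$-point Gauss quadrature integrates $p_n p_m$ exactly for $n, m < N$. A sketch assuming numpy and Legendre polynomials (the identification of $h_k$ with $\sqrt{w_k}$ is my assumption for illustration, not taken from the paper):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

N = 8
x, w = leggauss(N)   # Gauss-Legendre nodes and weights on [-1, 1]

# orthonormal Legendre polynomials: p_n = sqrt((2n+1)/2) P_n
P = np.array([np.sqrt((2 * n + 1) / 2) * Legendre.basis(n)(x) for n in range(N)])

U = P * np.sqrt(w)   # U[n, k] = sqrt(w_k) p_n(x_k)

# rows are orthonormal because the quadrature is exact for these degrees
print(np.allclose(U @ U.T, np.eye(N)))
```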

Thanks as always for taking the time to read this.

Last edited: Mar 20, 2014
10. Mar 20, 2014

### ChrisVer

Let me illustrate it correctly, because in wotanub's post there are some index mistakes...
$a_{n}=\sum_{m} U_{nm} b_{m}$ See? I have a free index n on the lhs, so I keep it on the rhs too... the summation takes place over m...
$\sum_{n} (U^{\dagger})_{kn} a_{n}=\sum_{m} \sum_{n} (U^{\dagger})_{kn} U_{nm} b_{m}$
$\sum_{n} (U^{\dagger})_{kn} a_{n}=\sum_{m} (U^{\dagger}U)_{km} b_{m}$
$\sum_{n} (U^{\dagger})_{kn} a_{n}=\sum_{m} δ_{km} b_{m}$
$\sum_{n} (U^{\dagger})_{kn} a_{n}= b_{k}$
That is now fine... on the lhs there is only one free k, and I have the k on the rhs... the n indices are summed over...
One point that might be sneaky is where I dropped the sum over n on the rhs... but that happened because matrix multiplication is: $A_{ab}= \sum_{r} B_{ar}C_{rb}$
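The corrected chain of equalities can be verified numerically; a minimal sketch assuming numpy, with a unitary $U$ built from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)               # QR of a complex matrix gives a unitary Q

b = rng.normal(size=4) + 1j * rng.normal(size=4)
a = U @ b                            # a_n = sum_m U_{nm} b_m

# sum_n (U^dagger)_{kn} a_n should give back b_k
b_back = U.conj().T @ a
print(np.allclose(b_back, b))
```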

Now, why does the sum of the Kronecker delta with b give that result?
Because you have the property:
$δ_{ab}=1$ for $a=b$ or $δ_{ab}=0$ else
So you can see right away that, in the sum over m, only the m=k term will survive... The beginner's way would be to expand the sum:
$\sum_{m} δ_{km} b_{m}=δ_{k1} b_{1}+δ_{k2} b_{2}+...+δ_{kN} b_{N}$
where I said that the sum happens from m=1 to m=N.
so if k=1, you will get only:
$\sum_{m} δ_{km} b_{m}=δ_{11} b_{1}+δ_{12} b_{2}+...+δ_{1N} b_{N}=δ_{11} b_{1}=b_{1}$
if k=2, you'll get only $b_{2}$ in exactly the same way...
Finally for any k, you will get only the k-th component.... So:
$\sum_{m} δ_{km} b_{m}= b_{k}$

The free index trick is no trick. You don't use it, it exists... Indices in general can represent vector components. If you have free indices you have something like vectors. If you don't have those indices, then you have something like numbers/scalars. As such you cannot have that a vector is equal to a number, so if you have a free index on one side, you must also have it in the other side as well...

Last edited: Mar 20, 2014
11. Mar 20, 2014

### Hazzattack

Thank you so much for responding. That all makes a lot of sense and was SO helpful. I see, so in that case the Kronecker delta is 'picking out' a single entry.

One quick question: the transpose of a unitary operator with only real entries is the same as just flipping the indices, right? (Since for a vector, for example, you are just going from a row vector to a column vector.)

Does this imply you can write;

$U^{\dagger}_{nm} = U_{mn}$

Last edited: Mar 20, 2014
12. Mar 20, 2014

### ChrisVer

If the unitary matrix is real, yes, you can do that...
in general the dagger symbol means that:
$(U^{\dagger})_{ab}= U^{*}_{ba}$
Now if the unitary matrix elements are real, you get the transpose immediately...
$(U^{\dagger})_{ab}= U_{ba}$
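For a real unitary (i.e. orthogonal) matrix, both facts are easy to confirm numerically; a sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # QR of a real matrix gives a real orthogonal Q

print(np.allclose(U.conj().T, U.T))            # the dagger reduces to the plain transpose
print(np.allclose(U.T @ U, np.eye(3)))         # and the transpose is the inverse
```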

13. Mar 20, 2014

### Hazzattack

Great. I think I've obtained what I was after. Could you verify that I haven't added something (because I want it to be that way...) I shouldn't have.

I start by saying;

$b_n = \Sigma_{m} U_{nm} a_{m}$

Thus $b = Ua$; multiplying each side on the left by $U^{\dagger}$ (all entries of U are real):

$\Sigma_{n} U^{\dagger}_{kn}b_{n} = \Sigma_{m} \Sigma_{n} U^{\dagger}_{kn} U_{nm} a_{m} = \Sigma_{m} (U^{\dagger} U)_{km} a_{m} = \Sigma_{m} δ_{km}a_{m} = a_{k}$

Since $U^{\dagger}_{kn} = U_{nk}$, the final expression yields;

$\Sigma_{n} U_{nk} b_{n} = a_{k}$

This is the only way I can seem to make both derivations equivalent; otherwise I fall into problems when I sub back in for $U_{nk}$ etc. (see above for the definition), as $h_{k}$ is a diagonal matrix which should have the same index regardless.
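The final relation can be sanity-checked numerically; a minimal sketch assuming numpy and a random real orthogonal $U$:

```python
import numpy as np

rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # real orthogonal U

a = rng.normal(size=4)
b = U @ a                                      # b_n = sum_m U_{nm} a_m

# a_k = sum_n U_{nk} b_n, i.e. multiply by the transpose
a_back = np.einsum('nk,n->k', U, b)
print(np.allclose(a_back, a))
```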

Thanks again guys... Sorry I'm bombarding the forums...

14. Mar 20, 2014

### strangerep

Did you carefully study the Wikipedia link I gave you?

(Sorry, I'm pressed for time right now. I might try to reply further later if anything's still unclear. I'm glad ChrisVer corrected the typos in wotanub's post.)

15. Mar 20, 2014

### Fredrik

Staff Emeritus
I think you should start by taking another look at the definition of matrix multiplication. I will denote the entry on row i, column j of an arbitrary matrix X by $X_{ij}$. The definition of matrix multiplication says that if A is an m×n matrix, and B is an n×p matrix, then AB is the m×p matrix such that for all $i\in\{1,\dots,m\}$ and all $j\in\{1,\dots,p\}$,
$$(AB)_{ij}=\sum_{k=1}^n A_{ik}B_{kj}.$$ The summation convention, or at least the simplest version of it, is just the convention to drop the summation sigma from the notation and write the right-hand side as $A_{ik}B_{kj}$ instead.

This convention is based on the observation that the summation is always over all possible values of the index that appears twice, and there's (almost) always a sum over an index that appears twice. There are exceptions, like when you want to refer to the entry on row i, column i of A. If you have said that you're using the summation convention, you can't just write it as $A_{ii}$, because this is interpreted as $\sum_i A_{ii}=\operatorname{Tr} A$.
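Both points above (the definition of the matrix product and the trace exception) map directly onto numpy's `einsum`, which implements the summation convention literally; a short sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))

# (AB)_{ij} = sum_k A_{ik} B_{kj}: the repeated index k is summed over
AB = np.einsum('ik,kj->ij', A, B)
print(np.allclose(AB, A @ B))

# with the summation convention, A_{ii} means the trace
C = rng.normal(size=(3, 3))
print(np.isclose(np.einsum('ii', C), np.trace(C)))
```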