Solving Operator Problem with Matrix Representation

  • Thread starter: Mentz114
  • Tags: Operator
In summary: If we take a vector $$\psi = \left( \begin{array}{c} \cos(\theta) \\ \sin(\theta) \end{array} \right)$$ then we find that your matrix ##A## changes the squared norm of this vector to $$|| \hat{A} \psi ||^2 = \frac{a^2 + y^2}{a^2 - y^2},$$ which is not 1 in general. So, as you say, your matrix is not unitary. On the other hand, it is true that your matrix is symmetric and has determinant 1, so it is orthogonal.
  • #1
Mentz114
This matrix, I had hoped, was a good candidate for (a representation of) a unitary, self-adjoint operator:
\begin{align*}
\hat{A}&= \frac{1}{D} \left[ \begin{array}{cc}
a^2 & iy^2 \\
-iy^2 & a^2\end{array} \right]
\end{align*}
##a## and ##y## are real with ##D^2=a^2-y^2\ >\ 0## . ##\hat{A}## has determinant ##1##, eigenvalues ##(a-y)/D,\ (a+y)/D## and eigenvectors ##\vec{e}_0=(1,i),\ \vec{e}_1=(1,-i)##. Also ##\hat{A}^{\dagger}=\hat{A}^{T}=adjoint(\hat{A})=\hat{A}^{-1}##.
When the unit vector ##(\cos(\theta),\sin(\theta))## is acted on by ##\hat{A}##, its squared length changes to ##(a^2+y^2)/(a^2-y^2)##. This is unexpected because the determinant of ##\hat{A}## is ##1##.

Is my expectation wrong, or is there some flaw in ##\hat{A}## that I have missed?
 
  • #2
My calculation is that the determinant is
$$\frac{a^2\cdot a^2 - iy^2\cdot(-iy^2)}{D^2}
= \frac{a^4+i^2y^4}{a^2-y^2}
= \frac{a^4-y^4}{a^2-y^2}
= a^2+y^2
$$
which is not necessarily 1 if the only constraints on ##a## and ##y## are that they are real.
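This determinant calculation can be checked numerically. Below is a quick sketch using numpy, with sample values ##a = 2##, ##y = 1## of my own choosing (any real values with ##a^2 - y^2 > 0## would do):

```python
import numpy as np

# Sample values (my choice), constrained only so that D^2 = a^2 - y^2 > 0.
a, y = 2.0, 1.0
D = np.sqrt(a**2 - y**2)

# The matrix from post #1, with entries a^2 and +/- i y^2.
A = np.array([[a**2, 1j * y**2],
              [-1j * y**2, a**2]]) / D

det = np.linalg.det(A)
# det = (a^4 - y^4) / (a^2 - y^2) = a^2 + y^2, which is 5 here, not 1.
print(det.real)  # 5.0
```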
 
  • #3
First, a terminology issue: I think you want your ##A## to be orthogonal, not unitary. A unitary matrix satisfies ##A^\dagger A = I##. An orthogonal matrix satisfies ##A^T A = I##. They are only the same for real matrices. Unitary matrices preserve ##\psi^\dagger \psi##, while orthogonal matrices preserve ##\psi^T \psi##.

Second, for your matrix ##A## to be orthogonal, you need ##D = \sqrt{a^4 - y^4}## rather than ##D = \sqrt{a^2 - y^2}##:

##A^T A = \frac{1}{D^2} \left( \begin{array}{cc} a^4 - y^4 & 0 \\ 0 & a^4 - y^4 \end{array} \right)##
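The orthogonal-vs-unitary distinction for this matrix can be seen numerically. A sketch with numpy, using sample values ##a = 2##, ##y = 1## of my own choosing, and the prefactor ##D = \sqrt{a^4 - y^4}## suggested here:

```python
import numpy as np

# Sample values (my choice) and the prefactor proposed in this post.
a, y = 2.0, 1.0
D = np.sqrt(a**4 - y**4)
A = np.array([[a**2, 1j * y**2],
              [-1j * y**2, a**2]]) / D

# A^T A = I holds (orthogonal), but A^dagger A != I (not unitary).
print(np.allclose(A.T @ A, np.eye(2)))         # True
print(np.allclose(A.conj().T @ A, np.eye(2)))  # False
```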
 
  • #4
andrewkirk said:
My calculation is that the determinant is
$$\frac{a^2\cdot a^2 - iy^2\cdot(-iy^2)}{D^2}
= \frac{a^4+i^2y^4}{a^2-y^2}
= \frac{a^4-y^4}{a^2-y^2}
= a^2+y^2
$$
which is not necessarily 1 if the only constraints on ##a## and ##y## are that they are real.
Thanks for that. I can't imagine how I got the wrong determinant. That cures my problem, though.

stevendaryl said:
First, a terminology issue: I think you want your ##A## to be orthogonal, not unitary. A unitary matrix satisfies ##A^\dagger A = I##. An orthogonal matrix satisfies ##A^T A = I##. They are only the same for real matrices. Unitary matrices preserve ##\psi^\dagger \psi##, while orthogonal matrices preserve ##\psi^T \psi##.

Second, for your matrix ##A## to be orthogonal, you need ##D = \sqrt{a^4 - y^4}## rather than ##D = \sqrt{a^2 - y^2}##:

##A^T A = \frac{1}{D^2} \left( \begin{array}{cc} a^4 - y^4 & 0 \\ 0 & a^4 - y^4 \end{array} \right)##
Thanks, point taken.

I can move on to the next bit now ...
 
  • #5
andrewkirk said:
My calculation is that the determinant is ...

stevendaryl said:
First, a terminology issue: I think you want your ##A## to be orthogonal, not unitary. ...

A million apologies. It is with a very red face that I have to admit to a terrible mess. The matrix is actually
\begin{align*}
\hat{A}&= \frac{1}{D} \left[ \begin{array}{cc}
a & iy \\
-iy & a\end{array} \right]
\end{align*}
The other formulae are correct, including ## \hat{A}^{\dagger}=\hat{A}^{T}=adjoint(\hat{A})=\hat{A}^{-1}## and ##\hat{A}^{\dagger}\hat{A}=1##.

So, I'm stuck with an apparently Hermitian matrix with determinant 1 which does not preserve the length of a vector. Ugh.
 
  • #6
Mentz114 said:
an apparently Hermitian matrix with determinant 1 which does not preserve the length of a vector.

Are you trying to construct a Hermitian matrix or a unitary matrix? A Hermitian matrix shouldn't be expected to preserve the lengths of vectors. A unitary matrix would.
 
  • #7
Mentz114 said:
The other formulae are correct, including ##\hat{A}^{\dagger}=\hat{A}^{T}=adjoint(\hat{A})=\hat{A}^{-1}## and ##\hat{A}^{\dagger}\hat{A}=1##.

I'm not sure I see these. Your matrix ##\hat{A}## in this post is equal to its own conjugate transpose (because conjugating flips the signs of the ##i## factors after transposing them), so ##\hat{A} = \hat{A}^\dagger##. Thus ##\hat{A}^\dagger \hat{A} = \hat{A}^2##, which does not equal the identity matrix (because the off diagonal terms don't cancel).

If you just take the transpose of ##\hat{A}## here (without complex conjugating), then I think you have ##\hat{A}^T \hat{A} = I##, yes. But, as @stevendaryl pointed out, that means ##\hat{A}## is an orthogonal matrix, not a unitary matrix.
 
  • #8
PeterDonis said:
If you just take the transpose of ##\hat{A}## here (without complex conjugating), then I think you have ##\hat{A}^T \hat{A} = I##, yes

Actually, this isn't quite the case either. The determinant of

$$
\left[ \begin{array}{cc}
a & iy \\
-iy & a\end{array} \right]
$$

is ##a^2 - y^2##; the square root of that is what should appear as a prefactor in ##\hat{A}## in order for ##\hat{A}^T \hat{A} = \hat{I}## to be true, since if we multiply the above matrix by its transpose we get ##\left(a^2 - y^2 \right) \hat{I}##, i.e., ##D^2 \hat{I}##. In other words, we should have

$$
\begin{align*}
\hat{A}&= \frac{1}{\sqrt{a^2 - y^2}} \left[ \begin{array}{cc}
a & iy \\
-iy & a\end{array} \right]
\end{align*}
$$
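A numerical sketch of this corrected prefactor, with sample values ##a = 2##, ##y = 1## of my own choosing: dividing by ##\sqrt{a^2 - y^2}## makes ##\hat{A}^T \hat{A} = \hat{I}##, yet the matrix is still Hermitian rather than unitary, since ##\hat{A}^\dagger \hat{A} = \hat{A}^2 \neq \hat{I}##.

```python
import numpy as np

# Sample values (my choice) with a^2 - y^2 > 0.
a, y = 2.0, 1.0
A = np.array([[a, 1j * y],
              [-1j * y, a]]) / np.sqrt(a**2 - y**2)

print(np.allclose(A.T @ A, np.eye(2)))         # True: orthogonal
print(np.allclose(A.conj().T, A))              # True: Hermitian
print(np.allclose(A.conj().T @ A, np.eye(2)))  # False: not unitary
```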
 
  • #9
Mentz114 said:
A million apologies. It is with a very red face that I have to admit to a terrible mess. The matrix is actually
\begin{align*}
\hat{A}&= \frac{1}{D} \left[ \begin{array}{cc}
a & iy \\
-iy & a\end{array} \right]
\end{align*}
The other formulae are correct, including ## \hat{A}^{\dagger}=\hat{A}^{T}=adjoint(\hat{A})=\hat{A}^{-1}## and ##\hat{A}^{\dagger}\hat{A}=1##.

So, I'm stuck with an apparently Hermitian matrix with determinant 1 which does not preserve the length of a vector. Ugh.

With your ##A##, ##A^\dagger \ne A^T##. The operation ##\dagger## is the complex conjugate of the transpose, not the transpose. The transpose is

##A^T = \frac{1}{D} \left( \begin{array}{cc} a & -iy \\ iy & a \end{array} \right)##

Then ##A^\dagger## flips the sign of ##i## to get back to ##A##.

With your adjusted ##A##,

##A \left( \begin{array}{c} \cos(\theta) \\ \sin(\theta) \end{array} \right)##

##= \frac{1}{D} \left( \begin{array}{c} a\cos(\theta) + iy\sin(\theta) \\ a\sin(\theta) - iy\cos(\theta) \end{array} \right)##

The sum of the squares of the terms gives:
##\frac{1}{D^2} \left( a^2 \cos^2(\theta) + 2iya\cos(\theta)\sin(\theta) - y^2 \sin^2(\theta) + a^2 \sin^2(\theta) - 2iya\cos(\theta)\sin(\theta) - y^2 \cos^2(\theta) \right)##
##= \frac{1}{D^2} (a^2 - y^2)##

So if ##D^2 = (a^2 - y^2)##, the sum of the squares is 1.

Note that for a vector ##\left( \begin{array}{c} u \\ v \end{array} \right)##, there is a distinction between ##u^2 + v^2 = 1## and ##|u|^2 + |v|^2 = 1##. An orthogonal matrix preserves the first, and a unitary matrix preserves the second. Your matrix is not unitary, it is orthogonal.
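This distinction can be demonstrated numerically. A sketch with sample values ##a = 2##, ##y = 1##, ##\theta = 0.3## of my own choosing: the orthogonal ##A## preserves the (complex) sum of squares ##u^2 + v^2## but not the sum of squared moduli ##|u|^2 + |v|^2##.

```python
import numpy as np

# Sample values (my choice); A is the orthogonal matrix from earlier posts.
a, y, theta = 2.0, 1.0, 0.3
A = np.array([[a, 1j * y],
              [-1j * y, a]]) / np.sqrt(a**2 - y**2)

v = np.array([np.cos(theta), np.sin(theta)])
u = A @ v

# Sum of squares u^2 + v^2 stays 1; sum of |.|^2 grows to (a^2+y^2)/(a^2-y^2).
print((u**2).sum().real)       # 1.0
print((np.abs(u)**2).sum())    # 5/3 here
```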
 
  • #10
stevendaryl said:
[]

Note that for a vector ##\left( \begin{array}{c} u \\ v \end{array} \right)##, there is a distinction between ##u^2 + v^2 = 1## and ##|u|^2 + |v|^2 = 1##. An orthogonal matrix preserves the first, and a unitary matrix preserves the second. Your matrix is not unitary, it is orthogonal.
Right, that clears it up a bit. So the sum of squares ##V\cdot V=(\hat{A}V)\cdot (\hat{A}V)## is preserved but ##(\hat{A}V)^{*}\cdot (\hat{A}V)## is not?

I now recall that 'length preserving' refers to the inner product (and the 'O' in 'SO(n)' is for 'orthogonal').
I thought the sum of the squared amplitudes was the same thing, i.e. a length. I'm glad to get that sorted.
 
  • #11
I'm answering my own mail because, thanks to @PeterDonis and @stevendaryl I have overcome my difficulties and have a unitary operator. The key thing is that the rows and columns of the operator must be normalised which gives,
\begin{align*}
\hat{A}&= \frac{1}{\sqrt{a^2+y^2}} \left[ \begin{array}{cc}
a & iy \\
-iy & a\end{array} \right]
\end{align*}
Now ##\hat{A}\left( \begin{array}{c} \cos(\theta) \\ \sin(\theta) \end{array} \right) = \frac{1}{\sqrt{a^2+y^2}} \left( \begin{array}{c} a\cos(\theta) + iy\sin(\theta) \\ a\sin(\theta) - iy\cos(\theta) \end{array} \right) = \psi## and clearly ##\psi^*\psi = \frac{(a^2+y^2)(\cos^2(\theta)+\sin^2(\theta))}{a^2+y^2} = 1##.
So probability is preserved at the cost of losing orthogonality. ##\hat{A}## is its own conjugate transpose which is required for unitarity.

Thanks, stevendaryl, for pointing out that the dagger op is the complex conjugate of the transpose, which solved my problem with evolving density matrices.
 
  • #12
Mentz114 said:
##\hat{A}## is its own conjugate transpose which is required for unitarity.

No, that's not what's required for unitarity. A matrix is unitary if its conjugate transpose is its inverse, i.e., ##\hat{A}^\dagger \hat{A} = \hat{I}##. Your matrix is its own conjugate transpose, yes, but it's not its own inverse, so it doesn't satisfy the requirement for unitarity. Your matrix is now neither unitary nor orthogonal (it was orthogonal before, with ##1 / \sqrt{a^2 - y^2}## as the prefactor).
 
  • #13
Mentz114 said:
When the unit vector ##(\cos(\theta),\sin(\theta))## is acted on by ##\hat{A}##, its squared length changes to ##(a^2+y^2)/(a^2-y^2)##. This is unexpected because the determinant of ##\hat{A}## is ##1##.

It's not enough for a matrix to have determinant ##1## in order to preserve the lengths of all vectors. (It's not necessary either; only the absolute value/complex norm of the determinant has to be ##1##.) In order to preserve the lengths of all vectors a matrix must be unitary, essentially by definition. If a matrix ##U## is unitary then ##\lvert \det(U) \rvert = 1##, but not every matrix with ##\lvert \det(A) \rvert = 1## is unitary.

Simple example: ##A = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{bmatrix}## for any ##\lambda \neq 0## with ##\lvert \lambda \rvert \neq 1## has determinant ##1## but isn't unitary.

By the way,
Mentz114 said:
I'm answering my own mail because, thanks to @PeterDonis and @stevendaryl I have overcome my difficulties and have a unitary operator. The key thing is that the rows and columns of the operator must be normalised which gives,
\begin{align*}
\hat{A}&= \frac{1}{\sqrt{a^2+y^2}} \left[ \begin{array}{cc}
a & iy \\
-iy & a\end{array} \right]
\end{align*}
That's not a unitary matrix unless ##y = 0## or ##a = 0##, in other words if ##A = \pm \mathbb{I}## or ##A = \pm \sigma_{\mathrm{y}}##. You only checked that it preserves the lengths of vectors with real-valued coefficients.

In general, the only ##2 \times 2## Hermitian matrices that are unitary are:
  1. ##\pm \mathbb{I}## where ##\mathbb{I}## is the identity.
  2. Linear combinations ##\boldsymbol{n} \cdot \boldsymbol{\sigma} = n_{\mathrm{x}} \sigma_{\mathrm{x}} + n_{\mathrm{y}} \sigma_{\mathrm{y}} + n_{\mathrm{z}} \sigma_{\mathrm{z}}## of the Pauli matrices with ##\lVert \boldsymbol{n} \rVert = \sqrt{{n_{\mathrm{x}}}^{2} + {n_{\mathrm{y}}}^{2} + {n_{\mathrm{z}}}^{2}} = 1##.
There are no ##2 \times 2## Hermitian unitary matrices other than these.
Mentz114 said:
##\hat{A}## is its own conjugate transpose which is required for unitarity.

Unitary matrices don't have to be Hermitian. For example, ##U = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix}## is unitary but not Hermitian.
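The characterization above can be verified numerically. A sketch with a sample unit vector ##\boldsymbol{n}## of my own choosing: ##\boldsymbol{n} \cdot \boldsymbol{\sigma}## is both Hermitian and unitary whenever ##\lVert \boldsymbol{n} \rVert = 1##.

```python
import numpy as np

# The three Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A sample unit vector (my choice): (1/3, 2/3, 2/3).
n = np.array([1.0, 2.0, 2.0]) / 3.0
H = n[0] * sx + n[1] * sy + n[2] * sz

# (n . sigma)^2 = ||n||^2 I = I, so H is Hermitian AND unitary.
print(np.allclose(H, H.conj().T))              # True
print(np.allclose(H.conj().T @ H, np.eye(2)))  # True
```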
 
  • #14
wle said:
In order to preserve the lengths of all vectors a matrix must be unitary,

More precisely, in order to preserve the complex norms of all vectors in a complex vector space, a matrix must be unitary. To preserve the real norms of vectors in a real vector space, a matrix must be orthogonal.
 
  • #15
wle said:
[..]
By the way,

That's not a unitary matrix unless ##y = 0## or ##a = 0##, in other words if ##A = \pm \mathbb{I}## or ##A = \pm \sigma_{\mathrm{y}}##. You only checked that it preserves the lengths of vectors with real-valued coefficients.
[..]
Thanks. I found that it does not work for imaginary coefficients like ##(\cos(\phi),\ i\sin(\phi))##. However, with density matrices it seems OK.
In view of the disqualifications of ##\hat{A}##, is ##\hat{A} \rho {\hat{A}}^{\dagger}## to be considered non-unitary evolution?
 
  • #16
Mentz114 said:
In view of the disqualifications of ##\hat{A}##, is ##\hat{A} \rho {\hat{A}}^{\dagger}## to be considered non-unitary evolution?

Of course, since ##\hat{A}## is not a unitary matrix.
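A small numerical check of this point, using sample values ##a = 2##, ##y = 1## and a sample density matrix of my own choosing (the ##\hat{A}## here is the ##1/\sqrt{a^2+y^2}##-normalized matrix from post #11): evolving ##\rho## by ##\hat{A} \rho \hat{A}^\dagger## with this non-unitary ##\hat{A}## does not even preserve the trace of ##\rho##.

```python
import numpy as np

# Sample values (my choice) and the a^2+y^2-normalized matrix from post #11.
a, y = 2.0, 1.0
A = np.array([[a, 1j * y], [-1j * y, a]]) / np.sqrt(a**2 + y**2)

# A sample pure-state density matrix (an eigenstate of sigma_y, my choice).
rho = np.array([[0.5, -0.5j], [0.5j, 0.5]])
rho_out = A @ rho @ A.conj().T

# Unitary evolution would keep Tr(rho) = 1; here the trace changes.
print(np.trace(rho).real)      # 1.0
print(np.trace(rho_out).real)  # 0.2
```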
 
  • #17
PeterDonis said:
Of course, since ##\hat{A}## is not a unitary matrix.
That's good. Non-unitary evolution is much more interesting!
 

What is the definition of "Solving Operator Problem with Matrix Representation"?

Solving Operator Problem with Matrix Representation refers to the process of using matrices to represent and solve mathematical problems involving operators, which are mathematical symbols that perform specific operations on a given set of numbers or functions.

Why is solving operator problems with matrix representation important in science?

Matrix representation is important in science because it allows for a more efficient and concise way of solving complex mathematical problems involving operators. It also provides a visual representation of the problem, making it easier to understand and manipulate.

What are the steps involved in solving an operator problem with matrix representation?

The steps involved in solving an operator problem with matrix representation include:

  1. Identifying the operator and its corresponding matrix representation
  2. Writing the problem in matrix form
  3. Applying any necessary operations to simplify the matrix
  4. Solving the resulting matrix equation
  5. Interpreting the solution in the context of the original problem
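The steps above can be sketched with a simple operator of my own choosing: finding the eigenvalues of the spin-flip (Pauli-x) operator from its ##2 \times 2## matrix representation.

```python
import numpy as np

# Steps 1-2: write the operator in matrix form (here the Pauli-x operator,
# which swaps the two basis states).
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

# Step 4: solve the resulting matrix (eigenvalue) problem.
vals, vecs = np.linalg.eigh(sigma_x)

# Step 5: interpret -- the operator's possible measured values are -1 and +1.
print(vals)  # [-1.  1.]
```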

Can any operator problem be solved using matrix representation?

No, not all operator problems can be solved using matrix representation. This method is most effective for linear operators, which follow the properties of addition and multiplication. Non-linear operators may require different methods of solving.

Are there any limitations to solving operator problems with matrix representation?

One limitation of using matrix representation is that a finite matrix can only represent an operator exactly on a finite-dimensional space, so infinite-dimensional problems must be truncated, introducing approximation error. Additionally, some problems may require a large number of matrix operations, making the process more time-consuming and complex.
