Understanding Pauli X Matrix in Z & X Bases

  • Context: Graduate
  • Thread starter: quietrain
  • Tags: Bases, Matrix, Pauli

Discussion Overview

The discussion revolves around the properties and representations of the Pauli X matrix in different bases, specifically the Z and X bases. Participants explore the mathematical relationships between these representations and seek clarification on the definitions and transformations involved.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants question why the Pauli X matrix is represented differently in the Z and X bases, specifically noting the matrices 0 1; 1 0 and 1 0; 0 -1.
  • Others suggest that the Pauli matrices are specific matrices associated with linear operators in different bases, and that understanding these relationships requires knowledge of linear algebra.
  • A participant introduces a transformation matrix U that relates the Z basis to the X basis, indicating that U can be used to convert between these bases.
  • There is confusion regarding the normalization of the transformation matrix and how to determine which basis a matrix belongs to, with some participants expressing uncertainty about the definitions of the bases.
  • One participant emphasizes that a matrix itself does not inherently belong to a basis; rather, it is the context of the linear operator and the basis that defines its representation.
  • Several participants discuss the conditions under which the transformation matrix U can be derived, with some proposing alternative methods to find U.
  • Concerns are raised about the clarity of the explanations provided, with some participants feeling overwhelmed and seeking further clarification on the concepts discussed.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and agreement on the definitions and transformations related to the Pauli matrices. There is no consensus on the clarity of the explanations or the methods for deriving the transformation matrix.

Contextual Notes

Some participants mention the need for a deeper understanding of linear algebra concepts, such as matrix multiplication and vector spaces, to fully grasp the discussion. There are also unresolved questions regarding the normalization of matrices and the specific definitions of the bases.

Who May Find This Useful

This discussion may be useful for students and practitioners in quantum mechanics, linear algebra, and those interested in the mathematical foundations of quantum operators and their representations in different bases.

quietrain
hi, can anyone enlighten me why the Pauli X matrix in the z basis is given as
0 1
1 0

while in the x basis it is given as
1 0
0 -1

Is there a formula or something to calculate this? And how does one know that a matrix is in the z basis or the x basis? Thanks a lot!
 
I would say that the Pauli matrices are specifically the three matrices at the top of this page. You must be talking about the components of the operator Sx in two different bases.

The relationship between linear operators and matrices is described here. You can use that to find the components of Sx in the x and z bases. I recommend that you try it.
 
Observe that with U=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} we have
U \sigma_3U^*=\sigma_1

Therefore U transforms from z basis to x basis. Now calculate U\sigma_1U^*.
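A quick numerical check of this claim, as a minimal numpy sketch (not part of the original posts):

```python
import numpy as np

# Pauli matrices sigma_3 (z) and sigma_1 (x)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)

# The transformation matrix from the post (the Hadamard matrix)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# U sigma_3 U* = sigma_1, as claimed
print(np.allclose(U @ s3 @ U.conj().T, s1))  # True

# ... and U sigma_1 U* gives back sigma_3
print(np.allclose(U @ s1 @ U.conj().T, s3))  # True
```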
 
arkajad said:
Observe that with U=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} we have
U \sigma_3U^*=\sigma_1

Therefore U transforms from z basis to x basis. Now calculate U\sigma_1U^*.

Wow, by using your formula I can get from one basis to another, but what is U and how do I get it? It looks kind of similar to the normalized |+x> eigenvector of the Pauli X matrix. Is there some kind of link?
 
Fredrik said:
I would say that the Pauli matrices are specifically the three matrices at the top of this page. You must be talking about the components of the operator Sx in two different bases.

The relationship between linear operators and matrices is described here. You can use that to find the components of Sx in the x and z bases. I recommend that you try it.

Yes, those are the Pauli matrices; I don't really know what I'm talking about either :X

my lecturer says that the Pauli matrices in the Z basis { |+x> , |-x> } are the ones you stated in your link

but she also says that the Pauli matrices in the X basis { |+x> , |-x> } are
1 0
0 -1,

0 i
-i 0

0 1
1 0

which I have no idea what she is talking about. If it is the Z basis, shouldn't it be { |+z> , |-z> } instead of { |+x> , |-x> }? In fact, what is { |+x> , |-x> }? Does that mean the X basis?

I don't understand your second link either... sorry for being dense.
 
quietrain said:
Wow, by using your formula I can get from one basis to another, but what is U and how do I get it?

You get it by finding the simplest solution of the equation:

U\sigma_3=\sigma_1U

where

U=\begin{pmatrix}a&b\\c&d\end{pmatrix}

and then normalizing it by requiring UU^*=I.
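Worked out componentwise: with U = (a b; c d), the equation U\sigma_3=\sigma_1U reads (a -b; c -d) = (c d; a b), so c = a and d = -b, and the simplest choice is a = b = 1. A minimal numpy sketch of this, including the normalization (not from the original posts):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)

# Simplest unnormalized solution of U s3 = s1 U: take a = b = 1, c = a, d = -b
V = np.array([[1, 1], [1, -1]], dtype=complex)
print(np.allclose(V @ s3, s1 @ V))  # the defining equation holds: True

# Normalize so that U U* = I: since V V* = 2 I, divide by sqrt(2), not by 2
U = V / np.sqrt(2)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True
```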
 
arkajad said:
You get it by finding the simplest solution of the equation:

U\sigma_3=\sigma_1U

where

U=\begin{pmatrix}a&b\\c&d\end{pmatrix}

and then normalizing it by requiring UU^*=I.

OK, so I worked out a, b, c, d from a lot of simultaneous equations and finally found
1 1
1 -1

but normalizing it would mean 1/sqrt(1+1+1+1) = 1/2? How come it's 1/sqrt(2)?

Oh, and how do I tell that
0 1
1 0
is in the z basis, and thus the other matrix is in the x basis?
 
quietrain said:
but normalizing it would mean 1/sqrt(1+1+1+1) = 1/2? how come its 1/sqrt 2?

Because you want UU^*=I. With U=a\begin{pmatrix}1&1\\1&-1\end{pmatrix} you get UU^*=2a^2I, so a=1/\sqrt{2}: the normalization makes each row a unit vector, not the sum of all four squared entries.

oh, and how do i tell that
0 1
1 0 is in the z basis

Because this is one of the Pauli matrices in the representation where \sigma_z is diagonal.

and thus the other matrix is in the x basis?

Because U maps \sigma_3 to \sigma_1 - it maps eigenvectors of \sigma_3 to eigenvectors of \sigma_1. U is a similarity transformation from z basis to x basis.
 
arkajad said:
Because you want UU*=I.



Because this is one of the Pauli's matrices with \sigma_z being diagonal.



Because U maps \sigma_3 to \sigma_1 - it maps eigenvectors of \sigma_3 to eigenvectors of \sigma_1. U is a similarity transformation from z basis to x basis.

:X... Thanks a lot, let me digest this now; I feel so confused...
 
  • #10
quietrain said:
the pauli matrix in the Z basis
A matrix is just an array of numbers. It doesn't involve a basis.

However, as I explained, every basis (actually every pair of bases) defines a way to associate a matrix with each linear operator. That's why I have to assume that she's talking about the components of a linear operator in different bases, not about a matrix in different bases. The latter doesn't make sense, at least not without some additional information.

quietrain said:
i don't understand your 2nd link too
It's the single most important result from linear algebra, explained in a way that doesn't require you to understand anything more difficult than the terms "vector space", "linear" and "basis". So you should definitely try again.

Edit: You need one more thing: The definition of matrix multiplication. I suspect that's what's missing. If A and B are matrices such that the number of columns of A is the same as the number of rows of B, the product AB is defined by (AB)_{ij}=A_{ik}B_{kj}. Note that the right-hand side should be interpreted as \sum_k A_{ik}B_{kj}. (I explained that convention in that other post).
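As a concrete illustration of that definition, here is a minimal Python sketch (not from the original posts) that writes the sum over k out explicitly and checks it against numpy's built-in product:

```python
import numpy as np

def matmul_explicit(A, B):
    """(AB)_ij = sum_k A_ik B_kj, written out with explicit loops."""
    n, m = A.shape    # A is n x m
    m2, p = B.shape   # B is m x p; columns of A must match rows of B
    assert m == m2
    C = np.zeros((n, p), dtype=A.dtype)
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(matmul_explicit(A, B), A @ B))  # True
```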
 
Last edited:
  • #11
Have you given it another shot yet? If anything there is causing you problems, let me know what, and I'll explain it.
 
  • #12
arkajad said:
You get it by finding the simplest solution of the equation:

U\sigma_3=\sigma_1U

where

U=\begin{pmatrix}a&b\\c&d\end{pmatrix}

and then normalizing it by requiring UU^*=I.
Hi,

I wonder if you can clarify a bit on how you come up with the relation
U\sigma_3=\sigma_1U

I think that there exists a matrix U that maps \sigma_3 to \sigma_1

then we can write U\sigma_3=\sigma_1

and hence U = \sigma_1 \sigma_3^{-1}

But I have a feeling that my method is not correct. I prefer your way, since it makes it easy to see that \sigma_i^2=I in any basis.

Thanks,
 
  • #13
You want to find U such that

U\sigma_3U^{-1}=\sigma_1

Now multiply both sides by U from the right and you end with

U\sigma_3=\sigma_1 U
 
  • #14
arkajad said:
You want to find U such that

U\sigma_3U^{-1}=\sigma_1

Now multiply both sides by U from the right and you end with

U\sigma_3=\sigma_1 U

Oh, my question was how you can come up with such a transformation; that is, finding a U that satisfies U\sigma_3U^{-1}=\sigma_1 instead of a U that satisfies U\sigma_3=\sigma_1.

Thanks,
 
  • #15
If \psi is an eigenvector of \sigma_3, you want U\psi to be an eigenvector of \sigma_1. Suppose \sigma_3\psi=\lambda\psi; then, with my formula, you get

\sigma_1U\psi=U\sigma_3U^{-1}U\psi=U\sigma_3\psi=U\lambda\psi=\lambda U\psi

That's what you want: U maps eigenvectors of \sigma_3 to eigenvectors of \sigma_1. You will not get this result with your formula.
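A quick numeric spot-check of that eigenvector argument, as a minimal numpy sketch using the U from earlier in the thread (not part of the original posts):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# psi = (1, 0) is the eigenvector of sigma_3 with eigenvalue +1
psi = np.array([1, 0], dtype=complex)
lam = 1

# U psi should be an eigenvector of sigma_1 with the same eigenvalue
print(np.allclose(s1 @ (U @ psi), lam * (U @ psi)))  # True
```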
 
  • #16
Fredrik said:
Have you given it another shot yet? If anything there is causing you problems, let me know what, and I'll explain it.

.. I think I have to revisit all the lecture videos during the holidays to understand what QM is about :X Thanks for the help though.
 
  • #17
quietrain said:
.. I think I have to revisit all the lecture videos during the holidays to understand what QM is about :X Thanks for the help though.
We were talking about the relationship between linear operators and matrices. If you find anything that seems difficult in the post I linked to, it has to be because you have expectations that prevent you from seeing that every detail in that post is trivial.

I can relate to that. I once spent an hour not understanding how to prove that an identity element of a Banach algebra must be unique. The proof looks like this: 1'=1'1=1. I can't think of a reason why I didn't see that immediately, other than that I expected the proof to be hard because it was a problem in a difficult book on functional analysis. I'm pretty sure that if you just decide to trust me when I say that there's nothing difficult in that post, you will understand it completely in 10 minutes. If you understand the concepts "matrix multiplication" and "basis of a vector space" well, it might take you less than 2 minutes.
 
