Law of transformation of vectors due to rotations


Discussion Overview

The discussion centers on the transformation of vectors due to rotations, particularly in the context of Lie groups and algebras as applied in quantum mechanics. Participants explore the mathematical formulations and implications of vector transformations under unitary operations, addressing both classical and quantum perspectives.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents the transformation law for vectors under rotations, expressing it as \( U^{\dagger}(R)V_{\alpha}U(R) = R_{\alpha \beta}V_{\beta} \), and seeks clarification on the rationale behind this formulation.
  • Another participant explains that classical vectors transform like coordinates, while quantum operators transform via unitary transformations, leading to a comparison of expectation values that yields the transformation law for vector operators.
  • Discussion includes the nature of the rotation matrix \( R_{\alpha \beta} \) and its relationship to infinitesimal transformations, with references to the generators of rotations and their properties.
  • Participants debate the signs used in the unitary transformations \( U(R) \) and \( U^{\dagger}(R) \), noting that the choice is arbitrary but must be consistent, especially when dealing with vector fields.
  • One participant questions the validity of a transformation involving the commutator \( [V_{\alpha},J_{\mu\nu}] \) and the implications of the antisymmetry of \( \omega_{\mu\nu} \) in their calculations.
  • Another participant clarifies that the antisymmetry of \( \omega_{\mu\nu} \) must be considered when manipulating equations, emphasizing the importance of maintaining tensorial properties in the equations presented.

Areas of Agreement / Disagreement

Participants express differing views on the transformation laws and the implications of the signs in unitary transformations. There is no consensus on the best approach to the commutation relations and the handling of antisymmetric properties in the equations.

Contextual Notes

Participants note that the discussion involves both classical and quantum perspectives, and the transformations discussed are primarily infinitesimal. The implications of the antisymmetry of certain parameters and the definitions of vector fields are also highlighted as critical to the discussion.

LagrangeEuler
I am currently studying applications of Lie groups and algebras in quantum mechanics.
$$U^{\dagger}(R)V_{\alpha}U(R)=\sum_{\beta}R_{\alpha \beta}V_{\beta}$$
where ##U(R)## represents a rotation. The letter ##U## is used because it is a unitary transformation, and ##R_{\alpha \beta}## are the matrix elements of the rotation matrix. Why do vectors transform this way? Is there an explanation?
It is also interesting to me that
$$R_{\alpha \beta}=\delta_{\alpha \beta}+\omega_{\alpha \beta}$$
and from that
$$U(R)=I+\frac{i}{2}\sum_{\mu \nu}\omega_{\mu \nu}J_{\mu \nu}$$
$$U^{\dagger}(R)=I-\frac{i}{2}\sum_{\mu \nu}\omega_{\mu \nu}J_{\mu \nu}$$
where ##\omega## is a parameter and ##J## is the generator of rotations. Second question: how do I know which to take for ##U(R)## and which for ##U^{\dagger}(R)##, i.e. the + or the − sign?
 
LagrangeEuler said:
Why do vectors transform this way? Is there an explanation?
As a classical vector on the relevant index space (in your case a vector on ##\mathbb{R}^{3}##), ##V^{\mu}## transforms exactly like the coordinates do, i.e. (repeated indices are summed over)
$$V^{\mu} \to R^{\mu}{}_{\nu} \, V^{\nu} , \qquad \mbox{where} \qquad R^{\mu}{}_{\nu} = \frac{\partial \bar{x}^{\mu}}{\partial x^{\nu}} \ .$$
But as an operator, ##V^{\mu}## transforms (for each value of the index ##\mu##) like quantum-mechanical operators do, i.e., by a unitary transformation
$$V^{\mu} \to U^{\dagger} V^{\mu} U \ .$$
This means that the expectation value of each component of ##V^{\mu}## in the state ##|\psi \rangle## transforms as
$$\langle \psi |V^{\mu} | \psi \rangle \to \langle \psi |U^{\dagger} V^{\mu} U |\psi \rangle \ . \ \ \ \ (1)$$
But for each ##\mu## the expectation value is a c-number (a classical quantity). So ##\langle \psi |V^{\mu} | \psi \rangle## should also transform like a classical vector on our index space, i.e.,
$$\langle \psi |V^{\mu} | \psi \rangle \to R^{\mu}{}_{\nu} \langle \psi |V^{\nu} | \psi \rangle \ . \ \ \ (2)$$
Now, comparing (1) with (2) and knowing that ##|\psi \rangle## is an arbitrary state, we obtain the transformation law for the vector operator (i.e., q-number)
$$U^{\dagger}(R) \, V^{\mu}\, U(R) = R^{\mu}{}_{\nu} \, V^{\nu} \ . \ \ \ (3)$$
Group-theoretically speaking, this equation tells you how the (finite-dimensional) matrix representation ##R## of the group in question is related to the (not necessarily finite-dimensional) unitary operator representation of the same group on the Hilbert space of states.

How do I know which to take for ##U(R)## and which for ##U^{\dagger}(R)##, i.e. the + or the − sign?
The sign is completely arbitrary. However, you need to be careful when ##V^{\mu} = V^{\mu}(x)## is a vector field. In this case, Eq. (3) should read
$$U^{\dagger}(R) \, V^{\mu}(x)\, U(R) = R^{\mu}{}_{\nu} \, V^{\nu} \left(R^{-1} x \right) \ .$$
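Eq. (3) can be checked numerically. Below is a minimal sketch (not from the thread): it takes the spin-1 angular-momentum matrices as a concrete vector operator (##\vec{J}## is itself a vector operator) and the sign convention ##U = \exp(-i\phi\, \hat{J}_z)##; as noted above, the opposite sign just corresponds to rotating by ##-\phi##.

```python
import numpy as np
from scipy.linalg import expm

s = 1 / np.sqrt(2)
# Spin-1 angular-momentum matrices (hbar = 1) in the |m = 1, 0, -1> basis.
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
J = [Jx, Jy, Jz]

phi = 0.7
c, sn = np.cos(phi), np.sin(phi)
# Active rotation about the z axis by phi, acting on classical 3-vectors.
R = np.array([[c, -sn, 0], [sn, c, 0], [0, 0, 1]])

# Unitary rotation operator on the Hilbert space (sign convention as above).
U = expm(-1j * phi * Jz)

# J is a vector operator, so it must satisfy U^dag J_a U = R_ab J_b.
for a in range(3):
    lhs = U.conj().T @ J[a] @ U
    rhs = sum(R[a, b] * J[b] for b in range(3))
    assert np.allclose(lhs, rhs)
```

The same check works for any axis ##\vec{n}##, with ##U = \exp(-i\phi\, \vec{n}\cdot\hat{\vec{J}})## and ##R## the corresponding classical rotation matrix.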
 
LagrangeEuler said:
It is also interesting to me that
$$R_{\alpha \beta}=\delta_{\alpha \beta}+\omega_{\alpha \beta}$$
and from that
$$U(R)=I+\frac{i}{2}\sum_{\mu \nu}\omega_{\mu \nu}J_{\mu \nu}$$
$$U^{\dagger}(R)=I-\frac{i}{2}\sum_{\mu \nu}\omega_{\mu \nu}J_{\mu \nu}$$
where ##\omega## is a parameter and ##J## is the generator of rotations. Second question: how do I know which to take for ##U(R)## and which for ##U^{\dagger}(R)##, i.e. the + or the − sign?
In addition to what @samalkhaiat has already written in #2, you should note that these are only "infinitesimal transformations". What you have here are the generators ##\hat{J}_{\mu \nu}## of rotations, i.e., a basis of the Lie algebra of the rotation group. You have ##\hat{J}_{\mu \nu}=-\hat{J}_{\nu \mu}##, so you can just as well use the usual angular-momentum operators,
$$\hat{J}_{\rho}=\frac{1}{2} \epsilon_{\rho \mu \nu} \hat{J}_{\mu \nu}.$$
Here and in the following I use the Einstein summation convention, according to which repeated indices are summed over. I also do not distinguish between co- and contravariant components, since this is the Euclidean case with a Cartesian basis.

You get from the infinitesimal transformations to finite transformations by exponentiation. For a rotation about an axis in direction ##\vec{n}## by an angle ##\phi## (in the sense of the right-hand rule), the corresponding unitary operator is given by
$$\hat{U}(\vec{n},\phi)=\exp(\mathrm{i} \phi \vec{n} \cdot \hat{\vec{J}}).$$
It's easy to see that this is a unitary operator if all the ##\hat{\vec{J}}## are self-adjoint, because then
$$\hat{U}^{\dagger}(\vec{n},\phi)=\exp(-\mathrm{i} \phi \vec{n} \cdot \hat{\vec{J}})=\hat{U}^{-1}(\vec{n},\phi).$$
You can also show, by analyzing how rotations compose, that the generators fulfill the angular-momentum algebra,
$$[\hat{J}_{\mu},\hat{J}_{\nu}]=\mathrm{i} \epsilon_{\mu \nu \rho} \hat{J}_{\rho},$$
and from these commutation relations you can construct all irreducible representations of the rotation group (or rather its covering group SU(2) which then introduces half-integer spin representations, a notion that is a pure quantum concept, not known within classical physics).
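Both claims of this post can be verified in a few lines for a concrete representation. The sketch below (my own check, not part of the thread) uses the spin-1 matrices: it confirms the commutation relations ##[\hat{J}_{\mu},\hat{J}_{\nu}]=\mathrm{i}\epsilon_{\mu\nu\rho}\hat{J}_{\rho}## and that ##\exp(\mathrm{i}\phi\,\vec{n}\cdot\hat{\vec{J}})## is unitary; the axis and angle chosen are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

s = 1 / np.sqrt(2)
# Spin-1 matrices: a concrete irreducible representation of the algebra.
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
J = np.array([Jx, Jy, Jz])

# Levi-Civita symbol eps[mu, nu, rho].
eps = np.zeros((3, 3, 3))
for (m, n, r), sgn in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[m, n, r] = sgn

# Check [J_mu, J_nu] = i eps_{mu nu rho} J_rho for all index pairs.
for m in range(3):
    for n in range(3):
        comm = J[m] @ J[n] - J[n] @ J[m]
        assert np.allclose(comm, 1j * np.einsum('r,rij->ij', eps[m, n], J))

# U(n, phi) = exp(i phi n.J) is unitary because the J's are self-adjoint.
n_hat, phi = np.array([0.0, 0.6, 0.8]), 1.3   # arbitrary axis and angle
U = expm(1j * phi * np.einsum('k,kij->ij', n_hat, J))
assert np.allclose(U @ U.conj().T, np.eye(3))
```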
 
Thanks a lot. I have one more question.
In the calculation
$$\frac{i}{2}\sum_{\mu\nu}\omega_{\mu\nu}[V_{\alpha},J_{\mu\nu}]=\sum_{\mu}\omega_{\alpha \mu}V_{\mu},$$
if one wants to obtain the commutator ##[V_{\alpha},J_{\mu\nu}]##, why can one not transform the right-hand side as
$$\sum_{\mu}\omega_{\alpha \mu}V_{\mu}=\sum_{\mu \nu}\omega_{\nu \mu}\delta_{\alpha \nu}V_{\mu}=-\sum_{\mu \nu}\omega_{\mu \nu}\delta_{\alpha \nu}V_{\mu}$$
and then
$$\frac{i}{2}[V_{\alpha},J_{\mu\nu}]=-\delta_{\alpha \nu}V_{\mu}?$$
Is it because ##\omega_{\alpha \beta}## is antisymmetric and ##\delta_{\alpha \beta}## is symmetric?
 
How do you come to this conclusion? You have to consider the ##\omega_{\mu \nu}## as varying independently (within the constraint that they are antisymmetric under interchange of their indices; any symmetric part of the ##\omega_{\mu \nu}## cannot contribute to the left-hand side, because the angular-momentum operators are antisymmetric under exchange of their indices). So the logic is rather
$$\frac{\mathrm{i}}{2} \sum_{\mu \nu} \omega_{\mu \nu} [V_\alpha,J_{\mu \nu}] = \sum_{\mu \nu} \omega_{\nu \mu} \delta_{\nu \alpha} V_{\mu}= \frac{1}{2} \sum_{\mu \nu} \omega_{\nu \mu} (\delta_{\nu \alpha} V_{\mu}-\delta_{\mu \alpha} V_{\nu}) ,$$
from which you get (using the antisymmetry of the ##\omega_{\mu \nu}##)
$$\mathrm{i} [V_{\alpha}, J_{\mu \nu}] = (\delta_{\mu \alpha} V_{\nu} - \delta_{\nu \alpha} V_{\mu}).$$
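The final identity can be verified numerically. The sketch below (my own check, not from the thread) takes ##V = \hat{\vec{J}}## in the spin-1 representation, builds the pair-index generators via ##J_{\mu\nu} = \epsilon_{\mu\nu\rho}\hat{J}_{\rho}## (the inverse of the relation ##\hat{J}_{\rho}=\tfrac{1}{2}\epsilon_{\rho\mu\nu}\hat{J}_{\mu\nu}## used earlier), and tests ##\mathrm{i}[V_{\alpha},J_{\mu\nu}] = \delta_{\mu\alpha}V_{\nu}-\delta_{\nu\alpha}V_{\mu}## for all index values.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Spin-1 matrices: J is itself a vector operator, so we can use V = J.
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
J = np.array([Jx, Jy, Jz])

# Levi-Civita symbol eps[mu, nu, rho].
eps = np.zeros((3, 3, 3))
for (m, n, r), sgn in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[m, n, r] = sgn

# Pair-index generators J_{mu nu} = eps_{mu nu rho} J_rho.
Jpair = np.einsum('mnr,rij->mnij', eps, J)

delta = np.eye(3)  # delta_{mu alpha} on the index space
# Verify i [V_alpha, J_{mu nu}] = delta_{mu alpha} V_nu - delta_{nu alpha} V_mu.
for a in range(3):
    for m in range(3):
        for n in range(3):
            lhs = 1j * (J[a] @ Jpair[m, n] - Jpair[m, n] @ J[a])
            rhs = delta[m, a] * J[n] - delta[n, a] * J[m]
            assert np.allclose(lhs, rhs)
```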
 
LagrangeEuler said:
why can one not transform
and then
$$\frac{i}{2}[V_{\alpha},J_{\mu\nu}]=-\delta_{\alpha \nu}V_{\mu}$$

Because you end up with a wrong equation if you factor out ##\omega_{\mu\nu}## before taking into consideration the antisymmetry of ##\omega_{\mu\nu}##, as explained to you by vanhees71. As a piece of advice: you should always check that your equations make sense. It is very clear that your equation is wrong, because (1) the left-hand side is antisymmetric in ##\mu\nu##, while the right-hand side has no definite symmetry with respect to ##\mu\nu##, and that is a wrong thing to have in a tensorial equation, and (2) as a consequence of (1), if you multiply both sides of your equation by the symmetric (##\mathbb{R}^{3}##-metric) tensor ##\delta_{\mu\nu}## and sum over ##\mu\nu##, you obtain ##0 = V_{\alpha}## for all values of ##\alpha##, i.e., your equation is "correct" only for the zero vector, so there is nothing good about it.

So, if you encounter an equation of the form
$$\sum_{\mu\nu} \omega_{\mu\nu} J_{\mu\nu} = \sum_{\mu\nu} \omega_{\mu\nu} A_{\mu\nu} \ , \ \ \ \ (1)$$
with ##\omega_{\mu\nu} = - \omega_{\nu\mu}## and ##J_{\mu\nu} = - J_{\nu\mu}##, and you want to factor out the ##\omega##, what do you do? If you know that ##A_{\mu\nu} = - A_{\nu\mu}##, then you simply get ##J_{\mu\nu} = A_{\mu\nu}##. But if ##A_{\mu\nu} \neq - A_{\nu\mu}##, then you have to antisymmetrise ##A_{\mu\nu}## before factoring out the ##\omega##, because ##\omega_{\mu\nu}## (being antisymmetric) kills the symmetric part of ##A_{\mu\nu}##: any tensor ##A_{\mu\nu}## can be uniquely decomposed into symmetric and antisymmetric parts,
$$A_{\mu\nu} = \frac{1}{2} \left( A_{\mu\nu} + A_{\nu\mu}\right) + \frac{1}{2} \left( A_{\mu\nu} - A_{\nu\mu}\right) \ .$$
Now if you contract this with ##\omega_{\mu\nu}##, the first term on the right-hand side vanishes because it is symmetric, and you get
$$\sum_{\mu\nu} \omega_{\mu\nu} A_{\mu\nu} = \frac{1}{2} \sum_{\mu\nu} \omega_{\mu\nu} \left( A_{\mu\nu} - A_{\nu\mu}\right) \ . \ \ \ (2)$$
Now, and only now, you can factor out ##\omega_{\mu\nu}## from (1) and (2) to obtain a valid tensor equation,
$$J_{\mu\nu} = \frac{1}{2} \left( A_{\mu\nu} - A_{\nu\mu} \right) .$$
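The key fact used here, that contraction with an antisymmetric ##\omega## annihilates the symmetric part of any tensor, is easy to confirm with random matrices. A minimal sketch (my own illustration, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
omega = M - M.T              # a generic antisymmetric omega_{mu nu}
A = rng.normal(size=(3, 3))  # a generic tensor with no definite symmetry

# Contract omega with A, with its antisymmetric part, and with its symmetric part.
full = np.einsum('mn,mn->', omega, A)
antisym_part = np.einsum('mn,mn->', omega, 0.5 * (A - A.T))
sym_part = np.einsum('mn,mn->', omega, 0.5 * (A + A.T))

assert np.isclose(full, antisym_part)  # only the antisymmetric part survives
assert np.isclose(sym_part, 0.0)       # omega kills the symmetric part
```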
 
