The potential function:
[tex]V(x_r,y_r,\theta_r,x_g,y_g,\theta_g,x_b,y_b,\theta_b) = 3[/tex]
[tex] - \exp(-x_r^2-y_r^2)(\cos(\theta_r) + \sqrt{3}\sin(\theta_r))[/tex]
[tex] - \exp(-x_g^2-y_g^2)(\cos(\theta_g) + \sqrt{3}\sin(\theta_g))[/tex]
[tex] - \exp(-x_b^2-y_b^2)(\cos(\theta_b) + \sqrt{3}\sin(\theta_b))[/tex]
where the nine variables are real numbers, and are subject to the subsidiary conditions:
[tex]x_r + x_g + x_b = 0, [/tex]
[tex] y_r + y_g + y_b = 0,[/tex]
[tex] \theta_r + \theta_g + \theta_b = 0,[/tex]
What continuous symmetry does V have in the neighborhood of [tex](0,0,0,0,0,0,0,0,0)[/tex]? By the way, I know that the minimum value of V is 0, and that it is achieved at only two places: the origin, and again at [tex](0,0,2\pi/3,0,0,2\pi/3,0,0,-4\pi/3)[/tex].
By "neighborhood of" I do not mean to ask for the symmetry exactly at the origin, which is an unusual point in terms of the vanishing of derivatives; rather, I mean that taking a series expansion to 2nd order around the origin makes sense.
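For concreteness, the 2nd-order data at the origin is easy to pull out with sympy (a quick sketch, with my own variable names):

```python
import sympy as sp

# 2nd-order expansion of V around the origin.
xs = sp.symbols('x_r y_r th_r x_g y_g th_g x_b y_b th_b')
x_r, y_r, th_r, x_g, y_g, th_g, x_b, y_b, th_b = xs

def color_term(x, y, th):
    return sp.exp(-x**2 - y**2) * (sp.cos(th) + sp.sqrt(3) * sp.sin(th))

V = 3 - color_term(x_r, y_r, th_r) - color_term(x_g, y_g, th_g) \
      - color_term(x_b, y_b, th_b)

origin = {v: 0 for v in xs}
grad = sp.Matrix([V.diff(v) for v in xs]).subs(origin)
H = sp.hessian(V, xs).subs(origin)

print(grad.T)  # -sqrt(3) on each theta entry, zero elsewhere
print(H)       # diag(2, 2, 1, 2, 2, 1, 2, 2, 1)
```

The linear term is proportional to [tex]\theta_r + \theta_g + \theta_b[/tex], so it dies on the constraint surface, and the quadratic part there is [tex]\sum_c (x_c^2 + y_c^2 + \theta_c^2/2)[/tex].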
It is obvious that if you ignore the subsidiary conditions, one can rotate [tex]x_r[/tex] and [tex]y_r[/tex] into each other. You can do the same to the g and b pairs, and since the conditions are linear and a common rotation maps a zero sum to a zero sum, rotating r, g, and b simultaneously by the same angle preserves the subsidiary conditions.
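Here is a cheap numerical check of that invariance; the test point below is arbitrary except that it satisfies the three zero-sum conditions:

```python
import math

SQ3 = math.sqrt(3)

def V(coords):
    """coords = [(x_r, y_r, th_r), (x_g, y_g, th_g), (x_b, y_b, th_b)]"""
    return 3.0 - sum(math.exp(-x*x - y*y) * (math.cos(th) + SQ3 * math.sin(th))
                     for x, y, th in coords)

def rotate_xy(coords, phi):
    """Rotate every (x, y) pair by the same angle phi; thetas untouched."""
    c, s = math.cos(phi), math.sin(phi)
    return [(c*x - s*y, s*x + c*y, th) for x, y, th in coords]

# an arbitrary point satisfying the three subsidiary (zero-sum) conditions
pt = [(0.3, -0.1, 0.2), (-0.2, 0.4, -0.5), (-0.1, -0.3, 0.3)]
rot = rotate_xy(pt, 0.7)
print(abs(V(rot) - V(pt)))        # ~0: V is invariant
print(sum(x for x, _, _ in rot))  # ~0: constraint still holds (rotation is linear)
```

V only sees each pair through [tex]x_c^2 + y_c^2[/tex], which the rotation preserves, so the agreement is exact up to floating point.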
And it is clear that one can define an infinitesimal rotation of r into g into b, if one adjusts the amplitudes of the three terms with the appropriate functions of the thetas.
My natural inclination is to make linear approximations around some arbitrary point (a,b,c,d,e,f), and write down the infinitesimal generators by solving the linear algebra problem. Then I can calculate the commutators. But at that point I'm not sure what I'll do next. Probably I'll go searching around for my copy of Georgi.
But I'm wondering if, when you have an explicit formula, there is an easy way of programming my computer to work out the symmetry.
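At least at quadratic order, one way to let the computer do it is to eliminate the b-components via the constraints, take the quadratic form from the 2nd-order expansion, and ask sympy for all matrices A with [tex]A^T Q + Q A = 0[/tex], which generate the linear symmetries of that form. A sketch (my own setup, so treat it as a starting point rather than gospel):

```python
import sympy as sp

# Six coordinates on the constraint surface; the b-variables are -(r + g).
coords = sp.symbols('x_r x_g y_r y_g t_r t_g', real=True)
x_r, x_g, y_r, y_g, t_r, t_g = coords

def elim(u, w):  # u^2 + w^2 + (u + w)^2, the b-variable being -(u + w)
    return u**2 + w**2 + (u + w)**2

# quadratic part of V on the constraint surface: sum(x^2 + y^2 + theta^2/2)
V2 = elim(x_r, x_g) + elim(y_r, y_g) + elim(t_r, t_g) / 2
Q = sp.hessian(V2, coords) / 2   # Gram matrix: V2 = v^T Q v

# Infinitesimal linear symmetries: matrices A with A^T Q + Q A = 0.
a = sp.symbols('a0:36')
A = sp.Matrix(6, 6, list(a))
sol = sp.solve(list(A.T * Q + Q * A), a)
free = [s for s in a if s not in sol]
print(len(free))   # dimension of the algebra
```

Since Q is positive definite, this count comes out as dim so(6) = 15; the quadratic order alone can't see less than the full rotation algebra of a definite form. To get the symmetry of V itself, one would then impose invariance of the higher-order terms as well, which cuts this algebra down, and that is where computing commutators of the survivors becomes the interesting part.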
Carl