From Simple Groups to Quantum Field Theory

In summary, the conversation discusses the "limit" definition of the exponential and Joe's expression for exponentiating a bivector, and explores the relationship between su(2) generators and Euler's formula. It also covers the permutation (Levi-Civita) symbol and the Kronecker delta, together with their generalizations, and different kinds of derivatives, such as partial derivatives and total derivatives. The usefulness of the divergence and curl in the context of differential equations is also mentioned.
  • #1
Mehdi_
Originally Posted by garrett:
using the "limit" definition for the exponential. This is an exact expression for the rotation of a vector by a bivector. In three dimensions an arbitrary bivector, B, can be written as
[tex]B = \theta b[/tex]
an amplitude, [itex]\theta[/itex], multiplying a unit bivector, [itex]b[/itex], encoding the orientation, with [itex]bb=-1[/itex]. The exponential can then be written using Joe's expression for exponentiating a bivector:
[tex]U = e^{\frac{1}{2} B} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)[/tex]

we can then write:
[tex]U = e^{\frac{1}{2} \theta b} = \cos(\frac{1}{2} \theta) + b \sin(\frac{1}{2} \theta)[/tex]
Setting [itex]r=\frac{\theta}{2}[/itex], as in Joe's expression (the rotor angle is always half the rotation angle):
[tex]U = e^{br} = \cos(r) + b \sin(r)[/tex]
and
[tex]\begin{align*}U=e^T & = I \left(1 - \frac{1}{2!}r^2 + \frac{1}{4!}r^4 - \frac{1}{6!}r^6 + \dots\right) + T \left(1 - \frac{1}{3!}r^2 + \frac{1}{5!}r^4 - \frac{1}{7!}r^6 + \dots \right) \\ & = I \cos(r) + \frac{1}{r} T \sin(r)\end{align*}[/tex]
we can therefore see that [itex]br = T[/itex]
then : [itex]b = \frac{T}{r}[/itex]
We have then Joe's expression :
[tex]U = e^{T} = \cos(r) + \frac{T}{r} \sin(r)[/tex]

And from [itex]bb=-1[/itex] we can deduce:

[tex]\frac{T^2}{r^2}=-I[/tex]
and therefore [itex]TT =-r^2 I[/itex] (the result Joe stated previously)
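As a quick numerical sanity check (a sketch of mine, not part of the thread: the unit bivector b is modelled by an arbitrary 2×2 matrix satisfying bb = -I), the power series of [itex]e^{br}[/itex] can be compared with the closed form above:

```python
import numpy as np

# Matrix model of a unit bivector: b @ b = -I
b = np.array([[0.0, -1.0],
              [1.0,  0.0]])
r = 0.7  # rotor angle (half the rotation angle)

# Partial sums of the power series for e^{b r}
U_series = np.zeros((2, 2))
term = np.eye(2)
for n in range(1, 30):
    U_series += term
    term = term @ (b * r) / n

# Joe's closed form: cos(r) I + sin(r) b
U_closed = np.cos(r) * np.eye(2) + np.sin(r) * b

assert np.allclose(U_series, U_closed)
```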
 
Last edited:
  • #2
Relation between su(2) generators and Euler's formula in complex analysis.

Euler's formula states that, for any real number x, [tex]e^{ix} = \cos(x) + i \sin(x)[/tex]
If however x multiplies a Pauli matrix, Euler's formula becomes:
[tex]e^{i x \sigma_j} = I \cos(x) + i \sigma_j \sin(x)[/tex] it being understood that [tex](\sigma_j)^2=I[/tex]
The matrices [tex]T_j = i \sigma_j[/tex] form a basis for su(2) over R.
The matrices T could also be called generators of the Lie algebra su(2).
Writing [itex]T = i x \sigma_j[/itex], Euler's formula then becomes:
[tex]e^T = I \cos(x) + \frac{T}{x} \sin(x)[/tex]

To every Lie group G, we can associate a Lie algebra whose underlying vector space is the tangent space of G at the identity element. But how does one construct a Lie group from a Lie algebra?
Answer: The most general element U of the group SU(2) can be obtained by exponentiating the generators of the algebra su(2):
[tex]U = e^T = I \cos(x) + \frac{T}{x} \sin(x)[/tex]
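A matching numerical sketch (my illustrative choices: the Pauli matrix σ₃ and an angle of 0.5) confirms the closed form and that the resulting U is indeed special unitary:

```python
import numpy as np

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli matrix: sigma3 @ sigma3 = I
x = 0.5
T = 1j * x * sigma3  # an element of su(2)

# Power series for e^T
U = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 30):
    U += term
    term = term @ T / n

# Closed form: I cos(x) + i sigma3 sin(x)
U_closed = np.cos(x) * np.eye(2) + 1j * np.sin(x) * sigma3

assert np.allclose(U, U_closed)
assert np.allclose(U @ U.conj().T, np.eye(2))  # unitary
assert np.isclose(np.linalg.det(U_closed), 1)  # determinant 1, hence in SU(2)
```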
 
  • #3
The permutation symbol [tex] \epsilon_{ijk} [/tex] is a three-index object sometimes called the Levi-Civita symbol.
[tex] \epsilon_{ijk}=0 [/tex] if [itex]i=j[/itex], [itex]j=k[/itex], or [itex]i=k[/itex]
[tex] \epsilon_{ijk}=+1[/tex] if [itex](i,j,k) \in \{(1,2,3),(2,3,1),(3,1,2)\}[/itex]
[tex] \epsilon_{ijk}=-1 [/tex] if [itex](i,j,k) \in \{(1,3,2),(3,2,1),(2,1,3)\}[/itex]

The symbol can be defined as the scalar triple product of unit vectors in a right-handed coordinate system: [tex] \epsilon_{ijk} \equiv \vec{x}_i \cdot (\vec{x}_j \times \vec{x}_k) [/tex]

The symbol [tex] \epsilon_{ijk} [/tex] can be generalized to an arbitrary number of elements, in which case the permutation symbol is [tex] (-1)^{i(p)} [/tex] , where i(p) is the number of transpositions of pairs of elements that must be composed to build up the permutation p.

The permutation symbol satisfies
[tex] \delta_{ij}\epsilon_{ijk} = 0 [/tex]
[tex] \epsilon_{ipq}\epsilon_{jpq} = 2\delta_{ij} [/tex]
[tex] \epsilon_{ijk}\epsilon_{ijk} = 6 [/tex]
[tex] \epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq}-\delta_{iq}\delta_{jp}[/tex]
where [tex] \delta_{ij} [/tex] is the Kronecker delta
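These identities are easy to verify numerically; the following sketch (mine, not from the thread) builds ε as a 3×3×3 array and checks each one with einsum:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[i, j, k] = -1.0  # odd permutations

delta = np.eye(3)

assert np.allclose(np.einsum('ij,ijk->k', delta, eps), 0)          # delta_ij eps_ijk = 0
assert np.allclose(np.einsum('ipq,jpq->ij', eps, eps), 2 * delta)  # eps_ipq eps_jpq = 2 delta_ij
assert np.isclose(np.einsum('ijk,ijk->', eps, eps), 6)             # eps_ijk eps_ijk = 6
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
assert np.allclose(lhs, rhs)  # eps_ijk eps_pqk = delta_ip delta_jq - delta_iq delta_jp
```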
 
  • #4
The simplest interpretation of the Kronecker delta is as the discrete version of the delta function defined by
[tex] \delta_{ij}=0 [/tex] for i and j different
[tex] \delta_{ij}=1 [/tex] for i = j

In three-space, the Kronecker delta satisfies the identities
[tex] \delta_{ii} = 3 [/tex]
[tex] \delta_{ij}\epsilon_{ijk} = 0 [/tex]
[tex] \epsilon_{ipq}\epsilon_{jpq} = 2\delta_{ij} [/tex]
[tex] \epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq}-\delta_{iq}\delta_{jp}[/tex]

where Einstein summation is implicitly assumed (i, j = 1, 2, 3)

Technically, the Kronecker delta is a mixed second-rank tensor defined by the relationship
[tex] \delta^{i}_{j} = \frac{\partial x_i}{\partial x_ j}[/tex]
The coordinates [tex] x_i [/tex] and [tex] x_j [/tex] are independent for [itex]i \neq j[/itex], so this derivative is 1 when i = j and 0 otherwise. Under a change of coordinates the delta transforms as a mixed tensor:

[tex] \delta^{i}_{j} = \frac{\partial x_i}{\partial x_ k} \frac{\partial x_l}{\partial x_ j} \delta^{k}_{l} [/tex]

The [tex] n \times n [/tex] identity matrix [tex] I [/tex] can be written in terms of the Kronecker delta as simply the matrix of the delta, [tex] I_{ij} = \delta_{ij} [/tex] , or simply [tex] I = (\delta_{ij}) [/tex].

The generalized Kronecker delta is defined by:
[tex] \delta^{jk}_{ab}=\epsilon_{abi}\epsilon^{jki} = \delta^{j}_{a}\delta^{k}_{b}-\delta^{k}_{a}\delta^{j}_{b}[/tex]
[tex] \delta_{abjk}= g_{aj}g_{bk}-g_{ak}g_{bj}[/tex]
[tex] \epsilon_{aij}\epsilon^{bij} = \delta^{bi}_{ai} = 2 \delta^{b}_{a}[/tex]

Actually, the generalized Kronecker delta could also be written as a determinant:

[tex] \delta^{i_1 ... i_k}_{j_1 ... j_k} = \sum_{\tau} \mbox{sign}(\tau) \, \delta^{i_{\tau(1)}}_{j_1} ... \delta^{i_{\tau(k)}}_{j_k} = \mbox{det} \left[\begin{array}{ccc}\delta^{i_1}_{j_1} & ... & \delta^{i_k}_{j_1} \\ ... & ... & ... \\ \delta^{i_1}_{j_k} & ... & \delta^{i_k}_{j_k} \end{array}\right][/tex]

where the sum runs over the permutations [itex]\tau[/itex] of [itex]\{1, ..., k\}[/itex].
 
  • #5
Under the summation convention, [tex] \delta_{ij} A_j = A_i [/tex]

The cross product [tex] C = A \times B[/tex] can be written : [tex] C_i = \epsilon_{ijk}A_j B_k[/tex]

The curl of A is : [tex] (\nabla \times A)_i = \epsilon_{ijk} \frac{\partial A_k}{\partial x_ j} [/tex]

Orthonormality property : [tex] e_i \cdot e_j = \delta_{ij}[/tex]
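A small numerical check (a sketch; the sample vectors are arbitrary) that the ε-expression reproduces the cross product:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for (i, j, k), s in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                     ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = s

A = np.array([1.0, 2.0, 3.0])
B = np.array([-2.0, 0.5, 4.0])

# C_i = eps_ijk A_j B_k
C = np.einsum('ijk,j,k->i', eps, A, B)
assert np.allclose(C, np.cross(A, B))
```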
 
  • #6
Different kinds of Derivatives (extracted from Wikipedia, the free encyclopedia)

The derivative is often defined as the instantaneous rate of change of a function.
The simplest type of derivative is the derivative of a real-valued function of a single real variable; the derivative then gives the slope of the tangent to the graph of the function at a point, or provides a mathematical formulation of rate of change.

A partial derivative of a function of several variables is its derivative with respect to one of the variables with the others held constant.

For the total derivative, however, all variables are allowed to vary.

For real valued functions from [tex]R^n[/tex] to R, the total derivative is often called the gradient. An intuitive interpretation of the gradient is that it points "up": in other words, it points in the direction of fastest increase of the function. It can be used to calculate directional derivatives of scalar functions or normal directions.

Several linear combinations of partial derivatives are especially useful in the context of differential equations defined by a vector valued function [tex]R^n \rightarrow R^n[/tex]. The divergence gives a measure of how much of a "source" or "sink" there is near a point; it can be used to calculate flux via the divergence theorem. The curl measures how much "rotation" a vector field has near a point.

The other forms of derivatives will be studied individually and more extensively : directional derivatives, the Lie derivative, Lie brackets, the exterior derivative, the covariant derivative, the Jacobian matrix, the pushforward.

There are yet other forms of derivatives that will not be studied here : the Fréchet derivative, Gâteaux derivative, exterior covariant derivative, Radon-Nikodym derivative, Kähler differential...
 
  • #7
Chain Rule & Derivative


[tex] \frac{d}{dx} (u \circ v) = \frac{dv}{dx} \left( \frac{du}{dx} \circ v \right) [/tex]

[tex] D( g \circ f)(x) = D(g)(f(x)) \circ D(f)(x) [/tex]

[tex] \frac{du}{dx} = \frac{\frac{du}{dv}}{\frac{dx}{dv}} = \frac{du}{dv} \frac{dv}{dx} [/tex]
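A quick finite-difference check of the chain rule (the choices u(t) = sin(t) and v(x) = x³ are mine, purely illustrative):

```python
import math

# d(u o v)/dx = u'(v(x)) * v'(x), checked by a central difference
def u(t): return math.sin(t)
def v(x): return x**3

x0, h = 0.9, 1e-6
numeric = (u(v(x0 + h)) - u(v(x0 - h))) / (2 * h)
analytic = math.cos(v(x0)) * 3 * x0**2
assert abs(numeric - analytic) < 1e-6
```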
 
  • #8
Derivative of the Exponential and Logarithmic functions

[tex] \frac{d}{dx} (ln(u)) = \frac{1}{u} \frac{du}{dx} [/tex]

Because [tex] log_a(x) = \frac{ln(x)}{ln(a)} [/tex] we have [tex] \frac{d}{dx} (log_a(u)) = \frac{1}{u \, ln(a)} \frac{du}{dx} [/tex]

[tex] \frac{d}{dx} (e^u) = e^u \frac{du}{dx} [/tex]

[tex] \frac{d}{dx} (a^u) = ln (a)a^u \frac{du}{dx} [/tex]

[tex] \frac{d}{dx} (u^v) = \frac{d}{dx} (e^{v \, ln(u)}) = e^{v \, ln(u)} \frac{d}{dx} (v \, ln(u)) = v u^{v-1} \frac{du}{dx} + u^v ln(u) \frac{dv}{dx}[/tex]
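The last formula can be checked by finite differences; the choices u(x) = x² and v(x) = 3x below are illustrative, not from the thread:

```python
import math

# d/dx (u^v) = v u^(v-1) u' + u^v ln(u) v', checked by a central difference
def u(x): return x * x
def v(x): return 3 * x
def f(x): return u(x) ** v(x)

x0, h = 1.3, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
analytic = v(x0) * u(x0) ** (v(x0) - 1) * (2 * x0) + f(x0) * math.log(u(x0)) * 3
assert abs(numeric - analytic) < 1e-4
```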
 
  • #9
The derivative at a point could be seen as a linear approximation of a function at that point.

For example, for a given differentiable function f of one real variable, Taylor's theorem near the point a reads :

[tex] f(x) = f(a) + \frac{f^{(1)}(a)}{1!} (x-a) + \frac{f^{(2)}(a)}{2!} (x-a)^2 + \frac{f^{(3)}(a)}{3!} (x-a)^3 +...+ \frac{f^{(n)}(a)}{n!} (x-a)^n + R_n[/tex]

When n=1, Taylor's theorem simply becomes :

[tex] f(x) = f(a) + \frac{f^{(1)}(a)}{1!} (x-a) + R_1[/tex]

The linear approximation is then obtained by dropping the remainder:

[tex] f(x) \approx f(a) + f^{(1)}(a) (x-a)[/tex]

This process could therefore also be called the tangent line approximation.
The function f is then approximated by a tangent line, which reminds us that in differential geometry one can attach tangent vectors to every point p of a differentiable manifold.

We can also use linear approximations for vector functions of vector variables, in which case [tex] f^{(1)}(a) [/tex] is the Jacobian matrix [tex] J_f(a)[/tex]. The approximation is then the equation of a tangent line, plane, or hyperplane...

[tex] f(x) \approx f(a) + J_f(a) \mbox{.} (x-a)[/tex]


In the more general case of Banach spaces, one has

[tex] f(x) \approx f(a) + Df(a) (x-a)[/tex]

where Df(a) is the Fréchet derivative of f at a.
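A small sketch of the tangent-line approximation (my illustrative choice: f = exp at a = 0), showing that the error shrinks roughly quadratically as x approaches a:

```python
import math

# f(x) ≈ f(a) + f'(a)(x - a) for f = exp at a = 0, where f'(0) = 1
a = 0.0
def f(x): return math.exp(x)
def approx(x): return f(a) + math.exp(a) * (x - a)

err_far = abs(f(0.2) - approx(0.2))
err_near = abs(f(0.1) - approx(0.1))
assert err_near < err_far
# quadratic error: halving the step divides the error by about 4
assert abs(err_far / err_near - 4.0) < 0.5
```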
 
  • #10
Fréchet & Gâteaux derivative (mostly from Wikipedia, the free encyclopedia)

A Fréchet derivative is a derivative defined on Banach spaces.

If a function f is Fréchet differentiable at a point a, then its Fréchet derivative is :

[tex] Df(a) : R^n \rightarrow R^m \mbox{ with } Df(a)(v) = J_f(a) v[/tex]

where [tex] J_f(a)[/tex] denotes the Jacobian matrix of f at a

Furthermore, the partial derivatives of f are given by :

[tex] \frac{\partial f}{\partial x_i} (a) = Df(a)(e_i) = J_f(a) e_i[/tex]

where the [itex] e_i [/itex] form the canonical basis of [tex] R^n[/tex].

Since Fréchet derivative is a linear function, the directional derivative of the function f along vector h is given by :

[tex] Df(a)(h) = \sum h_i \frac{\partial f}{\partial x_i} (a) [/tex]

That brings us naturally to the Gâteaux derivative, which is a generalisation of the concept of directional derivative (see functional derivatives for more details).
A Gâteaux derivative coincides with the Fréchet derivative when the latter exists, but unlike the other forms of derivatives, the Gâteaux derivative need not be linear.
 
  • #11
Jacobian matrix

The Jacobian matrix is a matrix whose elements are the first-order partial derivatives of a vector-valued function.
It represents a linear approximation to a differentiable function near a given point.

Suppose [tex]F \mbox{ : } R^n \rightarrow R^m[/tex] is a function.
Jacobian matrix of F is :

[tex]J_F(x_1, ... , x_n) = \frac{\partial (y_1, ... , y_m)}{\partial (x_1, ... , x_n)} = \left[\begin{array}{ccc}\frac{\partial (y_1)}{\partial (x_1)} & \mbox{ ... } & \frac{\partial (y_1)}{\partial (x_n)} \\ \mbox{ ... } & \mbox{ ... } & \mbox{ ... } \\ \frac{\partial (y_m)}{\partial (x_1)}& \mbox{ ... } & \frac{\partial (y_m)}{\partial (x_n)}\end{array}\right][/tex]

When m = n, the determinant of [tex]J_F[/tex] is the Jacobian determinant [tex] \mid J_F \mid [/tex]

[tex]| J_F(x_1, ... , x_n) | = det \left( \frac{\partial (y_1, ... , y_n)}{\partial (x_1, ... , x_n)} \right) = \left | \begin{array}{ccc}\frac{\partial (y_1)}{\partial (x_1)} & \mbox{ ... } & \frac{\partial (y_1)}{\partial (x_n)} \\ \mbox{ ... } & \mbox{ ... } & \mbox{ ... } \\ \frac{\partial (y_n)}{\partial (x_1)}& \mbox{ ... } & \frac{\partial (y_n)}{\partial (x_n)}\end{array}\right |[/tex]
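As a sketch (mine, not from the thread), the Jacobian of the polar map F(ρ, φ) = (ρ cos φ, ρ sin φ) can be compared against central finite differences, and its determinant against ρ:

```python
import numpy as np

def F(p):
    rho, phi = p
    return np.array([rho * np.cos(phi), rho * np.sin(phi)])

def jacobian_fd(f, p, h=1e-6):
    """Central finite-difference Jacobian; column i is dF/dp_i."""
    p = np.asarray(p, dtype=float)
    cols = []
    for i in range(len(p)):
        dp = np.zeros_like(p); dp[i] = h
        cols.append((f(p + dp) - f(p - dp)) / (2 * h))
    return np.stack(cols, axis=1)

rho, phi = 2.0, 0.8
J_analytic = np.array([[np.cos(phi), -rho * np.sin(phi)],
                       [np.sin(phi),  rho * np.cos(phi)]])
J_num = jacobian_fd(F, [rho, phi])

assert np.allclose(J_num, J_analytic, atol=1e-6)
assert np.isclose(np.linalg.det(J_analytic), rho)
```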
 
  • #12
Examples involving the Jacobian determinant [tex] \mid J_F \mid [/tex]

We'll begin by looking first at an example which shows how a definite integral is affected by a change of variables.
Suppose we want to evaluate the definite integral [itex] \int^{b}_{a} f(u) du [/itex] with the substitution u = u(x), where u(c) = a and u(d) = b:

[tex] \int^{b}_{a} f(u) du = \int^{d}_{c} f(u(x)) \frac{du}{dx}dx [/tex]

We see that the endpoints are changed and there is a new factor [itex] \frac{du}{dx} [/itex].
It can also be written [itex] du=\frac{du}{dx}dx [/itex]

The new factor [itex] \frac{du}{dx} [/itex] is a derivative which can then be considered as a [itex] 1 \times 1 [/itex] Jacobian matrix [tex] J_{u(x)} [/tex].

Often, because the limits of integration are not easily interchangeable, one makes a change of variables to rewrite the integral on a different region of integration. To do that, the function must be changed to the new coordinates (e.g. passage from Cartesian to polar coordinates).

As an example, let's consider the domain [itex] D = \{x^2+y^2 \leq 9, x^2+y^2 \geq 4, y \geq 0\} [/itex] , that is the circular crown in the semiplane of positive y. The transformed domain is then the following rectangle:
[tex] T = \{2 \leq \rho \leq 3, \ 0 \leq \phi \leq \pi\} [/tex]

The Jacobian determinant of that transformation is the following:

[tex]| J_{f(x,y)} | = \left | \frac{\partial (x,y)}{\partial (\rho, \phi) } \right | = \left | \begin{array}{cc} cos(\phi) & - \rho sin(\phi) \\ sin(\phi) & \rho cos(\phi)\end{array}\right | = \rho [/tex]

which is obtained by inserting the partial derivatives of [tex] x = \rho cos(\phi) [/tex] and [tex] y = \rho sin(\phi) [/tex]

It's then possible to define the integral for the change of variables in polar coordinates:

[tex] \int \int_{D} f(x,y) \, dx \, dy = \int \int_{T} f(\rho cos(\phi),\rho sin(\phi)) \, | J_{f(x,y)} | \, d \rho \, d \phi = \int \int_{T} f(\rho cos(\phi),\rho sin(\phi)) \, \rho \, d \rho \, d \phi [/tex]
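A midpoint-rule sketch (mine) confirming that integrating the Jacobian factor ρ over T reproduces the area of the half-annulus D, which is π(3² − 2²)/2 = 5π/2:

```python
import numpy as np

# Midpoint rule on the rectangle T = [2,3] x [0,pi], integrand f = 1 times Jacobian rho
n = 400
rho = np.linspace(2, 3, n, endpoint=False) + 0.5 / n          # rho midpoints
phi = np.linspace(0, np.pi, n, endpoint=False) + 0.5 * np.pi / n  # phi midpoints
R, P = np.meshgrid(rho, phi)
drho, dphi = 1.0 / n, np.pi / n

area = np.sum(R * drho * dphi)
assert np.isclose(area, 2.5 * np.pi)
```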
 
  • #13
Differential forms

A differential 0-form on [itex]R^3[/itex] is a scalar function [tex]w \mbox{ : } D \rightarrow R[/tex] of class [itex]C^1[/itex] on a domain D in [itex]R^3[/itex].

A differential 1-form on [itex]R^3[/itex] is an expression of the form

[tex]w = Adx + Bdy + Cdz [/tex]

where [tex] F = (A(x,y,z),B(x,y,z),C(x,y,z)) \mbox{ : } D \rightarrow R^3[/tex] is a vector field on a domain D in [itex]R^3[/itex] for which the functions A,B,C belong to class [itex]C^1[/itex].
[itex] F = (A(x,y,z),B(x,y,z),C(x,y,z))[/itex] could also be written [itex]F = (a_i)[/itex] with [itex]a_1=A(x_i)[/itex], [itex]a_2=B(x_i),...[/itex]

The differential 1-form could then be written [itex] w = \sum a_i dx_i [/itex] or simply [itex] w = a_i dx^i [/itex]


A differential 2-form on [itex]R^3[/itex] is an expression of the form

[tex]w = Ady \wedge dz + Bdz \wedge dx + Cdx \wedge dy [/tex]

where [tex] F = (A(x,y,z),B(x,y,z),C(x,y,z)) \mbox{ : } D \rightarrow R^3[/tex] is a vector field on a domain D in [itex]R^3[/itex] for which the functions A,B,C belong to class [itex]C^1[/itex].

If [itex]\alpha[/itex] and [itex]\beta[/itex] are two 1-forms:

[itex]\alpha = f_1 dx^1 + f_2 dx^2 [/itex] and [itex]\beta = g_1 dx^1 + g_2 dx^2[/itex]

[tex]\alpha \wedge \beta = (f_1 dx^1 + f_2 dx^2 ) \wedge (g_1 dx^1 + g_2 dx^2)[/tex]
[tex]= f_1g_2 dx^1 \wedge dx^2 \mbox{+} f_2g_1 dx^2\wedge dx^1[/tex]
[tex] = f_1g_2 dx^1 \wedge dx^2 \mbox{-} f_2g_1 dx^1\wedge dx^2[/tex]
[tex]= (f_1g_2 \mbox{ - } f_2g_1) dx^1\wedge dx^2[/tex]
[tex]= \left | \begin{array}{ccc} f_1 & f_2 \\ g_1 & g_2 \end{array}\right | \mbox{ } dx^1\wedge dx^2[/tex]
Then

[tex] \alpha \wedge \beta = \sum_{i < j} (f_i g_j \mbox{ - } f_j g_i) dx^i \wedge dx^j [/tex]
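The determinant formula for the coefficient of [itex]dx^1 \wedge dx^2[/itex] is easy to check numerically (the sample coefficients below are arbitrary):

```python
import numpy as np

# 1-forms with constant coefficients: alpha = f1 dx1 + f2 dx2, beta = g1 dx1 + g2 dx2
f = np.array([3.0, -1.0])
g = np.array([0.5, 2.0])

# coefficient of dx1 ^ dx2 in alpha ^ beta
coeff = f[0] * g[1] - f[1] * g[0]
assert np.isclose(coeff, np.linalg.det(np.array([f, g])))

# antisymmetry: beta ^ alpha carries the opposite sign
coeff_ba = g[0] * f[1] - g[1] * f[0]
assert np.isclose(coeff_ba, -coeff)
```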


A differential k-form and a differential j-form could be written

[tex] \xi = \sum a_{i_1 ... i_k} (x) dx^{i_1} \wedge ... \wedge dx^{i_k} [/tex]

[tex] \gamma = \sum b_{m_1 ... m_j} (x) dx^{m_1} \wedge ... \wedge dx^{m_j} [/tex]

[tex] \xi \wedge \gamma = \sum a_{i_1 ... i_k} (x) \, b_{m_1 ... m_j} (x) \, dx^{i_1} \wedge ... \wedge dx^{i_k} \wedge dx^{m_1} \wedge ... \wedge dx^{m_j} [/tex]

which is a differential (k+j)-form.

The coefficients of a differential n-form change under a change of basis by multiplication by the Jacobian.
If w is a differential n-form

[tex] \omega = \omega (x) dx^1 \wedge ... \wedge dx^n [/tex]

then if [itex] \bar{x^i} [/itex] are new coordinates for x, then, in these new coordinates

[tex] \omega = \omega (x) J \mbox{ } d \bar{x^1} \wedge ... \wedge d \bar{x^n} [/tex]

where [tex] J = \mbox{det} \left( \frac{ \partial x^i }{ \partial \bar{x}^j } \right) [/tex] is the Jacobian determinant of the change of coordinates
 
  • #14
Now if F and G are 0-forms, [itex] \alpha [/itex] and [itex] \beta [/itex] could be written:

[tex] \alpha = dF = \frac{\partial F}{\partial x^1} dx^1 + \frac{\partial F}{\partial x^2} dx^2 [/tex]
and
[tex] \beta = dG = \frac{\partial G}{\partial x^1} dx^1 + \frac{\partial G}{\partial x^2} dx^2 [/tex]

[tex] dF \wedge dG = (\frac{\partial F}{\partial x^1} dx^1 + \frac{\partial F}{\partial x^2} dx^2) \wedge (\frac{\partial G}{\partial x^1} dx^1 + \frac{\partial G}{\partial x^2} dx^2 ) [/tex]

[tex] = (\frac{\partial F}{\partial x^1} \frac{\partial G}{\partial x^2} ) dx^1 \wedge dx^2 \mbox{ - } (\frac{\partial F}{\partial x^2} \frac{\partial G}{\partial x^1} ) dx^1 \wedge dx^2 [/tex]

[tex] = ( \frac{\partial F}{\partial x^1} \frac{\partial G}{\partial x^2} \mbox{ - } \frac{\partial F}{\partial x^2} \frac{\partial G}{\partial x^1} ) dx^1 \wedge dx^2 [/tex]

[tex] = | \frac{\partial (F,G)}{\partial (x^1,x^2) }| dx^1 \wedge dx^2 [/tex]

The 2-form [itex] dF \wedge dG [/itex] has then been converted to [itex](x^1, x^2)[/itex]-coordinates.

[tex] dF \wedge dG = | \frac{\partial (F,G)}{\partial (x^1,x^2) }| dx^1 \wedge dx^2 [/tex]
 
  • #15
The exterior derivative of a differential k-form

[tex] \xi = \sum_{i_1 < ... < i_k} a_{i_1 ... i_k} (x) dx^{i_1} \wedge ... \wedge dx^{i_k} [/tex]

is

[tex] d \xi = \sum_{i_1 < ... < i_k} d a_{i_1 ... i_k} (x) \wedge dx^{i_1} \wedge ... \wedge dx^{i_k} [/tex]

example

[tex] w = \sum_i b_i dx_i = b_1(x_1,x_2) dx_1 + b_2(x_1,x_2) dx_2[/tex]

[tex] dw = db_1(x_1,x_2) \wedge dx_1 + db_2(x_1,x_2) \wedge dx_2[/tex]

[tex] dw = (\frac{\partial b_1}{\partial x^1} dx_1 + \frac{\partial b_1}{\partial x^2} dx_2) \wedge dx_1 + (\frac{\partial b_2}{\partial x^1} dx_1 + \frac{\partial b_2}{\partial x^2} dx_2) \wedge dx_2 [/tex]
 
  • #16
[tex] dw = (\frac{\partial b_1}{\partial x^1} dx_1 + \frac{\partial b_1}{\partial x^2} dx_2) \wedge dx_1 + (\frac{\partial b_2}{\partial x^1} dx_1 + \frac{\partial b_2}{\partial x^2} dx_2) \wedge dx_2 [/tex]

[tex] dw = (\frac{\partial b_1}{\partial x^1} dx_1 \wedge dx_1 + \frac{\partial b_1}{\partial x^2} dx_2 \wedge dx_1) + (\frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2 + \frac{\partial b_2}{\partial x^2} dx_2 \wedge dx_2 ) [/tex]

[tex] dw = \frac{\partial b_1}{\partial x^2} dx_2 \wedge dx_1 + \frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2 [/tex]

[tex] dw = - \frac{\partial b_1}{\partial x^2} dx_1 \wedge dx_2 + \frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2 [/tex]

[tex] dw = \frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2 - \frac{\partial b_1}{\partial x^2} dx_1 \wedge dx_2[/tex]

[tex] dw = ( \frac{\partial b_2}{\partial x^1} - \frac{\partial b_1}{\partial x^2} ) dx_1 \wedge dx_2 [/tex]
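A finite-difference sketch of this coefficient for an illustrative choice of b₁ and b₂ (mine, not from the thread):

```python
import math

# w = b1 dx1 + b2 dx2 with b1 = x1*x2^2 and b2 = sin(x1) + x2;
# the coefficient of dx1 ^ dx2 in dw is db2/dx1 - db1/dx2 = cos(x1) - 2*x1*x2
def b1(x1, x2): return x1 * x2**2
def b2(x1, x2): return math.sin(x1) + x2

x1, x2, h = 0.4, 1.1, 1e-6
db2_dx1 = (b2(x1 + h, x2) - b2(x1 - h, x2)) / (2 * h)
db1_dx2 = (b1(x1, x2 + h) - b1(x1, x2 - h)) / (2 * h)
coeff = db2_dx1 - db1_dx2

assert abs(coeff - (math.cos(x1) - 2 * x1 * x2)) < 1e-6
```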
 
  • #17
A differential k-form should be written

[tex] \omega = \sum_{i_1 < ... < i_k} a_{i_1 ... i_k} (x^m) dx^{i_1} \wedge ... \wedge dx^{i_k} [/tex]

The restriction [tex] i_1 < ... < i_k [/tex] is very important.

It could however also be written without the restriction, using the convenience of the summation convention:

[tex] \omega = \frac{1}{k!} a_{i_1 ... i_k} (x^m) dx^{i_1} \wedge ... \wedge dx^{i_k} [/tex]

The factor [tex] \frac{1}{k!} [/tex] arises in this alternative way of writing a differential k-form because, without the ordering restriction, each term [tex] a_{i_1 ... i_k} (x^m) [/tex] is counted k! times.
 
  • #18
Exterior algebra is the algebra of the exterior product [tex] \wedge [/tex], also called an alternating algebra or Grassmann algebra.

with the properties (if [itex] \omega [/itex] and [itex] \gamma [/itex] are 1-forms; in general a p-form [itex] \omega [/itex] and a q-form [itex] \gamma [/itex] satisfy [itex] \omega \wedge \gamma = (-1)^{pq} \gamma \wedge \omega [/itex])

[tex] \omega \wedge \omega = 0 [/tex]

[tex] \omega \wedge \gamma = \mbox{ - } \gamma \wedge \omega [/tex]

Exterior algebra of a given vector space V over a field K is denoted by Λ(V) or Λ*(V).

The exterior algebra can be written as the direct sum of each of the k-th powers:

[tex] \bigwedge (V) = \bigoplus^{ n }_{k=0} \bigwedge^{k} V [/tex]

Therefore

[itex] \bigwedge (V) = \bigwedge^{0} V \bigoplus \bigwedge^{1} V \bigoplus ... \bigoplus \bigwedge^{n} V [/itex]

where [itex] \bigwedge^{0} V = K [/itex] and [itex] \bigwedge^{1} V = V [/itex]

The dimension of [itex] \bigwedge^{k} V [/itex] is n choose k, [tex] \left(\begin{array}{cc} n \\ k \end{array}\right) [/tex]

The dimension of [itex] \bigwedge (V) [/itex] is then equal to the sum of the binomial coefficients, [tex] \sum^{n}_{k=0} \left(\begin{array}{cc} n \\ k \end{array}\right) [/tex], which is [tex] 2^n [/tex]
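The binomial-sum count can be checked directly (n = 7 below is an arbitrary choice):

```python
from math import comb

# dim of the k-th exterior power is C(n, k); summing over k gives 2^n
n = 7
dims = [comb(n, k) for k in range(n + 1)]
assert sum(dims) == 2**n
```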
 
  • #19
If E is a vector space, E* is then its dual space.

[itex] \bigwedge^{r} E* [/itex] is the vector space of multilinear alternating r-forms on E.

We have [itex] \bigwedge^{0} E* = R [/itex] and [itex] \bigwedge^{1} E* = E* [/itex]

[itex] \bigwedge^{1} E* [/itex], the space of differential 1-forms, coincides with the dual space T*(E) which is the cotangent space.

The elements of [itex] \bigwedge^{1} E* [/itex], in terms of natural basis [tex] dx^i [/tex], have the representation:

[tex] \omega = \omega_ i (x^j) dx^i [/tex] which is a 1-form.

[itex] \bigwedge^{0} E* [/itex] is referred to as the space of forms of degree zero, which is the space of functions f(x).
 
  • #20
Mehdi_ said:
To every Lie group G, we can associate a Lie algebra whose underlying vector space is the tangent space of G at the identity element. But how does one construct a Lie group from a Lie algebra?
Answer: The most general element U of the group SU(2) can be obtained by exponentiating the generators of the algebra su(2):
[tex]U = e^T = I \cos(x) + \frac{T}{x} \sin(x)[/tex]

I'm sure you realize that's not an answer to your question. For your question "But how to construct a Lie group from a Lie algebra ?", the answer is: generally you can't. It's just for simply connected group manifolds that you can apply the method suggested for SU(2).

Daniel.
 
  • #21
Hi dextercioby, welcome to "From Simple Groups to Quantum Field Theory " thread.
 
  • #22
The set of tangent vectors at a point p forms a vector space called the tangent space at p [itex] T_{p} [/itex].

If p is a point in an n-dimensional compact manifold M, the tangent space of M at p is then denoted [itex] T_{p} M [/itex]

The collection of tangent spaces [itex] T_{p} M [/itex] over all points p of a manifold M forms a vector bundle called the tangent bundle [itex] T M [/itex].

[tex] T M = \bigcup_{p \in M} T_{p} M [/tex]

The tangent bundle is a special class of vector bundle, which means that it is also a special class of fiber bundle.

A fiber of a map [itex]f : X \longrightarrow Y[/itex] is the preimage of an element [itex] y \in Y [/itex]. That is,

[tex] f^{-1} (y) = \{ x \in X : f(x) = y \} [/tex]

For instance, when [itex] f(z) = z^2 [/itex], every fiber consists of two points [itex] \{ -z , z \} [/itex], except for the fiber over 0, which has one point.

If [itex] \gamma [/itex] is a smooth curve passing through p, then the derivative of [itex] \gamma [/itex] at p is a vector in the tangent space of M at p, [itex] T_{p} M [/itex].

A vector field is an assignment of a tangent vector for each point p of a manifold M.

The collection of tangent vectors forms the tangent bundle, and a vector field v is a section of this bundle [itex] T M [/itex].

A vector field on M is then a map v which assigns to each point [itex] p \in M [/itex] a tangent vector v(p).

[tex] v(p) = v_p \in T_p M [/tex]

A tangent vector is the manifold version of a directional derivative at a point.

A vector field v acts on a function f by the directional derivative [itex] d_v [/itex] on the function

[tex] d_v (f) = v . d(f) \equiv \bigtriangledown_v f [/tex]
 
  • #23
Scalar fields

A scalar field is a map that assigns a scalar value to each point of some space; the values carry no direction.

A simple example of a scalar field is a map of the temperature distribution in a room.

A scalar field could be viewed as a map [itex]f : R^n \longrightarrow R [/itex] which assigns to each point [itex] x [/itex] of an n-dimensional space V a scalar value [itex] f(x_i) [/itex].

The position vector of a point [itex] x [/itex], could be written in the form

[tex] \vec{ x } \equiv ( x_1, x_2, x_3 ) \equiv x_1 \vec{ e_1 } + x_2 \vec{ e_2 } + x_3 \vec{ e_3 } [/tex]

If with each point [itex] x [/itex] at the position [itex] ( x_1, x_2, x_3 ) [/itex] there corresponds a scalar [itex] f ( x_1, x_2, x_3 ) [/itex] such that

[tex] ( x_1, x_2, x_3 ) \longrightarrow f ( x_1, x_2, x_3 ) [/tex]

then the values of [itex] f ( x_1, x_2, x_3 ) [/itex] associated with all the points in V define a scalar field over V.

Example 1: [tex] f ( x_1, x_2, x_3 ) = 9 (x_1)^4 + 6 x_2x_ 3 - (x_3)^5 + 7 [/tex]

Example 2: [tex] f ( x ) = 9 x^4 + 6 x^2 - 5 [/tex]
 
  • #24
Vector fields, vector valued functions and parametric equations

A vector field could be viewed as a map [itex]f : R^n \longrightarrow R^n [/itex] that assigns to each point [itex] x [/itex] of an n-dimensional space V a vector; its range is also n-dimensional.

A vector valued function r(t) is a function where the domain is a subset of the real numbers and the range is a vector : [itex] r(t) = r_1(t) \vec{ e_1 } + r_2(t) \vec{ e_2 } + r_3(t) \vec{ e_3 } [/itex]
Vector valued functions can also be referred to in a different notation : [itex] r(t) = <r_1(t), r_2(t), r_3(t) > [/itex]

Actually, there is an equivalence between vector valued functions and parametric equations.

As an example let's consider the vector valued function

[tex] r(t) = 2 . cos ( t ) . \vec{ i } + 2 . sin ( t ) . \vec{ j } + \frac{1}{2} . t . \vec{ k } [/tex].

To understand this function, consider the parametric curve

[tex] x = 2 . cos ( t ) [/tex]
[tex] y = 2 . sin ( t ) [/tex]
[tex] z = \frac{1}{2} t [/tex]

The two equations [itex] x = 2 . cos ( t ) [/itex] and [itex] y = 2 . sin ( t ) [/itex] describe a point in the xy-plane that is moving in a circle (This is because [itex] x^2 + y^2 = 4 [/itex] ).

Meanwhile, the value of [itex] z = \frac{1}{2} t [/itex] increases as t increases.

The result is a helix, a spiral curve that wraps around the cylinder [itex] x^2 + y^2 = 4 [/itex] .
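A tiny check (mine) that the parametric curve indeed stays on the cylinder x² + y² = 4:

```python
import math

# Sample the helix (2 cos t, 2 sin t, t/2) at a few parameter values
on_cylinder = True
for t in [0.0, 0.5, 1.7, 3.9]:
    x, y, z = 2 * math.cos(t), 2 * math.sin(t), 0.5 * t
    on_cylinder = on_cylinder and abs(x * x + y * y - 4.0) < 1e-12

assert on_cylinder
```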

Example of a vector field : let's first sketch the following vector field.

[tex] \vec{ F } ( x, y ) = - y \vec{ i } + x \vec{ j } [/tex]

To graph the vector field we need to get some values of the function.
This means plugging in some points into the vector valued function.

[tex] \vec{ F } ( \frac{1}{2}, \frac{1}{2} ) = - \frac{1}{2} \vec{ i } + \frac{1}{2} \vec{ j } [/tex]

[tex] \vec{ F } ( \frac{1}{2}, - \frac{1}{2} ) = \frac{1}{2} \vec{ i } + \frac{1}{2} \vec{ j } [/tex]

[tex] \vec{ F } ( \frac{3}{2}, \frac{1}{4} ) = - \frac{1}{4} \vec{ i } + \frac{3}{2} \vec{ j } [/tex]

At the point [tex] ( \frac{1}{2}, \frac{1}{2} ) [/tex] we will plot the vector [tex] - \frac{1}{2} \vec{ i } + \frac{1}{2} \vec{ j } [/tex].

Likewise, the third evaluation tells us that at the point [tex] ( \frac{3}{2}, \frac{1}{4} ) [/tex] we will plot the vector [tex] - \frac{1}{4} \vec{ i } + \frac{3}{2} \vec{ j } [/tex]


We can continue in this fashion plotting vectors for several points and we’ll get the sketch of the vector field.
 
  • #25
Vector fields

Tangent vector fields are defined on manifolds as sections of the manifold's tangent bundle.

But more generally, a vector field on a manifold could be defined simply as a section of a vector bundle (a kind of fiber bundle).

Let X be a vector field on V.

[tex] X = \sum^{n}_{i=1} X (x_i) \frac{\partial }{\partial x_i} [/tex]

Each [itex] X (x_i) [/itex] is by definition just a differentiable function on V.

The tangent space basis vectors [tex] \frac{\partial }{\partial x_i} [/tex] also serve as the vector field basis.
They correspond to the Euclidean basis vectors [tex] \vec{ e_i } [/tex].

[tex] \frac{\partial }{\partial x_i} \equiv \vec{ e_i } \equiv \vec{ i }, \vec{ j } \mbox{ or } \vec{ k } [/tex]

The vector field [tex] \vec{r}(t) = 2 . cos ( t ) . \vec{ i } + 2 . sin ( t ) . \vec{ j } + \frac{1}{2} . t . \vec{ k } [/tex] could then also be written :

[tex] \vec{r}(t) = 2 . cos ( t ) \frac{\partial }{\partial x_1} + 2 . sin ( t ) \frac{\partial }{\partial x_2} + \frac{1}{2} . t \frac{\partial }{\partial x_3} [/tex]
 
  • #26
Gradient and directional derivatives

Gradient is commonly used to describe the measure of the slope (derivative) of a function.

For a vector-valued function, the role of the gradient is played by the Jacobian.

The gradient of a scalar field is a vector field which points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is the greatest rate of change.

The gradient of a function f(x) could be denoted by [tex] grad(f) [/tex] or equivalently by [tex] \nabla f [/tex], where the symbol [tex] \nabla [/tex] is variously known as nabla or del.

[tex] grad(f) = \nabla f = < f_x, f_y> [/tex] where [tex] f_x [/tex] and [tex] f_y [/tex] are partial derivatives

For example, the gradient of [tex] f (x,y,z) = 2x + {3y}^2 - sin(z) [/tex] is the vector

[tex] \nabla f = ( \frac{ \partial f }{ \partial x^1} , \frac{ \partial f }{ \partial x^2} , \frac{ \partial f }{ \partial x^3} )^T = ( 2, 6y, - cos(z) )^T[/tex]

The directional derivative (in terms of the gradient) [tex] D_{\vec{v}} f [/tex] of a scalar function [tex] f( \vec{x} ) = f(x_i) [/tex] along a vector [tex] \vec{v} = (v_1 ... v_n)^T [/tex] is the function

[tex] D_{\vec{v}} f = \nabla f . \vec{v} [/tex]

where the dot denotes the dot product (Euclidean inner product) , [tex] \nabla f [/tex] the gradient of the function f and [tex] \vec{v} [/tex] a unit vector

Therefore

[tex] D_{\vec{w}} f = \nabla f \cdot \frac{ \vec{w} }{ | \vec{w} | } [/tex]


[tex] \frac{ \vec{w} }{ | \vec{w} | } = <cos (\theta), sin(\theta) > [/tex]

example : [tex] f (x,y) = x^2 + y^2 [/tex] and [tex] \vec{v} = <3, 4> [/tex]

The unit vector along [tex] \vec{v} [/tex] is

[tex] \frac{ \vec{v} }{ | \vec{v} | } = \frac{ 1 }{ \sqrt{ 9 + 16 } } <3, 4> = < \frac{3} {5} , \frac{4} {5} > [/tex]

[tex] f_x = 2 x [/tex] and [tex] f_y = 2 y [/tex]

[tex] D_{\vec{v}} f (x,y) = (2 x ) \frac{3} {5} + (2 y ) \frac{4} {5} = \frac{ 6 x + 8 y } {5} [/tex]

At the point [tex] (1,2) [/tex]

[tex] D_{\vec{v}} f (1,2) = \frac{ 22 } {5} [/tex]
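The worked example can be verified directly in a few lines:

```python
import math

# f(x, y) = x^2 + y^2 along v = <3, 4> at the point (1, 2)
v = (3.0, 4.0)
norm = math.hypot(*v)                  # |v| = 5
u = (v[0] / norm, v[1] / norm)         # unit vector <3/5, 4/5>
x, y = 1.0, 2.0
grad = (2 * x, 2 * y)                  # (f_x, f_y)
D = grad[0] * u[0] + grad[1] * u[1]

assert math.isclose(D, 22 / 5)
```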

The directional derivative in a general direction is then

[tex] D_{\vec{v}} f = \frac{ d f }{ ds } = \frac{\partial f}{\partial x_1} \frac{d x_1}{ ds } + \frac{\partial f}{\partial x_2} \frac{ d x_2}{ ds }+ \frac{\partial f}{\partial x_3} \frac{ d x_3}{ ds } = f_{x_1} \frac{ d x_1}{ ds } + f_{x_2} \frac{ d x_2}{ ds }+ f_{x_3} \frac{ d x_3}{ ds } [/tex]

If [tex] \frac{ \vec{v} }{ \vec{| v |} } = < \frac{ d x_1}{ ds } , \frac{ d x_2}{ ds } , \frac{ d x_3}{ ds } > [/tex]

[tex]d s [/tex] is called element of arc or element of the curve C and [itex] s [/itex] is called arc length of the curve C.
[tex] \vec{v} [/tex] is a unit vector tangent to the curve C and directed in the direction of growing s

Two points of the curve C at the positions [itex] s [/itex] and [itex] (s+h) [/itex], determine a chord whose direction is given by the vector [itex] x(x+h)-x(s) [/itex]
The vector
[tex] \vec{v} Gradient and directional derivatives

Gradient is commonly used to describe the measure of the slope (derivative) of a function.

For vector-valued function, the gradient is then the Jacobian.

The gradient of a scalar field is a vector field which points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is the greatest rate of change.

The gradient of a function f(x) could be denoted by [tex] grad(f) [/tex] or equivalent by [tex] \nabla f [/tex] where the symbol [tex] \nabla [/tex] is variously known as [tex] Nabla [/tex] or [tex] Del [/tex]

[tex] grad(f) = \nabla f = < f_x, f_y> [/tex] where [tex] f_x [/tex] and [tex] f_y [/tex] are partial derivatives

For example, the gradient of [tex] f (x,y,z) = 2x + {3y}^2 - sin(z) [/tex] is the vector

[tex] \nabla f = ( \frac{ \partial f }{ \partial x^1} + \frac{ \partial f }{ \partial x^2} + \frac{ \partial f }{ \partial x^3} )^T = ( 2, 6y, - cos(z) )^T[/tex]

The directional derivative (in terms of the gradient) [tex] D_{\vec{v}} f [/tex] of a scalar function [tex] f( \vec{x} ) = f(x_i) [/tex] along a vector [tex] \vec{v} = (v_1 ... v_n)^T [/tex] is the function

[tex] D_{\vec{v}} f = \nabla f . \vec{v} [/tex]

where the dot denotes the dot product (Euclidean inner product) , [tex] \nabla f [/tex] the gradient of the function f and [tex] \vec{v} [/tex] a unit vector

Therefore

[tex] D_{\vec{w}} f = \nabla f . \frac{ \vec{w} }{ \vec{| w |} } [/tex]


[tex] \frac{ \vec{w} }{ \vec{| w |} } = <cos (\theta), sin(\theta) > [/tex]

example : [tex] f (x,y) = x^2 + y^2 [/tex] and [tex] \vec{v} = <3, 4> [/tex]

The directional derivative is

[tex] \frac{ \vec{v} }{ \vec{| v |} } = \frac{ 1 }{ \sqrt{ 9 + 16} } } <3, 4> = < \frac{3} {5} , \frac{4} {5} > [/tex]

[tex] f_x = 2 x [/tex] and [tex] f_y = 2 y [/tex]

[tex] D_{\vec{v}} f (x,y) = (2 x ) \frac{3} {5} + (2 y ) \frac{4} {5} = \frac{ 6 x + 8 y } {5} [/tex]

At the point [tex] (1,2,5) [/tex]

[tex] D_{\vec{v}} f (1,2) = \frac{ 22 } {5} [/tex]

The directional derivative in a general direction is then

[tex] D_{\vec{v}} f = \frac{ d f }{ ds } = \frac{\partial f}{\partial x_1} \frac{d x_1}{ ds } + \frac{\partial f}{\partial x_2} \frac{ d x_2}{ ds }+ \frac{\partial f}{\partial x_3} \frac{ d x_3}{ ds } = f_{x_1} \frac{ d x_1}{ ds } + f_{x_2} \frac{ d x_2}{ ds }+ f_{x_3} \frac{ d x_3}{ ds } [/tex]

If [tex] \frac{ \vec{v} }{ \vec{| v |} } = < \frac{ d x_1}{ ds } , \frac{ d x_2}{ ds } , \frac{ d x_3}{ ds } > [/tex]

[tex]d s [/tex] is called element of arc or element of the curve C and [itex] s [/itex] is called arc length of the curve C.
[tex] \vec{v} [/tex] is a unit vector tangent to the curve C and directed in the direction of growing s

Two points of the curve C at the positions [itex] s [/itex] and [itex] (s+h) [/itex], determine a chord whose direction is given by the vector [itex] x(x+h)-x(s) [/itex]
The vector
[tex] \vec{v} = \liminf_0[/tex]
 
Last edited:
  • #27
Gradient and directional derivatives

Gradient is commonly used to describe the slope (derivative) of a function.

For a vector-valued function, the corresponding object is the Jacobian matrix.

The gradient of a scalar field is a vector field which points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is the greatest rate of change.

The gradient of a function f could be denoted by [tex] grad(f) [/tex] or equivalently by [tex] \nabla f [/tex], where the symbol [tex] \nabla [/tex] is variously known as "nabla" or "del".

For a function of two variables, [tex] grad(f) = \nabla f = < f_x, f_y> [/tex] where [tex] f_x [/tex] and [tex] f_y [/tex] are the partial derivatives.

For example, the gradient of [tex] f (x,y,z) = 2x + 3y^2 - \sin(z) [/tex] is the vector

[tex] \nabla f = ( \frac{ \partial f }{ \partial x} , \frac{ \partial f }{ \partial y} , \frac{ \partial f }{ \partial z} )^T = ( 2, 6y, - \cos(z) )^T[/tex]
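As a quick check of this example (a minimal sympy sketch, not part of the original derivation), the partial derivatives can be computed symbolically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = 2*x + 3*y**2 - sp.sin(z)

# The gradient is the vector of partial derivatives with respect to each coordinate
grad_f = [sp.diff(f, var) for var in (x, y, z)]
print(grad_f)  # [2, 6*y, -cos(z)]
```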

The directional derivative (in terms of the gradient) [tex] D_{\vec{v}} f [/tex] of a scalar function [tex] f( \vec{x} ) = f(x_1, \dots, x_n) [/tex] along a unit vector [tex] \vec{v} = (v_1 ... v_n)^T [/tex] is the function

[tex] D_{\vec{v}} f = \nabla f \cdot \vec{v} [/tex]

where the dot denotes the dot product (Euclidean inner product), [tex] \nabla f [/tex] is the gradient of f, and [tex] \vec{v} [/tex] is a unit vector.

Therefore, for an arbitrary (not necessarily unit) vector [tex] \vec{w} [/tex],

[tex] D_{\vec{w}} f = \nabla f \cdot \frac{ \vec{w} }{ | \vec{w} | } [/tex]

In two dimensions, [tex] \frac{ \vec{w} }{ | \vec{w} | } = <\cos (\theta), \sin(\theta) > [/tex] where [tex] \theta [/tex] is the angle [tex] \vec{w} [/tex] makes with the positive x-axis.

Example : [tex] f (x,y) = x^2 + y^2 [/tex] and [tex] \vec{v} = <3, 4> [/tex]

The unit vector in the direction of [tex] \vec{v} [/tex] is

[tex] \frac{ \vec{v} }{ | \vec{v} | } = \frac{ 1 }{ \sqrt{ 9 + 16} } <3, 4> = < \frac{3} {5} , \frac{4} {5} > [/tex]

[tex] f_x = 2 x [/tex] and [tex] f_y = 2 y [/tex]

[tex] D_{\vec{v}} f (x,y) = (2 x ) \frac{3} {5} + (2 y ) \frac{4} {5} = \frac{ 6 x + 8 y } {5} [/tex]

At the point [tex] (1,2) [/tex]

[tex] D_{\vec{v}} f (1,2) = \frac{ 22 } {5} [/tex]
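The arithmetic above can be verified numerically (a small sketch in plain Python; the helper names are illustrative):

```python
import math

# f(x, y) = x^2 + y^2, so f_x = 2x and f_y = 2y
def grad_f(x, y):
    return (2*x, 2*y)

v = (3, 4)
norm = math.hypot(*v)              # |v| = sqrt(9 + 16) = 5
u = (v[0]/norm, v[1]/norm)         # unit vector (3/5, 4/5)

# Directional derivative at (1, 2): grad f . u = (6*1 + 8*2)/5 = 22/5
gx, gy = grad_f(1, 2)
D = gx*u[0] + gy*u[1]
```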

The directional derivative in a general direction is then

[tex] D_{\vec{v}} f = \frac{ d f }{ ds } = \frac{\partial f}{\partial x_1} \frac{d x_1}{ ds } + \frac{\partial f}{\partial x_2} \frac{ d x_2}{ ds }+ \frac{\partial f}{\partial x_3} \frac{ d x_3}{ ds } = f_{x_1} \frac{ d x_1}{ ds } + f_{x_2} \frac{ d x_2}{ ds }+ f_{x_3} \frac{ d x_3}{ ds } [/tex]

where [tex] \frac{ \vec{v} }{ | \vec{v} | } = < \frac{ d x_1}{ ds } , \frac{ d x_2}{ ds } , \frac{ d x_3}{ ds } > [/tex]

[tex]d s [/tex] is called the element of arc of the curve C, and [itex] s [/itex] is the arc length along the curve C.
[tex] \vec{v} [/tex] is a unit vector tangent to the curve C, directed in the direction of increasing s.

Two points of the curve C at the positions [itex] s [/itex] and [itex] (s+h) [/itex], determine a chord whose direction is given by the vector [itex] x(s+h)-x(s) [/itex]

[tex] \vec{v} = \lim_{h \rightarrow 0} \frac{ x(s+h)-x(s)} {h} = \frac{ dx } {ds}[/tex]

The vector [tex] \vec{v} [/tex] is then called the unit tangent vector to the curve C at the point [itex] x(s) [/itex].

This vector is a unit vector because s is arc length, so

[tex] | \vec{v}|^2 = \vec{v} \cdot \vec{v} = \frac{ dx } {ds} \cdot \frac{ dx } {ds} = 1 [/tex]
 
Last edited:
  • #28
Question : How to erase post #26 ?
 
  • #29
Divergence

The divergence of a vector field [tex] F [/tex], denoted [tex]div (F) [/tex] or [tex] \nabla \cdot F [/tex], is defined by a limit of the surface integral

[tex] \nabla \cdot F \equiv \displaystyle{\lim_{V \rightarrow 0}} \frac{ \oint_{ \partial V } F \cdot da}{ V } [/tex]

where the surface integral gives the value of F integrated over a closed infinitesimal boundary surface [tex] \partial V [/tex] surrounding a volume element V, which is taken to zero by a limiting process.

Such a closed infinitesimal boundary surface can be pictured as a sphere whose radius shrinks to zero.

Therefore the divergence could also be interpreted as an operator that measures a vector field's tendency to originate from or converge upon a given point.

The divergence of a continuously differentiable vector field [tex] F = F_x i + F_y j + F_z k \equiv (F_x, F_y, F_z)[/tex] is defined to be the scalar-valued function:

[tex] div (F) = \nabla \cdot F = \frac{ \partial F_x }{ \partial x} + \frac{ \partial F_y }{ \partial y} + \frac{ \partial F_z }{ \partial z} [/tex]

The divergence of a three dimensional vector field is the extent to which the vector field flow behaves like a source or a sink at a given point.

An alternative equivalent definition, gives the divergence as the derivative of the net flow of the vector field across the surface of a small sphere relative to the volume of the sphere.

In physical terms, the divergence of a vector field is the rate at which "density" exits a given region of space. The definition of the divergence therefore follows naturally by noting that, in the absence of the creation or destruction of matter, the density within a region of space can change only by having it flow into or out of the region. By measuring the net flux of content passing through a surface surrounding the region of space, it is therefore immediately possible to say how the density of the interior has changed. This property is fundamental in physics, where it goes by the name "principle of continuity." When stated as a formal theorem, it is called the divergence theorem, also known as Gauss's theorem.

The divergence theorem (Gauss' theorem, Ostrogradsky's theorem, or Ostrogradsky–Gauss theorem) is a result that relates the outward flow of a vector field on a surface to the behaviour of the vector field inside the surface.

[tex] \int_{V} (\nabla \cdot F) dV= \oint_{ \partial V } F \cdot da[/tex]
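The divergence theorem can be checked on a concrete case (a sympy sketch with an illustrative field, F = (x, y, z) on the unit cube, chosen here purely for simplicity):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
F = (x, y, z)                          # illustrative field with div F = 3

# Left side: volume integral of div F over the unit cube [0,1]^3
div_F = sum(sp.diff(Fi, c) for Fi, c in zip(F, coords))
lhs = sp.integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Right side: outward flux of F through the six faces of the cube
flux = 0
for i, c in enumerate(coords):
    a, b = [w for w in coords if w != c]
    flux += sp.integrate(F[i].subs(c, 1), (a, 0, 1), (b, 0, 1))  # face c = 1, normal +e_i
    flux -= sp.integrate(F[i].subs(c, 0), (a, 0, 1), (b, 0, 1))  # face c = 0, normal -e_i

print(lhs, flux)  # 3 3
```

Both sides agree, as the theorem requires.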

The divergence of a linear transformation of a unit vector, represented by a matrix A, is given by the elegant formula

[tex] \nabla \cdot \frac{ A \vec{x} }{ | \vec{x} | } = \frac{ Tr(A) }{ | \vec{x} | } - \frac{ \vec{x}^T A \vec{x} }{ | \vec{x} |^3 }[/tex]

The concept of divergence can be generalized to tensor fields, where it is a contraction of what is known as the covariant derivative (also called the semicolon derivative), written

[tex] \nabla \cdot A = A^{a}_{\mbox{ };a} [/tex]

where the covariant derivative is

[tex] A^{a}_{\mbox{ };b} = \frac{ \partial A^a }{ \partial x^b } + \Gamma^{a}_{bk} A^{k} = A^{a}_{\mbox{ },b} + \Gamma^{a}_{bk} A^{k}[/tex]

where [tex] \Gamma^{a}_{bk} [/tex] is a Christoffel symbol
 
  • #30
The curl operator

The curl of a vector field [tex] F [/tex], denoted [tex]curl(F) [/tex] or [tex] \nabla \times F [/tex], is defined by a limit of the surface integral below where the magnitude of [tex] \nabla \times F [/tex] is the limiting value of circulation per unit area.

[tex] (\nabla \times F) \cdot \hat{n} \equiv \displaystyle{\lim_{A \rightarrow 0}} \frac{ \oint_{ C } F \cdot ds}{ A } [/tex]

where the right side is a line integral around an infinitesimal region of area [tex] A [/tex] that is allowed to shrink to zero via a limiting process and [tex] \hat{n} [/tex] is the unit normal vector to this region.

The curl also appears in Stokes' theorem, stated here for a vector field [tex] F [/tex] on an oriented, compact embedded 2-manifold [tex] S [/tex] with boundary [tex] C [/tex] in [tex] R^3 [/tex]; this generalizes Green's theorem from the plane into three-dimensional space.

[tex] \int_{S} (\nabla \times F) \cdot dS = \oint_{ C } F \cdot dl [/tex]

Each differential area [tex] (\nabla \times F) . dS [/tex] gives the line integral about that area since, by definition, the [tex] \nabla \times F [/tex] is the circulation per unit area.

The physical significance of the curl of a vector field is the amount of "rotation" or angular momentum of the contents of a given region of space. It arises in fluid mechanics and elasticity theory. It is also fundamental in the theory of electromagnetism, where it arises in two of the four Maxwell equations.

In Cartesian coordinates, the curl is defined by

[tex] curl(F) = \nabla \times F = ( \frac{ \partial F_z }{ \partial y} - \frac{ \partial F_y }{ \partial z} ) \hat{x} + ( \frac{ \partial F_x }{ \partial z} - \frac{ \partial F_z }{ \partial x} ) \hat{y} + ( \frac{ \partial F_y }{ \partial x} - \frac{ \partial F_x }{ \partial y} ) \hat{z} [/tex]

which can also be written

[tex] \nabla \times F = \left | \begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \end{array}\right |[/tex]

A somewhat more elegant formulation of the curl is given by the matrix operator equation

[tex] \nabla \times F = \left [ \begin{array}{ccc} 0 & - \frac{\partial}{\partial z} & \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} & 0 & - \frac{\partial}{\partial x} \\ - \frac{\partial}{\partial y} & \frac{\partial}{\partial x} & 0 \end{array}\right ] F [/tex]

The curl can be generalized from a vector field to a tensor field as

[tex] (\nabla \times F)^i = \epsilon^{ijk} F_{k ; j} = \epsilon^{ijk} \nabla_j F_k[/tex]

where [tex] \epsilon^{ijk} [/tex] is the permutation tensor and the semicolon ";", equivalently [tex] \nabla_j [/tex], denotes the covariant derivative.
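The Cartesian formula can be illustrated with a sympy sketch; the rigid-rotation field F = (-y, x, 0) is chosen here because its curl is constant:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = -y, x, sp.Integer(0)      # sample field F = (-y, x, 0)

# Cartesian components of curl F from the determinant formula
curl_F = (sp.diff(Fz, y) - sp.diff(Fy, z),
          sp.diff(Fx, z) - sp.diff(Fz, x),
          sp.diff(Fy, x) - sp.diff(Fx, y))
print(curl_F)  # (0, 0, 2): twice the angular velocity of the rotation about z
```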
 
  • #31
Curl and Divergence

Some simple rules:

If a scalar function [tex] f(x,y,z) [/tex] has continuous second order partial derivatives then [tex] curl (grad \ f) = \nabla \times (\nabla f) = 0 [/tex].

If [tex] \vec{F} [/tex] is a conservative vector field then [tex] curl ( \vec{F}) = 0 [/tex].

If [tex] \vec{F} [/tex] is defined on all [tex] R^3 [/tex] of whose components have continuous first order partial derivative and [tex] curl ( \vec{F}) = 0 [/tex] then [tex] \vec{F} [/tex] is a conservative vector field.

If [tex] curl ( \vec{F}) = 0 [/tex] then the fluid is called irrotational.

If [tex] div ( \vec{F}) = 0 [/tex] then [tex] \vec{F} [/tex] is called incompressible.

[tex] div (curl F) [/tex] is always 0.
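The two identities curl(grad f) = 0 and div(curl F) = 0 can be confirmed symbolically (a sympy sketch; the smooth fields below are arbitrary choices for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

f = x**2*sp.sin(y)*z                   # arbitrary smooth scalar field
F = [x*y, y*z*sp.cos(x), x + z**2]     # arbitrary smooth vector field

cg = [sp.simplify(c) for c in curl(grad(f))]
dc = sp.simplify(div(curl(F)))
print(cg, dc)  # [0, 0, 0] 0
```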

Green's theorem is a term used variously in mathematical literature to denote either the Gauss divergence theorem or the plane case (2D) of Stokes' theorem.

The first form of Green’s Theorem uses the curl of the vector field and is,

[tex] \oint_{ C } \vec{F} \cdot d \vec{r} = \int \int_{D} (\nabla \times \vec{F}) \cdot \vec{k} \ dA [/tex]

where [tex] \vec{k} [/tex] is the standard unit vector in the positive z direction.

The second form uses the divergence.
In this case we also need the outward unit normal to the curve C.

If the curve is parameterized counterclockwise by [tex] \vec{r} (t) = x(t) \hat{i} + y(t) \hat{j} [/tex]

then the outward unit normal is given by,

[tex] \vec{n} = \frac{y'(t)} {| \vec{r'} (t)|} \hat{i} - \frac{x'(t)} {| \vec{r'} (t)|} \hat{j} [/tex]

The vector form of Green’s Theorem that uses the divergence is then given by,

[tex] \oint_{ C } \vec{F} \cdot \vec{n} \ ds = \int \int_{D} (\nabla \cdot \vec{F}) \ dA [/tex]
 
  • #32
The Line Element and Metric of a torus

Let the major radius of this torus be c and the minor radius a, with c > a.
The torus [tex]S(u,v)[/tex] can be defined parametrically by:

[tex] x = (c + a \ cos(v) ) \ cos(u) [/tex]
[tex] y = (c + a \ cos(v) ) \ sin(u) [/tex]
[tex] z = a \ sin(v) [/tex]

where u and v [tex] \in [0, 2 \pi ] [/tex]

The coefficients E, F, and G of the first fundamental form (Line Element) are :

[tex] S_u = \frac{\partial S}{\partial u} = ( \ -(c + a \ cos(v) ) sin(u) \ , \ (c + a \ cos(v) ) \ cos(u) \ , \ 0 \ ) [/tex]
[tex] S_v = \frac{\partial S}{\partial v} = ( \ -(a \ cos(u) \ sin(v) ) \ , \ -(a \ sin(u) \ sin(v) ) \ , \ a \ cos(v) \ ) [/tex]

Therefore,

[tex] E = \frac{\partial S}{\partial u} \ . \ \frac{\partial S}{\partial u} = ( - (c + a \ cos(v) ) \ sin(u) )^2 \ + \ (( c + a \ cos(v) ) cos(u) )^2 \ + \ 0 = ( c + a \ cos(v) )^2 [/tex]

[tex] F = \frac{\partial S}{\partial u} \ . \ \frac{\partial S}{\partial v} = ( - (c + a \ cos(v) ) \ sin(u) )( -a \ cos(u) \ sin(v) ) \ + \ (( c + a \ cos(v) ) \ cos(u) )( -a \ sin(u) \ sin(v) ) \ + \ (0)( a \ cos(v) ) \ = 0 [/tex]

[tex] G = \frac{\partial S}{\partial v} \ . \ \frac{\partial S}{\partial v} = ( \ -(a \ cos(u) \ sin(v) ) \ )^2 \ + \ (\ -(a \ sin(u) \ sin(v) ) \ )^2 \ + \ ( \ a \ cos(v) \ )^2 = a^2 [/tex]

The line element ds^2 (s here is an arc length) is :

[tex] ds^2 = E \ du^2 \ + \ 2 \ F \ du \ dv \ + G \ dv^2 \ [/tex]
[tex] ds^2 = ( c + a \ cos(v) )^2 \ du^2 \ + a^2 \ dv^2 \ [/tex]

The metric [tex] g_{ij} [/tex] and its inverse [tex] g^{ij} [/tex] are :

[tex] g_{ij} = \left [ \begin{array}{ccc} ( c + a \ cos(v) )^2 & 0 \\ 0 & a^2 \end{array}\right ] [/tex]

[tex] g^{ij} = \left [ \begin{array}{ccc} \frac{1}{( c + a \ cos(v) )^2} & 0 \\ 0 & \frac{1}{a^2} \end{array}\right ] [/tex]
 
  • #33
The metric [tex] g_{ij} [/tex] of the torus above could also be computed by the formula :

[tex] g_{ij} = J^T \ J [/tex]

where [tex] J [/tex] denotes the Jacobian and [tex] J^T [/tex] its transpose.

If the torus can be defined parametrically by :

[tex] x = (c + a \ cos(v) ) \ cos(u) [/tex]
[tex] y = (c + a \ cos(v) ) \ sin(u) [/tex]
[tex] z = a \ sin(v) [/tex]

The Jacobian [tex] J [/tex] therefore is :

[tex] J = \left[ \begin {array}{cc} - \left( c+a\cos \left( v \right) \right) \sin \left( u \right) &-a\sin \left( v \right) \cos \left( u \right)
\\\noalign{\medskip} \left( c+a\cos \left( v \right) \right) \cos
\left( u \right) &-a\sin \left( v \right) \sin \left( u \right)
\\\noalign{\medskip}0&a\cos \left( v \right) \end {array} \right]
[/tex]

And its transpose is :

[tex] J^T = \left[ \begin {array}{ccc} - \left( c+a\cos \left( v \right) \right) \sin \left( u \right) & \left( c+a\cos \left( v \right)
\right) \cos \left( u \right) &0\\\noalign{\medskip}-a\sin \left( v
\right) \cos \left( u \right) &-a\sin \left( v \right) \sin \left( u
\right) &a\cos \left( v \right) \end {array} \right][/tex]

Therefore :

[tex] g_{ij} \ = J^T \ J = \left[ \begin {array}{cc} \left( c+a\cos \left( v \right) \right) ^{2} \left( \sin \left( u \right) \right) ^{2}+ \left( c+a\cos \left(
v \right) \right) ^{2} \left( \cos \left( u \right) \right) ^{2}&0
\\\noalign{\medskip}0&{a}^{2} \left( \sin \left( v \right) \right) ^{
2} \left( \cos \left( u \right) \right) ^{2}+{a}^{2} \left( \sin
\left( v \right) \right) ^{2} \left( \sin \left( u \right) \right)
^{2}+{a}^{2} \left( \cos \left( v \right) \right) ^{2}\end {array}
\right]
[/tex]

[tex] g_{ij} \ = \left[ \begin {array}{cc} \left( c+a\cos \left( v \right) \right) ^{2}&0\\\noalign{\medskip}0&{a}^{2} \left( \left( \sin \left( v
\right) \right) ^{2} \left( \left( \cos \left( u \right) \right) ^
{2}+ \left( \sin \left( u \right) \right) ^{2} \right) + \left( \cos
\left( v \right) \right) ^{2} \right) \end {array} \right]
[/tex]

[tex] g_{ij} \ = \left [ \begin{array}{ccc} ( c + a \ cos(v) )^2 & 0 \\ 0 & a^2 \end{array}\right ] [/tex]
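The computation g = J^T J can be reproduced with sympy (a short sketch; the symbol names follow the parametrization above):

```python
import sympy as sp

u, v = sp.symbols('u v')
c, a = sp.symbols('c a', positive=True)

# Torus parametrization S(u, v)
S = sp.Matrix([(c + a*sp.cos(v))*sp.cos(u),
               (c + a*sp.cos(v))*sp.sin(u),
               a*sp.sin(v)])

J = S.jacobian([u, v])        # 3x2 Jacobian of the embedding
g = sp.simplify(J.T * J)      # induced metric g_ij = J^T J, diag((c + a cos v)^2, a^2)
print(g)
```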
 
  • #34
Holonomic bases

A holonomic basis for a manifold is a set of basis vectors [tex]e_k[/tex] for which all Lie brackets (commutators) vanish: [tex] [e_j, e_k] = 0[/tex]

Given coordinates [tex]x^a[/tex], we define basis vectors [tex] e_a [/tex] and basis one forms [tex] \omega^a [/tex] in the following way:
[tex]e_a = \partial_a= \frac{\partial }{\partial x^a}[/tex]
[tex] \omega^a = dx^a[/tex]

A holonomic (coordinate) basis is thus defined in terms of derivatives with respect to the coordinates.

Spherical polar coordinates (holonomic) basis vectors are :

[tex]e_r = \partial_r= \frac{\partial }{\partial r} [/tex]
[tex] e_{\theta} = \partial_{\theta}= \frac{\partial }{\partial \theta} [/tex]
[tex] e_{\phi} = \partial_{\phi} = \frac{\partial }{\partial \phi} [/tex]

Coordinate basis vectors need not be of unit length.

The line element in spherical coordinates is [tex] ds^2 = dr^2 \ + \ r^2 \ d \theta^2 + \ r^2 \sin^2(\theta) \ d \phi^2 [/tex]


In a coordinate basis, the basis vectors satisfy : [tex] e_i \ . \ e_j = g_{ij} [/tex]


[tex] e_r \ . \ e_r = 1 \ \ \ \ \ |e_r| = 1 [/tex]

[tex] e_{ \theta } \ . \ e_{ \theta } = r^2 \ \ \ \ \ |e_{ \theta }| = r [/tex]

[tex] e_{\phi} \ . \ e_{\phi} = \ r^2 \sin^2(\theta) \ \ \ \ \ |e_{\phi}| = \ r \sin(\theta) [/tex]

Since two of these vectors do not have unit length, this coordinate basis is not orthonormal.

To obtain orthonormal spherical polar basis vectors, we define the nonholonomic (noncoordinate) basis given by the following :

[tex]e_R = \partial_r [/tex]
[tex] e_{\Theta} = \frac{1}{r} \partial_{\theta} [/tex]
[tex] e_{\Phi} = \frac{1}{r \ sin(\theta)} \partial_{\phi} [/tex]
 
  • #35
The metric [tex] g_{ij} = J^T \ J [/tex] of the sphere :

[tex] x = r \ cos(\theta)sin(\phi)[/tex]
[tex] y = r \ sin(\theta)sin(\phi)[/tex]
[tex] z = r \ cos(\phi)[/tex]

The Jacobian [tex] J [/tex] is :

[tex] J = \left[ \begin {array}{ccc} \cos \left( \theta \right) \sin \left( \phi \right) &-r\sin \left( \theta \right) \sin \left( \phi \right) &r
\cos \left( \theta \right) \cos \left( \phi \right)
\\\noalign{\medskip}\sin \left( \theta \right) \sin \left( \phi
\right) &r\cos \left( \theta \right) \sin \left( \phi \right) &r\sin
\left( \theta \right) \cos \left( \phi \right) \\\noalign{\medskip}
\cos \left( \phi \right) &0&-r\sin \left( \phi \right) \end {array}
\right]
[/tex]

And its transpose is :

[tex] J^T = \left[ \begin {array}{ccc} \cos \left( \theta \right) \sin \left( \phi \right) &\sin \left( \theta \right) \sin \left( \phi \right) &
\cos \left( \phi \right) \\\noalign{\medskip}-r\sin \left( \theta
\right) \sin \left( \phi \right) &r\cos \left( \theta \right) \sin
\left( \phi \right) &0\\\noalign{\medskip}r\cos \left( \theta
\right) \cos \left( \phi \right) &r\sin \left( \theta \right) \cos
\left( \phi \right) &-r\sin \left( \phi \right) \end {array} \right]
[/tex]

[tex] J^T J = \left[ \begin {array}{ccc} \cos^2( \theta) \sin^2( \phi) + \sin^2( \theta) \sin^2( \phi) + \cos^2( \phi) & 0 & r \sin( \phi) \cos( \phi) \left( \cos^2( \theta) + \sin^2( \theta) - 1 \right) \\ \noalign{\medskip} 0 & {r}^{2} \sin^2( \theta) \sin^2( \phi) + {r}^{2} \cos^2( \theta) \sin^2( \phi) & 0 \\ \noalign{\medskip} r \sin( \phi) \cos( \phi) \left( \cos^2( \theta) + \sin^2( \theta) - 1 \right) & 0 & {r}^{2} \cos^2( \theta) \cos^2( \phi) + {r}^{2} \sin^2( \theta) \cos^2( \phi) + {r}^{2} \sin^2( \phi) \end {array} \right]
[/tex]


[tex] g_{ij} \ = \left[ \begin {array}{ccc} 1&0&0\\\noalign{\medskip}0&{r}^{2} \left( \sin \left( \phi \right) \right) ^{2}&0\\\noalign{\medskip}0&0&{r}^{2
}\end {array} \right]
[/tex]

Therefore the line element [tex] ds^2 [/tex] is :


[tex] ds^2 = dr^2 \ + \ r^2 \sin^2(\phi) \ d \theta^2 + \ r^2 \ d \phi^2[/tex]

Here [tex]\theta[/tex] is the polar angle in the xy-plane from the x-axis with [tex]0 \leq \theta \leq 2 \pi [/tex] while [tex]\phi[/tex] is the azimuthal angle from the z-axis with [tex]0 \leq \phi \leq \pi [/tex]

However, because we are dealing with a sphere (symmetry!), the symbols [tex]\theta[/tex] and [tex]\phi[/tex] could be reversed.
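The same J^T J computation for the sphere, with the r factor included in the parametrization (a sympy sketch):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Spherical parametrization, theta in the xy-plane, phi measured from the z-axis
S = sp.Matrix([r*sp.cos(theta)*sp.sin(phi),
               r*sp.sin(theta)*sp.sin(phi),
               r*sp.cos(phi)])

J = S.jacobian([r, theta, phi])
g = sp.simplify(J.T * J)      # expected diag(1, r^2 sin^2(phi), r^2)
print(g)
```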
 
