# From Simple Groups to Quantum Field Theory

1. Jul 13, 2006

### Mehdi_

$$U = e^{\frac{1}{2} B} = \cos(\tfrac{1}{2} \theta) + b \sin(\tfrac{1}{2} \theta)$$
With $B = \theta b$, where $b$ is a unit bivector (so that $bb = -1$), we can then write:
$$U = e^{\frac{1}{2} \theta b} = \cos(\tfrac{1}{2} \theta) + b \sin(\tfrac{1}{2} \theta)$$
And if we rely on Joe's expression, $r=\frac{\theta}{2}$ (the rotor angle is always half the rotation angle):
$$U = e^{br} = \cos(r) + b \sin(r)$$
and
\begin{align*}U=e^T & = I \left(1 - \frac{1}{2!}r^2 + \frac{1}{4!}r^4 - \frac{1}{6!}r^6 + \dots\right) + T \left(1 - \frac{1}{3!}r^2 + \frac{1}{5!}r^4 - \frac{1}{7!}r^6 + \dots \right) \\ & = I \cos(r) + \frac{1}{r} T \sin(r)\end{align*}
we can therefore see that $br = T$
then : $b = \frac{T}{r}$
We have then Joe's expression :
$$U = e^{T} = \cos(r) + \frac{T}{r} \sin(r)$$

And from $bb=-1$ we can deduce:

$$\frac{T^2}{r^2}=-I$$
and therefore $TT =-r^2 I$ (the result Joe stated previously)
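As a quick numerical sanity check (a sketch, not from the thread: it uses the concrete 2×2 matrix $\left[\begin{array}{cc}0&-1\\1&0\end{array}\right]$ as a stand-in for the unit bivector $b$, and an arbitrary angle $r = 0.7$), we can compare the exponential series for $e^{br}$ with the closed form $\cos(r) + b\sin(r)$:

```python
import math
import numpy as np

# A concrete 2x2 matrix with b @ b = -I, standing in for the unit bivector b
b = np.array([[0.0, -1.0], [1.0, 0.0]])
I = np.eye(2)
r = 0.7  # rotor angle r = theta / 2

T = b * r
# e^T summed term by term from the exponential series
U_series = sum(np.linalg.matrix_power(T, n) / math.factorial(n) for n in range(30))

# closed form: U = cos(r) I + sin(r) b
U_closed = math.cos(r) * I + math.sin(r) * b

max_err = np.abs(U_series - U_closed).max()
```

The two agree to machine precision, which is exactly the collapse of the even and odd parts of the series into cosine and sine.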

Last edited: Jul 13, 2006
2. Jul 15, 2006

### Mehdi_

Relation between su(2) generators and Euler's formula in complex analysis.

Euler's formula states that, for any real number x, $$e^{ix} = \cos(x) + i \sin(x)$$
If the exponent is instead $i \theta \sigma_j$, where $\sigma_j$ is a Pauli matrix, Euler's formula becomes:
$$e^{i \theta \sigma_j} = I \cos(\theta) + i \sigma_j \sin(\theta)$$ it being understood that $$(\sigma_j)^2=I$$
The matrices $$T_j = i \sigma_j$$ form a basis for su(2) over R.
The matrices $T_j$ could also be called generators of the Lie algebra su(2).
Writing $T = i \theta \sigma_j$, Euler's formula then becomes:
$$e^T = I \cos(\theta) + \frac{T}{\theta} \sin(\theta)$$

To every Lie group, we can associate a Lie algebra, whose underlying vector space is the tangent space of G at the identity element. But how do we construct a Lie group from a Lie algebra?
Answer: the most general element U of the group SU(2) can be obtained by exponentiating the generators of the algebra su(2).
$$U = e^T = I \cos(\theta) + \frac{T}{\theta} \sin(\theta)$$
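A small numerical sketch of this (assuming numpy; the choice of $\sigma_1$ and the angle 0.9 are arbitrary): exponentiating a generator reproduces the closed form and indeed lands in SU(2), i.e. the result is unitary with determinant 1.

```python
import math
import numpy as np

sigma_1 = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli matrix sigma_x
I2 = np.eye(2, dtype=complex)
theta = 0.9

# closed form: e^{i theta sigma_1} = I cos(theta) + i sigma_1 sin(theta)
U = np.cos(theta) * I2 + 1j * np.sin(theta) * sigma_1

# the same group element from the exponential series of T = i theta sigma_1
T = 1j * theta * sigma_1
U_series = sum(np.linalg.matrix_power(T, n) / math.factorial(n) for n in range(30))

series_err = np.abs(U - U_series).max()
unitarity_err = np.abs(U.conj().T @ U - I2).max()  # U should be unitary
det_err = abs(np.linalg.det(U) - 1)                # and have determinant 1
```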

Last edited: Jul 16, 2006
3. Jul 18, 2006

### Mehdi_

The permutation symbol $$\epsilon_{ijk}$$ is a three-index object sometimes called the Levi-Civita symbol.
$$\epsilon_{ijk}=0$$ if i=j or j=k or i=k
$$\epsilon_{ijk}=+1 \mbox{ if } (i,j,k) \in \{(1,2,3),(2,3,1),(3,1,2)\}$$
$$\epsilon_{ijk}=-1 \mbox{ if } (i,j,k) \in \{(1,3,2),(3,2,1),(2,1,3)\}$$

The symbol can be defined as the scalar triple product of unit vectors in a right-handed coordinate system: $$\epsilon_{ijk} \equiv \vec{x}_i \cdot (\vec{x}_j \times \vec{x}_k)$$

The symbol $$\epsilon_{ijk}$$ can be generalized to an arbitrary number of elements, in which case the permutation symbol is $$(-1)^{i(p)}$$ , where i(p) is the number of transpositions of pairs of elements that must be composed to build up the permutation p.

The permutation symbol satisfies
$$\delta_{ij}\epsilon_{ijk} = 0$$
$$\epsilon_{ipq}\epsilon_{jpq} = 2\delta_{ij}$$
$$\epsilon_{ijk}\epsilon_{ijk} = 6$$
$$\epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq}-\delta_{iq}\delta_{jp}$$
where $$\delta_{ij}$$ is the Kronecker delta
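These identities are finite sums over indices 1..3, so they can be verified exhaustively (a quick sketch in plain Python):

```python
def eps(i, j, k):
    # Levi-Civita symbol on indices 1..3
    if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:
        return 1
    if (i, j, k) in {(1, 3, 2), (3, 2, 1), (2, 1, 3)}:
        return -1
    return 0

def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

R = (1, 2, 3)
# delta_{ij} eps_{ijk} = 0
id1 = all(sum(delta(i, j) * eps(i, j, k) for i in R for j in R) == 0 for k in R)
# eps_{ipq} eps_{jpq} = 2 delta_{ij}
id2 = all(sum(eps(i, p, q) * eps(j, p, q) for p in R for q in R) == 2 * delta(i, j)
          for i in R for j in R)
# eps_{ijk} eps_{ijk} = 6
id3 = sum(eps(i, j, k) ** 2 for i in R for j in R for k in R) == 6
# eps_{ijk} eps_{pqk} = delta_{ip} delta_{jq} - delta_{iq} delta_{jp}
id4 = all(sum(eps(i, j, k) * eps(p, q, k) for k in R)
          == delta(i, p) * delta(j, q) - delta(i, q) * delta(j, p)
          for i in R for j in R for p in R for q in R)
```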

Last edited: Jul 18, 2006
4. Jul 18, 2006

### Mehdi_

The simplest interpretation of the Kronecker delta is as the discrete version of the delta function defined by
$$\delta_{ij}=0$$ for i and j different
$$\delta_{ij}=1$$ for i = j

In three-space, the Kronecker delta satisfies the identities
$$\delta_{ii} = 3$$
$$\delta_{ij}\epsilon_{ijk} = 0$$
$$\epsilon_{ipq}\epsilon_{jpq} = 2\delta_{ij}$$
$$\epsilon_{ijk}\epsilon_{pqk} = \delta_{ip}\delta_{jq}-\delta_{iq}\delta_{jp}$$

where Einstein summation is implicitly assumed ($i,j = 1,2,3$)

Technically, the Kronecker delta is a mixed second-rank tensor defined by the relationship
$$\delta^{i}_{j} = \frac{\partial x_i}{\partial x_j}$$
since the coordinates $$x_i$$ and $$x_j$$ are independent for $i \neq j$. Under a change of coordinates it therefore transforms as:

$$\delta^{i}_{j} = \frac{\partial x_i}{\partial x_k} \frac{\partial x_l}{\partial x_j} \delta^{k}_{l}$$

The $$n \times n$$ identity matrix $$I$$ can be written in terms of the Kronecker delta as simply the matrix of the delta, $$I_{ij} = \delta_{ij}$$ , or simply $$I = (\delta_{ij})$$.

The generalized Kronecker delta is defined by:
$$\delta^{jk}_{ab}=\epsilon_{abi}\epsilon^{jki} = \delta^{j}_{a}\delta^{k}_{b}-\delta^{k}_{a}\delta^{j}_{b}$$
$$\delta_{abjk}= g_{aj}g_{bk}-g_{ak}g_{bj}$$
$$\epsilon_{aij}\epsilon^{bij} = \delta^{bi}_{ai} = 2 \delta^{b}_{a}$$

The generalized Kronecker delta can also be written as a determinant:

$$\delta^{i_1 \dots i_k}_{j_1 \dots j_k} = \sum_{\tau} \mbox{sign}(\tau) \, \delta^{i_{\tau(1)}}_{j_1} \dots \delta^{i_{\tau(k)}}_{j_k} = \mbox{det} \left[\begin{array}{ccc}\delta^{i_1}_{j_1} & \dots & \delta^{i_k}_{j_1} \\ \vdots & \ddots & \vdots \\ \delta^{i_1}_{j_k} & \dots & \delta^{i_k}_{j_k} \end{array}\right]$$

where the sum runs over all permutations $\tau$ of $(1, \dots, k)$ and $\mbox{sign}(\tau)$ is the sign of the permutation.
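The epsilon–delta form of the generalized Kronecker delta can also be checked exhaustively over the indices 1..3 (a sketch in plain Python):

```python
def eps(i, j, k):
    # Levi-Civita symbol on indices 1..3
    if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:
        return 1
    if (i, j, k) in {(1, 3, 2), (3, 2, 1), (2, 1, 3)}:
        return -1
    return 0

def delta(i, j):
    return 1 if i == j else 0

R = (1, 2, 3)
# delta^{jk}_{ab} = eps_{abi} eps_{jki} = delta^j_a delta^k_b - delta^k_a delta^j_b
pair_ok = all(sum(eps(a, b, i) * eps(j, k, i) for i in R)
              == delta(j, a) * delta(k, b) - delta(k, a) * delta(j, b)
              for a in R for b in R for j in R for k in R)
# contracting one index pair: eps_{aij} eps_{bij} = 2 delta^b_a
contract_ok = all(sum(eps(a, i, j) * eps(b, i, j) for i in R for j in R)
                  == 2 * delta(a, b) for a in R for b in R)
```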

Last edited: Jul 18, 2006
5. Jul 19, 2006

### Mehdi_

Under summation convention, $$\delta_{ij} A_j = A_i$$

The cross product $$C = A \times B$$ can be written : $$C_i = \epsilon_{ijk}A_j B_k$$

The curl of A is : $$(\nabla \times A)_i = \epsilon_{ijk} \frac{\partial A_k}{\partial x_ j}$$

Orthonormality property : $$e_i \cdot e_j = \delta_{ij}$$
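As a sketch (the vectors A and B are arbitrary sample values), the epsilon expression for the cross product can be compared term by term with the familiar component formula:

```python
def eps(i, j, k):
    # Levi-Civita symbol on indices 1..3
    if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:
        return 1
    if (i, j, k) in {(1, 3, 2), (3, 2, 1), (2, 1, 3)}:
        return -1
    return 0

R = (1, 2, 3)
A = (2.0, -1.0, 3.0)   # sample vectors (arbitrary values)
B = (0.5, 4.0, -2.0)

# C_i = eps_{ijk} A_j B_k, shifting indices to 0-based storage
C = [sum(eps(i, j, k) * A[j - 1] * B[k - 1] for j in R for k in R) for i in R]

# the textbook cross product, for comparison
C_direct = [A[1] * B[2] - A[2] * B[1],
            A[2] * B[0] - A[0] * B[2],
            A[0] * B[1] - A[1] * B[0]]
```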

Last edited: Jul 19, 2006
6. Jul 20, 2006

### Mehdi_

The derivative is often defined as the instantaneous rate of change of a function.
The simplest type of derivative is the derivative of a real-valued function of a single real variable; the derivative gives then the slope of the tangent to the graph of the function at a point or provides a mathematical formulation of rate of change.

A partial derivative of a function of several variables is its derivative with respect to one of the variables with the others held constant.

For the total derivative, however, all variables are allowed to vary.

For real valued functions from $$R^n$$ to R, the total derivative is often called the gradient. An intuitive interpretation of the gradient is that it points "up": in other words, it points in the direction of fastest increase of the function. It can be used to calculate directional derivatives of scalar functions or normal directions.

Several linear combinations of partial derivatives are especially useful in the context of differential equations defined by a vector valued function $$R^n \rightarrow R^n$$. The divergence gives a measure of how much "source" or "sink" there is near a point. It can be used to calculate flux via the divergence theorem. The curl measures how much "rotation" a vector field has near a point.

The other forms of derivatives will be studied individually and more extensively : directional derivatives, the Lie derivative, Lie brackets, the exterior derivative, the covariant derivative, the Jacobian matrix, the pushforward.

There are still other forms of derivatives that will not be studied here : the Fréchet derivative, Gâteaux derivative, exterior covariant derivative, Radon-Nikodym derivative, Kähler differential...
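The claim that the gradient points in the direction of fastest increase can be illustrated numerically (a sketch; the Gaussian bump f and the base point are assumed examples, not from the text):

```python
import math

def f(x, y):
    # a sample scalar function R^2 -> R (an assumed example)
    return math.exp(-(x - 1.0) ** 2 - (y + 0.5) ** 2)

def grad(x, y, h=1e-6):
    # centred finite-difference gradient
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x0, y0 = 0.0, 0.0
gx, gy = grad(x0, y0)
gnorm = math.hypot(gx, gy)

# a small step along the gradient increases f; a step against it decreases f
step = 1e-3
up = f(x0 + step * gx / gnorm, y0 + step * gy / gnorm)
down = f(x0 - step * gx / gnorm, y0 - step * gy / gnorm)
```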

Last edited: Jul 20, 2006
7. Jul 24, 2006

### Mehdi_

Chain Rule & Derivative

$$\frac{d}{dx} (u \circ v) = \frac{dv}{dx} \left( \frac{du}{dx} \circ v \right)$$

$$D( g \circ f)(x) = D(g)(f(x)) \circ D(f)(x)$$

$$\frac{du}{dx} = \frac{\frac{du}{dv}}{\frac{dx}{dv}} = \frac{du}{dv} \frac{dv}{dx}$$
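A numerical sketch of the chain rule (the functions u = sin, v(x) = x² and the point x = 1.3 are arbitrary choices): the chain-rule value is compared against a centred finite difference of the composite.

```python
import math

u, du = math.sin, math.cos  # outer function and its derivative
v = lambda x: x * x         # inner function
dv = lambda x: 2 * x        # its derivative

x = 1.3
# chain rule: (u o v)'(x) = u'(v(x)) * v'(x)
chain = du(v(x)) * dv(x)

# centred finite difference of the composite, for comparison
h = 1e-6
numeric = (u(v(x + h)) - u(v(x - h))) / (2 * h)
```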

Last edited: Jul 25, 2006
8. Jul 24, 2006

### Mehdi_

Derivative of the Exponential and Logarithmic functions

$$\frac{d}{dx} (\ln(u)) = \frac{1}{u} \frac{du}{dx}$$

Because $$\log_a(x) = \frac{\ln(x)}{\ln(a)}$$ we have $$\frac{d}{dx} (\log_a(u)) = \frac{1}{u \ln(a)} \frac{du}{dx}$$

$$\frac{d}{dx} (e^u) = e^u \frac{du}{dx}$$

$$\frac{d}{dx} (a^u) = \ln(a) \, a^u \frac{du}{dx}$$

$$\frac{d}{dx} (u^v) = \frac{d}{dx} \left( e^{v \ln(u)} \right) = e^{v \ln(u)} \frac{d}{dx} (v \ln(u)) = v u^{v-1} \frac{du}{dx} + u^v \ln(u) \frac{dv}{dx}$$
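The $u^v$ formula can be spot-checked numerically (a sketch; u(x) = x² + 1, v(x) = sin x and the point x = 1.2 are assumed examples):

```python
import math

x = 1.2
u, du = x * x + 1, 2 * x          # u(x) = x^2 + 1 and its derivative
v, dv = math.sin(x), math.cos(x)  # v(x) = sin(x) and its derivative

# formula: d/dx (u^v) = v u^(v-1) u' + u^v ln(u) v'
formula = v * u ** (v - 1) * du + u ** v * math.log(u) * dv

# centred finite difference of (x^2 + 1)^(sin x), for comparison
g = lambda t: (t * t + 1) ** math.sin(t)
h = 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)
```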

Last edited: Jul 24, 2006
9. Jul 25, 2006

### Mehdi_

The derivative at a point could be seen as a linear approximation of a function at that point.

For example, for a given n-times differentiable function f of one real variable, Taylor's theorem near the point a reads :

$$f(x) = f(a) + \frac{f^{(1)}(a)}{1!} (x-a) + \frac{f^{(2)}(a)}{2!} (x-a)^2 + \frac{f^{(3)}(a)}{3!} (x-a)^3 +....+ \frac{f^{(n)}(a)}{n!} (x-a)^n + R_n$$

When n=1, Taylor's theorem simply becomes :

$$f(x) = f(a) + \frac{f^{(1)}(a)}{1!} (x-a) + R_1$$

The linear approximation is then obtained by dropping the remainder:

$$f(x) \approx f(a) + f^{(1)}(a) (x-a)$$

This process could therefore also be called the tangent line approximation.
The function f is then approximated by its tangent line, which reminds us that in differential geometry one can attach tangent vectors to every point p of a differentiable manifold.

We can also use linear approximations for vector functions of vector variables, in which case $$f^{(1)}(a)$$ is the Jacobian matrix $$J_f(a)$$. The approximation is then the equation of a tangent line, plane, or hyperplane...

$$f(x) \approx f(a) + J_f(a) \cdot (x-a)$$

In the more general case of Banach spaces, one has

$$f(x) \approx f(a) + Df(a) (x-a)$$

where Df(a) is the Fréchet derivative of f at a.
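Since the dropped remainder is $O((x-a)^2)$, halving the step away from a should roughly quarter the error of the tangent-line approximation. A numerical sketch (f = exp and a = 0.5 are arbitrary choices):

```python
import math

f, df = math.exp, math.exp  # f(x) = e^x has f'(x) = e^x
a = 0.5

def lin_err(h):
    # error of the tangent-line approximation f(a) + f'(a) h at x = a + h
    return abs(f(a + h) - (f(a) + df(a) * h))

# quadratic remainder: err(h) / err(h/2) should be close to 4
ratio = lin_err(1e-3) / lin_err(5e-4)
```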

Last edited: Jul 25, 2006
10. Jul 25, 2006

### Mehdi_

A Fréchet derivative is a derivative defined on Banach spaces.

If a function f is Fréchet differentiable at a point a, then its Fréchet derivative is :

$$Df(a) : R^n \rightarrow R^m \mbox{ with } Df(a)(v) = J_f(a) v$$

where $$J_f(a)$$ denotes the Jacobian matrix of f at a

Furthermore, the partial derivatives of f are given by :

$$\frac{\partial f}{\partial x_i} (a) = Df(a)(e_i) = J_f(a) e_i$$

where $e_i$ denotes the canonical basis vectors of $$R^n$$.

Since the Fréchet derivative is a linear map, the directional derivative of the function f along a vector h is given by :

$$Df(a)(h) = \sum h_i \frac{\partial f}{\partial x_i} (a)$$

That brings us naturally to the Gâteaux derivative, which is a generalisation of the concept of directional derivative (see functional derivatives for more details).
A Gâteaux derivative is sometimes also a Fréchet derivative, but unlike the other forms of derivatives above, the Gâteaux derivative need not be linear in the direction h.
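The linearity of Df(a) can be illustrated numerically (a sketch; the function f, the point a and the direction h are assumed examples): the direct directional derivative along h matches the linear combination of partials.

```python
import math

def f(x, y):
    # sample function R^2 -> R (an assumed example)
    return x * x * y + math.sin(y)

ax, ay = 0.7, -0.3   # the point a
hx, hy = 2.0, 1.5    # the direction vector h

d = 1e-6
# partial derivatives of f at a, by centred differences
fx = (f(ax + d, ay) - f(ax - d, ay)) / (2 * d)
fy = (f(ax, ay + d) - f(ax, ay - d)) / (2 * d)

# Df(a)(h) as the linear combination sum_i h_i df/dx_i
linear = hx * fx + hy * fy

# the directional derivative computed directly along h
direct = (f(ax + d * hx, ay + d * hy) - f(ax - d * hx, ay - d * hy)) / (2 * d)
```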

Last edited: Jul 25, 2006
11. Jul 25, 2006

### Mehdi_

Jacobian matrix

The Jacobian matrix is the matrix whose entries are the first-order partial derivatives of a vector-valued function.
It represents a linear approximation to a differentiable function near a given point.

Suppose $$F \mbox{ : } R^n \rightarrow R^m$$ is a function with component functions $y_1, ..., y_m$.
The Jacobian matrix of F is :

$$J_F(x_1, ... , x_n) = \frac{\partial (y_1, ... , y_m)}{\partial (x_1, ... , x_n)} = \left[\begin{array}{ccc}\frac{\partial y_1}{\partial x_1} & \dots & \frac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_m}{\partial x_1}& \dots & \frac{\partial y_m}{\partial x_n}\end{array}\right]$$

When m = n, the determinant of $$J_F$$ is the Jacobian determinant $$\mid J_F \mid$$

$$| J_F(x_1, ... , x_n) | = \mbox{det} \left( \frac{\partial (y_1, ... , y_n)}{\partial (x_1, ... , x_n)} \right) = \left | \begin{array}{ccc}\frac{\partial y_1}{\partial x_1} & \dots & \frac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_n}{\partial x_1}& \dots & \frac{\partial y_n}{\partial x_n}\end{array}\right |$$
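A sketch (assuming numpy; the polar-coordinate map and the point (ρ, φ) = (2, 0.8) are chosen as an example): the Jacobian built column by column from finite differences matches the analytic matrix of partials.

```python
import math
import numpy as np

def F(p):
    # sample map (rho, phi) -> (x, y): the polar-coordinate map
    rho, phi = p
    return np.array([rho * math.cos(phi), rho * math.sin(phi)])

def jacobian_fd(F, p, h=1e-6):
    # finite-difference Jacobian: one column of partials per input variable
    p = np.asarray(p, dtype=float)
    cols = []
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = h
        cols.append((F(p + dp) - F(p - dp)) / (2 * h))
    return np.column_stack(cols)

rho, phi = 2.0, 0.8
J = jacobian_fd(F, (rho, phi))
J_exact = np.array([[math.cos(phi), -rho * math.sin(phi)],
                    [math.sin(phi),  rho * math.cos(phi)]])
jac_err = np.abs(J - J_exact).max()
```

For this map the Jacobian determinant is ρ, which reappears in the change-of-variables example below.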

Last edited: Jul 25, 2006
12. Jul 31, 2006

### Mehdi_

Examples involving the Jacobian determinant $$\mid J_F \mid$$

We'll begin by looking first at an example which shows how a definite integral is affected by a change of variables.
Suppose we want to evaluate the definite integral $\int^{a}_{b} f(u) du$ with the substitution $u = u(x)$:

$$\int^{a}_{b} f(u) du = \int^{c}_{d} f(u(x)) \frac{du}{dx}dx$$

We see that the endpoints are changed and there is a new factor $\frac{du}{dx}$.
This can also be written $du=\frac{du}{dx}dx$

The new factor $\frac{du}{dx}$ is an ordinary derivative, which can be regarded as a $1 \times 1$ Jacobian matrix $$J_{u(x)}$$.

Often, because the limits of integration are not easily interchangeable, one makes a change of variables to rewrite the integral on a different region of integration. To do that, the function must be expressed in the new coordinates (e.g. passage from Cartesian to polar coordinates).

As an example, let's consider the domain $D = \{x^2+y^2 \leq 9,\ x^2+y^2 \geq 4,\ y \geq 0\}$ , that is, the circular crown in the half-plane of positive y. The transformed domain is then the following rectangle:
$$T = (2 \leq \rho \leq 3,\ 0 \leq \phi \leq \pi)$$

The Jacobian determinant of that transformation is the following:

$$| J_{f(x,y)} | = \mbox{det} \frac{\partial (x,y)}{\partial (\rho, \phi) } = \left | \begin{array}{cc} \cos(\phi) & - \rho \sin(\phi) \\ \sin(\phi) & \rho \cos(\phi)\end{array}\right | = \rho$$

which is obtained by inserting the partial derivatives of $$x = \rho \cos(\phi)$$ and $$y = \rho \sin(\phi)$$

It's then possible to define the integral for the change of variables to polar coordinates:

$$\int \int_{D} f(x,y)\, dx\, dy = \int \int_{T} f(\rho \cos(\phi),\rho \sin(\phi))\, | J_{f(x,y)} |\, d \rho\, d \phi = \int \int_{T} f(\rho \cos(\phi),\rho \sin(\phi))\, \rho\, d \rho\, d \phi$$
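The two sides can be compared numerically for f = 1, where the integral is just the area of the half-crown, $(\pi/2)(3^2 - 2^2) = \frac{5\pi}{2}$ (a sketch; the grid resolutions are arbitrary):

```python
import math

# polar side: integral of rho over T = [2,3] x [0,pi], midpoint rule in rho
n = 400
area_polar = sum((2 + (i + 0.5) / n) * (1.0 / n) for i in range(n)) * math.pi

# cartesian side: integral of 1 over D, counting midpoints of small grid cells
step = 0.01
area_cart = 0.0
for i in range(-300, 300):
    for j in range(0, 300):
        x = (i + 0.5) * step
        y = (j + 0.5) * step
        if 4.0 <= x * x + y * y <= 9.0:   # inside the half-crown D (y > 0 by construction)
            area_cart += step * step

exact = 2.5 * math.pi  # (pi / 2)(3^2 - 2^2)
```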

Last edited: Aug 1, 2006
13. Aug 1, 2006

### Mehdi_

Differential forms

A differential 0-form on $R^3$ is a scalar function $$w \mbox{ : } D \rightarrow R$$ of class $C^1$ on a domain D in $R^3$.

A differential 1-form on $R^3$ is an expression of the form

$$w = Adx + Bdy + Cdz$$

where $$F = (A(x,y,z),B(x,y,z),C(x,y,z)) \mbox{ : } D \rightarrow R^3$$ is a vector field on a domain D in $R^3$ whose component functions A, B, C are of class $C^1$.
Writing $a_1 = A$, $a_2 = B$, $a_3 = C$ and $(x_1, x_2, x_3) = (x, y, z)$, the differential 1-form can be written $w = \sum a_i dx_i$ or simply $w = a_i dx^i$

A differential 2-form on $R^3$ is an expression of the form

$$w = Ady \wedge dz + Bdz \wedge dx + Cdx \wedge dy$$

where $$F = (A(x,y,z),B(x,y,z),C(x,y,z)) \mbox{ : } D \rightarrow R^3$$ is a vector field on a domain D in $R^3$ whose component functions A, B, C are of class $C^1$.

If $\alpha$ and $\beta$ are two 1-forms:

$\alpha = f_1 dx^1 + f_2 dx^2$ and $\beta = g_1 dx^1 + g_2 dx^2$

$$\alpha \wedge \beta = (f_1 dx^1 + f_2 dx^2 ) \wedge (g_1 dx^1 + g_2 dx^2)$$
$$= f_1 g_2 \, dx^1 \wedge dx^2 + f_2 g_1 \, dx^2\wedge dx^1$$
$$= f_1 g_2 \, dx^1 \wedge dx^2 - f_2 g_1 \, dx^1\wedge dx^2$$
$$= (f_1 g_2 - f_2 g_1) \, dx^1\wedge dx^2$$
$$= \left | \begin{array}{cc} f_1 & f_2 \\ g_1 & g_2 \end{array}\right | \, dx^1\wedge dx^2$$
Then

$$\alpha \wedge \beta = \sum_{i < j} (f_i g_j \mbox{ - } f_j g_i) dx^i \wedge dx^j$$

A differential k-form could be written

$$\xi = \sum a_{i_1 ... i_k} (x) \, dx^{i_1} \wedge .... \wedge dx^{i_k}$$

and a differential j-form

$$\gamma = \sum b_{m_1 ... m_j} (x) \, dx^{m_1} \wedge .... \wedge dx^{m_j}$$

Their exterior product

$$\xi \wedge \gamma = \sum a_{i_1 ... i_k} (x) \, b_{m_1 ... m_j} (x) \, dx^{i_1} \wedge .... \wedge dx^{i_k} \wedge dx^{m_1} \wedge .... \wedge dx^{m_j}$$

is a differential (k+j)-form.

The coefficients of a differential n-form change under a change of basis by multiplication by the Jacobian.
If w is a differential n-form

$$\omega = \omega (x) dx^1 \wedge .... \wedge dx^n$$

then if $\bar{x}^i$ are new coordinates for x, then, in these new coordinates

$$\omega = \omega (x) \, J \ d \bar{x}^1 \wedge .... \wedge d \bar{x}^n$$

where $$J = \mbox{det} \left( \frac{ \partial x^i }{ \partial \bar{x}^j } \right)$$ is the Jacobian determinant of the coordinate change.
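The determinant identity for the wedge of two 1-forms, and the antisymmetry of the wedge, can be checked on sample coefficients (a sketch; the values of $f_i$, $g_i$ are arbitrary):

```python
# coefficients of alpha = f1 dx^1 + f2 dx^2 and beta = g1 dx^1 + g2 dx^2
f1, f2 = 2.0, -1.0   # arbitrary sample values
g1, g2 = 0.5, 3.0

def wedge_coeff(a, b):
    # coefficient of dx^1 ^ dx^2 in the wedge product of two 1-forms
    return a[0] * b[1] - a[1] * b[0]

coeff = wedge_coeff((f1, f2), (g1, g2))
det = f1 * g2 - f2 * g1                  # the 2x2 determinant form
anti = wedge_coeff((g1, g2), (f1, f2))   # coefficient of beta ^ alpha
```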

Last edited: Aug 1, 2006
14. Aug 1, 2006

### Mehdi_

Now if F and G are 0-forms, $\alpha$ and $\beta$ could be written:

$$\alpha = dF = \frac{\partial F}{\partial x^1} dx^1 + \frac{\partial F}{\partial x^2} dx^2$$
and
$$\beta = dG = \frac{\partial G}{\partial x^1} dx^1 + \frac{\partial G}{\partial x^2} dx^2$$

$$dF \wedge dG = (\frac{\partial F}{\partial x^1} dx^1 + \frac{\partial F}{\partial x^2} dx^2) \wedge (\frac{\partial G}{\partial x^1} dx^1 + \frac{\partial G}{\partial x^2} dx^2 )$$

$$= (\frac{\partial F}{\partial x^1} \frac{\partial G}{\partial x^2} ) dx^1 \wedge dx^2 \mbox{ - } (\frac{\partial F}{\partial x^2} \frac{\partial G}{\partial x^1} ) dx^1 \wedge dx^2$$

$$= ( \frac{\partial F}{\partial x^1} \frac{\partial G}{\partial x^2} \mbox{ - } \frac{\partial F}{\partial x^2} \frac{\partial G}{\partial x^1} ) dx^1 \wedge dx^2$$

$$= | \frac{\partial (F,G)}{\partial (x^1,x^2) }| dx^1 \wedge dx^2$$

The 2-form $dF \wedge dG$ has then been converted to $dx^1 \wedge dx^2$ coordinates.

$$dF \wedge dG = | \frac{\partial (F,G)}{\partial (x^1,x^2) }| dx^1 \wedge dx^2$$
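A numerical sketch of this identity (F = x²y and G = x + y³ are assumed examples): the coefficient of $dx^1 \wedge dx^2$ built from finite-difference partials equals the analytic Jacobian determinant.

```python
# sample 0-forms (assumed examples, not from the thread)
F = lambda x, y: x * x * y
G = lambda x, y: x + y ** 3

x0, y0 = 1.2, -0.7
h = 1e-6
# partial derivatives by centred differences
Fx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)
Fy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)
Gx = (G(x0 + h, y0) - G(x0 - h, y0)) / (2 * h)
Gy = (G(x0, y0 + h) - G(x0, y0 - h)) / (2 * h)

# coefficient of dx^1 ^ dx^2 in dF ^ dG: the Jacobian determinant
coeff = Fx * Gy - Fy * Gx

# analytic check: F_x = 2xy, F_y = x^2, G_x = 1, G_y = 3y^2
exact = (2 * x0 * y0) * (3 * y0 * y0) - (x0 * x0) * 1.0
```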

Last edited: Aug 1, 2006
15. Aug 3, 2006

### Mehdi_

The exterior derivative of a differential k-form

$$\xi = \sum_{i_1 < \dots < i_k} a_{i_1 ... i_k} (x) \, dx^{i_1} \wedge .... \wedge dx^{i_k}$$

is

$$d \xi = \sum_{i_1 < \dots < i_k} d a_{i_1 ... i_k} (x) \wedge dx^{i_1} \wedge .... \wedge dx^{i_k}$$

example: in two variables, consider the 1-form $$w = \sum_i b_i dx_i$$

$$w = b_1(x_1,x_2) dx_1 + b_2(x_1,x_2) dx_2$$

Its exterior derivative is

$$dw = db_1(x_1,x_2) \wedge dx_1 + db_2(x_1,x_2) \wedge dx_2$$

$$dw = (\frac{\partial b_1}{\partial x^1} dx_1 + \frac{\partial b_1}{\partial x^2} dx_2) \wedge dx_1 + (\frac{\partial b_2}{\partial x^1} dx_1 + \frac{\partial b_2}{\partial x^2} dx_2) \wedge dx_2$$

Last edited: Aug 3, 2006
16. Aug 4, 2006

### Mehdi_

$$dw = (\frac{\partial b_1}{\partial x^1} dx_1 + \frac{\partial b_1}{\partial x^2} dx_2) \wedge dx_1 + (\frac{\partial b_2}{\partial x^1} dx_1 + \frac{\partial b_2}{\partial x^2} dx_2) \wedge dx_2$$

$$dw = (\frac{\partial b_1}{\partial x^1} dx_1 \wedge dx_1 + \frac{\partial b_1}{\partial x^2} dx_2 \wedge dx_1) + (\frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2 + \frac{\partial b_2}{\partial x^2} dx_2 \wedge dx_2 )$$

$$dw = \frac{\partial b_1}{\partial x^2} dx_2 \wedge dx_1 + \frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2$$

$$dw = - \frac{\partial b_1}{\partial x^2} dx_1 \wedge dx_2 + \frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2$$

$$dw = \frac{\partial b_2}{\partial x^1} dx_1 \wedge dx_2 - \frac{\partial b_1}{\partial x^2} dx_1 \wedge dx_2$$

$$dw = ( \frac{\partial b_2}{\partial x^1} - \frac{\partial b_1}{\partial x^2} ) dx_1 \wedge dx_2$$
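As an independent check of this coefficient, Green's theorem (Stokes' theorem for this dw) says its integral over a region equals the line integral of w around the boundary. A numerical sketch on the unit square (the coefficient functions b1 = −y², b2 = xy are assumed examples):

```python
# sample coefficient functions of the 1-form w = b1 dx_1 + b2 dx_2
b1 = lambda x, y: -y * y
b2 = lambda x, y: x * y
# coefficient of dw: db2/dx_1 - db1/dx_2 = y + 2y = 3y
curl = lambda x, y: 3 * y

n = 200
mids = [(i + 0.5) / n for i in range(n)]

# integral of the dw coefficient over the unit square (midpoint rule)
interior = sum(curl(x, y) for x in mids for y in mids) / (n * n)

# line integral of w counterclockwise around the square's boundary
boundary = (sum(b1(s, 0.0) for s in mids) / n      # bottom edge, dx > 0
            + sum(b2(1.0, s) for s in mids) / n    # right edge, dy > 0
            - sum(b1(s, 1.0) for s in mids) / n    # top edge, dx < 0
            - sum(b2(0.0, s) for s in mids) / n)   # left edge, dy < 0
```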

Last edited: Aug 4, 2006
17. Aug 6, 2006

### Mehdi_

A differential k-form should be written

$$\omega = \sum_{i_1 < \dots < i_k} a_{i_1 ... i_k} (x^m) dx^{i_1} \wedge .... \wedge dx^{i_k}$$

The restriction $$i_1 < \dots < i_k$$ is very important.

It could however also be written without $$\sum_{i_1 < \dots < i_k}$$.
The new representation uses the convenience of the summation convention.

$$\omega = \frac{1}{k!} a_{i_1 ... i_k} (x^m) dx^{i_1} \wedge .... \wedge dx^{i_k}$$

We can see that the factor $$\frac{1}{k!}$$ arises in this alternative way of writing a differential k-form because, when the indices run freely, each independent term $$a_{i_1 ... i_k} (x^m)$$ is counted k! times (once for each permutation of the indices).
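The counting behind the 1/k! factor can be verified directly (a sketch; n = 5 and k = 3 are arbitrary example sizes): the tuples of k distinct indices are exactly the increasing tuples, each repeated k! times.

```python
from itertools import combinations, permutations

n, k = 5, 3  # arbitrary example sizes
increasing = list(combinations(range(1, n + 1), k))      # tuples with i_1 < i_2 < i_3
free = [p for c in increasing for p in permutations(c)]  # all distinct-index tuples

overcount = len(free) // len(increasing)  # how many times each term repeats
```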

Last edited: Aug 6, 2006
18. Aug 7, 2006

### Mehdi_

Exterior algebra is the algebra of the exterior product $$\wedge$$, also called an alternating algebra or Grassmann algebra.

with the properties (for 1-forms $\omega$ and $\gamma$, and more generally for forms of odd degree)

$$\omega \wedge \omega = 0$$

$$\omega \wedge \gamma = \mbox{ - } \gamma \wedge \omega$$

Exterior algebra of a given vector space V over a field K is denoted by Λ(V) or Λ*(V).

The exterior algebra can be written as the direct sum of each of the k-th powers:

$$\bigwedge (V) = \bigoplus^{ n }_{k=0} \bigwedge^{k} V$$

Therefore

$\bigwedge (V) = \bigwedge^{0} V \bigoplus \bigwedge^{1} V \bigoplus ... \bigoplus \bigwedge^{n} V$

where $\bigwedge^{0} V = K$ (the base field) and $\bigwedge^{1} V = V$

The dimension of $\bigwedge^{k} V$ is n choose k, $$\left(\begin{array}{cc} n \\ k \end{array}\right)$$

The dimension of $\bigwedge (V)$ is then equal to the sum of the binomial coefficients, $$\sum^{n}_{k=0} \left(\begin{array}{cc} n \\ k \end{array}\right)$$, which is $$2^n$$
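The dimension count is easy to confirm (a sketch; n = 6 is an arbitrary example):

```python
from math import comb

n = 6  # dim V, an arbitrary example
dims = [comb(n, k) for k in range(n + 1)]  # dim of Lambda^k V is n choose k
total = sum(dims)                          # dim of the whole exterior algebra
```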

Last edited: Aug 7, 2006
19. Aug 8, 2006

### Mehdi_

If E is a vector space, E* is then its dual space.

$\bigwedge^{r} E*$ is the vector space of multilinear alternating r-forms on E.

We have $\bigwedge^{0} E* = R$ and $\bigwedge^{1} E* = E*$

$\bigwedge^{1} E*$, the space of differential 1-forms, coincides with the dual space T*(E) which is the cotangent space.

The elements of $\bigwedge^{1} E*$, in terms of natural basis $$dx^i$$, have the representation:

$$\omega = \omega_ i (x^j) dx^i$$ which is a 1-form.

The space $\bigwedge^{0} E*$ is referred to as the space of forms of degree zero, which is the space of functions f(x).

Last edited: Aug 8, 2006
20. Aug 8, 2006

### dextercioby

I'm sure you realize that's not an answer to your question. For your question "But how to construct a Lie group from a Lie algebra ?", the answer is: generally you can't. It's just for simply connected group manifolds that you can apply the method suggested for SU(2).

Daniel.