Group Theory Basics: Where Can I Learn More?

Group Theory is gaining interest among learners due to its widespread applications, particularly in physics and mathematics. Recommended resources for beginners include "Groups and Symmetry" by M.A. Armstrong, "An Introduction to the Theory of Groups" by J. Rotman, and Schaum's Outline of Group Theory, which is noted for its solved examples. There is a suggestion for a collaborative workshop to explore classical groups like SO(3) and SU(2), focusing on their relevance to physics. Online resources are also available, including a link to a free textbook by Marsden that covers Lie groups. The discussion emphasizes the importance of accessible learning materials and community engagement in mastering Group Theory.
  • #61
Would it work to have one inclusive study group?

I see several people are interested in group representations
and I'm thinking maybe we can just follow our interests.

I don't remember being part of an online study group and
don't have much idea of what works and what doesn't.

I propose chroot to be our nominal emcee or leader if we need one. But I don't care if we have a leader or have complete anarchy. And if somebody else is leader that is fine too.

Lonewolf defines the prerequisites, as I see it----one course in linear algebra and some time and willingness to work.

Why don't we see if we can get to some target in, say, the representation of some classic Lie group.

Maybe we will run out of gas halfway, but anyway we will have a destination.

What say this for a target-----classify the irreducible representations of SU(2). Can we get there from scratch?
Start with basic definitions and try to touch all the essential bases on the way?

I mention it because that target is highly visible. Maybe Hurkyl, Chroot, or Lethe can suggest a more practical goal.
Having some goal will determine for us what things we have to cover, so we won't have to decide anything.

It might not matter what order we do things either.
Lethe for example could probably say right now what all the irred. reps of SU(2) are (up to isomorphism)

oops have to go

Originally posted by Lonewolf
I've covered the basics of group theory, and completed a course in linear algebra to be concluded next academic year. I'm pretty comfortable with the prerequisites you listed in the other thread. I'm willing to learn and I've got four months to fill, so I'm prepared to put some time in.
 
  • #62


Originally posted by marcus

What say this for a target-----classify the irreducible representations of SU(2). Can we get there from scratch?
Start with basic definitions and try to touch all the essential bases on the way?

I mention it because that target is highly visible. Maybe Hurkyl, Chroot, or Lethe can suggest a more practical goal.
Having some goal will determine for us what things we have to cover, so we won't have to decide anything.

It might not matter what order we do things either.
Lethe for example could probably say right now what all the irred. reps of SU(2) are (up to isomorphism)

a slightly more ambitious goal, that i would like to suggest, is the lorentz group SL(2,C)/Z2. it includes the rotation group as a subgroup (and thus includes all the concepts of SU(2), which would probably be a very good starting place), but it has a less trivial algebra, it is noncompact, so we can address those issues, and not simply connected, so we can also address those issues.

perhaps this is too ambitious. at any rate, SU(2) is a good starting point, and if that ends up being where we finish too, so be it.
 
  • #63
I too think going for the representations of SU(2) and SO(3) would be a good first goal, if only because of the importance of those groups in physics. In any case, that's the first goal I had set myself after I read that LQG primer. :smile:
 
  • #64
Originally posted by Hurkyl
I too think going for the representations of SU(2) and SO(3) would be a good first goal, if only because of the importance of those groups in physics. In any case, that's the first goal I had set myself after I read that LQG primer. :smile:

Two online books have been mentioned.

Hurkyl, I believe you indicated you were using Brian Hall
("An Elementary Introduction to Groups and Representations")

That is 128 pages and focuses on matrix groups, so it works
with a lot of concrete relevant examples. I really like it.

Earlier I was talking about Marsden's Chapter 9, and Lonewolf
extracted some stuff from that source and posted his notes;
I essentially did likewise with another patch of Marsden.

It would be helpful if we all had one online textbook to focus on.

I now think Brian Hall (your preference) is better adapted to people's interests and that maybe I goofed when I suggested Marsden.

I regret possibly causing people to waste time and printer paper printing off that long Chapter 9. I'm personally glad to have it for reference though, not the end of the world. But Brian Hall on balance seems better.

Let's see what theorems he needs to get the representations of SU(2). I mean---work backwards and figure out a route.

Brian Hall's chapter 3, especially pp 27-37, seems to me to be grand central station.

chapter 3 is "Lie algebras and the exponential mapping"

He shows how to find the *logarithm* of a matrix
and he proves the useful formula

det exp(A) = exp( trace(A) )

and he proves the "Lie product formula"

and I can honestly say to Lonewolf that there is nothing scary here----nothing (that I can see with my admittedly foggy vision) that is fundamentally hard

(except at one point he uses the Jordan canonical form of a matrix----the fact that you can put it in a specially nice upper triangular form---which is a bit tedious to prove, so nobody ever does; they just invoke it. Just one small snag or catch which we need not belabor)

It seems to me that to get where we want to go the main "base camp" destination is to show Lonewolf (our only novice and thus the most important person in a curious sense) the logarithm map that gets you from the group up into its tangent space (the algebra)
and the exponential map that gets you back down from the tangent space to the group

these are essentially the facts Brian Hall summarizes in the first 10 pages or so of Chapter 3 and then he gives a whole bunch of nice concrete examples illustrating it----pages 37-39.
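
For anyone who wants to poke at these two facts numerically, here is a minimal sketch. Python with numpy and scipy is my own choice here, not anything from Hall's book:

Code:
import numpy as np
from scipy.linalg import expm, logm

# A random 3x3 real matrix standing in for a Lie algebra element,
# kept small so the principal logarithm really inverts exp.
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3))

G = expm(A)          # exponential map: algebra -> group
A_back = logm(G)     # logarithm map: group -> algebra
print(np.allclose(A, A_back))   # True: the round trip recovers A

# the useful formula: det(exp(A)) = exp(trace(A))
print(np.isclose(np.linalg.det(G), np.exp(np.trace(A))))   # True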

Hurkyl, I am glad you mentioned Brian Hall's book

arXiv:math-ph/0005032
 
  • #65
Have to say, if where we want to go is the representations of SU(2), then we can certainly take a peek at the destination
and it is lovely

just was glancing at Brian Hall's page 71

this being about five pages or so into his chapter 5 "Basic Representation Theory"

So simple!

SU(2) is just a nice kind of 2 x 2 matrix of complex numbers! We always knew that, but suddenly he does the obvious thing and uses a matrix U, or (just a slight variation on the idea) its inverse U^-1, to STIR UP polynomials in two complex variables!

We have to be talking to an imaginary novice to define the level of explanation and for better or worse Lonewolf is standing in for that novice. I think this polynomial idea will make sense to him!

If you have a polynomial in two variables z1 and z2,
then you can, before plugging z1 and z2 into the polynomial,
operate on them with a 2 x 2 matrix!

This gives a new polynomial in effect. It is a sweet innocent obvious idea. Why not do this and get new polynomials?

And indeed the polynomials of any given combined degree in two variables are a vector space. So there we already have our group busily working upon some vectorspace and stirring the vectors around.

And to make everything as simple as possible we will consider only homogeneous polynomials of degree m,
meaning that in each term the power of z1 and the power of z2 add up to m.
It is a "uniformity" condition on the polynomial: all its terms have the same combined degree.

this must be the world's easiest way to come up with an action of SU(2) on an m+1 dimensional vectorspace. Must go back to the days of Kaiser Wilhelm.

a basis of our vectorspace V_m can consist of m+1 monomials like

(z1)^2 (z2)^(m-2)

the coefficients can be complex numbers; it is a vector space over the complex numbers, which may be somewhat less familiar than over the reals but is still no big deal.

The official (possibly imaginary) novice may be wondering "what does irreducible mean". Indeed I hope Lonewolf is around and wondering this because we really need someone to explain to.
Well there is a group
and a mapping of the group into the linear operators on a vector space (some method for the group to act on vectors, like this scheme of using matrices to stir up polynomials)

that is called a representation (speaking unrigorously)
and it is irreducible if there is no part of the vectorspace left unstirred.
no proper nonzero subspace of V which is left invariant by the group.
no redundant part of V which doesn't get moved somewhere by at least one element of the group.

if there were an invariant subspace you could factor it out and
so-to-speak "reduce" the representation to a lower dimensional one.
so that's what irreducible means

it looks like these polynomials get pretty thoroughly churned around by preprocessing z1 and z2 with a matrix, but to be quite correct we need to check that they really are and that there is no invariant subspace.

******footnote*****

I think I said this before but just to be fully explicit about the action of the group:


If P(z1,z2) is the old polynomial, then the matrix U acts on it to produce a new polynomial by taking U^-1 and acting on (z1, z2) to produce a new pair of complex numbers

(w1, w2) = U^-1 (z1, z2)

and then evaluate the polynomial with (w1, w2):

P( U^-1 (z1, z2) )

*****************
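
For concreteness, here is a small symbolic sketch of this action. Python with sympy is my own choice here, and the particular U below is just one handy element of SU(2), not one from the thread:

Code:
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# one specific element of SU(2): columns (a, b) and (-conj(b), conj(a)),
# with |a|^2 + |b|^2 = 1, so U is unitary with det U = 1
a, b = sp.Rational(3, 5), sp.Rational(4, 5) * sp.I
U = sp.Matrix([[a, -sp.conjugate(b)], [b, sp.conjugate(a)]])

# a homogeneous polynomial of degree m = 2
P = z1**2 + 3*z1*z2

# the action: preprocess (z1, z2) with U^-1, then evaluate P
w = U.inv() * sp.Matrix([z1, z2])
newP = sp.expand(P.subs({z1: w[0], z2: w[1]}, simultaneous=True))
print(newP)   # again homogeneous of degree 2 in z1, z2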
hope it's not unwise to take a peek at the destination
first before trying to see how to get there
especially hope to get comments from Lethe, Chroot, Hurkyl
on how this should go, which theorems to hit, whether to have an orderly or random progression, whether Brian Hall gives a good focus, etc.
 
  • #66


Originally posted by lethe
a slightly more ambitious goal, that i would like to suggest, is the lorentz group SL(2,C)/Z2. it includes the rotation group as a subgroup (and thus includes all the concepts of SU(2), which would probably be a very good starting place), but it has a less trivial algebra, it is noncompact, so we can address those issues, and not simply connected, so we can also address those issues.

perhaps this is too ambitious. at any rate, SU(2) is a good starting point, and if that ends up being where we finish too, so be it.

first off, I would love it if you would do a whole bunch of explanation and get us started moving.
I tend to talk too much so I have to shut up and wait.
But I don't want this thread to get cold!

second. I totally agree. SU(2) and SO(3) are good initial targets but if it turns out to be fun to get to them then it would be
great to go on past to Poincaré

I am counting (hoping) on you (plural) to explain the exponential map that connects the L.algebra to the L.group, because that seems to be crucial to everything including describing the reps
 
  • #67
Hey Lonewolf, is there anything you need explained?

I wish Chroot or Lethe, both of whom could,
would take over and move this ahead.
I tend to talk too much and would like to be quiet for a while.

It is a good thread. It should do something.
What are you up to mathwise now it's summer vacation?
 
  • #68
Please don't slow down the threads on my behalf. I'll be around, just nodding and smiling in the background.
 
  • #69
Explaining? Only the exponential map. I can't seem to see how it relates to what it's supposed to...maybe that gets explained further along in the text than I am, or I'm just missing the point.
 
  • #70
Originally posted by Lonewolf
Please don't slow down the threads on my behalf. I'll be around, just nodding and smiling in the background.

OK I must have said something wrong and derailed the thread.
I have this fundamental fixed opinion that in any explanation the most important person is the novice, and I cannot imagine having an explanation party about groups or lie algebras or anything else without one person who freely confesses to not knowing the subject.

Then you focus with one eye on the target (the theorems you want to get to) and with one eye on the novice

and you try to get the novice to the target destination

and the novice is also partly imaginary----the real one may get bored and go away in the meanwhile.

but anyway that is how I imagine it. I can't picture doing groups with just Lethe and Chroot because they both already KNOW groups. Chroot is a Stanford tech student almost to his degree. Lethe is also clearly very capable and knowledgeable.

Don't sit in the background nodding, for heaven's sake. ASK these people to explain something to you. Well that is how I picture things and that is my advice. But who knows, it may all work out differently.

this is a great fact:

det( exp A) = exp (trace A)

do you know what det is and what trace is and do you know
what the exponential e^x map is? I sort of assume so.
But if not then ask those guys and make them work; it will be good for their mathematical souls.
 
  • #71
Could you elaborate on what you mean by

it is irreducible if there is no part of the vectorspace left unstirred.

and what an invariant subspace is, please?
 
  • #72
Originally posted by Lonewolf
Explaining? Only the exponential map. I can't seem to see how it relates to what it's supposed to...maybe that gets explained further along in the text than I am, or I'm just missing the point.

You have had a math course where they said

exp(t) = 1 + t + t^2/2! + ... (you can continue this)

If not you will be hurled from a high cliff.

Suppose instead of 1 one puts the n x n identity matrix

and instead of t one puts some n x n matrix A.

At some time in our history someone had this fiendishly clever idea: put a matrix into the series in place of a number. It will converge and give a matrix.

But here is an easy question for YOU Lonewolf.

What if A is a diagonal matrix with, say, 1/2 all the way down the diagonal

then what is exp (A)?

Don't be reluctant to ask things. Don't wait for it to be "covered later". Any of us may fail to give a coherent answer but ask.

But now I am asking you, can you calculate that n x n, well to be specific call it 3 x 3, matrix exp(A)? Can you write it down?

What is the trace of A?
What is the determinant of exp A?

If I am poking at you a little it is because I am in the dark about what you know and don't know.
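
For checking such computations afterwards, here is a little numerical sketch (Python with numpy/scipy, my own choice, nothing the thread requires):

Code:
import numpy as np
from scipy.linalg import expm

A = 0.5 * np.eye(3)            # 1/2 all the way down the diagonal

# truncated series: I + A + A^2/2! + ... + A^k/k!
S, term = np.eye(3), np.eye(3)
for k in range(1, 20):
    term = term @ A / k
    S = S + term

print(np.allclose(S, expm(A)))                        # series matches expm
print(np.allclose(expm(A), np.exp(0.5) * np.eye(3)))  # exp(A) = e^(1/2) I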
 
  • #73
We're supposed to think of a Lie Group as a group of transformations with various properties. One of the more interesting properties is that we can form "one-parameter families" that have the property that:

T_0 x = x
T_s T_t x = T_(s+t) x

We can think of the parameter as being the "size" of the transformation. An example will probably make this clear.


Consider R^2, and let T_θ be rotations around the origin through an angle of θ. Then T_0 is the identity transformation, and T_θ T_φ x = T_(θ+φ) x, so rotations form a one-parameter family when parametrized by the angle of rotation.


Since we have this continuous structure, it's natural to extend the ideas of calculus to Lie Groups. So, what if we consider an infinitesimal transformation T_dt in a one-parameter family?

Let's do an example using rotations in R^2. Applying the rotation T_θ can be expressed by premultiplying by the matrix:

Code:
/ cos θ -sin θ \
\ sin θ  cos θ /

So what if we plug in an infinitesimal parameter? We get

Code:
/ cos dθ -sin dθ \ = / 1  -dθ \
\ sin dθ  cos dθ /   \ dθ  1  /

 = / 1 0 \ + / 0 -1 \ * dθ
   \ 0 1 /   \ 1  0 /

So infinitesimal rotations are simply infinitesimal translations. This is true in general; we can make locally linear approximations to transformations just like ordinary real functions, such as:

f(x + dx) = f(x) + f'(x) dx

We call the algebra of infinitesimal transformations a Lie Algebra.


The interesting question is how to go the other way. What if we had the matrix

Code:
/ 0 -1 \
\ 1  0 /

and we wanted to go the other way to discover this is the derivative of a family of transformations?

Well, integration won't work, so let's take a different approach; let's repeatedly apply our linear approximation. If X is our element from the lie algebra, then (1 + t X) is approximately the transformation we seek, T_t. We can improve our approximation by applying the approximation twice, but each time half as long:

(1 + (t/2) X)^2

And in general we can break it up into n legs:

(1 + (t/n) X)^n

So then we might suppose that:

T_t = lim_(n→∞) (1 + (tX/n))^n

And just like in the ordinary case, this limit evaluates to:

T_t = e^(tX)

That's where the exponential map comes from!

You can then verify that the derivative of T_t at t = 0 is indeed X


To summarize, we exponentiate elements of the Lie Algebra (in other words, apply an infinitesimal transformation an infinite number of times) to yield elements of the Lie Group.
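
Here is a quick numerical illustration of that limit, as a sketch (Python with numpy is my own choice; X and t below are just the rotation generator and angle from the example above):

Code:
import numpy as np

X = np.array([[0.0, -1.0],
              [1.0,  0.0]])    # the infinitesimal rotation generator
t = np.pi / 3                  # rotate by 60 degrees

def approx(n):
    # (I + (t/n) X)^n : n small legs of the linear approximation
    return np.linalg.matrix_power(np.eye(2) + (t / n) * X, n)

exact = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])   # T_t = e^(tX)

for n in (1, 10, 100, 10000):
    print(n, np.max(np.abs(approx(n) - exact)))   # error shrinks as n grows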



edit: fixed some hanging formatting tags
 
  • #74
my browser draws a blank sometimes and shows boxes so I am
experimenting with typography a bit here. Nice post.
I don't seem to be able to get the theta to show up inside a "code" area. All I get is a box.

Well that is all right. I can read the box as a theta OK.
Strange that theta shows up outside the "code" area but not
inside.

That is a nice from-first-principles way to introduce the
exponential of matrices.

Can you show

det exp(A) = exp (trace A)

in a similarly down-to-earth way?

I see it easily for diagonal matrices but when I thought about it I had to imagine putting the matrix in a triangular form
 
  • #75
YO! LONEWOLF, you are about to see sl(2, C)

Lonewolf, your job is to react when people explain something in a way you can understand. Stamp feet. Make hubbub of some kind.

You are about to see an example of a Lie algebra.

Hurkyl is about to show you what the L.A. is that belongs to the group of DET = 1 matrices, for example SL(2, C).
The L.A. for SL(2,C) is written with lowercase as sl(2, C)

The L.G. of matrices with det = 1 is made by the exponential map exp(A) from TRACE ZERO matrices A.

because exp(0) = 1.

So if Hurkyl takes one more step he can characterize the L.A.
of the group of det = 1 matrices.

Actually of any size and over the reals as well as the complexes I think. But just to be specific think of 2x2 matrices.

Lonewolf, do you understand this? Do you like it? I think it is terrific, like sailing on a windy day. L.G. and L.A. are really neat.

Well probably it is 4 AM in the UK so you cannot answer.
 
  • #76
If not you will be hurled from a high cliff.

I guess you don't have to bother coming over here and finding a high cliff then. :wink:

then what is exp (A)?

exp(A) =
Code:
( e^(1/2)     0         0     )
(    0      e^(1/2)     0     )
(    0         0      e^(1/2) )

What is the trace of A?

trace(A) = sum of diagonal entries = a_11 + a_22 + a_33 = 1/2 + 1/2 + 1/2 = 3/2

What is the determinant of exp A?

det[exp(A)] = trace(A)
 
  • #77
And in general we can break it up into n legs:

(1 + (t/n) X)^n

This is pretty much when the penny dropped.

because exp(0) = 1.

This makes sense as well, and I can see where the exponential map is used now. Thanks.
 
  • #78
det[exp(A)] = trace(A)
Very good! Except I think you mean:

det[exp(A)] = exp[trace(A)]

Probably a typo.

- Warren
 
  • #79
Oops, yeah. I probably should learn to read my posts...
 
  • #80
Well that is all right. I can read the box as a theta OK.
Strange that theta shows up outside the "code" area but not
inside.

You're having font issues then. Your default font does indeed have the theta symbol, but the font your browser uses for the code blocks does not have a theta symbol (and replaces it with a box).


This is pretty much when the penny dropped.

Eep! I've never heard that phrase before, is that good or bad?


Can you show

det exp(A) = exp (trace A)

in a similarly down-to-earth way?

Nope. The only ways I know to show it are to diagonalize or to use the same limit approximation as above and the approximation:

det(I + A dt) = 1 + tr(A) dt

which you can verify by noting that all of the off diagonal entries are nearly zero, so the only important contribution is the product of the diagonal entries.


I think there's a really slick "down-to-earth" proof as well. I know the determinant is a measure of how much a transformation scales hypervolumes. (e.g. if the determinant of a 2x2 matrix near a point is 4, then applying the matrix will multiply the areas of figures near that point by 4) I know there's a nice geometrical interpretation of the trace, but I don't remember what it is.
 
  • #81
Originally posted by Hurkyl
Nope. The only ways I know to show it are to diagonalize or to use the same limit approximation as above and the approximation:

det(I + A dt) = 1 + tr(A) dt

which you can verify by noting that all of the off diagonal entries are nearly zero, so the only important contribution is the product of the diagonal entries.

All that shows is that the formula holds to good approximation for matrices with elements that are all much less than one.

One correct proof goes as follows:

For any matrix A, there is always a matrix C such that CAC^-1 is upper triangular, meaning that all elements below the diagonal vanish. The key properties needed for the proof are that the space of upper triangular matrices is closed under matrix multiplication, and their determinants are the product of the elements on their diagonals. The only other thing we use is the invariance of the trace under cyclic permutations of its arguments, so that Tr(CAC^-1) = Tr A. The proof follows trivially.
 
  • #82
The proof to which I was alluding is:

det(e^A) = det( lim_(n→∞) (I + A/n)^n )
= lim_(n→∞) det( (I + A/n)^n )
= lim_(n→∞) ( det(I + A/n) )^n
= lim_(n→∞) ( 1 + tr(A)/n + O(1/n^2) )^n
= e^(tr(A))
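
One can also watch this determinant limit converge numerically. A small sketch (Python with numpy, my own choice; the matrix A is arbitrary):

Code:
import numpy as np

A = np.array([[0.3, 1.2, -0.5],
              [0.1, -0.7, 0.4],
              [0.9, 0.2, 0.6]])

target = np.exp(np.trace(A))    # e^(tr A)

for n in (10, 1000, 100000):
    d = np.linalg.det(np.eye(3) + A / n) ** n   # (det(I + A/n))^n
    print(n, d, target)         # d approaches e^(tr A) as n grows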
 
  • #83
another proof if you know some topology: diagonalizable matrices are dense in GL(n,C), both sides of the formula are continuous, and the formula is easy to check for diagonalizable matrices.
 
  • #84
Eep! I've never heard that phrase before, is that good or bad?

It's a good thing. We use it over here to mean the point where somebody realizes something. Sorry about that, I thought it was in wider use than it is.
 
  • #85
Originally posted by Lonewolf
It's a good thing. We use it over here to mean the point where somebody realizes something. Sorry about that, I thought it was in wider use than it is.

I always assumed it was like the coin dropping in a payphone.
Maybe going back to old times when cooking gas was metered
out by coin-operated devices---the penny had to drop for something to turn on.

I have lost track of this thread so much has happened.

Just to review something:
A skew-symmetric means A^T = -A
and a skew-symmetric matrix must be zero down the diagonal
so its trace is clearly zero, and another definition:
B orthogonal means B^T = B^-1

Can you prove that if
A is a skew symmetric matrix then exp(A) is orthogonal and
has det = 1?
I assume you can. It characterizes the Lie algebra "so(3)" that goes with the group SO(3). You may have noticed that they use lowercase "what(...)" to stand for the Lie algebra that goes with the Lie group "WHAT(...)"

Excuse if this is a repeat of something I or someone else said earlier.
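
For the impatient, here is the exercise done numerically, as a sketch (Python with numpy/scipy, my own choice):

Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M - M.T                    # skew-symmetric: A^T = -A, zero diagonal

R = expm(A)
print(np.allclose(R.T @ R, np.eye(3)))    # R^T = R^-1, so R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # det R = 1, so R is in SO(3)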
 
  • #86
SO(3) is defined to be the space of all 3x3 real matrices G such that:

G^T = G^-1
det G = 1

So what about its corresponding Lie Algebra so(3)? It is the set of all 3x3 real matrices A such that exp(tA) is in SO(3) for every real t.

So how do the constraints on SO(3) translate to constraints on so(3)?

The second condition is easy. If A is in so(3), then:

exp(tr A) = det exp(A) = 1

so tr A must be zero. Conversely, for any matrix A with tr A zero, the second condition will be satisfied.


The first one is conceptually just as simple, but technically trickier. Translated into so(3) it requires:

exp(A)^T = exp(A)^-1
exp(A^T) = exp(-A)
*** this step to be explained ***
A^T = -A

Therefore if A is in so(3) then A must be skew symmetric. And conversely, it is easy to go the other way to see that any skew symmetric matrix A satisfies the first condition.

Therefore, so(3) is precisely the set of 3x3 traceless skew symmetric matrices.


I skipped over a technical detail in the short proof above. If exponents are real numbers then the marked step is easy to justify by taking the logarithm of both sides... however logarithms are only so nice when we're working with real numbers! I left that step in my reasoning because you need it when working backwards.

The way to prove it going forwards is to consider:

exp(s A^T) = exp(-s A)

If A is in so(3), then this must be true for every s, because so(3) forms a real vector space. Now, we differentiate with respect to s to yield:

(A^T) exp(s A^T) = (-A) exp(-s A)

Which again must be true for all s. Now, plug in s = 0 to yield:

A^T = -A

This trick is a handy replacement for taking logarithms!


Anyways, we've proven now that so(3) is precisely all 3x3 real traceless skew symmetric matrices. In fact, we can drop "traceless" because real skew symmetric matrices must be traceless.

For matrix algebras we usually define the lie bracket as being the commutator:

[A, B] = AB - BA

I will now do something interesting (to me, anyways); I will prove that so(3) is isomorphic (as a Lie Algebra) to R^3 where the lie bracket is the vector cross product!


The first thing to do is find a (vector space) basis for so(3) over R. The most general 3x3 skew symmetric matrix is:

Code:
/  0  a -b \
| -a  0  c |
\  b -c  0 /

Where a, b, and c are any real numbers. This leads to a natural choice of basis:

Code:
    /  0  0  0 \
A = |  0  0 -1 |
    \  0  1  0 /

    /  0  0  1 \
B = |  0  0  0 |
    \ -1  0  0 /

    /  0 -1  0 \
C = |  1  0  0 |
    \  0  0  0 /

As an exercise for the reader, you can compute that:
AB - BA = C
BC - CB = A
CA - AC = B
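
These relations are quick to confirm numerically. A sketch (Python with numpy, my own choice, not part of the exercise):

Code:
import numpy as np

A = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
B = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
C = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

comm = lambda X, Y: X @ Y - Y @ X   # the lie bracket [X, Y] = XY - YX

print(np.array_equal(comm(A, B), C))   # AB - BA = C
print(np.array_equal(comm(B, C), A))   # BC - CB = A
print(np.array_equal(comm(C, A), B))   # CA - AC = B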

So now I propose the following isomorphism φ from so(3) to R^3:

φ(A) = i
φ(B) = j
φ(C) = k

And this, of course, extends by linearity:

φ(aA + bB + cC) = ai + bj + ck


So now let's verify that this is actually an isomorphism:

First, the vector space structure is preserved; φ is a linear map, and it takes a basis of the three dimensional real vector space so(3) onto a basis of the three dimensional real vector space R^3, so φ must be a vector space isomorphism.

The only remaining thing to consider is whether φ preserves lie brackets. We can do so by considering the action on all pairs of basis elements (since the lie bracket is bilinear)

φ([A, A]) = φ(AA - AA) = φ(0) = 0 = i × i = [i, i] = [φ(A), φ(A)]
(and similarly for [B, B] and [C, C])
φ([A, B]) = φ(AB - BA) = φ(C) = k = i × j = [i, j] = [φ(A), φ(B)]
(and similarly for other mixed pairs)

So we have verified that so(3) and (R^3, ×) are isomorphic as Lie Algebras! If we so desired, we could then choose (R^3, ×) as the Lie Algebra associated with SO(3), and define the exponential map as:

exp(v) = exp(φ^-1(v))

So, for example:

exp(tk) = rotation of t radians in the x-y plane
 
  • #87
Hurkyl:

Great post!

- Warren
 
  • #88
Originally posted by chroot
Hurkyl:

Great post!

- Warren

I agree!
 
  • #89
Bah, there are no blushing emoticons!

Thanks guys!


I'm not entirely sure where to go from here, though, since I'm learning it with the rest of you! (so if any of you have things to post, or suggestions on which way we should be studying, feel free to say something! :smile:) But I did talk to one of my coworkers and got a three hour introductory lecture on Lie Groups / Algebras in various contexts, and I think going down the differential geometry route would be productive (and it allows us to keep the representation theory in the representation theory thread!)... I think we are almost at the point where we can derive Maxwellian Electrodynamics as a U(1) gauge theory (which will motivate some differential geometry notions in the process), but I wanted to work out most of the details before introducing that.

Anyways, my coworker did suggest some things to do in the meanwhile; we should finish deriving the Lie algebras for the other standard Lie groups, such as su(2), sl(n; C), so(3, 1)... so I assign that as a homework problem for you guys to do in this thread! :smile:
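
As a starting point for that homework, here is a sketch of the su(2) case (Python with numpy/scipy, my own choice): exponentiating a traceless anti-Hermitian 2x2 matrix should land in SU(2).

Code:
import numpy as np
from scipy.linalg import expm

# a general element of su(2): i times a real combination of the Pauli
# matrices, which is traceless and anti-Hermitian (A^H = -A)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = 1j * (0.3 * sx - 1.1 * sy + 0.7 * sz)

U = expm(A)
print(np.allclose(U.conj().T @ U, np.eye(2)))   # U is unitary
print(np.isclose(np.linalg.det(U), 1.0))        # det U = 1, so U is in SU(2)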
 
  • #90
I think we are almost at the point where we can derive Maxwellian Electrodynamics as a U(1) gauge theory

As a what now?
 
