My Algebra Questions: Answers & Updates

  • Thread starter: JasonRox
  • Tags: Algebra
JasonRox (Homework Helper, Gold Member)
My Algebra Questions (continuously updated)

Note: Continuously updated because I will post new questions in the thread. I will bold them so you spot them.

Ok, this isn't really a problem solving question. It's basically me asking "what is this?"

The paragraph starts as follows...

The first mathematicians who studied group-theoretic problems, e.g., Lagrange, were concerned with the question: What happens to the polynomial g(x_1,...,x_n) if one permutes the variables?

My question is...

What is g(x_1,...,x_n)?

How do I write 6x^2 + 4x + 1 as g(x_1,...,x_n)?

Permuting what variables?
 
g(x_1, ..., x_n) is a polynomial in x_1, ..., x_n; it's an element of R[x_1, ..., x_n], where R is some ring, maybe a field, etc. So it's a polynomial in n indeterminates (i.e., "variables") x_1, x_2, ..., x_n with coefficients in some ring.

To say we permute the variables means just that: we interchange them. Sometimes polynomials don't change when we do this; for example, g(x_1, x_2) = x_1 + x_2 is the same as g(x_2, x_1) = x_2 + x_1. These are called symmetric polynomials. Of course this doesn't always happen: if g(x_1, x_2) = x_1 - x_2, then g(x_2, x_1) = x_2 - x_1 != g(x_1, x_2). In your example you only have one variable, one indeterminate, and it's x. These things are studied in Galois theory; I'll let others say more on this since I don't know enough about the topic.
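If it helps to see this concretely, here is a minimal Python (sympy) sketch; the helper name is_symmetric is just something made up for illustration. It checks whether a two-variable polynomial is unchanged when its variables are swapped:

```python
from sympy import symbols, simplify

x1, x2 = symbols('x1 x2')

def is_symmetric(g):
    """Check whether g(x1, x2) equals g(x2, x1) as polynomials."""
    return simplify(g(x1, x2) - g(x2, x1)) == 0

print(is_symmetric(lambda a, b: a + b))        # True:  x1 + x2 is symmetric
print(is_symmetric(lambda a, b: a - b))        # False: x1 - x2 is not
print(is_symmetric(lambda a, b: a**2 + b**2))  # True
```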
 
So, g(x_1,x_2) can be 6x_1^2 + 4x_2 + 1?
 
Nope. Think of x_1 = x, x_2 = y, x_3 = z, etcetera.

So yours is just g(x). But a polynomial g(x_1, x_2) = g(x, y) could be something like x^2 - 3y - 14x^3 + 37y^99. Then you switch x and y to get g(y, x) = y^2 - 3x - 14y^3 + 37x^99, which is different from g(x, y), so it isn't symmetric.
 
I don't get perthvan's objection. Jason didn't ask about symmetric polys.
 
OH! I get it now.
 
The next part is...

If a polynomial f(x) = \sum_{i=0}^n a_i x^i has roots r_1, ..., r_n, then each of the coefficients a_i of f(x) = a_n \prod_{i=1}^n (x - r_i) is a symmetric function of r_1, ..., r_n.

Does this mean whichever order we choose to put r_1,...,r_n, we still get f(x) = g(x) (if g(x) is where we interchange say r_1 with r_2)?

Or is it, no matter how we interchange the coefficients, the roots don't change?
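A quick, non-authoritative way to see what the book is claiming: expand a_n (x - r_1)(x - r_2)(x - r_3) symbolically and note that permuting the roots never changes the result, so each coefficient has to be a symmetric function of the roots. A small sympy sketch (the variable names are my own):

```python
from itertools import permutations
from sympy import symbols, expand, Poly

x, a = symbols('x a')
r1, r2, r3 = symbols('r1 r2 r3')

# f(x) = a*(x - r1)(x - r2)(x - r3), expanded so the coefficients are visible
f = expand(a * (x - r1) * (x - r2) * (x - r3))
print(Poly(f, x).all_coeffs())   # leading coefficient a, then (up to sign) a times
                                 # the elementary symmetric functions of r1, r2, r3

# Permuting the roots just reorders the factors, so the polynomial is unchanged,
# hence every coefficient is symmetric in r1, r2, r3.
for p in permutations((r1, r2, r3)):
    assert expand(a * (x - p[0]) * (x - p[1]) * (x - p[2])) == f
```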
 
Now, here is a new question...

Prove that A_4 (the alternating group) is centerless.

Note: Centerless is when the center of the group only contains the identity element.

I have shown that for S_n, n>=3, it is centerless.

I can't think of a direct way to show that A_4 is centerless without using brute force. Going by brute force wouldn't take that long, but I'm curious to know if there is another way.

I'm pretty sure that A_n for n>=5 is centerless. (A_n, n>=5 is simple.)
 
Suppose the center of A_4 is not trivial, and set H = Z(A_4). Then by Lagrange the possible orders of H are 1, 2, 3, 4, 6, 12. If the order is 1, then H is the trivial group, a contradiction. If the order is 12, then H = A_4, so A_4 is abelian, which is ridiculous.

So the possible orders of H are 2, 3, 4, and 6.

Note H is normal in A_4, so we can play with the quotient group.

Now suppose |H| = 6 (the same argument I'm about to do works if |H| = 3).

Consider A_4/H; this has order 2. Now let f be a 3-cycle in A_4 (A_4 has eight 3-cycles). Then (Hf)^3 = Hf^3 = H, so o(Hf) divides 3, but by Lagrange the order of Hf also divides 2, so we must have o(Hf) = 1. Then Hf = H, so f is in H, so H contains all eight 3-cycles plus the identity, i.e., at least 9 elements, a contradiction.

You'll have to do the cases |H| = 2 and |H| = 4 differently; it shouldn't be too bad to do them directly. I have an idea, but I will let you do it.
Note that if |H| = 4, then H = {1, (12)(34), (13)(24), (14)(23)} =~ Klein 4 =~ C_2 x C_2. Also note the reason I knew how to do the case |H| = 6: A_4 is the smallest group for which the converse of Lagrange's theorem fails, i.e., what I did above is one way to show A_4 has no subgroup of order 6, for if it did, call it H, then [A_4 : H] = 2, so H is normal, and then repeat the above.
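For what it's worth, the brute-force check really is tiny if you let a computer do it. A rough Python sketch (permutations written as tuples of images of 0, 1, 2, 3; the helper names are mine):

```python
from itertools import permutations

def compose(p, q):
    """(p*q)(i) = p(q(i)); a permutation is a tuple of images of 0, 1, 2, 3."""
    return tuple(p[q[i]] for i in range(4))

def sign(p):
    """Parity via the number of inversions."""
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return (-1) ** inversions

A4 = [p for p in permutations(range(4)) if sign(p) == 1]   # the 12 even permutations

center = [z for z in A4 if all(compose(z, g) == compose(g, z) for g in A4)]
print(center)   # [(0, 1, 2, 3)] -- only the identity, so A_4 is centerless
```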
 
  • #10
Actually for any non-abelian group G, the factor group G/Z(G) is non-cyclic, i.e. it definitely cannot have prime order. So the given |H| cannot be either 6 or 4, just by that statement.
 
  • #11
I got a faster proof, but it relies on a theorem we used later.

It's as follows...

Theorem - If H is normal in A_n, and H contains a 3-cycle, then H = A_n.

Therefore, Z(A_4) cannot contain a 3-cycle, because then we would have H = A_4.

Therefore, |H| can only be 2 or 4.

Then by what teleport said, 4 is ruled out as well, so we are left with only 2. The elements of A_4 that are not 3-cycles, together with the identity, form the 4-V group, and one can easily check that none of its non-identity elements commutes with all of A_4.

Ok, but the problem with the above is that the proof and teleport's statement come after that question. The next question is to prove teleport's statement, which I know I can do. (I think I already did. I did so many I forgot.)

Note: the 4-V group is the Klein four-group. I didn't know that.

Note: That was a nice note at the bottom, ircdan.

Thanks for the help guys!
 
  • #12
Of course I can still use what ircdan said, but then it still comes down to being almost equivalent to just doing it by brute force.
 
  • #13
For the cases |H| = 2 and |H| = 4, all the elements of H except e are products of two 2-cycles. Take one and denote it by (ab)(cd). Then take a 3-cycle from A_4 - H, say (abc). Now check that the two do not commute.
 
  • #14
Actually it is much simpler. Because no product of two 2-cycles commutes with all the other elements of A_4 (in particular, see my previous post), no product of two 2-cycles can be in H. The same can be said of the 3-cycles using the same sample calculation. That's it.
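Concretely, one such non-commuting pair can be checked in a couple of lines (same tuple-of-images convention as in the earlier sketch; the names are mine):

```python
def compose(p, q):
    """(p*q)(i) = p(q(i)); permutations as tuples of images of 0, 1, 2, 3."""
    return tuple(p[q[i]] for i in range(4))

double = (1, 0, 3, 2)   # (12)(34): 0<->1, 2<->3
cycle  = (1, 2, 0, 3)   # (123):    0->1->2->0

print(compose(double, cycle))   # (0, 3, 1, 2)
print(compose(cycle, double))   # (2, 1, 3, 0) -- the two do not commute
```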
 
  • #15
That's what I was thinking about, but it still seems equivalent to doing it by brute force because it simply comes down to computing 6-7 products.

JasonRox said:
The next part is...

If a polynomial f(x) = \sum_{i=0}^n a_i x^i has roots r_1, ..., r_n, then each of the coefficients a_i of f(x) = a_n \prod_{i=1}^n (x - r_i) is a symmetric function of r_1, ..., r_n.

Does this mean whichever order we choose to put r_1,...,r_n, we still get f(x) = g(x) (if g(x) is where we interchange say r_1 with r_2)?

Or is it, no matter how we interchange the coefficients, the roots don't change?

Any help on this part though?
 
  • #16
"That's what I was thinking about, but it still seems equivalent to doing it by brute force because it simply comes down to solve like 6-7 products."

No. Like I said, take any product of two 2-cycles, (ab)(cd). In A_4, (abc) exists. Then by showing that these two (just these two) do not commute, you have shown no product of two 2-cycles is in H. Then you do a similar thing with 3-cycles: take any (abc); in A_4, (ab)(cd) exists...
 
  • #17
For the other question I have no clue. We still haven't covered rings, etc. in my class yet. Sorry :)
 
  • #18
It's in my Group Theory textbook!

I want to skip it but I like understanding everything.
 
  • #19
You're an awesome help, by the way.
 
  • #20
You can show that A_4 has trivial center by looking at its conjugacy classes. Two permutations are conjugate in S_n if and only if they have the same cycle type, i.e., the same number of disjoint cycles of each length. Now we need to look at the even permutations.
 
  • #21
Here is a nice group theory problem to practice with. Let G be a finite group whose order is not divisible by three and such that (ab)^3 = a^3 b^3 for all a, b in G. Prove that G is abelian.
 
  • #22
Kummer said:
You can show that A_4 has trivial center by looking at its conjugacy classes. Two permutations are conjugate in S_n if and only if they have the same cycle type, i.e., the same number of disjoint cycles of each length. Now we need to look at the even permutations.

You'll find that the non-identity elements of A_4 split into the eight 3-cycles and the three products of two 2-cycles (within A_4 itself the 3-cycles actually split into two conjugacy classes of four, but that doesn't matter here).

Since no conjugacy class other than the identity's has cardinality 1, we conclude that no non-identity element commutes with all the other elements of A_4.

Now that is an answer!

Thanks for pointing that out.
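If anyone wants to double-check the class sizes, here is a quick Python sketch (computing conjugacy within A_4 itself, so the eight 3-cycles split into two classes of four; the helper names are mine):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def sign(p):
    return (-1) ** sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])

A4 = [p for p in permutations(range(4)) if sign(p) == 1]

classes, seen = [], set()
for x in A4:
    if x in seen:
        continue
    cls = {compose(compose(g, x), inverse(g)) for g in A4}   # the A_4-conjugacy class of x
    classes.append(cls)
    seen |= cls

print(sorted(len(c) for c in classes))   # [1, 3, 4, 4] -- no non-identity class of size 1
```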
 
  • #23
Solving all these problems takes a long freaking time!
 
  • #24
JasonRox said:
The next part is...

Note: Summation and Product is from i=0 to n, where S is the summation and P is the product (the capital Pi looking thing).

If a polynomial f(x) = S a_ix^i has roots r_1,...,r_n, then each of the coeficients a_i of f(x) = a_n P (x - r_i) is a symmetric function of r_1,...,r_n.

Does this mean whichever order we choose to put r_1,...,r_n, we still get f(x) = g(x) (if g(x) is where we interchange say r_1 with r_2)?

Or is it, no matter how we interchange the coefficients, the roots don't change?
Define the polynomial-valued expression of n + 1 variables
F(a_n, r_1, \cdots, r_n)(x) := a_n \prod_{i=1}^n (x - r_i)

The "k-th coordinate of F" function can easily be seen to be a polynomial in a_n, r_1, \cdots, r_n.

F is symmetric in its last n arguments. e.g. if n = 2, then F(a, r, s) = F(a, s, r).


Things become interesting when you use a_n, r_1, \cdots, r_n as indeterminates! If my ring of coefficients is R, then I might want to work in the polynomial ring

R[a_n, r_1, \cdots, r_n]
 
  • #25
Ok, that definitely helps!

Now, after reading the section, the study of polynomials sounds more interesting! I hope it stays that way.

I hated polynomials last year during some Ring Theory. I didn't learn a thing. I remember extension fields and all that jazz, but I just knew enough to get by. Hopefully that can change.
 
  • #26
Here's a cute little application of how some of these things fit together.

Suppose you want to factor the polynomial
f(x) = 2x^4 + 7x^3 + 7x^2 + 7x + 2.

The rational root theorem fails to find any rational solutions, but what about two quadratics?

Notice that f has an obvious symmetry: it's its own reverse, and so
x^4 f(1/x) = f(x).
In particular, this means that inversion is a permutation of its roots.


So, we proceed as follows.

Let E be the splitting field of f -- it's the field generated by Q and the four roots of f.

Let S be the symmetry of E given by inverting the roots of f, i.e.
S(a_1 r_1 + a_2 r_2 + a_3 r_3 + a_4 r_4) = \frac{a_1}{r_1} + \frac{a_2}{r_2} + \frac{a_3}{r_3} + \frac{a_4}{r_4}

The subfield of E fixed by S should be interesting. Clearly, r_1 is not an invariant of S, but f(r_1) is. Can you think of any other expressions involving r_1 that are invariant under S?


(pausing while you think)


Let s = r_1 + 1 / r_1. This is an invariant under S. This suggests that we should make the substitution y = x + 1/x. With a little bit of work, you can compute
x^{-2} f(x) = 2y^2 + 7 y + 3
which factors, as expected.

\begin{equation*}\begin{split}
2x^4 + 7x^3 + 7x^2 + 7x + 2 &= x^2(2y^2 + 7y + 3) = x^2(2y + 1)(y + 3) \\
&= (2x^2 + x + 2)(x^2 + 3x + 1)
\end{split}\end{equation*}
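As a sanity check (not part of the post above, just a quick sympy verification with names of my own choosing):

```python
from sympy import symbols, expand, factor

x, y = symbols('x y')
f = 2*x**4 + 7*x**3 + 7*x**2 + 7*x + 2

# The substitution y = x + 1/x: x^2 * (2y^2 + 7y + 3) recovers f
print(expand(x**2 * (2*y**2 + 7*y + 3).subs(y, x + 1/x)) == f)   # True

# And the claimed factorization multiplies back out to f
print(expand((2*x**2 + x + 2) * (x**2 + 3*x + 1)) == f)          # True
print(factor(f))   # sympy finds the same two quadratic factors
```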
 
  • #27
Jason, sometimes it really helps to do some examples. But you know this. So when you ask 'is it that the roots don't change if I swap coefficients', you should think 'do I really mean to ask whether x+2 and 2x+1 have the same roots?'

Obviously swapping the roots of a polynomial doesn't alter the polynomial - you're multiplying the same factors together just in a different order.

I only say this because it seems like there's too much maths in this thread rather than just examining what's going on and thinking about it.
 
  • #28
matt grime said:
Obviously swapping the roots of a polynomial doesn't alter the polynomial - you're multiplying the same factors together just in a different order.

Yes, I thought about that of course.
 
  • #29
JasonRox said:
You're an awesome help, by the way.

Probably untrue, but awesome feedback. Thanks.:biggrin:

PF gives me a lot. Whenever I can, I'll give back.
 
  • #30
I have a new question now. Feel free to discuss previous problems if you like, or add extra notes regarding anything. I read all the posts and think about each one.

The question is...

Let X be a G-set, let x, y e X, and let y = g*x (the group action) for some g e G. Prove that G_y = g G_x g^-1 (the stabilizer); conclude that |G_y| = |G_x|.

I proved the second part without even using the first part. It's just a matter of using...

|O(x)| = [G:G_x] , where O(x) is the orbit of x, and showing O(x) = O(y).

Any help on starting the first part?
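Here is a quick numerical illustration of what the statement says, taking G = S_4 acting on X = {0, 1, 2, 3} in the usual way (a sketch, with helper names of my own):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

G = list(permutations(range(4)))                 # S_4 acting on X = {0, 1, 2, 3}

def stabilizer(point):
    return {p for p in G if p[point] == point}

x = 0
g = (1, 2, 0, 3)                                 # some fixed g in G
y = g[x]                                         # y = g*x

conjugated = {compose(compose(g, h), inverse(g)) for h in stabilizer(x)}   # g G_x g^-1
print(conjugated == stabilizer(y))               # True: G_y = g G_x g^-1
print(len(stabilizer(x)), len(stabilizer(y)))    # 6 6, so |G_y| = |G_x|
```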
 
  • #31
Start with an element h in G_y, and try to show that h is in g G_x g^-1. We have two pieces of information: hy=y and y=gx. Use them. Now try to do the reverse inclusion -- it's just as easy!

I'm curious though. How did you show that O(x)=O(y)?
 
  • #32
morphism said:
Start with an element h in G_y, and try to show that h is in g G_x g^-1. We have two pieces of information: hy=y and y=gx. Use them. Now try to do the reverse inclusion -- it's just as easy!

I'm curious though. How did you show that O(x)=O(y)?

I got the question now. Easier than I thought.

Um... the second part is so because the orbits are equivalence classes.
 
  • #33
But if you didn't use the first part of the question, then you had no way of showing that the orbits of x and y were the same - they could easily be different. In fact, the notion that any two elements lie in the same orbit is so fundamentally different from the generic case that it has a name - the action is transitive if there is only one orbit.
 
  • #34
matt grime said:
But if you didn't use the first part of the question, then you had no way of showing that the orbits of x and y were the same - they could easily be different. In fact, the notion that any two elements lie in the same orbit is so fundamentally different from the generic case that it has a name - the action is transitive if there is only one orbit.

Yeah, I was thinking about that. I assumed the orbits were the same, so then the second can be shown.

I must only show that the orbits are the same now.
 
  • #35
That's not going to work out very well. Instead, remember (or prove) that a subgroup has the same cardinality as any of its conjugates (hint: write down the obvious bijection).
 
  • #36
OMG! Oh boy, did I ever overcomplicate a simple problem.

I'll post my solution in a few minutes. I had another student, plus another who hasn't touched algebra in a long time, trying to get it today. We must have all overcomplicated it. Dang, I'm happy I got it now.

Thanks for pointing that out too, morphism.
 
  • #37
Ok, here it is...

We must show that G_y = g G_x g^-1 when y = g(x). (I use brackets to denote that it is an action and not an operation.)

Let g_1 e G_y. We show that g^{-1} g_1 g e G_x. If this is so, then g_1 e g G_x g^{-1}, since g (g^{-1} g_1 g) g^{-1} = g_1.

g^{-1} g_1 g (x) = g^{-1} g_1 (y) = g^{-1} (y) = x

...which means g^{-1} g_1 g e G_x and so g_1 e g G_x g^{-1}.

The reverse inclusion is done in a similar fashion.

Therefore, G_y = g G_x g^-1.

Now, let's show the second part.

Let's create a bijection from G_y to G_x. Let the bijection be as follows:

f(g_y) = g^{-1} g_y g

It is clear that f is well-defined. Now, let's show it is one-to-one.

f(g_1) = f(g_2)

g^{-1} g_1 g = g^{-1} g_2 g

g_1 = g_2

We have just shown that it is one-to-one. Now we are left to show it's onto. That is...

If g_3 e G_x, then...

f(g g_3 g^{-1}) = g^{-1} (g g_3 g^{-1}) g = g_3

Now, all we need to show is that g g_3 g^{-1} is in fact in G_y.

Well, g g_3 g^{-1} (y) = g g_3 (x) = g (x) = y, so it is in G_y.

Therefore, there is a bijection from G_y to G_x, and hence |G_y| = |G_x|.
 
  • #38
Although I made the dumbest errors, I learned a decent amount.

Therefore, I like that problem.
 
  • #39
JasonRox said:
I have a new question now. Feel free to discuss previous problems if you like, or add extra notes regarding anything. I read all the posts and think about each one.

The question is...

Let X be a G-set, let x, y e X, and let y = g*x (the group action) for some g e G. Prove that G_y = g G_x g^-1 (the stabilizer); conclude that |G_y| = |G_x|.

I proved the second part without even using the first part. It's just a matter of using...

|O(x)| = [G:G_x] , where O(x) is the orbit of x, and showing O(x) = O(y).

Any help on starting the first part?
Here is how I would do it.

If we want to show G_y = g_0G_xg_0^{-1} where y = g_0x for some g_0\in G, we do it by showing each is a subset of the other. I'll do one direction to show you the idea. Let a\in G_y; we want to show a\in g_0 G_x g_0^{-1}. By definition ay = y, so ag_0x = g_0x, which means g_0^{-1}ag_0 x = x, so g_0^{-1}ag_0 \in G_x, i.e., g_0^{-1}ag_0 = b for some b\in G_x. Thus a = g_0bg_0^{-1} \in g_0G_xg_0^{-1}.

Now we can show that |G_y| = |G_x|: we have |G_x| = |g_0G_xg_0^{-1}| because you can define a bijection \phi : G_x \to g_0G_xg_0^{-1} by \phi(c) = g_0cg_0^{-1}. Thus |G_x| = |G_y|.
 
  • #40
Kummer said:
Here is how I would do it.

If we want to show G_y = g_0G_xg_0^{-1} where y = g_0x for some g_0\in G, we do it by showing each is a subset of the other. I'll do one direction to show you the idea. Let a\in G_y; we want to show a\in g_0 G_x g_0^{-1}. By definition ay = y, so ag_0x = g_0x, which means g_0^{-1}ag_0 x = x, so g_0^{-1}ag_0 \in G_x, i.e., g_0^{-1}ag_0 = b for some b\in G_x. Thus a = g_0bg_0^{-1} \in g_0G_xg_0^{-1}.

Now we can show that |G_y| = |G_x|: we have |G_x| = |g_0G_xg_0^{-1}| because you can define a bijection \phi : G_x \to g_0G_xg_0^{-1} by \phi(c) = g_0cg_0^{-1}. Thus |G_x| = |G_y|.
How is that any different from what Jason had done?
 
  • #41
morphism said:
How is that any different from what Jason had done?
I did not see what he posted. Anyway, mine is nicer looking; maybe he will like it more.
 
  • #42
Kummer said:
I did not see what he posted. Anyway, mine is nicer looking; maybe he will like it more.

Maybe. :P

Try using the [itex ] command instead of [tex ]. It fits in with the surrounding text a lot more nicely.

For example: We have the polynomial ax^3 + \frac{1}{2}x^2 so find the derivative.

As opposed to...

We have the polynomial ax^3 + \frac{1}{2}x^2 so find the derivative.

I'll be posting more questions soon of course.

I have a whole pile of solutions to problems so I plan on making a website with all the solutions and hopefully people look over them and find mistakes and such.
 
  • #43
Here is my next question...

It involves the proof of a theorem. There is a different proof of this, but I sure would like to understand this one too. There is a step I don't quite understand. Here it is...

Theorem 4.6 - Let G be a finite p-group. If H is a proper subgroup of G, then H < N_{G}(H).

Proof: If H is normal in G, then N_{G}(H) = G and the theorem is true. If X is the set of all the conjugates of H, then we may assume |X| = [G : N_{G}(H)] is not equal to 1. Now H acts on X by conjugation and, since H is a p-group, every orbit of X has size a power of p. As {H} is an orbit of size 1, there must be at least p - 1 other orbits of size 1. And the proof continues...

My question is about that last sentence. How does {H} give an orbit of size 1?

Oh crap, I think I got it now. Is it because the orbit of {H} has size |O(H)| = [H : H_{H}], and that is precisely 1?

Now since |X| is a power of p (bigger than 1) and every orbit has p-power size, the number of orbits of size 1 must be divisible by p, so besides {H} there are at least p - 1 other orbits of size 1.

So the proof continues as follows...

Thus there is at least one conjugate gHg^(-1) =/= H with {gHg^(-1)} also an orbit of size 1. Now a g H g^{-1} a^{-1} = g H g^{-1} for all a e H, and so g^{-1}ag is an element of N_{G}(H) for all a e H. But gHg^(-1) =/= H gives at least one a e H with g^{-1}ag not in H, and so H < N_G(H) (the normalizer) strictly.

I think I got it now. Anyways, let me know what you think.

The prof did the proof, but it's barely explained. He just copies out of the textbook. Whenever I ask about details, he tells me not to worry about it because I'm just an undergrad. He doesn't seem to mind anymore since I go to his office and ask questions regardless.
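If it helps, here is a rough computational check of the theorem on a small example (a sketch with conventions of my own choosing): take G to be a dihedral group of order 8 sitting inside S_4, so G is a 2-group, take H to be a subgroup of order 2, and verify that H is strictly smaller than its normalizer.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def closure(gens):
    """Subgroup of S_4 generated by gens (finite, so closing under products suffices)."""
    group = set(gens) | {(0, 1, 2, 3)}
    while True:
        new = {compose(a, b) for a in group for b in group} - group
        if not new:
            return group
        group |= new

r = (1, 2, 3, 0)              # the 4-cycle (0 1 2 3)
s = (1, 0, 3, 2)              # the double transposition (0 1)(2 3)
G = closure({r, s})           # a dihedral group of order 8, hence a 2-group
H = closure({s})              # a subgroup of order 2

N = {g for g in G if {compose(compose(g, h), inverse(g)) for h in H} == H}
print(len(G), len(H), len(N))   # 8 2 4
print(H < N)                    # True: H is a *proper* subgroup of its normalizer
```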
 
  • #44
Dang, I spent like an hour trying to fill in all the details.
 
  • #45
H acts on the set of G-conjugates of itself. Well, H is conjugate in G to itself, and H conjugates H into itself, hence the action of H by conjugation on the set of G-conjugates has at least one orbit of size 1, and hence at least p-1 others.

I think I just wrote out what was already there, but then again, it was self explanatory.
 
  • #46
matt grime said:
H acts on the set of G-conjugates of itself. Well, H is conjugate in G to itself, and H conjugates H into itself, hence the action of H by conjugation on the set of G-conjugates has at least one orbit of size 1, and hence at least p-1 others.

I think I just wrote out what was already there, but then again, it was self explanatory.

The textbook originally said G acts on X by conjugation. That was a typo for sure.

I don't really find all the details self-explanatory.
 
  • #47
Ok, I need some help understanding this passage, or comment, made in the textbook (the comments they make between the theorems).

Ok, here it goes.

Each term in the class equation of a finite group G is a divisor of |G|, so that multiplying by |G|^(-1) gives an equation of the form 1 = \sum_j \frac{1}{i_j} with each i_j a positive integer; moreover, |G| is the largest i_j occurring in this expression.

I understand that |G| is the largest possible i_j, but is there always such an i_j?

The only case in which I can see that happening is when G is non-abelian and the center of G is trivial.

Am I missing something here?
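For a concrete instance, reading the class equation as |G| = the sum of all the conjugacy class sizes, here is a quick check with G = A_4 (which is centerless); the helper names are mine:

```python
from fractions import Fraction
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def sign(p):
    return (-1) ** sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])

G = [p for p in permutations(range(4)) if sign(p) == 1]   # A_4, |G| = 12

sizes, seen = [], set()
for x in G:
    if x in seen:
        continue
    cls = {compose(compose(g, x), inverse(g)) for g in G}
    sizes.append(len(cls))
    seen |= cls

i_js = [len(G) // s for s in sizes]                       # i_j = |G| / (class size)
print(sorted(sizes))                      # [1, 3, 4, 4]
print(sorted(i_js))                       # [3, 3, 4, 12]
print(sum(Fraction(1, i) for i in i_js))  # 1, as the passage says
```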
 
  • #48
What you wrote down doesn't make sense. Did you forget a summation symbol somewhere? What are i and j, and what's S_j?

I'm going to assume you meant to write
1 = \sum_j S_j \frac{1}{i_j}

In which case, what do you mean by "is there always such an i_j"? Why wouldn't there be one? If I understood all this correctly, then when |Z(G)| = 1, we'll have that S_j = 1 and i_j = |G|.
 
  • #49
morphism said:
What you wrote down doesn't make sense. Did you forget a summation symbol somewhere? What are i and j, and what's S_j?

I'm going to assume you meant to write
1 = \sum_j S_j \frac{1}{i_j}

In which case, what do you mean by "is there always such an i_j"? Why wouldn't there be one? If I understood all this correctly, then when |Z(G)| = 1, we'll have that S_j = 1 and i_j = |G|.

Ok, you came close! I totally forgot to write what S_j was. I didn't know the LaTeX for it.

Here is the equation (which I'll edit in my last post):

1 = \sum_j \frac{1}{i_j}

So, my question is... will there always be an i_j = |G|? Or does the paragraph only mean that the largest i_j can be is |G| (which makes sense)?

I know |G| is the largest it can ever be. That's just stating the obvious. But I don't see how there will always be an i_j = |G|. In my opinion, if there is an i_j = |G|, then G is centerless.
 
  • #50
JasonRox said:
Ok, you came close! I totally forgot to write what S_j. I didn't know the LaTeX for it.

Here is the equation (which I'll edit in my last post):

1 = \sum_j \frac{1}{i_j}
Ah, that makes more sense! I should have figured that S_j was meant to be summation.

I know |G| is the largest it can ever be. That's just stating the obvious. But I don't see how there will always be an i_j = |G|. In my opinion, if there is an i_j = |G|, then G is centerless.
That's absolutely right.
 