Should I Become a Mathematician?

  • Thread starter: mathwonk
  • Tags: Mathematician

Summary
Becoming a mathematician requires a deep passion for the subject and a commitment to problem-solving. Key areas of focus include algebra, topology, analysis, and geometry, with recommended readings from notable mathematicians to enhance understanding. Engaging with challenging problems and understanding proofs are essential for developing mathematical skills. A degree in pure mathematics is advised over a math/economics major for those pursuing applied mathematics, as the rigor of pure math prepares one for real-world applications. The journey involves continuous learning and adapting, with an emphasis on practical problem-solving skills.
  • #121
Just a thought... I think this suggestion is bloody brilliant, and I was just wondering how much interest there would be in a section where people can post the more interesting problems they have solved, with working included, by category, so that others can discuss them, find and comment upon other ways of reaching the solutions, or be inspired by them... like I said, it was just a thought :smile:
 
Last edited:
  • #122
ircdan, my profile has my webpage address at school (something like roy at math dept UGA), where there is an entire algebra book, in pdf, downloadable. also a 14 page linear algebra primer. how's that?
 
  • #123
mathwonk said:
ircdan, my profile has my webpage address at school (something like roy at math dept UGA), where there is an entire algebra book, in pdf, downloadable. also a 14 page linear algebra primer. how's that?
Excellent thank you.:smile:
 
  • #124
it is a privilege to be of assistance.

I hope someone will solve, or has solved, my second ode exercise as well. i realize i did not give enough help. there is an idea there, the idea of linearity.

i.e. the first step is to prove that (D-a)y = 0 iff y = ce^(at).

Then to solve (D-a)(D-b)y = 0, one needs a little preparation.

define the operator Ly = (D-a)(D-b)y. then show that L is linear i.e.
(i) L(cy) = cL(y) and
(ii) L(y+z) = L(y)+L(z).

and show also that L = (D-a)(D-b) = (D-b)(D-a).

then it follows that L(0) = 0. hence (D-a)y = 0 or (D-b)y=0 implies that also L(y) = 0.

when a and b are different this already gives as solutions at least all functions y = ce^(at) + de^(bt).

then we want to prove there are no others. try to get this far first.

notice we are introducing a concept, the concept of "linearity", into what was previously just a calculation. this distinction separates advanced math from elementary math.
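For anyone who wants to sanity-check the claimed solutions with a computer algebra system, here is a sketch in sympy (the symbol names are illustrative, not part of the exercise):

```python
from sympy import symbols, exp, diff, simplify

a, b, c, d, t = symbols('a b c d t')
y = c * exp(a * t) + d * exp(b * t)

# Apply (D - b) first, then (D - a), where D is d/dt.
Dby = diff(y, t) - b * y
result = diff(Dby, t) - a * Dby

assert simplify(result) == 0  # (D - a)(D - b)y = 0
```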
 
  • #125
oh by the way, my website actually has three algebra books: one elementary algebra book that i teach from to our juniors, one i teach from to our grad students, and a linear algebra book i have never had the nerve to teach anyone from yet, since it covers a whole semester-or-more course in 14 pages.
[edit: (many years later) that 14 page book has been greatly expanded now into a longer version also on that webpage.]
 
  • #126
my preference is actually topology, differential topology, and complex analysis, or all of them combined in complex algebraic geometry. but because even the mathematical layperson thinks that anyone who calls himself an algebraic geometer must know some algebra, i have been called upon more often to teach algebra than complex analysis or topology. hence my books, which are really course notes, are almost all about algebra. it was good for me to have to learn the subject, but i hope someday they trust me to teach topology or complex analysis again, or even real analysis, so i can learn that too.

i did write some notes ages ago on sheaf theory, and serre's duality theorem proved by distribution theory (real and functional analysis) and complex riemann surfaces, but it was before the era of computers so i have no magnetic versions of those notes.
 
  • #127
mathwonk said:
it is a privilege to be of assistance.

I hope someone will solve, or has solved, my second ode exercise as well. i realize i did not give enough help. there is an idea there, the idea of linearity.

i.e. the first step is to prove that (D-a)y = 0 iff y = ce^(at).

Then to solve (D-a)(D-b)y = 0, one needs a little preparation.

define the operator Ly = (D-a)(D-b)y. then show that L is linear i.e.
(i) L(cy) = cL(y) and
(ii) L(y+z) = L(y)+L(z).

and show also that L = (D-a)(D-b) = (D-b)(D-a).

then it follows that L(0) = 0. hence (D-a)y = 0 or (D-b)y=0 implies that also L(y) = 0.

when a and b are different this already gives as solutions at least all functions y = ce^(at) + de^(bt).

then we want to prove there are no others. try to get this far first.

notice we are introducing a concept, the concept of "linearity", into what was previously just a calculation. this distinction separates advanced math from elementary math.

I already showed the first part in an earlier post I think. Well I showed that if (D - a)y = 0, then all solutions are of the form y = ce^(at). The other direction is just a calculation I assume. If y = ce^(at), then
(D - a)(ce^(at)) = D(ce^(at)) - ace^(at) = ace^(at) - ace^(at) = 0.

For the second part you just hinted at, I had been trying and couldn't get it, but I think I got it now (at least the direction you gave hints for). it just did not occur to me to define Ly = (D-a)(D-b)y and show it is linear, and then since L is linear, L(0) = 0. I think it's very nice to see that linearity can be used here. I studied linear algebra but never used it to solve differential equations. I think this works, but I'm not too sure it's correct.

First, to show L is linear. (I showed a lot of the steps, out of habit, but maybe not necessary.)

Define Ly = (D-a)(D-b)y.

L(cy) = (D - a)(D - b)(cy)
= [D^2 - (a + b)D + ab](cy)
= D^2(cy) - (a +b)D(cy) + ab(cy)
= cD^2(y) - c(a + b)D(y) + c(ab)y (by linearity of D)
= c(D^2(y) - (a + b)D(y) + aby)
= c[D^2 - (a + b)D + ab](y)
= c(D - a)(D - b)y
= cLy

L(z + y) = (D - a)(D - b)(z + y)
= [D^2 - (a + b)D + ab](z + y)
= D^2(z + y) -(a + b)D(z + y) + ab(z + y)
= D^2(z) + D^2(y) - (a + b)D(z) - (a + b)D(y) + abz + aby (by linearity of D)
= D^2(z) - (a + b)D(z) + abz + D^2(y) - (a + b)D(y) + aby
= [D^2 - (a + b)D + ab](z) + [D^2 - (a + b)D + ab](y)
= (D - a)(D - b)(z) + (D - a)(D - b)(y)
= Lz + Ly

Thus L is linear.

Also (D - a)(D - b) = [D^2 - (a + b)D + ab]
= [D^2 - (b + a)D + ba]
= (D - b)(D - a).

Hence L(0) = 0. (this also follows from the fact that L is linear, so the above is not really necessary, right?)

Hence (D - a)(y_1) = 0 and (D - b)(y_2) = 0 imply L(y_1 + y_2) = L(y_1) + L(y_2) = 0 + 0 = 0
so y = y_1 + y_2 = ce^(at) + de^(bt)? (is that right?)
Edit: For the second part, does this work? (this doesn't use linear algebra, and I guess it isn't a proof since I didn't prove the method being used)

Suppose w is another solution to (D - a)(D-b)y =0, then
(D-a)(D-b)w = 0,
w'' -(a +b)w' + abw = 0, which has characteristic equation,
r^2 - (a+b)r + ab = 0 => r = a or r = b, hence w = ce^(at) + de^(bt) = y.

I'm assuming there is a way it can be done with linear algebra, I'll try later, thanks for the tip.
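The expansion and the commutation step above can also be checked symbolically on a generic function; here is a sympy sketch (the names are illustrative):

```python
from sympy import symbols, Function, diff, simplify

a, b, t = symbols('a b t')
y = Function('y')(t)

def op(c, f):
    """Apply (D - c) to f, with D = d/dt."""
    return diff(f, t) - c * f

lhs = op(a, op(b, y))          # (D - a)(D - b)y
rhs = op(b, op(a, y))          # (D - b)(D - a)y
expanded = diff(y, t, 2) - (a + b) * diff(y, t) + a * b * y

assert simplify(lhs - rhs) == 0        # the operators commute
assert simplify(lhs - expanded) == 0   # L = D^2 - (a + b)D + ab
```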
 
  • #128
excellent. it all looks correct and exemplary. as to the final argument, you are again right, it is not a proof since the word "hence" in the next to last line is the uniqueness we are trying to prove.

the point is that in linear algebra if you can find all solutions to the equation fx = 0, and if you can find one solution to fx = b, then you can get all the other solutions to fx=b, by adding solutions of fx=0 to the one solution you have.

you also want to use the fact, true of all functions, linear or not, that if g(a) = b, and if f(c)=a, then g(f(c)) = b. i.e. to find solutions for a composite function (D-a)(D-b)y = 0, find z such that (D-a)z =0, then find y such that (D-b)y = z.

so use the step you already did to solve for z such that (D-a)z = 0, then use hook or crook (e.g. the characteristic equation) to find one solution of (D-b)y = z, and then finally use linearity to find ALL solutions of (D-b)y = z, hence also by linearity all solutions of (D-a)(D-b)y = 0.

this is a completely self contained proof of the uniqueness step for these ODEs that is often left out of books, by quoting the general existence and uniqueness theorem, which many do not prove. but this proof is much easier than the general theorem, and uses the theory of linearity one has already studied in linear algebra.

In fact it is not too far a guess to imagine that most of linear algebra, such as Jordan forms etc., was discovered by looking at differential equations, and was intended to be used in solving them. today's linear algebra classes that omit all mention of differential equations are hence absurd exercises in practicing the tedious and almost useless skill of multiplying and simplifying matrices. The idea of linearity, that L(f+g) = Lf + Lg, is never even mentioned in some courses on linear algebra, if you can believe it, and certainly not the fact that differentiation is a linear operator.
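To illustrate the point that differentiation is a linear operator, here is a small sketch of my own (the basis and matrix are illustrative): differentiation on polynomials of degree below 4, written as a matrix in the basis {1, x, x^2, x^3}.

```python
import numpy as np

# D(1)=0, D(x)=1, D(x^2)=2x, D(x^3)=3x^2, so in coefficient
# vectors (c0, c1, c2, c3) differentiation is this matrix:
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

p = np.array([5, 4, 3, 2])        # 5 + 4x + 3x^2 + 2x^3
dp = D @ p                        # derivative: 4 + 6x + 6x^2
assert list(dp) == [4, 6, 6, 0]

# D is nilpotent: differentiating four times kills every cubic.
assert not np.any(np.linalg.matrix_power(D, 4))
```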
 
  • #129
mathwonk said:
today's linear algebra classes that omit all mention of differential equations are hence absurd exercises in practicing the tedious and almost useless skill of multiplying and simplifying matrices. The idea of linearity, that L(f+g) = Lf + Lg, is never even mentioned in some courses on linear algebra, if you can believe it, and certainly not the fact that differentiation is a linear operator.

Yea, my first linear algebra class was very tedious! We mentioned linearity but I didn't really learn any of the nice properties of linear operators until my second course in linear algebra.

Anyways I think I got the second part thanks to your hints.

(D - a)(D - b)y = 0 implies (D - b)y = z and (D - a)z = 0 (this follows from the hint you gave about general functions).

Now (D - a)z = 0 iff z = ce^(at); take z = e^(at), so
(D - b)y = e^(at)
Let y_p = Ae^(at) for some A and note
(D - b)(Ae^(at)) = aAe^(at) - bAe^(at) = A(a - b)e^(at) = e^(at), hence A = 1/(a - b) so that y_p = e^(at)/(a - b) solves (D - b)y = e^(at).

Now suppose y_1 is any other solution to (D - b)y = e^(at).
Since (D - b) is linear,
(D - b)(y_1 - y_p) = (D - b)y_1 - (D - b)y_p = e^(at) - e^(at) = 0 and thus w = y_1 - y_p solves (D - b)y = 0, so w = de^(bt) for some d.

Again, since (D - b) is linear,
(D - b)(y_p + w) = (D - b)y_p + (D - b)w = e^(at) + 0 = e^(at), hence
y = y_p + w = e^(at)/(a - b) + de^(bt) solves (D - b)y = e^(at), and so y also solves (D - a)(D - b)y = 0, so all solutions have the form e^(at)/(a - b) + de^(bt) = ce^(at) + de^(bt) where c = 1/(a - b).

I notice this only works if a != b. If a = b, the y_p would be different, but it seems the same proof would work.
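The particular solution found above can be confirmed symbolically; a sympy sketch (illustrative only):

```python
from sympy import symbols, exp, diff, simplify

a, b, t = symbols('a b t')
y_p = exp(a * t) / (a - b)

# (D - b) y_p should equal e^(at) when a != b.
assert simplify(diff(y_p, t) - b * y_p - exp(a * t)) == 0
```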
 
  • #130
outstanding! and do you know the solution if a = b?
 
  • #131
mathwonk said:
outstanding! and do you know the solution if a = b?

Hey thanks for the help! Yea, I just tried it now, and it didn't take me as long as the first one because it's almost the same. There are two differences in the proof. Originally I thought there would be only one difference until I tried it, so I'm glad I did it; it's good practice too.

(D - a)(D - a)y = 0 implies (D - a)y = z and (D - a)z = 0.

Now (D - a)z = 0 iff z = ce^(at), so
(D - a)y = ce^(at)
Let y_p = Ate^(at) for some A and note
(D - a)(Ate^(at)) = D(Ate^(at)) - aAte^(at) = Ae^(at) + aAte^(at) - aAte^(at) = Ae^(at) = ce^(at), hence A = c so that y_p = cte^(at) solves (D - a)y = ce^(at).

Now suppose y_1 is any other solution to (D - a)y = ce^(at).
Since (D - a) is linear,
(D - a)(y_1 - y_p) = (D - a)y_1 - (D - a)y_p = ce^(at) - ce^(at) = 0 and thus w = y_1 - y_p solves (D - a)y = 0, so w = de^(at) for some d.

Again, since (D - a) is linear,
(D - a)(y_p + w) = (D - a)y_p + (D - a)w = ce^(at) + 0 = ce^(at), hence
y = y_p + w = cte^(at) + de^(at) solves (D - a)y = ce^(at) and so y also solves (D - a)(D - a)y = 0 so all solutions have the form cte^(at) + de^(at).
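The repeated-root computation above can likewise be checked symbolically; a sympy sketch (names illustrative):

```python
from sympy import symbols, exp, diff, simplify

a, c, d, t = symbols('a c d t')
y = (c * t + d) * exp(a * t)

Day = diff(y, t) - a * y                       # (D - a)y
assert simplify(Day - c * exp(a * t)) == 0     # equals c e^(at)
assert simplify(diff(Day, t) - a * Day) == 0   # (D - a)^2 y = 0
```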

I think that works. Thanks for all the tips!
 
  • #132
this method works, modified, on any problem which can be factored into first order operators, and where one can solve first order problems. another example is the so-called Euler equation, x^2y'' + (1-a-b)xy' + ab y = 0, with
indicial equation

(r-a)(r-b) = 0, just factor x^2y'' +(1-a-b)xy' + ab y = (xD-a)(xD-b)y = 0,

and solve (xD-a)z = 0, and then (xD-b)y = z.

As above, this proves existence and uniqueness simultaneously, and also
handles the equal roots cases at the same time, with no guessing.

Here you have to use, I guess, integrating factors to solve the first order cases, and be careful when "multiplying" the non-constant-coefficient operators (xD-a), since you must use the Leibniz rule.

these are usually done by power series methods, or by just stating that the indicial equation should be used, again without proving there are no other solutions. of course the interval of the solution must be specified, or else I believe the space of solutions is infinite dimensional.
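A sympy sketch confirming that x^a and x^b solve the Euler equation above (illustrative, assuming x > 0):

```python
from sympy import symbols, diff, simplify

a, b = symbols('a b')
x = symbols('x', positive=True)

# Check both claimed solutions of x^2 y'' + (1-a-b) x y' + ab y = 0.
for y in (x**a, x**b):
    expr = x**2 * diff(y, x, 2) + (1 - a - b) * x * diff(y, x) + a * b * y
    assert simplify(expr) == 0
```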
 
  • #133
O.K., great thread guys. I got one for you.

I want to study Functional Analysis (Operator theory, Measure theory - Probability) and its applications in Quantum Physics, Statistical Mechanics and any other interesting part of Physics. After asking about 10 Professors in my campus (half in Physics, half in Math), I got the feeling that a department of Mathematics would be the best choice for me (among other things, mathematical rigor is something that's important to me).

Any insights on that, and also recommendations on what schools I should apply to?
 
  • #134
mathwonk said:
this method works, modified, on any problem which can be factored into first order operators, and where one can solve first order problems.
Neat. Now that I think about it, after reading your post, I remembered that I had seen something similar for PDEs about a year ago, in particular, for the wave equation.

For notation, let u = u(x,t), and u_tt, u_xx denote partial derivatives. Then if we consider u_tt = c^2u_xx for -inf < x < inf,
u_tt - c^2u_xx = (d/dt - cd/dx)(d/dt + cd/dx)u = 0.
Let v = (d/dt + cd/dx)u, then we must have (d/dt - cd/dx)v = 0. Anyways these PDEs are easy to solve and you end up with u(x,t) = f(x + ct) + g(x - ct) using the same argument.

This result is stated in my book (without proof) and I had always wondered how they did it. I knew how to solve the two individual PDEs, but I never knew how to prove that all solutions of the wave equation for x in (-inf, inf) had the form f(x + ct) + g(x - ct), but now I know how, thanks to working out simpler ones like (D - a)(D - b)y = 0. Thanks a lot for the help.:smile:
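The d'Alembert form mentioned above can be checked symbolically with generic functions f and g; a sympy sketch:

```python
from sympy import symbols, Function, diff, simplify

x, t, c = symbols('x t c')
f, g = Function('f'), Function('g')
u = f(x + c * t) + g(x - c * t)

# u satisfies the wave equation u_tt = c^2 u_xx.
assert simplify(diff(u, t, 2) - c**2 * diff(u, x, 2)) == 0
```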
 
  • #135
Do most people major just in math? Or do they have a minor in something else? These days, it is hard to find a job if one just gets a PhD in pure math. What are some good combinations? I am considering majoring in math with a minor in economics, or majoring in math and minoring in biology. What are your thoughts and opinions about these options? Also, what is graduate school like? My father keeps telling me not to pursue a PhD right after undergraduate school. Would it be better to work for a few years, and then consider getting a PhD? That way, you would have experienced the real world. Could you guys please tell me your experiences of graduate school, and your opinions about the PhD degree?

Thanks a lot :smile:
 
  • #136
when i get time and inspiration, i mean to continue my thread of general advice by going further into the grad school experience, passing prelims, writing a thesis, passing orals, applying for jobs, and grants, and then teaching and maybe departmental politics, survival, and retirement planning. and then getting into math heaven, reincarnation as a fields medalist...
 
  • #137
Recently I encountered a book, "Mathematical problem solving methods," written by L.C. Larson. There are many problems from the Putnam competition in it.
My question is: how important is it for a physicist (or mathematician) to be able to solve this kind of problem?
 
  • #138
well it is not essential, but it can't hurt. I myself have never solved a Putnam problem, and did not participate in the contest in college, but really bright, quick people who do well on them may also be outstanding mathematicians.

My feeling from reading a few of them is they do not much resemble real research problems, since they can presumably be done in a few hours as opposed to a few months or years.

E.g. the famous fermat problem was solved in several stages. first people tried a lot of special cases, i.e. special values of the exponent. None of these methods ever yielded enough insight to even prove it in an infinite number of cases.

Then finally Gerhard Frey thought of linking the problem with elliptic curves, by asking what kind of elliptic curve would arise from the usual equation y^2 = (x-a)(x-b)(x-c) if a,b,c, were constructed in a simple way from three solutions to fermat's problem.

he conjectured that the elliptic curve could not be "modular". this was indeed proved by Ribet I believe, and then finally Andrew Wiles felt there was enough guidance and motivation there to be worth a long hard attempt on the problem via the question of modularity.

Then he succeeded finally, after a famous well publicized error, and some corrective help from a student, at solving the requisite modularity problem.

He had to invent and upgrade lots of new techniques for the task and it took him over 7 years.

I am guessing a Putnam problem is a complicated question that may through sufficient cleverness be solved by also linking it with some simpler insight, but seldom requires any huge amount of theory.

However any practice at all in thinking up ways to simplify problems, apply old ideas to new situations, etc, or just compute hard quantities, is useful. I would do a few and see if they become fun. If not I would not punish myself.
 
  • #139
you could start a putnam thread here perhaps if people want to talk about these problems and get some first hand knowledge.

but in research the smartest people, although they often do best on these tests, do not always do the deepest research. that requires something else, like taste, courage, persistence, luck and inspiration.

One of my better results coincided with the birth of one of my children. Hironaka (a famous fields medalist) once told me, somewhat tongue in cheek, that others had noticed a correlation between making discoveries and getting married, and "some of them do this more than once for that reason".

I have noticed that success in research is, in the long run, related to long hard consistent work. I.e. if you keep at it faithfully, doing what you have noticed works, you will have some success. Don't be afraid to make mistakes, or to make lengthy calculations that may not go anywhere.

And talk to people about it. This can be embarrassing, but after giving a talk on work that was still somewhat half baked, I have usually finished it off satisfactorily.

Here is an example that may be relevant: Marilyn vos Savant seems to be an intelligent person, who embarrassed many well educated mathematicians a few years back with a simple probability problem published in a magazine. But not only can she not do any research in the subject without further training, she does not even understand much of what she has read about mathematics. Still she has parlayed her fame into a newspaper column and some books.

The great Grothendieck, so deep a mathematician that his work discouraged fellow Fields medalist Rene Thom from even attempting to advance in algebraic geometry, once proposed 57 as an example of a prime number. This composite integer is now famous as "Grothendieck's prime".

But he made all of us begin to realize that to understand geometry, and also algebra, one must always study not just individual objects or spaces, but mappings between those objects. This is called category theory. Of course a study of Riemann's works reveals that he also focused as well on studying objects in families, i.e. mappings whose fibers are the objects of interest.

that is why the first few chapters of Hartshorne are about various types of maps, proper maps, finite maps, flat maps, etale maps, smooth maps, birational maps, generically finite maps, affine maps, etc...
 
  • #140
If someone wanted to get a Ph.D in mathematical physics, should you pursue an undergrad degree in math or physics? I would eventually like to do research in M theory, but as a mathematical physicist. Thanks in advance for your reply.
 
  • #141
courtrigrad said:
Do most people major just in math? Or do they have a minor in something else? ... What are some good combinations?

i didn't minor in anything else, but a subject where math is used heavily might not hurt. physics, economics or computer science combined with math are somewhat obvious choices. statistics and computer science would be a good combination if you're interested in raking in far more $$$ than any engineering, comp sci or business student. depending on your interests, statistics and biology (biostatistician=$$$), statistics and economics, or statistics and another social science (psych, soc, etc.) might be good combinations.
 
  • #142
It depends on where you go to college what minors and majors will be available to you. At the college I go to, as part of the applied mathematics curriculum, we're required to get at least a minor in some other field, and as it is a tech school, the options are limited to mostly engineering and science fields.
 
  • #143
we need input from some mathematical physicists here. my acquaintances who were mathematical physicists seem to have majored in physics and then learned as much math as possible. on the other hand some lecturers at math/physics meetings seem to be mathematicians, but i do not learn as much from them, since i want to understand the physicists' point of view and i already understand the math. i would major in physics if i wanted to be any kind of physicist, and learn as much math as possible to use it there.
 
  • #144
fournier17 said:
If someone wanted to get a Ph.D in mathematical physics, should you pursue an undergrad degree in math or physics? I would eventually like to do research in M theory, but as a mathematical physicist. Thanks in advance for your reply.
you could do an undergraduate degree in combined maths & physics, and afterwards you can pursue a phd in theoretical physics (synonymous with mathematical physics).
 
  • #145
from pmb_phy:

mathwonk said:
by the way pete, if you are a mathematical physicist, some posters in the thread "who wants to be a mathematician" under academic guidance have been asking whether they should major in math or physics to become one. what do you advise?
I had two majors in college, physics and math. Most of what I do when I'm working in physics is only mathematical, so in that sense I guess you could say that I'm a mathematical physicist.

I recommend to your friend that he double major in physics and math as I did. This way, if he wants to be a mathematician, he can utilize his physics when he's working on mathematical problems. E.g. it's nice to have solid examples of the math one is working with, especially in GR.

Pete
 
  • #146
Thanks for the replies guys, this forum is so helpful.:smile:
 
  • #147
loop quantum gravity said:
you could do an undergraduate degree in combined maths & physics, and afterwards you can pursue with a phd in theoretical physics (synonymous with mathematical physics).
Is theoretical physics the same as mathematical physics? If they are, then that's great; more potential graduate programs to which I can apply.:smile: However, I have heard that mathematical physics relies more on mathematics, and that theoretical physics is more physics than math. I have seen some graduate programs in mathematical physics that are in the math department of the university instead of the physics department.
 
  • #148
Like many things in mathematics itself, the terms mathematical physics and theoretical physics mean different things to different people.
 
  • #149
phd prelim preparation

I wrote the following letter to my graduate committee today, commenting on what seems to me wrong with our current prelims. these thoughts may help inform some students as to what to look for on prelims, and what they might prefer to find there.

In preparing to teach grad algebra in fall, one thing that jumps out at me is not
the correctness of the exams, but their diversity. One examiner will ask only
examples, another only creative problems, another mostly statements of theorems.
only a few examiners ask straightforward proofs of theorems.

Overall they look pretty fair, but I noticed after preparing my outline for the
8000 course that test preparation would be almost independent of the course i
will teach. I.e. to do most of the old tests, all they need is the statements
of the basic theorems and a few typical example problems. They do not need the
proofs I am striving to make clear, and often not the ideas behind them.
anybody who can calculate with sylow groups and compute small galois groups can
score well on some tests.

In my experience good research is not about applying big theorems directly, as
such applications are already obvious to all experts. It is more often applying
proof techniques to new but analogous situations after realizing those
techniques apply. So proof methods are crucial.
Also discovering what to prove involves seeing the general patterns and concepts
behind the theorems.

The balance of the exams is somewhat lopsided at times. some people insist on
asking two, three, or more questions out of 9 on finite group theory and
applications of sylow and counting principles, an elementary but tricky topic i
myself essentially never use in my research. this is probably the one
ubiquitous test topic, and the one i need least. I don't mind one such question,
but why more?

The percentage of the test covered by the questions on one topic should not
exceed that topic's share of the syllabus itself. if there are 6 clear topic
areas on the syllabus, no one of them should take 3/9 of the test.

also computing specific galois groups is to me another unnecessary skill in my
research. It is the idea of symmetry that is important to me. When I do need
them as monodromy groups, a basic technique for computing them is
specialization, i.e. reduction mod p, or finding an action which has certain
invariance properties, which is less often taught or tested.

Here is an easy sample question that illustrates the basic idea of galois
groups: State the FTGT, and use it to explain briefly why the galois group of
X^4 - 17 over Q cannot be Sym(4). This kind of thing involves some
understanding of symmetry. One should probably resist the temptation to ask it
about 53X^4 - 379X^2 + 1129.

[edit years later: did anyone understand this? I think my point was that the only way to get S(4) as Galois group for a quartic is if you need to adjoin 4 roots, one at a time, and no root added automatically gives you another root for free. Thus equations like these of even degree, which have -r as a root whenever r is a root, have smaller Galois group. I.e. after adjoining one root r, you actually get two, r and -r, so you only need to adjoin further the roots of a quadratic, so the splitting field has degree at most 8, not 24. I hope this is correct, since it has been over 15 years since I wrote this.]
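A small numerical illustration of the point (my own sketch, not part of the original exam question): the four roots of X^4 - 17 come in ± pairs, so adjoining one root always yields a second one for free.

```python
import numpy as np

# The four complex roots of x^4 - 17: roughly ±17^(1/4) and ±17^(1/4)i.
roots = np.roots([1, 0, 0, 0, -17])
assert len(roots) == 4

# Whenever r is a root, so is -r, which is why the splitting
# field has degree at most 8 rather than 24.
for r in roots:
    assert min(abs(roots + r)) < 1e-9
```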

As of now, with the recent division of the syllabus into undergraduate and
graduate topics, more than half the previous tests cover undergraduate topics
(groups, linear algebra, canonical forms of matrices.) This makes it harder to
teach the graduate course and prepare people for the test at the same time,
unless one just writes off people with weak undergraduate background, or settles
for teaching them test skills instead of knowledge.

Thus to me it is somewhat unclear what we want the students to actually know
after taking the first algebra course. I like them to learn theorems and ideas
for making proofs, since in research they will need to prove things, often by
adapting known proof methods, but the lack of proof-type questions undermines
their interest in learning how to prove things.

The syllabus is now explicit on this point, but if we really want them to know
how to state and prove the basic theorems, we should not only say so, but enforce
that by testing it.

Suggestions:

We might state some principles for prelims, such as:

1) include at least one question of stating a basic theorem and applying it.
I.e. a student who can state all the basic theorems should not get a zero.
2) Include at least one request for a proof of a standard result at least in a
special case.
3) include at least one request for examples or counterexamples.
4) try to mostly avoid questions which are tricky or hard to answer even for
someone who "knows" all the basic material in the topic (such as a professor who
has taught the course).

I.e. try to test knowledge of the subject, rather than unusual cleverness or
prior familiarity with the specific question.

But do ask at least one question where application of a standard theorem
requires understanding what that theorem says, e.g.: what is the determinant,
minimal polynomial, and characteristic polynomial of an n by n matrix defining a
k[X] module structure on k^n, by looking at the standard decomposition of that
module as a product of cyclic k[X] modules. or explain why the cardinality of a
finite set admitting an action by a p-group is congruent mod p to the number of
fixed points.

5) point out to students that if they cannot do a given question, partial credit
will be given for solving a similar but easier question, i.e. taking n= 2, or
assuming commutativity, or finite generation. This skill of making the problem
easier is crucial in research, when one needs to add hypotheses to make
progress.

6) after writing a question, ask yourself what it tests, i.e. what is needed to
solve it?

These are just some ideas that arise upon trying to prepare to help students
pass the prelim as well as prepare to write a thesis.
 
  • #150
an actual prelim

Alg prelim 2002. Do any 6 problems including I.

I. True or false? Tell whether each statement is true or false, giving in each case a brief indication of why, e.g. by a one or two line argument citing an appropriate theorem or principle, or counterexample. Do not answer “this follows from B’s theorem” without indicating why the hypotheses of B’s theorem hold and what that theorem says in this case.

(i) A commutative ring R with identity 1 ≠ 0, always has a non trivial maximal ideal M (i.e. such that M ≠ R).

(ii) A group of order 100 has a unique subgroup of order 25.

(iii) A subgroup of a solvable group is solvable.

(iv) A square matrix over the rational numbers Q has a unique Jordan normal form.

(v) In a noetherian domain, every non unit can be expressed as a finite product of irreducible elements.

(vi) If F in K is a finite field extension, every automorphism of F extends to an automorphism of K.

(vii) A vector space V is always isomorphic to its dual space V*.

(viii) If A is a real 3 x 3 matrix such that AA^t = Id, (where A^t is the transpose of A), then there exist mutually orthogonal, non - zero, A - invariant subspaces V, W of R^3.

In the following proofs give as much detail as time allows.
II. Do either (i) or (ii):

(i) If G is a finite group with subgroups H,K such that G = HK, and K is normal, prove G is the homomorphic image of a “semi direct product” of H and K (and define that concept).

(ii) If G is a group of order pq, where p < q are prime and p does not divide q-1, prove G is isomorphic to Z/p x Z/q.

III. If k is a field, prove there is an extension field F of k such that every irreducible polynomial over k has a root in F.

IV. Prove every ideal in the polynomial ring Z[X] is finitely generated where Z is the integers.

V. If n is a positive integer, prove the Galois group over the rational field Q, of X^n - 1, is abelian.

VI. Do both parts:
(i) State the structure theorem for finitely generated torsion modules over a pid.

(ii) Prove there is a one - one correspondence between conjugacy classes of elements of the group GL(3,Z/2) of invertible 3x3 matrices over Z/2, and the following six sequences of polynomials: (1+x, 1+x,1+x), (1+x, 1+x^2), (1+x+x^2+x^3), (1+x^3), (1+x+x^3), (1+x^2+x^3)

[omitted: (iii) Give representatives for each of the 6 conjugacy classes in GL(3, Z/2).]

VII. Calculate a basis that puts the matrix A, with rows (8, -4) and (9, -4), in Jordan form.

VIII. Given k-vector spaces A, B and k-linear maps f:A-->A, g:B-->B, with matrices (x[ij]), (y[kl]), in terms of bases a1,...,an, and b1,...,bm, define the associated basis of A⊗B and compute the associated matrix of
f⊗g: A⊗B ---> A⊗B.:devil:
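As a computational footnote to problem VI(ii), not part of the exam: one can verify by brute force that GL(3, Z/2) has 168 elements falling into exactly 6 conjugacy classes, matching the six sequences of polynomials. A sketch using sympy:

```python
from itertools import product
from sympy import Matrix

# All 3x3 matrices over Z/2; invertible ones have odd determinant.
mats = [Matrix(3, 3, bits) for bits in product((0, 1), repeat=9)]
G = [m for m in mats if m.det() % 2 == 1]
assert len(G) == 168              # (8-1)(8-2)(8-4) = 7*6*4

# Partition G into conjugacy classes by computing full orbits.
unseen = {m.as_immutable() for m in G}
classes = 0
while unseen:
    m = Matrix(next(iter(unseen)))
    orbit = {(g * m * g.inv_mod(2)).applyfunc(lambda e: e % 2).as_immutable()
             for g in G}
    unseen -= orbit
    classes += 1
assert classes == 6
```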
 
