Should I Become a Mathematician?

In summary, to become a mathematician, you should read books by the greatest mathematicians, try to solve as many problems as possible, and understand how proofs are made and what ideas are used over and over.
  • #71
matt grime said:
" And with the Part III, you can also specilise in Applied or Pure, right?"

No, you get to do whatever the hell you like.

In Part III the lectures are intense and far-reaching, there are many different courses, far more than the average university is capable of handling, and it is widely recognised at international level to be outstanding. None of that applies to other taught masters courses in the UK, which tend to be very narrowly focused on one particular area. You want to do graduate level courses in QFT, Lie Algebras, Differential Geometry, Non-linear dynamics and Galois Cohomology of number fields? Could be arranged, depending on the year (that was a selection of courses available when I did it). Where else would you be able to do that?

Feel like finding out about modular representation theory, combinatorics, functional analysis, fluid mechanics, and numerical analysis? Again, quite likely you can do that.

Of course, why you would want to do that is something else entirely, but in terms of scope of work and expectations placed upon you it is the best preparation out there, far more so than most (if not all, but I can't bring myself to make such sweeping statements) MScs by research, and certainly more so than any MMath course.

If you want to do a PhD in maths at Cambridge, they will demand Part III, and many other places use it as a training ground and ask their students to go there.

The reason it is the best is because in some sense it is 'the only': there is no other university with the resources to be able to offer a program like it. Even Oxford can't compete, and most UK maths departments are just too small to offer anything comparable.
so Part III is equivalent to an MSc in maths?
and you can also combine studies from pure maths with mathematical physics? sounds interesting, because as far as i know you cannot study both of them in an MSc.
 
  • #72
mathwonk said:
Matt's remarks on differences in expectations in US, UK remind me of a talk I heard at a conference. The speaker said something like, "this proof uses only mathematics that any sophomore undergraduate would know", then paused and added, "or here in the US, maybe any graduate student". This is true and getting worse.
Just a comment on this...

I think this is one of those universal things, where the quality of the students always looks, or rather seems, better at other institutions or in other countries.

However, from experience, I don't think this is the case.

At any of the top universities in, e.g., the UK, you're going to get good and bad students, ones who like to study and slackers.

As an example, in the German PhD you're expected to know, and can be asked in the viva, anything on your subject. In other words, you have to revise everything you have been taught (or should know) from day one of UG study onwards. This is much more extreme than any UK viva in terms of the material you should know. However, this doesn't mean that the candidates are any better or any worse than UK, US... PhDs.

And, for me, the example above wouldn't suit at all. I'm not one for learning this theorem and this proof by heart. I prefer to go to a book. If I come across a new problem, of course, I have an idea of how to solve/prove it, but from there it's down to searching through the literature - to see what's been done before. Then thinking about what can be done now...

People who come up with lines like, "At university X you'll be taught [add appropriate theorem] in freshers' week..." usually save them for coffee room chat, ime :smile:
 
  • #73
loop quantum gravity said:
so Part III is equivalent to an MSc in maths?



It has no equivalent. At the end you get a Certificate of Advanced Studies in Mathematics. But then Cambridge seems to like making itself an anomaly. For funding reasons it was (still is?) technically classed as an undergraduate course when I did it; my Local Education Authority, who in those days paid your fees and gave you a subsistence allowance, counted it as the 4th year of my degree, though anyone from the UK doing it will have gotten their degree already.

Just look at the courses they offer in any one year to get an idea of what goes on. Grojnowski gave a lecture course on 'The Geometry of the Punctured Disc' in 1999, which was essentially him lecturing on some research he'd just done (probably on Hilbert Schemes). Gowers (Fields Medallist) decided to give a course on K-Theory because he wanted to learn about it and found the textbooks on the subject inadequate. The courses offered by DAMTP tend to be more predictable.

There is a definite feeling that anyone with a 'standard' PhD in maths from the UK (i.e. someone who did a 3/4 year undergrad at, oh, pick a place like Nottingham, which is a good university for maths, then jumped into the PhD program immediately) is underprepared for life after the PhD in the real world of research. This is exactly because, in this country, there is currently no scope for doing courses like Part III while a PhD student, and this includes people with MScs already (a lot of PhD students at Cambridge attend Part III courses, and the truly exceptional, like Ben Green, lecture the courses). Whilst it is generally accepted that a US undergrad course is not as demanding as a UK one generically, the PhD courses in the US (which take a lot longer) are far better at preparing you for the real world of academia. The gap between a UK PhD and, say, a German one is actually a huge yawning chasm, if you ask me.
 
  • #74
isn't HallsofIvy a mathematician?
 
  • #75
mathwonk said:
beeza, i recommend henry helson's honors calculus book, pretty cheap ($24 new, including shipping), from his website, and very well done for a short expert treatment of high level calculus, by a retired berkeley professor.

Mathwonk, thank you! I skimmed over your notes quickly just now (before I go over them thoroughly later tonight) and for the most part, I have never seen any of that stuff before. I've never been exposed to real proofs, as our professor never really did many "proofs" in our lectures. She always said to refer to our book for the proof, and then began presenting more practice problems. I don't even know where to begin constructing a formal proof. Heck, we weren't even exposed to the epsilon-delta definition of a limit or hyperbolic trig functions.

With one quick glance, I could get the gist of some of the material, but I definitely need a long sit-down to digest it. I'll be picking up that book you recommended and hopefully with some studying, it will make my current calculus II class more interesting.

I'm honestly quite disappointed with the classes at my school so far.
 
Last edited:
  • #76
ok if you did my last exercise, here is another harder one. recall f is continuous at a if for every e>0 there is some d>0 such that whenever |x-a|<d and f is defined at x, then |f(x)-f(a)| < e.

prove that if f is continuous at a and f(a) > 0, then f(x)>0 for all x on some interval centered at a, (assuming f is defined on some interval containing a).

then prove (harder) that if f is continuous on a closed bounded interval [a,b] and f(a) < 0 while f(b) > 0, then f(x) = 0 for some x in (a,b).

hint: let x be the least upper bound of the set S of all t in [a,b] such that f(t) < 0. I.e. x is the smallest number not smaller than any element of S.

Prove that f(x) >0 leads to a contradiction and also f(x) < 0 leads to a contradiction. hence we must have f(x) = 0.

this stuff is basic to first semester calc but considered too hard for the AP course. But you can do it if you try. Help is also available (here). Unfortunately your class is underestimating your intelligence, but if you ask the prof for more, you may get it. That happened to me once. A student came and said the class was boring so I cranked it up, to both her and my enjoyment.
 
Last edited:
  • #77
ex. 3. if f is continuous on [a,b] and differentiable on (a,b) and f(a) = f(b), then there is some point x in (a,b) such that f'(x) = 0.

ex.4. if f is differentiable on (a,b) and f' is never zero on (a,b) then f is strictly monotone on (a,b).

this is what I call the basic principle of graphing. i.e. a function is strictly monotone on any interval in which there is no critical point.

this pretty much covers the entire theory of one-variable differential calculus if you think about it.:smile:
 
  • #78
if you are a calculus student from a standard AP class, or basic college calc class, and if you can do these exercises, you can be a mathematician.
 
  • #79
I want to be a mathematician.

My interest is in logic and foundations.
Should I follow textbooks or study the original works by those great people: Russell, Turing, Gödel...?
Any advice will be appreciated.
 
  • #80
do both. perhaps textbooks by great people, like paul cohen's text on independence of the continuum hypothesis. i myself am not enamored of russell's contributions but many logicians disagree.

could we have some input from logicians, or at least from people who love logic?
 
Last edited:
  • #81
I'm a physics major (freshman) (in Austria) and so far there was nothing new for me in your lecture notes. I'm taking a standard calculus course for physicists (we don't have anything like honors classes) and we prove each theorem we encounter. We also covered the things in your exercises.

We also have a good theoretical approach to linear algebra.

But I can't say that about ODEs. This semester we had a course called "Introduction to differential equations", but it was only some recipes for solving these equations - almost no theory. I'm thinking about taking the course that mathematics majors have - to understand the theory too.
Do you think that it is a good approach (for maybe a prospective theoretical physicist), or should I rather focus more on the physics courses? How important is the theory for physicists?
 
  • #82
mathwonk said:
prove that if f is continuous at a and f(a) > 0, then f(x)>0 for all x on some interval centered at a, (assuming f is defined on some interval containing a).

Since nobody has posted solutions to these, here is my attempt. Don't read this if you're working on it!

Proof.
Suppose f is continuous at a and f(a) > 0. Now since f is continuous at a, for all e > 0 there is some d > 0 such that if |x - a| < d, then |f(x) - f(a)| < e; ie, if a - d < x < a + d, then f(a) - e < f(x) < f(a) + e. In particular, we have that if x is in (a - d, a + d), then f(x) > f(a) - e. Since this holds for all e > 0 and since f(a) > 0, we can choose e = f(a) > 0. Thus if x is in (a - d, a + d), then f(x) > f(a) - f(a) = 0.

I think that works, maybe I'll try the others. This is a great thread btw. Edit again: I just realized that this is true also: If f is continuous at a and f(a) < 0, then f(x) < 0 for all x on some interval centered at a (assuming f is defined on some interval containing a). The proof is the same, except since f(a) < 0, -f(a) > 0, so choosing e = -f(a) > 0 gives the desired result.
 
Last edited:
  • #83
mathwonk said:
ex. 3. if f is continuous on [a,b] and differentiable on (a,b) and f(a) = f(b), then there is some point x in (a,b) such that f'(x) = 0.


Here's my attempt at this one. I used three other results to prove it.
1. If f:[a,b]->R is continuous then f:[a,b]->R attains a max and a min.
2. If f :(a,b)->R is differentiable at x in (a,b) and f attains a max or a min at x, then f'(x) = 0.
3. The derivative of a constant function is 0.


Proof.
Suppose f is continuous on [a,b] and differentiable on (a,b). Now since f is continuous on [a,b], f attains a max and min on [a,b].
If the max occurs at some x in (a,b), then f'(x) = 0.
If the min occurs at some x in (a,b), then f'(x) = 0.
If both the max and the min occur at the endpoints, since f(a) = f(b) the maximum and minimum values of f are equal, so f must be a constant function. Hence f'(x) = 0 for any x in (a,b).
In any case there is some x in (a,b) at which f'(x) = 0.
 
  • #84
mathwonk said:
ex.4. if f is differentiable on (a,b) and f' is never zero on (a,b) then f is strictly monotone on (a,b).

I did this one using the mean value theorem which says:
If f:[a,b]->R is continuous and f:(a,b)->R is differentiable, then there is a point x in (a,b) at which f'(x) = (f(b) - f(a))/(b - a).
Proof.
Suppose f is differentiable on (a,b) and f' is never zero on (a,b).
Then either f'(x) > 0 for every x in (a,b) or f'(x) < 0 for every x in (a,b).

Assume f'(x) > 0 for every x in (a,b) and let u and v be points in (a,b) with u < v. Now we can apply the mean value theorem to f:[u,v]->R to choose some x in (u,v) at which f'(x) = (f(v) - f(u))/(v - u). Since f'(x) > 0 and v - u > 0 it follows that f(v) - f(u) > 0; ie, f(u) < f(v). Hence f is strictly increasing.

Assume f'(x) < 0 for every x in (a,b) and let u and v be points in (a,b) with u > v. Now applying the mean value theorem to f:[v,u]->R we can choose some x in (v,u) at which f'(x) = (f(u) - f(v))/(u - v). Since f'(x) < 0 and u - v > 0 it follows that f(u) - f(v) < 0; that is, f(u) < f(v). Hence f is strictly decreasing.

In any case f is strictly monotonic.

A similar result is that if f:R->R is differentiable and f'(x) != 0 for each x in R, then f:R->R is strictly monotonic. Here's a hint for anyone who wants to do it. Use the fact that if f is differentiable on some open interval I, then the image of the derivative f':I->R is an interval. (I found arguing by contradiction easiest on this one)
 
Last edited:
  • #85
mathwonk said:
then prove (harder) that if f is continuous on a closed bounded interval [a,b] and f(a) < 0 while f(b) > 0, then f(x) = 0 for some x in (a,b).

Here is a hint for another way to do this problem(a different way than what mathwonk suggested).
First show that if, for each natural number n, a_n and b_n are numbers with a_n < b_n, I_(n+1) = [a_(n+1), b_(n+1)] is contained in I_n = [a_n, b_n], and lim n->inf (b_n - a_n) = 0, then there is exactly one point x which belongs to I_n for all n, and both of the sequences {a_n} and {b_n} converge to this point x. Now recursively define a sequence of nested, closed subintervals of [a,b] whose endpoints converge to a point in [a,b] at which f(x) = 0. This problem is hard I think.
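
For anyone who wants to see the nested-interval construction concretely, here is a minimal numerical sketch in Python (the test function, interval and tolerance are just illustrative choices, not part of the hint): at each step keep the half of the interval on which the sign change survives, so the intervals are nested and their lengths tend to 0.

def bisect_root(f, a, b, tol=1e-12):
    # assumes f is continuous with f(a) < 0 and f(b) > 0
    while b - a > tol:
        m = (a + b) / 2.0
        if f(m) < 0:
            a = m  # the sign change survives in [m, b]
        else:
            b = m  # the sign change survives in [a, m]
    return (a + b) / 2.0

# illustrative example: x^3 - 2 on [1, 2]; prints roughly 2**(1/3)
print(bisect_root(lambda x: x**3 - 2, 1.0, 2.0))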
 
Last edited:
  • #86
r4nd0m,

i am happy you had a more thorough calc course. that may be the difference between instruction in austria and here. i will try to post higher level exercises. those were for people who had had only cookbook calc courses, as they are standard results proved in proof courses.

even in your case it may be that certain subtleties such as my concept of local boundedness is different from the proofs in your course, although of course the statements of the big results are the same.

here is a little, slightly less standard exercise for you along those lines: to prove that the derivative of a differentiable function always has the intermediate value property, whether or not it is continuous. I.e. assume f is differentiable on [a,b] and that g is its derivative. of course f is continuous, but g may not be. even if g is not continuous however, i claim that if g(a) = f'(a) > 0 and g(b) = f'(b) < 0, then there is some x with a < x < b and g(x) = f'(x) = 0. try that.

for a good theoretical intro to diff eq i highly recommend v.i. arnol'd on ordinary diff eq, about $35. let me post some of my recent updates to my linear algebra notes on the topic, taken from his book.
 
Last edited:
  • #87
linear systems

exercise: prove the only solutions of f' = af, with a constant, are f = ce^(at).

Linear differential equations: Let V = vector space of continuously differentiable functions on the real line, W = continuous functions. The derivative map D:V-->W, is linear and surjective by the fundamental theorem of calculus. The kernel of D is all constant functions by the mean value theorem. For any scalar c, f(x) = e^(cx) is an eigenvector for D with eigenvalue c.

Ex: If Lf = f^(n) + a(n-1)f^(n-1) + ... + a(1)f' + a(0)f,
L:C^∞-->C^∞ is a linear differential operator with constant coefficients a(i), then DL = LD, so D:ker(L)-->ker(L).

If X^n + a(n-1)X^(n-1) + ... + a(1)X + a(0) = ∏(X - c(i)), all c(i) distinct, then {f(i)(x) = e^(c(i)x), for i = 1,...,n} is a basis for ker(L) of eigenvectors for D.

[We know dim ker(D-c) = 1 when n = 1. So by induction dim ker((D-c(1))(D-c(2))...(D-c(n))) =
dim (D-c(1))^(-1)(ker((D-c(2))...(D-c(n)))) <= n. Then prove {e^(c(i)x): i = 1,...,n} is linearly independent.]

If the polynomial above factors as P = ∏(X-c(i))^(r(i)), with some r(i) > 1, there is no basis for ker(L) of eigenvectors of D, but there is a Jordan basis {... ; e^(c(i)x), xe^(c(i)x), (1/2)x^2e^(c(i)x), ..., (1/(r(i)-1)!)x^(r(i)-1)e^(c(i)x); ...}, so dim ker(L) = dim ∏ ker((D-c(i))^(r(i))) = ∑ r(i) = deg P, and P = the minimal polynomial for D on ker(L). There is exactly one Jordan block for each c(i) in the matrix of D on ker(L).

Linear differential systems: let C = the space of smooth functions on R.
An nxn matrix A of scalars [aij] defines a linear map A:C^n-->C^n, acting on columns of n functions, as does D:C^n-->C^n, acting on each function separately. The equation (D-A)y = 0 is a homogeneous linear differential system, where y = y(t) = (y1(t),...,yn(t)) is a column vector of unknown functions, to be solved for. If n = 1, we know y = e^(at) is a basis of ker(D-A).

The general case has a formally similar solution, namely, a basis is given by the columns of the matrix of functions e^(tA) = ∑ (t^n/n!)A^n, defined by the familiar series for e^(ta), but for matrices, which converges absolutely by the same argument as when n = 1.

If A has a Jordan form, the entries in the matrix e^(tA) are polynomial combinations of ordinary exponential functions as follows. Let A = S+N be the Jordan decomposition above. Then e^(tA) = e^(tS).e^(tN), matrix product. But if S is diagonal with entries c(i), then e^(tS) is diagonal with entries e^(tc(i)), and since N is nilpotent, the series for e^(tN) is finite, and the entries of e^(tN) are polynomials in t.

Ex. Use this method to solve (D-A)y = 0, where A is the 2x2 matrix with rows (a,0) and (1,a), and y = (y1,y2). Show this is equivalent to solving (D-a)(y1) = 0 and (D-a)(y2) = y1, i.e. finding ker[(D-a)^2:C-->C].
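
As a concrete check of that recipe (a sketch only; the values of a and t below are arbitrary illustrative choices), one can compare scipy's matrix exponential with the closed form predicted by the decomposition A = aI + N for the exercise's matrix:

import numpy as np
from scipy.linalg import expm

a, t = 0.5, 2.0
N = np.array([[0.0, 0.0],
              [1.0, 0.0]])              # nilpotent part, N^2 = 0
A = a * np.eye(2) + N                   # the matrix with rows (a, 0) and (1, a)

# the columns of e^(tA) are a basis of solutions of (D - A)y = 0
lhs = expm(t * A)
rhs = np.exp(a * t) * (np.eye(2) + t * N)   # e^(tA) = e^(at)(I + tN) since N^2 = 0
print(np.allclose(lhs, rhs))                # expected: True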
 
Last edited:
  • #88
a remark r4nd0m, make sure you yourself can prove these results, not just that they were proved in class by the teacher. that is the difference between becoming a mathematician, or scientist, and a listener.
 
  • #89
mathwonk said:
it may be that certain subtleties such as my concept of local boundedness is different
Just want to add that I had never seen this before.
 
  • #90
mathwonk said:
exercise: prove the only solutions of f' = af, with a constant, are f = ce^(at).

It's been like 2 years since I've had ODE's but I think this works.

Proof.
It's clear e^(at) is a solution. Now suppose y(t) is any other solution. Then y'(t) = a*y(t). Let w(t) = e^(-at)*y(t), then w'(t) = -ae^(-at)y(t) + e^(-at)y'(t) = -ae^(-at)y(t) + e^(-at)a*y(t) = 0, so w'(t) = 0 for all t and thus w(t) = c = e^(-at)y(t). Hence y(t) = ce^(at). So any solution is a linear combination of e^(at); that is, any solution has the form ce^(at).

Also it doesn't seem to matter whether a is real or complex, and I guess we can say that {e^(at)} is a basis for the solution space of this equation.

Edit: Fixed, I think there was a mistake in the first proof I wrote. Looks ok now I think.

I like these problems because they seem to be right at my level, they are not extremely easy for me nor are they extremely difficult either. I'll try that intermediate value one later when I have more time, looks interesting.
 
Last edited:
  • #91
mathwonk said:
I am interested in starting this discussion in imitation of Zappers fine forum on becoming a physicist, although i have no such clean cut advice to offer on becoming a mathematician. All I can say is I am one.

My path here was that I love the topic, and never found another as compelling or fascinating. There are basically 3 branches of math, or maybe 4: algebra, topology, and analysis, or also maybe geometry and complex analysis.

There are several excellent books available in these areas: Courant, Apostol, Spivak, Kitchen, Rudin, and Dieudonne' for calculus/analysis; Shifrin, Hoffman/Kunze, Artin, Dummit/Foote, Jacobson, Zariski/Samuel for algebra/commutative algebra/linear algebra; and perhaps Kelley, Munkres, Wallace, Vick, Milnor, Bott/Tu, Guillemin/Pollack, Spanier on topology; Lang, Ahlfors, Hille, Cartan, Conway for complex analysis; and Joe Harris, Shafarevich, and Hirzebruch, for [algebraic] geometry and complex manifolds.

Also anything by V.I. Arnol'd.

But just reading these books will not make you a mathematician, [and I have not read them all].

The key thing to me is to want to understand and to do mathematics. When you have this goal, you should try to begin to solve as many problems as possible in all your books and courses, but also to find and make up new problems yourself. Then try to understand how proofs are made, what ideas are used over and over, and try to see how these ideas can be used further to solve new problems that you find yourself.

Math is about problems, problem finding and problem solving. Theory making is motivated by the desire to solve problems, and the two go hand in hand.

The best training is to read the greatest mathematicians you can read. Gauss is not hard to read, so far as I have gotten, and Euclid too is enlightening. Serre is very clear, Milnor too, and Bott is enjoyable. Learn to struggle along in French and German, maybe Russian, if those are foreign to you, as not all papers are translated; if English is your language you are lucky, since many things are in English (Gauss), but oddly not Galois, and only recently Riemann.

If these and other top mathematicians are unreadable now, then go about reading standard books until you have learned enough to go back and try again to see what the originators were saying. At that point their insights will clarify what you have learned and simplify it to an amazing degree.


Your reactions? more later. By the way, to my knowledge, the only mathematicians posting regularly on this site are Matt Grime and me. Please correct me on this point, since nothing this general is ever true.:wink:

Remark: Arnol'd, who is a MUCH better mathematician than me, says math is "a branch of physics, that branch where experiments are cheap." At this late date in my career I am trying to learn from him, and have begun pursuing this hint. I have greatly enjoyed teaching differential equations this year in particular, and have found that the silly structure theorems I learned in linear algebra have as their real use an application to solving linear systems of ODEs.

I intend to revise my linear algebra notes now to point this out.

how much do you earn in a year as a math professor?
 
Last edited:
  • #92
kant said:
how much do you earn in a year as a math professor?

I'm not sure about other countries, but in Canada, most collective agreements are available online, so you can look this up. For example, http://www.uwfacass.uwaterloo.ca/floorsandthresholds20062008.pdf" [Broken] is the pay structure for the University of Waterloo.
 
Last edited by a moderator:
  • #93
George Jones said:
I'm not sure about other countries, but in Canada, most collective agreements are available online, so you can look this up. For example, http://www.uwfacass.uwaterloo.ca/floorsandthresholds20062008.pdf" [Broken] is the pay structure for the University of Waterloo.

Hmm... ok the money is reasonable... but what about the chicks? girls don't like nerdy guys... or do they? hmm...
 
Last edited by a moderator:
  • #94
well, i guess what i am saying is this: are girls usually impressed by your profession? This is a serious question. Well, i get pretty good grades, but i am always very conscious that others might view me as weak.
 
Last edited:
  • #95
Hmm.. ok ok. i got it.
 
  • #96
Well this is a family forum, but i will admit that to impress girls, in my experience, it is not sufficient to be able to solve their quadratic equations. It helps to know some jokes too, and compliment their shoes. Secret: basically, to get dates it is sufficient to react to those girls who are trying to tell you you should ask them out.

[i deleted my earlier attempts at humor on this topic because my wife said they were "a little nerdy".]
 
Last edited:
  • #97
mathwonk said:
even in your case it may be that certain subtleties such as my concept of local boundedness is different from the proofs in your course, although of course the statements of the big results are the same.

Yes you're right, we didn't mention the local boundedness. What is this concept actually good for?

mathwonk said:
here is a little, slightly less standard exercise for you along those lines: to prove that the derivative of a differentiable function always has the intermediate value property, whether or not it is continuous. I.e. assume f is differentiable on [a,b] and that g is its derivative. of course f is continuous, but g may not be. even if g is not continuous however, i claim that if g(a) = f'(a) > 0 and g(b) = f'(b) < 0, then there is some x with a < x < b and g(x) = f'(x) = 0. try that.

Well, I would proceed like this:

f is continuous on [a,b], so (from Weierstrass's second theorem (I don't know what you call it in the US :smile: )) f attains its maximum and minimum on [a,b].
But g(a) > 0, hence f is increasing at a (i.e. there exists d > 0 such that for every x in (a, a+d), f(x) > f(a)). Hence f(a) is not the maximum of f on [a,b]. The same holds for f(b).
Let f(m) be the maximum. Then m must be from the interval (a,b).
Hence f(m) is also a local maximum => g(m) = 0
Q.E.D.
 
  • #98
it is good for proving the boundedness result for possibly discontinuous functions. This shows that the boundedness of a function on a closed bounded interval does not actually need continuity, but is true with the weaker condition of local boundedness. it could help you prove a discontinuous function is also bounded if you could show it is everywhere locally bounded.

i just like it because it occurred to me while thinking through the proof from scratch. it convinces me that i thought up the proof myself and hence am beginning to understand it, instead of just remembering a proof i read.

i like your proof that f'(x) = 0 has a solution. it is very clear and complete, without being wordy at all. [i believe the needed weierstrass 2nd thm is proved in my notes above as well and follows quickly from the boundedness of reciprocals].

now can you refine it to give the full IVT for derivatives? I.e. assume f'(a) = c and f'(b) = d, and c<e<d. prove f'(x) = e has a solution too.
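
(A possible nudge on that refinement, in case anyone gets stuck, using only what has already been proved in this thread: set g(x) = f(x) - e*x. Then g is differentiable with g'(x) = f'(x) - e, so g'(a) = c - e < 0 and g'(b) = d - e > 0. The case already done, with the signs reversed (the argument is identical), gives some x in (a,b) with g'(x) = 0, i.e. f'(x) = e.)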
 
  • #99
ircdan, #82 and #83 look clean as a whistle. Also #84, but try that one again just using rolle's thm: if a differentiable f takes the same value twice on an interval, then f' has a zero in between.

i.e. if a differentiable f is not monotone on [a,b] can you prove it takes the same value twice?

as for the intermediate value thm, try it without sequences, just using the property you already proved, that a function which is positive or negative at a point is so on an interval.

then let x be the smallest number in [a,b] which is not smaller than any point where f < 0. if f(a) < 0 and f(b) > 0, prove f cannot be negative at x.
 
Last edited:
  • #100
Q. who wants to be a mathematician?
hmmm...I guess you have to be intelligent enough and very interested in maths, you have to study hard and you should study at a cool university. AND you can't say that everyone with a PhD in math is a mathematician.

A. I would, if I could. I can't so I don't want to be 1.
 
  • #101
r4nd0m, here is possibly a test of the usefulness of the condition of local boundedness: is it true or not that if f has a derivative everywhere on [a,b], then g = f' is bounded on [a,b]? [if it is true, then local boundedness might help prove it.]

unfortunately it appears to be false. i.e. f(x) = x^2 sin(1/x^2) for x not 0, and f(0) = 0, seems to be differentiable everywhere with derivative locally unbounded at x = 0. so I have not yet thought of an interesting case where local boundedness holds and continuity fails. but the concept still focuses attention on why the theorem is true. i.e. if a function f is unbounded on [a,b] then there is a point x in [a,b] with f unbounded on every interval containing x. that is the real content of the theorem. in particular continuous functions do not fail this condition at any point. so it lets you stop thinking about the whole interval and think about the neighborhood of one point.

e.g. in finding the counterexample above it helped me to know that if a counterexample existed, it would have to also be a local counterexample. i.e. to know that if a derivative existed which was unbounded on [a,b], there must also be a point x in [a,b] at which the derivative is locally unbounded, which is a priori a stronger condition.:smile:
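
As a sanity check on that counterexample (a sketch; the sample points are just one convenient choice), note that for x not 0 the derivative is f'(x) = 2x sin(1/x^2) - (2/x) cos(1/x^2), and evaluating it along x_n = 1/sqrt(2*pi*n) shows it blowing up near 0:

import math

def fprime(x):
    # derivative of f(x) = x^2 sin(1/x^2) for x != 0
    return 2*x*math.sin(1/x**2) - (2/x)*math.cos(1/x**2)

# at x_n = 1/sqrt(2*pi*n) we have sin(1/x^2) = 0 and cos(1/x^2) = 1,
# so f'(x_n) = -2*sqrt(2*pi*n), which is unbounded as n grows
for n in (1, 10, 100, 1000):
    x = 1 / math.sqrt(2 * math.pi * n)
    print(n, x, fprime(x))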
 
Last edited:
  • #102
doing mathematical research can be as simple as this: finding a theorem whose proof actually proves more than the theorem asserts, and then generalizing it to a new interesting case.

for example the famous kodaira vanishing theorem says that on a complex manifold, if L is a line bundle with L-K positive in a certain sense, i.e. positive curvature, or positive definite chern class, then the cohomology of L is zero above degree 0. the proof by kodaira, modified by bochner, is long and hard, but analyzing it closely shows that it works in each degree separately, by showing the curvature in that degree is positive, i.e. certain sums of eigenvalues are positive.

now when kodaira's hypothesis holds, then all degree one eigenvalues are positive and then those in higher degrees, which are sums of the ones in degree one, must also be positive. but in fact if only one eigenvalue is negative, and all others are not only positive but large compared to that one, then any sum of two or more eigenvalues will be positive, i.e. cohomology will be zero in dimension 2 and more.

since on a complex torus, which is flat, eigenvalues can be scaled in size without affecting the fact that they represent the given line bundle, this gives a proof of Mumford's famous "intermediate cohomology vanishing theorem" on complex tori.

this theorem has in fact been published with this proof by a number of well known mathematicians.

a more significant and wide reaching generalization has been obtained by Kawamata and Viehweg, using branched covers of complex manifolds, to generalize to a sort of fractionally positive condition, which has many more applications than the original theorem. all the proofs reduce to the kodaira theorem, for which kolla'r has also given a nice understandable "topological" proof.

My colleague and i have also given a generalization of riemann's famous "singularity theorem" on jacobian theta divisors, whose beautiful proof by kempf turned out to use only some conditions which were usually also true for theta divisors of prym varieties, so we published this.

this progress, and later work on less general cases, gave impetus to the subject, which has culminated recently in a complete solution by a young mathematician, Sebastian Casalaina-Martin, of the prym singularities theorem, over 100 years after prym theta divisors were introduced.

this in turn has led to progress in understanding abelian varieties of low dimension. e.g. it is shown now by Casalaina-Martin that if a 5 dimensional abelian variety has a theta divisor with a triple point or worse, then in fact that abelian variety is either a hyperelliptic jacobian or an intermediate jacobian of a cubic threefold.

thus understanding proofs helps one learn more than just knowing the traditional statements, and it is fun. this is why i try to teach students to think through proofs and make their own. it is hard getting many people to get past just memorizing the statements and problem - solving techniques, and even proofs, without analyzing them.

In the cases above many people had known and used those theorems for decades without noticing they could be strengthened.
 
Last edited:
  • #103
lisa, i am not sure about some of your restrictions on candidacy for being a mathematician, but i think you do have to want to.

some of the best mathematicians at my school went to colleges like University of Massachusetts, Grinnell, University of North Carolina (are they cool? i don't know), and the smartest guy in my grad class at Utah went to Univ of Iowa. i guess by my definition hurkyl is a mathematician even if he hasn't joined the union, since he likes it and does it.

but i enjoy singing in the shower too even if i am not a singer. why miss out on the fun?
 
  • #104
ircdan, your proof in #90 is right on. can you generalize it to prove that the only solutions of (D-a)(D-b)y = 0 are ce^(at) + de^(bt) when a and b are different?

then try to show that all solutions of (D-a)(D-a)y = 0 are ce^(at) + dte^(at).

my notation means (D-a)(D-b)y = [D^2 - (a+b)D + ab]y = y'' - (a+b)y' + ab y. TD, I am glad to hear how thorough the instruction provided in Belgium is. You say you skipped proving the inverse function theorem. can you prove it for one variable functions?
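
For anyone who wants to check those closed forms symbolically, here is a minimal sympy sketch (the numeric values 2 and 3 are arbitrary illustrative choices, not part of the exercise):

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# (D-a)(D-b)y = y'' - (a+b)y' + ab*y with a = 2, b = 3 (distinct roots)
a, b = 2, 3
print(sp.dsolve(sp.Eq(y(t).diff(t, 2) - (a + b)*y(t).diff(t) + a*b*y(t), 0), y(t)))
# expected: y(t) = C1*exp(2*t) + C2*exp(3*t)

# (D-a)(D-a)y = y'' - 2a*y' + a^2*y with a = 2 (repeated root)
print(sp.dsolve(sp.Eq(y(t).diff(t, 2) - 2*a*y(t).diff(t) + a**2*y(t), 0), y(t)))
# expected: y(t) = (C1 + C2*t)*exp(2*t)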
 
Last edited:
  • #105
mathwonk said:
TD I am glad to hear what thorough instruction is provided in Belgium. You say you skipped proving the inverse function theorem. can you prove it for one variable functions?
We skipped that indeed; according to my notes it would have required "more advanced techniques" than we had developed at that point in the course. We then used it to prove the implicit function theorem for f : R²->R, which was a rather technical proof (more than we were used to, at least).
I'm supposing the proof of the inverse function theorem would be at least equally technical/complicated, so I doubt that I would be able to prove it just like that :blushing:
 
