How much math does a math professor remember?

  • Thread starter: andytoh
  • Tags: Professor
AI Thread Summary
Math professors often forget material over time, similar to the general population, especially if they do not regularly use specific concepts. While they may retain a foundational understanding of their field, recalling detailed proofs from undergraduate courses can be challenging without practice. The discussion highlights that understanding and problem-solving skills are more critical than rote memorization of theorems. Professors may struggle to recall even basic concepts from earlier courses if they haven't taught them recently. Ultimately, while mathematicians can derive proofs with time and thought, they may not remember every theorem or proof verbatim.
andytoh
I've always wondered about this question. I've taken university math courses and gotten A+'s. But then years later, if I never used topics in that course again, I realize how much I have forgotten.

A math professor who does research in, say, number theory would essentially never use, say, the Gauss-Bonnet Theorem that he had learned many years ago in Differential Geometry. Would the number theorist be able to pick a textbook problem in the Gauss-Bonnet chapter and solve it off the top of his head? Are math professors so mentally powerful that the phrase "if you don't use it, you lose it" does not apply to them? Do they remember every math topic they have learned as well as they did just before walking into their final exam many years ago?

For example, how many math professors reading this post can prove the Inverse Function Theorem of second year calculus from scratch?
 
Last edited:
Erm, pick a hard one, not an obvious one. And, no, that isn't being arrogant. I can't remember plenty of proofs, but that is an obvious one to reconstruct.
 
I'd say maths professors are as forgetful as the rest of the population.

I've witnessed an associate professor say he could not remember the basic trig identities (when they were needed in a complex analysis subject) because he hadn't taught a first year maths subject for a decade or so.

Also, I once asked a professor (who had taught the subject for 5 years in a row) about a problem he had set, but he only gave a sketch of the answer and said he couldn't remember the details (I don't think he was purposely trying to hide the answer but genuinely forgot).
 
matt grime said:
Erm, pick a hard one, not an obvious one. And, no, that isn't being arrogant. I can't remember plenty of proofs, but that is an obvious one to reconstruct.

Of course a mathematician cannot repeat all the proofs he reads in research papers, but can he repeat the proofs of all the theorems, and remember vividly all the topics, that he learned in, say, the first 3 years of undergraduate math? (which I would consider the fundamentals for all math aspirants)
 
I don't really understand why you'd focus on the memorization skills of a teacher. After all, understanding is a much deeper subject than mere memorization, and math is a huge field.

- Warren
 
chroot said:
I don't really understand why you'd focus on the memorization skills of a teacher. After all, understanding is a much deeper subject than mere memorization, and math is a huge field.

- Warren

I agree that problem solving skills and ideas are more important than plain knowledge. I was just curious about how much they really know and remember. I personally get frustrated when I feel I have mastered a subject, and then about a year down the road, after not using that topic again, I cannot solve a problem in that topic without looking up my notes and textbook (so here is where problem solving does come into the discussion).

In fact, problem solving skills are what I am truly concerned about, but knowledge and memory are tied into that. For example, just the other day I was trying to prove that the real projective space is a smooth manifold, but had trouble proving that the coordinate transformations are diffeomorphisms, because I had forgotten the details of the quotient topology that I had learned in my third year topology course years ago.
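As an aside, the construction being referred to is standard textbook material; a sketch (my notation, not from the thread):

```latex
% Standard smooth atlas on real projective space RP^n:
% RP^n = (R^{n+1} \setminus \{0\}) / \sim, where x \sim \lambda x for \lambda \neq 0.
On $U_i = \{[x_0 : \dots : x_n] : x_i \neq 0\}$ define
\[
  \varphi_i([x_0 : \dots : x_n])
  = \Bigl(\tfrac{x_0}{x_i}, \dots, \tfrac{x_{i-1}}{x_i},
          \tfrac{x_{i+1}}{x_i}, \dots, \tfrac{x_n}{x_i}\Bigr) \in \mathbb{R}^n .
\]
The transition maps $\varphi_j \circ \varphi_i^{-1}$ are rational functions with
nonvanishing denominators on their domains, hence smooth; checking that each
$\varphi_i$ is a homeomorphism is exactly where the quotient topology enters.
```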
 
matt grime said:
Erm, pick a hard one, not an obvious one. And, no, that isn't being arrogant. I can't remember plenty of proofs, but that is an obvious one to reconstruct.

Ok. A function is Riemann integrable iff its set of discontinuities has measure zero.
 
That seems like a straightforward result. It's one of those that, as soon as you start thinking about the definitions, is obvious. Like the Inverse Function Theorem.

This is as opposed to, say, the classification of compact surfaces up to homeomorphism.
 
It's not at all obvious to me (though the result in the case of finitely many discontinuities certainly is)
 
  • #10
DeadWolfe said:
It's not at all obvious to me (though the result in the case of finitely many discontinuities certainly is)
The proof idea is the same, isn't it? You split the domain into one part where the integrand is continuous, and one part of arbitrarily small size that contains all of the discontinuities.

(p.s. you forgot some conditions of the theorem)
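For the record, the full statement being alluded to, hypotheses included, is the Lebesgue criterion:

```latex
% Lebesgue's criterion for Riemann integrability (with the full hypotheses).
Let $f : [a,b] \to \mathbb{R}$ be \emph{bounded}. Then $f$ is Riemann integrable
on $[a,b]$ if and only if its set of discontinuities
\[
  D = \{\, x \in [a,b] : f \text{ is not continuous at } x \,\}
\]
has Lebesgue measure zero, i.e.\ for every $\varepsilon > 0$ there are open
intervals $(a_k, b_k)$ with
$D \subseteq \bigcup_k (a_k, b_k)$ and $\sum_k (b_k - a_k) < \varepsilon$.
```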
 
  • #11
Good stuff, guys! This is what I meant when I asked how much do mathematicians remember (I never said memorize). No one here has intentionally memorized the proofs of these theorems that we all studied in undergraduate university, but you have shown me that you "remember" the proofs in the sense that you can think it over and then repeat the proof by figuring it out on your own. I doubt anyone here has memorized the exact formula for the curvature tensor, but perhaps you can derive it from first principles. This is what I mean by remember.

So can I state that all mathematicians can prove every theorem that they had learned from university if all they were given was paper, a pencil, and sufficient time to think about it? Or is it possible that a mathematician would say (about a theorem from undergraduate math) "Yes, I know the theorem, but for the life of me I cannot repeat the proof" or even worse "What does the theorem state again?"
 
  • #12
I've had professors who couldn't even tell you the gist of the IFT.
 
  • #13
andytoh said:
So can I state that all mathematicians can prove every theorem that they had learned from university if all they were given was paper, a pencil, and sufficient time to think about it?

you can state it, but it's quite incorrect.
 
  • #14
Doodle Bob said:
you can state it, but it's quite incorrect.

I'm not talking about proofs like Fermat's Last Theorem. I'm talking about the proofs of the theorems that we have all learned from undergraduate math courses, all of which appear in any standard undergraduate math textbook. And the mathematician has all the time he needs to think about it, but without recourse to any aids.

If a mathematician can come up with their own new theorems, surely they can reprove any theorem at the level they would consider elementary, right? Similarly, they can solve any problem from a textbook off the top of their head without any aids too? For example, give a complex analyst a homework question in differential topology and he can solve it correctly and unaided if you just give him the time he needs.
 
  • #15
DeadWolfe said:
It's not at all obvious to me (though the result in the case of finitely many discontinuities certainly is)

It's continuous on all but a set of measure zero, and hence integrable. It's straightforward from the definition of the Lebesgue integral, isn't it? I've never actually seen a proof of this, nor even the statement as given.

This is one of those proofs that requires no leap of imagination, and can be derived straight away.
 
  • #16
andytoh said:
...Or is it possible that a mathematician would say (about a theorem from undergraduate math) "Yes, I know the theorem, but for the life of me I cannot repeat the proof" or even worse "What does the theorem state again?"

Yes, both of those are very possible. You must remember that undergraduate math is about showing you as big a spectrum of mathematics as possible, but your professors have spent years specializing in one field.

For example, this situation would very likely come up if you went to your abstract algebra professor and asked him about, say, the Dirichlet problem in complex analysis. Or, vice versa, if you went to one of your analysis guys and asked about some of the Sylow theorems. They might remember what those theorems state, and even under what conditions, but it's not reasonable to expect them to be able to prove them from scratch in front of you; they're just human, like you.
 
  • #17
matt grime said:
It's continuous on all but a set of measure zero, and hence integrable. It's straightforward from the definition of the Lebesgue integral, isn't it? I've never actually seen a proof of this, nor even the statement as given.

This is one of those proofs that requires no leap of imagination, and can be derived straight away.

Hence why I said "RIEMANN" integrable.
 
  • #18
But it is a short step from one to the other for 'nice' things like this. After all, we're not trying to integrate the Dirichlet function (if I mean Dirichlet: 1 at irrationals, 0 at rationals). The key point is the 'set of measure zero' thing. Once you understand that, you just need to put in the correct criteria (like boundedness, I imagine, since the function 1/x for x ≠ 0, and 0 at x = 0, is not integrable on the interval [-1,1]).
 
  • #19
Ok, is it fair to say that a mathematician should be able to prove all theorems in every topic (at the undergraduate level) that leads up to his field of expertise?

If so, then I intend to study all the proofs of every theorem in every topic that leads to my area of interest.
 
  • #20
andytoh said:
Ok, is it fair to say that a mathematician should be able to prove all theorems in every topic (at the undergraduate level) that leads up to his field of expertise?

If so, then I intend to study all the proofs of every theorem in every topic that leads to my area of interest.

How can you be sure that those theorems you leave out as somewhat irrelevant will not provide your area of interest with unexpected insights and applications in the future?

Nowadays number theory(!) finds unexpected applications in physics. Now, number theory is traditionally recognized as fairly irrelevant for physics. Modern theories are beginning to prove this assumption wrong. I think there is also luck and intuition at play when it comes to choosing the right tools for discoveries and inventions at the frontier of maths.
 
  • #21
there are two things that make it easier to remember something, 1) repetition, and 2) discovery.

i.e. we remember better the things we teach over and over, and the things that were our own discoveries.

i cannot remember, say, the proof of the Radon-Nikodym theorem or the closed graph theorem, stuff that I memorized as a senior in 1965 just to get an A.

but stuff like the inverse function theorem in one variable is almost trivial.

harder might be the infinite dimensional version.

the better you understand something the easier it is to remember.


as an example i will illustrate with the idea of the proof of the infinite dimensional inverse function theorem.


the idea is to use a version of the geometric series, i.e. the easiest inversion process is the series 1/(1-r) = 1 + r + r^2 + r^3 + ...

now that looks like just inversion of multiplication, but properly understood it becomes also inversion of composition.


notice that convergence for this series holds if r is small.


the idea is then to recall that a function f has the identity as derivative at 0 if there is a very small function h such that f = I - h, i.e. where h(x)/|x| goes to zero as x does.

Then the inverse of f = I - h is given by something like I + h + h^2 + ...

only this does not quite work. I am forgetting the exact formulation now, but it is something like this: you form a sequence I, I + h, I + h(I + h), I + h(I + h(I + h)), ... and prove this converges to the inverse.

having gotten this far I would have to sit down for a few minutes or hours and get it straight.
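One common way to make this sketch precise (a paraphrase of the standard contraction-mapping argument, not necessarily the exact formulation being half-remembered here):

```latex
% Inversion by iteration on a Banach space.
Suppose $f = I - h$, with $h$ a contraction:
$\|h(x) - h(y)\| \le q\,\|x - y\|$ for some $q < 1$.
To solve $f(x) = y$, i.e.\ $x = y + h(x)$, iterate
\[
  x_0 = y, \qquad x_{n+1} = y + h(x_n).
\]
The Banach fixed point theorem gives a unique limit $x = f^{-1}(y)$;
unwinding the recursion,
\[
  x_1 = y + h(y), \quad x_2 = y + h\bigl(y + h(y)\bigr), \quad \dots
\]
which is the composition analogue of $\tfrac{1}{1-r} = 1 + r + r^2 + \cdots$.
```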



with really hard stuff (for me), like deformation theory, sheaves, cohomology, high dimensional abelian varieties, birational geometry, even my own research, I actually tend to forget it within about three months.


I also forget stuff i never learned that well, like little tricks for anti-differentiating ("integrating") weird trig functions.


So most of us are the same as other people about forgetting, but we look at some of this stuff wayyyy more than an ordinary person.

And there are exceptions. I have a friend who never seems to forget anything. He is extremely smart. But I suspect he also spends a lot more time than others do thinking about it and understanding it in the first place.
 
  • #22
i do aspire to being able to prove, in multiple ways, all theorems leading up to my area. But I will probably die first.

I love understanding absolutely every detail, and I like to be able to see how every proof relates to another one for the same result. But I do not really separate the areas, as anything you understand can be used in another field, once you understand it.


I like finding my own proofs for results as often as possible, since as I said above I remember those better. The key for me to remember a proof is to analyze it down to its smallest parts, find the central point, and put it back together, then just remember the basic central idea. like the geometric series idea for the inverse function theorem above.


here is another example: one of the most complicated basic proofs seems to be poincare duality in algebraic topology. the argument goes on for pages in greenberg's book.


but i was sitting around once with John Morgan and (Fields medalist) Simon Donaldson, and John told Simon he had found a nice proof of Poincare duality using Morse theory. Simon thought for about 2 seconds and said "oh yes, just turn it upside down".

well, if you know morse theory, you know that it changes simplicial vertices and edges, or singular chains of various dimensions, into maxima, minima, and various kinds of saddle points. so turning the manifold upside down, as Morgan and Donaldson realized, changes the maxima into minima, ...etc..., and proves Poincare duality.
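In symbols, the "turn it upside down" trick amounts to the following (my paraphrase of the standard Morse-theoretic argument):

```latex
% Morse-theoretic Poincare duality on a closed oriented n-manifold M.
If $f : M \to \mathbb{R}$ is a Morse function, then a critical point of index $k$
for $f$ is a critical point of index $n - k$ for $-f$. The Morse complexes
therefore satisfy
\[
  C_k(f) \;\cong\; C_{n-k}(-f),
\]
and since the complex of $-f$ (with the flow lines reversed) is the dual of the
complex of $f$, passing to homology gives
\[
  H_k(M) \;\cong\; H^{\,n-k}(M):
\]
turning the manifold upside down swaps maxima with minima and index-$k$ saddles
with index-$(n-k)$ saddles.
```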


I have never remembered the detailed proof in Greenberg's book, and have never forgotten the one I heard that day from Morgan and Donaldson.

So if you are ever nearby and they say "would you like to go to lunch with the speaker?", say yes! thank you. you may hear something you will easily remember all your life.
 
  • #23
mathwonk said:
i do aspire to being able to prove, in multiple ways, all theorems leading up to my area. But I will probably die first.

This is precisely the dilemma I'm in. There are too many proofs of theorems leading up to my area of interest that I cannot reproduce, and I want to know those proofs by heart before I read on to new material. But for every proof I read from the lower level math, I lose time I could have used to read new material from higher level math.

What would you consider to be the best ratio of time spent reviewing old material to time spent learning new material? Currently the ratio I'm using is about 1:1 (which includes the time I spend doing problems in old topics), which many may consider too much time on old material, and I'll be much older than I should be by the time I reach the frontiers of my area of interest (though I will know the topics leading up to it much better).

Thank you mathwonk for your very honest answer about how well you remember the proofs leading up to your area. For a second there, Matt Grime convinced me that a mathematician should be able to prove from scratch every theorem leading up to his area of research.
 
  • #24
well, matt is younger than i am and sharper, and remembers more than i do.

so as i get older i use more tricks for remembering.

i can probably prove a function is Riemann integrable if and only if the discontinuities have measure zero, by the way, in a few minutes. the idea is, as matt said, the same as the proof that a function with a finite set of discontinuities is integrable, i.e. just approximate your function by step functions very well except on a set which isn't very big. then although we cannot control the height on that set, it won't matter since the base is so small.

lets try:
 
  • #25
andytoh said:
For a second there, Matt Grime convinced me that a mathematician should be able to prove from scratch every theorem leading up to his area of research.

Really? Then I can only presume you didn't read what I wrote. The ability to recreate 'bookkeeping' type proofs is something I suspect a lot of research mathematicians can do with a little time. There are no deep ideas, nor tricks, to remember. Since one of the results in my pre-phd stuff was the Riemann mapping theorem, I certainly don't claim to be able to reproduce all of the material I've ever seen. Heck, I can't remember a lot of it. And I even said so in my first (or perhaps second) post in this thread.
 
  • #26
riemann integrability

lets see what we have here. suppose we can cover the set of discontinuities by a sequence of intervals whose total length is less than e. then some finite number of those intervals covers most of the discontinuities and has total length e/2... is our function bounded? i guess so, so we have a bounded function and given e we can


hmmmm, this requires a little thought. maybe i need to cheat and actually use an idea, due to riemann, which fortunately i remember, called oscillation.

the oscillation of f at a point c is the limit of the lengths of the smallest intervals that contain the images f((c-a, c+a)) as a goes to zero. i.e. let (c-a, c+a) be an interval shrinking down to the point c as a goes to zero. the set of values f takes on this interval lies in some smallest bounded interval.

as a goes to zero these "image intervals" are bounded, but shrink down to zero length if and only if f is continuous at c.

so riemann introduced this measure of how discontinuous f is at c. now the idea is to consider the set of points c where the oscillation is more than e, where e is a given positive number. then using compactness one shows that this set has "content zero", i.e. can be covered by a finite set of intervals, of total length less than e. then one proceeds like this.


i.e. suppose we want upper and lower riemann sums of f on an interval [a,b], that are closer together than e. If f is bounded by -M and M, choose a finite set of intervals that cover all the points of [a,b] where f has oscillation more than e/6M, and of total length less than e/6M.

Then on these intervals f may have oscillation as much as 2M, but the total length of those intervals is less than e/6M, so these parts of the graph of f can be covered by rectangles of total area less than, let's see...

uh...e/3?

then off these intervals, at all points f has oscillation less than e/6M, so there exist rectangles covering those points with upper and lower sums differing by less than, say, e/3M. so this gives upper and lower sums with total difference less than e.
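The "oscillation" used in the sketch above can be written precisely as (my notation, with the radius renamed to avoid clashing with the interval endpoints):

```latex
% Riemann's oscillation of f at a point c.
\[
  \omega_f(c) \;=\; \lim_{\delta \to 0^+}
  \Bigl( \sup_{|x - c| < \delta} f(x) \;-\; \inf_{|x - c| < \delta} f(x) \Bigr).
\]
Then $f$ is continuous at $c$ iff $\omega_f(c) = 0$, and the key compactness
fact is that for each $\varepsilon > 0$ the set
$\{\, c : \omega_f(c) \ge \varepsilon \,\}$ is closed, hence compact in $[a,b]$,
so measure zero forces content zero (a finite cover of small total length) for
this set.
```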



i am going to stop now, because i have done a little example.

what i remember is this:

this proof appears in spivak's calculus on manifolds, and in riemann's works, in his paper on trigonometric series.

i can say the last time i recall doing this proof was 1970, driving across country and bored, so i worked it out in my head in the car.

if this seems impressive as a feat of memory, remember this is what we do all the time.

as charles barkley said an average player has no chance against a pro because they play every day.


or let me give an example closer to home. my son was an all state basketball player in high school and led the nation for a while in college in 3 point shooting at 59 %.


I thought this was great. But he got no chances at all to play pro in the US but was a part time semi pro in Europe. But when he came home at xmas, while he was playing as a semi pro, I watched him play, and he was much better than he ever was in high school or college.

He almost never missed. Warming up, i saw him hit about 15-18 straight mid range jumpers.

And he said he had been at a camp with an NBA player from utah, who was supposed to hit 10/12 3-pointers or everyone had to run laps.

the pro missed the first two then hit 10 in a row. jeff hornacek, that was it.

people who play every day are so much more accurate than we are there is no comparison.

math professors are professional mathematicians. still, some of us are not as sharp as others.
 
  • #27
you may not remember back a ways when i asked matt to show me how to calculate a tangent space to a certain lie group, and he made it look like falling off a log. basically he used fermat's method of just expanding a series and picking off the linear part.

i had forgotten how to do this, but when he showed me how easy it was i felt silly, since it is just what it had to be. i really should have remembered it, but i don't use that stuff that much, and honestly i did not think about it that hard. if we go back to the basics and think about it, we can figure out and remember a lot. it always seems easier to ask someone else, but when we do, it just doesn't always stay with us, unless they make it really simple.
 
  • #28
here is a comment on what mathematicians need to remember.
being a researcher in math is not about what you know, but what you can do.

once i was listening to a seminar talk by a famous topologist and he was describing the work of some other topologist in solving an outstanding problem. He paused at a certain point and said he did not know whether the solver was aware of certain result B or not, at the time he solved the problem, but he was aware of result A.

i did not know the meaning of all the words he was using, so I asked for a definition of the terms in B.

when he told me the definition, I pointed out immediately that it followed from result A and explained why. This is supposed to illustrate that you can prove something even if you do not previously know the topics involved.

another time i was listening to a young person describe a problem he needed help with, and he seemed to think I would be no help because I did not know any of the terms or concepts before. there was another very smart mathematician there who was also listening and understanding everything.

At a certain point, I asked another dumb question, which was answered with a question like: why do you want to know that? I then observed that if this question had a certain answer, then the symmetry of the situation should lead to a solution to his problem.

He asked why? And at this point, the smart mathematician immediately understood what i was saying, and began to explain it in detail to the student. It took him a long time to lay it out in detail for the student, who eventually understood it.

In my opinion, it took a combination of my instinct for problem solving and the other man's knowledge and intelligence to solve the problem. the point of these examples is that you can make at least some contribution to doing mathematics with inadequate mathematical knowledge, but it is hard or impossible to complete the job without the knowledge.

and without the knowledge you are more dependent on others, so the more you learn the more you improve your skills.

ideally you want to keep alive your "fresh eye" for problem solving, and yet also enhance your base of knowledge, although to some extent these may be contradictory.

as the great Raoul Bott put it to us once in algebraic topology class, "try to prove some of these things yourself before your mind gets full of other peoples ideas"
 
  • #29
mathwonk said:
here is a commenton what mathematicians need to remember.
being a researcher in math is not about what you know, but what you can do.

True, but the more you know and remember, the more you can do as well. I'm sure many mathematicians got stuck in a problem because they were lacking some knowledge (or had forgotten some results) that would have helped immensely.
 
  • #30
you are right. i try to admit this in the latter part of my lengthy post. by the way, does this thread remind you of "how much wood would a woodchuck chuck?"
 
  • #31
I personally intend to remember every definition, theorem, and proof that I read. To help myself out in this regard, I type out the proofs of all the theorems, and where there are gaps (gaps in the sense that the omitted detail is obvious to the writer but not immediately to me) I fill in the details myself. In case I forget the proof later on, I can reread what I typed out.

This may sound time-consuming, but I found that I spend just as much time reading the proof and fully understanding it anyway, so it is no real time loss for me at all. Fully understanding and remembering the proofs has also helped me understand the definitions and theorems much better and apply them to solve new problems.

After seeing a sample of my typing, selfAdjoint responded in another thread of mine:

selfAdjoint said:
Way to go, man! And the great advantage of this is that in addition to confirming your progress, this fixes the definitions in your memory, like "Locally Lipschitz" in this case. Use it and you won't (as easily) lose it.
 
  • #32
andytoh said:
I personally intend to remember every definition, theorem, and proof that I read. To help myself out in this regard, I type out the proofs of all the theorems, and where there are gaps (gaps in the sense that the omitted detail is obvious to the writer but not immediately to me) I fill in the details myself. In case I forget the proof later on, I can reread what I typed out.

that is good practice, as long as you don't just memorize proofs - it is often easier to focus on the idea of the proof - the steps fill themselves in.

This may sound time-consuming, but I found that I spend just as much time reading the proof and fully understanding it anyway, so it is no real time loss for me at all. Fully understanding and remembering the proofs has also helped me understand the definitions and theorems much better and apply them to solve new problems.

understanding the proofs is the best way to learn the proofs. If the steps make sense, then you'll remember them more easily than if you just try to recite them parrot fashion - it will also help you to notice when you make a mistake in the proof in an exam.
 
  • #33
matt grime said:
that is good practice, as long as you don't just memorize proofs - it is often easier to focus on the idea of the proof - the steps fill themselves in.



When I retype a proof of a theorem, I break it down into sections, with each section devoted to a specific idea within the proof. This way, I remember the ideas, and the details in each section (much of which is added by me when the original proof has omitted detail) are simply the subproofs within the main proof.

The link below is a sample of how I understand and remember a proof of a theorem.
 
  • #34
andytoh, your idea is very good, especially for people who have ordinary maths abilities. Most people are put off by maths or think it is too hard, usually because they don't see the steps in between. Had they seen them and been willing to understand them, then there should be no excuse for not understanding, because as Russell puts it, 'It's all a bunch of tautologies'.

I should start doing what you do.
 
  • #35
andytoh said:
Of course a mathematician cannot repeat all the proofs he reads in research papers, but can he repeat the proofs of all the theorems, and remember vividly all the topics, that he learned in, say, the first 3 years of undergraduate math? (which I would consider the fundamentals for all math aspirants)

I received a First in Math at a fairly renowned UK university. Suffice to say, I can remember bollocks from the courses... though I was a hell of a lot interested in them while I was there. It seems that level of interest doesn't really carry things into your long term permastore if the material is too complex and you don't refresh.

First year was too slow for me... I was pining for more work... I probably could have managed going twice as fast and covering twice as much material.

2nd year was a rude surprise though, and I found myself floundering in real analysis. That subject stole a lot of time which I would otherwise have spent improving other subjects. Every hour spent trying to prove a continuity theorem was very fatiguing, with a low success rate at the start.

I suppose I'm a good applied problem solver, but bad at rigorous reasoning that involves untangling convoluted skeins of thought... I probably can remember all the problem solving techniques now... but not the proofs...

I tend to remember proofs that have a visual element to them... for example, all the Linear Algebra and Group Theory proofs.
 
  • #36
andytoh said:
I've always wondered about this question. I've taken university math courses and gotten A+'s. But then years later, if I never used topics in that course again, I realize how much I have forgotten.

A math professor who does research in, say, number theory would essentially never use, say, the Gauss-Bonnet Theorem that he had learned many years ago in Differential Geometry. Would the number theorist be able to pick a textbook problem in the Gauss-Bonnet chapter and solve it from the top of his head? Are math professors so mentally powerful that the phrase "if you don't use it, you lose it" does not apply to them? Do they remember every math topic they have learned as much as they did just before walking into their final exam many years ago?

For example, how many math professors reading this post can prove the Inverse Function Theorem of second year calculus from scratch?

Welcome to the human race.
As far as I know, not one Professor I had in university was able to reproduce high school trig identities... some were positively worse than I was in elementary computations... most made stupid mistakes on the board from time to time. One or two even made logically flawed side remarks for the sake of interest that were later shot down by the students.

From a biological perspective, the brain simply downgrades dendritic connections that aren't being used. Repeated activation of the same synapses over time induces LTP, which will serve to keep the traces in your mind for some time.

I suspect that this natural forgetting imposes a natural limit on human intelligence in the long run. Scientists at Princeton have been able to increase the intelligence of mice by up-regulating their LTP through increasing the number of NMDA receptors.

The truly intelligent are those who not merely do not forget, but somehow manage to integrate new knowledge with the old, seamlessly.

I think some psychologists have shown that too much knowledge reduces speed of retrieval and actually dampens creative problem-solving capacity.
 
  • #37
andytoh said:
True, but the more you know and remember, the more you can do as well. I'm sure many mathematicians got stuck in a problem because they were lacking some knowledge (or had forgotten some results) that would have helped immensely.

Oh well. Tough luck. Excessive knowledge hinders intuition and blunts your problem-solving skills. In this respect, some degree of 'forgetting' could be good, since it gives you a chance to rearrange your thinking.

I think 10,000 hours is a good benchmark for the time required to become a professional mathematician of respectable prowess. Then again, this is relative to existing players in the field. Only the best experts ever reach 10,000 hours of practice in their fields, regardless of whether this is music, chess, physics or whatever... You might want to think about whether this investment is really worth your time (you will probably have gotten 5000 hours of advanced mathematics done by the time you reach your PhD) and count all the opportunity costs.
 
  • #38
The last university math course I taught was over 20 years ago, and the majority of my work since that time has been in engineering applications rather than mathematics. I remember basic mathematical theorems and their proofs in the same way I remember old friends. I can't reproduce the exact conversations we had, nor exactly how they looked, but if I run across them on the street the details immediately return. What I find important is not remembering specifics but knowing where to look if I do need to return to the theory. For the basics, once I read the theorem it quickly comes back to mind and I can sketch the proof in my head fairly easily. For more advanced topics, I recognize the theorems but would have difficulty reproducing the proofs. I have a hard time even with my own thesis.
 
  • #39
andytoh said:
I personally intend to remember every definition, theorem, and proof that I read. To help myself in this regard, I type out the proofs of all the theorems, and where there are gaps (gaps in the sense that the omitted detail is obvious to the writer but not immediately to me) I fill in the details myself. In case I forget a proof later on, I can reread what I typed out.

This may sound time-consuming, but I found that I spend just as much time reading a proof and fully understanding it anyway, so it is no real time loss for me at all. Fully understanding and remembering the proofs has also helped me understand the definitions and theorems much better and apply them to solve new problems.

After seeing a sample of my typing, selfAdjoint responded in another thread of mine:

It's easy enough to understand a non-trivial proof from an undergraduate textbook and memorize the intuitions and general ideas behind them (15 minutes tops?!). That gives you about 4 undergraduate theorems per hour. Of course, it's difficult to go beyond 4 new theorems a day, since your mind will probably begin to mix them up if you tried. For most of us, memory lags behind understanding.

The difficult part is in translating the general idea into an explicit and rigorous mathematical statement. I presume this is the part where people need to practice, practice, practice.

Perhaps have a diary recording nothing else but your learning of theorems - when you read them, when you reviewed them, and where the gaps in the understanding were...

Sometimes I get too lazy to read up on an elementary result employed in a proof, especially when it seems intuitively plausible. This is where the weaknesses in the mathematical superstructure of an average student in mathematics probably lie. A little idleness here and there.

I also wonder if the following is an exercise in futility: if we not only tried to understand a proof, but more importantly, perhaps tried to understand the train of thought that led the author to construct that proof. If mathematicians were to write out how they stumbled on the insight that led to the solution of the problem... then perhaps mathematical thinking would be advanced significantly. Which mathematics student has not felt irked at the utilisation of a particularly ad-hoc result that emerged seemingly out of nowhere?
 
Last edited:
  • #40
nightdove said:
I also wonder if the following is an exercise in futility: if we not only tried to understand a proof, but more importantly, perhaps tried to understand the train of thought that led the author to construct that proof. If mathematicians were to write out how they stumbled on the insight that led to the solution of the problem... then perhaps mathematical thinking would be advanced significantly.

I've thought about this as well. But I have yet to see one math book that shows a proof in this manner. Let me make my own example:

Prove: If A={a_1,...,a_n} spans a vector space V, then every linearly independent set in V contains at most n elements.

Thinking process:
Let B={b_1,b_2,...} be a linearly independent set. We want to show that B cannot have more than n elements. But how? Hmmm...well, because A spans V, each of the b_i's is a linear combination of the a_i's. What would happen if we took, say, b_1 and joined it with A? The new set A'={b_1, a_1,...,a_n} (with n+1 elements) would have to be linearly dependent, right? Yes indeed, but so what? Well, that would mean that one of the a_i is a linear combination of the other elements of A'. So we can remove this particular a_i, and the resulting set, which has n elements again, would still span V.
Hey! Why don't we repeat this process until all the a_i's are gone and we end up getting A'={b_1,...,b_n}, which would still span V? But what would we achieve by doing this? AHHHH! If there were another element b_(n+1) in B, then this element would have to be a linear combination of {b_1,...,b_n}, since {b_1,...,b_n} spans V. But that would contradict the assumption that B is a linearly independent set. There we go! Thus B cannot have more than n elements! Ok, let's write out the proof properly now...



Isn't this what coming up with a proof is really all about, gathering the ideas? The above is not a rigorous proof of course, but it captures the IDEAS and the THINKING PROCESS. Personally, I believe we should understand the ideas of a proof first (as in the above) before we study the formal proof itself. To be honest, I would for certain remember how to prove the above theorem after reading the above, whereas if I read a formal proof I may forget it in a few months.
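For completeness, here is one way to write the sketch up formally (the standard replacement argument; the induction bookkeeping is the only part the informal version glosses over):

```latex
\textbf{Claim.} If $A=\{a_1,\dots,a_n\}$ spans a vector space $V$, then any
linearly independent subset $B$ of $V$ has at most $n$ elements.

\textbf{Proof.} Suppose, for contradiction, that $B$ contains distinct
elements $b_1,\dots,b_{n+1}$. We show by induction that for each
$k \le n$ there is a spanning set of $V$ of the form
$\{b_1,\dots,b_k\} \cup A_k$ with $A_k \subseteq A$ and $|A_k| = n-k$.
The case $k=0$ is the hypothesis, with $A_0 = A$.

For the step $k \to k+1$ (with $k < n$, so $A_k \neq \emptyset$): since
$\{b_1,\dots,b_k\} \cup A_k$ spans $V$, the enlarged list
$b_1,\dots,b_k,b_{k+1}$, followed by the elements of $A_k$, is linearly
dependent, so some element of it is a linear combination of the elements
preceding it. That element cannot be one of the $b_i$, since the $b_i$ are
linearly independent; hence it is some $a \in A_k$, and removing $a$ leaves
a set that still spans $V$, completing the step.

At $k=n$ we conclude that $\{b_1,\dots,b_n\}$ spans $V$, so $b_{n+1}$ is a
linear combination of $b_1,\dots,b_n$, contradicting the linear
independence of $B$. $\blacksquare$
```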
 
Last edited:
  • #41
Maybe a better question is: can a mathematician see something that he hasn't used in 10 years, recognize it for what it is, and quickly
  • find reference material related to the topic, and
  • read that material with complete comprehension?
I would think that the concepts and broader ideas are more important than the minute details. I'm sure that although many of the professors mentioned may have forgotten some of the trig identities, they could sit down and derive them if they wanted to. But most would simply open a text, and within a few seconds to a minute or so could have complete recollection.

I think my thought is simpler to put into computer programming terms:
A computer programmer may learn 5 or even 10 different programming languages. But, if he's been working in java for 10 years, he may forget some of the syntax used in C++. If he needed to write a program in C++, he would be able to use the correct logic, but may need to glance at a reference book or two for the correct syntax.
 
  • #42
Here is a well-known description of the abilities of professors and others in academia. Warning: it is PG-rated.

ACADEMIA
The Dean leaps tall buildings in a single bound; is more powerful than a locomotive; is faster than a speeding bullet; walks on water; gives policy to God.

The Professor leaps short buildings in a single bound; is more powerful than a switch engine; is just as fast as a speeding bullet; walks on water if it is calm; talks with God.

The Associate Professor leaps short buildings with a running start and favorable winds; is almost as powerful as a switch engine; is faster than a speeding BB; walks on water if it is indoors; talks with God if special request is approved.

The Assistant Professor barely clears a quonset hut; loses a tug of war with a switch engine; can fire a speeding bullet; swims well; is occasionally addressed by God.

The Teaching Assistant runs into buildings; recognizes locomotives two out of three times; has trouble deciding which end of the gun is dangerous; stays afloat with a life jacket; thinks he/she is God.

The Department Secretary lifts buildings and walks under them in a single bound; kicks locomotives off the tracks; catches speeding bullets with his/her teeth and eats them; freezes water with a single glance; he/she is God.
 
  • #43
andytoh said:
I've thought about this as well. But I have yet to see one math book that shows a proof in this manner. Let me make my own example:

Prove: If A={a_1,...,a_n} spans a vector space V, then every linearly independent set in V contains at most n elements.

Thinking process:
Let B={b_1,b_2,...} be a linearly independent set. We want to show that B cannot have more than n elements. But how? Hmmm...well, because A spans V, each of the b_i's is a linear combination of the a_i's. What would happen if we took, say, b_1 and joined it with A? The new set A'={b_1, a_1,...,a_n} (with n+1 elements) would have to be linearly dependent, right? Yes indeed, but so what? Well, that would mean that one of the a_i is a linear combination of the other elements of A'. So we can remove this particular a_i, and the resulting set, which has n elements again, would still span V.
Hey! Why don't we repeat this process until all the a_i's are gone and we end up getting A'={b_1,...,b_n}, which would still span V? But what would we achieve by doing this? AHHHH! If there were another element b_(n+1) in B, then this element would have to be a linear combination of {b_1,...,b_n}, since {b_1,...,b_n} spans V. But that would contradict the assumption that B is a linearly independent set. There we go! Thus B cannot have more than n elements! Ok, let's write out the proof properly now...



Isn't this what coming up with a proof is really all about, gathering the ideas? The above is not a rigorous proof of course, but it captures the IDEAS and the THINKING PROCESS. Personally, I believe we should understand the ideas of a proof first (as in the above) before we study the formal proof itself. To be honest, I would for certain remember how to prove the above theorem after reading the above, whereas if I read a formal proof I may forget it in a few months.

That linear algebra proof was a rather simple one. Proofs in linear algebra are almost always very intuitive to me. But yes, no matter how easy I found them when I first studied them, I can safely say I have forgotten nearly all of them, now that I'm in the commercial world.

For example, I have forgotten even some of the very simple proofs in probability theory about the Poisson and normal distributions, despite their having very straightforward geometrical insights.

For real analysis, I find that the proofs tend to have a very simple geometric logic, but a complicated descriptive structure, necessitated by the rigour of the course, tends to blind you to the very obvious implicit geometry. I have found that attempting to visualise the R^3 case almost always enables you to capture the "solution insight" into a proof problem in R^n. Sometimes even a discrete analog may give you an idea of how to prove the theorem - for instance, the theorem that every neighbourhood of an accumulation point of a set contains infinitely many points of that set, by applying logic from discrete mathematics (a limited number of pigeonholes, an infinite number of letters).
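For reference, here is a minimal write-up of the accumulation-point fact in its usual form (this is my reading of the theorem alluded to above; the proof is the standard "shrink the ball below the nearest point" argument):

```latex
\textbf{Claim.} If $p$ is an accumulation point of $S \subseteq \mathbb{R}^n$,
then every neighbourhood of $p$ contains infinitely many points of $S$.

\textbf{Proof.} Suppose some neighbourhood $U$ of $p$ contained only finitely
many points $s_1,\dots,s_m$ of $S \setminus \{p\}$. Choose $\varepsilon > 0$
with $B(p,\varepsilon) \subseteq U$, and let
$r = \min\bigl(\varepsilon,\ \min_i \|p - s_i\|\bigr) > 0$
(if $m = 0$, take $r = \varepsilon$). Then the ball $B(p,r)$ contains no
point of $S$ other than possibly $p$ itself, so $p$ is not an accumulation
point of $S$, a contradiction. $\blacksquare$
```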
 
Last edited:
  • #44
andytoh said:
For example, how many math professors reading this post can prove the Inverse Function Theorem of second year calculus from scratch?
Being a good mathematician certainly isn't about being able to recite such and such theorem, or know every exercise from such and such's book.
 
  • #45
Nonetheless, essentially any senior university mathematician can fairly easily prove all the little examples people are giving, like the inverse function theorem, Riemann integrability, etc. That's what teaching them does for you.
 