Your favorite Eureka moment for proofs

In summary, the conversation revolved around Eureka moments and problem-solving experiences in mathematics, physics, and programming. One person shared their realization that the definition of a normal subgroup is not merely convenient but necessary; another described rescuing a program that was bogged down by memory copies. The conversation also touched on the P=NP question, the importance of balancing space and time in programming, and the challenges of optimizing databases for the most efficient solution. Overall, the conversation highlighted the importance of constantly learning and adapting in the ever-changing world of technology.
  • #1
fresh_42
What was your favorite proof or Eureka moment that you remember?

Mine was about normal subgroups (##gNg^{-1}\subseteq N##). You learn the definition, prove a few properties, and move on to the next subject. I remember once trying to teach the concept to someone, and in the middle of my explanation I suddenly realized that this definition isn't just convenient, but necessary to get a well-defined group structure on ##G/N##. Since then I have known that being forced to explain something to others (while constantly asking yourself why what you are saying is true) is a really good way to learn something.

What were your Eureka moments?
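For anyone who wants it spelled out, here is a quick sketch of why normality is forced, assuming the multiplication on ##G/N## is the usual ##(aN)(bN)=abN##: if ##a'=an## and ##b'=bm## with ##n,m\in N## are other representatives, then
$$a'b' = anbm = ab\,(b^{-1}nb)\,m,$$
so ##a'b'N = abN## for every choice of representatives exactly when ##b^{-1}nb \in N## for all ##b\in G## and ##n\in N##, i.e. when ##N## is normal.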
 
  • Like
Likes Charles Link, vanhees71, Abhishek11235 and 4 others
  • #2
In the 1970s, our group was tasked with speeding up an executive news-search program that built reverse word-search indexes from relevant corporate news articles, in the days of dial-up timesharing services, long before the birth of the internet.

The problem was that if 10 executives used the program concurrently, the system would get severely bogged down and could crash. Our site management felt that would be a disaster, especially since these execs controlled the funding of the center, and asked us to find a way to improve its speed.

An hour before the meeting, I noticed the program did a lot of memory copies, basically rebuilding the search tree before each query. It occurred to me that I could reuse the existing tree and tweak the first character of each indexed entry by OR-ing in a bit that effectively took the entry out of the search list. At each new search, I zeroed the bit, eliminating the need to fully rebuild the tree.
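In modern terms, the trick looks roughly like this (a minimal Python sketch of the idea only; the original program was not Python, and all names here are my own):

```python
# "Tombstone bit" trick: instead of rebuilding the search index, mark an
# entry inactive by OR-ing a spare high bit into its first byte, and clear
# the bit to bring it back. Assumes 7-bit text, so bit 0x80 is free.
TOMBSTONE = 0x80

def deactivate(entry: bytearray) -> None:
    """Take the entry out of the search list without removing it."""
    entry[0] |= TOMBSTONE

def reactivate(entry: bytearray) -> None:
    """Put the entry back before the next search; no tree rebuild needed."""
    entry[0] &= 0xFF ^ TOMBSTONE

def is_active(entry: bytearray) -> bool:
    return not entry[0] & TOMBSTONE

index = [bytearray(b"merger"), bytearray(b"earnings")]
deactivate(index[0])
print([bytes(e) for e in index if is_active(e)])  # [b'earnings']
```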

The end result was a program that could handle hundreds of executives, saving the day. It was a Eureka moment for me, realizing that the memory copies were killing the performance, and a magical moment for the program's author and his manager, who had struggled to find a speedup and couldn't.
 
  • Like
  • Informative
Likes Abhishek11235, Keith_McClary, berkeman and 1 other person
  • #3
That reminds me of the ##P=NP## question. If we simply read it as asking whether the statements ##S## that can be evaluated in running time ##R(S)=O(n^c)## form a different class from those that need ##R(S)=O(c^n)##, then it is just a funny counting problem in computer science. However, it gets personal if you read it as: are there truly ##R(S)=O(c^n)## statements, or are we just too stupid to solve them in ##R(S)=O(n^c)##?
 
  • #4
One thing I learned after a while was the space-time curve of programming. To make a program fast, it needs to use more space for intermediate results. To make a program small, it needs to save only a minimal amount of data and recompute the rest.

As an example, a simple sin(x) routine might either have a lookup table of sin() values (more values, more space) and use interpolation to find the ones not listed, or it would need to compute sin() from a series to some level of precision, or use some combination of both, with any number of clever optimization variations.
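A minimal Python illustration of the tradeoff (my own toy sketch, not anyone's production routine):

```python
import math

# Space-for-time: precompute a sine table once, interpolate between entries.
TABLE_SIZE = 1024                     # more entries -> more space, less error
STEP = 2 * math.pi / TABLE_SIZE
SIN_TABLE = [math.sin(i * STEP) for i in range(TABLE_SIZE + 1)]

def sin_lookup(x: float) -> float:
    """Fast and big: table plus linear interpolation."""
    x %= 2 * math.pi
    q, frac = divmod(x / STEP, 1.0)
    i = int(q)
    return SIN_TABLE[i] + frac * (SIN_TABLE[i + 1] - SIN_TABLE[i])

def sin_series(x: float, terms: int = 10) -> float:
    """Small and slower: recompute from the Taylor series on every call."""
    x = (x + math.pi) % (2 * math.pi) - math.pi   # reduce to [-pi, pi)
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(sin_lookup(1.0), sin_series(1.0), math.sin(1.0))
```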
 
  • #5
jedishrfu said:
One thing I learned after a while was the space-time curve of programming. To make a program fast, it needs to use more space for intermediate results. To make a program small, it needs to save only a minimal amount of data and recompute the rest.

As an example, a simple sin(x) routine might either have a lookup table of sin() values (more values, more space) and use interpolation to find the ones not listed, or it would need to compute sin() from a series to some level of precision, or use some combination of both, with any number of clever optimization variations.
Another important thing to learn is that I/O is so slow that it can ruin all other considerations and tricks for speeding up a program. Of course, if you have limited space, then there is no way to avoid I/O. But an elegant program that thrashes the disk isn't of much use. And there are all these database decisions to make: keep the query open or close it? Create an index table or not? How many secondary keys do we need? Etc.

The most difficult decision I ever encountered was the following: do I upload crucial parts of the database to the Java frontend, which resulted in a very long startup process due to the low rates on the various lines, but had the advantage that queries during the day were fast, or do I hand each query over to the backend as needed, which cost response time for building up complicated database accesses?
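In pseudo-Python, the two designs I was weighing look like this (a sketch only; `fetch_all` and `fetch_one` are stand-ins for the real database calls):

```python
import time

def fetch_all():            # expensive once: whole table over a slow line
    time.sleep(2.0)
    return {key: key * key for key in range(1000)}

def fetch_one(key):         # cheaper, but paid on every single query
    time.sleep(0.05)
    return key * key

class EagerFrontend:
    """Long startup, fast queries all day."""
    def __init__(self):
        self.cache = fetch_all()
    def query(self, key):
        return self.cache[key]

class LazyFrontend:
    """Instant startup, but every query pays a backend round trip."""
    def query(self, key):
        return fetch_one(key)
```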
 
  • Haha
Likes jedishrfu
  • #6
I had something similar happen in a database application. We were storing periodic data records, each holding hundreds of measurements. The data records were to be maintained for a number of days. Retrieval was for specific measurements over a given time period.

I decided to optimize for retrieval, using index tables and breaking up each data record into individual measurement records to make it easy to aggregate for a given query. Things worked well until the first delete cycle, where the database engine got caught adding a new data record while deleting the hundreds of expired measurement records each time; it could barely keep up.

The issue was the various index tables that had to be updated along with the fact table as data was added and removed. Eventually I discovered I had to give up the notion of optimizing the queries and instead store each data record as a blob; query retrieval returned a set of data records, and the client program sifted through them to get the specific measurements to display. In this case, I/O was increased in order to reduce database churn.
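Schematically, the blob approach looked something like this (an illustrative Python/SQLite sketch; the actual schema and engine were different):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (ts INTEGER PRIMARY KEY, blob TEXT)")

def add_record(ts: int, measurements: dict) -> None:
    # One insert per cycle: no per-measurement index maintenance.
    con.execute("INSERT INTO records VALUES (?, ?)",
                (ts, json.dumps(measurements)))

def query(t0: int, t1: int, name: str) -> list:
    # Pull whole blobs for the window; the client sifts out one measurement.
    rows = con.execute("SELECT blob FROM records WHERE ts BETWEEN ? AND ?",
                       (t0, t1))
    return [json.loads(blob)[name] for (blob,) in rows]

add_record(1, {"temp": 21.5, "pressure": 1.02})
add_record(2, {"temp": 22.0, "pressure": 1.01})
print(query(1, 2, "temp"))  # [21.5, 22.0]
```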

Once again an 11th hour solution, with my boss watching over my shoulder.
 
  • Like
Likes fresh_42
  • #7
The trick I tried to describe here: taking advantage of the Fast Fourier Transform in the 1D inverse acoustic scattering problem.
 
  • Like
Likes fresh_42
  • #8
Mine is kind of meta-mathematical.

By definition, two algebras ##S## and ##T##, e.g. semigroups, are isomorphic if there exist homomorphisms ##f:S\to T## and ##g:T\to S## such that ##fg = \mathrm{id}_T## and ##gf=\mathrm{id}_S##. Although I knew this was "equivalent" to there being a bijective homomorphism between the semigroups, it was the first time I actually started questioning assumptions.

For instance, is ##gf=\mathrm{id}_S## alone not sufficient? No, and I was able to give a "counter-example". It made me understand what sufficient and necessary conditions were.
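A standard example of this kind (not necessarily the one I found back then): take the additive (semi)groups ##S=\mathbb{R}## and ##T=\mathbb{R}^2## with
$$f:\mathbb{R}\to\mathbb{R}^2,\ f(x)=(x,0), \qquad g:\mathbb{R}^2\to\mathbb{R},\ g(x,y)=x.$$
Both are homomorphisms and ##gf=\mathrm{id}_{\mathbb{R}}##, yet ##fg(x,y)=(x,0)\neq\mathrm{id}_{\mathbb{R}^2}##, so ##f## is not an isomorphism.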

Also, why is the definition given as such? Where would a bijective homomorphism, generally, fail to be an isomorphism? Between Banach spaces, for example!

Of course, all of this is clear to me now, but back then it was a revelation. Instead of trying to memorise proofs and arguments to the letter, I began putting more value on definitions, producing examples, and finding equivalent conditions.
 
Last edited:
  • Like
Likes Abhishek11235
  • #9
Still waiting for mine ...
 
  • Like
  • Haha
  • Informative
Likes DaveE, Abhishek11235, jasonRF and 4 others
  • #10
When I finally realized what Copernicus was talking about here (I was working from an English translation in Stephen Hawking's book "On the Shoulders of Giants").

[Attached image: "Theorema Secundum"]
 
  • Like
Likes Abhishek11235, hutchphd and Delta2
  • #11
Dr. Courtney said:
Still waiting for mine ...
I gave you a 💡.
 
  • Like
Likes jedishrfu
  • #12
My eureka moment was on the day I was conferred the status of "a true experimental physicist". When I was a fourth-year undergraduate at UCSD, we were required to complete a lab course which convened in a lab in the basement of Urey Hall. The lab was a large, windowless, concrete-walled room outfitted with various experimental apparatus, whose entrance adjoined an open-air atrium. The lab instructor, to my great good fortune, was a Nobel-nominated high-energy experimental physicist (why he didn't actually receive the Nobel prize was a matter of significant controversy, but that's a different story).

One day there were ten of us students performing various experiments, and I was measuring the Hall effect of bismuth using a lock-in amplifier. My professor (of beloved memory) approached me and demanded that I explain to him the operating principle of a lock-in amplifier. I hauled out the user's manual and began to furtively point at stuff on the schematic diagram. Frankly, I was rather clueless as to what I was talking about. The lab door had been propped open, and suddenly a small bird flew in. My professor cried, "The bird! The bird! Help the bird!" The other students began running around, waving their arms and chasing the bird as it fluttered about. In a moment of insight I shouted, "Cut the lights!" The lab went dark except for the light coming through the entrance. The bird flew out. My professor approached me and tearfully exclaimed in a thick Italian accent, "Fred, you are a true 'experimentalist'".
 
  • Like
  • Love
Likes vanhees71, Abhishek11235, Wrichik Basu and 5 others
  • #13
Most of my eureka moments as an experimental scientist occur either when I am designing a new experiment or analyzing the data.

As a theoretical scientist, most of my eureka moments occur either when I realize how to put the pieces together to solve a computational problem or when I realize how I already have (or can find) the data needed to address an important theoretical question.

But I'm just not wired for proofs. I'm wired to find data to "disprove" or "support" hypotheses and ideas on the theory side. I'm wired to ask, "what does that idea predict, and how can I test that?"

And I carry that way of thinking over even into my mathematical work.

I couldn't even begin to think of how to prove (in the rigid math sense) that time-independent Hamiltonians with chaotic classical dynamics always have quantum solutions with a specific kind of energy level statistics. But I was able to disprove it by counterexample, with computational solutions.

I couldn't even begin to think of how to prove (in the rigid math sense) that a Fourier Transform computed using explicit numerical integration can yield more accurate results than the popular FFT. But I guided a student into showing it does through a number of numerical experiments using data generated with known frequencies, amplitudes, and phases.

I have no idea how to prove whether or why performing least squares fits on one functional form of power law provides more accurate parameter estimates than another functional form. But I've guided students through numerical experiments showing that in some cases, one functional form of power law is preferred.

Real mathematicians would be embarrassed by a student like me who attempted to "prove" the Pythagorean theorem by measuring a million right triangles and failing to find a counter example. But that's my hammer, so every problem looks like a nail.
 
  • Like
Likes Abhishek11235
  • #14
fresh_42 said:
What was your favorite proof or Eureka moment that you remember?

I went to university in 1980. There was no Internet and realistically no way to know about anything outside the school curriculum. It's so different now, when even primary school kids seem to know that gravity is curved spacetime. I'm not sure I had ever even heard the term Quantum Mechanics when I left school.

University maths was, therefore, a real shock to the system. The first few weeks were a struggle. I remember thinking the binomial theorem for fractional powers was the wildest thing I had ever seen. I couldn't believe how hard and abstract it was. And I could not understand proof by induction.

That's what I'd been looking at when I went to bed: how did this proof by induction prove anything? My roommate came in late and woke me up, and while he was getting ready for bed I started to think about induction, and then suddenly it was all clear. I got up, looked at my notes, and now it all seemed rather obvious.

After that, although there was some difficult material in the next four years, I never again felt like I was out of my depth.
 
  • Like
Likes vanhees71, Abhishek11235, etotheipi and 1 other person
  • #16
PeroK said:
After that, although there was some difficult material in the next four years, I never again felt like I was out of my depth.
This sounds like a meta-Eureka moment. You suddenly knew you could do math.
 
  • Like
Likes vanhees71 and Delta2
  • #17
Fred Wright said:
The lab instructor, to my great good fortune, was a Nobel-nominated high-energy experimental physicist...
Nicola Cabibbo?
 
  • Like
Likes vanhees71
  • #18
I don’t have any regarding proofs, per se, but I do recall two major eureka moments of math understanding from early high school (1968 or so).

The first was seeing a definition of logarithms in a dictionary, seeing how useful they might be in that pre-calculator epoch, but then wondering how on Earth they were calculated. Obviously, you divide by the largest whole power of the base less than the target to get the integer part, but what then? I came up with the idea of computing successive square roots of the base, then finding the largest one less than what's left after the integer-power division, then the next one after that, and so on. Adding up the powers of 1/2 used gives the fractional part. I computed several examples this way to 4 significant digits. Much later I found this was one of the early methods Napier used to compute log tables.
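In code, my reconstruction of the method runs like this (a Python sketch; the names are mine, and it assumes ##x \ge 1## and ##b > 1##):

```python
def log_base(x: float, b: float, bits: int = 24) -> float:
    """Approximate log_b(x) for x >= 1, b > 1, using only sqrt, mul, div."""
    # Integer part: divide out the largest whole power of the base.
    k = 0
    while x >= b:
        x /= b
        k += 1
    # Fractional part: peel off b**(1/2), b**(1/4), ... whenever they fit,
    # accumulating the corresponding powers of 1/2.
    frac, weight, root = 0.0, 0.5, b ** 0.5
    for _ in range(bits):
        if x >= root:
            x /= root
            frac += weight
        root = root ** 0.5   # next successive square root of the base
        weight /= 2
    return k + frac

print(log_base(200.0, 10.0))   # ~2.301030 (log10 of 200)
print(log_base(8.0, 2.0))      # ~3.0
```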

Another was first seeing the formula ##e^{i\theta}=\cos\theta+i\sin\theta## and wondering how that could be true. At that point I knew basic Taylor series, so I put ##i\theta## into the exponential Taylor series and, voilà, the result fell out (given also the sin and cos series).
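Spelled out, grouping even and odd powers and using ##i^2=-1##:
$$e^{i\theta}=\sum_{n=0}^{\infty}\frac{(i\theta)^n}{n!}
=\sum_{k=0}^{\infty}\frac{(-1)^k\theta^{2k}}{(2k)!}
+i\sum_{k=0}^{\infty}\frac{(-1)^k\theta^{2k+1}}{(2k+1)!}
=\cos\theta+i\sin\theta.$$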
 
  • Like
Likes vanhees71
  • #19
PAllen said:
Another was first seeing the formula ##e^{i\theta}=\cos\theta+i\sin\theta## and wondering how that could be true. At that point I knew basic Taylor series, so I put ##i\theta## into the exponential Taylor series and, voilà, the result fell out (given also the sin and cos series).

This was something of a eureka moment for me also.

And being an experimentalist, I don't have trouble accepting the Taylor series result as "proof" since one can always compute a few more terms ...

But I wonder how rigorous the more formal mathematicians find such "proofs." Maybe I'm missing something, but the Taylor series approach seems more inductive than deductive. It's like showing some formula is valid for pi by computing some number of digits ... and offering to compute some more if challenged ...
 
  • #20
Dr. Courtney said:
But I wonder how rigorous the more formal mathematicians find such "proofs." Maybe I'm missing something, but the Taylor series approach seems more inductive than deductive. It's like showing some formula is valid for pi by computing some number of digits ... and offering to compute some more if challenged ...

An infinite series is defined as a limit. You don't evaluate it by computing terms!
 
  • Like
Likes vanhees71 and ChinleShale
  • #21
PeroK said:
An infinite series is defined as a limit. You don't evaluate it by computing terms!

Perhaps one shouldn't, but I do ... which is likely why I'm an experimentalist and when I dabble in theory it's on the computational side ...
 
  • #22
Dr. Courtney said:
Perhaps one shouldn't, but I do ... which is likely why I'm an experimentalist and when I dabble in theory it's on the computational side ...

I think the point was that "more rigorous mathematicians" prove that the series converges. This gives a guarantee that taking arbitrarily more terms will always give a better approximation and that, if a certain accuracy is needed, it can be achieved by taking sufficiently many terms. On the computational side, an interesting question is how fast a series converges: faster convergence means less computation for the same accuracy.
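A toy illustration of that last point (two series for ##\pi## with very different convergence rates; my own example):

```python
import math

def leibniz_pi(n_terms: int) -> float:
    # pi = 4 * sum (-1)^k / (2k+1): the error shrinks only like 1/n.
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def machin_pi(n_terms: int) -> float:
    # Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239).
    # Each arctan series converges geometrically, so few terms suffice.
    def atan_series(x: float, n: int) -> float:
        return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n))
    return 16 * atan_series(1 / 5, n_terms) - 4 * atan_series(1 / 239, n_terms)

for n in (5, 10, 20):
    print(n, abs(leibniz_pi(n) - math.pi), abs(machin_pi(n) - math.pi))
```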
 
Last edited:
  • #23
I got hooked on Euclid in middle school. The most memorable in retrospect were probably the very first construction in the Elements (equilateral triangle) and the proof that there are only 5 Platonic solids. It was just so simple, but such a radically different way of thinking from what I was used to. Later on, someone showed me the square-in-a-square proof of the Pythagorean theorem, which shocked me even further: the notion that you can get the correct answer to a problem in two completely different ways threw me for a loop.
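For reference, the square-in-a-square argument fits in one line: a square of side ##a+b## contains a tilted square of side ##c## plus four copies of the right triangle, so
$$(a+b)^2 = c^2 + 4\cdot\tfrac{1}{2}ab, \qquad\text{hence}\qquad a^2+b^2=c^2.$$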

More recently, anything connected to complex analysis seems like some sort of sorcery to me.
 
  • Like
Likes vanhees71, Abhishek11235 and ChinleShale
  • #24
Dr. Courtney said:
Real mathematicians would be embarrassed by a student like me who attempted to "prove" the Pythagorean theorem by measuring a million right triangles and failing to find a counter example.
What a luxury! A million triangles! Back in my day we all (~30 kids in the class) had to cut out a triangle, rip off the corners, and lay them together to see that they make a straight line. 30! I envy you young ones for this million!
 
  • Like
Likes Abhishek11235 and Dr. Courtney
  • #25
TeethWhitener said:
I got hooked on Euclid in middle school. The most memorable in retrospect were probably the very first construction in the Elements (equilateral triangle) and the proof that there are only 5 Platonic solids. It was just so simple, but such a radically different way of thinking from what I was used to. Later on, someone showed me the square-in-a-square proof of the Pythagorean theorem, which shocked me even further: the notion that you can get the correct answer to a problem in two completely different ways threw me for a loop.

More recently, anything connected to complex analysis seems like some sort of sorcery to me.

I also got the math bug from Euclidean geometry in high school. For me the equivalence of the uniqueness of parallels with the Pythagorean theorem and the sum of the angles of a triangle equalling 180 degrees revealed an inner structure to geometry.
 
  • Like
Likes fresh_42
  • #26
fresh_42 said:
30!
yikes, too many :nb)
 
  • #27
Most of my eureka moments with mathematics involved understanding and intuition, not rigor. I am an engineer and most of the math classes I took in college were designed specifically for engineering majors. The professors of course proved almost everything in lecture, but we usually were not asked to perform many proofs ourselves. This suited me just fine, since I was much more interested in understanding and applying the math than in understanding the proofs.

In the Spring of my Junior year there were two moments I recall. That semester I was taking upper-division EE elective courses on electrodynamics and quantum & solid state physics, as well as an "advanced engineering math" course that mostly covered boundary value problems and special functions. The first lightbulb was from the first week of math class where we learned the spectral theorem for matrices (for some reason our required linear algebra course didn't cover it), with immediate applications to systems of ODEs. Soon afterwards we learned about Hermitian operators in quantum mechanics and Sturm-Liouville theory in math class, and later in electrodynamics we were using expansions of orthogonal solutions to the Helmholtz equation to represent fields in waveguides. The analogy between the operators and matrices really made the math come alive for me. Of course it was only rigorous in the finite-dimensional case, but I felt like I had a glimpse of the underlying structure of a lot of the math I was using.

The second moment related to the convergence of series. We were using power series to find a solution to an ODE arising from separation of variables for a boundary value problem. We found that the series diverged unless it had only a finite number of terms, thereby (if I recall correctly) requiring an integer separation constant. I believe this was in quantum mechanics class (although I am not certain), but the fact that a convergence proof was required in order to understand the physics just blew my mind. I finally understood why engineers needed to learn such things.
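For the curious, the mechanism in the quantum harmonic oscillator (which may or may not be the exact problem we did) goes like this: writing ##\psi = u(x)e^{-x^2/2}## leads to a power series ##u=\sum a_k x^k## with the recursion
$$a_{k+2} = \frac{2k+1-\varepsilon}{(k+1)(k+2)}\,a_k .$$
For large ##k## this behaves like the series of ##e^{x^2}##, which would swamp the ##e^{-x^2/2}## factor, so the series must terminate: ##\varepsilon = 2n+1##, i.e. ##E=(n+\tfrac{1}{2})\hbar\omega##.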

jason
 
  • Like
Likes ChinleShale
  • #28
I remember having one physics Eureka moment as a young kid of about 8, watching a Disney cartoon where they were playing pool. I saw some symmetry in the play that made me feel I could master pool shots with that idea, but I can no longer remember what it was that I saw.

I think this is the video:

 
  • #29
I also recall having some false-positive Eureka moments where everything was so clear, and then, reveling in the moment, I forgot what it was I saw. Or worse: seeing a flaw in my Eureka and ending up in Topeka instead.
 
  • Like
  • Haha
Likes Frigus, fresh_42 and jasonRF
  • #30
It was very eye-opening to me to learn that Archimedes had methods for finding the volumes of some familiar solids, equivalent or superior to the ones we use in calculus, namely clearer and simpler. E.g., he already knew that two solids have the same volume if the areas of all slices taken parallel to a given plane are the same for both solids (the "Cavalieri principle"). Using this, he showed the volume of a spherical ball equals the difference between the volumes of a circumscribing cylinder and a circular pyramid inscribed in that cylinder. Namely, the Pythagorean theorem shows that the slice areas of those figures differ by exactly the slice area of the ball, hence the volumes do too.

Reading Archimedes, I learned that he claimed the same method gave the volume of the "bicylinder", namely the intersection of two cylinders of the same radius whose axes cross at right angles. His solution, however, had been lost when his treatise was shortened centuries ago and some pages were discarded.

Since I knew the horizontal slices of the bicylinder were squares, it dawned on me (Eureka!) that the method must be to show that those slice areas were the difference of the slice areas of two figures analogous to the cylinder and cone, but having square rather than circular horizontal slices: hence a cube and a square-based cone (a pyramid). Indeed this works.
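Concretely, slicing at height ##h## above the center, with cylinder radius ##r##: the bicylinder slice is a square of area ##4(r^2-h^2)##, the cube slice is ##4r^2##, and the slice of the inscribed double square pyramid is ##4h^2##, so
$$V_{\text{bicylinder}} = 8r^3 - 2\cdot\tfrac{1}{3}(4r^2)(r) = \tfrac{16r^3}{3},$$
with nothing beyond the Pythagorean theorem and the volume of a pyramid.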

It amazed me that this simple computation of the volume of the bicylinder had existed for centuries, and yet we continue to challenge our students to solve it using calculus and consider it a very difficult problem, even though by Archimedes' approach one needs only the Pythagorean theorem.

Of course I learned later that this had also been noted over 100 years ago by a modern mathematician (Zeuthen?), so my Eureka moment was not at all the first, after Archimedes, for this problem.

The same beautiful volume proof by Archimedes led me to another volume computation, namely the volume of a 4-ball, which is much harder by calculus than the 3-ball, since the integral involves a harder antiderivative. But with Archimedes' idea and Pappus' principle, we can bootstrap up from the same calculation for the 3-ball. Namely, just as a 3-ball is swept out by revolving a half disc around a line in 3-space, so a 4-ball is swept out by revolving a half 3-ball around a plane in 4-space. And by Archimedes, that equals the difference of revolving a cylinder and a cone. Since he also knew the centers of gravity of those figures, one gets the volume of a 4-ball by a calculation Archimedes could have done.
 
Last edited:
  • Like
Likes etotheipi, ChinleShale, fresh_42 and 1 other person
  • #31
Maybe not exactly on topic, but one thing that was an eye-opener for me was realizing that viewing finite sums as integrals with respect to the counting measure can be helpful, for example in the representation theory of finite groups. Even the definition of the group algebra as the space of all complex-valued functions on the group, with convolution as the product, is conceptually much clearer to me.
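For a finite group ##G##, for instance, the convolution product on the group algebra reads
$$(f*g)(x)=\sum_{y\in G} f(y)\,g(y^{-1}x)=\int_G f(y)\,g(y^{-1}x)\,d\mu(y),$$
where ##\mu## is the counting measure: the finite sum literally is an integral.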
 
  • Like
Likes nuuskur
  • #32
It was a very esoteric and specific result, but my biggest Eureka moment as a physicist so far was discovering the proof I published in https://arxiv.org/abs/1610.06568, the short proof given in Section II.B. I didn't believe it when I first wrote it down, because it was such a simple and seemingly straightforward result, whereas all the other results I've discovered in my research were slow and difficult slogs that I came to understand over an extended period of time. But this was literally a case of looking at some equations on a chalkboard and realizing how they all worked out to give a simple and powerful result; we immediately checked that my general result agreed with several previous special cases. It was a very satisfying moment, compared to most of my research, which often involves long and difficult calculations!
 
  • Like
Likes vanhees71, ChinleShale, Keith_McClary and 2 others
  • #33
My biggest Eureka moment was in June 2010, when I was doing a test case to try to show consistency or inconsistency between the magnetic pole model and the surface current model of magnetostatics. I was using a uniformly magnetized cylinder of arbitrary radius ##a## and semi-infinite length. The pole model gives the very simple result ##B_z=0## for the z-component of the magnetic field in the plane of the endface, for ##r>a##, everywhere in the plane (because the only pole is a uniform magnetic surface charge ##\sigma_m=M## that sits on the endface). I thought it very unlikely that the Biot-Savart integral of the magnetic surface currents on the outer surface of the semi-infinite cylinder could possibly give this ##B_z=0## result for ##r>a##, but it did, and it also gave the correct ##B_z=2\pi M## (cgs units) for ##r<a## in the plane of the endface. I was very pleasantly surprised: the two models were completely consistent. With a little extra logic, I was able to prove that the pole-model formula ##B=H+4\pi M## follows from this Biot-Savart/surface-current result.
 
  • Like
Likes etotheipi and jedishrfu
  • #34
Sometimes a Eureka moment comes following someone's comment. We were doing a tough classical mechanics problem from Marion's book. We were to prove that a particle falling from outer space would take 9/11 of the total time to fall half the distance (Marion/Thornton, Chapter 5, Problem 5.5, pg. 205).

Newton's equation had an ##r## in it and would give us the acceleration, but we couldn't relate the ##r## to the time. We mulled it over for quite a while until another prof came in and suggested Kepler's law of equal areas in equal times.

It was then that we realized that we could take an orbit and collapse one axis, and then we had the missing connection of ##r## to ##t##.
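For anyone curious, one way to finish the computation (a sketch using the standard degenerate-ellipse parametrization, which may differ in detail from what we did): treat the radial fall from rest at ##r_0## as the limit of an ellipse with aphelion ##r_0## and semi-major axis ##a=r_0/2##. In terms of the eccentric anomaly ##\eta##, ##r=\tfrac{r_0}{2}(1+\cos\eta)## and ##t\propto \eta+\sin\eta##. Then ##r=r_0/2## corresponds to ##\eta=\pi/2## and ##r=0## to ##\eta=\pi##, so
$$\frac{t_{\text{half}}}{t_{\text{total}}}=\frac{\pi/2+1}{\pi}\approx 0.8183\approx\frac{9}{11}.$$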
 
  • Like
Likes etotheipi, vanhees71 and Charles Link
  • #35
For me, I think it was when I understood the δ-ε proofs in calculus. Somehow, I wasn't impressed by the limits I'd learned shortly before. This was the first time I understood that this could be a really useful and different approach. "Arbitrarily close" wasn't a concept I'd known before.
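For reference, the definition that finally clicked:
$$\lim_{x\to a}f(x)=L \iff \forall\,\varepsilon>0\ \exists\,\delta>0:\ 0<|x-a|<\delta \implies |f(x)-L|<\varepsilon.$$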
 
