Is Mathematics Discovered or Invented?

  • Thread starter: ComputerGeek
  • Tags: Mathematics

Summary
The discussion centers on the philosophical debate of whether mathematics is invented or discovered. Participants express diverse viewpoints, with some arguing that mathematical concepts are invented through human-defined axioms, while others contend that they are discovered as inherent truths that exist independently of human thought. The notion that mathematical ideas feel like discoveries is highlighted, as they are seen as well-defined and consistent, akin to physical sensations. The conversation also touches on the relationship between mathematics and the physical world, suggesting that while mathematics is a product of human cognition, it effectively describes natural phenomena. The influence of philosophers like Ayn Rand is critiqued, with some asserting that her views oversimplify the complexities of higher mathematics. The dialogue emphasizes that mathematical truths, such as theorems, are conditional statements based on axioms, leading to the conclusion that the nature of mathematical reality remains a nuanced and unresolved philosophical question.
  • #91
Sir_Deenicus said:
With the keyword being any one, I believe this follows trivially from the incompleteness theorem.
It doesn't follow, not trivially anyway, though a number of people have tried to develop philosophical arguments against Strong AI via the Incompleteness Theorems. They generally haven't convinced the logicians or the AI community, though.
 
  • #92
matt grime said:
but not for any intrinsically mathematical reasons.
No. Similarly, my reasons for being fascinated by the Incompleteness Theorems are not intrinsically mathematical.

so the question was: how much do the opinions of mathematicians influence the opinions of mathematicians?
How much do the contingent value judgements and inherited societal beliefs of mathematicians affect the direction mathematical research takes? As another example, a number of 19th century mathematicians were hostile to set theory because of philosophical prejudices against the use of the infinite in mathematics. Fortunately, the younger generation of the time was more enthusiastic, but if they hadn't been, what sort of impact would it have had on present day mathematics?
 
  • #93
VazScep said:
How much do the contingent value judgements and inherited societal beliefs of mathematicians affect the direction mathematical research takes? As another example, a number of 19th century mathematicians were hostile to set theory because of philosophical prejudices against the use of the infinite in mathematics. Fortunately, the younger generation of the time was more enthusiastic, but if they hadn't been, what sort of impact would it have had on present day mathematics?

Now that is a good question. And no, I've no idea, and no real interest in speculating.
 
  • #94
I edited this into my earlier reply and you may have missed it: an early example of possible religious involvement in mathematical development comes from Eratosthenes. He said that the problem of doubling the cube arose because an oracle had declared that `to get rid of a plague they must construct an altar double the size of the existing one'.
 
  • #95
VazScep said:
It doesn't follow, not trivially anyway, though a number of people have tried to develop philosophical arguments against Strong AI via the Incompleteness Theorems. They generally haven't convinced the logicians or the AI community, though.


And even if it were to follow in some way, then all it would say is that the set of results that would be computer generated may not be the same as those that are human generated. But then any two sets of human mathematicians starting from the same point will end up with different theorems from each other; it's not as if they just start from the ZF axioms, is it? It would take a value call to say whether that was unacceptable.

I can see no reason from the incompleteness arguments for you to assume that humans have proved things, or will prove things, that computers can't. It would appear to require that the computer has some fixed set of axioms that it can never change. Things are only true or false relative to some hypotheses, and there is no reason the computer couldn't change hypotheses 'at will'.
 
  • #96
VazScep said:
Because the way we react to certain results, and the decision as to which mathematical problems we wish to pursue, is going to affect what mathematics gets done. If you program a computer to churn out the theorems of Peano Arithmetic one by one, it is certainly not behaving anything like a mathematician, and at any point, if you ask it for some interesting theorems, even if it determines 'interesting' by proof length, it will still be unlikely to give you anything you would consider worthwhile. Human tastes surely feature somewhere in this.
How is this relevant?
There is a danger in saying that all the theorems of number theory are out there among the consequences of the Peano Axioms waiting to be discovered, because we could also say that every possible play is out there waiting to be discovered among the sequences of English sentences. We are only going to pick out certain consequences according to what appeals to us.
This doesn't follow whatsoever. Have you ever written a story? Have you ever proven a theorem? Let p be a theorem, and A be some set of axioms. If I prove p from A, then prior to it, I might not have known that p followed from A, and discovered that it does. Now if l is a line from a play, what can we say that is at all analogous? It's not as though Shakespeare discovered that Hamlet happened to be a Dane. It's not that he discovered that l is an English sentence.

Playwrights make up plays. Anything a playwright makes up, so long as it is "well-formed", can count as a play. The same is simply not true for theorems. Not just any well-formed formula a mathematician decides to dream up counts as a theorem: it has to follow from some axioms and rules of inference, and it is a discovery to find that a given wff actually does follow from those axioms and rules of inference. A mathematician discovers that a wff turns out to be a theorem by discovering that it follows from certain things. Playwrights don't discover that the things they write turn out to be plays; they make up plays. Mathematicians consider a proposition and wonder if it is a theorem. Playwrights don't write a bunch of lines, then wonder if it's a play, and then try to discover whether or not it is a play in the way mathematicians try to discover whether a proposition really does follow as a theorem.

So where is this inapt analogy going?
 
  • #97
Sir_Deenicus said:
He did not go out and "discover" consequences from axioms but instead experimented with mathematical concepts to get what it is he wanted.
At one point, Fermat did not know that a certain formula was true, and later found through proof that it was true. Perhaps people don't go out and discover consequences, but they consider propositions that interest them, and discover that said propositions are consequences. Mathematicians don't discover sentences; they discover that sentences are theorems, and in that sense (what other sense do you think we meant?) theorems are discovered.
Discovery figures little into it. Here again we see a clash of concepts that do not transfer well. Given a powerful enough computer, it is not too far-fetched that all that follows from a set of axioms could be generated in a very small time, or even all at once.
I can't see how this is relevant. Personally, I would argue that if we found that we could program a computer to generate all the theorems, and then printed them out and discovered a sentence p on that list, then we've discovered that p is a theorem. However, even if finding out that p is a theorem in this way doesn't count as discovery, the point remains that mathematicians do nothing of this sort. If I think that I can prove p to be a theorem, and work at it and find out that p does in fact follow as a theorem, then I've discovered that it is a theorem. The suggestion that perhaps a person reading a printout of all the theorems of Peano arithmetic cannot be said to discover any theorems doesn't affect, whatsoever, the fact that I still might have.

Suppose one person discovers for himself that a box B contains an object O. Suppose a second person is simply told that B contains O. Will you argue that the first person did not discover that B contains O because the second person was told it? What does one even have to do with the other?
Of course not; he had a character in mind and drew from his experiences to assign it the basic properties he felt it should have, based on his needs. I no longer see where the original point of contention lies, although I suspect it has to do with our definitions of the word invent.
Hamlet is Danish because it pleased Shakespeare that he be a Dane. Fermat's proposition is not a theorem simply because it pleased Fermat that said proposition be a theorem. It was entirely Shakespeare's invention that Hamlet be a Dane. It wasn't Fermat's invention that his proposition be a theorem. He had a proposition, and found out that it was a theorem. (Actually, I'm unsure of the history, and whether he actually proved it. I think I remember reading/hearing that he had scribbled it in the margin of some paper, but I think Euler might be credited with its proof).
Again, Fermat did not work from axioms. I do not see why you state your opinions as fact. There are those who believe mathematics has a creative aspect, thus requiring imagination and creativity in one's creations. Certainly it can be done mechanically, but that is only a small aspect of the whole endeavour. I believe it has already been concluded that no one computer can ever replace mathematicians.
Sure, mathematics does have a creative aspect. In particular, it requires creativity to find out that p is a theorem. It also requires creativity to decide that Hamlet will be a Dane. The fact that creativity is used in both is irrelevant. What's relevant is that in the first case, the "theoremness" of a statement is found out, or discovered. In the second case, Hamlet's nationality was decided, made up, invented.

Again, I don't think that everything in mathematics can be said to be invented, nor do I think everything in mathematics is invented. Perhaps some things are discovered, some things are invented, some are both, and some are neither. Mathematics has many things: a formal language, definitions, theorems, problems, propositions, methods, etc. However, on the specific point of whether a given formula is a theorem, it is discovered that it is a theorem, and it is not invented that it is a theorem.
 
  • #98
AKG said:
How is this relevant?
My position, from the beginning, is that the question of whether mathematics is invented or discovered is naive. Yes, we cannot choose whether a theorem follows from axioms or accepted assumptions, and in this sense, we can discover whether a given proposition is a theorem. But to say this means that mathematics is discovered is to say that mathematics reduces to finding theorems, something which a computer can do very well whilst failing miserably to be a mathematician. I know this is not your position, but I responded to your original post mainly as an opportunity to introduce this argument.

I have tried to give a few examples suggesting that the body of mathematical knowledge we have today has been determined significantly by subjective and cultural factors, and this can be seen to a minor extent at the level of theorems (FLT does not provide a real example here, so I moved on to the Incompleteness Theorems). This indicates the inadequacy of the question "is mathematics invented or discovered?"

[snipped criticisms of Shakespeare analogy]
This analogy was meant to bring out the above point. Mathematics does not reduce to enumerating theorems of axiomatic systems any more than playwriting reduces to constructing sequences of English sentences. Perhaps the analogy is clumsy, but I introduced it in the context of the above position.
 
  • #99
VazScep said:
I know this is not your position, but I responded to your original post mainly as an opportunity to introduce this argument.
Well, that explains the confusion.
 
  • #100
From http://www.findarticles.com/p/articles/mi_qa3742/is_200108/ai_n8969938:
Leibniz saw in his binary arithmetic the image of Creation ... He imagined that Unity represented God, and Zero the Void; that the Supreme Being drew all beings from the void, just as unity and zero express all numbers in his system of numeration. This conception was so pleasing to Leibniz that he communicated it to the Jesuit Grimaldi, president of the Chinese tribunal for mathematics, in the hope that this emblem of creation would convert the Emperor of China, who was very fond of the Sciences. I mention this merely to show how the prejudices of childhood may cloud the vision even of the greatest men.

Prejudices or not, neither Leibniz nor his detractors could have imagined the role that his brainchild would be playing at the beginning of the third millennium!

...

If the historical facts discussed in this chapter are fairly well known, it is the insights and frequent anecdotal comments that make it a pleasure to read. Thus the author says, "Number theory is the child of number superstition and mysticism. Through the ages, mysterious powers were attributed to number, sometimes reaching unexpected heights with the numerology of the ancient Greeks and the innumerable forms of modern-day number superstition." This point is all too often ignored (perhaps intentionally) by modern-day purists who portray mathematics as the sole product of rational, logical thinking. We tend to forget that the Pythagoreans discovered many of the properties of numbers as a result of their mystical reverence for numbers; one need only think of terms such as "perfect numbers", "amicable numbers", and "happy numbers" to be reminded of the debt we owe to these early pseudo-scientific ruminations.
matt grime said:
no it wouldn't since the profundity we give it is not intrinsic to the objects that are being discussed. 2+3+5=10 is equally true whether or not I think them to be special numbers.
Yes, but what you draw from this and what direction you choose to take it in depend on how you view this fact. Hamilton believed that his quaternions and his treatment of complex numbers represented a description of time (and space). This affected what he did with his quaternions: how he described them, how he operated on them, and what significance he attached to them. I can assure you that his view of quaternions was very different from today's, as can be seen from the many results he derived that are now thought to be irrelevant.

some of them perhaps, and that is why a computer would be a good thing, since it wouldn't leap to conclusions or dismiss a conjecture as silly because it *feels* unlikely
not at all, usefulness can be cross referenced: how does a mathematician know something is useful somewhere else? because he recognizes that it can be used somewhere else. there is nothing to suppose a computer couldn't do that as well.
This is not a very good example, since, first, the human brain does not quite cross-reference; rather, it infers from chunks of cross-linked data in order to remember and use its information. Memories are stored separately, and connections between these "nodes" allow memories to be built. Cross-referencing is a bit different, since in general the information tends to be more self-contained and does not use the references to build a picture. Also, computers must be programmed by humans, and the manner in which a computer cross-references, in order to be meaningful and not random, must of necessity be a reflection of what the human feels is optimum. Most importantly, computers have no emotion, and thus no basis for acting upon what feels right or what others see as valuable.
Perhaps how we decide usefulness, novelty, etc. is a function of our underlying psyche and societal influences?
no it is a function of our knowledge of what is already known.
as ever there is an answer of yes and no. mathematicians are paid by people to do things that the others feel are worthwhile, so there is the 'yes' part, but the decision as to what is worthwhile is usually left to the judgement of mathematicians (that is the point of peer reviewed research) and hence society in general has no influence on us, as it shouldn't since society is in general completely ignorant of mathematics.
However, mathematicians are humans and are influenced by the ideals of their society as a whole, by virtue of having been raised in it. In addition, there is the pressure of the current climate and the views of the mathematical community, which shape one's conceptions and the directions one takes.

See also this interview with Alain Connes (http://www.ipm.ac.ir/IPM/news/connes-interview.pdf) on the current infrastructure of the way research is undertaken. Mathematicians, being human, are just as susceptible to fads as the next guy.
 
  • #101
sometimes what passes for mathematics is just plagiarized, or made up, or faked. But that is usually noticed eventually. in general when an idea comes into your head, where did it come from? was it whispered by a goddess in a dream? did it lie fallow from some overheard remark until you finally understood it? who knows these things for sure?
 
  • #102
VazScep said:
It doesn't follow, not trivially anyway, though a number of people have tried to develop philosophical arguments against Strong AI via the Incompleteness Theorems. They generally haven't convinced the logicians or the AI community, though.

I admit I do not have experience in mathematics anywhere near Matt Grime's, and most certainly not yours, VazScep, but I feel that your conclusion about non-triviality is not the case. Admittedly, I have only just got into computer-assisted formal proofs and functional and symbolic programming, but my limited overview of the situation, in which the mathematician needs to actively guide and participate in the proof development, makes me feel that my statement is trivial.

Computer programs are essentially operating under/through/as formal axiomatic systems. Computers have finite memory. No one computer can deduce all possible theorems and axioms, even if it were able to enumerate through many available systems.
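
To illustrate what that active guidance looks like, here is a minimal sketch in Lean 4 (my own toy example; the name my_add_comm is purely illustrative and nothing from the thread). The human supplies the strategy, choosing the induction and each rewrite lemma, and the machine only checks that the steps are valid:

```lean
-- Human-guided, machine-checked: the person picks the induction
-- variable and each rewrite lemma; Lean merely verifies the steps.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ k ih => rw [Nat.add_succ, Nat.succ_add, ih]
```

Even for something this trivial, the machine does not find the proof on its own; that is the sense in which the mathematician participates in the proof development.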
 
  • #103
"A Friendly Introduction to Mathematical Logic" by Christopher C. Leary states as Corollary 5.3.5 (to the first Incompleteness Theorem):

If A is a consistent, recursive set of axioms in the language \mathcal{L}_{NT}, then:

THM_A = \{a \mid a \text{ is the Gödel number of a formula derivable from } A\}

is not recursive.


This is followed by the remark:

This corollary is the "computers will never put mathematicians out of a job" corollary: if you accept the identification between recursive sets and sets for which a computer can decide membership, Corollary 5.3.4 says that we will never be able to write a computer program which will accept as input an \mathcal{L}_{NT}-formula \phi and produce as output "\phi is a theorem" if A \vdash \phi and "\phi is not a theorem" if A \not\vdash \phi.

It should be noted that this corollary actually makes a hidden assumption: that A \vdash N, where N is taken to be a basic set of axioms for number theory (sufficient to prove all the technical machinery that the incompleteness theorem needs). However, Theorem 5.3.5 is even better, as it doesn't even require that A be recursive:

Suppose that A is a consistent set of axioms extending N (i.e. A \vdash N) in the language \mathcal{L}_{NT}. Then the set THM_A is not representable in A (and therefore THM_A is not recursive).

Corollary 5.3.4 should properly be regarded as a corollary of Theorem 5.3.5, not of the Incompleteness Theorem. And 5.3.5 is a consequence of the Self-Reference Lemma; it doesn't require the Incompleteness Theorem. The Incompleteness Theorem itself is also a consequence of the Self-Reference Lemma.

So putting all the corrections together: although the SRL is related to GIT1 (GIT1 follows from the SRL), Theorem 5.3.5 follows from the SRL, not from GIT1, and the discussion following 5.3.4 should be thought of as following 5.3.5. That is, it is really 5.3.5 that says that if you have a consistent set of axioms for number theory A, no computer will be able to look at an arbitrary formula and decide whether A proves that formula or not.
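
To make the job-security corollary concrete: THM_A is recursively enumerable (a program can list the theorems, eventually confirming any particular one), just not recursive (no program can also answer "no" for every non-theorem). Here is a minimal Python sketch of the enumerable half; the toy modus ponens system and names like is_theorem are my own illustration, not anything from Leary's book:

```python
from itertools import product

def theorems(axioms, rules, arity=2):
    """Yield every formula derivable from `axioms` via `rules`, breadth-first.

    Each rule takes a tuple of `arity` premises and returns a conclusion,
    or None when it does not apply. Every theorem appears after finitely
    many steps, so the theorem set is recursively enumerable.
    """
    derived = list(axioms)
    seen = set(axioms)
    yield from derived
    while True:
        new = []
        for rule in rules:
            for premises in product(derived, repeat=arity):
                conclusion = rule(premises)
                if conclusion is not None and conclusion not in seen:
                    seen.add(conclusion)
                    new.append(conclusion)
        if not new:
            return  # only happens for toy systems whose closure is finite
        yield from new
        derived.extend(new)

def is_theorem(phi, axioms, rules):
    """Semi-decision procedure: halts with True iff phi is derivable.

    If phi is NOT derivable (and the system is infinite), this loops
    forever -- exactly the asymmetry behind "THM_A is not recursive".
    """
    return any(psi == phi for psi in theorems(axioms, rules))

# Toy usage: formulas are strings or ('->', p, q) tuples, with modus ponens.
def modus_ponens(premises):
    imp, p = premises
    if isinstance(imp, tuple) and imp[0] == '->' and imp[1] == p:
        return imp[2]
    return None

print(is_theorem('q', {('->', 'p', 'q'), 'p'}, [modus_ponens]))  # True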
 
  • #104
Mathematics is the other side of language. We use these two things to explore everything, and more precisely to communicate and express ourselves. I have never heard questions like "who discovered mathematics?" or "who invented it?", nor have I ever heard "who discovered physics?" or "who invented it?", and the same goes for physiology, psychology, language, aerodynamics, etc. I have only heard "who is the father of -----?". But if I were to choose, I would say mathematics has been discovered and its formulas invented...
 
  • #106
But this computer vs. mathematician thing is all dependent on hypotheses and assumptions about how computers will be made to work, and also on the assumption that it is not acceptable to have only theorems derived by permuting through finitely many consequences of actions. Who says those assumptions will continue to hold? Personally, I don't believe that we should replace mathematicians with computers, or that any replacement will be absolutely acceptable, especially in the opinion of the mathematical community, but that doesn't mean it might not happen.

As I said before, computers will be able to prove some results, mathematicians will prove some results, and those sets won't agree, but then two distinct sets of mathematicians won't produce the same research either. There may well be some technical limitation on the style of proof that the computers can produce (based on continually changing assumptions). Since everyone is keen to adopt the 'views of society affect what is researched' attitude, who's to say that society won't think the computer proofs acceptable, and for that matter, perhaps a case can be made that only those results really are 'acceptable'?

I don't know for sure, no one else does, but reaching the conclusion that 'the Incompleteness Theorems preclude us from replacing mathematicians with computers' involves some unstated technical and philosophical assumptions. I think you can make a case with stated assumptions for which it is true (AKG's post) and a case for which it is false.
 
  • #107
matt grime said:
I don't know for sure, no one else does, but to reach the conclusion that 'the Incompleteness Theorems preclude us from replacing mathematicians with computers' has some unstated techincal and philosophical assumptions. I think you can make a case with stated assumptions for which it is true (AKG's post) and a case for which it is false.
The assumption in that post, by the way, is the one relating recursive sets and what computers can do. This assumption is essentially the Church-Turing thesis.
 
  • #108
ComputerGeek said:
It is classic, but I would like to know what you all think.

Has anyone mentioned that in Latin, the word invent means "to see"? I.e., if you see (observe) something, you aren't creating it yourself, you're discovering it. I would argue that invent and discover are the same thing. When you are inventing something, you are in fact discovering it. There is really no difference between inventing math and discovering it, as with anything. When a jazz musician improvises, is she a player, is she a listener, or is she both? After all, she is discovering a new song at the same time she's inventing it, so to invent is to discover.
 
  • #109
I thought invent was from the Latin "to come upon", not "to see", but potayto potarto. I would like to clarify one point on my position towards the future of computing in maths. I realize that it may have looked as though I believe computers have a good chance of replacing mathematicians, when that is not the case.

My gut feeling is that we're safe in our jobs for a while yet, possibly forever, as long as we don't suddenly all merge with theoretical physics or something (who knows what might happen there). Computers will become more useful to us for providing overwhelming evidence for conjectures, and will become more accepted in proving them too.

However, I don't think they'll take over, mainly because after all these thousands of years we still don't really understand what it is that lets us 'do' maths, or where our ideas come from.

I do not think that Gödel's incompleteness theorem or any other logical result like it is the barrier; that barrier applies equally to human mathematicians, who are after all only reasoning machines themselves: is the axiom of choice true or false? For some of us it is always true, for some it is true when needed, for others it is ignored as a bastard son of set theory. We only ever reason from a finite number of basic rules; we are only finite machines ourselves, though we are capable of pretending we're more complicated than we are because we can't explain so much of ourselves.

Computing has shown an amazing ability to outperform all expectations placed upon it. We have machines that 'learn' to feed themselves, and we are finding more and more ingenious ways to store data that 20 years ago we were told would never be possible. Sure, we're approaching theoretical size barriers of the quantum world, but who knows what that'll make us do instead. And that is why my feeling is at best a gut reaction.
 
  • #110
I wholeheartedly agree with you. And yes, it did look like you were asserting that computers would take over mathematics :P.

I feel that computers, specifically calculators, computer algebra systems, and theorem-proving environments and aids, are necessary for the advancement and creation of even grander, more awe-inspiring mathematics. People must accept the limits of the mind as our ancestors did the strength of their arms.

Just as cranes, caterpillars, tractors, levers and pulleys (technology in general) allow man to leave trivialities behind (block size, building size limits, construction and architectural considerations), to be better able to make the constructive imaginings of his mind reality, so also will computing machines allow man to drop trivialities (such as computing 4x4 determinants by hand; their theory is far more important) and transcend the limits of his abilities to do deeper mathematics.
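
As a throwaway sketch of that last point (assuming Python with numpy; purely illustrative):

```python
# The 4x4 triviality delegated to the machine, leaving the human free
# to think about what determinants mean rather than how to expand them.
import numpy as np

A = np.arange(1.0, 17.0).reshape(4, 4)  # a sample 4x4 matrix
print(np.linalg.det(A))  # ~0.0: the rows are linearly dependent
```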
 
  • #111
Jonny_trigonometry said:
Has anyone mentioned that in Latin, the word invent means "to see"? I.e., if you see (observe) something, you aren't creating it yourself, you're discovering it. I would argue that invent and discover are the same thing. When you are inventing something, you are in fact discovering it. There is really no difference between inventing math and discovering it, as with anything. When a jazz musician improvises, is she a player, is she a listener, or is she both? After all, she is discovering a new song at the same time she's inventing it, so to invent is to discover.


What we must remember is that what we see is determined by how we perceive and what we are used to seeing.

If we have invented a system of numbers for counting wheat bales and urns of wine (which humans have done), then we begin to see a system similar to it in nature.

Math, number sets, etc. may not actually be taking place in the universe, but the fact is that the number system we have invented (a "vent" from within) to facilitate orderly trade agreements and so on can be projected upon any environment to help us understand how it relates to our own survival and our own comfort. This arrangement causes us to see things that are not actually there, like, for example, mathematics and "time".

Of course, now that time and mathematics are manifest and ingrained in our consciousness, they have begun to become a part of the many splendors of the universe. However, were there to be an unfortunate extinction of mankind, math and time, along with many other concepts, would follow suit and become extinct as well, simultaneously.
 
