# What was the historical problem with Imaginary Numbers?

1. Dec 19, 2016

### FallenApple

I don't see why imaginary numbers were necessarily so difficult among top mathematicians back then.

From Peano's axioms, we can derive the fact that any negative natural number times another negative natural number must be positive. Then this result extends to the reals, using theorems derived from those axioms as well. (The logic: a + (-a) = 0; multiply both sides by -b, then add ab to both sides.)
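Written out step by step, the parenthetical argument runs as follows (my expansion of the sketch above):

```latex
\begin{aligned}
a + (-a) &= 0 \\
(-b)\bigl(a + (-a)\bigr) &= (-b)\cdot 0 = 0 \\
(-b)a + (-b)(-a) &= 0 \\
ab + (-b)a + (-b)(-a) &= ab && \text{(add } ab \text{ to both sides)} \\
(-b)(-a) &= ab && \text{(since } ab + (-b)a = 0\text{)}
\end{aligned}
```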

All this means that no real number multiplied by itself can be negative.

But surely it could be imagined that some mathematical structure times itself produces -1. It is not logically contradictory with the theorems of real numbers. So if it didn't pose any contradictions with previous mathematical work, what was the philosophical hurdle?

If I make up a mathematical structure that is self consistent, then it works, no matter what it is.

2. Dec 19, 2016

### Staff: Mentor

How did you come to this conclusion? Which hurdles are you speaking of?

3. Dec 19, 2016

### FallenApple

Well, I was just reading somewhere that there is a trend in the history of math for new number structures to be met with a lot of scepticism. For example, the ancient Greeks with irrational numbers, negative numbers until a few hundred years ago, etc. Why, when we can see so clearly now that none of them produces contradictions?

4. Dec 19, 2016

### dkotschessaa

The short and possibly wrong answer is that I don't think people were as accustomed to thinking abstractly as we are. Sure, they behave in predictable ways, but what ARE they? Lots of people new to math still have trouble with this. Yes, I know the definition, but what IS it?

I'm not sure about the statement "It is not logically contradictory with the theorems of real numbers." Surely it involves going outside the real numbers to get a real number, which is why you end up with a whole other set of numbers that are "imaginary." (I attribute later hesitation to bad marketing; calling them "imaginary" has not helped their case either.)

I know that the numbers started just "showing up" in calculations involving third- and fourth-degree polynomials; Cardano's formula for the cubic can produce square roots of negatives even when all the roots are real. They eventually worked themselves out of the calculation, but people just sort of whistled to themselves and pretended not to see anything. It just took a while.

-Dave K

5. Dec 19, 2016

### Staff: Mentor

In this generality the only answer can be: People always oppose new methods or concepts. I don't think this applies to the mathematicians of the time.

6. Dec 19, 2016

### dkotschessaa

With the Greeks it was a deeply held belief that numbers had to be rational, because their entire religious world view was built on this premise. I don't know if this parallels imaginary numbers, but it might.

-Dave K

8. Dec 19, 2016

### FallenApple

What I mean is that while imaginary numbers do not come from the properties of the real numbers, defining i and then making it work in a self-consistent way is logically valid.

Perhaps the numbers didn't make sense at a gut level to them. But we now know that math doesn't need to make sense at a gut level. A proof is a proof and the results are the results.

9. Dec 19, 2016

### FallenApple

But isn't it true that currently we have a very strict axiomatic, theorem-based proof system built on formal logic? So we know that if a proof is done correctly, then it must be valid. Maybe the philosophical question back then was whether math has to be applicable to the real world. We now mostly treat math as its own self-contained universe.

For example, I can make up a completely new type of number, call it a number, and if it works self-consistently (even if it doesn't describe the world in any way), then it must be true (in a mathematical sense) and fall under the category of math.
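The "made-up but self-consistent number" idea can be made concrete: Hamilton later did essentially this, defining a complex number as nothing more than an ordered pair of reals with an invented multiplication rule. A minimal Python sketch (the class and names are my illustration, not anything from the thread):

```python
class Pair:
    """An ordered pair of reals with a made-up multiplication rule.
    Nothing here appeals to a pre-existing 'square root of -1'."""

    def __init__(self, re, im):
        self.re, self.im = re, im

    def __mul__(self, other):
        # By definition: (a, b) * (c, d) = (ac - bd, ad + bc)
        return Pair(self.re * other.re - self.im * other.im,
                    self.re * other.im + self.im * other.re)

    def __eq__(self, other):
        return (self.re, self.im) == (other.re, other.im)


i = Pair(0, 1)
print(i * i == Pair(-1, 0))  # → True
```

The rule is pure stipulation, yet everything that follows from it is consistent; the pair (0, 1) simply *behaves* as a square root of (-1, 0).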

10. Dec 22, 2016

### Logical Dog

How so? I thought Peano only defined the natural numbers, and there are no negative natural numbers.
1. Zero is a number.
2. The successor of any number is a number.
3. Zero is not the successor of any number.
4. Any property that holds for zero, and that holds for the successor of every number it holds for, holds for all numbers. (This is the induction axiom.)
5. No two numbers have the same successor.

Addition between two numbers is then defined recursively: a + 0 = a, and a + s(b) = s(a + b). For example, with 2 = s(s(0)) and 3 = s(s(s(0))), 2 + 3 unwinds to s(s(s(s(s(0))))) = 5.
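The recursive definition of Peano addition can be sketched in Python (a toy encoding of my own, not from the thread: zero is the empty tuple and the successor wraps in one more tuple layer):

```python
# Peano-style naturals: ZERO is (), successor adds a layer of nesting.
ZERO = ()

def s(n):
    """Successor: wrap n in one more tuple layer."""
    return (n,)

def add(a, b):
    """Recursive addition: a + 0 = a, and a + s(b) = s(a + b)."""
    if b == ZERO:
        return a
    return s(add(a, b[0]))

def to_int(n):
    """Count the nesting layers to recover an ordinary integer."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = s(s(ZERO))
three = s(s(s(ZERO)))
print(to_int(add(two, three)))  # → 5
```

Note there is no way to write a negative number in this encoding, which is exactly the objection above: Peano's axioms build only the naturals.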

I am a little confused by the post. I know the issue here is that an even root of a negative real number cannot be a negative real, since a negative number multiplied by itself an even number of times gives a positive number. It also cannot be a positive real, because a positive number multiplied by itself any number of times gives a positive number.

I disagree: just because it is logically valid does not make it true in any sense of the word. Look at economics. Lots of fancy math, but not grounded in reality!

11. Dec 22, 2016

### FallenApple

Oh, sorry about that. I haven't studied analysis in a while, so I get some things mixed up. Basically, my point is that you can eventually build up to the fact that negative times negative = positive. I must have confused Peano's axioms with something else.

12. Dec 23, 2016

### TeethWhitener

First off, the notion that provability implies validity is known as soundness, and it's not a property that holds for all formal systems (although we tend to discard unsound systems because they're not particularly useful). Second, the notion that logic is a kind of "game" that we play, where the results are the results regardless of any deeper meaning, is a philosophical stance, and a very recent one at that (originating around the 1950s). But there's always some contentiousness of this type to be found in math. Just look at the initial resistance to category theory as "abstract nonsense" for a recent example.

13. Dec 23, 2016

### Aufbauwerk 2045

Mathematics was developed for most of its history as a tool to help us calculate things about the real world. The idea of developing math as a logical game which may not apply to the real world is fairly new. For example, Newton would not have developed calculus had he not found it useful for his physics. But complex numbers did turn out to be very useful indeed in physics, and of course we all learn about Euler's formula, which Feynman described as our mathematical gem.

Last edited: Dec 23, 2016
14. Dec 23, 2016

### Staff: Mentor

What about Euclid's geometry, or Pythagoras' devotion to numbers? Some of their results may have helped to solve engineering problems, but they were certainly not driven by them. What about Diophantine equations? In the 17th and 18th centuries, mathematicians wrote letters to one another in which they posed questions they already knew the answer to, only to see whether the other would find it too. Mathematics has also been regarded as a branch of philosophy. The connection to the natural sciences is actually the newer perspective. Large parts of number theory were developed in the attempt to prove Fermat's Last Theorem.

So what do you mean by "fairly new"? Sometime between the Sumerians, the Babylonians, and the Greeks?

15. Dec 23, 2016

### TeethWhitener

The word "geometry" literally means "to measure the earth," reflecting the practical considerations and connection to the real world that have driven math throughout its history. Were mathematicians driven exclusively by practicality? Of course not, but I think it's disingenuous to deny that a great deal of math was inspired by the real world. I wonder how long the development of, e.g., variational calculus would have taken without the brachistochrone or celestial mechanics. Would Fourier have come up with his series if he hadn't been interested in heat transport? How about Gibbs (an engineer) and vector calculus? Do you really think that group theory and representation theory would be where they are today if Wigner hadn't noticed how useful they were in quantum theory?

But this is all really a digression from the fact that the vast majority of mathematicians until quite recently viewed their proofs as unassailable truths about the world, at least in some respect; whereas the view nowadays seems to be that the axioms we pick are largely chosen as a matter of convenience and pragmatism, and that they, and whatever theorems follow from them, don't have any special ontological status outside that formal system. I think that's what David Reeves and I meant.

16. Dec 23, 2016

### Aufbauwerk 2045

I think von Neumann's famous article The Mathematician is relevant. Here is part of what he says about ancient geometry.

"Geometry was the major part of ancient mathematics. It is, with several of its ramifications, still one of the main divisions of modern mathematics. There can be no doubt that its origin in antiquity was empirical and that it began as a discipline not unlike theoretical physics today. Apart from all other evidence, the very name "geometry" indicates this."

Of course "geometry" means "earth measurement." The earliest civilizations developed geometry so they could figure out how to divide plots of land, make buildings, and so on.

Von Neumann goes on at great length to talk about how when math becomes too far removed from its empirical source it becomes more like art for art's sake. Which I suppose is fine if that's what you like. I'm just looking at it from a physics standpoint.

17. Dec 23, 2016

### Staff: Mentor

That's why I mentioned the Sumerians and Babylonians: they measured soil and did accounting. But since the Greeks, mathematics has been a theoretical subject in its own right. Greek society had slaves for its daily work and thus the leisure to deal with geometrical objects in Plato's sense of ideals. And this hasn't really changed since. Did Euler have applications in mind? I seriously doubt it. Why did Hamilton spend years looking for a field extension of the complex numbers? The problem with application-driven science is that it narrows your perspective. Of course achievements have been used whenever they were available, but the logical game, as you call it, is by no means a modern phenomenon.

18. Dec 23, 2016

### Staff: Mentor

Oh yes, very much so. Its usage in QM came definitely after its main parts had already been developed. If anything, the driver was differential geometry, i.e. differential equations, and surely not QM.

19. Dec 24, 2016

### TeethWhitener

This is going to become too philosophical for this forum pretty quickly (and I have a feeling we agree on this more than we disagree), but the point I think David and I are trying to make can be illustrated thus: We can ask the question "is it true that 1+1=2?" Until very recently, the answer was simply "yes." A more modern answer is "Yes, assuming certain axioms." So the statement "1+1=2" only says something about a system of axioms, and not necessarily something true about the world. It's related, somewhat tangentially, to the debate about whether math is discovered or invented.

Essentially, it wasn't until Gödel (and Tarski, with his undefinability theorem) crushed Hilbert's program that people started to accept that there were many different logical systems and that no one system was more intrinsically "true" than any other. There may have been hints of this fundamental shift earlier (my mind jumps to Gauss and Lobachevsky rejecting the parallel postulate), but my guess is that the full shift didn't come along until at least Russell, and probably not until after Gödel and Tarski.

20. Dec 24, 2016

### micromass

They were directly driven by the desire to describe various aspects of nature. That was the main goal of the Greeks.