Proving 1 + 1 = 3: Abstract Math & Visualization

AI Thread Summary
The discussion revolves around the mathematical assertion that 1 + 1 = 3, which is fundamentally incorrect in standard arithmetic but can be explored through abstract mathematical concepts. Participants suggest that proving this statement requires redefining the meanings of the symbols involved, as traditional definitions do not support such an equation. Some mention the use of set theory and the concept of the null ring, where all numbers are considered equal, allowing for unconventional arithmetic outcomes. Additionally, the conversation touches on the complexities of division by zero and the inconsistencies in mathematical definitions, emphasizing the need for clarity in mathematical operations. Ultimately, the thread highlights the importance of understanding the underlying principles and definitions in mathematics to avoid confusion.
ffrancis
I had some colleagues in college who spent their first two years in a Math degree before shifting to a different course. They had already finished everything from Algebra through Calculus and were into even higher math. They were asked to prove that

1 + 1 = 3

This is obviously wrong in arithmetic. But in what branch of mathematics is it possible to prove that 1 + 1 = 3? How would you prove that then? What is the solution?

I have another colleague who said she was grateful she never took Mathematics because, even though she can manage Calculus, higher maths are so abstract that you cannot visualize them anymore.
 
Hold up 2 fingers, then add one more finger on the same hand. If you add the spaces between the fingers (1 + 1), that will equal the number of fingers (3).

I saw this, I think, in one of Timothy Gowers's articles.
 
Assuming this isn't meant as a joke, you'll have to define what '1', '3', '+' and '=' mean before you can prove it. They clearly cannot hold the same meanings we usually give them.
 
For large values of 1 ?
 
Maybe it's the result of a proof by contradiction, in the style of:
"Prove that, if a + b = 3, then either a or b is different from 1."

A less conspicuous example would illustrate the point better, though.
 
The cardinality of the set containing the number 1 and the operators + and =, namely |{1, +, =}|, is 3.
Forgive me, I've just started teaching myself set theory, but I suppose you could cheese this as a way to start that sort of thing.
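
For what it's worth, here is a tiny Python sketch of that counting argument (the string labels are just stand-ins for the symbols; this is only an illustration, nothing deep):

Code:
# The set of the three distinct symbols: the numeral, the operator, and the
# equals sign. Its cardinality is 3.
symbols = {"1", "+", "="}
print(len(symbols))   # 3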
 
I thought surely someone would have mentioned the following:

Suppose x = y = 1. Then x - 3/2 = 1 - 3/2 = -1/2, so (x - 3/2)^2 = 1/4. Also y - 3/2 = 1 - 3/2 = -1/2, so (y - 3/2)^2 = 1/4. That is, (x - 3/2)^2 - (y - 3/2)^2 = 1/4 - 1/4 = 0. Multiplying that out, (x^2 - 3x + 9/4) - (y^2 - 3y + 9/4) = 0 or, canceling the "9/4", x^2 - 3x - y^2 + 3y = 0.

We can rewrite that as x^2 - y^2 = 3x - 3y.

Since x^2 - y^2 = (x + y)(x - y) and 3x - 3y = 3(x - y), we have (x + y)(x - y) = 3(x - y), and dividing both sides by x - y gives x + y = 3 or, since x = y = 1, 1 + 1 = 3!

(of course, there is an error in that proof.)
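
If you want to see where it breaks, here is a short sympy check of the steps (assuming sympy is available; the variable names are just mine):

Code:
from sympy import Rational, symbols, expand, factor

x, y = symbols('x y')
c = Rational(3, 2)

# With x = y = 1 the starting identity really does hold:
start = (x - c)**2 - (y - c)**2
print(start.subs({x: 1, y: 1}))          # 0

# Expanding gives x^2 - 3x - y^2 + 3y, i.e. x^2 - y^2 = 3x - 3y:
print(expand(start))

# Both sides share the factor (x - y):
print(factor(x**2 - y**2))               # (x - y)*(x + y)
print(factor(3*x - 3*y))                 # 3*(x - y)

# The flaw: with x = y = 1 that common factor is zero, so "dividing both
# sides by x - y" is a division by zero.
print((x - y).subs({x: 1, y: 1}))        # 0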
 
This is quite a coincidence for me. About a month ago I was walking through one of our university's lecture halls and saw an interesting poster that caught my eye. It said "1 + 1 = 3?", and it had the whole proof clearly written out. I believe some high school kids who visit the school every summer for a math workshop proved it. lol

I was about to write it down, but I didn't have a pen on me...
 
  • #10
HallsofIvy said:
I thought surely someone would have mentioned the following:

Suppose x = y = 1. Then x - 3/2 = 1 - 3/2 = -1/2, so (x - 3/2)^2 = 1/4. Also y - 3/2 = 1 - 3/2 = -1/2, so (y - 3/2)^2 = 1/4. That is, (x - 3/2)^2 - (y - 3/2)^2 = 1/4 - 1/4 = 0. Multiplying that out, (x^2 - 3x + 9/4) - (y^2 - 3y + 9/4) = 0 or, canceling the "9/4", x^2 - 3x - y^2 + 3y = 0.

We can rewrite that as x^2 - y^2 = 3x - 3y.

Since x^2 - y^2 = (x + y)(x - y) and 3x - 3y = 3(x - y), we have (x + y)(x - y) = 3(x - y), and dividing both sides by x - y gives x + y = 3 or, since x = y = 1, 1 + 1 = 3!

(of course, there is an error in that proof.)

yes, that was the one I saw!
 
  • #11
HallsofIvy said:
I thought surely someone would have mentioned the following:

Suppose x = y = 1. Then x - 3/2 = 1 - 3/2 = -1/2, so (x - 3/2)^2 = 1/4. Also y - 3/2 = 1 - 3/2 = -1/2, so (y - 3/2)^2 = 1/4. That is, (x - 3/2)^2 - (y - 3/2)^2 = 1/4 - 1/4 = 0. Multiplying that out, (x^2 - 3x + 9/4) - (y^2 - 3y + 9/4) = 0 or, canceling the "9/4", x^2 - 3x - y^2 + 3y = 0.

We can rewrite that as x^2 - y^2 = 3x - 3y.

Since x^2 - y^2 = (x + y)(x - y) and 3x - 3y = 3(x - y), we have (x + y)(x - y) = 3(x - y), and dividing both sides by x - y gives x + y = 3 or, since x = y = 1, 1 + 1 = 3!

(of course, there is an error in that proof.)

I heard dividing by zero isn't kosher.
 
  • #12
You think it would be an error?
 
  • #13
To err is human!
 
  • #14
HallsofIvy said:
I thought surely someone would have mentioned the following:

Suppose x = y = 1. Then x - 3/2 = 1 - 3/2 = -1/2, so (x - 3/2)^2 = 1/4. Also y - 3/2 = 1 - 3/2 = -1/2, so (y - 3/2)^2 = 1/4. That is, (x - 3/2)^2 - (y - 3/2)^2 = 1/4 - 1/4 = 0. Multiplying that out, (x^2 - 3x + 9/4) - (y^2 - 3y + 9/4) = 0 or, canceling the "9/4", x^2 - 3x - y^2 + 3y = 0.

We can rewrite that as x^2 - y^2 = 3x - 3y.

Since x^2 - y^2 = (x + y)(x - y) and 3x - 3y = 3(x - y), we have (x + y)(x - y) = 3(x - y), and dividing both sides by x - y gives x + y = 3 or, since x = y = 1, 1 + 1 = 3!

(of course, there is an error in that proof.)
Dividing by zero appears to be incorrect, but where does the prohibition of this operation lie? I mean, we're all taught at school that division by 0 is not defined, but in calculus you define this operation as infinity
**(at least, my teacher taught me that a number divided by zero gives infinity, a number divided by infinity gives zero, and zero divided by infinity is zero)

I see it as an inconsistency in the definition of the operations - every field that includes these operations is not closed with respect to division by zero, is it? Or, generalised (correct me if I'm wrong): in every field (vector space) the interaction of multiplicative inverses with the additive identity breaks down, or in other words the additive identity has no inverse with respect to multiplication. Why is that? Is this the case for every vector space?
 
  • #15
ffrancis said:
This is obviously wrong in arithmetic. But in what branch of mathematics is it possible to prove that 1 + 1 = 3? How would you prove that then? What is the solution?

They are merely moving the goal posts; for instance, they are using the numerals as representations of different concepts. Consider if I told you

My definition of "one" is actually 1.5, so I could write

1 + 1 = 3

If you know that my definition is actually 1.5, the statement makes sense. What happens is that people can't separate symbols from meanings, and things get confused.

When someone says "I believe 2 + 2 = 5" without telling you what they mean and how they are thinking about it, they are only talking about symbols, not meanings, because in the real world, if you have 2 groups of 2 apples, they can never equal five, unless you're redefining the rules of how you interpret them.
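
As a throwaway illustration of that relabelling (purely my own aside):

Code:
# If the symbol "one" is secretly bound to the value 1.5, the equation
# "one + one = 3" is true of the values, even though the symbols look wrong.
one = 1.5
print(one + one == 3)   # True -- but only because "one" doesn't mean 1 here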
 
  • #16
Marin said:
Dividing by zero appears to be incorrect, but where does the prohibition of this operation lie? I mean, we're all taught at school that division by 0 is not defined, but in calculus you define this operation as infinity
**(at least, my teacher taught me that a number divided by zero gives infinity, a number divided by infinity gives zero, and zero divided by infinity is zero)

I see it as an inconsistency in the definition of the operations - every field that includes these operations is not closed with respect to division by zero, is it? Or, generalised (correct me if I'm wrong): in every field (vector space) the interaction of multiplicative inverses with the additive identity breaks down, or in other words the additive identity has no inverse with respect to multiplication. Why is that? Is this the case for every vector space?

I doubt your teacher told you to do division and multiplication with infinity. Infinity is not a number (in R), and as such you don't do normal operations with it. You CAN talk about limits as things APPROACH infinity. Case in point, if you have

f(x) = \frac{x}{x}

Then

f(0) = DNE

However

\lim_{x \rightarrow 0} f(x) = 1
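
If it helps, here is a quick numerical sketch of that distinction in plain Python (my own example, nothing more):

Code:
# f(x) = x / x is undefined AT x = 0, but its values approach 1 as x -> 0.
def f(x):
    return x / x

try:
    f(0)                          # raises ZeroDivisionError: f(0) does not exist
except ZeroDivisionError:
    print("f(0) is undefined")

for x in (0.1, 0.01, 0.001):      # values approaching 0 from the right
    print(x, f(x))                # each prints 1.0, matching the limit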
 
  • #17
Marin said:
**(at least, my teacher taught me that a number divided by zero gives infinity, a number divided by infinity gives zero, and zero divided by infinity is zero)

Either you remember wrong, or you had a bad teacher.
 
  • #18
The original poster's question reminds me of the written joke:

There are 10 kinds of people; those who understand binary and those who do not.
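
(For anyone who wants the punchline spelled out, a one-liner in Python will do it; just my own aside:)

Code:
# "10" read in base 2 is the number two, so "10 kinds of people" means 2 kinds.
print(int("10", 2))   # 2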
 
  • #19
There is this thing called the null ring, inside of which you can do arithmetic but all numbers are equal to 0.

So:

1 = 0
3 = 0
1 + 1 = 0 + 0 = 0 = 3
1 + 1 = 3

...the null ring is not really very interesting.
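
Here is a minimal sketch of that zero (null) ring in Python, just to make the collapse concrete (the class and names are my own illustration, not any standard library):

Code:
class ZeroRingElement:
    """The unique element of the zero ring; it stands in for 0, 1, 3, ..."""

    def __add__(self, other):
        return ZeroRingElement()                      # every sum is 0

    def __mul__(self, other):
        return ZeroRingElement()                      # every product is 0

    def __eq__(self, other):
        return isinstance(other, ZeroRingElement)     # all elements are equal

one = ZeroRingElement()     # "1" in the zero ring
three = ZeroRingElement()   # "3" in the zero ring
print(one + one == three)   # True: here 1 + 1 = 3, because everything is 0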
 
  • #20
Marin said:
Dividing by zero appears to be incorrect, but where does the prohibition of this operation lie? I mean, we're all taught at school that division by 0 is not defined, but in calculus you define this operation as infinity
**(at least, my teacher taught me that a number divided by zero gives infinity, a number divided by infinity gives zero, and zero divided by infinity is zero)

I see it as an inconsistency in the definition of the operations - every field that includes these operations is not closed with respect to division by zero, is it? Or, generalised (correct me if I'm wrong): in every field (vector space) the interaction of multiplicative inverses with the additive identity breaks down, or in other words the additive identity has no inverse with respect to multiplication. Why is that? Is this the case for every vector space?

How do you come up with 3/2 in the first place? Is there any proof for that?
Like x - 3/2 = 1 - 3/2.
 
  • #21
atyy said:
Hold up 2 fingers, then add one more finger on the same hand. If you add the spaces between the fingers (1 + 1), that will equal the number of fingers (3).

I saw this, I think, in one of Timothy Gowers's articles.
Here's the Timothy Gowers article where I found 2+2=5:

http://www.dpmms.cam.ac.uk/~wtg10/philosophy.html
 
  • #22
CRGreathouse said:
Either you remember wrong, or you had a bad teacher.

Sorry, I forgot to make clear that I meant the limits towards infinity :) - Thanks for the correction, CRGreathouse and NoMoreExams


But nobody paid attention to the point of my 'statement' (at least, what was meant to be the point):

I see it as an inconsistency in the definition of the operations - every field that includes these operations is not closed with respect to division by zero, is it? Or, generalised (correct me if I'm wrong): in every field (vector space) the interaction of multiplicative inverses with the additive identity breaks down, or in other words the additive identity has no inverse with respect to multiplication. Why is that? Is this the case for every vector space?

Is this correct, and can anyone explain to me why?
 
  • #23
kmikias said:
How do you come up with 3/2 in the first place? Is there any proof for that?
Like x - 3/2 = 1 - 3/2.
Was this in response to me? There was no "3/2" in the quote you gave. If so, then I got the 3/2 basically by "trial and error". Using that (fallacious) proof, you can get 1 + 1 equal to anything you want, by choosing the numbers correctly. The number I needed to get 3 was 3/2. I suspect that if you wanted to "prove" that 1 + 1 = n, you would use n/2, but I haven't checked that.
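
The n/2 guess does check out symbolically; here is a quick sympy verification (my own check, assuming sympy is available):

Code:
from sympy import symbols, expand, factor

x, y, n = symbols('x y n')

# Start from (x - n/2)^2 - (y - n/2)^2, which is 0 whenever x = y.
start = (x - n/2)**2 - (y - n/2)**2
print(expand(start))          # expands to x^2 - n*x - y^2 + n*y (the (n/2)^2 terms cancel)

# Rearranged, that is x^2 - y^2 = n*x - n*y; both sides factor through (x - y):
print(factor(x**2 - y**2))    # (x - y)*(x + y)
print(factor(n*x - n*y))      # n*(x - y)

# "Cancelling" (x - y) would give x + y = n, i.e. 1 + 1 = n when x = y = 1;
# the cancellation is invalid precisely because x - y = 0 there.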
 
  • #24
Okay Marin let me take a crack at this:

Marin said:
Dividing by zero appears to be incorrect, but where does the prohibition of this operation lie? I mean, we're all taught at school that division by 0 is not defined, but in calculus you define this operation as infinity
**(at least, my teacher taught me that a number divided by zero gives infinity, a number divided by infinity gives zero, and zero divided by infinity is zero)

I see it as an inconsistency in the definition of the operations - every field that includes these operations is not closed with respect to division by zero, is it? Or, generalised (correct me if I'm wrong): in every field (vector space) the interaction of multiplicative inverses with the additive identity breaks down, or in other words the additive identity has no inverse with respect to multiplication. Why is that? Is this the case for every vector space?

So, first off, if your teacher taught you that division of a number by zero gives infinity and division of a number by infinity gives zero, *your teacher was wrong*. However, I don't think that is what happened. I think your teacher probably did something sneaky and didn't clearly explain it. What your teacher probably said was:

lim n->0 ( x/n = infinity )
lim n->infinity ( x/n = 0 )
...for all x.

I.e., *IN THE LIMIT*, any number divided by zero becomes infinity and any number divided by infinity becomes zero. Something being true in the limit is quite different from it being true in general! This distinction is important because in the limit the rules are different: in the limit, things like "n/0" and "infinity" have a well-defined meaning, whereas normally "n/0" and "infinity" are undefined concepts.

So that aside, as for "why" you can't define a field where you can divide by zero:

So if you look at the field axioms, they quite conspicuously fail to describe what happens when you divide by zero. Looking at the Wikipedia version of the field axioms, axiom #5 is:

Additive and multiplicative inverses: For every a belonging to F, there exists an element -a in F, such that a + (-a) = 0. Similarly, for any nonzero a, i.e. for any a ≠ 0, there exists an element a^-1 in F, such that a · a^-1 = 1.

This does not say what happens to 0. It just declines to specifically say anything about division by zero at all. (The vector space axioms, on the other hand, decline to say anything about whether ANY item in the vector space has a multiplicative inverse, period!)

But, although this axiom doesn't specifically say what happens when you divide by zero, it is possible to *derive* from this axiom what happens when you divide by zero.

The section on fields in my copy of "Introductory Modern Algebra" by Saul Stahl contains a proof that begins like this:

Set x = a * 0. By the distributivity of multiplication over addition,

x = a * 0 = a * (0 + 0) = a * 0 + a * 0 = x + x

Consequently

a * 0 = x = x + (x + (-x)) = (x + x) + (-x) = x + (-x) = 0

If we stop the proof there, we've just proven something interesting: a * 0 = 0, regardless of a.

This makes it very easy to prove by contradiction that b / 0 does not exist for any b:

Set y = b / 0 , for some nonzero b
We can rewrite that as y * 0 = b
But a * 0 = 0 for all a, therefore b = 0
?!? contradiction

Now, here's the trick: this proof depends on the field axioms, so you can get around it if you decide to declare that your "b/0" element is exempt from some of the field axioms. But once you start doing this, it is no longer a field (and I'm sure, though I'm not going to try to prove it, that it would not be a vector space or a module or a ring either... you may want to check the axioms for a vector space and see whether the proof above applies to vector spaces too). People *DO* define sloppier structures where "b/0" is well defined or where something called "infinity" is in the member set, but you can't do that and still keep all the nice, consistent properties that make people want to use fields and vector spaces and such.
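
For a concrete, finite illustration of both facts, here is my own brute-force sketch over the field Z/5Z (integers mod 5):

Code:
# Brute-force check in Z/5Z: a*0 = 0 for every a, and no element y satisfies
# y*0 = 1 (or any nonzero b), so "b/0" has no candidate value.
P = 5
elements = range(P)

assert all((a * 0) % P == 0 for a in elements)            # a * 0 = 0 always
assert not any((y * 0) % P == 1 for y in elements)        # nothing times 0 gives 1

# Every NONZERO element does have a multiplicative inverse, as the axiom requires:
for a in range(1, P):
    inv = next(y for y in range(1, P) if (a * y) % P == 1)
    print(f"{a}^-1 = {inv} (mod {P})")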
 
  • #25
Marin said:
Sorry, I forgot to make clear that I meant the limits towards infinity :) - Thanks for the correction, CRGreathouse and NoMoreExams

Yes, sorry for ignoring your other point -- the statement about division by zero was a bit jarring. I agree that
\lim_{x\to0^+}1/x=+\infty
although
\lim_{x\to0}1/x does not exist.

Marin said:
I see it as an inconsistency in the definition of the operations - every field that includes these operations is not closed with respect to division by zero, is it? Or, generalised (correct me if I'm wrong): in every field (vector space) the interaction of multiplicative inverses with the additive identity breaks down, or in other words the additive identity has no inverse with respect to multiplication. Why is that? Is this the case for every vector space?

A field is in fact defined in this way: every *nonzero* element has a multiplicative inverse. So far from being an inconsistency, it can be proven that a (nontrivial*) field cannot have an inverse for 0. They can have zero-divisors, but no general inverse.

(*If you count ({0}, +, *) as a field, then 0 has inverse 0.)
 
  • #26
I agree with CRGreathouse. The definition of a field that I have seen is: take a set A and define 2 binary operations on it (for simplicity, call them + and *) such that (A, +) and (A\{0}, *) give you Abelian groups, and then throw in distributivity to "relate" the 2 operations, such that a*(b + c) = a*b + a*c.
 
  • #27
*If you count ({0}, +, *) as a field, then 0 has inverse 0.

By 'inverse' you mean additive inverse here, don't you?

I've read about fields where infinity is defined separately. E.g. when you make the stereographic projection of the points of the surface of a sphere onto a plane, the 'north pole's projection remains undefined, and they call it infinity. How does this correspond to the concept of + and - infinity in the normal plane? What's the advantage of making infinity a point on the sphere?
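
To make the picture concrete, here is a small Python sketch of the standard stereographic projection (the formulas are the usual ones for the unit sphere with the north pole at (0, 0, 1); the function names are my own):

Code:
def sphere_to_plane(x, y, z):
    """Project a point of the unit sphere (z != 1) onto the plane z = 0."""
    return (x / (1 - z), y / (1 - z))

def plane_to_sphere(u, v):
    """Inverse projection: send a point of the plane back onto the sphere."""
    d = 1 + u*u + v*v
    return (2*u / d, 2*v / d, (u*u + v*v - 1) / d)

# Points near the north pole land farther and farther from the origin; as
# z -> 1 the image runs off to infinity, which is why the pole itself is
# taken as the single "point at infinity" of the extended complex plane.
for z in (0.9, 0.99, 0.999):
    x = (1 - z*z) ** 0.5           # a point on the unit sphere at height z
    print(z, sphere_to_plane(x, 0.0, z))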
 
  • #28
Are you talking about going from Reals to Extended Reals?
 
  • #29
Well, the article I read was about extending the field of complex numbers.
 
