# Math Myth: 1+1=2

From @fresh_42's Insight
https://www.physicsforums.com/insights/10-math-things-we-all-learnt-wrong-at-school/

This is funny because you are currently reading this text on a device that uses ##1+1=0.## When we switch a light off we use this rule: switching twice changes nothing, i.e. equals zero. Our clocks measure hours from ##1## to ##12## and then start again, so effectively ##12=0## and ##3\cdot 4 = 2\cdot 6 = 0.## See, it depends on the context. There are all kinds of number systems: binary for the computer, duodecimal for clocks, the Babylonians used sexagesimal, and the Mayas used cycles of ##1,872,000## days. You won't convince your local grocery store that twice as much is actually zero, but you use it while you read these lines.

## 12=0? 24=0? 60=0?

Homework Helper
Whether modulo arithmetic is being used depends on the context and whether it has been explicitly specified.
In software engineering 1+1 normally specifies normal arithmetic - a "full add".
Of course, it can also refer to an "exclusive or".

They might as well include 16384+16384 = -32768 (normal signed 16-bit arithmetic - with overflow ignored).

"1+1=2". Four symbols. No intrinsic meaning. Try Peano's axioms - he did not define "+" or "2". It can be done (see https://en.wikipedia.org/wiki/Peano_axioms ), but it takes time and patience.

Mentor
In software engineering 1+1 normally specifies normal arithmetic - a "full add".
Of course, it can also refer to an "exclusive or".
I haven't seen "+" used to represent "exclusive or", but I have seen it used to represent "inclusive or" many times. One symbol that is sometimes used, among several others, is ⊕.
See https://en.wikipedia.org/wiki/Exclusive_or

Mentor
There are all kinds of number systems: binary for the computer, duodecimal for clocks
In any number system, the digits used range from 0 to one less than the base indicated by the name of the system. In binary, the digits are 0 and 1; in octal, the digits are 0, 1, 2, ..., 7; in hexadecimal (base-16), the digits are 0, 1, 2, ..., 8, 9, A, B, C, D, E, F. All of these number systems are used in computers.
In a duodecimal system, the digits would run from 0 through 11, so clocks don't use a duodecimal system, at least not in the sense in which number bases are commonly used.

Staff Emeritus
Gold Member
Mark, in mod 2 arithmetic, where 0 is false and 1 is true, + is intrinsically an exclusive or.

Gold Member
Hi Greg:

I do not mean to criticize, but it seems to me that your "Myths" are simply an introduction of linguistic ambiguities (for example, those common in English) into the language of math. As in English, an ambiguous sentence has no ambiguity once a context has been established. It is generally the case that in math a context is known to the reader, for example, that the topic is whole-number arithmetic rather than finite group theory.

Regards,
Buzz

Hi Greg:

I do not mean to criticize, but it seems to me that your "Myths" are simply an introduction of linguistic ambiguities into the language of math. [...]
These are taken from @fresh_42's recent Insight, please read the intro :)
https://www.physicsforums.com/insights/10-math-things-we-all-learnt-wrong-at-school/

Mentor
Mark, in mod 2 math where 0 is false and 1 is true + is intrinsically an exclusive or.
The comment from @.Scott that I quoted had a context of software engineering, in which inclusive or and exclusive or are denoted with different symbols.

Homework Helper
The comment from @.Scott that I quoted had a context of software engineering, in which inclusive or and exclusive or are denoted with different symbols.

When it comes to symbols (i.e., "coding"), there is no single SW context. Software engineering is a service whose practitioners are expected to be (or become) as familiar with the application context as with the various SW and HW contexts.

If we are examining algorithms that are inherently binary, such as Hamming or CRC codes, then the syntax used certainly includes 1+1=0. For example, CRC-16-CCITT is denoted: $$x^{16}+x^{12}+x^5+1$$

If I am using C code to operate on HW registers, I might use bit-field values to read and manipulate those registers. So I might have a statement "OnlyOneReady=InRegA.ready+InRegB.ready;". And again, we have an "exclusive OR" meaning for the addition symbol.

So when the OP says "you currently read this text on a device that uses 1+1=0", I wouldn't count him as wrong. Since the "device" is probably just a display monitor, the OP's statement is likely referring to a H/W gate operation - and using a SW context for describing it. It's a mixed context, but such mixes are common in HW and SW manuals.

More generally, the OP is taking note of the fact that within many contexts, modular arithmetic is the rule - even when it is never explicitly stated.

Mentor
For example, CRC-16-CCITT is denoted:
The wikipedia article in your link uses the symbol ⊕ to make it explicit that the operation is exclusive or, and in the description of the algorithm, they write XOR, not OR.

Homework Helper
The Wikipedia article in your link uses the symbol ⊕ to make it explicit that the operation is exclusive or, and in the description of the algorithm, they write XOR, not OR.

The CCITT formula I specified is shown in the table about halfway through their article. It, along with many other polynomials, all of which use the "+" symbol, is in the "Polynomial representations" column.

The ⊕ symbol is also used in that article - but that was not what I was referencing.

In the HW context (electronics), the plus sign is often used to indicate the operation of an OR gate. But this is not really part of the "native" SW context. For example, if you go to this site:
https://www.electronics-tutorials.ws/logic/logic_3.html
and search for "Multi-input OR Gate", you will find this equation:
Q = (A+B)+(C+D)+(E+F)
Where the plus signs indicate inclusive ORs.

It's also shown on slide 3 on this webpage:
https://fdocuments.in/document/chap...gate-basic-operations-of-boolean-algebra.html

To my knowledge, there are no general-purpose software languages that use + for inclusive OR. The plus sign indicates addition, which in some cases can be modulo-2 arithmetic - an effective exclusive OR.

In general, each language has its binary operators to indicate Boolean and bit-wise operations.

Mentor
To my knowledge, there are no general-purpose software languages that use + for inclusive OR.
I agree. What are commonly used in programming languages, at least those that stem from C, are || for logical or, && for logical and, and ^ or xor for exclusive or.

At the hardware design level, the texts I've seen use + for or (inclusive), ⋅ or * for and, and various symbols, including ⊕, for exclusive or.

Gold Member
What about the strange example 1+1 = 110 ?

How do you add 1 + 11?

Here,

11 = 10 + 1

101 = 100 + 1

110 = 100 + 10

111 = 100 + 10 + 1 = 100 + 11

and so forth.

A question for computer experts: Can this number system be used in computers instead of the binary number system? Would it have any disadvantages or problems?

Gold Member
A question for computer experts: Can this number system be used in computers instead of the binary number system? Would it have any disadvantages or problems?
A long time ago (prior to the various binary memories) the IBM 650 had a decimal number system in which each decimal digit was represented by seven bits of data. It was a bi-quinary coding method, much like an abacus. The memory was a rotating magnetic drum holding 2000 ten-digit numbers. (The original post showed the seven-bit code for each digit.)

I think the primary advantage was avoiding undetected errors due to accidentally reversed bit values. The disadvantage was smaller memories than those later possible with binary systems, in which each bit was represented by a tiny doughnut-shaped magnetic core, the cores organized in a rectangular array. Later still came electronic chip memories.

Gold Member
What about the strange example 1+1 = 110 ? [...] A question for computer experts: Can this number system be used in computers instead of the binary number system? Would it have any disadvantages or problems?

If 1+1 = 110

then 11+1 = 10 + 1 + 1 = 10 + 110 = 10(1+11) = 100(1+11) ...

One can conclude that 1+11 = 0, but how would a computer figure this out by algorithm?

The usual addition algorithm I learned in school would go: 1+1 = 110, put down the 0 and carry the 11. This algorithm never stops.

Question: Does this say anything about needing a principle of induction in arithmetic?

Despite all of this, this number system uses only 1s and 0s, just like the binary number system.