# Can Basic Addition Reach Higher Level Mathematics? A Mathematician's Perspective

In summary, the conversation discusses the possibility of achieving all higher level mathematics functionality using only plain addition. While it is possible to relate multiplication to addition through the logarithmic function, there are limitations when it comes to dealing with exponentiation and abstract concepts such as sets. Additionally, the use of logical operators is essential in building more complex mathematical operations.

#### g4143

Hello, mathematicians,

I have a very simple question. Well, it's simple to state. Can you arrive at all the higher-level mathematics with plain addition? Here's an example of what I mean: can you calculate things like trigonometry and logarithms with just addition?

I ask this question because I've been following and participating in a debate which claims that you can achieve all the higher level mathematics functionality with just addition.

Myself, I fall on the side that basic addition can achieve all the higher-level functions, and I'm curious what a professional mathematician would say.

Thank you for any replies.

I'm at a loss as to how addition would give you category theory, so I'll say "no". Much of higher mathematics doesn't even concern itself with numbers.

Hey g4143 and welcome to the forums.

You do actually raise, indirectly, an interesting point about the duality between multiplication and addition, which is seen in the logarithmic function, where log(xy) = log(x) + log(y). In this sense there is actually a bridge relating multiplication purely to addition.
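As a quick numerical illustration of that bridge (the variable names are mine; `exp` here plays the role of an antilog table), a product can be recovered from a single addition of logarithms:

```python
import math

# The identity log(xy) = log(x) + log(y) reduces one multiplication
# to one addition, at the cost of log/antilog lookups.
x, y = 7.0, 12.0
product_via_addition = math.exp(math.log(x) + math.log(y))
print(product_via_addition)  # close to 84.0
```

This is how slide rules and log tables were actually used: look up two logarithms, add, look up the antilog.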

However, with that said, even if you were able to eventually reduce every log(xy) down to summations where x and y are real numbers (or even complex ones), you might want to think about exponentiation: log(x^y) = y*log(x). Although you could take a double logarithm to convert that product to an addition, this raises the issue of how to deal with the extra layer.

I am being generous in the above and assuming you have some way to eventually break any product of real numbers into an addition, so that you can calculate quantities without resorting to multiplication. I think this is going to involve a lot of abstract symbolic relations between things, to get around the fact that calculating the logarithm explicitly requires a power series involving powers (i.e., repeated multiplication) if you want to carry out a specific finite computation.

The analogue I am making can be seen in what people do with the sine and cosine functions: we know some exact expressions for sine and cosine (for various fractions of pi, for example), and we can combine the various sine and cosine identities to get an exact expression for a particular quantity without ever using the power series. If you wanted to do the above, you would need to do exactly the same thing, but for logarithms.

Interestingly enough, there is a direct connection between exponentiation and the trig functions, so there is actually a bridge that you can build and see whether you can cross.

The problem, though, is that we currently have no known way to get the exact value of the sine of an arbitrary real number in closed form, so if you want to pursue this, you would need to solve that problem first (I still think it's actually possible, but it's going to be rather difficult).

But even this would only address expressing arithmetic operations in terms of two operations, and much of it would require a rather complex symbolic framework.

The other thing you need to look at, as Number Nine hinted above, is the abstract material that does not correlate to arithmetic in any direct way: sets, and all the other abstract structures built on general sets.

Sets have a duality too, but it deals with intersections and unions, and sets don't work like numbers: they are a completely different way of looking at things, because sets do not have rank or order the way numbers do. You can introduce relations on various sets, but it is by no means required.

So since these sets have no notion of rank or comparison in the arithmetic sense, you can't really apply that kind of thinking to everything built on sets, which is pretty much all of mathematics at some level.

I suggest you take a look at the set framework and the two operations of intersections and unions to get a feel of the ground-work for modern abstract mathematics and then consider that in light of your question.

Consider that in machine language (i.e., hexadecimal or binary) only addition and subtraction are possible. So at the root of all computer programs, the initial machine only adds and subtracts.

jimgram said:
Consider that in machine language (i.e., hexadecimal or binary) only addition and subtraction are possible. So at the root of all computer programs, the initial machine only adds and subtracts.

That is an interesting point.

In mathematics, though, the variability and nature of information is a little more subtle: we aren't always dealing with things that correspond to a numeric quantity.

You also have to consider the limits of what you can do with addition and subtraction: these operations are themselves built on more fundamental and flexible ones, the operations underlying Boolean algebra on binary numbers.

If you look at an adder circuit, the most basic adder is built from gates implementing AND and XOR (with an extra carry-in term if you are using a full adder as opposed to a half-adder).

If you wanted to, you could write your own adder if only you had the ability to perform the basic binary operations (AND and OR).

The truly fundamental instructions of anything binary-based (or equivalent in some form to a standard binary computer) are the logical operators, and you can derive every single instruction from them in various combinations (and indeed this is done in practice).
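As a sketch of that point, here is a toy ripple-carry adder built only from the bitwise AND, OR, and XOR gate operations on individual bits (the helper names and 8-bit width are my own choices for illustration):

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built only from AND, OR, and XOR gates."""
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def add_bits(x, y, width=8):
    """Ripple-carry addition of two unsigned integers, gate by gate."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_bits(13, 29))  # 42
```

Addition itself is thus derived from the logical operators, just as the post describes for hardware adders.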

Find me the area of a circle using basic addition.

Find me $\pi^2$ using basic addition.

Define $\sin(x)$ and $\log(x)$ using basic addition.

Define composition of functions using basic addition.

Describe the union of sets using basic addition.

If you can give a satisfactory answer to the previous five simple problems, then you might have a point.

Add $\pi$ to itself $\pi$ times.

Microprocessors consist of registers (a series of flip-flops) and memory. Parts of the circuit are dedicated to multiplication, division, squares, and roots (using addition). Along with tables, other parts are dedicated to complex functions (using the aforementioned sections for addition, multiplication, etc.). Taken alone, a register can only increment or decrement, hence the need to do it extremely fast.

micromass said:
Find me $\pi^2$ using basic addition.

Can you post a solution that gives the above quantity accurately, in the form (3.141592653589793238462643383279502884197169399...)^2, by any method?

chiro said:
There is a series expansion formula for this g4143: just slightly adjust the basel problem.
http://en.wikipedia.org/wiki/Basel_problem

My rudimentary math skills are probably apparent by now, but can you accurately represent pi^2 as a number in the form 9.86960440109...?
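The Basel series chiro links to can be partially summed to watch that value emerge; a rough sketch (the cutoff `N` is an arbitrary choice of mine), keeping in mind that the series itself relies on division and a limit, which is exactly the objection raised later in the thread:

```python
# Partial sum of the Basel series: pi^2 = 6 * sum_{n>=1} 1/n^2.
# The error of the partial sum is roughly 6/N, so N must be large.
N = 1_000_000
pi_squared = 6 * sum(1.0 / (n * n) for n in range(1, N + 1))
print(pi_squared)  # approaches 9.8696044...
```

So the decimal expansion can be approximated to any desired accuracy, but never written down exactly by a finite number of additions.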

jimgram said:
Add $\pi$ to itself $\pi$ times.

What does "$\pi$ times" mean?

OP, do you know what a computable number is?

jimgram said:
Add $\pi$ to itself $\pi$ times.

Please define what it means to add something $\pi$ times.

g4143 said:
Can you post a solution that solves the above equation so that the answer is in the form (3.141592653589793238462643383279502884197169399...)^2 accurately by any method?

I wanted you to define $\pi^2$ using just basic addition. I don't want you to actually find the number. Just define it.

And please don't come with silliness such as adding things $\pi$ times, without defining what that means.

If 'x' = 3, then you could find 3^2 by adding x to itself x times (3+3+3=9)

jimgram said:
If 'x' = 3, then you could find 3^2 by adding x to itself x times (3+3+3=9)

Sure, and what if $x=\pi$? It doesn't seem to work well then, does it?

There's always a level of accuracy (e.g., number of decimal places) that must be accepted. What's your point?

jimgram said:
There's always a level of accuracy (e.g., number of decimal places) that must be accepted. What's your point?

Not in math. Everything is precise and accurate in math.

More: classify, up to homeomorphism, all topological 2-manifolds using only addition.

$\pi^2 = 3\pi + \frac{\pi}{10} + \frac{4\pi}{100} + \frac{\pi}{1000} + \cdots$

forever onwards

Okay, if you don't like $\pi$, how would you add 1.5 to itself 1.5 times? micromass's question has nothing to do with "level of accuracy".

jimgram said:
Consider that in machine language (I.E. hexadecimal or binary) only addition and subtraction are possible. So at the root of all computer programs, the initial machine only adds and subtracts
A bit late, but a couple of other members have dredged this thread up.

Many processors have MUL and DIV operators as part of their instruction sets, so machine code produced by these processors is not limited to only addition and subtraction. In any case, there really is no such thing as "machine language" per se. Each processor type produces its own sort of machine language.

Mark44 said:
Many processors have MUL and DIV operators as part of their instruction sets, so machine code produced by these processors is not limited to only addition and subtraction.
Even processors that don't have these will at least have right-shift and left-shift instructions (i.e., multiply/divide by powers of 2), from which you can, somewhat painfully, build functions to multiply or divide by arbitrary numbers.
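That shift-and-add idea can be sketched in a few lines; a toy version for unsigned integers (my own sketch, not any particular processor's microcode):

```python
def shift_add_mul(x, y):
    """Multiply two unsigned integers using only shifts and addition."""
    result = 0
    while y:
        if y & 1:        # current bit of y is set:
            result += x  #   add the current shifted copy of x
        x <<= 1          # double x (left shift)
        y >>= 1          # move to the next bit of y
    return result

print(shift_add_mul(19, 7))  # 133
```

This is the classic binary long-multiplication algorithm, and it terminates after one step per bit of `y`.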

Mark44 said:
A bit late, but a couple of other members have dredged this thread up.

Many processors have MUL and DIV operators as part of their instruction sets, so machine code produced by these processors is not limited to only addition and subtraction. In any case, there really is no such thing as "machine language" per se. Each processor type produces its own sort of machine language.

Since the internal coding of numbers in any computer processor only supports a finite subset of the rational numbers, what a processor can do is irrelevant to the mathematical concept of multiplication.

HallsofIvy said:
Okay, if you don't like $\pi$, how would you add 1.5 to itself 1.5 times? micromass's question has nothing to do with "level of accuracy".

Machines have no particular use for decimal points. You register '15' (1111 binary or 0F hex), add it 15 times, then shift the decimal point twice.
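That fixed-point trick can be made concrete; a minimal sketch, assuming one decimal place per operand (the helper name and scale factor are mine): compute 1.5 × 1.5 by repeated addition on 15 × 15, then shift the point back two places.

```python
def fixed_point_mul(a, b, scale=10):
    """Multiply two one-decimal-place numbers via integer repeated addition."""
    ia, ib = round(a * scale), round(b * scale)  # 1.5 -> 15
    total = 0
    for _ in range(ib):          # repeated addition: ia added ib times
        total += ia
    return total / (scale * scale)  # shift the decimal point back

print(fixed_point_mul(1.5, 1.5))  # 2.25
```

Note that this only works because 1.5 has a finite decimal expansion; the trick has no analogue for $\pi$, which is the point being pressed above.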

lamball1 said:
$\pi^2 = 3\pi + \frac{\pi}{10} + \frac{4\pi}{100} + \frac{\pi}{1000} + \cdots$

forever onwards

Then you are making use of limits and not only of addition. This is my point.

AlephZero said:
Since the internal coding of numbers in any computer processor only supports a finite subset of the rational numbers, what a processor can do is irrelevant to the mathematical concept of multiplication.
I don't understand your point, unless possibly you were talking about real number multiplication, which I wasn't. The operations I mentioned, MUL and DIV, are Intel X86 opcodes that take integer operands. Floating point arithmetic operations usually use different opcodes, assuming they are actually supported by the processor.

My comment was in rebuttal to what jimgram said, which was that in machine language, only addition and subtraction were possible.

I think a professional mathematician would say the answer is no. (I'm an engineer).

In math there are often two basic operations: addition and multiplication. From these operations we can define other operations like subtraction, division, exponents, etc.
Now if you consider the real numbers, it's tempting to say that all operations can be defined from addition, because we can define multiplication in terms of addition: A*B is A added to itself B times.
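For non-negative integers that definition is easy to write down; a minimal sketch (my own), which also makes the failure for non-integer B explicit:

```python
def mul_by_addition(a, b):
    """Define a*b as a added to itself b times; only meaningful for integer b >= 0."""
    if not (isinstance(b, int) and b >= 0):
        raise ValueError("repeated addition is undefined for non-integer b")
    total = 0
    for _ in range(b):
        total += a
    return total

print(mul_by_addition(3, 3))  # 9
```

Asked to compute `mul_by_addition(2, 1.5)`, the definition simply has nothing to say, which is why "add it B times" breaks down beyond the integers.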

But this is not always possible. Consider 2x2 matrices: can you define matrix multiplication in terms of matrix addition?

It's true that we can define matrix multiplication and addition in terms of real-number operations, but when you consider more abstract groups this is not always the case.

There are also groups with a well-defined multiplication but no addition operation; braid groups, for example, fall into this category (I think).

micromass said:
Then you are making use of limits and not only of addition. This is my point.

But doesn't that also apply to adding pi to itself? Of course, irrational numbers cannot be computed exactly, but that is not an argument against the idea that all other operations can be based on simple addition and subtraction.

lamball1 said:
But doesn't that also apply to adding pi to itself? Of course, irrational numbers cannot be computed exactly, but that is not an argument against the idea that all other operations can be based on simple addition and subtraction.

Define function composition using addition. Keep in mind that you'll have to define the notion of a binary operation, as well as the notion of a function between sets, using only addition.

lamball1 said:
But doesn't that also apply to adding pi to itself? Of course, irrational numbers cannot be computed exactly, but that is not an argument against the idea that all other operations can be based on simple addition and subtraction.

The premise of this thread was to ask whether everything in mathematics can be defined through simple addition alone. You need limits to define addition of irrationals, and limits cannot be defined through simple addition alone. That is a counterexample. Thread over.