## Main Question or Discussion Point

Hello, mathematicians,

I have a very simple question. Well, it's simple to state. Can you arrive at all of higher-level mathematics with plain addition? Here's an example of what I mean: can you calculate things like trigonometric functions and logarithms with just addition?

I ask this question because I've been following and participating in a debate which claims that you can achieve all the higher level mathematics functionality with just addition.

Myself, I fall on the side that basic addition can achieve all the higher-level functions, and I'm curious what a professional mathematician would say.

Thank you for any replies.

I'm at a loss as to how addition would give you category theory, so I'll say "no". Much of higher mathematics doesn't even concern itself with numbers.

chiro
Hey g4143 and welcome to the forums.

You do actually raise, indirectly, an interesting point about the duality between multiplication and addition, which is seen in the logarithmic function: since log(xy) = log(x) + log(y), there is in this sense a bridge relating multiplication purely to addition.

However, with that said, even if you are eventually able to reduce any log(xy) down to summations where x and y are real numbers (or even complex ones), you should think about exponentiation: log(x^y) = y*log(x). Although you could take a double log to convert that to an addition, it raises the issue of how to deal with the multiplication that remains.
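As a hedged illustration of the log/exp bridge described above (a minimal sketch: Python's `math.log` and `math.exp` are stand-ins here, and they internally compute with far more than addition):

```python
import math

def multiply_via_logs(x, y):
    """Multiply two positive reals using the identity
    log(x*y) = log(x) + log(y), hence x*y = exp(log(x) + log(y)).
    The only arithmetic *we* perform is one addition, but exp and
    log themselves are evaluated from series that use multiplication."""
    return math.exp(math.log(x) + math.log(y))
```

So the bridge exists, but it only hides the multiplications inside the transcendental functions rather than eliminating them.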

I am being generous in the above and assuming you have some way to eventually break any product of real numbers into an addition, so that you can calculate quantities without resorting to multiplication. I think this is going to involve a lot of abstract symbolic relations between things, to get around the fact that explicitly calculating the logarithm requires a power series with powers (i.e. repeated multiplication) if you want to do a specific finite computation.

The analogue I'm making can be seen in what people do with the sine and cosine functions: we know some exact expressions for sine and cosine (at various fractions of pi, for example), and we can combine the various sine and cosine identities to get an exact expression for a particular quantity without ever using the power series. If you wanted to do the above, you would need to do the same thing, but for logarithms.
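To see concretely why the power-series route doesn't escape multiplication, here is a sketch of the Taylor series for sine (the function name and term count are illustrative, not from the thread): every term is built by multiplying.

```python
def sin_series(x, terms=15):
    """Taylor series: sin(x) = x - x^3/3! + x^5/5! - ...
    Each successive term is obtained from the previous one by
    multiplying by -x^2 / ((2n+2)(2n+3)), so even this 'sum'
    is riddled with multiplication and division."""
    total = 0.0
    term = x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total
```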

Interestingly enough, there is a direct connection between exponentiation and the trig functions, so there is actually a bridge that you can build and see if you can cross.

The problem, though, is that we don't currently have a known way to get the exact value of the sine of an arbitrary real number in exact form. If you want to do your thing, you need to solve this problem first (I still think it's actually possible, but it's going to be rather difficult).

But this would only address expressing arithmetic operations in terms of two operations, and a lot of it would require a rather complex symbolic framework.

The other thing you need to look at, as Number Nine hinted above, is the abstract material that does not correlate to arithmetic in any direct way, like sets and everything else built on general sets.

Sets have a duality too, but it deals with intersections and unions, and sets don't work like numbers: they are a completely different way of looking at things, because sets do not carry rank or order the way numbers do. You can introduce relations and other structure on various sets, but it is by no means required.

So since these sets have no notion of rank, or of comparison in the arithmetic sense, you can't really apply that kind of thinking to everything built on sets, which is pretty much all of mathematics at some level.

I suggest you take a look at the set framework and the two operations of intersection and union to get a feel for the groundwork of modern abstract mathematics, and then reconsider your question in that light.

Consider that in machine language (i.e. binary, often written in hexadecimal) only addition and subtraction are possible. So at the root of all computer programs, the machine only adds and subtracts.

chiro
Consider that in machine language (i.e. binary, often written in hexadecimal) only addition and subtraction are possible. So at the root of all computer programs, the machine only adds and subtracts.
That is an interesting point.

In mathematics, though, the variability and nature of information is a little more subtle: we aren't always dealing with things that correspond to a numeric quantity.

You also have to consider the limits of what you can do with addition and subtraction: these operations are themselves built on more fundamental and flexible ones, the operations of Boolean algebra on binary numbers.

If you look at an adder circuit, the most basic adder is derived from gates that implement AND and XOR (plus an extra term if you are using a full adder as opposed to a half adder).

If you wanted to, you could write your own adder given only the ability to perform the basic binary operations (AND, OR and XOR).
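A rough sketch of that idea: a ripple-carry adder built from AND, XOR and OR alone. The helper names and the 4-bit width are my own choices for illustration, not a real hardware description.

```python
def half_adder(a, b):
    """One-bit half adder built only from XOR and AND."""
    return a ^ b, a & b          # (sum, carry)

def full_adder(a, b, carry_in):
    """One-bit full adder: two half adders plus an OR for the carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2           # (sum, carry_out)

def add_4bit(x, y):
    """Ripple-carry addition of two 4-bit numbers using only the
    gate-level operations above (no built-in '+')."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result
```

So addition itself is not primitive here: it is assembled from the logical operations.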

The real fundamental instructions of anything binary-based, or equivalent in some form to the standard binary-based computer, are the logical operators, and you can derive every single instruction (and indeed this is done in practice) by combining them in various ways.

Find me the area of a circle using basic addition.

Find me $\pi^2$ using basic addition.

Define $\sin(x)$ and $\log(x)$ using basic addition.

Define composition of functions using basic addition.

Describe the union of sets using basic addition.

If you can give a satisfactory answer to the previous five simple problems, then you might have a point.

Add $\pi$ to itself $\pi$ times.

Microprocessors consist of registers (a series of flip-flops) and memory. Parts of the circuit can be dedicated to multiplication, division, squares and roots (all using addition). Along with lookup tables, other parts are dedicated to more complex functions (using the aforementioned sections for addition, multiplication, etc.). Taken alone, a register can only increment or decrement, hence the need to do it extremely fast.

Find me $\pi^2$ using basic addition.
Can you post a solution to the above so that the answer comes out in the form (3.141592653589793238462643383279502884197169399....)^2 accurately, by any method?

chiro
There is a series expansion formula for this, g4143: just slightly adjust the Basel problem.
http://en.wikipedia.org/wiki/Basel_problem
My rudimentary math skills are probably apparent by now, but can you represent $\pi^2$ as a number of the form 9.86960440109... accurately?
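For what it's worth, here is a sketch of the Basel-problem route chiro mentions (the function name and term count are illustrative). Note that even this "summation" needs division and squaring in every term, not just addition:

```python
def pi_squared(terms=100_000):
    """Approximate pi^2 via the Basel problem:
    sum_{n=1}^inf 1/n^2 = pi^2 / 6.
    Each term requires a squaring and a division, so the formula
    is a sum, but not a sum built from addition alone."""
    total = 0.0
    for n in range(1, terms + 1):
        total += 1.0 / (n * n)
    return 6.0 * total
```

The series also converges slowly, so any finite number of additions only ever gives an approximation to 9.8696044010893586...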

pwsnafu
Add $\pi$ to itself $\pi$ times.
What does "$\pi$ times" mean?

OP, do you know what a computable number is?

Add $\pi$ to itself $\pi$ times.
Please define what it means to add something $\pi$ times.

Can you post a solution that solves the above equation so that the answer is in the form (3.141592653589793238462643383279502884197169399....)^2 accurately by any method?
I wanted you to define $\pi^2$ using just basic addition. I don't want you to actually find the number. Just define it.

And please don't come with silliness such as adding things $\pi$ times, without defining what that means.

If 'x' = 3, then you could find 3^2 by adding x to itself x times (3+3+3=9)

If 'x' = 3, then you could find 3^2 by adding x to itself x times (3+3+3=9)
Sure, and what if $x=\pi$? It doesn't seem to work well then, does it?
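To make the objection concrete, here is a sketch (the function name is my own) showing that "add x to itself n times" is only defined when the count is a whole number:

```python
def add_n_times(x, n):
    """Compute x + x + ... + x with n summands, using only addition.
    This only makes sense for a non-negative integer n, which is
    exactly the objection: the recipe has no meaning for n = pi
    or n = 1.5."""
    if not isinstance(n, int) or n < 0:
        raise ValueError("repeated addition needs a whole-number count")
    total = 0
    for _ in range(n):
        total += x
    return total
```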

There's always a level of accuracy (e.g. a number of decimal places) that must be accepted. What's your point?

There's always a level of accuracy (e.g. a number of decimal places) that must be accepted. What's your point?
Not in math. Everything is precise and accurate in math.

More: classify, up to homeomorphism, all topological 2-manifolds using only addition.

$\pi^2 = 3\pi + (\pi/10) + (4\pi/100) + (\pi/1000) + \cdots$

forever onwards
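A sketch of that digit-by-digit idea (the truncated digit list of $\pi$ is an assumption of this sketch): every partial sum falls short of $\pi^2$, and each term still hides a multiplication by a digit and a division by a power of ten.

```python
import math

# First decimal digits of pi, truncated: 3.14159265358979...
digits = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]

approx = 0.0
for k, d in enumerate(digits):
    approx += d * math.pi / 10**k   # d * pi is still a multiplication

# After any finite number of additions, approx is strictly less than
# pi^2; only the limit of the infinite series equals it.
```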

HallsofIvy
Homework Helper
Okay, if you don't like $\pi$, how would you add 1.5 to itself 1.5 times? micromass's question has nothing to do with "level of accuracy".

Mark44
Mentor
Consider that in machine language (i.e. binary, often written in hexadecimal) only addition and subtraction are possible. So at the root of all computer programs, the machine only adds and subtracts.
A bit late, but a couple of other members have dredged this thread up.

Many processors have MUL and DIV instructions in their instruction sets, so machine code for these processors is not limited to addition and subtraction. In any case, there really is no such thing as "machine language" per se: each processor family has its own machine language.

jbunniii
Homework Helper
Gold Member
Many processors have MUL and DIV instructions in their instruction sets, so machine code for these processors is not limited to addition and subtraction.
Even processors that lack these will at least have right-shift and left-shift instructions (i.e. multiply/divide by powers of 2), from which you can, somewhat painfully, build routines to multiply or divide by arbitrary numbers.
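A sketch of the shift-and-add scheme described above, for non-negative integers (the function name is mine; real processors do this in hardware or microcode):

```python
def mul_shift_add(x, y):
    """Multiply two non-negative integers using only shifts,
    a bit test (AND), and addition -- roughly what a processor
    without a MUL instruction does, one partial product per bit."""
    result = 0
    while y:
        if y & 1:            # is the lowest bit of y set?
            result += x      # accumulate the shifted partial product
        x <<= 1              # x * 2
        y >>= 1              # y // 2
    return result
```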

AlephZero
Homework Helper
A bit late, but a couple of other members have dredged this thread up.

Many processors have MUL and DIV instructions in their instruction sets, so machine code for these processors is not limited to addition and subtraction. In any case, there really is no such thing as "machine language" per se: each processor family has its own machine language.
Since the internal encoding of numbers in any computer processor supports only a finite subset of the rational numbers, what a processor can do is irrelevant to the mathematical concept of multiplication.

Okay, if you don't like $\pi$, how would you add 1.5 to itself 1.5 times? micromass's question has nothing to do with "level of accuracy".
Machines have no particular use for decimal points. You take 15 (1111 in binary, 0F in hex), add it to itself 15 times to get 225, then shift the decimal point two places to get 2.25.

$\pi^2 = 3\pi + (\pi/10) + (4\pi/100) + (\pi/1000) + \cdots$

forever onwards
Then you are making use of limits and not only of addition. This is my point.