1. Sep 21, 2012

### g4143

Hello, mathematicians,

I have a very simple question. Well, it's simple to state: can you arrive at all of higher mathematics with plain addition? Here's an example of what I mean: can you calculate things like trigonometric functions and logarithms with just addition?

I ask this question because I've been following and participating in a debate over the claim that you can achieve all higher-level mathematical functionality with just addition.

Myself, I fall on the side that basic addition can achieve all the higher-level functions, and I'm curious what a professional mathematician would say.

Thank you for any replies.

2. Sep 21, 2012

### Number Nine

I'm at a loss as to how addition would give you category theory, so I'll say "no". Much of higher mathematics doesn't even concern itself with numbers.

3. Sep 21, 2012

### chiro

Hey g4143 and welcome to the forums.

You do, indirectly, raise an interesting point about the duality between multiplication and addition, which shows up in the logarithm: log(xy) = log(x) + log(y). In this sense there really is a bridge relating multiplication purely to addition.

However, with that said: even if you could eventually reduce every log(xy) to a sum, where x and y are real (or even complex) numbers, you might want to think about exponentiation. We have log(x^y) = y*log(x), and although you could take a double logarithm to turn that product into an addition, this raises the issue of how to deal with the resulting expression.

I am being generous in the above and assuming you have some way to break any product of real numbers into an addition, so that you can calculate quantities without ever multiplying. Even so, I think this is going to involve a lot of abstract symbolic manipulation, to get around the fact that explicitly computing a logarithm uses a power series whose terms involve powers (i.e. repeated multiplication) if you want a specific finite computation.
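For non-negative integers, at least, the reduction is easy to make concrete: multiplication is repeated addition, and exponentiation is repeated multiplication. A minimal Python sketch (function names are mine), which also makes the difficulty above visible: the repeat counts must be integers, so the scheme has nothing to say about an irrational exponent.

```python
def mul(a, n):
    """Multiply a by a non-negative integer n using only addition."""
    total = 0
    for _ in range(n):   # n must be a non-negative integer: this is where pi breaks the scheme
        total += a
    return total

def power(a, n):
    """Raise a non-negative integer a to a non-negative integer power n,
    using only mul (and hence only addition)."""
    result = 1
    for _ in range(n):
        result = mul(result, a)
    return result
```

For integer inputs this works fine, e.g. `power(3, 4)` gives `81`; there is simply no way to pass `math.pi` as a repeat count.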

The analogue I have in mind is what people do with the sine and cosine functions: we know exact expressions for sine and cosine at certain arguments (various fractions of pi, for example), and we can combine the standard identities to get exact expressions for particular values without ever touching the power series. If you wanted to carry out your program, you would need to do exactly the same thing for logarithms.

Interestingly enough, there is a direct connection between exponentiation and the trig functions, so there is a bridge you can try to build and cross.

The problem is that we currently have no known way to get the exact value of the sine of an arbitrary real number in closed form, so if you want to pursue this, you would need to solve that problem first. (I still think it may be possible, but it would be rather difficult.)

But even this would only address expressing the arithmetic operations in terms of two operations, and much of it would require a very elaborate symbolic framework.

The other thing you need to look at, as Number Nine hinted above, is the abstract material that does not correlate directly with arithmetic: sets, and everything else built on top of general sets.

Sets have a duality of their own, but it involves intersections and unions, and sets do not behave like numbers: they are a completely different way of looking at things, because sets carry no rank or order the way numbers do. You can impose relations and other structure on a set, but nothing requires you to.

So, since these sets carry no notion of rank or comparison in the arithmetic sense, you can't really apply that kind of thinking to everything built on sets, which at some level is pretty much all of mathematics.

I suggest you take a look at the set-theoretic framework and the two operations of intersection and union, to get a feel for the groundwork of modern abstract mathematics, and then reconsider your question in that light.
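The union/intersection duality mentioned above can be seen concretely with Python's built-in sets. This is just an illustration of De Morgan's laws, where complementation swaps the two operations; the universe `U` is an arbitrary choice for the example:

```python
A = {1, 2, 3}
B = {3, 4}
U = {1, 2, 3, 4, 5}  # a small "universe" to take complements within

def complement(s):
    """Complement relative to the universe U."""
    return U - s

# De Morgan's laws: complementing swaps union and intersection,
# the set-theoretic analogue of the multiplication/addition duality.
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
```

Note that nothing here resembles addition: the two operations and the duality between them live entirely in the world of sets.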

4. Sep 21, 2012

### jimgram

Consider that in machine language (i.e. hexadecimal or binary) only addition and subtraction are possible. So at the root of every computer program, the underlying machine only adds and subtracts.

5. Sep 21, 2012

### chiro

That is an interesting point.

In mathematics, though, the variety and nature of the information is a little more subtle: we aren't always dealing with things that correspond to a numeric quantity.

You also have to consider the limits of what you can do with addition and subtraction: those operations are themselves built on more fundamental and flexible ones, the operations of Boolean algebra on binary numbers.

If you look at an adder circuit, the most basic adder is derived from AND and XOR gates (with an extra term if you are using a full adder rather than a half adder).
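That gate-level description translates directly into code. Here is a minimal Python sketch (function names are mine) of a half adder, a full adder, and a 4-bit ripple-carry adder, built only from bitwise AND, OR, and XOR:

```python
def half_adder(a, b):
    """Sum and carry of two bits, using only an XOR and an AND gate."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Full adder: two half adders plus an OR to merge the two carries."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add4(x, y):
    """Ripple-carry addition of two 4-bit numbers, gate by gate."""
    carry, total = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= bit << i
    return total  # wraps modulo 16, like a real 4-bit register
```

So `add4(5, 6)` produces `11`, while `add4(9, 9)` wraps around to `2`, just as the hardware would.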

If you wanted to, you could write your own adder given only the ability to perform the basic logical operations (AND, OR, and NOT, say).

The truly fundamental instructions of anything that is binary-based, or equivalent in some form to a standard binary computer, are the logical operators; every single instruction can be derived (and in practice is derived) from various combinations of them.

6. Sep 21, 2012

### micromass

Find me the area of a circle using basic addition.

Find me $\pi^2$ using basic addition.

Define $\sin(x)$ and $\log(x)$ using basic addition.

Define composition of functions using basic addition.

Describe the union of sets using basic addition.

If you can give a satisfactory answer to the previous five simple problems, then you might have a point.

7. Sep 21, 2012

### jimgram

Add $\pi$ to itself $\pi$ times.

Microprocessors consist of registers (series of flip-flops) and memory. Parts of the circuit are dedicated to multiplication, division, squares, and roots (using addition); other parts, together with lookup tables, are dedicated to more complex functions (using the aforementioned sections for addition, multiplication, and so on). Taken alone, a register can only increment or decrement, hence the need to do it extremely fast.
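The increment/decrement picture can be sketched in a few lines of Python, assuming non-negative integer operands throughout (irrational operands like $\pi$ are exactly where this model stops):

```python
def add(a, b):
    """Addition of non-negative integers using only increment and decrement."""
    while b > 0:
        a += 1   # increment the accumulator
        b -= 1   # decrement the counter
    return a

def mul(a, b):
    """Multiplication as repeated addition, as described above."""
    total = 0
    for _ in range(b):   # b must be a non-negative integer
        total = add(total, a)
    return total
```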

8. Sep 21, 2012

### g4143

Can you post a solution that solves the above, so that the answer comes out in the form (3.141592653589793238462643383279502884197169399...)^2, accurately, by any method?

9. Sep 21, 2012

### chiro

10. Sep 21, 2012

### g4143

My rudimentary math skills are probably apparent by now, but can you represent $\pi^2$ as a number of the form 9.86960440109..., accurately?

11. Sep 21, 2012

### pwsnafu

What does "$\pi$ times" mean?

OP, do you know what a computable number is?

Last edited: Sep 21, 2012
12. Sep 21, 2012

### micromass

Please define what it means to add something $\pi$ times.

Last edited: Sep 21, 2012
13. Sep 21, 2012

### micromass

I wanted you to define $\pi^2$ using just basic addition. I don't want you to actually find the number. Just define it.

And please don't come with silliness such as adding things $\pi$ times, without defining what that means.

14. Sep 21, 2012

### jimgram

If 'x' = 3, then you could find 3^2 by adding x to itself x times (3+3+3=9)

15. Sep 21, 2012

### micromass

Sure, and what if $x=\pi$? It doesn't seem to work well then, does it?

16. Sep 21, 2012

### jimgram

There's always a level of accuracy (e.g. a number of decimal places) that must be accepted. What's your point?

17. Sep 21, 2012

### micromass

Not in math. Everything is precise and accurate in math.

18. Sep 21, 2012

### Number Nine

More: Classify, up to homeomorphism, all topological 2-manifolds using only addition.

19. Mar 18, 2013

### lamball1

$$\pi^2 = 3\pi + \frac{\pi}{10} + \frac{4\pi}{100} + \frac{\pi}{1000} + \cdots$$

forever onwards
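Taken as a limit, this series does converge to $\pi^2$, since the coefficients are just the decimal digits of $\pi$ (so the bracketed sum is $\pi \cdot \pi$). A quick numerical check in Python, using `math.pi` as the source of the digits:

```python
import math

pi = math.pi
# Decimal digits of pi: 3, 1, 4, 1, 5, 9, ...
digits = [int(c) for c in f"{pi:.12f}" if c != "."]

# Partial sum of 3*pi + pi/10 + 4*pi/100 + pi/1000 + ...
partial = sum(d * pi / 10**i for i, d in enumerate(digits))

assert abs(partial - pi**2) < 1e-9
```

Note, though, that every term still contains $\pi$ itself, so this rewrites $\pi^2$ as a sum involving $\pi$; it does not define it from addition alone, which was the point of the earlier objections.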

20. Mar 18, 2013

### HallsofIvy

Okay, if you don't like $\pi$, how would you add 1.5 to itself 1.5 times? micromass's question has nothing to do with "level of accuracy".