Proving a^0=1: Step-by-Step Guide

  • Thread level: B (Basic, high school)
  • Thread starter: Rijad Hadzic
  • Tags: Proof
In summary, this whole thread boils down to the following:
  • If you define ##a^0=1##, then ##a^n a^m = a^{n+m}## holds for all integers n and m.
  • If you assume the law of indices (that's what it's called) for positive integers, then you can prove ##a^n a^m = a^{n+m}## for all integers n and m.
  • You can prove that ##a^0=1## is equivalent to the law of indices.
  • Which one you do depends on whether you are thinking as a mathematician or as a physicist.
  • You can't prove that ##a^0=1## from nothing; at some point it rests on a definition.
  • #36
jack action said:
@jbriggs444 , @PeroK , @Stephen Tashi ,

Considering the OP's question AND the fact that this thread is classified as "B" [Thread Level: Basic (high school)], don't all of you think this is the kind of «proof» the OP is looking for?

The thread could and should have been all over by post #8, with the (simple) mainstream mathematical answer that it is essentially a definition. But, the thread got sidetracked after that. The subsequent complexities are a result of trying to answer the other posters.
 
  • #37
Rijad Hadzic said:
So at what point can I just define something without proving it's true?

Definitions are only "true" (or false) in a sociological sense. If a definition becomes accepted by most academic mathematicians, it becomes a "true" definition in the sense that the community of academic mathematicians agrees it's a useful definition.

Trying to determine the objective truth of a definition is a logically contradictory process. For example, if we are proposing a definition of "tropical geometry" then we can't debate whether "tropical geometry" really has this or that property before we say what we mean by "tropical geometry". People who have a preconceived idea of what a phrase means can object to another person's definition of that phrase. That sort of debate is not a debate about objective mathematical issues. It is a debate about personal preferences or the usefulness of one convention of vocabulary versus another.

That said, one often sees definitions that contain unstated assumptions - or definitions that lead one to make assumptions. For example, the definition
"##b^{-1}## is (defined to be) the multiplicative inverse of the number b"
might lead one to the assumption that each number b has a multiplicative inverse and that the multiplicative inverse of a number b is a unique number (i.e. that b cannot have two unequal multiplicative inverses). Neither of those statements is established merely by giving a definition of the notation "##b^{-1}##".

One approach to presenting definitions is to precede them by proving or assuming all the facts that must be incorporated in the definition. Another approach is to make the definition and admit that the thing defined may not exist, or that two different things may exist that both satisfy the definition. After the definition is made, you offer proof that the thing defined does exist - and if the word "the" has been used to imply that there is only one thing that satisfies the definition, you also prove that fact.
 
  • Like
Likes jbriggs444
  • #38
This is a long thread and I haven't read it all but I wanted to have my say as well.

I think 1 is the only plausible value one can give ##a^0## that is consistent with arithmetic. The definition is forced, IMHO, by the symmetry that the basic operations have: we can run them in reverse. Subtraction is just addition in reverse. Division is just multiplication in reverse. And running exponentiation in reverse: ##a^2 = a\cdot a##, ##a^1 = a##, ##a^0 = 1##. It can only be 1. So while this isn't a proof, it is at least justified by a holistic vision of the purity of arithmetic.
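That reverse-running can be checked in a couple of lines of Python (the base a = 3.0 is just an example value of mine):

```python
# Each step down in the exponent divides by a: a^2 -> a^1 -> a^0.
a = 3.0            # example base

value = a * a      # a^2 = 9.0
value = value / a  # a^1 = 3.0
value = value / a  # a^0 = 1.0
print(value)       # 1.0
```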
 
  • #39
let a>0, and let f be a continuous real valued function defined on all reals such that f(n) = a.a...a (n times) = a^n, for n = any positive integer. and assume that f(x+y) = f(x).f(y) for all reals x,y. then we claim that :

1) f(0) = 1,

proof: note that f(1) = a, and thus a = f(1) = f(0+1) = f(0).f(1) = f(0).a, hence f(0) = 1.

note also that then f(1/n)...f(1/n), (n times), = f(1/n+...+1/n) = f(1) = a, so f(1/n) = nth root of a = a^(1/n). Thus also f(m/n) = f(1/n)...f(1/n), (m times), = a^(1/n)...a^(1/n), (m times).

Thus f is determined on all rational numbers, and hence by continuity also on all real numbers.

thus if a > 0, there is at most one continuous real valued function f of a real variable such that f(n) = a^n, for all positive integers n, and such that f(x+y) = f(x).f(y) for all reals x,y.

the fact that there does in fact exist such a function follows from integration theory, i.e. the study of the integral of 1/x, and the inverse of this function.
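None of this is part of the argument above, but as a throwaway numeric sanity check (the example base a = 2.0 is mine), the function that integration theory supplies, ##f(x) = a^x##, does satisfy the claimed properties:

```python
import math

a = 2.0
f = lambda x: a ** x  # candidate continuous function with f(n) = a^n

# claim 1: f(0) = 1, forced by a = f(1) = f(0 + 1) = f(0) * f(1)
assert f(0) == 1.0

# the functional equation f(x + y) = f(x) * f(y), up to float rounding
for x, y in [(0.5, 1.5), (-2.0, 3.25), (0.1, 0.2)]:
    assert math.isclose(f(x + y), f(x) * f(y))

# f(1/n) is the n-th root of a: f(1/n)**n = a
for n in (2, 3, 5):
    assert math.isclose(f(1 / n) ** n, a)
```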
 
Last edited:
  • Like
Likes suremarc
  • #40
mathwonk said:
let a>0, and let f be a continuous real valued function defined on all reals such that f(n) = a.a...a (n times) = a^n, for n = any positive integer. and assume that f(x+y) = f(x).f(y) for all reals x,y. then we claim that :


But what is f(x) = a.a...a (x times) for all reals when x = -3 or x = 1.5, say, according to the first definition of f(n)? You haven't defined this!
 
  • #41
mathwonk said:
let a>0, and let f be a continuous real valued function defined on all reals such that f(n) = a.a...a (n times) = a^n, for n = any positive integer. and assume that f(x+y) = f(x).f(y) for all reals x,y. then we claim that :
@mathwonk Furthermore, why do you assume f(x+y) = f(x).f(y) for all reals x,y? Another interesting rule would be f(x+y) = f(x)/f(y). Then you get a whole new set of definitions for powers etc.
 
  • #42
Rada Demorn said:
@mathwonk Furthermore, why do you assume f(x+y) = f(x).f(y) for all reals x,y? Another interesting rule would be f(x+y) = f(x)/f(y). Then you get a whole new set of definitions for powers etc.
So ##f(x) = \frac{f(x/2)}{f(x/2)} = 1## for all x. That does not sound very interesting.
 
  • Like
Likes PeroK
  • #44
I have read the replies and would offer a different explanation that is not at the B level if full understanding is required. The following notes from Harvey Mudd give the full details if you know calculus:
https://www.math.hmc.edu/~kindred/math30g/lectures.html

I post it for reference only - the OP will not understand it - it is not at the B level - although IMHO it should be taught to all HS students so you can understand things better. Just my view - but it would solve a lot of problems IMHO.

Here is the cut-down version that is easy to understand. Given any positive number a, it is possible to rigorously define a function ##a^x##, where x is any real number, with the properties ##a^{x+y} = a^x a^y## and ##a^1 = a##. That is the advanced bit the above proves - don't worry about why it is so at the beginning level. Now ##a^n##, where n is an integer, is ##a^1 \cdot a^1 \cdots a^1## (n times), i.e. ##a^n = a \cdot a \cdots a## (n times). This is what beginners are taught raising something to a power means. But x in ##a^x## is a real number, so we can now see what such funny things as ##a^0## and ##a^{1/n}## are. ##a^x = a^{x+0} = a^x a^0##, and dividing by ##a^x## gives ##a^0 = 1##. Next, ##a = a^1 = a^{n/n} = a^{1/n + 1/n + \dots + 1/n} = a^{1/n} \cdot a^{1/n} \cdots a^{1/n} = (a^{1/n})^n##, so ##a^{1/n} = \sqrt[n]{a}##. And finally ##a^{x-x} = a^0 = 1 = a^x a^{-x}##, so ##a^{-x} = 1/a^x##. One can then define logs as the inverse of ##a^x## - that comes out of the calculus approach anyway, along with the properties of logarithms, which I will not detail here.
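For what it's worth, the three manipulations in that paragraph can be verified numerically (the values a = 5.0, x = 1.7 and n = 4 are arbitrary choices of mine):

```python
import math

a, x, n = 5.0, 1.7, 4  # arbitrary example values

# a^(x+0) = a^x * a^0, so dividing by a^x forces a^0 = 1
assert a ** (x + 0) / a ** x == 1.0
assert a ** 0 == 1.0

# a = a^(1/n + ... + 1/n) = (a^(1/n))^n, so a^(1/n) is the n-th root
assert math.isclose((a ** (1 / n)) ** n, a)

# a^(x-x) = a^0 = 1 = a^x * a^(-x), so a^(-x) = 1 / a^x
assert math.isclose(a ** -x, 1.0 / a ** x)
```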

The reason it's not done that way in elementary treatments is that you need calculus, which you normally do after this stuff. IMHO this makes it harder than it should be for students. I personally believe you should study calculus at the same time as this stuff, and things will be a lot easier. What grade to do it in? I believe, along with an intuitive treatment of calculus, you could do it in grade 10 - for good students, grade 9.

Thanks
Bill
 
Last edited:
  • #45
I'll make a suggestion here. I know the OP codes, and I think a lot of people on this thread do as well. First, ignore the word "proof" of ##a^0 =1##. Second, consider why it is quite natural. Really, just some basic thinking about coding (and perhaps groups) is all that's needed here.

I am sticking with integer values of ##k##. I spoiler tagged it due to length.

Start with addition. How would it be done with a for loop?

Python:
# example values so the snippet runs (any numbers work)
a_1, a_2, a_3 = 2, 5, 7

running_sum = 0

a_list = [a_1, a_2, a_3]
# k terms... 3 are shown here for the example
for a_i in a_list:
    running_sum += a_i
print("done")

That's the forward case. Now run it backward and try to "undo" (or invert) the addition. (Sometimes people call this subtraction.)

Python:
# example values so the snippet runs (any numbers work)
a_1, a_2, a_3 = 2, 5, 7

running_sum = 0

a_list = [a_1, a_2, a_3]

for a_i in a_list:
    running_sum += -a_i
print("done")

now if we homogenize the items in a_list to be identical, we can say that the magnitude of ##k## is the length of said list, and simply define the forward case as

for positive integer ##k##
##k\cdot a_1 = \vert k\vert \cdot a_1 := 0 + \sum_{i=1}^{\vert k \vert} a_1##

and the backward case, i.e. for negative integer ##k## is

##k\cdot a_1 = -\vert k\vert\cdot a_1 := 0 + \sum_{i=1}^{\vert k \vert} -a_1##.

Checking both definitions, we find that if ##k=0## they both evaluate to zero, and hence ##k\cdot a_1 := 0## when ##k = 0##. The reason for this check is that we've shown the operation to be different for ##k## vs ##-k##... but if ##k=0## then ##k = -k##, and hence we need to check for agreement.

Note that the final formula for a homogeneous sum involves multiplication, but to be clear, everything above was about sums.
- - - -
Now we consider products instead:

the forward case would be

Python:
# example values so the snippet runs (any nonzero numbers work)
a_1, a_2, a_3 = 2, 5, 7

running_product = 1

a_list = [a_1, a_2, a_3]

for a_i in a_list:
    running_product *= a_i
print("done")

now consider the backward case. How do we 'undo' (or invert) multiplication? As before, we can proceed to the case of homogenized terms.

for positive integer ##k##
##a_1^k = a_1^{\vert k\vert} := 1 \cdot \prod_{i=1}^{\vert k \vert} a_1##

and the backward case, for negative integer ##k##, is
##a_1^k = a_1^{-\vert k\vert} := 1 \cdot \prod_{i=1}^{\vert k \vert} a_1^{-1}##

The second approach does not work for ##a_1 = 0## but is fine for all ##a_1 \neq 0##. Now compare the definitions and evaluate at ##k=0##: both evaluate to 1, again for any ##a_1 \neq 0##.
- - - -
Notice that 0 is the identity element in sums and 1 is the identity element in products. In each case if you apply your operation ##\big \vert k\big \vert## times, you start by applying it to the identity element. If ##k## is zero then you retain the identity element for your operation. With products, a tiny bit of extra care is needed because multiplying by zero is not invertible. (To tie this back to part one, consider that ## a_1 \cdot 0 = 0 + \sum_{i=1}^{0} a_1 = 0 + \sum_{i=1}^{0} -a_1 = 0## for any legal ##a_1## in your field or whatever, hence you cannot work backward and figure out what the unique ##a_1## was before your operation).

That's really it.
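Python's own builtins agree with this identity-element picture: a sum over zero terms returns the additive identity 0, and a product over zero factors returns the multiplicative identity 1.

```python
import math
import operator
from functools import reduce

# zero iterations of "running_sum += a_i" leaves the initial 0 untouched
assert sum([]) == 0

# zero iterations of "running_product *= a_i" leaves the initial 1 untouched,
# which is exactly the k = 0 case of a^k
assert reduce(operator.mul, [], 1) == 1

# math.prod (Python 3.8+) bakes in the same convention for the empty product
assert math.prod([]) == 1
```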

PeroK said:
In this case, for example, you might write:

##2^0##

But, what does that mean? There's no immediate way to "multiply 2 by itself 0 times". Unlike ##2^1, 2^2, 2^3 \dots ##, which have a simple, clear definition.
My bigger point, I suppose, is that puzzles like these become non-issues if we start with the identity element and then iteratively apply the operation to it. (It also fixes some zero vs one indexing errors, which is a bonus.)

I know that both the OP and PeroK code. For whatever reason the point I made above seems peculiar when writing it as 'pen and paper math', but it seems obvious when writing for loops.

Again, not a 'proof', just one of many reasons why the definition is quite natural from certain viewpoints.
 
  • #46
jbriggs444 said:
So ##f(x) = \frac{f(x/2)}{f(x/2)} = 1## for all x. That does not sound very interesting.
@jbriggs444 No, you are wrong, because division is not associative: ##f((a+b+c)+d) = \frac{f(a)}{f(b)f(c)f(d)} \neq f((a+b)+(c+d)) = \frac{f(a)f(d)}{f(b)f(c)}##.

You may guess what happens if we try to evaluate f (1/n + ... + 1/n) with the aforementioned rule.
 
  • #47
Rada Demorn said:
@jbriggs444 No, you are wrong, because division is not associative: ##f((a+b+c)+d) = \frac{f(a)}{f(b)f(c)f(d)} \neq f((a+b)+(c+d)) = \frac{f(a)f(d)}{f(b)f(c)}##.
You seem to have skipped a few steps. The definition in question does not involve multiplication at all. But let us ignore that.

Let a = x/2.
Let b = x/2
By definition ##f(x) = f(a+b) = \frac{f(a)}{f(b)}##
But a=b so f(a) = f(b) and ##\frac{f(a)}{f(b)}=1##

You are correct that division is not normally associative. And you are correct that at first this would seem to be the kiss of death for the definition in question. You can't have a definition that requires f(x) to be two different things at the same time. However, it turns out that in any expression involving nothing but division and 1's, division is associative.

e.g. 1/(1/1) = (1/1)/1

Rada Demorn said:
You may guess what happens if we try to evaluate f(1/n + ... + 1/n) with the aforementioned rule.
No need to guess. It is equal to one.
 
Last edited:
  • #48
Rijad Hadzic said:
I'm trying to prove that ##a^0 = 1##
##\frac{a^n}{a^n} = a^{n-n}##
Since ##a^n = a^n##, we have ##\frac{a^n}{a^n} = 1## (just like 100/100 = 1).
##n-n## always equals 0,
so ##a^{n-n} = a^0 = 1##.

Hope this helps.
 
  • #49
On the argument about definition versus proof, is part of the problem that there's only one sensible extension of ##a^n## from shorthand for "##n## ##a##s multiplied together" to the case of ##n\leq 0##? Several ways have been shown of doing it in this thread, but they all lead to the same result. Would a counter-example be trying to extend division (which already gives us a way into non-integer numbers starting from the integers) to cover the case of 0/0? You could argue it should be 1 because it's something divided by itself, or 0 because it's zero times something, or infinite because it's something divided by zero. There's no single definition consistent with all the rules, so we leave it undefined (note "undefined" not "unprovable" or something of that nature) and try to take limits on a case-by-case basis?

Too many question marks there, but am conscious that I'm outside my field here.
 
  • Like
Likes PeroK
  • #50
Well, due to that problem, n can be replaced by 4, 5 or any natural number.
It still counts as a proof, right?
 
  • #51
Young physicist said:
Well, with my proof, that n can be replaced by 4,5 or any natural number.

What about this, as a variation of your proof?

##1 = \frac{a^0}{a^0} = a^{0-0} = a^0##

What do you think of that?
 
  • #52
Is there really much actual research in mathematics where alternative axiom systems are studied for something like this? I mean, making a new axiom system for the real numbers, and setting ##0x=0## ##\forall x\in\mathbb{R}## (which is usually left to be proven) as one of the basic rules, while leaving out one or more of the usual ones.
 
  • #53
Ibix said:
On the argument about definition versus proof, is part of the problem that there's only one sensible extension of ##a^n## from shorthand for "##n## ##a##s multiplied together" to the case of ##n\leq 0##? Several ways have been shown of doing it in this thread, but they all lead to the same result. Would a counter-example be trying to extend division (which already gives us a way into non-integer numbers starting from the integers) to cover the case of 0/0? You could argue it should be 1 because it's something divided by itself, or 0 because it's zero times something, or infinite because it's something divided by zero. There's no single definition consistent with all the rules, so we leave it undefined (note "undefined" not "unprovable" or something of that nature) and try to take limits on a case-by-case basis?

Too many question marks there, but am conscious that I'm outside my field here.

Or, to do the maths:

##1 = \frac{n}{n}##

Letting ##n = 0## gives:

##1 = \frac00##

Multiply by ##m##

##m = m \frac00 = \frac{m \cdot 0}{0} = \frac{0}{0} = 1##

Letting ##m = 2## gives

##2 = 1##

That is that proved as well.
 
  • #54
Ibix said:
Too many question marks there, but am conscious that I'm outside my field here.

What I am about to say is not at the B level. So please B level people ignore it - it involves calculus.

For those still with me, please read the Harvey Mudd notes I posted. It's basic, simple calculus. There is no mystery like 0/0 or anything like that. It's simply that you can rigorously define ##e^x## for all real x. It's very basic really.

Normally this stuff is done before calculus, so you must resort to definitions that make sense - but once you have the tool of calculus it falls out straight away. Pedagogically I prefer the calculus way - but either way is logically valid. It's just that I remember when I came across this stuff in the classroom I had the feeling of the first post: 'for some reason this does make sense to me but I have a feeling the result is not satisfying enough'. That's all there is to it - some thinking students find the usual way unsatisfying, me amongst them. I was fortunate in knowing calculus from self-study, so it wasn't that 'unsatisfying' - but many do not have that tool yet. I am also suggesting maybe they should be done at the same time - hand-wavy calculus is not hard and is much more satisfying than definitions. Just me - others it may not worry.

Thanks
Bill
 
  • #55
jbriggs444 said:
You seem to have skipped a few steps. The definition in question does not involve multiplication at all. But let us ignore that.

Let a = x/2.
Let b = x/2
By definition ##f(x) = f(a+b) = \frac{f(a)}{f(b)}##
But a=b so f(a) = f(b) and ##\frac{f(a)}{f(b)}=1##
Try this...

Let a = x/3
Let b = x/3
Let c = x/3

What do you get? Is it still equal to 1?
 
  • #56
bhobba said:
What I am about to say is not at the B level. So please B level people ignore it - it involves calculus.

For those still with me, please read the Harvey Mudd notes I posted. It's basic, simple calculus. There is no mystery like 0/0 or anything like that. It's simply that you can rigorously define ##e^x## for all real x. It's very basic really.

Normally this stuff is done before calculus, so you must resort to definitions that make sense - but once you have the tool of calculus it falls out straight away. Pedagogically I prefer the calculus way - but either way is logically valid. It's just that I remember when I came across this stuff in the classroom I had the feeling of the first post: 'for some reason this does make sense to me but I have a feeling the result is not satisfying enough'. That's all there is to it - some thinking students find the usual way unsatisfying, me amongst them. I was fortunate in knowing calculus from self-study, so it wasn't that 'unsatisfying' - but many do not have that tool yet. I am also suggesting maybe they should be done at the same time - hand-wavy calculus is not hard and is much more satisfying than definitions. Just me - others it may not worry.

Thanks
Bill

Again, to make a comment above the B level:

But, in the sense of mathematical development, where does calculus come from? You might, if teaching mathematics, expect to resolve a simple issue like ##a^0## before you have developed the full power of calculus. The properties of real numbers, after all, depend ultimately on the properties of the integers. In a pure mathematical sense, it cannot be the other way round. You cannot use the properties of real numbers to prove the properties of integers. Or, at least, you have to be very careful that your development of the real numbers did not assume the properties of the integers that you are subsequently using the real numbers to prove!
 
  • #57
PeroK said:
What about this, as a variation of your proof?

##1 = \frac{a^0}{a^0} = a^{0-0} = a^0##

What do you think of that?
Well, except zero.
 
  • #58
Young physicist said:
Well, except zero.

Why?
 
  • #59
PeroK said:
Why?
Well, we are trying to prove ##a^0 = 1##, right?
You can't use the result of a proof in the proof!
Just like you can't use a word in its own definition.
 
  • #60
Young physicist said:
Well, we are trying to prove ##a^0 = 1##, right?
You can't use the result of a proof in the proof!
Just like you can't use a word in its own definition.

I didn't use the result. Here's my proof again:

##1 = \frac{a^0}{a^0} = a^{0-0} = a^0##

The conclusion is that ##a^0 = 1##, but I haven't assumed that to begin with.

PS can you see what I did assume about ##a^0##? I made two implicit assumptions. Can you see what they were?
 
  • #61
PeroK said:
But, in the sense of mathematical development, where does calculus come from?

A rigorous development? That's analysis, and the motivation is the issues that can arise in calculus if you are not careful. By that stage you will know ##a^0 = 1## for sure. But a hand-wavy development doesn't require it, e.g.:
https://en.wikipedia.org/wiki/Calculus_Made_Easy

It's based on intuitive ideas, like very small numbers dx that you can, for all practical purposes, ignore - and you most certainly can ignore dx^2. You can interweave it into a pre-calculus or 'algebra 2 + trig' course, as they call it in the US, before a student tackles calculus proper with ideas of limits etc. made clearer. In Aus, calculus and pre-calculus are taught in an integrated way, and you most certainly could do it in such a course. I have to say, however, I don't think they do it like that here in Aus - I think they do it like the previously linked UNSW paper. Why? Well, I do not think most students are like the OP, dissatisfied with the definition route - only people like me who like to think about it feel a bit uneasy. It's in the back of your mind: why is this based on definitions - there must be something more going on - and indeed there is.

Thanks
Bill
 
  • #62
bhobba said:
A rigorous development? That's analysis, and the motivation is the issues that can arise in calculus if you are not careful. By that stage you will know ##a^0 = 1## for sure. But a hand-wavy development doesn't require it, e.g.:
https://en.wikipedia.org/wiki/Calculus_Made_Easy

It's based on intuitive ideas, like very small numbers dx that you can, for all practical purposes, ignore - and you most certainly can ignore dx^2. You can interweave it into a pre-calculus or 'algebra 2 + trig' course, as they call it in the US, before a student tackles calculus proper with ideas of limits etc. made clearer. In Aus, calculus and pre-calculus are taught in an integrated way, and you most certainly could do it in such a course. I have to say, however, I don't think they do it like that here in Aus - I think they do it like the previously linked UNSW paper. Why? Well, I do not think most students are like the OP, dissatisfied with the definition route - only people like me who like to think about it feel a bit uneasy. It's in the back of your mind: why is this based on definitions - there must be something more going on - and indeed there is.

Thanks
Bill

This is interesting. I've never questioned the need to define certain things in mathematics. I suppose having studied pure maths it just is such a part of mathematics.

In the end, though, even with your calculus, at some point you are defining something like:

##e^x = \Sigma_{n = 0}^{\infty} \frac{x^n}{n!}##

Then, you might want to prove this. But, you can only prove it if you have defined ##e^x## some other way. In the end, ##e^x## mathematically has to be defined in some way. And, whatever way you choose to define it, you cannot then prove that.

Anyway, with that definition of the exponential function, let's evaluate ##e^0##:

##e^0 = \Sigma_{n = 0}^{\infty} \frac{0^n}{n!} = \frac{0^0}{0!} + \frac{0^1}{1!} + \dots = \frac{0^0}{0!} ##

Hmm. Maybe that doesn't resolve all your misgivings after all!

PS And, just to make the explicit point: defining real powers fundamentally rests on already having a definition of integer powers (including the zeroth power). And that's why you cannot use ##a^x## to prove ##a^0 = 1##. You can verify that your advanced definition of ##a^x## is consistent with your previously defined integer powers, ##a^n##, but you still need your integer powers to develop the advanced mathematical machinery in the first place.
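As an aside on that series: Python's arithmetic happens to adopt the ##0^0 = 1## convention, which is exactly what makes a naive evaluation of the power series at ##x = 0## come out as 1 (the helper below is my own sketch, not from any post above):

```python
import math

def exp_series(x, terms=20):
    # partial sum of sum_{n >= 0} x^n / n!
    # the n = 0 term is x**0 / 0!, so evaluating at x = 0 relies on 0**0 == 1
    return sum(x ** n / math.factorial(n) for n in range(terms))

assert 0 ** 0 == 1               # Python's convention
assert exp_series(0.0) == 1.0    # the series gives e^0 = 1 only because of it
assert math.isclose(exp_series(1.0), math.e)
```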
 
Last edited:
  • Like
Likes S.G. Janssens
  • #63
Rada Demorn said:
Try this...

Let a = x/3
Let b = x/3
Let c = x/3

What do you get? Is it still equal to 1?
Yes. Recall that we are operating under the proposed property that ##f(a+b)=\frac{f(a)}{f(b)}##.

We have already established that if f(x) is defined at all (and is non-zero) then it must be equal to 1 everywhere. You ask about f(a+b+c) with a=b=c.

##f(a+b+c) = f((a+b)+c)= \frac{f(a+b)}{f(c)} = \frac{f(a)/f(b)}{f(c)}= \frac {1/1} {1} = \frac{1}{1} = 1##
##f(a+b+c) = f(a+(b+c)) = \frac{f(a)}{f(b+c)} = \frac{f(a)}{f(b)/f(c)} = \frac {1}{1/1} = \frac{1}{1} = 1##

As I had pointed out previously, division is associative if you only ever divide by one.

Edit: From the perspective of abstract algebra, if your only operator is "/" and your only operand is 1 then you have a pretty simple algebra. It is the algebraic group with one element and is clearly Abelian.
 
Last edited:
  • #64
jbriggs444 said:
Yes. Recall that we are operating under the proposed property that ##f(a+b)=\frac{f(a)}{f(b)}##.

We have already established that if f(x) is defined at all (and is non-zero) then it must be equal to 1 everywhere. You ask about f(a+b+c) with a=b=c.

##f(a+b+c) = f((a+b)+c)= \frac{f(a+b)}{f(c)} = \frac{f(a)/f(b)}{f(c)}= \frac {1/1} {1} = \frac{1}{1} = 1##
##f(a+b+c) = f(a+(b+c)) = \frac{f(a)}{f(b+c)} = \frac{f(a)}{f(b)/f(c)} = \frac {1}{1/1} = \frac{1}{1} = 1##

As I had pointed out previously, division is associative if you only ever divide by one.
No.

You are making a mistake!

In ##f(a+b+c) = f((a+b)+c)= \frac{f(a+b)}{f(c)} = \frac{f(a)/f(b)}{f(c)}= \frac {1/1}{1} = 1##, you are taking f(c) = 1, having only established this for the case f(c/2 + c/2) = 1. But what if c = c/3 + c/3 + c/3?

You better leave f(x) undetermined and admit that: ## f(x+x+x) = f((x+x)+x)= \frac{f(x+x)}{f(x)} = \frac{f(x)/f(x)}{f(x)}= \frac {1/1} {f(x)} = \frac{1}{f(x)}##
 
  • #65
Rada Demorn said:
you are taking f(c) = 1 having only noticed this before and only for the case f (c/2+c/2) = 1. But what if c = c/3 + c/3 + c/3?
You seem to have a fundamental misunderstanding.

If we demonstrate that f(c) must be equal to 1, then f(c) must be equal to 1. If we are able to demonstrate this without assuming anything about c then it follows that f(x) = 1 for all x. We have, in fact provided such a demonstration.

The fact that c = c/3+c/3+c/3 and that c = c/4+c/4+c/4+c/4, etc does not do anything to alter the fact that f(c) = 1.

It could have turned out that the proposed property that f(a+b)=f(a)/f(b) was inconsistent. It could have turned out that by evaluating f(c/3 + c/3 + c/3) we could demonstrate that the result would have to be something other than one. But that turns out not to be the case.

Edit: Note that we have a=b=c. You are trying to have f(a) and f(b) be well defined as 1 but have f(c) be undefined. That is silly.
 
Last edited:
  • #66
jbriggs444 said:
If we demonstrate that f(c) must be equal to 1, then f(c) must be equal to 1. If we are able to demonstrate this without assuming anything about c then it follows that f(x) = 1 for all x. We have, in fact provided such a demonstration.

You are assuming that c= c/2 + c/2 or x = x/2 + x/2. Period.

You have made a mistake and you are beating around the bush now!

I am not going to discuss this any further...
 
  • #67
Rada Demorn said:
You are assuming that c= c/2 + c/2 or x = x/2 + x/2.
No need to assume. Both(*) are obviously true and easily provable.

It is well known that 1/2 + 1/2 = 1 and that the real numbers form an algebraic field in which 1 is the multiplicative identity element. Fields satisfy the distributive property of multiplication over addition.

c = c*1 = c*(1/2+1/2) = c*1/2+c*1/2 = c/2+c/2
x = x*1 = x*(1/2+1/2) = x*1/2+x*1/2 = x/2+x/2

(*) Just one claim really. The variable name is irrelevant.
 
  • #68
To be able to 'prove' the title problem satisfactorily, we'd need to agree on what we know a priori. For the sake of discussion, let's operate in ##\mathbb R##. We have a seemingly simple explanation, ##1 = \frac{a^n}{a^n}##, which is certainly true for positive integers ##n##, but we run into a potential issue. As pointed out by @PeroK, what is ##\frac{2^0}{2^0}##? The knee-jerk reaction is to say that the numerator and denominator must be equal, so we have ##\frac{k}{k} = 1##, right? Not so fast! How can we be sure that ##a^0\in\mathbb R##? I.e., how do we know the extension ##a^0## is well-defined? This might be a wrong assumption, in which case it's not possible to utilise the known arithmetic as we'd hoped.

Perhaps there is hope. Let ##a>0## and pick a sequence ##q_n\in\mathbb Q##, ##n\in\mathbb N##, for which ##q_n\to 0##. It is shown that all elementary functions are continuous on their domains; therefore, due to continuity of ##x\mapsto a^x##, we have ##a^{q_n} \to a^0##. Now we sing the epsilon song and show for every ##\varepsilon >0## ..dudu..you all know the lyrics.. we have
[tex]
n>N(\varepsilon) \implies \lvert a^{q_n} - 1\rvert < \varepsilon .
[/tex]
I'm not entirely convinced of this approach either, since I appeal to some witchcraft about elementary functions and some topological jargon.
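That limit is easy to watch numerically (the base a = 7.0 and the sequence ##q_n = 1/n## are arbitrary choices of mine; this illustrates, but of course does not prove, the claim):

```python
a = 7.0  # arbitrary base > 0
for n in (1, 10, 100, 1000, 10000):
    q_n = 1.0 / n          # a rational sequence with q_n -> 0
    print(n, a ** q_n)     # a**q_n creeps toward a^0 = 1

assert abs(a ** (1.0 / 10000) - 1.0) < 1e-3
```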

The point is, we need to take something on faith. If all we have to work with is, say, ZF(C) then we can show the definition ##a^0=1 ## is consistent with what we already know. So, defining it as such (whenever the structure permits it) is a sensible thing to do.

It's very easy to accidentally get stuck in a circular argument here.

Utilising the axioms of ##\mathbb R##, we could also say ##a^n\cdot 1 = a^n = a^{n+0} = a^{n} \cdot a^0##. If ##a^0 > 1## there are problems, and likewise if ##a^0 < 1##. But this is cheating, since I already implicitly assume ##a^0## is a meaningful expression.
 
Last edited:
  • #69
PeroK said:
##e^x = \Sigma_{n = 0}^{\infty} \frac{x^n}{n!}##

It's easy. Define log(x) as ##\int_1^x \frac{dt}{t}##, and define e^x as its inverse. Differentiate log(a*b) with respect to, say, a and you get 1/a, which is the same as the derivative of log(a), so log(a*b) = log(a) + c. Let a = 1 and you have c = log(b). So log(a*b) = log(a) + log(b). Let z = e^a*e^b. Then log(z) = log(e^a) + log(e^b) = a + b, so e^a*e^b = z = e^log(z) = e^(a+b). e is by definition e^1. Note the range of log(x) is all the reals - its domain is the positive reals. Hence e^x is defined for all reals and its range is the positive reals. Also log(e^x) = x. Differentiating gives (e^x)' = e^x.
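That chain can be spot-checked numerically. The crude midpoint-rule integrator below is my own throwaway helper, not anything from the Harvey Mudd notes:

```python
import math

def log_via_integral(x, steps=100_000):
    # approximate the integral of 1/t dt from 1 to x (midpoint rule)
    h = (x - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

# the integral definition matches the library logarithm
assert math.isclose(log_via_integral(5.0), math.log(5.0), rel_tol=1e-6)

# log(a*b) = log(a) + log(b)
a, b = 2.0, 3.5
assert math.isclose(log_via_integral(a * b),
                    log_via_integral(a) + log_via_integral(b), rel_tol=1e-6)

# e^x is the inverse: log(e^x) = x
assert math.isclose(log_via_integral(math.exp(2.0)), 2.0, rel_tol=1e-6)
```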

Now there are a couple of issues here such as showing log(x) has an inverse. Those issues are taken up in the Harvey Mudd notes:
https://www.math.hmc.edu/~kindred/cuc-only/math30g/lectures-s24/lect14.pdf

A fully rigorous treatment would of course need to wait until analysis. But as an initial explanation its not hard at all. You only need a few ideas from calculus such as a hand-wavy treatment like Calculus Made Easy.

Just as an aside, in chapter 6 of Calculus Lite there is a very simple proof, at that hand-wavy level, of sin'(x) = cos(x) and cos'(x) = -sin(x). Using that and a fairly simple argument with first-order differential equations, you can show e^(ix) = cos(x) + i*sin(x), and you can then easily derive the trigonometric identities. I think a combined 'algebra 2 + trigonometry' type course including beginning calculus could easily be built around these ideas, and you will be prepared for an honors calculus course like the following:
https://www.amazon.com/dp/0691125333/?tag=pfamazon01-20

Then something like the following in grade 12:
http://matrixeditions.com/5thUnifiedApproach.html

Only for good students of course, but you will be well prepared for university level math or math based subjects.

Thanks
Bill
 
Last edited:
  • #70
PeroK said:
This is interesting. I've never questioned the need to define certain things in mathematics. I suppose having studied pure maths it just is such a part of mathematics.

There is no logical issue - it's done all the time. But in this case, what you are doing by these definitions is extending what ##a^x## is, in a reasonable way, via the property you would like, namely ##a^{x+y} = a^x a^y##. Note - you can only go as far as the rationals this way. It just screams that something more elegant should exist - that's the feeling I got from the final sentence of the original post. And it does - and it even exists for the reals, not just the rationals. IMHO that more elegant way is better. But to be fair, I don't think most students really care - only a few, like the OP, sense that surely there is something better than just defining things - and of those, even fewer want to pursue it, even though if they did they would learn a lot of more advanced math - namely calculus - which would be to their credit.

Thanks
Bill
 
