B Proving a^0=1: Step-by-Step Guide

  • B
  • Thread starter: Rijad Hadzic
  • Tags: Proof
AI Thread Summary
The discussion centers on proving that a^0 equals 1, with various participants exploring definitions and mathematical properties. One argument suggests defining a^n as the product of a multiplied n times, leading to the conclusion that a^0 must equal 1 to maintain consistency in exponent rules. Others point out that this approach is more a motivation for a definition than a formal proof, emphasizing that a^0 = 1 is a necessary definition for the laws of exponents to hold true for all integers. Additionally, the conversation touches on the importance of defining mathematical terms clearly before engaging in proofs. Ultimately, the consensus is that defining a^0 = 1 is logical and consistent within the framework of exponentiation.
Rijad Hadzic
I'm trying to prove that a^0 = 1

So if I define a^1 to be (a)(1)

and a^n to be (1)(a)(a)...(a), with the product taken n times,
and a^m to be (1)(a)(a)...(a), with the product taken m times,

a^n * a^m would then = [(1)(a)(a)...(a), taken n times] * [(1)(a)(a)...(a), taken m times]

which clearly gives a^n * a^m = a^(n+m)

if m = 0, a^n * a^0 = a^(n+0) = a^(n), so a^0 = 1

For some reason this does make sense to me, but I have a feeling the result is not satisfying enough.
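As a quick sanity check (a sketch only, using Python's built-in ** operator and arbitrary example values):

Python:
# Sketch: the law a^n * a^m = a^(n+m) with m = 0 reads a^n * a^0 = a^n,
# which only works if a^0 = 1. The values of a and n are arbitrary examples.
a, n = 3, 5

assert a**n * a**0 == a**(n + 0) == a**n
assert a**0 == 1
print("a^0 = 1 is consistent with a^n * a^m = a^(n+m)")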
 
Rijad Hadzic said:
which clearly gives a^n * a^m = a^(n+m)
But only for n and m both non-zero positive integers.
 
##a^0 =1 \ (a \ne 0)## by definition.

This definition is chosen so that you have:

##a^na^m = a^{n + m}##
 
  • Like
Likes Demystifier
jbriggs444 said:
But only for n and m both non-zero positive integers.
Damn it, I forgot to state this. But my proof still doesn't satisfy me for some reason.
 
PeroK said:
##a^0 =1 \ (a \ne 0)## by definition.

This definition is chosen so that you have:

##a^na^m = a^{n + m}##

Are you saying the proof is not valid if I start from a^n * a^m?
 
Rijad Hadzic said:
Are you saying the proof is not valid if I start from a^n * a^m?

I'm saying that, essentially, you cannot prove it, any more than you can prove that ##0! = 1##.

Sure, if you assume that

##a^na^m = a^{n + m}##

Then ##a^0 = 1## follows from that. But, that's more a motivation for a definition than a proof.
 
  • Like
Likes jim mcnamara and jbriggs444
PeroK said:
I'm saying that, essentially, you cannot prove it, any more than you can prove that ##0! = 1##.

Sure, if you assume that

##a^na^m = a^{n + m}##

Then ##a^0 = 1## follows from that. But, that's more a motivation for a definition than a proof.

Wouldn't

"So if I define a^1 to be = (a)(1)

and a^n to be = (1)(a)(a)...(a) with the product being taken n times
and a^m to be = (1)(a)(a)...(a) with the product being taken m times
"
allow it to constituted as a proof though, since I'm defining it in that way?
 
Rijad Hadzic said:
Wouldn't

"So if I define a^1 to be = (a)(1)

and a^n to be = (1)(a)(a)...(a) with the product being taken n times
and a^m to be = (1)(a)(a)...(a) with the product being taken m times
"
allow it to constituted as a proof though, since I'm defining it in that way?
A proof of what? That is an adequate definition of a^n for n an integer greater than zero. It says nothing about a^0.

It is also redundant. If you define a^n, you've defined a^m. The "n" and the "m" are dummy variables.

Edit: I do not think I understood what you were trying to express.

You want to define a^n as the result of evaluating "1(a)...(a)" where there are n a's. In the case of n=0, this means zero a's and it is just "1".

Under this definition, a^0 = 1 by definition (even when a=0) and there is nothing to prove.
 
Last edited:
  • Like
Likes Delta2
I think the appropriate way would be to assume that ##f(x)=a^x## is continuous and then show that the sequence

##a^{1/2},a^{1/4},a^{1/8},\dots##

or

##\sqrt{a},\sqrt[4]{a},\sqrt[8]{a},\dots##

has limit 1 when ##a\neq 0##.

It's just a matter of choosing some properties you want the exponential function to have, and then showing that the only value of ##a^0## that is logically compatible with those properties is 1.
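A quick numerical sketch of that limit (with ##a = 10## as an arbitrary example value; repeated square roots give the sequence above):

Python:
# Sketch: repeated square roots give a^(1/2), a^(1/4), a^(1/8), ...,
# which approach 1 -- consistent with a^0 = 1 if f(x) = a^x is continuous.
a = 10.0  # arbitrary example value, a != 0

x = a
for k in range(1, 11):
    x = x ** 0.5  # x is now a^(1/2^k)
    print(f"a^(1/2^{k}) = {x:.10f}")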
 
Last edited:
  • Like
Likes scottdave and Demystifier
  • #10
hilbert2 said:
I think the appropriate way would be to assume that ##f(x)=a^x## is continuous and then show that the sequence

##a^{1/2},a^{1/4},a^{1/8},\dots##

or

##\sqrt{a},\sqrt[4]{a},\sqrt[8]{a},\dots##

has limit 1 when ##a\neq 0##.

It's just a matter of choosing some properties you want the exponential function to have, and then showing that the only value of ##a^0## that is logically compatible with those properties is 1.

That's fine, but perhaps a physicist's view. The function ##a^x## for real ##x## is a more advanced construction. You might hope to resolve what ##a^0## should be while you are still dealing with integer powers.
 
  • #11
PeroK said:
That's fine, but perhaps a physicist's view. The function ##a^x## for real ##x## is a more advanced construction. You might hope to resolve what ##a^0## should be while you are still dealing with integer powers.

Yes, in my version of the proof the property ##a^{1/n} = \sqrt[n]{a}## is assumed as an "axiom", while a simpler choice could also be possible. I guess we're playing the game of "inventing the exponential for the first time" here, instead of relying on commonly accepted sets of rules.
 
  • #12
hilbert2 said:
Yes, in my version of the proof the property ##a^{1/n} = \sqrt[n]{a}## is assumed as an "axiom", while a simpler choice could also be possible. I guess we're playing the game of "inventing the exponential for the first time" here, instead of relying on commonly accepted sets of rules.

That "game" is called pure mathematics!
 
  • #13
PeroK said:
Sure, if you assume that

##a^na^m = a^{n + m}##
This just looks to me like the associative law of multiplication: ##(aa...a)_{n\text{ times}}(aa...a)_{m\text{ times}} = (aa...a)_{n+m\text{ times}}##
 
  • #14
FactChecker said:
This just looks to me like the associative law of multiplication: ##(aa...a)_{n\text{ times}}(aa...a)_{m\text{ times}} = (aa...a)_{n+m\text{ times}}##
That doesn't get you to ##a^0=1##.

The issue is, for example, that you could verify this for positive integers:

##a^3 a^2 = a^5##

But, if you try to verify this for any integers you have:

##a^2 a^{-2} = 1##

But, you can't verify that ##a^0 =1## as "a multiplied by itself 0 times" is not immediately defined. You have to define ##a^0 =1## in order for your law of indices to extend to integers.

And that's what is done.
 
  • Informative
Likes scottdave
  • #15
##a^{-1}=\frac{1}{a}##. Therefore ##a^1\cdot a^{-1}=a^{1-1}=a^0=\frac{a}{a}=1##.
 
  • Like
Likes scottdave, Wes Turner, Delta2 and 1 other person
  • #16
mathman said:
##a^{-1}=\frac{1}{a}##
Only if you define it thus.
 
  • #17
mathman said:
##a^{-1}=\frac{1}{a}##. Therefore ##a^1\cdot a^{-1}=a^{1-1}=a^0=\frac{a}{a}=1##.
If you, a priori, assume that a basic law holds when you extend the numbers involved, then:

##a^n b^n = (ab)^n##

Extends to:

##(-1)^{1/2}(-1)^{1/2} =1^{1/2} = 1##

Which is then a "proof" that ##-1 = 1##.
 
  • #18
In general, one can think of the expression ##A^B## as the set of all mappings ##f:B\to A##. For arbitrary cardinalities it holds that ##\left\lvert A^B\right\rvert = \left\lvert A\right\rvert ^{\left\lvert B \right\rvert}##. Thus ##a^0 = \{a\}^\emptyset = 1_{\mathbb N}##, as for any set ##A## there is exactly one mapping ##f:\emptyset\to A##.

One can also use this to semi-prove things like ##0! = 1 ##. Factorial represents the number of permutations, thus there is exactly one "empty permutation". It is safer to define ##0! = 1 ##.
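Both counts can be sketched in Python (the sets here are arbitrary example values): itertools.product(A, repeat=n) enumerates the mappings from an n-element set into A, and with n = 0 it yields exactly one mapping, the empty one.

Python:
from itertools import product
from math import factorial

# Sketch: model a mapping f: B -> A as a tuple of |B| values chosen from A.
A = [0, 1]  # example set with |A| = 2
for n in range(4):
    count = len(list(product(A, repeat=n)))
    print(f"|A|^{n} = {count}")  # prints 1, 2, 4, 8 -- note |A|^0 = 1

print(factorial(0))  # 1: exactly one "empty permutation"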

All of this depends on where you are operating. In a group, for instance, we just define ##g^0## to be equal to the identity as the expression ##g^0 ## doesn't really make sense, otherwise. In a semigroup ##s^0 ## might be an ill-defined array of symbols.

In the real numbers, one could think ##\log _a1 = 0 ## iff ##a^0 =1 ##. Which was first, the egg or the chicken? It's a lot of boring debate, to be honest. Let's just say that if the structure permits it, we define ##a^0 = 1 ## where the meanings of the symbols depend on context.
 
Last edited:
  • Like
Likes ZeGato, dextercioby and Delta2
  • #19
Maybe this is too basic, but hopefully it helps. Consider the sequence ##2, 4, 8, 16, 32, \ldots##. The ##n##-th term of this sequence is ##2^n##. In going from the ##n##-th term to the ##(n+1)##-st term, we multiply by 2. If we want this pattern to hold for all integers ##n##, then we are forced to have ##2^0 = 1##, ##2^{-1} = 1/2##, etc., so that our sequence is ##\ldots, 1/4, 1/2, 1, 2, 4, 8, \ldots##.

Another way of phrasing this is just that we want ##2^{n+1} = 2^1 \cdot 2^n##, since ##2^{a+b} = 2^a 2^b## is a law that we would like to keep.

Probably you can't prove that ##a^n## is the right thing for nonpositive exponents, since usually exponentials are defined first only for positive integer exponents; then you extend the definition to integer exponents in the above way, and then to rationals, and then to reals.
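That two-sided sequence is easy to sketch in code (exact arithmetic via fractions.Fraction; the starting exponent 3 is an arbitrary choice):

Python:
from fractions import Fraction

# Sketch: build the doubling sequence backwards -- each step to the left
# divides by 2, which forces 2^0 = 1, 2^(-1) = 1/2, and so on.
term = Fraction(2) ** 3  # start at 2^3 = 8
for n in range(3, -3, -1):
    print(f"2^{n} = {term}")
    term /= 2  # moving from 2^n down to 2^(n-1)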
 
  • #20
jbriggs444 said:
Only if you define it thus.
How else would you define it?
 
  • #21
mathman said:
How else would you define it?
In a conversation where one is discussing the definition of ##a^0##, introducing a definition of ##a^{-1}## seems premature.
 
  • #22
So at what point can I just define something without proving it's true, and have a result that's true regardless of whether the definition is true or not?

I guess a better way to say it is: when can I make an assumption?
 
Last edited:
  • #23
Rijad Hadzic said:
So at what point can I just define something without proving it's true, and have a result that's true regardless of whether the definition is true or not?

I guess a better way to say it is: when can I make an assumption?

The fundamental issue is that when you use some mathematical symbols, you must define what you mean by that arrangement of symbols. Until you know what you mean by those symbols, you cannot start to do mathematics using them. In this case, for example, you might write:

##2^0##

But, what does that mean? There's no immediate way to "multiply 2 by itself 0 times". Unlike ##2^1, 2^2, 2^3 \dots ##, which have a simple, clear definition.

My recommended approach is to define ##2^0 = 1## before you go any further. Then you know what those symbols mean.

Now, of course, you need to be careful that a definition is consistent with other definitions, and you need to understand the implications of a certain definition.

In this case, the only other candidate might be to define ##2^0 = 0##. But, when you look at the way powers work, you see that defining ##2^0 =1## is logical and consistent.
 
Last edited:
  • Like
Likes Rijad Hadzic, Delta2, Stephen Tashi and 1 other person
  • #24
PeroK said:
My recommended approach is to define ##2^0 = 1## before you go any further (why?). Then you know what those symbols mean.

Now, of course, you need to be careful that a definition is consistent with other definitions, and you need to understand the implications of a certain definition.

In this case, the only other candidate might be to define ##2^0 = 0##. But, when you look at the way powers work (you define how powers work afterward?), you see that defining ##2^0 =1## is logical and consistent.
But that means that you basically define what ##2^{-1}## means and then you adjust your "guess" to ##2^0 = 1##.
jbriggs444 said:
In a conversation where one is discussing the definition of ##a^0##, introducing a definition of ##a^{-1}## seems premature.
Not only is defining ##a^{-1}## not premature, it is essential. Otherwise, you are only guessing arbitrarily as @PeroK explained, and you modify your guess as you (finally) define ##a^{-1}##. ##a^0 = 1## makes sense only if ##a^{-n} =\frac{1}{a^n}##. Then a simple limit approach proves the definition of ##a^0##. Therefore, I tend to support @mathman 's approach:
mathman said:
##a^{-1}=\frac{1}{a}##. Therefore ##a^1\cdot a^{-1}=a^{1-1}=a^0=\frac{a}{a}=1##.
---------------------------------------
PeroK said:
If you, a priori, assume that a basic law holds when extend the numbers involved, then:

##a^n b^n = (ab)^n##

Extends to:

##(-1)^{1/2}(-1)^{1/2} =1^{1/2} = 1##

Which is then a "proof" that ##-1 = 1##.
Doesn't that only prove that ##-1 \times -1 = 1##?

##a^n## and ##a^{-n}## both have distinct definitions, so stating that both «source» values ##a## are the same because they give the same result is as fair as saying that since ##\sin\frac{\pi}{2} = 1## and ##\cos 0=1##, then ##\frac{\pi}{2} = 0## must be true.
 
  • #25
jack action said:
##a^n## and ##a^{-n}## both have distinct definitions, so stating that both «source» values ##a## are the same because they give the same result is as fair as saying that since ##\sin\frac{\pi}{2} = 1## and ##\cos 0=1##, then ##\frac{\pi}{2} = 0## must be true.

That makes no sense to me. Although looking at your avatar might explain things!
 
Last edited:
  • #26
jack action said:
Not only is defining ##a^{-1}## not premature, it is essential.
Ummm, no. It is not essential. Given a definition for raising a number to an arbitrary positive, non-zero, integer power, one can extend that definition to ##a^0## in one obvious way that preserves the truth of ##a^{n+1} = a \times a^n##.

There is, of course, an extra bit of freedom in defining ##0^0## to be consistent with that equality.

If one is working in the ring of integers, ##a^{-1}## may not be defined.
 
  • Like
Likes dextercioby and PeroK
  • #27
jbriggs444 said:
Given a definition for raising a number to an arbitrary positive, non-zero, integer power, one can extend that definition to ##a^0## in one obvious way that preserves the truth of ##a^{n+1} = a \times a^n##.
Well, that looks more like a proof to me, and it is a much more useful answer than your previous ones. And it is certainly better than @PeroK's answer, which basically said «let's create an arbitrary definition and see if it's "logical and consistent"» (which makes no sense to me as a mathematical proof).

I'm glad I made you elaborate your thoughts on the subject, because I like polite exchanges that enrich my life.
 
  • Like
Likes jim hardy
  • #28
You could start with ##a^{n+1}=a\times a^n##, then ##a^n=\frac{a^{n+1}}{a}##, and work backwards to get ##a^0=\frac{a^1}{a}=1##. Continuing further gives negative exponents.
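That recurrence translates directly into code; here is a minimal sketch (exact arithmetic with fractions.Fraction, assuming ##a \neq 0## for the backwards direction):

Python:
from fractions import Fraction

def power(a, n):
    """Sketch: integer powers of a from the recurrence a^(n+1) = a * a^n."""
    if n > 0:
        return a * power(a, n - 1)   # forward: a^n = a * a^(n-1)
    if n < 0:
        return power(a, n + 1) / a   # backward: a^n = a^(n+1) / a
    return Fraction(1)               # a^0: the value the recurrence forces

print(power(Fraction(5), 3))   # 125
print(power(Fraction(5), 0))   # 1
print(power(Fraction(5), -2))  # 1/25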
 
  • #29
jack action said:
Well, that looks more like a proof to me
It is not a proof. You don't prove definitions.
 
  • #30
jbriggs444 said:
It is not a proof. You don't prove definitions.
I know Wikipedia is not considered the best reference, but here is what they have to say about exponentiation:
Positive exponents
Formally, powers with positive integer exponents may be defined by the initial condition
##b^1 = b##

and the recurrence relation
##b^{n+1} = b^n \cdot b##

[...]
Negative exponents
The following identity holds for an arbitrary integer n and nonzero b:
##b^{-n} = \frac{1}{b^n}##

[...]

The identity above may be derived through a definition aimed at extending the range of exponents to negative integers.
For non-zero b and positive n, the recurrence relation above can be rewritten as
##b^n = \frac{b^{n+1}}{b}##

By defining this relation as valid for all integer n and nonzero b, it follows that
##b^0 = \frac{b^1}{b} = 1##
So the process is to define powers for positive exponents (##b^1 = b##). Then an identity is derived to extend to negative exponents (##b^n = \frac{b^{n+1}}{b}##, ##n\ge 1##). Note that we use this relation specifically to extend the definition to negative exponents (I like to think this comes before an attempt to understand what ##b^0## could be, but you have successfully argued that it is not necessary).

Finally, once this definition is accepted, ##b^0=1## must be true because ##\frac{b^{0+1}}{b}=1##. It is not defined as such; it is a consequence of the accepted general definition of what exponentiation is ("it follows that ##b^0=1##"). Isn't that a proof? Because I really doubt someone started with «Let's assume ##b^0 = 1## and find an exponentiation definition that includes that definition».
 
  • #31
jack action said:
Isn't that a proof?
No. It is not a proof. Again, you do not prove definitions. You write them down and prove that they are unambiguous and consistent.
 
  • Like
Likes Stephen Tashi
  • #32
jack action said:
Well, that looks more like a proof to me, and it is a much more useful answer than your previous ones. And it is certainly better than @PeroK's answer, which basically said «let's create an arbitrary definition and see if it's "logical and consistent"» (which makes no sense to me as a mathematical proof).

So, where do you think something like the definition of the derivative, say, came from?

$$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h}$$
You can't prove that. All you can do is show that such a definition leads to a logical and consistent concept of a derivative that has the properties you were expecting - with perhaps some surprises along the way.

These things are not arbitrary, they are based on a prior knowledge of mathematics and what you want to achieve. You cannot prove these things. You have to define them and see where those definitions lead.

Note that defining ##2^0## as either:

##2 \cdot 2^0 = 2^1##

or

##2^0 = 1##

amounts to the same thing.

The fundamental issue is that no mathematical symbols have any meaning until you define them. You cannot prove something like ##2^0 = 1## until you have defined what you mean by ##2^0## in the first place. That's part of pure mathematical thinking. If you study pure mathematics, one of the things you learn to do is recognise when something has been fully defined and when it hasn't.

And, as previously mentioned, you have the same issue with ##0! = 1##. You cannot prove that. And, I suggest, most people would expect ##0! = 0##. But, the usefulness of ##0! = 1## becomes apparent when you define the binomial coefficient. So, again, it's far from arbitrary.
 
Last edited:
  • Like
Likes jbriggs444 and Stephen Tashi
  • #33
jack action said:
I know Wikipedia is not considered the best reference, but here what they have to say about exponentiation:
The current Wikipedia article does not develop definitions and theorems dealing with exponents in a mathematically precise way. It is a survey article, similar to the treatment of exponents in high school mathematics texts, which give the "properties" of exponents without specifying which properties are definitions and which are theorems. For example, in the section "Zero Exponent" the statement "##b^0=1##" is made before the supposed deduction in a later section that ##b^0=1##.

On the "talk" page associated with the article you can find debate about whether ##b^0 =1## applies in the case ##b = 0##.

If you are interested in doing proofs in mathematics, you must begin the proof in a very specific context - being clear about what definitions, assumptions, and theorems are established before the proof begins. This is not a simple task because a given topic (such as exponents) can be treated as a mathematical system in different ways. So you have to specify which particular way you are using.

The mathematically precise approaches to "elementary" topics like exponentiation are complicated and only taught in advanced classes. (For example, what is the proof or definition that tells us it is possible to multiply two irrational numbers? )

The average person (including myself) who contemplates the elementary properties of numbers takes a Platonic view - i.e. we think of the numbers and their "properties" existing independently of any particular mathematical set of definitions and assumptions. This type of intuitive thinking is a necessary tool, but it muddles the process of writing proofs because we are missing the required organization. We have in mind a collection of facts, but we don't know which are definitions, which are assumptions and which are theorems.

In addition to confusion caused by Platonic thinking, there is a confusion that comes from being hypnotized by the magic of symbols. Manipulations with symbols work so well that we begin to think that any nice looking string of symbols automatically has some meaning. If we were asked to prove that ##b^{*</} = 17##, we would rebel by demanding to know the meaning of "##b^{*</}##". However, if we are asked to prove that ##0^0 = 1##, we are tempted to start scribbling out a proof immediately because "##0^0##" is a string of symbols that we pattern-match to other strings like ##2^3##, whose definitions we know.

Some humorous but also serious advice: If you are interested in mathematical proofs, do proofs in intermediate and advanced mathematics where the elementary "properties" of numbers are taken for granted. Only much later in your career should you take on the challenge of developing the elementary properties of numbers as a precise mathematical system. Textbooks on the topic of "Real Analysis" usually have a few chapters devoted to such torture - topics like Dedekind Cuts etc.
 
Last edited:
  • Like
Likes jbriggs444 and PeroK
  • #34
@jbriggs444 , @PeroK , @Stephen Tashi ,
jbriggs444 said:
You write them down and prove that they are unambiguous and consistent.
Considering the OP's question AND the fact that this thread is classified as "B" [Thread Level: Basic (high school)], don't all of you think this is the kind of «proof» the OP is looking for?
 
  • #35
jack action said:
@jbriggs444 , @PeroK , @Stephen Tashi ,

Considering the OP's question AND the fact that this thread is classified as "B" [Thread Level: Basic (high school)], don't all of you think this is the kind of «proof» the OP is looking for?
My suspicion is that OP believes that there is an underlying "true" definition for ##a^0## which can be proven correct from first principles. As has been pointed out, such a belief is fundamentally mistaken.
 
  • #36
jack action said:
@jbriggs444 , @PeroK , @Stephen Tashi ,

Considering the OP's question AND the fact that this thread is classified as "B" [Thread Level: Basic (high school)], don't all of you think this is the kind of «proof» the OP is looking for?

The thread could and should have been over by post #8, with the (simple) mainstream mathematical answer that it is essentially a definition. But the thread got sidetracked after that. The subsequent complexities are a result of trying to answer the other posters.
 
  • #37
Rijad Hadzic said:
So at what point can I just define something without proving it's true

Definitions are only "true" (or false) in a sociological sense. If a definition becomes accepted by most academic mathematicians, it becomes a "true" definition in the sense that the community of academic mathematicians agrees it's a useful definition.

Trying to determine the objective truth of a definition is a logically contradictory process. For example, if we are proposing a definition of "tropical geometry" then we can't debate whether "tropical geometry" really has this or that property before we say what we mean by "tropical geometry". People who have a preconceived idea of what a phrase means can object to another person's definition of that phrase. That sort of debate is not a debate about objective mathematical issues. It is a debate about personal preferences or the usefulness of one convention of vocabulary versus another.

That said, one often sees definitions that contain unstated assumptions - or definitions that lead one to make assumptions. For example, the definition
"##b^{-1}## is (defined to be) the multiplicative inverse of the number b"
might lead one to the assumption that each number b has a multiplicative inverse and that the multiplicative inverse of a number b is a unique number (i.e. that b cannot have two unequal multiplicative inverses). Neither of those statements is established merely by giving a definition of the notation "##b^{-1}##".

One approach to presenting definitions is to precede them by proving or assuming all the facts that must be incorporated in the definition. Another approach is to make the definition and admit that the thing defined may not exist or that two different things may exist that both satisfy the definition. After the definition is made, you offer proof that the thing defined does exist - and if the word "the" has been used to imply that there is only one thing that satisfies the definition, you also prove that fact.
 
  • Like
Likes jbriggs444
  • #38
This is a long thread and I haven't read it all but I wanted to have my say as well.

I think 1 is the only plausible value one can give ##a^0## to be consistent with arithmetic. The definition is forced, IMHO, by the symmetry that the basic operations have. Basically, we can run them in reverse. Subtraction is just addition in reverse. Division is just multiplication in reverse. And running exponentiation in reverse: a^2 = a*a, a^1 = a, a^0 = 1. It can only be 1. So while this isn't a proof, it is at least justified by a holistic vision of the purity of arithmetic.
 
  • #39
let a>0, and let f be a continuous real valued function defined on all reals such that f(n) = a.a...a (n times) = a^n, for n = any positive integer. and assume that f(x+y) = f(x).f(y) for all reals x,y. then we claim that :

1) f(0) = 1,

proof: note that f(1) = a, and that thus a = f(1) = f(0+1) = f(0).f(1) = f(0).a, hence f(0) = 1.

note also that then f(1/n)...f(1/n), (n times), = f(1/n+...+1/n) = f(1) = a, so f(1/n) = nth root of a = a^(1/n). Thus also f(m/n) = f(1/n)...f(1/n), (m times), = a^(1/n)...a^(1/n), (m times).

Thus f is determined on all rational numbers, and hence by continuity also on all real numbers.

thus if a > 0, there is at most one continuous real valued function f of a real variable such that f(n) = a^n, for all positive integers n, and such that f(x+y) = f(x).f(y) for all reals x,y.

the fact that there does in fact exist such a function follows from integration theory, i.e. the study of the integral of 1/x, and the inverse of this function.
 
Last edited:
  • Like
Likes suremarc
  • #40
mathwonk said:
let a>0, and let f be a continuous real valued function defined on all reals such that f(n) = a.a...a (n times) = a^n, for n = any positive integer. and assume that f(x+y) = f(x).f(y) for all reals x,y. then we claim that :


But what is f(x) = a.a...a (x times) for all reals when x = -3 or x = 1.5, say, according to the first definition of f(n)? You haven't defined this!
 
  • #41
mathwonk said:
let a>0, and let f be a continuous real valued function defined on all reals such that f(n) = a.a...a (n times) = a^n, for n = any positive integer. and assume that f(x+y) = f(x).f(y) for all reals x,y. then we claim that :
@mathwonk Furthermore, why do you assume f(x+y) = f(x).f(y) for all reals x,y? Another interesting rule would be f(x+y) = f(x)/f(y). Then you get a whole new set of definitions for powers etc.
 
  • #42
Rada Demorn said:
@mathwonk Furthermore, why do you assume f(x+y) = f(x).f(y) for all reals x,y? Another interesting rule would be f(x+y) = f(x)/f(y). Then you get a whole new set of defntions for powers etc.
So ##f(x) = \frac{f(x/2)}{f(x/2)} = 1## for all x. That does not sound very interesting.
 
  • Like
Likes PeroK
  • #44
I have read the replies and would offer a different explanation that is not at the B level if full understanding is required. The following notes from Harvey Mudd give the full details if you know calculus:
https://www.math.hmc.edu/~kindred/math30g/lectures.html

I post it for reference only - the OP will not understand it - it is not at the B level - although IMHO it should be taught to all HS students so you can understand things better. Just my view - but it would solve a lot of problems IMHO.

Here is the cut-down version that is easy to understand. Given any positive number ##a##, it is possible to rigorously define a function ##a^x##, where ##x## is any real number, with the properties ##a^{x+y} = a^x a^y## and ##a^1 = a##. That is the advanced bit the above proves - don't worry about why it is so at the beginning level. Now ##a^n##, where ##n## is an integer, is ##a^1 \cdot a^1 \cdots a^1## (##n## times), or ##a^n = a \cdot a \cdots a## (##n## times). This is what beginners are taught raising something to a power means.

But ##x## in ##a^x## is a real number, so we can now see what such funny things as ##a^0## and ##a^{1/n}## are. ##a^x = a^{x+0} = a^x \cdot a^0##, which when you divide by ##a^x## gives ##a^0 = 1##. Also ##a = a^1 = a^{n/n} = a^{1/n + 1/n + \cdots + 1/n} = a^{1/n} \cdot a^{1/n} \cdots a^{1/n} = (a^{1/n})^n##, so ##a^{1/n} = \sqrt[n]{a}##. And finally ##a^{x-x} = a^0 = 1 = a^x \cdot a^{-x}##, so ##a^{-x} = 1/a^x##. One can then proceed to define logs as the inverse of ##a^x##, but that comes out of the calculus approach anyway, and you get the properties of logarithms, which I will not detail here.
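Here is a minimal numerical sketch of that construction (assuming the standard library's exp and log supply the "advanced bit" above, with arbitrary example values):

Python:
import math

def pow_real(a, x):
    """Sketch: define a^x := exp(x * ln a) for a > 0."""
    return math.exp(x * math.log(a))

a = 7.0  # arbitrary example value, a > 0
assert math.isclose(pow_real(a, 0), 1.0)                              # a^0 = 1
assert math.isclose(pow_real(a, 2) * pow_real(a, 3), pow_real(a, 5))  # a^x a^y = a^(x+y)
assert math.isclose(pow_real(a, 1/3) ** 3, a)                         # a^(1/n) = nth root
assert math.isclose(pow_real(a, -2), 1 / pow_real(a, 2))              # a^(-x) = 1/a^x
print("the exponent laws check out numerically")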

The reason it's not done that way in elementary treatments is that you need calculus, which you normally do after this stuff. IMHO this makes it harder than it should be for students. I personally believe you should study calculus at the same time as you do this stuff, and things will be a lot easier. What grade to do it in? I believe, along with an intuitive treatment of calculus, you could do it in grade 10 - for good students, grade 9.

Thanks
Bill
 
Last edited:
  • #45
I'll make a suggestion here. I know OP codes, and I think a lot of people on this thread do as well. First, ignore the word "proof" for ##a^0 = 1##. Second, consider why it is quite natural. Really, just some basic thinking about coding (and perhaps groups) is all that's needed here.

I am sticking with integer values of ##k##. I spoiler tagged it due to length.

Start with addition. How would it be done with a for loop?

Python:
a_1, a_2, a_3 = 2, 3, 5  # example values so the sketch runs

running_sum = 0

a_list = [a_1, a_2, a_3]
# k terms... 3 are shown here for the example
for a_i in a_list:
    running_sum += a_i
print("done")

That's the forward case. Now run it backward and try to "undo" (or invert) the addition. (Sometimes people call this subtraction)

Python:
a_1, a_2, a_3 = 2, 3, 5  # same example values as above

running_sum = 0

a_list = [a_1, a_2, a_3]

# adding the negations "undoes" the earlier additions
for a_i in a_list:
    running_sum += -a_i
print("done")

Now if we homogenize the items in a_list to be identical, we can say that the magnitude of ##k## is the length of said list, and simply define the forward case as

for positive integer ##k##
##k\cdot a_1 = \vert k\vert \cdot a_1 := 0 + \sum_{i=1}^{k } a_1 = 0 + \sum_{i=1}^{\vert k \vert} a_1##

and the backward case, i.e. for negative integer ##k## is

##k\cdot a_1 = -\vert k\vert\cdot a_1 := 0 + \sum_{i=1}^{\vert k \vert} -a_1##.

Checking both definitions, we find that if ##k=0## they both evaluate to zero, and hence ##k\cdot a_1 := 0## when ##k = 0##. The reason for this check is that we've shown the operation to be different for ##k## vs ##-k##... but if ##k=0## then ##k = -k##, and hence we need to check for agreement.

Note that the final formula for a homogeneous sum involves multiplication, but to be clear, everything above was about sums.
- - - -
Now we consider products instead:

the forward case would be

Python:
a_1, a_2, a_3 = 2, 3, 5  # same example values as above

running_product = 1

a_list = [a_1, a_2, a_3]

for a_i in a_list:
    running_product *= a_i
print("done")

Now consider the backwards case: how do we 'undo' (or invert) multiplication? As before, we can proceed to consider the case of homogenized terms (the corresponding loop is sketched below, after the definitions).

for positive integer ##k##
##a_1^k = a_1^{\vert k\vert} := 1 \cdot \prod_{i=1}^{\vert k \vert} a_1##

and for negative integer ##k## is
##a_1^k = a_1^{-\vert k\vert} := 1 \cdot \prod_{i=1}^{\vert k \vert} a_1^{-1}##

The second approach does not work on ##a_i = 0## but is fine for all ##a_i \neq 0##. Now compare definitions and evaluate when ##k=0##. Both evaluate to 1, again for any ##a_1 \neq 0##.
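For completeness, here is the backwards product loop just described, mirroring the addition blocks (same example values; every a_i must be nonzero):

Python:
a_1, a_2, a_3 = 2, 3, 5  # example values, all nonzero

running_product = 1

a_list = [a_1, a_2, a_3]

# multiplying by the inverses "undoes" the forward products
for a_i in a_list:
    running_product *= 1 / a_i
print("done")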
- - - -
Notice that 0 is the identity element in sums and 1 is the identity element in products. In each case if you apply your operation ##\big \vert k\big \vert## times, you start by applying it to the identity element. If ##k## is zero then you retain the identity element for your operation. With products, a tiny bit of extra care is needed because multiplying by zero is not invertible. (To tie this back to part one, consider that ## a_1 \cdot 0 = 0 + \sum_{i=1}^{0} a_1 = 0 + \sum_{i=1}^{0} -a_1 = 0## for any legal ##a_1## in your field or whatever, hence you cannot work backward and figure out what the unique ##a_1## was before your operation).

That's really it.

PeroK said:
In this case, for example, you might write:

##2^0##

But, what does that mean? There's no immediate way to "multiply 2 by itself 0 times". Unlike ##2^1, 2^2, 2^3 \dots ##, which have a simple, clear definition.
My bigger point, I suppose, is that puzzles like these become non-issues if we start with the identity element and then iteratively apply the operation to it. (It also fixes some zero vs one indexing errors, which is a bonus.)

I know that both OP and PeroK code. For whatever reason the point I made above seems peculiar when written as 'pen and paper math', but it seems obvious when writing for loops.

Again, not a 'proof', just one of many reasons why the definition is quite natural from certain viewpoints.
 
  • #46
jbriggs444 said:
So ##f(x) = \frac{f(x/2)}{f(x/2)} = 1## for all x. That does not sound very interesting.
@jbriggs444 No, you are wrong, because division is not associative: ##f((a+b+c)+d) = \frac{f(a)}{f(b)f(c)f(d)} \neq f((a+b)+(c+d)) = \frac{f(a)f(d)}{f(b)f(c)}##.

You may guess what happens if we try to evaluate f (1/n + ... + 1/n) with the aforementioned rule.
 
  • #47
Rada Demorn said:
@jbriggs444 No, you are wrong, because division is not associative: ##f((a+b+c)+d) = \frac{f(a)}{f(b)f(c)f(d)} \neq f((a+b)+(c+d)) = \frac{f(a)f(d)}{f(b)f(c)}##.
You seem to have skipped a few steps. The definition in question does not involve multiplication at all. But let us ignore that.

Let a = x/2.
Let b = x/2
By definition ##f(x) = f(a+b) = \frac{f(a)}{f(b)}##
But a=b so f(a) = f(b) and ##\frac{f(a)}{f(b)}=1##

You are correct that division is not normally associative. And you are correct that at first this would seem to be the kiss of death for the definition in question. You can't have a definition that requires f(x) to be two different things at the same time. However, it turns out that in any expression involving nothing but division and 1's, division is associative.

e.g. 1/(1/1) = (1/1)/1

Rada Demorn said:
You may guess what happens if we try to evaluate f (1/n + ... + 1/n) with the aforementioned rule.
No need to guess. It is equal to one.
 
Last edited:
  • #48
Rijad Hadzic said:
I'm trying to prove that a^0 = 1
##\frac{a^n}{a^n} = a^{n-n}##
Since ##a^n = a^n##, ##\frac{a^n}{a^n} = 1## (just like ##100/100 = 1##).
##n - n## always equals 0.
##a^{n-n} = 1##

Hope this helps.
 
  • #49
On the argument about definition versus proof, is part of the problem that there's only one sensible extension of ##a^n## from shorthand for "##n## ##a##s multiplied together" to the case of ##n\leq 0##? Several ways have been shown of doing it in this thread, but they all lead to the same result. Would a counter-example be trying to extend division (which already gives us a way into non-integer numbers starting from the integers) to cover the case of 0/0? You could argue it should be 1 because it's something divided by itself, or 0 because it's zero times something, or infinite because it's something divided by zero. There's no single definition consistent with all the rules, so we leave it undefined (note "undefined" not "unprovable" or something of that nature) and try to take limits on a case-by-case basis?

Too many question marks there, but am conscious that I'm outside my field here.
 
  • Like
Likes PeroK
  • #50
Well, due to that problem, ##n## can be replaced by 4, 5, or any natural number.
Still counts as a proof, right?
 
