High School Proving a^0=1: Step-by-Step Guide

  • Thread starter: Rijad Hadzic
  • Tags: Proof
Summary
The discussion centers on proving that a^0 equals 1, with various participants exploring definitions and mathematical properties. One argument suggests defining a^n as the product of a multiplied n times, leading to the conclusion that a^0 must equal 1 to maintain consistency in exponent rules. Others point out that this approach is more of a motivation for a definition rather than a formal proof, emphasizing that a^0 = 1 is a necessary definition for the laws of exponents to hold true for all integers. Additionally, the conversation touches on the importance of defining mathematical terms clearly before engaging in proofs. Ultimately, the consensus is that defining a^0 = 1 is logical and consistent within the framework of exponentiation.
  • #61
PeroK said:
But, in the sense of mathematical development, where does calculus come from?

A rigorous development? That's analysis, and the motivation is the issues that can arise in calculus if you are not careful. By that stage you will know a^0 = 1 for sure. But a hand-wavy development doesn't require it, e.g.:
https://en.wikipedia.org/wiki/Calculus_Made_Easy

It's based on intuitive ideas, like very small numbers dx that you can for all practical purposes ignore (and you can most certainly ignore dx^2). You can interweave it into a pre-calculus course - or algebra 2 + trig, as they call it in the US - before a student tackles calculus proper, with ideas like limits made clearer. In Aus, calculus and pre-calculus are taught in an integrated way, and you can most certainly do it in such a course. I have to say, however, I don't think they do it like that here in Aus - I think they do it like the UNSW paper linked previously. Why? Well, I don't think most students are like the OP and feel dissatisfied with the definition route - only people like me who like to think about it feel a bit uneasy. It sits in the back of your mind: why is this based on definitions? There must be something more going on - and indeed there is.

Thanks
Bill
 
  • #62
bhobba said:
A rigorous development? That's analysis, and the motivation is the issues that can arise in calculus if you are not careful. By that stage you will know a^0 = 1 for sure. But a hand-wavy development doesn't require it, e.g.:
https://en.wikipedia.org/wiki/Calculus_Made_Easy

It's based on intuitive ideas, like very small numbers dx that you can for all practical purposes ignore (and you can most certainly ignore dx^2). You can interweave it into a pre-calculus course - or algebra 2 + trig, as they call it in the US - before a student tackles calculus proper, with ideas like limits made clearer. In Aus, calculus and pre-calculus are taught in an integrated way, and you can most certainly do it in such a course. I have to say, however, I don't think they do it like that here in Aus - I think they do it like the UNSW paper linked previously. Why? Well, I don't think most students are like the OP and feel dissatisfied with the definition route - only people like me who like to think about it feel a bit uneasy. It sits in the back of your mind: why is this based on definitions? There must be something more going on - and indeed there is.

Thanks
Bill

This is interesting. I've never questioned the need to define certain things in mathematics. I suppose, having studied pure maths, it is just such a part of mathematics.

In the end, though, even with your calculus, at some point you are defining something like:

##e^x = \sum_{n = 0}^{\infty} \frac{x^n}{n!}##

Then, you might want to prove this. But, you can only prove it if you have defined ##e^x## some other way. In the end, ##e^x## mathematically has to be defined in some way. And, whatever way you choose to define it, you cannot then prove that.

Anyway, with that definition of the exponential function, let's evaluate ##e^0##:

##e^0 = \sum_{n = 0}^{\infty} \frac{0^n}{n!} = \frac{0^0}{0!} + \frac{0^1}{1!} + \dots = \frac{0^0}{0!} ##

Hmm. Maybe that doesn't resolve all your misgivings after all!

PS And, just to make the explicit point: defining real powers fundamentally rests on already having a definition of integer powers (including the zeroth power). And that's why you cannot use ##a^x## to prove ##a^0 = 1##. You can verify that your advanced definition of ##a^x## is consistent with your previously defined integer powers, ##a^n##, but you still need your integer powers to develop the advanced mathematical machinery in the first place.
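As a quick numeric sketch of the point above (my own illustration, not from the thread): partial sums of the series definition, adopting the 0^0 = 1 convention so the n = 0 term is well defined at x = 0.

```python
import math

# Partial sums of the series definition e^x = sum_{n>=0} x^n / n!,
# with Python's 0**0 == 1 standing in for the 0^0 = 1 convention.
def exp_series(x, terms=30):
    return sum(x ** n / math.factorial(n) for n in range(terms))

# At x = 0 only the 0^0/0! term survives, giving exactly 1.
assert exp_series(0.0) == 1.0
# Sanity check against the known value of e at x = 1.
assert math.isclose(exp_series(1.0), math.e)
```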
 
  • #63
Rada Demorn said:
Try this...

Let a = x/3
Let b = x/3
Let c = x/3

What do you get? Is it still equal to 1?
Yes. Recall that we are operating under the proposed property that ##f(a+b)=\frac{f(a)}{f(b)}##.

We have already established that if f(x) is defined at all (and is non-zero) then it must be equal to 1 everywhere. You ask about f(a+b+c) with a=b=c.

##f(a+b+c) = f((a+b)+c)= \frac{f(a+b)}{f(c)} = \frac{f(a)/f(b)}{f(c)}= \frac {1/1} {1} = \frac{1}{1} = 1##
##f(a+b+c) = f(a+(b+c)) = \frac{f(a)}{f(b+c)} = \frac{f(a)}{f(b)/f(c)} = \frac {1}{1/1} = \frac{1}{1} = 1##

As I had pointed out previously, division is associative if you only ever divide by one.

Edit: From the perspective of abstract algebra, if your only operator is "/" and your only operand is 1 then you have a pretty simple algebra. It is the algebraic group with one element and is clearly Abelian.
 
  • #64
jbriggs444 said:
Yes. Recall that we are operating under the proposed property that ##f(a+b)=\frac{f(a)}{f(b)}##.

We have already established that if f(x) is defined at all (and is non-zero) then it must be equal to 1 everywhere. You ask about f(a+b+c) with a=b=c.

##f(a+b+c) = f((a+b)+c)= \frac{f(a+b)}{f(c)} = \frac{f(a)/f(b)}{f(c)}= \frac {1/1} {1} = \frac{1}{1} = 1##
##f(a+b+c) = f(a+(b+c)) = \frac{f(a)}{f(b+c)} = \frac{f(a)}{f(b)/f(c)} = \frac {1}{1/1} = \frac{1}{1} = 1##

As I had pointed out previously, division is associative if you only ever divide by one.
No.

You are making a mistake!

In ##
f(a+b+c) = f((a+b)+c)= \frac{f(a+b)}{f(c)} = \frac{f(a)/f(b)}{f(c)}= \frac {1/1} {1} = \frac{1}{1} = 1
##, you are taking f(c) = 1, having established this before only for the case f(c/2 + c/2) = 1. But what if c = c/3 + c/3 + c/3?

You better leave f(x) undetermined and admit that: ## f(x+x+x) = f((x+x)+x)= \frac{f(x+x)}{f(x)} = \frac{f(x)/f(x)}{f(x)}= \frac {1/1} {f(x)} = \frac{1}{f(x)}##
 
  • #65
Rada Demorn said:
you are taking f(c) = 1, having established this before only for the case f(c/2 + c/2) = 1. But what if c = c/3 + c/3 + c/3?
You seem to have a fundamental misunderstanding.

If we demonstrate that f(c) must be equal to 1, then f(c) must be equal to 1. If we are able to demonstrate this without assuming anything about c, then it follows that f(x) = 1 for all x. We have, in fact, provided such a demonstration.

The fact that c = c/3+c/3+c/3 and that c = c/4+c/4+c/4+c/4, etc does not do anything to alter the fact that f(c) = 1.

It could have turned out that the proposed property that f(a+b) = f(a)/f(b) was inconsistent. It could have turned out that evaluating f(c/3 + c/3 + c/3) would force the result to be something other than one. But that turns out not to be the case.

Edit: Note that we have a=b=c. You are trying to have f(a) and f(b) be well defined as 1 but have f(c) be undefined. That is silly.
 
  • #66
jbriggs444 said:
If we demonstrate that f(c) must be equal to 1, then f(c) must be equal to 1. If we are able to demonstrate this without assuming anything about c, then it follows that f(x) = 1 for all x. We have, in fact, provided such a demonstration.

You are assuming that c= c/2 + c/2 or x = x/2 + x/2. Period.

You have made a mistake and you are beating around the bush now!

I am not going to discuss this any further...
 
  • #67
Rada Demorn said:
You are assuming that c= c/2 + c/2 or x = x/2 + x/2.
No need to assume. Both(*) are obviously true and easily provable.

It is well known that 1/2 + 1/2 = 1 and that the real numbers are an algebraic field where 1 is the multiplicative identity element. Algebraic fields satisfy the distributive property of multiplication over addition.

c = c*1 = c*(1/2+1/2) = c*1/2+c*1/2 = c/2+c/2
x = x*1 = x*(1/2+1/2) = x*1/2+x*1/2 = x/2+x/2

(*) Just one claim really. The variable name is irrelevant.
 
  • #68
To be able to 'prove' the title problem satisfactorily, we'd need to agree on what we know a priori. For the sake of discussion, let's operate in ##\mathbb R##. We have a seemingly simple explanation, ##1 = \frac{a^n}{a^n}##, which is certainly true for positive integers ##n##, but we run into a potential issue. As pointed out by @PeroK, what is ##\frac{2^0}{2^0}##? The knee-jerk reaction is to say that the numerator and denominator must be equal, so we have ##\frac{k}{k} = 1##, right? Not so fast! How can we be sure that ##a^0\in\mathbb R##? I.e., how do we know the extension ##a^0## is well-defined? This might be a wrong assumption, making it impossible to utilise the known arithmetic as we'd hoped.

Perhaps, there is hope. Let ##a>0 ## and pick a sequence ##q_n\in\mathbb Q, n\in\mathbb N ## for which ##q_n\to 0 ##. It is shown that all elementary functions are continuous in their domain, therefore due to continuity of ##x\mapsto a^x ## we have ##a^{q_n} \to a^0 ##. Now we sing the epsilon song and show for every ##\varepsilon >0## ..dudu..you all know the lyrics.. we have
##n > N(\varepsilon) \implies \lvert a^{q_n} - 1\rvert < \varepsilon.##
I'm not entirely convinced of this approach, either since I appeal to some witchcraft about elementary functions and some topological jargon.

The point is, we need to take something on faith. If all we have to work with is, say, ZF(C) then we can show the definition ##a^0=1 ## is consistent with what we already know. So, defining it as such (whenever the structure permits it) is a sensible thing to do.

It's very easy to accidentally get stuck in a circular argument here.

Utilising the axioms of ##\mathbb R##, we could also say ##a^n\cdot 1 = a^n = a^{n+0} = a^{n} \cdot a^0##. If ##a^0 > 1##, then there are problems, and likewise if ##a^0 < 1##. But this is cheating, since I already implicitly assume ##a^0## is a meaningful expression.
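A numeric sketch of the continuity argument above (my own illustration): along the rational sequence q_n = 1/n → 0, the powers a^{q_n} do close in on 1 for any a > 0.

```python
# For a > 0 and q_n = 1/n -> 0, the values a**q_n approach 1,
# consistent with defining a^0 = 1 by continuity.
for a in (0.3, 2.0, 17.0):
    values = [a ** (1.0 / n) for n in (10, 100, 1000, 10_000)]
    # By n = 10_000 the sequence is within 1e-3 of 1.
    assert abs(values[-1] - 1.0) < 1e-3
```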
 
  • #69
PeroK said:
##e^x = \sum_{n = 0}^{\infty} \frac{x^n}{n!}##

It's easy. Define log(x) as ∫1/t dt from 1 to x. e^x is defined as its inverse. Differentiate log(a*b) with respect to, say, a and you get 1/a, which is the same as the derivative of log(a), so log(a*b) = log(a) + c. Let a = 1 and you have c = log(b). So log(a*b) = log(a) + log(b). Let z = e^a*e^b. Then log(z) = log(e^a) + log(e^b) = a + b, so e^a*e^b = z = e^log(z) = e^(a+b). e by definition is e^1. Note the range of log(x) is all the reals and its domain is the positive reals. Hence e^x is defined for all reals and its range is the positive reals. Also log(e^x) = x; differentiating gives (e^x)' = e^x.
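The additive law in that chain of identities can be checked numerically. A sketch (my own illustration, using a crude midpoint rule for the integral defining log):

```python
# log(x) defined as the integral of 1/t from 1 to x, approximated with
# a midpoint rule; the law log(a*b) = log(a) + log(b) falls out numerically.
def log_via_integral(x, steps=100_000):
    h = (x - 1.0) / steps
    return sum(h / (1.0 + (k + 0.5) * h) for k in range(steps))

lhs = log_via_integral(6.0)                        # log(2*3)
rhs = log_via_integral(2.0) + log_via_integral(3.0)
assert abs(lhs - rhs) < 1e-6
```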

Now there are a couple of issues here such as showing log(x) has an inverse. Those issues are taken up in the Harvey Mudd notes:
https://www.math.hmc.edu/~kindred/cuc-only/math30g/lectures-s24/lect14.pdf

A fully rigorous treatment would of course need to wait until analysis. But as an initial explanation it's not hard at all. You only need a few ideas from calculus, such as from a hand-wavy treatment like Calculus Made Easy.

Just as an aside, in chapter 6 of Calculus Lite there is a very simple proof, at that hand-wavy level, that sin'(x) = cos(x) and cos'(x) = -sin(x). Using that and a fairly simple argument with first-order differential equations, you can show e^(ix) = cos(x) + i*sin(x), and you can then easily derive the trigonometric identities. I think a combined algebra 2 + trigonometry type course including beginning calculus could easily be built around these ideas, and you would be prepared for an honors calculus course like the following:
https://www.amazon.com/dp/0691125333/?tag=pfamazon01-20

Then something like the following in grade 12:
http://matrixeditions.com/5thUnifiedApproach.html

Only for good students of course, but you will be well prepared for university level math or math based subjects.

Thanks
Bill
 
  • #70
PeroK said:
This is interesting. I've never questioned the need to define certain things in mathematics. I suppose, having studied pure maths, it is just such a part of mathematics.

There is no logical issue - it's done all the time. But in this case what you are doing with these definitions is extending, in a reasonable way, what a^x is, via the property you would like, namely x^(a+b) = x^a*x^b. Note - you can only go as far as the rationals this way. It just screams that something more elegant should exist - that's the feeling I got from the final sentence of the original post. And it does - and it even works for the reals, not just the rationals. IMHO that more elegant way is better. But to be fair, I don't think most students really care - only a few, like the OP, sense that surely there is something better than just defining things - and of those, even fewer want to pursue it, even though if they did they would learn a lot of more advanced math, namely calculus, which would be to their credit.

Thanks
Bill
 
  • #71
bhobba said:
There is no logical issue - it's done all the time. But in this case what you are doing with these definitions is extending, in a reasonable way, what a^x is, via the property you would like, namely x^(a+b) = x^a*x^b. Note - you can only go as far as the rationals this way. It just screams that something more elegant should exist - that's the feeling I got from the final sentence of the original post. And it does - and it even works for the reals, not just the rationals. IMHO that more elegant way is better. But to be fair, I don't think most students really care - only a few, like the OP, sense that surely there is something better than just defining things - and of those, even fewer want to pursue it, even though if they did they would learn a lot of more advanced math, namely calculus, which would be to their credit.

Thanks
Bill

Well, it's all a matter of taste, I guess, and your definition of elegance. If teaching this to A-Level students, say, my preferred options would be:

1) Define ##a^0 = 1##, with the necessary justification.

2) Take as an axiom that ##\forall \ n, m \in \mathbb{Z}: a^na^m = a^{n+m}##, which implies that ##a^0 = 1##.

3) Say that once you've done a course in real analysis, rigorously defined ##\log x = \int_1^x \frac{1}{t}dt##, defined ##\exp(x)## as the inverse of the log function, and defined ##a^x = \exp(x \log(a))##, then you can prove that ##a^0 = 1##.

My guess, from this thread, is that many students would prefer 2). As a 16-year-old I would have been very unhappy with 3). Not to say baffled by it!
 
  • #72
PeroK said:
Well, it's all a matter of taste, I guess, and your definition of elegance.

Well said :biggrin:

PeroK said:
My guess, from this thread, is that many students would prefer 2). As a 16-year-old I would have been very unhappy with 3). Not to say baffled by it!

My proposal is that you do not start with full-blown analysis - you would have rocks in your head to try that. Only an introductory hand-wavy treatment is necessary to do what I suggest. Then you gradually move to honors calculus with integrated analysis, then an advanced rigorous treatment of linear algebra and multi-variable calculus. That Hubbard book is very good and, as he says, can be done at different levels - because of that, some good high schools like Roxbury Latin use it in grade 12.

Yes, the national curriculum that places like Aus and England have is a big issue. I personally am not a fan of it. Here in Aus we are moving to all teachers having Masters degrees, and surely with teachers that qualified you just need to set some minimum standards and leave how to get there, and how far to go, up to the teacher and the students they have. At least one school here in Aus does that - the way it gets around the national curriculum is that there is an out if you are on an individual learning plan. The first year at that school is spent developing that plan, and there is a 50-minute meeting each day with your 'home' teacher ensuring you remain on track; the plan is updated if required.

Thanks
Bill
 
  • #73
Uh, I remember I started real analysis I in my first semester and it was considered hand-wavy (with many results revisited and given proofs in analysis III). Even the hand-wavy course put rocks in my head o0) Nowadays, undergraduate programmes start with Calculus I and II - that is like an abridged, lite edition of analysis I.
 
  • #74
Rada Demorn said:
You better leave f(x) undetermined and admit that:

Nothing to add, really. I am correcting the typo in post #64:

## f(x+x+x) = f((x+x)+x)= \frac{f(x+x)}{f(x)} = \frac{f(x)/f(x)}{f(x)}= \frac {1} {f(x)} = \frac{1}{f(x)} ##
 
  • #75
nuuskur said:
Uh, I remember I started real analysis I on my first semester and it was considered to be hand-wavy (with many results revised and given proofs for in analysis III). Even the hand-wavy course put rocks in my head o0)

The hand-wavy treatment I am talking about is less rigorous than even what is done in HS in Aus, where they introduce an intuitive idea of limit. Calculus Made Easy doesn't even do that - dy/dx is literally that: dx is a small quantity and dy = y(x+dx) - y(x). The chain rule is trivial: dy/dx = (dy/dg)*(dg/dx) - dg simply cancels. It's all quite simple, intuitively. You would do it in grade 10, combined with a bit of pre-calc, then more rigorous in grade 11 and rather challenging in grade 12. Just my thought - it would probably crash in practice.

Still for those interested in starting to learn calculus, Calculus Made Easy is a good choice.

Thanks
Bill
 
  • #76
nuuskur said:
It is shown that all elementary functions are continuous in their domain, therefore due to continuity of ##x\mapsto a^x ## we have ##a^{q_n} \to a^0 ##.

How do we show the elementary function ##f(x) = a^x## is continuous at ##x= 0## without having a definition of ##f(0)##?
 
  • #77
bhobba said:
Still for those interested in starting to learn calculus, Calculus Made Easy is a good choice.

That may (or may not) be true, but it is not a good suggestion vis-a-vis the original topic of this thread, which concerned proof.

The thread is evolving in the direction of mathematical humor - challenge: prove ##x^0 = 1## without assuming something that relies on having defined ##x^0##. Similar challenges are: prove ##x + 0 = x## without defining this to be a property of ##0##, prove ##(x)(1) = x## without defining this to be a property of ##1##, etc.
 
  • #78
Stephen Tashi said:
How do we show the elementary function ##f(x) = a^x## is continuous at ##x= 0## without having a definition of ##f(0)##?
Ahh..see? Already got stuck in the circle :(. Well, then the whole argument is null.

We could prove ##f(x) = a^x ## is continuous at ##0 ## with the candidate function value of ##1##, which is not very difficult, but we'd need to know what ##a^q ## for rational powers means. This is a can of worms.
 
  • #79
pardon me rada, i did not explain clearly what i was proving. there are two issues here, one is how to define a^x for all real x, and you are right i have not done this. the other issue is whether any definition at all will yield a^0 = 1. These are called the existence and the uniqueness aspects of the exponential function. I have done only the uniqueness aspect. I.e. I have not shown how to define f(x) = a^x for all x, but rather i have proved that any definition at all, if one is possible satisfying the rule f(x+y) = f(x).f(y) (and also f(n) = a^n for positive integers n), must then obey the rule that a^0 = 1.

The reason of course for choosing this property of f is that this property is indeed satisfied by the function a^n, defined for positive integers n. Hence it is reasonable to hope that the function will still satisfy this property for all x. As I stated at the end above, the existence problem, of actually saying how to define a^x for all real x, (which is much harder), is usually done by integration theory.

I.e. one first proves, by considering the integral of 1/x, that there is a function g defined for all positive reals, with g(1) = 0, and whose derivative is 1/x; it follows that this function satisfies g(xy) = g(x)+g(y) for all positive reals, and is invertible. Moreover the function takes all real values. Then the inverse f of this function is defined for all reals, satisfies f(0) = 1, and f(x+y) = f(x).f(y). It follows that if we set a = f(1), then this function f(x) is a good candidate for a^x. At least it agrees with a^x when x is a positive integer, and it has the right additive and multiplicative property.

This subject is a bit difficult to explain thoroughly to students, in my experience. Some people prefer to do it by approximation, but that is even more difficult technically.
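A sketch of the uniqueness point above (my own illustration, not from the thread): any nonzero f satisfying f(x+y) = f(x).f(y) must have f(0) = 1, and the familiar candidates f(x) = a^x indeed do.

```python
import math

# Uniqueness sketch: f(0+0) = f(0)*f(0) forces f(0) in {0, 1}; ruling out
# the zero solution leaves f(0) = 1. The candidates f(x) = a**x (a > 0)
# satisfy the functional equation and give f(0) = 1 exactly.
for a in (2.0, 3.5, 10.0):
    f = lambda x, a=a: math.exp(x * math.log(a))
    assert math.isclose(f(1.25 + 2.5), f(1.25) * f(2.5), rel_tol=1e-12)
    assert f(0.0) == 1.0
```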
 
  • #80
PeroK said:
The fundamental issue is that when you use some mathematical symbols, you must define what you mean by that arrangement of symbols. Until you know what you mean by those symbols, you cannot start to do mathematics using them. In this case, for example, you might write:

##2^0##

But, what does that mean? There's no immediate way to "multiply 2 by itself 0 times". Unlike ##2^1, 2^2, 2^3 \dots ##, which have a simple, clear definition.

My recommended approach is to define ##2^0 = 1## before you go any further. Then you know what those symbols mean.

Now, of course, you need to be careful that a definition is consistent with other definitions, and you need to understand the implications of a certain definition.

In this case, the only other candidate might be to define ##2^0 = 0##. But, when you look at the way powers work, you see that defining ##2^0 =1## is logical and consistent.

Thanks for this reply. I found this reply the most satisfying and beneficial.
 
  • #81
mathwonk said:
note also that then f(1/n)...f(1/n), (n times), = f(1/n+...+1/n) = f(1) = a, so f(1/n) = nth root of a = a^(1/n). Thus also f(m/n) = f(1/n)...f(1/n), (m times), = a^(1/n)...a^(1/n), (m times).

Thus f is determined on all rational numbers, and hence by continuity also on all real numbers.

There is the technicality of whether ##f(m/n)## is well defined, e.g. is ##f(2/3) = f(4/6)## for ##f(x) = (-2)^x##?
mathwonk said:
then this function f(x) is a good candidate for a^x.

I think you have in mind proving the existence of ##e^x##.

At least it agrees with a^x when x is a positive integer, and it has the right additive and multiplicative property.
Ok, we can define ##a^x## in terms of ##e^x## for ##a > 0##. Then we have the problem of the case ##a \le 0##.

For the benefit of @Rijad Hadzic, we should point out that valid attempts to avoid directly defining ## b^0 =1 ## for all ##b \ne 0## involve defining other things and then showing these other definitions imply ##b^0 = 1##. One cannot avoid using definitions as the foundation.
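The "advanced definition" route mentioned above can be sketched directly (my own illustration; the name `power` is hypothetical):

```python
import math

# For a > 0 define a^x := exp(x * log(a)), taking exp and log as given.
# a^0 = 1 then falls out of exp(0) = 1; the case a <= 0 is deliberately
# left uncovered, matching the caveat about a <= 0 in the post above.
def power(a, x):
    if a <= 0:
        raise ValueError("this definition only covers a > 0")
    return math.exp(x * math.log(a))

assert power(2.0, 0.0) == 1.0
assert math.isclose(power(2.0, 3.0), 8.0)
```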
 
  • #82
as far as i know there is no good way to define a function a^x for all real x when a is negative. hence the base for an exponential function is always chosen to be positive. (the difficulty of course is that a^(1/2) should be a square root of a, which will not exist in the real case for a < 0.)
 
  • #83
mathwonk said:
pardon me rada, i did not explain clearly what i was proving. there are two issues here, one is how to define a^x for all real x, and you are right i have not done this. the other issue is whether any definition at all will yield a^0 = 1. These are called the existence and the uniqueness aspects of the exponential function. I have done only the uniqueness aspect. I.e. I have not shown how to define f(x) = a^x for all x, but rather i have proved that any definition at all, if one is possible satisfying the rule f(x+y) = f(x).f(y), must then obey the rule that a^0 = 1.
Now sir, I think that it is actually a mistake to consider the functional equation f(x+y) = f(x).f(y) over the reals. If we take it over the natural numbers, it has solution ##a^x##, and I think this is obvious.

Now we need only the restriction that f(0) is not equal to zero. Let x = y = 0, so that f(0+0) = f(0) = f(0).f(0), which immediately gives f(0) = 1, or ##a^0 = 1## as required.
 
  • #84
PeroK said:
Anyway, with that definition of the exponential function, let's evaluate ##e^0##:

##e^0 = \sum_{n = 0}^{\infty} \frac{0^n}{n!} = \frac{0^0}{0!} + \frac{0^1}{1!} + \dots = \frac{0^0}{0!} ##

Hmm. Maybe that doesn't resolve all your misgivings after all!

I was hoping you or someone else would pick it up, but the power series expansion of e(x) is (not using the compact summation formula)

e(x) = 1 + x + x^2/2! + x^3/3! + ...

So e(0) = 1.

The compact sum formula, as you correctly point out, has issues with zero, i.e. what is 0^0? It's either 0 or 1:
https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero

However, as far as power series expansions go, it's usually taken as 1.

When that happens, to avoid any possible confusion, it's probably better to write it out term by term.

Thanks
Bill
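For what it's worth, a quick check of the convention discussed above (my own illustration): Python, like most languages, evaluates 0^0 as 1, which is exactly the convention that makes the compact summation formula agree with the term-by-term expansion.

```python
import math

# The 0^0 = 1 convention, as used in power series.
assert 0 ** 0 == 1
assert 0.0 ** 0.0 == 1.0

# With that convention the compact sum at x = 0 collapses to its first
# term and agrees with the term-by-term expansion: e(0) = 1.
e0 = sum(0.0 ** n / math.factorial(n) for n in range(20))
assert e0 == 1.0
```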
 
  • #85
bhobba said:
but the power series expansion of e(x) is (not using the compact summation formula)

e(x) = 1 + x + x^2/2! + x^3/3! + ...

For the purpose of proving b^0 = 1, we shouldn't call it an "expansion" if this implies it is computed by taking derivatives and evaluating them at x = 0. That would require assuming ##e^0 = 1## in the first place.

For the purposes at hand, we can define ##e^x## by the above series and then define ##b^x## in terms of ##e^x## for ##b > 0##.

That leaves open the problem of showing ##(-2)^0 = 1##.
 
  • #86
Rada Demorn said:
Now we need only the restriction that f(0) is not equal to zero and let x=y=0 so that f(0+0)=f(0)=f(0).f(0) which gives immediately f(0)=1 or ## a^0=1 ## as required.
##f(0)=f(0)^2## has two solutions. The other solution is alluded to by @PeroK in #23.

Edit: However, you have covered this possibility already. Apologies.
 
  • #87
How do you like this for a definition?

a is a Natural Number.

a^3 is a volume, a cube.

a^2 is an area, a square.

a^1 is a length, a line.

What else could a^0 be but a point, an individual, a 1?
 
  • #88
Rada Demorn said:
How do you like this for a definition?

a is a Natural Number.

a^3 is a volume, a cube.

a^2 is an area, a square.

a^1 is a length, a line.

What else could a^0 be but a point, an individual, a 1?

If we have a cube of side 2, its volume is 8. Or, if we have a cube of bricks, with each side 2 bricks long, we have 8 bricks.

For a square of side 2, its area is 4. Or, a square of bricks needs 4 bricks.

For a line of length 2, its length is 2. Or, a line of bricks needs 2 bricks.

For a point, its size is 0. Or, if we have a zero-dimensional array of bricks, then there are no bricks.

By this intuitive reasoning, we should have ##2^0 = 0##.
 
  • #89
If one chose to compute a^n by choosing one member of set A (with cardinality a) for column 1, one member of set A for column 2, and so on up to column n, and then counting the number of possible combinations, one could make a reasonable argument that there is exactly one way to choose nothing at all in no columns at all, giving a^0 = 1.
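The counting argument above can be sketched directly (my own illustration):

```python
import itertools

# a**n counts the length-n tuples drawn from a set of size a. With
# n = 0 there is exactly one such tuple, the empty tuple, so a**0 = 1.
a = 5
for n in range(4):
    assert len(list(itertools.product(range(a), repeat=n))) == a ** n

# The n = 0 case in particular: one way to "choose nothing at all".
assert list(itertools.product(range(a), repeat=0)) == [()]
```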
 
  • #90
Perhaps a valid proof that ##b^0 = 1## could be extended to valid proof that the "empty product" convention https://en.wikipedia.org/wiki/Empty_product is not a definition at all, but rather a theorem. (Just joking.)
 
