Why is anything raised to the power zero, one?

In summary: what you're asking for can't be done. You're asking for a characterization of all the cases where $F = f^{g} - 1 = 0$ without ever appealing to the definition of exponentiation, and that's not something that can be done.
  • #1
sshzp4
Are there any non-theological (or non-heuristic, like http://betterexplained.com/articles/understanding-exponents-why-does-00-1/ ), non-inductive, secular answers to the question:

Why is $a^0 = 1 \;\forall a \in \mathbb{N}, \mathbb{R}, \mathbb{C}$? Why is this always true? Are there any 'mathematical' explanations for this? Anything that does not rely on obtuse arguments? Please link me to the answer if you are aware of any resources.

(Please pardon me if I overlooked previous discussions on the topic in the forum)

Thanks!

Sid
 
  • #2
Does not the exponent indicate how many times that number is used as a factor? If a number is used as a factor zero times, isn't that the same as multiplying by one?
 
  • #3
I believe this has been answered before, and the best answer seemed to be that exponents only really work if something to the 0th power is defined as 1.
 
  • #4
Skeptic2: Thanks, but I don't get that logic. (Edit: That view depends on the semantics of the description, so it might not be considered very reliable.)

Pengwuimilanove: I understand the heuristic answer, only it doesn't feel very satisfactory. I am just hoping some advanced 50-year-old mathematics postdoc will post something arcane and magical (Edit: maybe the Gödel or Peano fanatics have something nice?).
 
  • #5
Why is something raised to the power of one equal to that number?

If you can answer that, it will be far clearer how to frame an answer that you will be willing to accept.
 
  • #6
It isn't so much a matter of showing that "anything raised to the power 0 is one" as it is of defining powers that way. (And, by the way, it is NOT true as stated: [itex]0^0[/itex] is "undefined". What is true is that "if a is a non-zero number then [itex]a^0= 1[/itex]".)

As I say, that is a definition, but we have a reason for defining it that way: we define positive integer powers by "[itex]a^n[/itex] is equal to a multiplied by itself n times". It is then easy to show, essentially by a "counting argument", that [itex]a^{m+n}[/itex], where m and n are both positive integers, which is "a multiplied by itself m times, then a multiplied by itself n times, and those multiplied together", is just the same as "a multiplied by itself m+n times". That is, [itex]a^{n+m}= a^ma^n[/itex].

That happens to be a very useful property! So we want to define other powers in such a way as to keep it true. Since "0" has the property that n+0= n for any n, we want to have [itex]a^n= a^{n+0}= a^na^0[/itex]. Now, as long as [itex]a\ne 0[/itex], [itex]a^n\ne 0[/itex], and we can divide both sides of that equation by [itex]a^n[/itex] to get [itex]1= a^0[/itex].

That is, in order to have [itex]a^{n+m}= a^na^m[/itex] true even if one of m or n is 0, we must define [itex]a^0= 1[/itex].

We do much the same thing to see how to define negative powers: if we are going to have [itex]a^{n+(-n)}= a^na^{-n}[/itex], since n+(-n)= 0 so that [itex]a^{n+(-n)}= a^0= 1[/itex], we must have [itex]a^na^{-n}= 1[/itex] and, again as long as a is not 0, we can divide both sides by [itex]a^n[/itex] to get [itex]a^{-n}= 1/a^n[/itex].
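
(A small illustrative sketch, not part of the original argument: if non-negative integer powers are computed by starting from 1 and multiplying by a repeatedly, then [itex]a^0[/itex] is the empty product and the law [itex]a^{m+n}= a^ma^n[/itex] can be checked directly. The helper name int_pow is just for illustration.)

[code]
# Illustrative sketch: build non-negative integer powers by starting at 1
# and multiplying by a exactly n times.
def int_pow(a, n):
    result = 1            # the empty product; this is all that a^0 returns
    for _ in range(n):    # multiply by a exactly n times
        result *= a
    return result

a, m, n = 3, 4, 5
assert int_pow(a, 0) == 1                                   # a^0 = 1
assert int_pow(a, m + n) == int_pow(a, m) * int_pow(a, n)   # a^(m+n) = a^m a^n
[/code]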
 
  • #7
Halls -- I understand your point. I agree with it, even. But in the process of deriving the series-form solution to the elliptic integral yesterday, I was struck by how the $\sin^{2n}{\theta}$ terms were forced to disappear, leaving just 1. And that struck me as rather contrived.

Anyway, a better way (my personal opinion) to define the exponential function is "anything raised to the power of $n$ is 1 multiplied by that thing $n$ times". Then it's clear that when you multiply 1 by the thing 0 times, you still have 1 remaining; that is, you don't multiply 1 by anything when the power is zero. This is not very different from what you described at all. But I am still saying that this is a semantic construct.

What you have presented is the 'selection of a definition based on how you want things to behave after you have made the choice of a definition', which is perfectly fine and reasonable. The problem comes up when you raise a function (say, f) to the power of another function (g), where both functions are complex. In that case, to predict the behavior of such an expression, we usually rely on the definition that we constructed using visualizations true in the real domain alone. What I would like to know is the necessity/sufficiency criterion that states: if $F = f^{g} - 1$, then $F = 0 \iff g = 0$, $\forall f, g \in$ {some domain}, without again resorting to the definition.
 
  • #8
In any inductive definition (and every mathematical operation is defined in such a way), there are always inductive properties (x^(n+1) = x * x^n) and base cases (x^0 = 1).
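
(As a hedged sketch, not part of the original post, here is that inductive definition written out in Python; the inductive property does the work, and the base case supplies the 1.)

[code]
# Sketch of the inductive definition above (illustrative only):
# inductive property: x^(n+1) = x * x^n;  base case: x^0 = 1.
def pow_inductive(x, n):
    if n == 0:
        return 1                            # base case
    return x * pow_inductive(x, n - 1)      # inductive step

print(pow_inductive(5, 3))  # 125
print(pow_inductive(5, 0))  # 1
[/code]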

To a philosopher, the inductive properties look like they have a lot of "meaning", because they often determine the properties of the operator involved.

The base case is chosen the way it is simply because the alternatives lead to useless properties (say, x^0 = 0, which forces x^n = 0 for all x and n) or to simple variations that merely cause us to do extra work (say, x^0 = 2, which makes x^n simply twice its usual value).

Sometimes, two alternative definitions seem equally valid. For example, when defining rings, some authors say a ring requires a unit (a kind of base case). Other authors don't. The only difference is that the one author must always say "Let R be a ring with unit" when they want to talk about rings with units, and the other must talk about R \ {1} when they want to discuss the ring with the unit removed. Either definition works, but depending on which theorems you're interested in, one definition leads to more verbosity later down the page.

Operations are often "weird" near their base cases, because their properties stop working at that point. Since we use 0 and 1 as common base cases, you often see weird behavior at these points. Division by zero is weird. Exponentiation at zero is weird. The expression 0^0 is super weird because TWO frickin' zeros are involved! But it's no weirder than what happens when you step off the edge of a cliff. The base case (regardless of what value you chose for it) is simply the point at which the induction properties stop working and you fall off. For most operators, this doesn't cause mental hang-ups, because asking the question out loud doesn't make sense in English (how often are you looking for subsets of a set when you know it's empty? It doesn't really bother people that the answer is 1 instead of 0).

I'm not familiar with the problem you're working on, but extending algebraic operations to functions is always a ****ing pain. You have to know the range of all the functions involved in order to determine the domain of the expression you're working with. You can't get around it. You can do the physicist's thing and pretend that 0/0 or 0^0 is just infinity or some other unimportant value, and you'll still get the right answer for most other points on your function. But to find an exact answer, you have to be exact in your definitions.
 
  • #9
Because of exponent arithmetic, actually:

1: x/x = 1 (provided that x != 0)
2: x^1 / x^1 = 1
3: x^1 * x^(-1) = 1
4: x^(1 + (-1)) = x^0 = 1

It's the basic rule of canceling out exponents, and it actually only works if x != 0. If you say that 0^0 is 1, at some point nasty things are going to happen. 0^0 is undefined, just as 0/0 is.

I mean, as long as we accept the rule that a^x * a^y = a^(x+y), we have no choice but to accept that x^0 for any x (not 0) is 1.

There are more ways to see this. For example, if you graph f(y) = x^y for any x > 0, you'll see that as y approaches 0, f(y) approaches 1.
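
(A quick numerical check of that limit, added as an illustrative sketch; the base is assumed positive, since x^y isn't real-valued for negative x and non-integer y.)

[code]
# Illustrative check: for x > 0, x**y approaches 1 as y approaches 0.
for x in (0.5, 2.0, 10.0):
    for y in (0.1, 0.01, 0.001, -0.001):
        print(f"{x}**{y} = {x**y:.6f}")   # values cluster around 1.0
[/code]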

Edit: Another thing, by the way, is that the product of no numbers is usually taken to be 1, because 1 is the multiplicative identity. We can then define:

1: prod () = 1
2: prod (a, b, c ...) = a * prod (b, c ...)

Similarly, we define sum like:

1: sum () = 0
2: sum (a, b, c ...) = a + sum (b, c ...)

This works because 0 is the additive identity. And this is also exactly the reason (one of many) why x*0 = 0: multiplying x by 0 means adding x zero times, i.e. taking the empty sum.
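
(For illustration, here are the same recursive definitions written out in Python; the names prod_ and sum_ are just placeholders, and Python's own math.prod and built-in sum use the same empty-collection conventions.)

[code]
# Sketch of the recursive prod()/sum() definitions above (illustrative only).
import math

def prod_(xs):
    return 1 if not xs else xs[0] * prod_(xs[1:])   # prod() = 1, the multiplicative identity

def sum_(xs):
    return 0 if not xs else xs[0] + sum_(xs[1:])    # sum() = 0, the additive identity

print(prod_([]), sum_([]))        # 1 0
print(math.prod([]), sum([]))     # Python's built-ins agree: 1 0
print(prod_([2, 3, 4]))           # 24 = 2 * prod_([3, 4])
[/code]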
 
  • #10
So what are these nasty things that happen if you allow 0^0 to be 1?
 
  • #11
The same sorts of nasty things that happen if you allow 0/0 to be anything.
 
  • #12
>>Why is anything raised to the power zero, one?


It was a lost math. I think it is because of a space effect, so that 1^2 * 1^0 = 1^2 instead of 0.
Since 1 appears to be the origin of all numbers, it works for all numbers.


Now consider the following:

1^0 = 1
1^1 = 1
1^2 = 1

What is the difference between 1^0, 1^1 and 1^2?
 
  • #13
sshzp4 said:
Halls -- I understand your point. I agree with it, even. But in the process of deriving the series-form solution to the elliptic integral yesterday, I was struck by how the $\sin^{2n}{\theta}$ terms were forced to disappear, leaving just 1. And that struck me as rather contrived.

Anyway, a better way (my personal opinion) to define the exponential function is "anything raised to the power of $n$ is 1 multiplied by that thing $n$ times".
Well, I disagree with that. You are asserting that [itex]2^{1/2}[/itex] is 1 multiplied by 2, 1/2 times! What does that mean? Or that [itex]2^{\pi}[/itex] is 1 multiplied by 2, [itex]\pi[/itex] times. What does that mean?

sshzp4 said:
Then it's clear that when you multiply 1 by the thing 0 times, you still have 1 remaining; that is, you don't multiply 1 by anything when the power is zero. This is not very different from what you described at all. But I am still saying that this is a semantic construct.

What you have presented is the 'selection of a definition based on how you want things to behave after you have made the choice of a definition', which is perfectly fine and reasonable. The problem comes up when you raise a function (say, f) to the power of another function (g), where both functions are complex. In that case, to predict the behavior of such an expression, we usually rely on the definition that we constructed using visualizations true in the real domain alone. What I would like to know is the necessity/sufficiency criterion that states: if $F = f^{g} - 1$, then $F = 0 \iff g = 0$, $\forall f, g \in$ {some domain}, without again resorting to the definition.
"Without resorting to the definition", you do not even know what you are talking about!
 
  • #14
The analytic approach is:

1. Define exp(z) = 1+z+z^2/2!+... on C
2. Define ln(z) as the inverse of exp(z)
3. Define a^z = exp(z ln(a))

Then it follows that a^0 = exp(0) = 1.
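
(A rough numerical sketch of that construction, added for illustration: the series is truncated at a fixed number of terms and math.log stands in for the inverse of exp, so this is only an approximation for real a > 0.)

[code]
# Rough numerical sketch of the analytic construction above (illustrative).
import math

def exp_series(z, terms=30):
    # truncated series 1 + z + z^2/2! + ...
    return sum(z**k / math.factorial(k) for k in range(terms))

def power(a, z):
    # a^z defined as exp(z * ln(a)), here for real a > 0, with math.log as ln
    return exp_series(z * math.log(a))

print(power(7.3, 0))     # exp(0) = 1.0, no matter what the base a is
print(power(2.0, 0.5))   # ~1.41421, agrees with 2**0.5
[/code]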
 
  • #15
Any number (N) raised to a finite power (x) can have some sort of meaning, but N raised to the power zero seems to be meaningless. I think it might be more appropriate to say that as x tends to zero, N to the power x tends to one.
 
  • #16
HallsofIvy has given the best explanation one can possibly give for the question asked. The others seem fine (mostly), but they are not as clear or concise as the one given by HallsofIvy...
 

1. Why is anything raised to the power zero equal to one?

Anything (other than zero) raised to the power of zero is equal to one because of the fundamental property of exponents. When we raise a number to a power, we are essentially multiplying that number by itself a certain number of times. When the power is zero, no multiplications are performed at all, so we are left with the empty product, which is one. (The case 0^0 is usually left undefined.)

2. Why does anything raised to the power one equal itself?

When a number is raised to the power of one, that number appears exactly once as a factor. This means that the number remains unchanged, so the result is the number itself.

3. Can anything be raised to the power of zero and one?

Yes, anything, including numbers, variables, and expressions, can be raised to the power of zero and one. The resulting answers will always be one and the original number, respectively.

4. How does raising a number to the power zero or one affect the value?

Raising a number to the power of one leaves its value unchanged, while raising it to the power of zero gives one regardless of the original (non-zero) value.

5. Why do we use exponents and powers in mathematics?

Exponents and powers are used in mathematics to represent repeated multiplication or division in a more concise and efficient way. They are also used in many mathematical formulas and equations to simplify complex expressions and make them easier to solve.
