timthereaper said:
Okay, I've read all the posts and have tried to figure out the controversy of 0^0=1. From what I understand (correct me if I'm wrong), 0^0 is actually indeterminate, but the general consensus is that defining 0^0=1 makes things easier in some ways, even though it violates some rules somehow.
That's a very good summary!
I'm no mathematician (I'm an engineer), so why is it convenient to define it this way? I guess I'm trying to figure out what problems would arise if we didn't say that 0^0=1.
Well, take Taylor series, for example. The Taylor series of e^x is
e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...
No problems so far, but if we want to write it more compactly, we get
e^x=\sum_{k=0}^{+\infty}{\frac{x^k}{k!}}
However, when we evaluate this at 0 (that is, if we want to evaluate e^0), we get 0^0 in the first term. That is:
e^0=\frac{0^0}{0!}+\frac{0^1}{1!}+\frac{0^2}{2!}+...
Now the conventions 0!=1 and 0^0=1 are handy: they make the first term \frac{0^0}{0!}=\frac{1}{1}=1, while every later term vanishes, so we can calculate e^0=1 (which we already knew to be true).
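To see the convention at work numerically, here is a minimal Python sketch (my own illustration, not from the post; the name exp_series and the 20-term cutoff are arbitrary choices) that sums the series directly. It returns 1.0 at x=0 precisely because Python's ** operator follows the 0^0=1 convention:

    from math import factorial

    def exp_series(x, terms=20):
        # Partial sum of e^x = sum over k >= 0 of x^k / k!.
        # The k = 0 term is x**0 / 0!; at x = 0 it evaluates to
        # 0**0 / 0! = 1 / 1 = 1, since Python defines 0**0 == 1.
        return sum(x**k / factorial(k) for k in range(terms))

    print(exp_series(0))  # 1.0
    print(exp_series(1))  # 2.718281828459045 (approximately e)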
If we didn't set 0^0=1, then we would have to write
e^x=1+\sum_{k=1}^{+\infty}{\frac{x^k}{k!}}
which is less elegant. So you see, we define 0^0=1 only because it is sometimes more elegant to do so. We won't create new mathematics with it, we won't run into problems with it; it just makes things nicer.
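For contrast, here is a sketch of the same computation written so that 0**0 is never evaluated (again my own illustrative code): the k=0 term has to be pulled out by hand, mirroring the less elegant formula above.

    from math import factorial

    def exp_series_no_convention(x, terms=20):
        # Without the 0^0 = 1 convention, the k = 0 term is written
        # separately as the constant 1 and the sum starts at k = 1:
        # e^x = 1 + sum over k >= 1 of x^k / k!.
        return 1 + sum(x**k / factorial(k) for k in range(1, terms))

    print(exp_series_no_convention(0))  # 1.0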
In a way, it's the same thing as setting 0!=1. This is just a handy convention that makes a lot of things easier. Setting 0!=2 would have been just as workable, but it would make the formulas uglier...
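Incidentally, both conventions are baked into common numeric libraries; for example, Python's standard library agrees on all counts:

    import math

    print(math.factorial(0))   # 1   -- the 0! = 1 convention
    print(0**0)                # 1   -- integer exponentiation uses 0^0 = 1
    print(math.pow(0.0, 0.0))  # 1.0 -- floating-point pow does too (as in C99/IEEE 754)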