Bipolarity said:
It seems there is controversy over this. I believe the statement that 0^{0}=1 is verified by Google, but then again, Google is not a mathematician. I had this question a while ago, and it was clarified by
http://www.askamathematician.com/20...ematicians-and-high-school-teachers-disagree/ whose arguments I found quite convincing.
I see no reason why 0^{0}=1 interferes with \lim_{x\to 0} 0^x, since the latter need not be defined at x=0 for the limit to exist. I see no reason why mathematicians would want that function to be continuous at x=0 either...
According to the link above, the definition 0^{0} = 1 allows for a more elegant enunciation of the binomial theorem, among a few other things.
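For instance (a standard illustration, not spelled out in detail in the link): write the binomial theorem as

(x + y)^{n} = \sum_{k=0}^{n} \binom{n}{k} x^{k} \, y^{n - k}

Setting y = 0 should give x^{n}, but the only surviving term on the right is k = n, which reads \binom{n}{n} \, x^{n} \, 0^{0}. So the identity holds for all x exactly when one takes 0^{0} = 1.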
The binomial theorem relies implicitly on the commutativity of multiplication of ordinary numbers. That is why there is a combinatorial coefficient in front of a^{n - k} b^{k}: you clump together all products of the same type, say

a b a b a, \quad a a b b a, \quad a b a a b, \ldots
But matrix multiplication is non-commutative, so I don't see the benefit of defining A^{0} = I to simplify the matrix binomial theorem, when the binomial theorem is not simple for non-commuting matrices in the first place.
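Indeed, already for n = 2 with non-commuting matrices one gets

(A + B)^{2} = A^{2} + AB + BA + B^{2}

and AB + BA does not collapse to 2AB, so the binomial coefficients never appear in the first place.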
Bipolarity said:
And Dickfore claims it is only true for invertible matrices; I don't know where he's getting this information.
I am starting from a general definition of a function of a square matrix. Namely, suppose we can diagonalize the matrix:
A = U \, \Lambda \, U^{-1}
where \Lambda is a diagonal matrix containing the eigenvalues of A on the main diagonal. Then one defines:
f(A) = U \, f(\Lambda) \, U^{-1}
where f(\Lambda) is again a diagonal matrix whose diagonal elements are the values f(\lambda_\alpha) for the corresponding eigenvalues of A, and U is the same matrix as above.
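For concreteness, here is a minimal numpy sketch of this recipe (my own illustration; the name apply_matrix_function is hypothetical, and it assumes A is diagonalizable):

import numpy as np

def apply_matrix_function(A, f):
    # Diagonalize A = U diag(lambdas) U^{-1}, apply f to each eigenvalue,
    # then transform back: f(A) = U f(Lambda) U^{-1}.
    lambdas, U = np.linalg.eig(A)
    f_Lambda = np.diag(f(lambdas))
    return U @ f_Lambda @ np.linalg.inv(U)

# Example: the matrix exponential of a symmetric (hence diagonalizable) matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(apply_matrix_function(A, np.exp))  # should agree with scipy.linalg.expm(A)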
Then the problem of defining A^0 for a matrix reduces to the problem of defining \lambda^0 for a (complex) number \lambda. Powers of complex numbers are given by:
u^{v} = \exp \left( v \, \mathrm{Log}(u) \right)
If you take v = 0, the argument of the exponential is zero, and \exp(0) = 1. But that is true only provided \mathrm{Log}(u) exists! Otherwise the r.h.s. is not defined, and \mathrm{Log} is not defined for u = 0.
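Explicitly, for u \neq 0:

u^{0} = \exp \left( 0 \cdot \mathrm{Log}(u) \right) = \exp(0) = 1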
A matrix has a zero eigenvalue if and only if its determinant, being the product of its eigenvalues, is zero as well, i.e. if and only if the matrix is singular. This is why I required the determinant to be non-zero: then all the eigenvalues are non-zero, and their exponentiation (even by 0) is defined.
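As a quick numerical check (my own illustration, working on the eigenvalues directly): computing \lambda^{0} = \exp(0 \cdot \mathrm{Log}(\lambda)) works for the eigenvalues of an invertible matrix, but breaks down as soon as a zero eigenvalue appears:

import numpy as np

def zeroth_power(lambdas):
    # lambda^0 defined as exp(0 * Log(lambda)); undefined where Log(lambda) is
    return np.exp(0 * np.log(lambdas.astype(complex)))

print(zeroth_power(np.array([3.0, 1.0])))  # eigenvalues of an invertible matrix: both 1
print(zeroth_power(np.array([2.0, 0.0])))  # zero eigenvalue: second entry is nan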