I've always found it annoying how the differentiability of eigenvalues is treated as trivial in the context of perturbation theory... So this thread caught my attention. I guess this is what Hurkyl is talking about:
If \mathbb{R}\to\mathbb{R}^{n\times n},\; t\mapsto A(t) is some continuously differentiable function, then we can define a function
f:\mathbb{R}^{1+n}\to\mathbb{R}^n,\quad f_i(t,\lambda_1,\ldots, \lambda_n) = \det\big(A(t) - \lambda_i\,\textrm{id}_{n\times n}\big)
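For concreteness (a toy family of my own choosing, not from the thread): with n=2 and A(t) = \left(\begin{array}{cc} 1+t & 0 \\ 0 & 2-t \end{array}\right), this gives

f_1(t,\lambda_1,\lambda_2) = (1+t-\lambda_1)(2-t-\lambda_1),\qquad f_2(t,\lambda_1,\lambda_2) = (1+t-\lambda_2)(2-t-\lambda_2)

Note that f_i involves \lambda_i only; this is used below.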
The implicit function theorem says that if f(0,\lambda_1,\ldots, \lambda_n)=0 for some values \lambda_1,\ldots,\lambda_n, and if
\left(\begin{array}{ccc}
\frac{\partial f_1}{\partial\lambda_1} & \cdots & \frac{\partial f_1}{\partial\lambda_n} \\
\vdots & & \vdots \\
\frac{\partial f_n}{\partial\lambda_1} & \cdots & \frac{\partial f_n}{\partial\lambda_n}
\end{array}\right)
is invertible at the point (0,\lambda_1,\ldots, \lambda_n), then there exists a continuously differentiable mapping t\mapsto (\lambda_1(t),\ldots, \lambda_n(t)), defined on some neighborhood of t=0 with \lambda_i(0)=\lambda_i, such that each \lambda_i(t) is an eigenvalue of A(t).
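As a side remark (standard implicit function theorem bookkeeping, not something stated above): since f_i(t,\lambda_i(t)) = 0 identically in t, the chain rule gives the derivative of the eigenvalue explicitly,

\lambda_i'(0) = -\left(\frac{\partial f_i}{\partial\lambda_i}\right)^{-1}\frac{\partial f_i}{\partial t}\Bigg|_{(0,\lambda_1,\ldots,\lambda_n)}

which is the kind of first-order formula perturbation theory is after.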
Now, each f_i depends on only one of the eigenvalue variables, namely \lambda_i, so the off-diagonal entries of this matrix vanish:

i\neq j\quad\implies\quad \frac{\partial f_i}{\partial\lambda_j} = 0
so actually we only need to prove that
\frac{\partial f_i}{\partial\lambda_i} \neq 0
for all 1\leq i\leq n.
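To see what this condition says (my own unpacking, not part of the quoted argument): over \mathbb{C} the characteristic polynomial factors as

p_0(\lambda) = \det\big(A(0) - \lambda\,\textrm{id}_{n\times n}\big) = \prod_{k=1}^n (\mu_k - \lambda)

where \mu_1,\ldots,\mu_n are the eigenvalues of A(0) listed with algebraic multiplicity, so

\frac{\partial f_i}{\partial\lambda_i}\Bigg|_{(0,\lambda_1,\ldots,\lambda_n)} = p_0'(\lambda_i) = -\sum_{j=1}^n \prod_{k\neq j}(\mu_k - \lambda_i)

If \lambda_i is a simple eigenvalue, the only nonvanishing term is j=i, and it equals -\prod_{k\neq i}(\mu_k-\lambda_i)\neq 0; if \lambda_i has multiplicity \geq 2, every term contains a vanishing factor. So the invertibility hypothesis is exactly the usual nondegeneracy assumption: all eigenvalues of A(0) are simple.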