It's hard to believe that the definition of differentiability of a function of two variables is not given in your textbook on Calculus of several variables. The existence of the partial derivatives of a function at a given point is NOT enough to guarantee that the function is "differentiable" there. The standard definition is this:
F(x, y), a function from R^2 to R (you can extend to as many variables as you like), is "differentiable at (x_0, y_0)" if and only if there exist a linear function L(x,y) from R^2 to R and a function \epsilon(x,y) from R^2 to R such that
F(x,y)= F(x_0, y_0)+ L(x-x_0, y-y_0)+ \epsilon(x-x_0, y-y_0)
and
\lim_{(x,y)\to (x_0,y_0)} \frac{\epsilon(x-x_0, y-y_0)}{\sqrt{(x-x_0)^2+ (y-y_0)^2}}= 0
Roughly, that says that L(x,y) (the derivative of F(x,y) at (x_0, y_0)) is the "best" linear approximation to F(x,y) in a neighborhood of (x_0, y_0). If you think of z= F(x,y) as defining a surface in R^3, that says that the surface has a tangent plane at the point.
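To see the limit in action, here is a small numeric sketch. The function F(x,y) = x^2 y and the point (1, 2) are my own choices for illustration; L is built from the partial derivatives, and the ratio \epsilon/r should shrink toward 0 as (h,k) does:

```python
import math

def F(x, y):
    # sample function, chosen for illustration: F(x, y) = x^2 * y
    return x**2 * y

# point of interest
x0, y0 = 1.0, 2.0

# partial derivatives at (x0, y0): dF/dx = 2*x0*y0, dF/dy = x0^2
Lx, Ly = 2 * x0 * y0, x0**2

def epsilon_over_r(h, k):
    # remainder epsilon(h, k) divided by sqrt(h^2 + k^2)
    eps = F(x0 + h, y0 + k) - F(x0, y0) - (Lx * h + Ly * k)
    return eps / math.hypot(h, k)

# shrink (h, k) toward (0, 0); the ratio tends to 0, as the definition requires
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(t, epsilon_over_r(t, t))
```

For this F the remainder along h = k = t works out to 4t^2 + t^3, so the ratio is about 4t/\sqrt{2}, visibly going to 0.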
That can be hard to use directly. Probably what you want to use is a theorem normally proved immediately after that definition is introduced:
"F(x,y) is differentiable at (x_0, y_0) if and only if the partial derivatives \partial F/\partial x and \partial F/\partial y exist and are continuous in some neighbohood of (x_0, y_0)."