I'm not so fond of treating derivatives as quotients of differentials, but I think formally it works out here.
Without wanting to be argumentative, I would like to add some comments that I hope could be helpful for the OP and/or others. The original system from post #1 is
\[
\left\{
\begin{aligned}
x e^y + y f(z) &= a\\
x g(x,y) + z^2 &= b
\end{aligned}
\right.
\]
It was given that the system defines $x$ and $y$ as differentiable functions of $z$, but how do we verify that this is indeed the case? This can be settled, together with the rest of the problem, along the following lines, at least locally and provided we are allowed to assume in addition that the derivatives of $f$ and $g$ are continuous.
The above system can be written as $F(x,y,z) = 0$ with $F : \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}^2$ given by
\[
F(x, y, z) =
\begin{pmatrix}
x e^y + y f(z) - a\\
x g(x,y) + z^2 - b
\end{pmatrix}
\]
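To have everything explicit in what follows, and assuming as announced that $f$ and $g$ are continuously differentiable (I write $f'$ for the derivative of $f$ and $g_x$, $g_y$ for the partial derivatives of $g$; this notation is mine, not from post #1), the matrix of partial derivatives of $F$ with respect to $(x,y)$ and the column of partial derivatives with respect to $z$ work out to
\[
\begin{pmatrix}
e^y & x e^y + f(z)\\
g(x,y) + x\, g_x(x,y) & x\, g_y(x,y)
\end{pmatrix}
\qquad\text{and}\qquad
\begin{pmatrix}
y f'(z)\\
2z
\end{pmatrix},
\]
respectively.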
Let $D_1F$ denote the first of these blocks, i.e. the partial derivative of $F$ with respect to its first two arguments, and $D_2F$ the second, i.e. the partial derivative with respect to $z$. At any point $(x_0,y_0,z_0) \in \mathbb{R}^3$ where $F(x_0,y_0,z_0) = 0$ and the $2 \times 2$ matrix $D_1F(x_0,y_0,z_0)$ is non-singular, the Implicit Function Theorem applies: there exists a continuously differentiable function $G$, defined in a neighborhood $U$ of $z_0$ and taking values in a neighborhood $V$ of $(x_0,y_0)$, such that for each $z \in U$ the equation $F(x,y,z) = 0$ has the unique solution $G(z) \in V$. We may differentiate the relation $F(G(z),z) = 0$ with respect to $z \in U$ to find
\[
D_1F(G(z),z)DG(z) + D_2F(G(z),z) = 0,
\]
so $DG(z) = -[D_1F(G(z),z)]^{-1}D_2F(G(z),z)$ for all $z \in U$. (The non-singularity of $D_1F(G(z_0),z_0)$ and the continuity of the derivatives of $f$ and $g$ are used together to ensure that the matrix inverse exists for $z$ near $z_0$.)
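Just to connect this back to the componentwise calculation one would do by differentiating the two original equations directly (a sanity check, not part of the argument above): writing $G(z) = \bigl(x(z), y(z)\bigr)$ and inverting the $2 \times 2$ matrix, e.g. by Cramer's rule, the formula for $DG(z)$ becomes, with everything evaluated at $(x,y) = G(z)$,
\[
x'(z) = \frac{2z\bigl(x e^y + f(z)\bigr) - x\, y\, g_y(x,y)\, f'(z)}{\Delta},
\qquad
y'(z) = \frac{y f'(z)\bigl(g(x,y) + x\, g_x(x,y)\bigr) - 2z\, e^y}{\Delta},
\]
where $\Delta = x e^y g_y(x,y) - \bigl(x e^y + f(z)\bigr)\bigl(g(x,y) + x\, g_x(x,y)\bigr)$ is the determinant of $D_1F(G(z),z)$, non-zero for $z$ near $z_0$ by the remark above. These are the same expressions one should obtain by formally differentiating the two equations of the original system with respect to $z$ and solving the resulting linear system for $dx/dz$ and $dy/dz$.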