Hi transphenomen!
Let me explain the theorems with some examples.
The implicit function theorem says that, under suitable conditions, the solution set of an equation is locally the graph of a function. For example, say that we need a function f such that
e^{xf(x)}+f(x)^3=0
Does such a function even exist? We might try to solve the equation analytically, but that won't work here (I think). However, we can use the implicit function theorem to say that such a function exists locally (provided some conditions on the derivatives are satisfied).
As can easily be seen, the point (0,-1) satisfies the equation
e^{xy}+y^3=0
All we have to do is take the partial derivative with respect to y, which gives us
xe^{xy}+3y^2
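(If you want to double-check that computation, here's a tiny sketch in Python with sympy; sympy is just my choice of tool here, not part of the theorem. It differentiates e^{xy}+y^3 with respect to y and evaluates the result at (0,-1).)

```python
# Symbolic check (sympy is just my choice of tool): differentiate
# F(x, y) = e^{xy} + y^3 with respect to y and evaluate at (0, -1).
import sympy as sp

x, y = sp.symbols('x y')
F = sp.exp(x*y) + y**3
Fy = sp.diff(F, y)
print(Fy)                       # x*exp(x*y) + 3*y**2
print(Fy.subs({x: 0, y: -1}))   # 3
```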
So you see that the partial derivative at (0,-1) equals 3 and thus doesn't vanish, so there exists a local function
g:]-\delta,\delta[\rightarrow ]-1-\epsilon,-1+\epsilon[
such that g(0)=-1 and
e^{xg(x)}+g(x)^3=0
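You can even watch this local function appear numerically. Here is a small sketch, assuming Python with scipy (my choice of tools) and a bracket [-2, -0.5] around -1 that I picked by hand; it solves the equation for g(x) at a few points near 0:

```python
# Numerical illustration (scipy and the bracket [-2, -0.5] around -1 are my
# choices): for a few x near 0, solve e^{x*y} + y^3 = 0 for y.
import numpy as np
from scipy.optimize import brentq

def g(x):
    # Root in y of F(x, y) = e^{x y} + y^3, bracketed around y = -1.
    return brentq(lambda y: np.exp(x * y) + y**3, -2.0, -0.5)

for x in [-0.1, -0.05, 0.0, 0.05, 0.1]:
    y = g(x)
    print(f"x = {x:+.2f}, g(x) = {y:.6f}, residual = {np.exp(x*y) + y**3:.1e}")
```

At x = 0 this returns -1 (up to the solver's tolerance), and the values for nearby x stay close to -1, which is just what the theorem promises.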
As for the inverse function theorem, it tells us when it's OK to invert a function locally. For example, consider the function e^x: can we invert this function? You might say yes, the logarithm is the inverse, but that's just the definition of the logarithm; nobody tells us that this definition is a good one.
Now, we know that f(x)=e^x is a continuously differentiable function whose derivative is never zero. So by applying the inverse function theorem, we know that there is a local inverse f^{-1}. Furthermore, we know its derivative:
(f^{-1})^\prime(e^a)=\frac{1}{f^\prime(a)}=\frac{1}{e^a}
thus
(f^{-1})^\prime(x)=\frac{1}{x}
In particular, this allows us to calculate the derivative of the logarithm without using any properties of the logarithm!
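If you like, you can also check this numerically without ever touching the logarithm. This sketch (again Python with scipy, my choice; the bracket [-10, 10] just has to contain the answer) inverts e^t by root-finding and compares a finite-difference derivative of that inverse with 1/x:

```python
# Numerical check (scipy is my choice of tool; no logarithm is used anywhere):
# invert f(t) = e^t by root-finding, then compare a central finite-difference
# derivative of that inverse with 1/x.
import numpy as np
from scipy.optimize import brentq

def f_inverse(x):
    # Solve e^t = x for t; the bracket [-10, 10] covers the x-values below.
    return brentq(lambda t: np.exp(t) - x, -10.0, 10.0)

h = 1e-5
for x in [0.5, 1.0, 2.0, 5.0]:
    slope = (f_inverse(x + h) - f_inverse(x - h)) / (2 * h)  # central difference
    print(f"x = {x}: (f^-1)'(x) ≈ {slope:.6f}, 1/x = {1/x:.6f}")
```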