Proving Inverse Function: f(x) = g^{-1}(x)

ritwik06

Homework Statement


The problem is to prove that:
if
##f(g(x)) = x## ... (1)
and
##g(f(x)) = x## ... (2)

then ##f(x) = g^{-1}(x)##.
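For concreteness (an illustrative pair, not part of the original problem), ##f(x) = e^x## and ##g(x) = \ln x## satisfy (1) and (2) on their natural domains:
$$f(g(x)) = e^{\ln x} = x \ \text{ for } x > 0, \qquad g(f(x)) = \ln(e^x) = x \ \text{ for all real } x,$$
and indeed ##f = g^{-1}## here.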


The Attempt at a Solution


Differentiating (1) with respect to x:
##f'(g(x)) \cdot g'(x) = 1##
##f'(g(x)) = \frac{1}{g'(x)}##

Since the slopes are reciprocals of each other, ##f(x) = g^{-1}(x)##.
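As a sanity check of the chain-rule step (using the same illustrative pair ##f(x) = e^x##, ##g(x) = \ln x##, which is not part of the original statement):
$$f'(g(x)) \cdot g'(x) = e^{\ln x} \cdot \frac{1}{x} = x \cdot \frac{1}{x} = 1 \quad (x > 0),$$
so ##f'(g(x)) = 1/g'(x)## does hold in this case.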

Is this as simple as it seems? Are there any possible exceptions?
Is there a more explanatory proof?
Please help me.
 
What you're trying to prove is usually what is taken to be the definition of the inverse function!

What definition are you using then?
 
quasar987 said:
What you're trying to prove is usually what is taken to be the definition of the inverse function!

What definition are you using then?

The definition that I use is that the inverse function is the one which returns the input (as its output) when provided with the output of the original function.

So, have I proved it right? Are there no exceptions to it?
 
ritwik06 said:
The definition that I use is that the inverse function is the one which returns the input (as its output) when provided with the output of the original function.

This is not the definition of the inverse (it is the definition of the left inverse). Surely you have the definition of the inverse in mathematical symbols somewhere in your book or class notes?
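To see why the distinction matters, here is a sketch of a standard example (not from the thread): take ##f:\mathbb{N}\to\mathbb{Z}## with ##f(n) = n## and ##g:\mathbb{Z}\to\mathbb{N}## with ##g(m) = |m|##. Then
$$g(f(n)) = |n| = n \ \text{ for every } n \in \mathbb{N}, \qquad \text{but} \qquad f(g(-1)) = 1 \neq -1,$$
so ##g## is a left inverse of ##f## without being its inverse; requiring both (1) and (2) rules this out.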
 
ritwik06 said:
The definition that I use is that the inverse function is the one which returns the input (as its output) when provided with the output of the original function.

So, have I proved it right? Are there no exceptions to it?

I agree with quasar that you might want to review your notes or ask your professor for the definition he or she is using.

However, it seems you're not far off the mark. One suitable definition might be: for functions f:A->B and g:B->A, if f(a) = b exactly when g(b) = a (for each a in A and b in B), then g is the inverse of f.
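A sketch of this in symbols (assuming, as above, that ##f:A\to B## and ##g:B\to A## are defined on all of A and B):
$$g = f^{-1} \iff g(f(a)) = a \ \text{ for every } a \in A \ \text{ and } \ f(g(b)) = b \ \text{ for every } b \in B,$$
which is exactly the pair of conditions (1) and (2) in the original post.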
 