The adjoint operator A* of a linear transformation A from one inner-product space, U, to another, V, is the operator from V back to U such that <Au, v> = <u, A*v> for all u in U and v in V. Since it goes from V back to U, it is a little like an inverse function.
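For real matrices with the standard dot product, the adjoint is just the transpose. Here is a minimal numerical check of the defining identity (a sketch using NumPy; the particular matrix and vectors are arbitrary examples):

```python
import numpy as np

# A maps U = R^3 to V = R^2; with the standard dot product its adjoint is A.T
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
u = np.array([1.0, -2.0, 0.5])   # a vector in U
v = np.array([3.0, 1.0])         # a vector in V

# Defining property of the adjoint: <Au, v> = <u, A*v>
lhs = np.dot(A @ u, v)
rhs = np.dot(u, A.T @ v)
print(lhs, rhs)                   # both print -9.5: the two inner products agree
```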
We can make that more precise. Suppose we want to "solve" Ax = y, where y is some vector in V. If A is not invertible, there may be no such x (and if there is one, there may be infinitely many). A maps U onto some subspace of V (its range), and if y does not happen to lie in that subspace, there is no x such that Ax = y. But we can then look for the x that makes Ax closest to y.

We find that x by dropping a perpendicular from y to the subspace. That is, we want x such that y - Ax is perpendicular to the range of A: we want <Au, y - Ax> = 0 for every u in U. Using the definition of the adjoint, we can rewrite that as <u, A*(y - Ax)> = 0. The difference is that this inner product is now taken in U, and u can be any vector in U. The only vector in a vector space that is orthogonal to every vector in that space (including itself) is the 0 vector, so we must have A*y - A*Ax = 0, which we can rewrite as A*Ax = A*y (the "normal equations"). Now, even when A is not invertible, A*A often is (it is invertible exactly when A has trivial kernel), and then x = (A*A)^{-1}A*y is the vector in U such that Ax is closest to y. (A*A)^{-1}A* is often called the "generalized inverse" of A. (When A*A is invertible, this agrees with the "Moore-Penrose" pseudoinverse, which is defined even more generally.)
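Here is a small numerical illustration (a sketch using NumPy; the particular matrix and vector are made up for the example) of solving the normal equations and comparing with NumPy's built-in least-squares and pseudoinverse routines:

```python
import numpy as np

# A "tall" matrix: A maps R^2 into R^3, so its range is a plane in R^3
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 0.0, 2.0])    # generally not in the range of A

# Normal equations: A*Ax = A*y  (for real matrices, A* is the transpose A.T)
x = np.linalg.solve(A.T @ A, A.T @ y)

# The same x via the generalized (Moore-Penrose) inverse and via lstsq
x_pinv = np.linalg.pinv(A) @ y
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

print(x, x_pinv, x_lstsq)        # all three agree: [0.5, 0.5]

# y - Ax is orthogonal to the range of A, i.e. A*(y - Ax) = 0
print(A.T @ (y - A @ x))         # numerically zero
```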
Of course, if A is a linear transformation from a vector space U to U itself, it might happen that A is its own adjoint. Such "self adjoint" operators are very important. For example, one can show that the eigenvalues of a self adjoint operator, even on a vector space over the complex numbers, are real numbers. Further, on a finite-dimensional space there always exist enough independent eigenvectors of a self adjoint operator to form a basis for the vector space (in fact the eigenvectors can be chosen orthonormal). That means self adjoint operators are "diagonalizable".
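Both claims are easy to check numerically for a small complex self adjoint (Hermitian) matrix (again a sketch using NumPy; the matrix is an arbitrary example):

```python
import numpy as np

# A self adjoint (Hermitian) operator on C^2: A equals its conjugate transpose
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

# eigh is NumPy's eigensolver for Hermitian matrices
eigenvalues, V = np.linalg.eigh(A)

print(eigenvalues)                 # real numbers, even though A is complex
# The eigenvector columns of V form an orthonormal basis ...
print(np.allclose(V.conj().T @ V, np.eye(2)))
# ... and in that basis A is diagonal: V* A V = diag(eigenvalues)
print(np.allclose(V.conj().T @ A @ V, np.diag(eigenvalues)))
```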