As you may have learned in introductory linear algebra, the objects of a vector space need not be geometric vectors; you may have already seen spaces of polynomials, for instance. So it should come as no surprise that a space of functions can also serve as a vector space.
You can also define operators that act on functions, transforming them into other functions, just as a matrix transforms a vector into another vector. Examples of such operators are the derivative, the antiderivative, and the Fourier transform.
Knowing this, you probably already have an intuitive idea of what eigenfunctions are (they are analogous to eigenvectors), but let's go over it:
An eigenfunction of an operator $T: V \to V$ (where $V$ is your function space) is a nonzero function $f$ satisfying $Tf = \lambda f$ for some scalar $\lambda$; $\lambda$ is called the eigenvalue associated with $f$.
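For a concrete example (the standard textbook one, not anything specific to this question): take $V$ to be the space of smooth functions on $\mathbb{R}$ and $T = \frac{d}{dx}$. Then for any scalar $\lambda$,
$$ T\, e^{\lambda x} = \frac{d}{dx} e^{\lambda x} = \lambda\, e^{\lambda x}, $$
so $e^{\lambda x}$ is an eigenfunction of the derivative operator with eigenvalue $\lambda$.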
The process of finding an operator's eigenvalues and eigenfunctions is called diagonalization (the term is borrowed from the diagonalization of matrices).
Diagonalization is a very useful tool in the solution of partial differential equations, where you express the functions of the problem as series of eigenfunctions (of course, you first have to specify the operator you are diagonalizing), which simplifies the problem.
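As a brief illustration (using the standard heat-equation setup, which is my choice of example): for $u_t = u_{xx}$ on $[0,\pi]$ with $u(0,t) = u(\pi,t) = 0$, the operator $-\frac{d^2}{dx^2}$ with these boundary conditions has eigenfunctions $\sin(nx)$ with eigenvalues $n^2$, so the solution can be expanded as
$$ u(x,t) = \sum_{n=1}^{\infty} b_n\, e^{-n^2 t} \sin(nx), $$
and the PDE reduces to finding the coefficients $b_n$ from the initial condition (a Fourier sine series).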
For me, the most notable field in which diagonalization (of operators on function spaces) is heavily used is quantum mechanics, especially when the operator in question is the Hamiltonian. Solving a quantum mechanics problem usually amounts to finding the eigenvalues and eigenfunctions (often called eigenstates) of the Hamiltonian. The eigenvalues of the Hamiltonian are the allowed energies of the system, while the eigenstates, or some combination of them, are associated with the probability distributions describing the system.
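To make this concrete with the simplest standard example (the infinite square well, again my choice of illustration): the time-independent Schrödinger equation is the eigenvalue problem
$$ \hat{H}\psi = E\psi, \qquad \hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x). $$
For a particle confined to $0 \le x \le L$ (with $V = 0$ inside and $\psi(0) = \psi(L) = 0$), diagonalizing $\hat{H}$ gives the eigenstates and allowed energies
$$ \psi_n(x) = \sqrt{\tfrac{2}{L}}\, \sin\!\left(\tfrac{n\pi x}{L}\right), \qquad E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, \dots $$
and $|\psi_n(x)|^2$ is the probability density for finding the particle at position $x$.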