If this guy hasn't done any linear algebra, how do you expect him to know about linear maps, let alone analytic geometry? I'll offer a layman's explanation, and hopefully you can build on it when you take linear algebra.
First of all, you need to know that vectors of R^n can be represented as column matrices. For example, the vector (1,-4,3) would look like this in linear algebra:

\begin{pmatrix} 1 \\ -4 \\ 3 \end{pmatrix}
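(If you want to poke at this on a computer, here's that same column vector in Python with NumPy; the library is my choice, not part of the math:)

```python
import numpy as np

# the vector (1, -4, 3) written as a 3x1 column matrix
x = np.array([[1], [-4], [3]])
print(x.shape)   # (3, 1): three rows, one column
```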
"Eigen" is German for "own" or "characteristic". So when you are working with a matrix A, you are trying to find an X and a \lambda such that AX = \lambda X. Here, X is the vector (represented as a column matrix) and \lambda is some number. It should be clear that when multiplying matrices A (n x n) and X (n x 1) you get an n x 1 matrix in return. As you know, multiplying X by a scalar will also give you an n x 1 matrix. So you are trying to find an X, called the eigenvector, so that AX is equivalent to X multiplied by some scalar \lambda, called the eigenvalue. Sometimes more than one X will work for a certain \lambda.
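To make this concrete, here's a quick check in Python with NumPy. The matrix A below is just an example I picked, nothing special about it:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 2]])
x = np.array([[1],
              [1]])   # candidate eigenvector, as a column matrix
lam = 3               # candidate eigenvalue

print(A @ x)     # [[3] [3]]
print(lam * x)   # [[3] [3]] -- the same, so AX = \lambda X holds for this pair
```

Multiplying by the matrix A did the same thing to X as multiplying by the scalar 3.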
To find eigenvectors, you work with AX = \lambda X, which is the same as AX = \lambda IX, which is the same as (A - \lambda I)X = 0. Obviously X = 0 works for this, but by definition an eigenvector can never be the zero vector. The only way to guarantee that some X other than zero exists is to make (A - \lambda I) non-invertible.
Theorem: Suppose BX=0. If B is invertible, then X=0 is the only solution.
Proof: If B is invertible, then there is some B^{-1} such that B^{-1}B = I.
So X = IX = B^{-1}BX = B^{-1}(BX) = B^{-1}0 = 0, because BX = 0.
See? The theorem tells us (A - \lambda I) must not be invertible, otherwise we would only have X = 0 as an eigenvector, which is not allowed! The way you arrange this is to make the determinant zero: there is a theorem in linear algebra that says a matrix is invertible iff its determinant is nonzero. So that's what you are doing when finding eigenvalues: expanding det(A - \lambda I) in terms of \lambda and setting it equal to zero. Only for those values of \lambda is (A - \lambda I) non-invertible, so those are the only ones that admit a nonzero X.
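Here's what that looks like for a small example, done symbolically with SymPy (my choice of tool; in a course you'd expand the determinant by hand):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1],
               [1, 2]])

# expand det(A - lambda*I) in terms of lambda: the characteristic polynomial
charpoly = sp.expand((A - lam * sp.eye(2)).det())
print(charpoly)                 # lambda**2 - 4*lambda + 3
print(sp.solve(charpoly, lam))  # [1, 3] -- the eigenvalues
```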
Once you have your eigenvalues, you find out which Xs work, which is just solving the system (A - \lambda I)X = 0, something you already know how to do. There are infinitely many Xs that work for each eigenvalue (any scalar multiple of an eigenvector is again an eigenvector), so we deal with that by keeping one representative and excluding all its other scalar multiples. So you've found an eigenvector such that AX is the same as \lambda X. Pretty weird, huh? A matrix acting like a scalar towards some X! Each \lambda has its own eigenvectors, and sometimes more than one independent eigenvector is possible.
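A sketch of that step with SymPy, on the same example matrix. nullspace() solves (A - \lambda I)X = 0 and hands back one representative eigenvector per family of scalar multiples:

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [1, 2]])

for lam in [1, 3]:                            # the eigenvalues found above
    vecs = (A - lam * sp.eye(2)).nullspace()  # solutions of (A - lam*I)X = 0
    print(lam, vecs)
# lambda = 1 gives multiples of (-1, 1); lambda = 3 gives multiples of (1, 1)
```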
Eigenvalues and eigenvectors are used in linear algebra to diagonalize matrices, that is, to put them into a nice and cute form. Certain matrices have predictable eigenvalues; others, not so much. You will learn other ways to use them when you take linear algebra.
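A minimal sketch of what diagonalization looks like, once more with SymPy and the same toy matrix (the factorization A = PDP^{-1} is standard; the code is just my illustration):

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [1, 2]])

# P holds the eigenvectors as columns, D the eigenvalues on the diagonal
P, D = A.diagonalize()
print(D)                # Matrix([[1, 0], [0, 3]])
print(P * D * P.inv())  # recovers A, so A = P D P^(-1)
```

For now, I hope this clears things up.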