You can probably split that question into two parts.
1. What are the mathematical (linear algebra) properties of the SVD: how it can be used for least-squares fitting and pseudo-inverse matrices, what it means to ignore (i.e. set to zero) some of the singular values, etc.
2. How to calculate the SVD efficiently, especially for large matrices.
A (very short and cryptic) answer to (1) is that the SVD of a matrix ##A## is closely related to the eigenvalues and eigenvectors of the matrices ##A^TA## and ##AA^T##. Books and course notes on numerical methods and/or linear algebra should have the details.
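To unpack that a little: if ##A = U \Sigma V^T## is the SVD, with ##U## and ##V## orthogonal, then
$$A^TA = V(\Sigma^T\Sigma)V^T \quad\text{and}\quad AA^T = U(\Sigma\Sigma^T)U^T,$$
so the eigenvalues of ##A^TA## (and of ##AA^T##) are the squares of the singular values, the columns of ##V## are eigenvectors of ##A^TA##, and the columns of ##U## are eigenvectors of ##AA^T##.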
For (2), you probably don't really want to know. If you want to calculate SVDs, use the routines from a library like LAPACK, or ARPACK for very large problems.
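If you just want to call something, here is a minimal sketch in Python (NumPy's `svd` calls LAPACK under the hood, and SciPy's sparse `svds` uses ARPACK); the matrix here is just a random placeholder:

```python
import numpy as np
from scipy.sparse.linalg import svds

# Placeholder data -- replace with your own matrix
A = np.random.rand(200, 50)

# Dense SVD: numpy.linalg.svd calls LAPACK under the hood
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: A = U * diag(s) * V^T

# Large/sparse problems: ask ARPACK (via SciPy) for only the
# k largest singular triplets instead of the full factorization
Uk, sk, Vtk = svds(A, k=10)
```

For a genuinely large matrix you would build `A` in a sparse format and ask `svds` for only the few singular triplets you actually need.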
IIRC the "Numerical Recipes" book has a reasonable description of (1), and some (state of the art in the 1960s) computer code for (2). Old versions of "NR" are available free here:
http://www.nr.com/oldverswitcher.html
Lanczos methods are more or less the current state of the art for very large eigenproblems, but the difference between the basic math (which is simple enough) and computer code that actually works lies in some subtle details about dealing with rounding errors, etc. The math behind Lanczos dates back to the 1940s or 50s, but it took until the 1980s before anybody figured out how to make it work reliably as a numerical method. (In fact Wilkinson's classic book of the 1960s, "The Algebraic Eigenvalue Problem", "proved" that it was nice in theory but useless in practice!) You most definitely don't want to write your own code for the Lanczos method, unless you do a LOT of research first.
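To make the "use a library" point concrete: ARPACK's implicitly restarted Lanczos is exposed in SciPy as `scipy.sparse.linalg.eigsh`, so getting a few eigenvalues of a large sparse symmetric matrix is a single call. The Laplacian below is just a stand-in for whatever big eigenproblem you actually have:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# A large sparse symmetric matrix (1D Laplacian), standing in for
# whatever big eigenproblem you actually have
n = 100_000
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

# ARPACK's restarted Lanczos iteration, in shift-invert mode around 0,
# returns the 6 eigenvalues closest to zero and their eigenvectors
vals, vecs = eigsh(A, k=6, sigma=0, which="LM")
print(vals)
```

All of the hard parts (reorthogonalization, restarts, convergence tests) are handled inside ARPACK, which is exactly why you call it rather than reimplement it.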