Finding a, b so that y= ax+ b gives the "least squares" best fit to the set of points (x_i, y_i) means "solving" the equations y_i= ax_i+ b for all i. That is equivalent to the matrix equation Ax= y, or
\begin{bmatrix}x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1\end{bmatrix}\begin{bmatrix} a \\ b\end{bmatrix}= \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n\end{bmatrix}
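For example, with the three (made up) points (1, 2), (2, 3), (4, 6) the system would read
\begin{bmatrix}1 & 1 \\ 2 & 1 \\ 4 & 1\end{bmatrix}\begin{bmatrix} a \\ b\end{bmatrix}= \begin{bmatrix} 2 \\ 3 \\ 6\end{bmatrix}
three equations in only two unknowns, which is why an exact solution usually does not exist.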
Now A is a linear transformation from R^2 to R^n, so its image is an (at most) two dimensional subspace of R^n; it is exactly two dimensional as long as the x_i are not all equal. If the vector
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n\end{bmatrix}
happens to lie in that subspace, that is, if the points happen to lie on a straight line, then there is an exact solution. If not, you are looking for the (a, b) that comes closest. Geometrically, you can find that solution vector by dropping a perpendicular from y to that two dimensional subspace. If \overline{x} gives that "optimal" solution, that is, if A\overline{x} is closest to y, then A\overline{x}- y is perpendicular to that two dimensional subspace, the space of all vectors Ax. That means that, for every x in R^2, the inner product <Ax, A\overline{x}- y>= 0.
Now the "adjoint" of A, A^+, has the property that <Ax, y>= <x, A^+y> (for a real matrix, A^+ is just the transpose A^T), so here we have <Ax, A\overline{x}- y>= <x, A^+(A\overline{x}- y)>= <x, A^+A\overline{x}- A^+y>= 0. But since that is true for all x in R^2, we must have A^+A\overline{x}- A^+y= 0, or A^+A\overline{x}= A^+y.
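To see why the adjoint of a real matrix is its transpose, just write the inner product as a matrix product:
<Ax, y>= (Ax)^Ty= x^TA^Ty= <x, A^Ty>.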
Now, provided at least two of the x_i are distinct, the 2 by 2 matrix A^+A is invertible, and we can solve for \overline{x} by multiplying both sides by (A^+A)^{-1}:
\overline{x}= (A^+A)^{-1}A^+y. The matrix (A^+A)^{-1}A^+ is the "pseudoinverse" you refer to.
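As a quick numerical sketch of that formula (the data in xs and ys below are made up, purely for illustration), you can solve the normal equations directly and check the result against numpy's own least squares routine:

import numpy as np

# Hypothetical data points (x_i, y_i); any sample would do.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Build A: a column of the x_i next to a column of ones.
A = np.column_stack([xs, np.ones_like(xs)])

# Solve the normal equations (A^T A) xbar = A^T y for xbar = (a, b).
a, b = np.linalg.solve(A.T @ A, A.T @ ys)

# Cross-check against numpy's built-in least squares solver.
(a_chk, b_chk), *_ = np.linalg.lstsq(A, ys, rcond=None)
assert np.allclose([a, b], [a_chk, b_chk])
print(a, b)  # slope and intercept of the best fit line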
In our problem, as before,
A= \begin{bmatrix}x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1\end{bmatrix}
and
A^+= \begin{bmatrix}x_1 & x_2 & \cdots & x_n \\ 1 & 1 & \cdots & 1\end{bmatrix}
So that
A^+A= \begin{bmatrix}\sum_{i=1}^n x_i^2 & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & n\end{bmatrix}
and
A^+y= \begin{bmatrix}\sum_{i=1}^n x_iy_i \\ \sum_{i=1}^n y_i\end{bmatrix}
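Continuing the numerical sketch from above, those entries are easy to verify:

# The entries of A^T A and A^T y are exactly the sums above.
S_x, S_xx = xs.sum(), (xs**2).sum()
S_y, S_xy = ys.sum(), (xs*ys).sum()
n = len(xs)
assert np.allclose(A.T @ A, [[S_xx, S_x], [S_x, n]])
assert np.allclose(A.T @ ys, [S_xy, S_y])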
So you want to solve the matrix equation
\begin{bmatrix}\sum_{i=1}^n x_i^2 & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & n\end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= \begin{bmatrix}\sum_{i=1}^n x_iy_i \\ \sum_{i=1}^n y_i\end{bmatrix}
That should be easy: it is just a 2 by 2 system, and the inverse of
\begin{bmatrix}A & B \\ C & D\end{bmatrix}
is
\frac{1}{AD- BC}\begin{bmatrix}D & -B \\ -C & A\end{bmatrix}
provided AD- BC is not zero.
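Carrying that last step out (here A= \sum x_i^2, B= C= \sum x_i, D= n) gives the familiar closed form for the least squares line:

a= \frac{n\sum_{i=1}^n x_iy_i- \left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n y_i\right)}{n\sum_{i=1}^n x_i^2- \left(\sum_{i=1}^n x_i\right)^2},\qquad b= \frac{\left(\sum_{i=1}^n x_i^2\right)\left(\sum_{i=1}^n y_i\right)- \left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n x_iy_i\right)}{n\sum_{i=1}^n x_i^2- \left(\sum_{i=1}^n x_i\right)^2}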