If all you have at your disposal is software that does least squares regression, you are rather stuck. It can still be helpful to rerun the analysis without the problem data values, if only to see how much the results change without them. If the coefficients change by more than a couple of standard errors from their original values (or worse: change sign, cause a significant relationship to become non-significant, etc.), you have a serious problem.
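Here is a minimal sketch of that check in R (the data are invented purely for illustration, and `suspect` just flags whichever rows worry you):

```r
# Toy data with one planted outlier (purely illustrative).
set.seed(1)
x <- 1:20
y <- 2 + 0.5 * x + rnorm(20)
y[20] <- y[20] + 15                            # plant a gross outlier
dat <- data.frame(x, y)
suspect <- seq_len(nrow(dat)) == 20            # the rows you are worried about

fit_all  <- lm(y ~ x, data = dat)              # fit with every observation
fit_drop <- lm(y ~ x, data = dat[!suspect, ])  # refit without the suspects

# Change in each coefficient, measured in original standard errors;
# magnitudes much beyond 2 signal a serious problem.
(coef(fit_drop) - coef(fit_all)) / coef(summary(fit_all))[, "Std. Error"]
```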
Remember that least squares regression works by finding the estimates that minimize the sum of the squared residuals:
S(\hat\alpha, \hat\beta) = \min_{(a,b) \in \mathbb{R}^2} \sum \left(y_i - (a + b x_i)\right)^2
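As a quick sanity check (using the toy data above), you can watch this definition in action by minimizing the sum of squared residuals numerically; this is only an illustration, not how lm() actually computes the fit:

```r
# Numerically minimize the sum of squared residuals over (a, b);
# the result matches coef(lm(y ~ x)) up to optimizer tolerance.
ssr <- function(p) sum((y - (p[1] + p[2] * x))^2)
optim(c(0, 0), ssr)$par
```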
The problem with squaring the residuals is that the resulting estimates are easily influenced by outliers - just as the sample mean is. The idea behind robust regression is this: replace the operation of squaring with a function that has a similar interpretation (it can still be thought of as a distance function, essentially) but which downplays the role of outliers. One early choice was to use absolute values (think of the median compared to the mean):
S_{L_1}(\hat\alpha, \hat\beta) = \min_{(a,b) \in \mathbb{R}^2} \sum |y_i - (a + b x_i)|
This can be solved via linear programming and other methods, but there is no closed-form expression for the estimates. It provides some protection against outliers in y but not against outliers in x.
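In R, one way to fit this L1 (least absolute deviations) line is the quantreg package: rq() with the default tau = 0.5 fits the median regression, which is exactly the L1 estimate (toy data from the sketch above):

```r
# L1 / least-absolute-deviations regression via quantile regression.
library(quantreg)
fit_l1 <- rq(y ~ x, tau = 0.5, data = dat)   # tau = 0.5 is the median
coef(fit_l1)
```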
The starting point for robust regression is this observation: both least squares and L1 regression can be viewed as minimizing the following sum, a function of the residuals:
\sum \rho\left(y_i - (a + b x_i)\right)
For least squares, \rho(x) = x^2; for L1, \rho(x) = |x|. Huber (and others) had the idea of generalizing this to other choices of the function \rho.
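To make that generalization concrete, here is the same numerical-minimization trick with a pluggable \rho (a rough sketch: it ignores the scale issue mentioned below, and Nelder-Mead copes only approximately with the non-smooth L1 objective):

```r
# Minimize sum(rho(residuals)) for any user-supplied rho.
m_fit <- function(rho) {
  obj <- function(p) sum(rho(y - (p[1] + p[2] * x)))
  optim(c(0, 0), obj)$par
}
m_fit(function(e) e^2)   # rho(x) = x^2 recovers least squares
m_fit(abs)               # rho(x) = |x| gives the L1 fit
```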
The "simplest" case is Huber's function, which is "least squares in the middle and L1 on the ends". To ease typing I will use e_i as an abbreviation for y_i - (a + b x_i):
\rho(e_i) = \begin{cases}
\frac{e_i^2}{2} & \text{if } |e_i| \le k \\
k|e_i| - \frac{1}{2}k^2 & \text{if } |e_i| > k
\end{cases}
The number k is a "tuning constant" - a common choice is 1.345, which gives about 95% efficiency relative to least squares when the errors really are normal, while still bounding the influence of outliers. (Admission: this formulation assumes that the standard deviation of y is known; if not, the scale has to be estimated robustly as well, and things get a little messier.) The solution can be found using weighted least squares.
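Here is a minimal sketch of that weighted least squares solution - iteratively reweighted least squares, with the scale estimated by the MAD at each step (the function and variable names are mine):

```r
# Huber M-estimation by iteratively reweighted least squares (IRLS).
huber_irls <- function(x, y, k = 1.345, tol = 1e-8, maxit = 50) {
  fit <- lm(y ~ x)                     # start from ordinary least squares
  for (i in seq_len(maxit)) {
    e <- resid(fit)
    s <- mad(e)                        # robust scale estimate
    w <- pmin(1, k / (abs(e) / s))     # Huber weights: 1 in the middle, shrinking in the tails
    fit_new <- lm(y ~ x, weights = w)
    if (max(abs(coef(fit_new) - coef(fit))) < tol) break
    fit <- fit_new
  }
  fit
}
coef(huber_irls(dat$x, dat$y))
```

In practice you would just use rlm() from the MASS package, whose default is exactly Huber's \rho with k = 1.345:

```r
library(MASS)
fit_huber <- rlm(y ~ x, data = dat)    # Huber psi with k = 1.345 by default
summary(fit_huber)
```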
Other choices for the \rho function are available, offering different trade-offs between efficiency and robustness.
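For example, Tukey's bisquare is a "redescending" choice that gives gross outliers zero weight entirely; MASS supplies it directly:

```r
# Tukey bisquare rho via rlm (extreme outliers get weight 0).
fit_bisq <- rlm(y ~ x, psi = psi.bisquare, data = dat)
coef(fit_bisq)
```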
There are other robust regression methods as well. Regression based on ranks can be motivated by the following starting point. Here is the least squares objective again, with the square written out as a product:
\sum \left(y_i - (a + b x_i)\right)^2 = \sum \left(y_i - (a + b x_i)\right) \cdot \left(y_i - (a + b x_i)\right)
The idea is to replace one of the factors in the second form by an expression involving a function of the ranks of the residuals:
\sum \phi\left(\text{R}(y_i - (a + b x_i))\right) \left(y_i - (a + b x_i)\right)
Judicious choice of the score function \phi can result in estimates that have reasonable breakdown values and bounded influence. The 'drawback' here is that the intercept cannot be estimated directly; it must be estimated from the residuals (as the median of the residuals, for example).
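The Rfit package (Kloke and McKean) implements this rank-based approach in R; by default it uses Wilcoxon scores and, as just described, estimates the intercept separately from the residuals:

```r
# Rank-based regression with Wilcoxon scores.
library(Rfit)
fit_rank <- rfit(y ~ x, data = dat)
summary(fit_rank)
```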
All the robust procedures I've mentioned give estimates that are asymptotically normal in distribution, which means (from a practical point of view) that confidence intervals and other inferences can be obtained in the familiar way.
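For example, approximate 95% intervals for the Huber fit above can be built directly from the reported standard errors (Wald-type intervals based on that asymptotic normality):

```r
# Wald-type 95% confidence intervals from the asymptotic normal theory.
est <- coef(summary(fit_huber))        # columns: Value, Std. Error, t value
cbind(lower = est[, "Value"] - qnorm(0.975) * est[, "Std. Error"],
      upper = est[, "Value"] + qnorm(0.975) * est[, "Std. Error"])
```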
The down side? I don't know of any add-ins for Excel that will do any of these methods. There may be some, but since I don't use Excel in my teaching or other work, I wouldn't have come across them.
There is a free statistics package, R, that has packages for robust regression (MASS, quantreg, and Rfit, used in the sketches above, are just a few). Don't let the word "free" lead you to think that this is a shabby program: it is very powerful, and it runs on Windows, OS X, and Linux. You can find more information here:
http://cran.r-project.org/
I hope some (most?) of this is the type of response you wanted.