Plane Fitting with Linear Least Squares

SUMMARY

This discussion focuses on fitting a plane through the origin in \(\mathbb{R}^3\) using linear least squares methods. The primary approach involves solving a 3x3 homogeneous linear system \(AX=0\), where matrix \(A\) is constructed from the points' coordinates. The conversation highlights two methods: one using eigenvalues and eigenvectors for solving the system, and another employing Singular Value Decomposition (SVD) with a rectangular \(N \times 4\) matrix. The latter method allows for regression analysis, while the former is more direct for minimizing squared distances.

PREREQUISITES
  • Understanding of linear algebra concepts, specifically eigenvalues and eigenvectors.
  • Familiarity with Singular Value Decomposition (SVD) and its applications.
  • Knowledge of linear regression techniques and matrix operations.
  • Experience with numerical methods for solving linear systems, including LU decomposition.
NEXT STEPS
  • Study the derivation and application of eigenvalues and eigenvectors in linear systems.
  • Learn about Singular Value Decomposition (SVD) and its role in regression analysis.
  • Explore LU decomposition and its advantages and limitations in solving linear equations.
  • Investigate the implementation of linear regression in programming languages, particularly C, as demonstrated in the provided source code.
USEFUL FOR

Data scientists, mathematicians, and engineers interested in computational geometry, regression analysis, and numerical methods for fitting models to multidimensional data.

mnb96
Hello,
I am trying to figure out how to fit a plane passing through the origin in \(\mathbb{R}^3\), given a cloud of N points.
The points are vectors of the form \((x_1^{(k)}, x_2^{(k)}, x_3^{(k)})\), where k stands for the k-th point. I want to minimize the sum of squared point-plane distances.

What I came up with was the following:
- solve a 3x3 homogeneous linear system \(AX=0\) in which the (i,j) element of \(A\) is:

\(\sum_{k=1}^{N} x_i^{(k)} x_j^{(k)}\)

Now I have two questions:

1) I have read somewhere that this turns out to be an eigenvalue problem: basically I need to find the eigenvalues/eigenvectors in order to solve the system. Why?

2) I found another method which instead builds a rectangular Nx4 matrix in which, in the first column there are all the x's of the points, in the second column all the y's, in the third all the z's, and in the fourth all 1's. Then they compute the SVD and extract (I think) the last column of the rightmost output matrix. Why does that work? What's the difference?
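A minimal sketch of both methods in Python/NumPy (the thread's linked source code is in C, and the synthetic point cloud below is an illustrative assumption, not data from the thread). For a plane through the origin, the SVD variant uses the N x 3 point matrix directly; the last right singular vector and the smallest-eigenvalue eigenvector of \(A = P^T P\) agree up to sign:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic cloud of N points near the plane x + 2y + 3z = 0 (through the origin).
n_true = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)
P = rng.normal(size=(100, 3))
P -= np.outer(P @ n_true, n_true)        # project each point onto the plane
P += 0.01 * rng.normal(size=P.shape)     # add small noise

# Method 1: A[i, j] = sum_k x_i^(k) x_j^(k), i.e. A = P^T P.
# The plane normal is the eigenvector of the smallest eigenvalue.
A = P.T @ P
w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
n_eig = V[:, 0]                          # eigenvector for smallest eigenvalue

# Method 2: SVD of the N x 3 point matrix; take the last right singular vector.
_, _, Vt = np.linalg.svd(P)
n_svd = Vt[-1]

# The two normals agree up to sign.
print(abs(n_eig @ n_svd))                # close to 1
```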
 
It sounds like you are describing a multiple linear regression problem:

\[ f_i = a_1 x_i + a_2 y_i + a_3 z_i \]

where \(x_i\), \(y_i\), and \(z_i\) are the point coordinates and \(f_i\) are the observed function values.

To solve this, you can create a matrix Z with the points as rows:

\[ Z = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ \vdots & \vdots & \vdots \\ x_n & y_n & z_n \end{bmatrix} \]

You can also create a vector Y with the observed values:

\[ Y = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix} \]

After creating the Z matrix and Y column vector, the regressed A vector can be found by solving:

\[ Z^T Z A = Z^T Y \]
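The normal-equation solve above can be sketched in NumPy (the data and coefficient values here are made up for illustration; the thread's own implementation is in C):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: observed f_i at points (x_i, y_i, z_i).
Z = rng.normal(size=(50, 3))
a_true = np.array([2.0, -1.0, 0.5])
Y = Z @ a_true + 0.01 * rng.normal(size=50)

# Solve Z^T Z A = Z^T Y for the regression coefficients.
A = np.linalg.solve(Z.T @ Z, Z.T @ Y)
print(A)   # close to a_true
```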

I have a longer post on this, with a derivation of that equation and source code that I released to the public domain, at http://www.trentfguidry.net/post/2009/07/19/Linear-and-Multiple-Linear-Regression-in-C.aspx
 
As for the eigenvalues/eigenvectors, I believe you need those for an SVD decomposition, which is one way of solving the above matrix equation.

Another way to solve the above equation is an LU decomposition. The potential problem with an LU decomposition is that the matrix could be singular. If that occurs, the problem cannot be solved using an LU decomposition and something else, like an SVD decomposition, would have to be used.

Fortunately, there are a lot of problems where singular matrices are a rare occurrence and the simpler and faster LU decomposition approach works. LU decomposition does not require eigenvalues/eigenvectors.
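A toy example (made-up numbers) of the failure mode described above: NumPy's `np.linalg.solve` is LU-based and raises on a singular system, while SVD-based least squares (`np.linalg.lstsq`) still returns a minimum-norm solution:

```python
import numpy as np

# A rank-deficient case: the third column duplicates the first,
# so Z^T Z is singular and an LU-based solve fails.
Z = np.array([[1.0, 2.0, 1.0],
              [3.0, 4.0, 3.0],
              [5.0, 6.0, 5.0],
              [7.0, 8.0, 7.0]])
Y = np.array([1.0, 2.0, 3.0, 4.0])

try:
    A = np.linalg.solve(Z.T @ Z, Z.T @ Y)   # LU-based; raises on singular input
except np.linalg.LinAlgError:
    # SVD-based least squares still returns a (minimum-norm) solution.
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)

print(A)
```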

For part 2, you describe a Z matrix with a column of ones in it. That column adds a constant term that can offset the plane from the origin. In that case, you are regressing the function shown below.

\[ f_i = a_0 + a_1 x_i + a_2 y_i + a_3 z_i \]

If you want to force the plane through the origin, get rid of the column of ones.
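A short NumPy sketch of that difference, with made-up coefficients: including a column of ones recovers the offset \(a_0\), while dropping it forces the fitted plane through the origin:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical data from f = 5 + 2x - y + 0.5z (nonzero offset a0 = 5).
Z = rng.normal(size=(60, 3))
Y = 5.0 + Z @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=60)

# With a column of ones, the regression recovers the offset a0 as well.
Z1 = np.hstack([np.ones((60, 1)), Z])
a_with = np.linalg.solve(Z1.T @ Z1, Z1.T @ Y)    # [a0, a1, a2, a3]

# Dropping the ones column forces the fitted plane through the origin.
a_origin = np.linalg.solve(Z.T @ Z, Z.T @ Y)     # [a1, a2, a3]

print(a_with)     # first entry close to 5
print(a_origin)
```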
 
