Say I have a set of points in 2D space. How would I find the line that maximizes the sum of the orthogonal projections of the points onto it? The line does not have to go through the origin.
Hi mfb, I think you're right about taking the derivative of the minimization function. However, that's where I'm stuck. The minimization function has three parameters for 2D space: theta0, theta1, and theta2, no? How do I take the derivative of a function with three variables?

@chiro: The problem is two-dimensional anyway.
@googleadbot: The video gives a formula for the quality of the separation. You can parameterize your line and express the quality as a function of those two parameters. Then you have to minimize/maximize it: find points where both derivatives (in the line parameters) are 0.
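As a concrete sketch of this suggestion (under an assumed parameterization, not necessarily the one in the video): write the line as {x : n·x = c} with unit normal n = (cos φ, sin φ), so the quality function depends on only two parameters, φ and c. Setting the derivative in c to zero gives c = n·centroid, leaving a single parameter to search over:

```python
import numpy as np

# Assumed parameterization: the line is {x : n . x = c}, with unit
# normal n = (cos phi, sin phi). The quality function used here is the
# sum of squared orthogonal distances of the points to the line.
def orthogonal_distance_sum(points, phi, c):
    n = np.array([np.cos(phi), np.sin(phi)])
    return np.sum((points @ n - c) ** 2)

# Setting the derivative with respect to c to zero gives c = n . centroid,
# so only phi remains; here we scan it numerically instead of solving the
# derivative condition for phi in closed form.
points = np.array([[0.0, 0.0], [1.0, 1.1], [2.0, 1.9], [3.0, 3.05]])
centroid = points.mean(axis=0)

phis = np.linspace(0.0, np.pi, 2000)
costs = [orthogonal_distance_sum(
             points, phi,
             np.dot([np.cos(phi), np.sin(phi)], centroid))
         for phi in phis]
best_phi = phis[np.argmin(costs)]  # angle of the best-fit line's normal
```

For the nearly collinear example points above (close to the line y = x), the minimizing normal angle comes out near 3π/4, i.e. perpendicular to the direction (1, 1), as expected.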
I think I can see how to parameterize my line, but what matters, according to the video, is the vector theta that is normal to the line. Doesn't the line have an infinite number of normal vectors corresponding to it? If I want to parameterize the normal vector theta, then as far as I can tell I would need at least four parameters: the vector requires both a direction and a magnitude (two parameters), plus the x/y coordinates of its base point. With that many parameters, I don't see how to find a direct solution. I think this problem might require quadratic programming (that's what SVM uses to find the boundary line); the problem is that I'm not sure how it works, and the code for SVMlight or LIBSVM is pretty hard to follow.
"Doesn't the line have an infinite number of normal vectors corresponding to it?"
They just differ in their length, not in their direction (up to the sign, and in 2D).
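A minimal sketch of this point: since only the normal's direction matters, a unit normal (one angle) plus an offset pins down the line, so two parameters suffice in 2D. No quadratic programming is needed for the least-squares version of the problem; the line minimizing the summed squared orthogonal distances passes through the centroid along the leading eigenvector of the covariance matrix (the PCA line), and the other eigenvector is its unit normal:

```python
import numpy as np

# Closed-form best-fit line (in the orthogonal least-squares sense):
# through the centroid, along the covariance matrix's leading eigenvector.
points = np.array([[0.0, 0.2], [1.0, 0.9], [2.0, 2.1], [3.0, 2.8]])
centroid = points.mean(axis=0)
cov = np.cov((points - centroid).T)

eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
normal = eigvecs[:, 0]       # smallest eigenvalue: the line's unit normal
direction = eigvecs[:, 1]    # largest eigenvalue: direction along the line
offset = normal @ centroid   # the line is {x : normal . x = offset}
```

The example points lie close to a line of slope roughly 0.9, so the recovered direction is close to (1, 1)/sqrt(2); the eigendecomposition automatically returns unit-length vectors, illustrating why the normal's magnitude never needs its own parameter.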