
Statistics - linearity and best-fit in 3 dimensions

  1. May 12, 2012 #1
    Howdy folks

    I've gotten a number of answers to this in various fora, some contradictory.

    I need to do 3 things to a set of datapoints in 3-space (x, y, and z real values).

    1) Test for linearity (Pearson's r?).

    2) If passed, find the line of best fit (SSE?).

    3) See if the line of best fit is nearly parallel to any of the axes (slope < whatever or > whatever?).

    Some have said that I can simply do Pearson's r twice, but others have disagreed without providing a counterargument.

    If possible, please give equations or expressions.

    Many thanks in advance for any assistance

    Joe
     
  3. May 12, 2012 #2

    Stephen Tashi

    Science Advisor

    You could find the line of best fit first, then transform coordinates so that the line lies in a plane, and use Pearson's r to test for linearity in the transformed coordinate system by projecting the data points onto that plane in various ways.

    Saying that you apply Pearson's r to 3-dimensional data is ambiguous until you specify exactly how that would be done. Are you talking about something like dropping the y values and applying it only to the (z, x) data?
     
  4. May 12, 2012 #3
    The proposed solution was to use Pearson's r twice, once for xz and once for xy. If that's what you meant by dropping one of the variables, then yes, you are correct. However, some pretty advanced people said this would not work. They did not say why.
     
  5. May 12, 2012 #4

    Stephen Tashi

    Science Advisor

    Let's clarify whether you are trying to do "linear regression" on the data or whether you are fitting a "line" to it. The usual kind of linear regression fits a plane to 3D data.
     
  6. May 12, 2012 #5
    I am trying to fit a line, not a plane.
     
  7. May 12, 2012 #6

    Stephen Tashi

    Science Advisor

    Let me make sure I understand that!

    Can I assume one variable is to be predicted from the other two?

    Let's say we are trying to predict z. An equation of the form z = Ax + By + C defines a plane. If you are trying to use a line to predict z then it would have the form z = Aw + C where w is some variable. What variable did you have in mind for w?
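    To make the distinction concrete, here is a minimal sketch, assuming NumPy is available (the data values are purely illustrative), of the "usual" regression that fits a plane z = Ax + By + C to 3D points:

```python
import numpy as np

# Illustrative (x, y, z) data; replace with real measurements.
pts = np.array([[0.0, 0.0, 1.1],
                [1.0, 0.5, 2.0],
                [2.0, 1.1, 3.2],
                [3.0, 1.4, 4.1]])
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

# Ordinary least squares for the plane z = A*x + B*y + C:
# the design matrix columns are [x, y, 1]; solve for (A, B, C).
design = np.column_stack([x, y, np.ones_like(x)])
(A, B, C), *_ = np.linalg.lstsq(design, z, rcond=None)
print("plane: z = %.3f*x + %.3f*y + %.3f" % (A, B, C))
```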
     
  8. May 12, 2012 #7
    Well, to be honest, I'm not trying to predict anything. Rather, I need to see if these datapoints fall on a line, for gesture detection. That said, I _think_ anything that predicts z would work.
     
  9. May 12, 2012 #8
    Wouldn't the intersection of 2 planes be a line? Therefore wouldn't two linear regressions work, as I was told?
     
  10. May 12, 2012 #9

    Stephen Tashi

    Science Advisor

    Two planes can intersect to form a line. However, z = Ax + B defines a line in the zx plane and z = Cy + D defines a line in the zy plane. The zx plane and the zy plane intersect at the z-axis, which is probably not what you had in mind. So you need to explain what you mean by "two linear regressions".

    Are you talking about predicting y as y = Ax + B and then predicting z as z = Cy + D and then assuming y can be replaced by Ax + B, so the prediction becomes z = C(Ax + B) + D ? If so, I can see why some people might say to do two Pearson's R tests. Whether you ought to use this approach is unclear.
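    For what it's worth, a minimal sketch of that two-step reading, assuming NumPy and SciPy (the data and variable names are illustrative, and this is only one possible interpretation of "two Pearson's R tests", not a recommendation):

```python
import numpy as np
from scipy import stats

# Illustrative time-ordered joint positions; replace with real data.
x = np.array([0.0, 1.0, 2.1, 2.9, 4.0])
y = np.array([0.1, 0.9, 2.0, 3.1, 3.9])
z = np.array([0.2, 1.1, 1.9, 3.0, 4.1])

# Two Pearson's r tests: is y roughly linear in x, and z roughly linear in y?
r_xy, _ = stats.pearsonr(x, y)
r_yz, _ = stats.pearsonr(y, z)

# Two simple regressions: y = A*x + B and z = C*y + D
# (np.polyfit with degree 1 returns slope, intercept).
A, B = np.polyfit(x, y, 1)
C, D = np.polyfit(y, z, 1)

# Composing them gives z = C*(A*x + B) + D, i.e. a line parameterized by x.
print("r(x,y) = %.3f, r(y,z) = %.3f" % (r_xy, r_yz))
print("z = %.3f*(%.3f*x + %.3f) + %.3f" % (C, A, B, D))
```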


    Applying statistics to real-world problems is a subjective matter. In the type of statistics you want to use ("frequentist"), methods are selected according to people's empirical experience, tastes and traditions. If you are writing a thesis or journal article, the best thing to do is to consult the people who will evaluate your work about what statistical methods may be used. If you can't do that, look at examples of publications they have approved.

    If you are trying to solve a real world problem, I think you'll get the best advice by describing the problem, not by merely attempting to abstract the mathematical details yourself.

    Edit: Sorry, I only read your last post and didn't read the one before. You did describe a real-world problem. Do (x, y, z) describe the positions of the thing that makes the gesture? Is this data ordered in time?
     
    Last edited: May 12, 2012
  11. May 12, 2012 #10
    Yes, the (x, y, z) points are ordered in time. I need to determine if they fall on a line to initiate gesture detection.
     
  12. May 12, 2012 #11

    Stephen Tashi

    Science Advisor

    OK, regarding your original question:

    Do those 3 things really define your goal? That is, are you determined to approach the problem that way?

    Or is the bottom-line description of your goal something like: I want to detect whether the path of an object in space makes a gesture indicating a direction, and if it does, then I want to determine that direction.

    Questions such as whether you can use Pearson's r test twice in this problem are going to be empirical questions, not questions that have definite, mathematically provable answers - unless you are the type of person who is willing to supply enough "givens" for mathematics to work with. You could supply the "givens" by specifying a detailed probability model for how the data is produced. For one reason or another, most people with real-world problems don't do that.

    The advice about consulting the evaluators still holds if this work is to be written up as an article or a thesis or "defended" in some manner. (For example, there are many papers written on gesture detection. Evaluators would compare how you did it to the methods in such papers. If you are writing a computer program under a software contract, you may be asked to show that you exercised "due diligence" in consulting such literature.)

    If you are doing this work just for your own purposes, you can obviously try any method you want. I wouldn't say that doing two Pearson's r tests is patently absurd as an empirical approach. However, I wouldn't guarantee that it would work well either.

    There is a type of regression called "total least squares" regression. For example, if you assume the data is generated by some random displacement from a line, express the line in parametric form as (x, y, z) = (As + B, Cs + D, Es + F), where s is an arbitrary parameter, and find the values of the constants that minimize the sum of the squares of the distances from each data point to the line. As I recall, that algorithm is more involved than doing two linear regressions. The variable s need not be time. For example, you could use x for s.
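    A minimal sketch of such a total least squares line fit, assuming NumPy (the function name and data are illustrative): the line through the centroid along the first principal direction minimizes the sum of squared perpendicular distances, and the residuals give a direct measure of how straight the path is.

```python
import numpy as np

def fit_line_3d(pts):
    """Orthogonal (total least squares) line fit to an (N, 3) array of points.

    Returns (centroid, direction, rms_residual): the fitted line is
    p(s) = centroid + s * direction, and rms_residual is the root mean
    square perpendicular distance of the points from that line.
    """
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered data: the first right singular vector is the
    # direction that minimizes the sum of squared perpendicular distances.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    # Perpendicular residuals: subtract each point's projection onto the line.
    offsets = pts - centroid
    along = offsets @ direction
    perp = offsets - np.outer(along, direction)
    rms = np.sqrt((perp ** 2).sum(axis=1).mean())
    return centroid, direction, rms

# Illustrative usage: points roughly along a line.
pts = [[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, -0.1, 0.1], [3.0, 0.0, -0.1]]
c, d, rms = fit_line_3d(pts)
print("centroid", c, "direction", d, "rms residual %.3f" % rms)
```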

    If I think of how human beings gesture, they might extend and retract their arm several times to indicate a direction. Is this back-and-forth kind of gesture among those you are trying to detect?
     
  13. May 12, 2012 #12
    Re back and forth: no, quite the contrary. I have a package that detects an arbitrary gesture in 2-space (and can learn new gestures) and a package that tracks joints of the human body in 3-space. However, the function of the gesture detection package is predicated on an "init event" (originally mousedown) and a "stop event" (mouseup). In my environment I need to build an init event from scratch. The simplest solution I could come up with is to detect individual motions parallel to the axes (over a limited set of datapoints), then use the start and end points to determine direction.
     
  14. May 12, 2012 #13

    Stephen Tashi

    Science Advisor

    I don't know how you define a "gesture". You say you are using a software package to track the motions of human joints. Are you tracking the motion of knee joints or other joints that I don't usually associate with making a "gesture" in the common meaning of that word? Or are you only tracking the motion of one joint, such as the wrist joint or an index finger joint?

    Let me see if I understand the purpose of the algorithm you want to write.
    The input is a record of 3D (x, y, z, t) data for a single joint. The algorithm detects when it makes a linear motion. When the motion occurs, the algorithm determines "init" and "stop" times of the (x, y, z) data and sends this as input to the 2D gesture detection software. But won't you have to project the (x, y, z) data onto 2D in order to use that software?
     
  15. May 12, 2012 #14
    Nope. Between init and stop the user knows to make a 2D gesture, and the z data is discarded for those purposes. Re joint types, the software doesn't make that distinction - gesture handling is the same for all joints. I can inspect the output to see which joint made the gesture, but the gesture handler is common.
     
  16. May 12, 2012 #15

    Stephen Tashi

    Science Advisor

    You avoided explaining what a gesture is! Is it any linear movement of a joint?

    It would be convenient if you would give a precise and complete description of the problem. What is the final goal of this process? How do you determine how well your procedure works?

    For now, let me guess. I'll guess that in terms of a video, the z direction is the direction the camera is looking. Suppose we are tracking a wrist joint and the person swings his outstretched arm in a circle. From some points of view, his wrist swings in an arc, but from other special points of view, his wrist would move in a line. So you want to apply some algorithm to the estimated 3D positions of a joint to determine if a set of 3D positions are approximately a line (as opposed to non-linear motion that just happens to project to a line in (x, y)) before you send the 2D data to the gesture detection software.
     
  17. May 12, 2012 #16
    A gesture is an arbitrary set of time-tagged data on the xy plane (x, y, t). An init or stop is a dataset consisting of a straight (for some value of straight) line parallel to any axis on any plane (x, y, z, t). It would be nice if I could tell one init or stop from another, by selecting which axis it is parallel to and whether it's coming or going, but I'll be happy enough with detecting a "straight" line parallel to the xz axis.

    Your understanding of the coordinate system is on the money, and so is your general analysis of the problem. The final goal is to initiate the gathering of data for 2D gesture detection by detecting outward horizontal or vertical linear motion (away from the origin) along one of the 6 axes, and to perform gesture detection on the collected data when triggered by inward horizontal or vertical linear motion (towards the origin). If I were drawing on a screen with a mouse, outward linear motion would be a mousedown (press the mouse button) and inward would be a mouseup (release the mouse button).

    It's nice to see somebody actually caring about the problem as opposed to the math. Thanks again for all your help.

    Joe

    Sorry if this feels like pulling teeth; it's difficult to separate my knowledge of the system from common factors of a 3D space.
     
    Last edited: May 12, 2012
  18. May 13, 2012 #17

    Stephen Tashi

    Science Advisor

    You've mentioned looking for motion parallel to "any axis". Let's clarify that. Assume z is "into the picture", x is "up" and y is "horizontal". (I count this as 3 axes, not six, since I don't count the negative x axis as different from the positive x axis.) Suppose a joint makes a straight-line motion in the xy plane at a 45 degree angle to the positive y axis. You could say that motion is parallel to a tilted axis, but it isn't parallel to the x or y axis. Are you interested in that type of linear motion?

    You also mention motion being toward or away from "the origin". Suppose the video shows a person standing with their feet at the origin and the motion data is for the motion of their wrist as they make various gestures like football referees do. Are you looking for motions that go straight toward or away from their feet? Or does "the origin" mean something different from the origin of the coordinate system?
     
  19. May 13, 2012 #18
    No, I am not interested in motion parallel to a tilted axis. Your understanding of my use of origin is also, I think, correct - in fact, one operating mode of the gesture-tracking package specifically locks the coordinate system to the body centroid and calculates all distances from there. So, to be absolutely accurate, the origin should not be at the feet but around waist level. However, a better solution would set the origin from the initial joint position - that is, for any joint the origin is set to its first detected position in the world coordinate system. This is again wishful thinking - if it overcomplicates the problem, by all means pick a set origin around waist/navel level and go from there.
     
    Last edited: May 13, 2012
  20. May 13, 2012 #19

    Stephen Tashi

    Science Advisor

    I'm having a hard time visualizing any practical problem where one would only care about motion parallel to one of the coordinate axes!

    In an earlier post, you said you would "use the start and end points to determine direction".
    Is the direction determined by the start and end points always a direction that is parallel to one of the coordinate axes?
     
  21. May 13, 2012 #20
    Well, after we have tested to see that they are indeed parallel, then yes. In that post, by "direction" I meant towards or away from the origin. Sorry if that was unclear.

    Re application, think of a mouse or a keyboard. A "click" or "keystroke" only "moves" in one direction. Think of the 6 possible motions (3 axes, 2 directions) as 6 keystrokes. As I said, the 2D recognizer requires an init event to begin gathering data for recognition. The parallel motions will serve as init events.
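    A minimal sketch of how those six "keystrokes" might be detected, assuming NumPy (the function name, thresholds, and data are all illustrative and would need tuning on real data): fit the dominant direction of a window of points, check that the motion is straight enough and closely aligned with a coordinate axis, and report which signed axis it follows.

```python
import numpy as np

def classify_axis_motion(pts, straight_tol=0.1, align_tol=0.9):
    """Return '+x', '-x', '+y', '-y', '+z', '-z', or None for an (N, 3) window.

    straight_tol: max allowed ratio of perpendicular spread to spread along the line.
    align_tol: min |cosine| between the motion direction and a coordinate axis.
    Both thresholds are illustrative only.
    """
    pts = np.asarray(pts, dtype=float)
    offsets = pts - pts.mean(axis=0)
    _, s, vt = np.linalg.svd(offsets)
    direction = vt[0]
    # Orient the direction from the first point toward the last point.
    if np.dot(pts[-1] - pts[0], direction) < 0:
        direction = -direction
    # Straightness: spread perpendicular to the line vs spread along it.
    if s[0] == 0 or np.sqrt(s[1] ** 2 + s[2] ** 2) / s[0] > straight_tol:
        return None
    # Alignment with a coordinate axis.
    axis = int(np.argmax(np.abs(direction)))
    if abs(direction[axis]) < align_tol:
        return None
    sign = '+' if direction[axis] > 0 else '-'
    return sign + 'xyz'[axis]

# Illustrative usage: a window of points moving mostly along +y.
window = [[0.0, 0.0, 0.0], [0.02, 0.5, 0.01], [0.01, 1.0, 0.0], [0.0, 1.5, -0.02]]
print(classify_axis_motion(window))   # expected: '+y'
```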
     
    Last edited: May 13, 2012