
Matlab - averaging

  1. Jan 10, 2010 #1

    I have 2 sets of data, one is 472 data points long, the other 370. They are both a function of the same variable "x", they both have the same value for "x" as the last point and the same value for "x" as the first point.

    I'm being asked to average the two sets of data, but obviously I can't just use "(y+y2)/2". I also can't cut the 472 data set to 370 and then average because the scale is off. What can I do? Can I have matlab average values only when they correspond to the same value of "x"?

    Any help would be greatly appreciated. Thanks.
  3. Jan 10, 2010 #2
    What kind of value is x? Scalar or vector?
  4. Jan 10, 2010 #3
    Both x and y are scalar.
  5. Jan 10, 2010 #4
    What about a weighted mean?

    Yeah, you just take a subset of the 472-point set where its x equals an x in the 370-point set,
    something like s1 = a(ismember(a(:,1), b(:,1)), :)
    Last edited: Jan 10, 2010
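    In MATLAB, that subset-and-average idea can be sketched as follows. This assumes each data set is stored as a two-column matrix [x y]; the variable names a and b are placeholders, not from the original post:

    ```
    % a: the 472-point set as [x y1], b: the 370-point set as [x y2]
    % (assumed layout; adjust column indices to match your data)
    [xc, ia, ib] = intersect(a(:,1), b(:,1));  % x values present in both sets
    yavg = (a(ia,2) + b(ib,2)) / 2;            % average y only at matching x
    ```

    intersect returns the common x values along with the row indices into each matrix, so the averaging only touches points where the two sets share an x.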
  6. Jan 10, 2010 #5
    Well, if they're scalars, then just sum all the values and divide by n1 + n2, the combined number of points.
  7. Jan 10, 2010 #6
    "I'm being asked to average the two sets of data..."

    Are you sure you understood what you are asked to do? This could mean several things.
  8. Jan 11, 2010 #7
    This would work perfectly, do you happen to know the exact command?
  9. Jan 11, 2010 #8
    GCD[472-1,370-1]==3. The x values are identical at exactly 4 points. This would entail ignoring most of the data.
  10. Jan 11, 2010 #9
    The x values are identical at many points (75% or more, ball-parking it)...
  11. Jan 11, 2010 #10

    If I understand you correctly, you have two data sets that are functions of x but don't necessarily contain the same x values. A rigorous analysis is a parametric fit, that is, a best fit to a function, typically found by minimizing the least-squared error. Maximum likelihood methods are also common. For example, if y depends linearly on x, then a linear regression to all of the data points gives the best-fit values of slope and intercept. The best estimate of y at any given x is then easily calculated. Other common functional forms are polynomials, exponentials, etc. Non-linear functions are more complicated to fit, of course.
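    The parametric-fit approach above can be sketched in MATLAB. This is a hedged illustration, not the thread's actual solution: the polynomial degree (3) is chosen arbitrarily, and x1, y1, x2, y2, xq are placeholder names for the two data sets and the query points:

    ```
    % Pool both data sets, then do one least-squares polynomial fit
    x = [x1; x2];            % all x values from both sets (column vectors)
    y = [y1; y2];            % the corresponding y values
    p = polyfit(x, y, 3);    % cubic fit; pick a degree that suits the data
    yhat = polyval(p, xq);   % best estimate of y at any query points xq
    ```

    Because the fit uses every point from both sets, the mismatched lengths (472 vs. 370) stop mattering; the fitted curve plays the role of the "average" of the two data sets.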
  12. Jan 11, 2010 #11
    The x values are identical at exactly 4 points and some close matches.

    In any case, one interpretation of the problem encoded in Mathematica is

    ( (Sum[ y1[[i]], {i, 472} ]/472.) + (Sum[ y2[[i]], {i, 370} ]/370.) )/2.
  13. Jan 11, 2010 #12

    They are non-linear sets of data. And why are the x values identical at only 4 points? Look at the sets of data, attached. A is the magnetic field. The prof suggested that we do (Rxxmn + Rxxpl)/2, but it's evident he didn't realize the sets are different sizes...

    Attached Files:

  14. Jan 11, 2010 #13
    Good; it helps to know you are doing experimental physics rather than an applied math abstraction. I'm still trying to interpret your text files and such.
    Last edited: Jan 11, 2010
  15. Jan 11, 2010 #14
    Each file has 443 lines of data. A seems to be your independent variable (or is intended to be, albeit with some instrument noise), ranging from -0.02 to 4.40 in increments of 0.01.

    Where do you get the 472 and 370 counts?
    Are the Rxx's and Rxy's your dependent data for which you wish to find averages?
  16. Jan 12, 2010 #15
    Yes, I want to average the Rxx's and the Rxy's. The data was supplied by my prof.
    Last edited: Jan 12, 2010
  17. Jan 12, 2010 #16
    OK... Looking at the Minus.tex file, you have just as many Rxymn entries as Rxxmn entries. 443 each. The formatting is bad so that it looks like there are holes in the Rxxmn column of data that are due to mis-tabbing. Is this what you are talking about? The Rxypl and Rxxpl also have 443 entries apiece.
    Last edited: Jan 12, 2010
  18. Jan 12, 2010 #17
    I see 371 entries for the Rxypl/Rxxpl.
    Last edited: Jan 12, 2010
  19. Jan 12, 2010 #18
    I see that now. The danged data is full of gaps.