Multiplying two data sets which don't have the same length or spacing

AI Thread Summary
The discussion centers around the challenge of multiplying two data sets with different lengths and non-uniform spacing, both sharing the same x-axis (energy in MeV). The key issue is to focus on the overlapping region of the data sets, which requires filtering out irrelevant points. To achieve compatibility, participants suggest retaining only the overlapping energy points from both sets. If the specific energy values differ between the data sets, interpolation is necessary. Techniques such as MATLAB's cubic spline and Python's SciPy library are recommended for interpolation. The user ultimately resolved the issue by using the 'interpolate' function from SciPy in Python, successfully plotting the multiplied data.
ademus4
As the post title describes, I have two data sets and want to multiply them together (and finally plot them). The problem I have is that the data sets are different lengths, so, for example, using MATLAB:

>> data_set1 .* data_set2

But this would not give me the correct answer for a number of reasons.

The data share the same x-axis (energy in MeV). The trouble is that the first data set starts around 0 MeV and ends around 11 MeV. The second data set starts just before the end of the first and extends further. The region where they overlap is the part I am interested in. Another issue is that the spacing between data points is non-uniform in both data sets.

Is there a process to normalise or do something with the data to make them 'compatible'?

For tools, I am using anything I can get my hands on: Excel for basic ideas, Python (NumPy and Matplotlib) for proper data analysis. I am fairly comfortable with MATLAB, but I don't have it where I work now (I can, however, download Octave).
 
If you're only interested in the overlap region, start by throwing away everything else. Keep only the points in the first data set whose energy is above the minimum energy of the second data set, and similarly keep only the points in the second data set whose energy is below the maximum energy of the first.
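
A minimal NumPy sketch of that trimming step (the array names and the synthetic data here are placeholders, not the poster's actual numbers):

import numpy as np

# Placeholder data: set 1 runs from 0 to ~11 MeV, set 2 from ~9 MeV upward,
# both with irregular spacing.
rng = np.random.default_rng(0)
e1 = np.sort(rng.uniform(0.0, 11.0, 40))
y1 = np.exp(-e1 / 5.0)
e2 = np.sort(rng.uniform(9.0, 20.0, 40))
y2 = np.sqrt(e2)

# Keep only points of set 1 at or above the minimum energy of set 2,
# and points of set 2 at or below the maximum energy of set 1.
mask1 = e1 >= e2.min()
mask2 = e2 <= e1.max()
e1_ov, y1_ov = e1[mask1], y1[mask1]
e2_ov, y2_ov = e2[mask2], y2[mask2]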

You say that the spacing between the data points is non-uniform, but don't specify whether the specific energies are the same in each data set or different. If they're the same, then you're done once you take the overlap. If they're different, then you'll have to interpolate at least one of the data sets. You can interpolate one of the data sets onto the points where the other is evaluated, or interpolate them both onto some other set of points. If you need to work with them in future calculations, it might simplify things to interpolate them both onto a uniformly spaced set of points.
 
Hi ademus, is your data smooth enough to be reasonably interpolated?

If so, try MATLAB's cubic spline. It is easy to use: ynew = spline(x, y, xnew), where x and y are your original data, xnew is the set of x sample points at which you would like to evaluate, and ynew is the corresponding interpolated y data.
 
yeah, what uart said.

If you don't have MATLAB, you can do the same thing in Python with scipy.interpolate; see this online example, for instance.
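
A rough sketch of how that might look with scipy.interpolate, continuing from the trimmed arrays in the earlier sketch and assuming the data are smooth enough for a cubic fit:

import numpy as np
from scipy.interpolate import interp1d

# Common energy grid covering only the region where both trimmed sets are
# defined, so no extrapolation is needed.
lo = max(e1_ov.min(), e2_ov.min())
hi = min(e1_ov.max(), e2_ov.max())
e_common = np.linspace(lo, hi, 200)

# Cubic interpolation of each data set onto the common grid
# (each set needs at least four points in the overlap for kind='cubic').
f1 = interp1d(e1_ov, y1_ov, kind='cubic')
f2 = interp1d(e2_ov, y2_ov, kind='cubic')
y1_c = f1(e_common)
y2_c = f2(e_common)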
 
Thanks for the responses!

In the end I used SciPy's 'interpolate' module in Python, which worked perfectly!

Here is a plot of the data (without labels etc.; the blue and green data are multiplied to make the red data):

[Attached plot: no_labels.png]
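
The thread doesn't show the exact calls used, but the final step might look roughly like this, continuing from the arrays in the earlier sketches:

import matplotlib.pyplot as plt

# With both data sets on the same grid, the product is an element-wise multiply.
product = y1_c * y2_c

plt.plot(e1_ov, y1_ov, 'o-', label='data set 1 (overlap)')
plt.plot(e2_ov, y2_ov, 'o-', label='data set 2 (overlap)')
plt.plot(e_common, product, label='product')
plt.xlabel('Energy (MeV)')
plt.legend()
plt.show()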
 