Normalisation of Experimental Data

In summary, the conversation discusses operating a particle separation device with 8 stages that collect particles of varying sizes. To compare the mass of particles collected in each stage, a normalisation process is used to account for the different size ranges. The normalised data, represented as dM/dlogDp, shows higher numbers than the raw results for total mass collected. The conversation raises questions about the physical meaning and direct comparability of the two data sets, and about the correct value to report for the mass of particles in 10 grams of air.
  • #1
davidgrant23
Hi all,

I am currently operating a piece of equipment that essentially collects particles and separates them based on their size. Essentially you have 8 stages, and each stage has a differing size of particles it collects. For example:

Stage    Size of Particles (D) (um)    Mass Collected (M) (mg)
1        0.1 - 0.2                     1
2        0.2 - 0.5                     0.1
3        0.5 - 1                       1
4        1 - 2                         0.5
5        2 - 4                         1
6        4 - 5                         0.5
7        5 - 9                         3
8        9 - 10                        2

Now, as you can see, each stage collects a different range of sizes. Stage 1 has a "width" of 0.1 um, while stage 7 has a width of 4 um. Because of these different stage widths it is common to normalise the data to make the results independent of stage width, like so:

##\dfrac{dM}{d\log D_p} = \dfrac{\Delta M}{\log_{10}(D_u) - \log_{10}(D_l)}##

where ##D_u## is the upper stage boundary diameter, ##D_l## is the lower stage boundary diameter (e.g., for stage one ##D_u## is 0.2 um and ##D_l## is 0.1 um), and ##\Delta M## is the mass collected on that stage. If you then plot this data:

Stage    dM/dlogDp
1        3.321928
2        0.251294
3        3.321928
4        1.660964
5        3.321928
6        5.159426
7        11.75215
8        43.70869
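The normalised values above can be reproduced directly from the first table; a minimal sketch in Python (stage boundaries and masses copied from the tables above, base-10 logarithms assumed):

```python
import math

# (lower bound um, upper bound um, mass collected mg) for the 8 stages
stages = [
    (0.1, 0.2, 1.0),
    (0.2, 0.5, 0.1),
    (0.5, 1.0, 1.0),
    (1.0, 2.0, 0.5),
    (2.0, 4.0, 1.0),
    (4.0, 5.0, 0.5),
    (5.0, 9.0, 3.0),
    (9.0, 10.0, 2.0),
]

def dm_dlogdp(d_lower, d_upper, mass):
    """Stage mass divided by the log10 width of the stage."""
    return mass / (math.log10(d_upper) - math.log10(d_lower))

for i, (dl, du, m) in enumerate(stages, start=1):
    print(f"Stage {i}: dM/dlogDp = {dm_dlogdp(dl, du, m):.6f}")
# Stage 1 prints 3.321928, matching the table
```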

Now, my question is: what exactly is the physical meaning of these new results? The initial results show me the mass of particles between 0.1 and 10 um collected after a certain time of experiment, or some other factor. The total mass is 9.1 mg. But these normalised results have much higher numbers. Do they have any inherent meaning other than being independent of stage width? Am I missing some fundamental calculus principle that imparts meaning to dM/dlogDp?

Thanks.
 
  • #2
You can imagine that the dimensions of the catcher will affect the number of things caught. Imagine you are catching raindrops in a bucket: the bigger bucket tends to catch more rain. If you want to compare the amount of rain, then, you need to account for the size of the bucket.

In this case, it is sensible to divide the amount of rain caught by the area of the bucket, getting rain per unit area, and compare that: so we'd write ##dM/d(\pi r^2)## ... though I'm not happy with that notation, since we are actually computing ##M/\pi r^2##.

Your normalisation procedure does something similar.
To work out what exactly it is doing, you need to go into detail about what exactly is being counted and how.
 
  • #3
Hi Simon,

Just to give you a bit of background info: I essentially draw air through a device that separates out particulates based on their inertia, and therefore their size. There are 8 stages; each separates out the particles in the size range given in my initial post. Now, we know the mass of air that passes through the device (let's say 10 grams), and we know the mass of particles collected on each stage, so what I then do is calculate the amount of particles per given mass of air (mg particles/g air), for example:

Stage 1 mass collected = 1 mg. 10 grams of air drawn through. Therefore, 0.1 mg particles/g air. (Particles between 0.1-0.2 um.)
Stage 2 mass collected = 0.1 mg. 10 grams of air drawn through. Therefore, 0.01 mg particles/g air. (Particles between 0.2-0.5 um.)
etc etc.

Now, typically the 8 stages are summed and we get total particles (mg) per gram of air. This doesn't require normalisation. However, if we want to see which size range of particles is predominant, then we need to take the size range of each stage into account. That's where we do the normalisation outlined previously.

I understand the principle of the normalisation. Indeed, how can you compare the mass collected in each stage when some stages collect a bigger range of particles than others?

What I can't get my head around is how the two are directly comparable. If you look at my original post, the total mass collected is 9.1 mg, or 0.91 mg particles/g air if we assume 10 g of air again. dM/dlogDp has the same units as mg or mg particles/g air (you can do the normalisation for either), but the numbers are far higher (72.5 mg in fact).
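A quick numerical check helps here (a sketch in Python; the variable names are my own): dM/dlogDp is a density, i.e. mass per unit of log10(Dp). Summing the densities directly gives the 72.5 figure, which mixes units and is not a mass; multiplying each density back by its own log-width (integrating over dlogDp) recovers the 9.1 mg total:

```python
import math

# Stage boundaries (um) and masses (mg) from the thread's first table
stages = [(0.1, 0.2, 1.0), (0.2, 0.5, 0.1), (0.5, 1.0, 1.0),
          (1.0, 2.0, 0.5), (2.0, 4.0, 1.0), (4.0, 5.0, 0.5),
          (5.0, 9.0, 3.0), (9.0, 10.0, 2.0)]

total_mass = sum(m for _, _, m in stages)  # plain sum of stage masses: 9.1 mg

# Normalised densities and the log10 width each one was divided by
widths = [math.log10(du) - math.log10(dl) for dl, du, _ in stages]
densities = [m / w for (_, _, m), w in zip(stages, widths)]

naive_sum = sum(densities)                                 # ~72.5, not a mass
recovered = sum(d * w for d, w in zip(densities, widths))  # back to 9.1 mg

print(total_mass, naive_sum, recovered)
```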

So, ultimately, which one is correct? Are they directly comparable? Does the normalised data have any physical meaning? Both the individual stage values and the total mass of particles are much higher. If someone asked me "What mass of particles are in 10 g of air?", would I tell them 9.1 mg or 72.5 mg?

Cheers,
Lewis
 

What is normalisation of experimental data?

Normalisation of experimental data refers to the process of adjusting and standardising data to account for any variations or biases that may exist within the dataset. This allows for fair and accurate comparisons between different experimental conditions or samples.

Why is normalisation important in scientific research?

Normalisation is important in scientific research because it helps to remove any confounding variables or biases that may affect the interpretation of the data. It also allows for meaningful comparisons to be made between different experimental conditions, which is crucial for drawing accurate conclusions.

What methods can be used for normalisation of experimental data?

There are several methods that can be used for normalisation of experimental data, including the use of internal controls, reference genes, or statistical methods such as mean or median normalisation. The method chosen will depend on the type of data and the experimental design.

How do you determine if data needs to be normalised?

Data should be normalised if there are significant differences in the baseline levels of the data or if there are known sources of bias or variation within the dataset. It is also important to consider the experimental design and the type of data being collected.

What are the potential limitations of normalising experimental data?

One potential limitation of normalising experimental data is that it can introduce errors or biases if not done correctly. It is important to carefully choose appropriate methods and to ensure that the normalisation process does not alter the underlying biological or physical reality of the data. Additionally, normalisation may not be suitable for all types of data or experimental designs.
