How does the distribution depend on a variable resolution?

Didier Drogba
Dear all,
We were trying to solve the following question but did not quite understand what to do. The question is as follows:

The reconstructed invariant mass is usually described by a Gaussian (or Normal) distribution. However, the resolution σ (the width of the distribution) is found to depend on the energy of the parent particle E in the laboratory frame, which in turn can be described by an exponential distribution for energies above some minimum (below this threshold nothing is detected). The resolution of the invariant mass is found to be σ(E) ∝ 1/√E. What does the resulting invariant mass spectrum look like? Does it still look Gaussian? The detection threshold of the detector is around 500 MeV, and the average detected energy is about 2.5 GeV.

All help will be greatly appreciated!
Kind regards,
DD
 
Ignoring the physics, your first sentence implies that the distribution is normal, depending on a parameter.
 
I moved the thread to our mathematics section.

Imagine a discrete version of the problem: What do you get if half of the particles are measured with a resolution of ##\sigma## and the other half is measured with a resolution of ##2\sigma##?
Your problem is a generalization of this.
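The discrete version is easy to check numerically. A minimal sketch (assuming, for illustration, a 50/50 mixture centred at zero): the mixture is symmetric and unimodal but has positive excess kurtosis, so it is not Gaussian.

```python
# Toy check (assumed 50/50 mixture, zero mean): an equal mixture of
# N(0, sigma) and N(0, 2*sigma) has heavier tails than a single Gaussian,
# which shows up as positive excess kurtosis.
import random
import statistics

random.seed(0)
sigma = 1.0
samples = [random.gauss(0.0, sigma) for _ in range(100_000)]
samples += [random.gauss(0.0, 2.0 * sigma) for _ in range(100_000)]

mean = statistics.fmean(samples)
var = statistics.fmean((x - mean) ** 2 for x in samples)
kurt = statistics.fmean((x - mean) ** 4 for x in samples) / var ** 2
print(f"excess kurtosis = {kurt - 3:.2f}")  # positive: heavier tails than a Gaussian
```

A pure Gaussian would give excess kurtosis near zero; the mixture does not.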
 
Didier Drogba said:
The reconstructed invariant mass is usually described by a Gaussian (or Normal) distribution. However, the resolution σ (the width of the distribution) is found to depend on the energy of the parent particle E in the laboratory frame,

Define the population we are seeking a distribution for.

Are we dealing with a single type of particle that has a single invariant mass ##m_0## at all values of E? Does the population consist of a distribution of measurements of that mass that involve some errors in measurement whose standard deviation depends on E, but whose mean value does not depend on E? Or do different values of E imply different actual values of ##m_0## and different mean values for the measurement of m? (I'm not familiar with a definition of "invariant mass", so that point isn't clear to me.)

What does the resulting invariant spectrum look like? Does it still look Gaussian?

We need to distinguish between these two questions:

1) Does the mean of a sample of N measurements of invariant mass (where E varies randomly) have a Gaussian distribution?

versus

2) Do single measurements of invariant mass (where E varies randomly) have a Gaussian distribution?

The mean value of any sample of N independent measurements from any fixed (but not necessarily Gaussian) distribution will be approximately Gaussian, by the central limit theorem. So the answer to 1) is yes.

I see no way to answer 2) without getting into specifics.

Suppose the probability density of ##r = 1/\sqrt{E}## is ##f(r)## and the conditional density of ##m## given ##r## is ##g(m,r)##.

The joint probability density for ##(m,r)## is ##g(m,r) f(r)##.

The marginal probability density for ##m## (which is what you seek) is ##\int_a^b g(m,r) f(r)\, dr## for whatever limits ##a,b## are needed to define the possible values of ##r##.

The first problem is to find ##f(r)##. You state ##E## has a truncated exponential distribution - i.e. ##E## has probability density ##K e^{-\lambda E}## for some constants ##K,\lambda## for values of ##E## in ##[E_{min},\infty)##. So we must derive the distribution of ##r = 1/ \sqrt{E}## from that information.
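The change of variables can be sketched as follows (under the stated truncated-exponential assumption; ##K## is the normalizing constant of the truncated exponential):

```latex
% E = 1/r^2 maps [E_{min}, \infty) onto (0, 1/\sqrt{E_{min}}],
% and |dE/dr| = 2/r^3, so
f(r) = K e^{-\lambda / r^2} \,\frac{2}{r^3},
\qquad 0 < r \le \frac{1}{\sqrt{E_{\min}}},
\qquad K = \lambda e^{\lambda E_{\min}} .
```

Here ##K = \lambda e^{\lambda E_{\min}}## follows from requiring ##\int_{E_{\min}}^{\infty} K e^{-\lambda E}\, dE = 1##.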

It is questionable whether a measurement of a mass could actually produce values in the entire interval ##(-\infty, \infty)##, so there may be a better approximation to such measurements than a normal distribution. However, assuming we use a normal distribution (with the same mean ##\mu## for all values of ##E##), then ##g(m,r) = \frac{1}{\sqrt{\pi k}\, r} e^{-\frac{(m-\mu)^2}{k r^2}}##, where ##k## is some constant; the standard deviation is ##\sqrt{k/2}\, r##, proportional to ##r = 1/\sqrt{E}## as required.

The marginal density of ##m## is ##h(m) = \int_a^b \frac{1}{\sqrt{\pi k}\, r} e^{-\frac{(m-\mu)^2}{k r^2}} f(r)\, dr##. There may be no explicit expression for ##h(m)##, but it can be computed numerically.

Without knowing ##f(r)## explicitly, I think we can prove that the mean of ##h(m)## is ##\mu## and that ##h(m)## is unimodal about ##m = \mu##. However, I don't see any general argument that ##h(m)## must have the Gaussian form ##\frac{1}{\sqrt{2\pi} \sigma} e^{-\frac{(m-\mu)^2}{2 \sigma^2}}##.
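To illustrate that the marginal really is computable numerically, here is a minimal sketch in Python. All constants are assumed illustrative values (not from the thread), and the integral is taken over ##E## instead of ##r##, which is an equivalent change of variables.

```python
# Midpoint-rule sketch of the marginal h(m), with assumed illustrative
# constants: E ~ E_MIN + Exp(scale TAIL), sigma(E) = C / sqrt(E), mean MU.
import math

E_MIN, TAIL, MU, C = 0.5, 2.0, 1.0, 0.05

def h(m, n=4000, e_max=30.0):
    """Approximate the marginal density of m by integrating over E."""
    dE = (e_max - E_MIN) / n
    total = 0.0
    for i in range(n):
        E = E_MIN + (i + 0.5) * dE                  # midpoint of the i-th slab
        sigma = C / math.sqrt(E)                    # resolution at this energy
        gauss = math.exp(-0.5 * ((m - MU) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        weight = math.exp(-(E - E_MIN) / TAIL) / TAIL  # truncated-exponential pdf of E
        total += gauss * weight * dE
    return total

# Sanity check: h should integrate to ~1 over the region where it has mass.
norm = sum(h(MU + k * 0.001) * 0.001 for k in range(-300, 301))
print(f"integral of h ≈ {norm:.3f}")
```

Plotting ##h(m)## against a single Gaussian of the same variance would make the excess tails visible.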
 
@Stephen Tashi: The energy distribution is described, so OP can get f(r) or (probably easier) f(E).

The measurements are independent and they are sampled individually from Gaussian distributions with the mean given by some global fixed m and the width dependent on the energy in this measurement. That is an approximation, of course, but typically the width will be something like a few percent of the mass, and it doesn't matter if extreme outliers are as frequent as the Gaussian distribution suggests or not.
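This description can be simulated directly by Monte Carlo. A sketch: the threshold (0.5 GeV) and mean detected energy (2.5 GeV) follow the numbers in the question, while the true mass ##m_0## and the proportionality constant in ##\sigma(E)## are assumed illustrative values.

```python
# Monte Carlo sketch of the setup above. Threshold and mean energy follow
# the question; M0 and C are assumed illustrative values.
import random
import statistics

random.seed(1)
E_MIN = 0.5   # detection threshold, GeV
TAIL = 2.0    # exponential scale, so <E> = E_MIN + TAIL = 2.5 GeV
M0 = 1.0      # true invariant mass (arbitrary units)
C = 0.05      # sigma(E) = C / sqrt(E)

def sample_mass():
    E = E_MIN + random.expovariate(1.0 / TAIL)  # truncated exponential energy
    return random.gauss(M0, C / E ** 0.5)       # Gaussian mass with sigma(E)

masses = [sample_mass() for _ in range(200_000)]
mean = statistics.fmean(masses)
var = statistics.fmean((m - mean) ** 2 for m in masses)
kurt = statistics.fmean((m - mean) ** 4 for m in masses) / var ** 2
print(f"mean = {mean:.4f}, excess kurtosis = {kurt - 3:.2f}")
```

The sample mean stays at ##m_0##, but the positive excess kurtosis shows the spectrum is a heavier-tailed scale mixture of Gaussians, not a single Gaussian.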
 