Fourier transform and cosmic variance: a few clarifications

fab13
TL;DR Summary
I would like to understand the reasoning in a report about cosmic variance. I need clarification on how the expression linking the relative error of the power spectrum to the number of pixels in Fourier space is obtained. I would also like to understand under which conditions a relative error and a standard deviation are equal.
I quote an original report by a colleague:
If we are interested in the power spectrum, we want to estimate the
variance of the amplitude of the modes ##k## of our Fourier
decomposition. If we observe the whole observable Universe and take
its Fourier transform, we get a cube whose center is the mode
##|\vec{k}| = 0##, which corresponds to the mean value of the observed
field.

This mode has only one pixel. How do we measure the variance of the
process at ##|\vec{k}| = 0##? We cannot.

In fact we can, but the value obtained doesn't mean anything because the
error is infinite. In other words, we have an intrinsic (statistical)
error which depends on the number of realizations to which we have
access.

One can consider shells of thickness ##dk##, spanning ##[k, k+dk]##,
which contain a number of pixels ##N_{k} = V_{k}/(dk)^{3}##, where
##V_{k} = 4 \pi k^{2} dk## is the volume of the shell and ##(dk)^{3}##
is the volume of a pixel in our Fourier-transform cube.
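To make the pixel counting concrete, here is a small numerical sketch (my addition, not from the report; the grid size `n` and spacing `dk` are arbitrary choices). It counts the Fourier-cube pixels falling in a shell ##[k, k+dk]## and compares with ##4\pi (k/dk)^{2}##, and checks that the ##|\vec{k}| = 0## shell contains a single pixel:

```python
import numpy as np

# Illustrative sketch: count Fourier-grid pixels in the shell [k, k + dk]
# and compare with N_k ~ V_k/(dk)^3 = 4*pi*(k/dk)^2.
n = 64                                   # grid points per side of the cube (assumed)
dk = 1.0                                 # Fourier-space grid spacing (assumed)
k_axis = dk * (np.arange(n) - n // 2)    # symmetric k axis per dimension
kx, ky, kz = np.meshgrid(k_axis, k_axis, k_axis, indexing="ij")
k_mag = np.sqrt(kx**2 + ky**2 + kz**2)   # |k| for every pixel of the cube

counts = {}
for k in (5.0, 10.0, 20.0):
    counted = int(np.sum((k_mag >= k) & (k_mag < k + dk)))
    counts[k] = counted
    print(f"k={k:5.1f}  counted={counted:6d}  4*pi*(k/dk)^2={4*np.pi*(k/dk)**2:8.1f}")

# The |k| = 0 shell is a single pixel: the DC (mean) mode.
n_zero = int(np.sum(k_mag == 0.0))
print("pixels with |k| = 0:", n_zero)
```

The agreement with ##4\pi(k/dk)^{2}## improves as ##k## grows, since the continuum shell-volume approximation gets better relative to the lattice granularity.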

So one can estimate how many values one can use to calculate our
power spectrum for each value of ##k##. The greater ##k## is, the
greater the number of accessible values, and therefore the smaller the
statistical error. The power spectrum is a variance estimator, so the
statistical error is naturally expressed as a relative error:

## \dfrac{\sigma (P(k))}{P(k)} = \sqrt{\dfrac{2}{N_{k}-1}}, \quad \text{with} \quad N_{k} \approx 4\pi \left(\dfrac{k}{dk}\right)^{2} ##

So we can see that for the case ##|\vec{k}| = 0## we have an infinite error because ##N_{k} = 1##.
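The quoted formula is the standard error of a sample variance built from ##N_{k}## independent Gaussian values; a quick Monte Carlo (my own sketch, with arbitrary values for the sample size and the number of trials) reproduces it:

```python
import numpy as np

# Sketch: the relative error of the sample variance of N independent Gaussian
# samples is sqrt(2/(N-1)); check this by repeating the estimate many times.
rng = np.random.default_rng(0)
N = 50            # samples per variance estimate ("pixels" in a shell), assumed
trials = 20000    # number of repeated estimates, assumed

samples = rng.normal(0.0, 1.0, size=(trials, N))
var_hat = samples.var(axis=1, ddof=1)    # unbiased sample variance per trial

rel_err = var_hat.std() / var_hat.mean()
print("measured :", rel_err)
print("predicted:", np.sqrt(2 / (N - 1)))
```

The measured scatter of the variance estimates, divided by their mean, lands close to ##\sqrt{2/(N-1)}##, which is exactly the relative-error form quoted above.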
1) I can't manage to prove that the statistical error takes the form:

##\dfrac{\sigma (P(k))}{P(k)} = \sqrt{\dfrac{2}{N_{k}-1}}, \quad \text{with} \quad N_{k} \approx 4\pi \left(\dfrac{k}{dk}\right)^{2}##

and why is it considered a relative error?

2) Under which conditions can a statistical error (a standard deviation) be identified with a relative error (##\dfrac{\Delta x}{x}##)?

3) How can one prove that ##N_{k} = 1## in the case ##|\vec{k}| = 0##?
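For reference, a standard-statistics sketch bearing on questions 1) and 2) (this is an added note, under the assumption that the ##N_{k}## values entering the power spectrum at fixed ##|\vec{k}|## behave as independent Gaussian samples of variance ##\sigma^{2} = P(k)##): the sample variance ##\hat{V} = \frac{1}{N_{k}-1}\sum_{i=1}^{N_{k}} (x_{i} - \bar{x})^{2}## of ##N_{k}## such samples satisfies ##(N_{k}-1)\hat{V}/\sigma^{2} \sim \chi^{2}_{N_{k}-1}##, a chi-squared variable with mean ##N_{k}-1## and variance ##2(N_{k}-1)##. Hence

## \langle \hat{V} \rangle = \sigma^{2}, \qquad \mathrm{Var}(\hat{V}) = \dfrac{2\sigma^{4}}{N_{k}-1}, \qquad \dfrac{\sigma(\hat{V})}{\langle \hat{V} \rangle} = \sqrt{\dfrac{2}{N_{k}-1}}. ##

The last ratio is the standard deviation of the estimator divided by its mean, which is precisely a relative error ##\Delta x / x## on the variance; identifying the two is meaningful whenever the estimator's fluctuations are small compared to its mean, i.e. ##N_{k} \gg 1##.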

Any help is welcome.
 
fab13 said:
I cite an original report of a colleague
Please give a reference. You can't cite a source that nobody else but you can see.
 
Is there really no one who could help me with this problem of understanding?
 
fab13 said:
Hoping this will help you
Aside from the text being in French, this is still just a discussion forum and does not give enough information to figure out what the discussion is supposed to be about. There is one link to what looks like it should be a presentation, but the link is not valid (404 error).

Is there a peer-reviewed paper or something similar that the discussion you linked to is based on? Can you provide a link to such a paper?
 