Performing Detection with IID RVs: Unknown PDF & Neyman-Pearson Test Efficiency

  • Thread starter: sibtain125
  • Tags: Pdf
sibtain125
Hi all


1) When we are performing detection, we have received y(n) = x(n) + z(n) for n = 0, 1, 2, ..., where z is Gaussian noise, but we don't know the distribution of x(n); all we know is that it is an i.i.d. random variable with zero mean and a fixed, given variance. My question is: can we still perform the Neyman-Pearson test p(y; H1)/p(y; H0) > gamma in the same manner as we do when the distribution of x is completely known?

2) If the answer to the above question is yes, then will the pdf p(y; H1) be Gaussian with mean = 0 and variance = (noise variance + signal variance)?

3) Can you please refer me to useful texts where I can find how cost functions are modeled for signal detection systems?

regards
 
What exactly are your hypotheses?

A Gaussian would apply if you are talking about the distribution of the sum or average of a large number of signals.

If your hypotheses are about a single signal, such as H0: y[3] = 5, I think you are out of luck.
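For concreteness, a rough numerical illustration of that point (the uniform distribution for the individual signals and the sample sizes are just arbitrary choices for the demo):

Code:
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
M, N = 50_000, 200   # M independent trials, each averaging N signals

# Individual signals: uniform on [-sqrt(3), sqrt(3)] -> zero mean, unit variance, clearly non-Gaussian
samples = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(M, N))

single = samples[:, 0]           # distribution of one signal
averages = samples.mean(axis=1)  # distribution of the average of N signals

print("excess kurtosis, single signal:", round(kurtosis(single), 3))    # about -1.2 (uniform, far from Gaussian)
print("excess kurtosis, average of N :", round(kurtosis(averages), 3))  # near 0 (approximately Gaussian, per the CLT)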
 
Hi, well I have, say, N samples, and N can be assumed to be large. Actually it's a standard detection problem in signal processing, where you are looking for the presence or absence of a signal. H0 means that the signal is not there and the variance of the data is, say, sigma0, and under H1, when the signal is present, the variance of the received data is sigma1.
In short, yes, the signals come from the distribution of a large number of signals.
 
sibtain125 said:
H0 means that the signal is not there and the variance of the data is, say, sigma0, and under H1, when the signal is present, the variance of the received data is sigma1.

To use Neyman-Pearson you must pick some statistic or statistics and compute the likelihood of their observed values given each of the sigmas. If your idea is to use the individual values of the observed signals y[0], y[1], ... as the statistics, you cannot compute the likelihood of this vector of values without assuming some specific probability distribution for the x. If you use a statistic involving the sum of the y, you can approximate its distribution as normal. For example, the mean, y_bar, of a sample of n of the y is approximately normally distributed.
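As a rough sketch of how that plays out in code (the energy statistic T = sum y[n]^2, the particular values of sigma0, sigma1, N, and the false-alarm level are all just illustrative assumptions, since the hypotheses here differ in variance rather than mean):

Code:
import numpy as np
from scipy.stats import norm

sigma0 = 1.0            # std of y[n] under H0 (noise only) -- illustrative value
sigma1 = np.sqrt(2.0)   # std of y[n] under H1 (signal + noise) -- illustrative value
N = 1000                # number of samples, assumed large
alpha = 0.01            # target false-alarm probability

# Test statistic: sample energy T = sum of y[n]^2.
# Under H0 (Gaussian noise only) T has mean N*sigma0^2 and variance 2*N*sigma0^4,
# and for large N the CLT lets us treat T as approximately normal, so a
# Neyman-Pearson-style threshold can be set from the H0 approximation alone.
mu0 = N * sigma0**2
sd0 = np.sqrt(2 * N) * sigma0**2
gamma = norm.ppf(1 - alpha, loc=mu0, scale=sd0)   # P(T > gamma | H0) ~= alpha

rng = np.random.default_rng(0)
y_h0 = rng.normal(0, sigma0, N)   # simulated data under H0
y_h1 = rng.normal(0, sigma1, N)   # simulated data under H1 (x taken Gaussian here purely for the simulation)

for label, y in [("H0 data", y_h0), ("H1 data", y_h1)]:
    T = np.sum(y**2)
    print(label, ": T =", round(T, 1), "-> decide", "H1" if T > gamma else "H0")

The point is that only the distribution of the chosen statistic under each hypothesis is needed, not the full pdf of x.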
 
Thanks Stephen, that solves the problem.

Anyway, it means that we don't know what happens when we add an IID RV (say mean = 0, var = 1, unknown pdf) to a Gaussian (mean = 0, var = 1): should the resulting pdf remain Gaussian with var = 2? Thanks again.
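As a quick numerical check of that last point (the uniform distribution for x is just an arbitrary non-Gaussian example): the variance of the sum is indeed 2, but the pdf of x + z is the convolution of the two densities and is only Gaussian if x itself is Gaussian; the Gaussian approximation discussed above comes from summing or averaging many samples, not from a single y(n).

Code:
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
M = 200_000

z = rng.normal(0, 1, M)                      # Gaussian noise, mean 0, var 1
x = rng.uniform(-np.sqrt(3), np.sqrt(3), M)  # example non-Gaussian x: mean 0, var 1
y = x + z                                    # one sample of signal + noise

print("var(y) ~", round(np.var(y), 3))                  # close to 2, as expected
print("excess kurtosis of y ~", round(kurtosis(y), 3))  # about -0.3, i.e. y is not exactly Gaussian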
 