I'm having a little trouble understanding this time. However, I can say that if f1 is the correct distribution, changing the value of k_b won't make f2 into the correct one.
If 00 is the correct value of k_b, and if you guess at, say, 01 as the correct value, this pretty much only means that your...
No, this is not one of the two sets of theoretical distributions referred to. It could be viewed as the set of theoretical distributions for the *wrong* k_a, I suppose.
Moreover, as you note in your last post, the experimental data has enabled us to construct a set of empirical distributions...
I think I can clarify further and hopefully eliminate the remaining ambiguity:
This is the process being applied to a single (P, C)-pair:
We hold various parameters constant, such as the positions of the bits to which the process is applied, as stated.
for (each possible candidate...
They are significant - I have to do this "partial computation of g's inverse", and I need to explain how I can obtain information on g^{-1}(y) without knowing all the bits of y.
This is correct.
Yes.
The first definition - a particular set of indices in P_i that never varies - is the...
Thanks for the detailed reply - I'll need to study it in more depth tomorrow to reply properly. But I think I may be able to shed some light on one thing you said:
"Why have we reached a choice between two distributions that are related in this way?"
I'm afraid that it's due to the inner...
I've been trying to code an algorithm to compute the convolution of two probability distributions using the FFT. This relies on the "convolution theorem":
(p * q)[z] = FFT^{-1}(FFT(p) \cdot FFT(q))[z]
However, when I test it using the distributions
p={0.1, 0.2, 0.3, 0.4}
q={0.4, 0.3, 0.2, 0.1}...
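For what it's worth, here's a minimal sketch in Python/NumPy (the thread doesn't name a language; p and q are the test distributions above) of a common pitfall with this identity: applied at the inputs' own length, the FFT identity gives the *circular* convolution, and you have to zero-pad both inputs to length len(p) + len(q) - 1 to get the linear convolution.

```python
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])

# Applying the convolution theorem at the inputs' own length gives
# the circular convolution: terms that run off the end wrap around.
circular = np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(q)))

# Zero-padding both transforms to length len(p) + len(q) - 1 leaves
# no room for wrap-around, so the result is the linear convolution.
n = len(p) + len(q) - 1
linear = np.real(np.fft.ifft(np.fft.fft(p, n) * np.fft.fft(q, n)))

print(np.allclose(linear, np.convolve(p, q)))  # True
```

Note that both results still sum to 1 (convolution preserves total probability), so checking the sum alone won't catch the wrap-around.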
It's the size of the set - that is, it's the number of bits in it. So an (l_1)-bit value is a non-negative integer that can be expressed as a string of l_1 bits.
(So, for instance, the 8-bit values would be the integers 0 to 255.)
k_a is an l_{1}-bit integer value thus expressed. So if l_1 =...
> I suggest that you make another attempt at defining the problem.
I'll try to do so, and go into a bit more detail in the process. This is going to be quite long, and I'm not sure I can condense it into a tl;dr; it will also involve some binomial random variables, which are used to derive...
I've been doing some cryptographic research in which I need to be able to assign "scores" to vectors of floating-point values. Each of these floating-point values is a log-likelihood ratio LLR(\hat{q}, p, q) where:
\hat{q} is empirical data. The empirical data for element i in the vector is...
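In case it helps, here is one hedged reading in Python of how such an LLR might be computed. The exact definition of LLR(\hat{q}, p, q) is cut off above, so I'm assuming a binomial model in which \hat{q} is an observed success count out of N trials and p, q are the success probabilities under the two hypotheses; the function names, and the extra parameter N, are mine, not the thread's.

```python
import math

def llr(q_hat, N, p, q):
    # Assumed binomial model: log[ P(q_hat | q) / P(q_hat | p) ]
    # for Binomial(N, .) likelihoods; the binomial coefficient is
    # the same under both hypotheses and cancels in the ratio.
    return (q_hat * math.log(q / p)
            + (N - q_hat) * math.log((1 - q) / (1 - p)))

def score(llrs):
    # One natural score for a vector of such LLRs is their sum:
    # positive totals favour the q-hypothesis, negative favour p.
    return sum(llrs)

example = score([llr(60, 100, 0.5, 0.6), llr(40, 100, 0.5, 0.6)])
```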
"The precision of the argument function" error message, graph not plotted
I've got a function, integratedadvthirdaltb, that I'm trying to use in plotting some graphs:
thirdaltb[KP_, Ps_, C_, M_] :=
NSolve[Sqrt[2*M]*b +
InverseCDF[NormalDistribution[0, 1], Ps]*...
You state: "Ok, the data's distribution is a known parametric form, so it should be no problem to write down the pdf of the m-th largest data point."
Let me see if I'm doing this right so far:
The CDF of the m-th largest data point, where D is the number of data points, and where P(X ≤ x)...
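To sanity-check that line of reasoning: the m-th largest of D iid points is ≤ x exactly when at most m - 1 of them exceed x, and the number exceeding x is Binomial(D, 1 - F(x)). A small Python check (function names are mine, not from the thread):

```python
from math import comb

def cdf_mth_largest(Fx, D, m):
    # P(m-th largest of D iid points <= x), given the common CDF
    # value Fx = F(x): at most m - 1 of the D points may exceed x,
    # and the count exceeding x is Binomial(D, 1 - Fx).
    return sum(comb(D, j) * (1 - Fx) ** j * Fx ** (D - j)
               for j in range(m))

# Special case m = 1 (the maximum): P(max <= x) = F(x)^D.
print(abs(cdf_mth_largest(0.9, 10, 1) - 0.9 ** 10) < 1e-12)  # True
```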
(Apologies for neglecting my own thread for so long - I had something of a crisis of confidence with the research this forms a part of)
bpet, a randomly chosen X_i has a chi-squared distribution. I don't know whether this has a "parametric form or known tail asymptotics", or how these apply...
(This question was previously posted to sci.math.research. I only received one reply; sadly the advice therein conflicted with section 9.1 of H.A. David's "Order Statistics" - and probably with the fact that there was such a field of study as "r-extreme order statistics" - hence my reposting it...