Testing PRNG w/ Chi-Squared & Kolmogorov-Smirnov Tests

anachin6000
I was developing a physical RNG and trying to test it using the chi-squared test and the Kolmogorov-Smirnov test. The latter is giving me some problems.
I have attached a photo with a table. In the table, n is the number of generated numbers. The book says the test should be run for n = 1000. I've written a program that calculates the K statistics, and I'm supposed to compare them with values from the table. There is no row of values for n = 1000, but the table gives a formula for calculating values at larger n.
In that formula there is a -O(1/n) term that troubles me. I tried to recover the O(1/n) term by inverting the formula against the values in the table, but the result makes no sense to me.
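For reference, here is a minimal sketch (Python, with a pseudo-RNG standing in for my physical generator) of how the ##\sqrt{n}##-scaled one-sided statistics ##K_n^+## and ##K_n^-## are commonly computed. I'm assuming the attached table uses this scaling (as in Knuth's TAOCP); if it tabulates the unscaled ##D_n## instead, the ##\sqrt{n}## factor should be dropped.

[CODE=python]
import numpy as np

# Stand-in data: replace `samples` with the n = 1000 numbers from the
# generator under test, scaled to [0, 1) so the reference CDF is F(x) = x.
n = 1000
rng = np.random.default_rng(0)
samples = rng.random(n)

x = np.sort(samples)
i = np.arange(1, n + 1)

# One-sided Kolmogorov-Smirnov statistics, scaled by sqrt(n) as in the table
# (assumed convention -- see note above).
k_plus = np.sqrt(n) * np.max(i / n - x)          # largest deviation above F(x)
k_minus = np.sqrt(n) * np.max(x - (i - 1) / n)   # largest deviation below F(x)

print(f"K_n+ = {k_plus:.4f}   K_n- = {k_minus:.4f}")
[/CODE]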
 

Attachments

  • Screenshot (55).png (44 KB)
The values from the formula are off by something that is probably of the order of 0.001. Mathematically, O(1/n) doesn't guarantee that, but the error won't be significantly larger.
You can probably ignore it, or include it as a conservative correction in the direction that hurts you.
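To put numbers on that: if the table's formula is the usual asymptotic expansion ##y_p - \frac{1}{6\sqrt{n}} + O(1/n)## with ##y_p = \sqrt{\tfrac{1}{2}\ln\frac{1}{1-p}}## (an assumption on my part, since I can't read it off the attachment), a quick sketch of the critical values at n = 1000 looks like this, with the ignored term being roughly 1/n in size:

[CODE=python]
import numpy as np

def ks_critical(p, n):
    """Asymptotic percentage point for K_n^+ (or K_n^-), assuming the table's
    formula is  y_p - 1/(6*sqrt(n)) + O(1/n)  with
    y_p = sqrt(0.5 * ln(1/(1 - p)))."""
    y_p = np.sqrt(0.5 * np.log(1.0 / (1.0 - p)))
    return y_p - 1.0 / (6.0 * np.sqrt(n))

n = 1000
for p in (0.95, 0.99):
    approx = ks_critical(p, n)
    # The dropped O(1/n) term is of the order of 1/n = 0.001 here, which is
    # why it can usually be ignored (or folded in as a safety margin).
    print(f"p = {p}:  critical value ~ {approx:.4f}  (ignored term ~ {1.0 / n:.3f})")
[/CODE]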
 
Big O notation is used to indicate that a formula or equation is just an approximation, and the O(x) tells you an asymptotic bound on the size of the error in the approximation. Usually, you see this when the approximation is some kind of series expansion for a transcendental function, and the series has been cut off at some point.
e.g.
##\exp(x) = 1 + x + \frac{x^2}{2} + O(x^3)##
This approximation is valid for |x| << 1. When |x| < 1, higher powers like ##x^4, x^5## are smaller than ##x^3##, so we say the error bound is ##O(x^3)## and absorb all the smaller higher-order terms into the largest error. The big O notation ignores constant coefficients, because we don't care about that level of detail when working with asymptotic behavior. Really, we are only interested in something even coarser than an order of magnitude.
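As a quick numerical sanity check of that ##\exp(x)## example (my own toy numbers, not from the thread): halving x should shrink the truncation error by roughly a factor of ##2^3 = 8##, and the ratio error/##x^3## should settle near 1/6, the coefficient of the first dropped term.

[CODE=python]
import math

# Truncation error of exp(x) ~ 1 + x + x**2/2 for a few small x values.
for x in (0.1, 0.05, 0.025):
    approx = 1 + x + x**2 / 2
    err = abs(math.exp(x) - approx)
    print(f"x = {x:6.3f}   error = {err:.2e}   error / x**3 = {err / x**3:.3f}")
[/CODE]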

For |x| > 1, it isn't correct to absorb terms like ##x^4, x^5, \dots## into the error of the ##x^3## term, since these higher terms get bigger, not smaller, so it doesn't make sense to do a series approximation in terms of increasing powers. But for some functions you can do a series approximation in terms of decreasing powers (##1/x, 1/x^2, 1/x^3, \dots##). In that case you can write something like ##f(x) = g(x) + c/x + O(1/x^2)##, and all the higher-order terms get absorbed into the ##O(1/x^2)##.
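A standalone example of that second kind of expansion (my own, not the one in your table): for large ##x##,
##\sqrt{x^2 + 1} = x\sqrt{1 + 1/x^2} = x + \frac{1}{2x} - \frac{1}{8x^3} + \dots = x + \frac{1}{2x} + O\!\left(\frac{1}{x^3}\right),##
so truncating after the ##1/(2x)## term leaves an error that shrinks like ##1/x^3## as ##x## grows. The ##-O(1/n)## in your table plays the same role: it stands for correction terms that vanish as ##n## gets large.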

It sounds kind of wishy-washy, but there is a rigorous definition behind it.
 
Big O notation is also used in computer science to talk about the time or memory used by an algorithm as we scale up the problem size. Basically, we want algorithms that scale well, like O(n) or O(n log n), and not like O(2^n), which basically means we will never be able to run the algorithm for large n, no matter how fast our computers get.
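A quick illustration of why that distinction matters (toy numbers of my own):

[CODE=python]
import math

# How the growth rates diverge as the problem size n increases.
for n in (10, 20, 40):
    print(f"n = {n:3d}   n*log2(n) = {n * math.log2(n):8.1f}   2**n = {2**n}")
[/CODE]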
 