# Is there any way to measure how random something is?

Odd question that at first seemed to have an obvious answer: no.

It all started when I realized that my friend is horrible at shuffling cards. The cards fall in large packets, so the deck ends up in layers of unshuffled cards rather than being thoroughly mixed. Then I thought to myself that no matter how they are shuffled, they are still as random as if they had been shuffled 100 times... aren't they?

Is there any way to measure how "well shuffled" cards are, or more generally, how random something is?

You are correct that no matter what order the cards are in, there's no way to say whether they are random or not. However, there are tests designed to determine whether [pseudorandom number generators](http://en.wikipedia.org/wiki/Pseudorandom_number_generator) are producing decent results. It is possible for a truly random source to produce a sequence that fails these tests, but that would be extremely unlikely. It's also possible for a sequence to pass the tests and still have a complex underlying pattern.
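As a concrete illustration, here is a minimal sketch (in Python) of one of the simplest tests of this kind, the frequency or "monobit" test from the NIST suite: it just checks whether the proportion of 1s in a bit sequence is plausibly close to 1/2. The 0.01 threshold is a conventional choice, not something from this thread.

```python
import random
from math import erfc, sqrt

def monobit_test(bits):
    """Frequency (monobit) test: returns a p-value for the hypothesis
    that the bits came from a fair random source. Small p-values
    (e.g. < 0.01) suggest the sequence is non-random."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each 1, -1 for each 0
    return erfc(abs(s) / sqrt(2 * n))

random.seed(0)
good = [random.getrandbits(1) for _ in range(10_000)]   # PRNG output
bad = [1] * 10_000                                      # all ones: obviously patterned

print(monobit_test(good))   # large p-value: consistent with randomness
print(monobit_test(bad))    # essentially zero: fails the test
```

Note the caveats from the post above apply here too: a truly random source will occasionally fail this test, and a sequence with a subtle pattern (e.g. alternating 0101...) passes it easily, which is why real suites combine many different tests.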

It's been a while since I messed with this stuff, but Diehard is more or less the industry standard; I found some newer suites that also worked well. The problem is that these tests are generally intended for computer-based PRNGs, so they expect input in binary form and typically require millions of bits of data. A single deck of cards carries only about 226 bits, so your friend would need to do a lot of shuffling to produce enough data for those tests.
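A quick back-of-the-envelope calculation shows how much shuffling that would take. The 10-million-bit figure below is just an assumed order of magnitude for what such test suites want, not a number from this thread:

```python
import math

# Entropy of one fully shuffled deck: log2(52!)
bits_per_deck = math.log2(math.factorial(52))

bits_needed = 10_000_000                        # assumed rough data requirement
decks = math.ceil(bits_needed / bits_per_deck)  # decks' worth of shuffles needed

print(f"{bits_per_deck:.1f} bits per deck")     # about 225.6
print(f"{decks} shuffled decks needed")         # tens of thousands of shuffles
```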

I agree with what the other posters have said about random number tests, but if we look at what the OP is trying to quantify I think that Boltzmann Entropy would be a better measure.

Boltzmann entropy for a deck of cards D with respect to a particular game G could be defined as:

S = ln(the number of distinct decks D' equivalent to D for the purposes of a round of G)

From this definition it follows that the state with maximum entropy is the one where the arrangement of the cards is 'most typical' with respect to G, while states with low entropy represent an atypical shuffling of the deck. Here is an analysis from another page (http://www.cs.unm.edu/~saia/infotheory.html):

Now since all permutations have equal probability in a random deck of cards, the entropy of that deck is log2(52!) = 225.6 bits. When we shuffle a deck of cards, that shuffle has entropy equivalent to log2(52 choose 26) = 48.8 bits (we assume the deck is divided in half and a "riffle" shuffle is used). This means we should use a "riffle" shuffle 225.6/48.8 = 4.6, or about 5, times on average to assure complete randomness. This computation is relatively simple because the probabilities of all events are assumed to be equal.
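The arithmetic in that quoted analysis is easy to check directly:

```python
import math

deck_entropy = math.log2(math.factorial(52))    # log2(52!), full-deck entropy in bits
riffle_entropy = math.log2(math.comb(52, 26))   # log2(C(52,26)), entropy of one riffle
shuffles = deck_entropy / riffle_entropy        # riffles needed by this estimate

print(f"deck:   {deck_entropy:.1f} bits")       # about 225.6
print(f"riffle: {riffle_entropy:.1f} bits")     # about 48.8
print(f"ratio:  {shuffles:.1f} shuffles")       # about 4.6
```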

In my experience the problem with bad shuffling is that it produces card combinations similar to those in the previous round of the game, which is repetitive and therefore boring. It should be pretty simple to generate statistics about these kinds of repeats just by counting them, say something like: "Your shuffling has caused X incidents of repetition in the last Y rounds of the game, while with proper shuffling the probability of this would be negligible." Working out exactly how negligible would be a worthy homework problem in combinatorics.
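A minimal sketch of one such repetition statistic: count how many adjacent card pairs from the previous round's ordering survive into the new one. The pair-survival criterion is my own illustrative choice, not something specified above; for a genuinely random permutation, on average only about one pair survives.

```python
import random

def adjacent_pairs(deck):
    """Set of ordered adjacent pairs (card, next_card) in the deck."""
    return set(zip(deck, deck[1:]))

def repeat_count(before, after):
    """Number of adjacent pairs from the previous round that survive
    the shuffle -- a crude 'repetition' statistic."""
    return len(adjacent_pairs(before) & adjacent_pairs(after))

random.seed(1)
deck = list(range(52))

shuffled = deck[:]
random.shuffle(shuffled)              # thorough shuffle: almost no pairs survive
print(repeat_count(deck, shuffled))

cut = deck[20:] + deck[:20]           # lazy "cut the deck once" shuffle
print(repeat_count(deck, cut))        # 50 of the 51 pairs survive
```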
