How do you know if something is really random? When testing random number generators (RNGs) you quickly run into the "is that really random?" dilemma. As pattern-seeking creatures, we naturally look for patterns everywhere. And like many things, randomness varies in quality: most of the time, pseudo-random number generators (PRNGs) create "randomness" using entirely non-random, deterministic processes.
How do you really know if something is random? Is “111111” random? Or “0100100101001011110111010”? It’s hard to tell. Why? Because the moment you start to define the parameters of randomness you lose your randomness.
The best method scientists and programmers have come up with is a series of statistical tests, and there is a multitude of tests and test suites to choose from: the NIST RNG test suite, the Dieharder test suite, Knuth's tests, Crypt-X, and the list goes on. RNGs come in all shapes and sizes, so you can imagine how difficult it is to orchestrate testing for all the different types of random generators.
Testing a random number generator is a two-phase process. First you need a source of entropy that is practically impossible to guess, like the weather. Second you need a deterministic algorithm to expand that seed into a multitude of sequences for keys and the like. Testing usually starts with checking your entropy.
Preliminary RNG Testing
The first challenge you encounter when trying to define what is random, is that almost anything can be random.
As the examples above suggest, it is nearly impossible to prove conclusively that a sequence of numbers is random. A sequence of a million 0s could, in principle, be perfectly random, just not very desirable. It is easy to make judgements based on what logically looks random, but is that really random? Hard to tell. Fortunately there are a lot of tests that give you a pretty good estimate of randomness.
The Biased Test
The first thing you want to do is put your source of entropy through a bias test. As in the figure above, you don't want your output to be biased or skewed towards 0 or 1. To remedy this, the famous mathematician John von Neumann designed a skew-correction algorithm. The process is very simple, but also very effective.
It is modelled on a coin toss, since there are only two symbols, 1 and 0, and the overall output should contain roughly equal numbers of each. To keep this equal ratio, the skew-correction algorithm looks at every non-overlapping pair of tosses and compares them. If the two bits in the pair are identical (i.e., 00 or 11), both bits are discarded. If the two bits are different (i.e., 01 or 10), the first bit is added to the stream of processed bits.
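The pairing rule above is short enough to sketch in a few lines of Python (function and variable names here are my own, not from any standard library):

```python
def von_neumann_extract(bits):
    """Von Neumann skew correction: walk non-overlapping pairs of bits.
    Discard 00 and 11 pairs; for 01 or 10, keep the first bit."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # 01 -> 0, 10 -> 1
    return out

# A biased source still yields unbiased output (at the cost of discarding
# most of the input), because P(01) == P(10) regardless of the bias.
biased = [1, 1, 0, 1, 1, 0, 0, 1, 1, 1]
print(von_neumann_extract(biased))  # pairs 11, 01, 10, 01, 11 -> [0, 1, 0]
```

Note that the output is unbiased only if successive tosses are independent; correlation between neighbouring bits defeats the correction.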
The downside is that you lose a substantial amount of random data: the correction discards bits even when the source is already unskewed. But then again, this is just a basic check on the source of the entropy. The next step is to assess the deterministic random number generator that expands the seed into a multitude of numbers.
The Visual Test
One quick and easy way to gauge the quality of a random number generator is a visual test. Human brains are pretty good at recognizing patterns, and visuals give you a snapshot of general patterns you might miss while dredging through spreadsheets of numbers. The "random walk" is a commonly used technique for visualizing randomness and simulations. Plotting the generated bits as an image is another way to spot patterns you can't see by simply reading number sequences.
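A minimal random-walk sketch looks like this (I've left the actual plotting out to keep it dependency-free; the returned path can be fed to any charting library):

```python
import random

def random_walk(bits):
    """Map each bit to a step (1 -> +1, 0 -> -1) and accumulate the
    position. A good generator's walk wanders widely; a strong drift
    or a tight repeating shape hints at bias or periodicity."""
    pos, path = 0, []
    for b in bits:
        pos += 1 if b else -1
        path.append(pos)
    return path

bits = [random.getrandbits(1) for _ in range(1000)]
path = random_walk(bits)
# For an unbiased walk the final position is centred on 0 with a spread
# of roughly sqrt(n); a heavily biased source ends near +n or -n.
print(path[-1])
```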
The Information Entropy Test
Random number generators aim to make a large space of outcomes equally likely, and this is where information entropy comes into play. For a set of possible outcomes, information entropy measures the average unpredictability of those outcomes. The more equally likely values each bit of information can take, the more difficult it is to predict what the next bit will be based on the bits you have already seen.
The graph above shows a low information-entropy level in the data, as a percentage of the theoretical maximum. You don't want certain values to be more probable than others (say, "A" appearing with 50% probability) when what you really want is equal probability (see image below). In most cases a high level of information entropy indicates the data is random, and a low level indicates it isn't.
Information entropy is not just the average amount of information conveyed by an event, considering all possible outcomes; it is also a measure of how much information you can pack into each bit. You want to create an environment where all outcomes are equally probable.
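Shannon's formula for this, H = -sum(p * log2(p)) over the outcome probabilities p, is a few lines of Python:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p)),
    where p is the observed frequency of each symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("AAAA"))      # 0.0 -- completely predictable
print(shannon_entropy("ABAB"))      # 1.0 -- two equiprobable symbols
print(shannon_entropy("ABCDEFGH"))  # 3.0 -- eight equiprobable symbols
```

The maximum (1 bit per binary symbol, 8 bits per byte) is reached exactly when all outcomes are equally probable, which is why entropy close to the theoretical maximum is taken as a sign of good random data.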
Linux even has an error message telling you the computer cannot proceed until you hit a bunch of keys to create more entropy. The message described in the image above reflects one very common way programs collect entropy. The source of your entropy is not the only important factor: how much entropy you collect, and how wide-ranging the values are, matter as well. It all boils down to the same concept: a threshold of entropy is required in order to generate real randomness.
NIST Suite of Tests
There is an almost infinite number of statistical tests that you could run on an RNG. Each test focuses on detecting something that would not be expected from a truly random sequence of numbers. Since non-randomness means detecting patterns, and patterns can vary and differ in an infinite number of ways, no finite set of tests can certify randomness with certainty.
Frequency (Monobit) Test
When generating random bits of 0s and 1s, the ratio between them tends towards 1:1 for a truly random sequence. To check for this, we examine the stream of bits as blocks.
For each block, check whether the proportion of 1s is as close to 0.5 as you would expect from a truly random sequence. If the resulting statistic is too extreme, the ratio is off (too many 1s, or too many 0s). It's a pretty basic test, and if a generator fails it, it will probably fail many of the other tests as well.
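A sketch of the monobit calculation, following the NIST recipe of mapping bits to ±1, summing, and converting the normalised sum to a P-value with the complementary error function:

```python
import math

def monobit_test(bits):
    """NIST Frequency (Monobit) test: map bits to +/-1, sum, normalise
    by sqrt(n), and compute the P-value with erfc. P-values below the
    chosen significance level (commonly 0.01) indicate bias."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly balanced sequence gives P = 1.0; all ones gives P ~ 0.
print(monobit_test([0, 1] * 500))  # 1.0
print(monobit_test([1] * 1000))    # ~0.0
```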
Frequency (Block) Statistics
Just like the Frequency (Monobit) test, this test checks whether the numbers of 0s and 1s match what a truly random source would generate, but it works differently. The stream is taken as a series of blocks, and each block is divided into sub-blocks. Every sub-block is then analysed to see whether its ratio of 1s to 0s is close to what a truly random ratio is expected to look like.
And if you're asking yourself what a truly random number generator looks like, look no further: here is a true random number generator based on radioactive decay.
After the sub-blocks are analysed, a statistic is computed from the observed ratios, and if the resulting P-value falls below the significance level, the block fails the test. It's a way to micro-scan your bits for discrepancies. Remember that a good random number generator will also produce blocks that don't look random, so we expect some of the blocks to fail the test. (In fact, we should be suspicious if all blocks passed the test.)
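The statistic behind this test is a chi-square sum over the per-block proportions. Here is a stdlib-only sketch of that part (the final P-value step, igamc(N/2, chisq/2), needs an incomplete gamma function such as `scipy.special.gammaincc`, so I've left it out):

```python
def block_frequency_statistic(bits, M):
    """Chi-square statistic for the NIST Frequency Test within a Block:
    chisq = 4 * M * sum((pi_i - 0.5)^2) over the N = n // M full
    M-bit blocks, where pi_i is the proportion of ones in block i."""
    N = len(bits) // M
    chisq = 0.0
    for i in range(N):
        block = bits[i * M:(i + 1) * M]
        pi = sum(block) / M
        chisq += (pi - 0.5) ** 2
    return 4.0 * M * chisq

print(block_frequency_statistic([0, 1] * 50, 10))  # 0.0, every block balanced
print(block_frequency_statistic([1] * 100, 10))    # 100.0, every block all ones
```

Larger statistics map to smaller P-values, so a stream whose blocks are locally skewed fails even if its global 1:0 ratio looks fine.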
Testing for “Runs”
RNGs produce a stream of bits (1s and 0s). The Runs Test breaks the stream down into recognizable patterns and calculates whether it is similar to a truly random stream of bits. A run is an uninterrupted sequence of identical bits, like 0000 or 111111111.
The focus of this test is to total up the number of runs in the sequence. The test counts the transitions from 0→1 or 1→0; these delimit the "runs" in the sequence, and the run count is used to evaluate a P-value.
The P-value reflects how closely the observed number of runs matches what would be expected from a truly random sequence. A block fails the test when the P-value is too small, which means the number of runs in the block is noticeably fewer (or more) than a truly random number of runs.
This test needs the result of a Frequency test to calculate its initial variables, and the same is true for many other tests. There is no single RNG test that fits all, which is why most testing comes in suites: tests are coupled together to maximize randomness assurance.
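Put together, the Runs Test is short. This sketch follows the NIST formulas, including the frequency prerequisite that short-circuits the test when the stream is too biased:

```python
import math

def runs_test(bits):
    """NIST Runs test sketch: count bit transitions, then compare the
    observed run count against the expectation 2*n*pi*(1-pi), where
    pi is the proportion of ones (the Frequency-test prerequisite)."""
    n = len(bits)
    pi = sum(bits) / n
    # Prerequisite check: if the stream is too biased, the test is
    # not applicable and the P-value is reported as 0.
    if abs(pi - 0.5) >= 2 / math.sqrt(n):
        return 0.0
    v_obs = 1 + sum(bits[i] != bits[i + 1] for i in range(n - 1))
    num = abs(v_obs - 2 * n * pi * (1 - pi))
    den = 2 * math.sqrt(2 * n) * pi * (1 - pi)
    return math.erfc(num / den)

# Strict alternation 0101... has the maximum possible number of runs,
# which is itself non-random, so the P-value collapses toward 0.
print(runs_test([0, 1] * 500))
```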
Longest-Run-of-Ones in a Block Test
This test focuses on M-bit blocks. Building on the idea of runs above, it measures the length of the longest run of ones in each block and compares it to what you would expect from a truly random generator's longest run of ones.
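Extracting the per-block longest run is the mechanical part; the full NIST test then compares the distribution of these lengths against reference probabilities, which I've omitted here:

```python
def longest_run_of_ones(bits, M):
    """Longest run of consecutive ones within each full M-bit block.
    The NIST test buckets these lengths and chi-square-compares them
    against the distribution expected from a truly random source."""
    runs = []
    for i in range(0, len(bits) - M + 1, M):
        longest = current = 0
        for b in bits[i:i + M]:
            current = current + 1 if b else 0
            longest = max(longest, current)
        runs.append(longest)
    return runs

print(longest_run_of_ones([1, 1, 0, 1, 0, 0, 1, 1], 4))  # [2, 2]
```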
The Binary Matrix Rank Test
The Binary Matrix Rank test analyses patterns in matrices built from the stream of random bits. Let me break this down into more understandable terms: the purpose of the test is to find linear dependence among fixed-length segments of the stream of random bits, patterns you would otherwise not be able to see without matrices.
Linearly dependent rows are rows that can be produced by combining other rows: there is a linear pattern among them. The test checks this with binary matrices built from the stream, effectively mixing the random bits and seeing whether the matrix ranks deviate from what randomness predicts.
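The core computation is the rank of a 0/1 matrix over GF(2), i.e. with XOR as addition. A compact sketch, packing each row into an integer bitmask (the actual NIST test partitions the stream into 32×32 matrices and compares the rank distribution against theory):

```python
def gf2_rank(rows):
    """Rank of a binary matrix over GF(2). Each row is an int bitmask.
    A row that reduces to 0 against the kept pivots is a linear
    combination of earlier rows, i.e. it is linearly dependent."""
    pivots = []
    for row in rows:
        for p in pivots:
            # XOR with a pivot clears the row's leading bit when the
            # leading bits match; min() keeps the reduced form.
            row = min(row, row ^ p)
        if row:
            pivots.append(row)
    return len(pivots)

print(gf2_rank([0b100, 0b010, 0b001]))  # 3 -- independent rows
print(gf2_rank([0b110, 0b011, 0b101]))  # 2 -- third row is XOR of first two
```

For truly random bits, full-rank and rank-deficient matrices occur with known probabilities; a generator producing too many low-rank matrices has hidden linear structure.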
The Non-overlapping Template Matching Test
This test is interested in finding occurrences of pre-defined aperiodic patterns, that is, patterns that cannot overlap with shifted copies of themselves. In short, the test checks whether a given template shows up too often (or too rarely) in the stream, such as the one in the figure below:
The testing process slides an m-bit window over the stream, looking for the template. If the window doesn't match, it moves forward one bit and tries again. If the pattern is found, the search resumes at the bit after the found pattern.
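The sliding-window search described above can be sketched directly (the full NIST test then compares the match count per block against its theoretical mean and variance):

```python
def count_template_matches(bits, template):
    """Count non-overlapping occurrences of an m-bit template.
    On a match the window jumps past the template; otherwise it
    slides forward one bit, as the test description prescribes."""
    m, i, count = len(template), 0, 0
    while i <= len(bits) - m:
        if bits[i:i + m] == template:
            count += 1
            i += m  # resume after the found pattern
        else:
            i += 1  # slide the window by one bit
    return count

# The occurrence starting at index 2 overlaps the first match,
# so it is skipped and only one match is counted.
print(count_template_matches([1, 0, 1, 0, 1], [1, 0, 1]))  # 1
```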
Provably True Future RNG Tests
NIST outlines many more tests, 15 in fact, which you can read more about here. We only covered six of them, the ones this author found particularly interesting. RNGs, even true RNGs, are unlikely to pass all of the tests. In fact, passing every test would itself be a bit suspicious, since the NIST tests are designed to analyse outputs along multiple vectors, many of which overlap.
What is the next generation of these randomness tests? Enter Machine Learning (ML). Machine learning is changing almost everything we do nowadays. It learns what you like and when you need something; the extent to which it can penetrate our lives is still unknown. We at Unitychain believe that machine learning is perfectly suited for randomness analysis: after all, machine learning is all about detecting patterns, and a quality RNG should not have any obvious patterns.
This is why we are analyzing our morphing algorithms' RNG output with machine learning. The theory is that a quality RNG will not generate patterns, so the fewer anomalies or obvious patterns we are able to detect, the better our RNG is performing. It's too early to tell whether this approach will outperform the NIST tests, but if it does, you'll definitely be hearing a lot more about it from us.
We have more exciting news and updates about our RNGProtocol to publish in the near future. In the meantime, remember to…
“Expose yourself to as much randomness as possible.” – Ben Casnocha