Pseudo-Random Number Generation using Generative Adversarial Networks

(1810.00378)
Published Sep 30, 2018 in cs.LG and stat.ML

Abstract

Pseudo-random number generators (PRNGs) are a fundamental element of many security algorithms. We introduce a novel approach to their implementation by proposing the use of generative adversarial networks (GANs) to train a neural network to behave as a PRNG. Furthermore, we showcase a number of interesting modifications to the standard GAN architecture. The most significant is partially concealing the output of the GAN's generator and training the adversary to discover a mapping from the overt part to the concealed part. The generator therefore learns to produce values the adversary cannot predict, rather than to approximate an explicit reference distribution. We demonstrate that a GAN can effectively train even a small feed-forward, fully connected neural network to produce pseudo-random number sequences with good statistical properties. At its best, when subjected to the NIST test suite, the trained generator passed around 99% of test instances and 98% of overall tests, outperforming a number of standard non-cryptographic PRNGs.
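
The adversarial setup described in the abstract replaces the usual real/fake discriminator with a predictor: the generator's output is split into an overt part and a concealed part, the predictor is trained to recover the concealed part from the overt part, and the generator is trained to make that prediction fail. Below is a minimal PyTorch sketch of this idea; the network sizes, the seed-and-counter input scheme, the split point, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a "predictive" GAN for pseudo-random number generation.
# All layer sizes, names, and training details are illustrative assumptions.
import torch
import torch.nn as nn

OUT_LEN = 8          # values emitted by the generator per step (assumed)
SPLIT = OUT_LEN - 1  # overt part: first 7 values; concealed part: last value

class Generator(nn.Module):
    """Small feed-forward network mapping (seed, counter) -> pseudo-random values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 30), nn.LeakyReLU(),
            nn.Linear(30, 30), nn.LeakyReLU(),
            nn.Linear(30, OUT_LEN),
        )
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Adversary: tries to predict the concealed value from the overt ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPLIT, 30), nn.LeakyReLU(),
            nn.Linear(30, 1),
        )
    def forward(self, x):
        return self.net(x)

gen, pred = Generator(), Predictor()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(pred.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(1000):
    # Inputs: a random seed paired with an incrementing counter (assumed scheme).
    seed = torch.rand(64, 1)
    counter = torch.arange(64, dtype=torch.float32).unsqueeze(1)
    inp = torch.cat([seed, counter], dim=1)

    out = gen(inp)
    overt, concealed = out[:, :SPLIT], out[:, SPLIT:]

    # Predictor step: minimize the error in recovering the concealed value.
    loss_p = mse(pred(overt.detach()), concealed.detach())
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

    # Generator step: maximize the predictor's error, i.e. become unpredictable
    # rather than match an explicit reference distribution.
    loss_g = -mse(pred(overt), concealed)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this sketch the generator never sees a target distribution; its only training signal is the negated prediction loss, which is the property highlighted in the abstract.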
