
Re: [tlug] /dev/random is truly random?



Nguyen Vu Hung writes:

 > > Essentially impossible for non-realtime applications.

 > This is an important point. My question is "how 'hard' is it?"

The relevant papers are presumably the ones referenced in the
comment.  You don't prove theorems in comments; the medium is not
adequate to express the math.

 > I meant that I want quantitative explanations like: the complexity
 > of the algorithm is such-and-such, the time it takes to do X is Y
 > seconds, etc.  I think the term "hard"

?? There is no known algorithm, except to search the whole space, one
key per probe, which is O(2^N) for a key with N bits.  So with a
2048-bit key, it will take 0.5*(X*2^2048/(1000000*60*60*24*365.25))
years on average, where X is the number of microseconds per probe.  If
my back-of-the-envelope estimate is correct, for X = 0.001, that's
just about 2^1990 years for one machine, which is a pretty long
time---and that's assuming that the probe takes about 4 instructions
at current clock speeds.  If you have a billion such machines, you can
reduce that period to only 2^1962 years.  That is the Wholly Grail[sic]
of cryptography, to reduce the attacker to a brute-force search that
on average will take longer than the expected life of the universe.
In cryptography, there is no other definition of "hard".
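
If you want to check that arithmetic, here's a quick Python sketch
(the probe time and machine count are just the assumptions above; it
lands at about 2^1992, which is back-of-the-envelope agreement):

    import math

    N = 2048             # key length in bits
    probe_us = 0.001     # assumed microseconds per probe (~4 cycles)
    machines = 1         # parallel searchers; try 10**9 as well

    us_per_year = 1e6 * 60 * 60 * 24 * 365.25

    # On average the search covers half the keyspace: 2^(N-1) probes.
    # Work in log2 space, since 2^1992 overflows a float.
    log2_years = ((N - 1) + math.log2(probe_us)
                  - math.log2(us_per_year * machines))
    print("about 2^%d years" % round(log2_years))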

Thus the practical definition of "hard" is "do you believe not only
that no efficient algorithm is known, but that none exists?"  That
belief can only be supported by understanding the proofs of the
theorems, or by faith in the claims of experts who purport to
understand the proofs of the theorems.  I choose to trust that Linus
knows enough to pick smart people to review contributions to Linux in
this area.

 > The deterministic seed is the timer; all other "noise" is non-deterministic.

That turns out not to be the case.  There are three deterministic
components of the seed: the saved state of the RNG, the uname, and
the system time.  All of these are potentially accessible to the
attacker even if he doesn't have access to the system at boot time.
Suppose he is the (former) admin who shut down the machine: he could
have stolen the saved RNG state, he knows the uname, and he can get
the startup time from uptime.
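
To make that concrete, here's a sketch of what an attacker can
reconstruct without ever touching the machine at boot (the
saved-state path is illustrative; distributions differ):

    import os, time

    # Two of the three deterministic inputs are effectively public:
    uname = " ".join(os.uname())      # known for any identified machine

    with open("/proc/uptime") as f:       # boot time: what `uptime`
        up = float(f.read().split()[0])   # tells any shell user
    boot_time = time.time() - up

    # The third, the saved RNG state (a file like
    # /var/lib/misc/random-seed), is the one our ex-admin copied on
    # his way out the door.
    print(uname)
    print("booted:", time.ctime(boot_time))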

So we *do* assume that the attacker can get that initial seed.  The
difficulty of getting that seed is *not* the source of security.

 > The author ensures security via the system startup time.  He
 > assumes that the attacker can't know the system startup time if he
 > doesn't have root (in that case, we are already doomed).

That's simply false.  Security is not ensured by the quality of the
seed; it's ensured by adding noise (truly random, to the best of my
knowledge) to the RNG state, and then using a cryptographic-quality
hash to ensure that if the attacker's guess at the state is off by
even 1 bit, any of the other 2^256 - 1 hash values is equally likely
to be the output of the RNG.
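
Here's a short demonstration of that avalanche property, with
SHA-256 standing in for whatever hash the kernel actually uses:

    import hashlib

    state = bytearray(64)             # toy "pool state"
    h1 = hashlib.sha256(state).digest()
    state[0] ^= 0x01                  # now off by exactly one bit
    h2 = hashlib.sha256(state).digest()

    flipped = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
    print("output bits changed: %d of 256" % flipped)  # ~128: no correlation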

 > # What if the attacker gets root, reboots the system, and tries to
 > do something more sophisticated with the RNG?

What about it?  Attacker has root.  Game over, no free resurrections.
An attacker with root can substitute a software RNG of his choice even
for /dev/hardware-rng, right?

 > I still don't agree with you that it is unpredictable, though I don't
 > have enough clues :D.

Well, here's the one you need.  The seed changes with every interrupt.
To predict future output of such an RNG, you need to measure the system
jitter to the precision that the system uses it, and with 100.00000000
0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000% accuracy.[1]
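
To get a feel for why that's hopeless, watch the jitter on your own
machine (a toy measurement, nowhere near the kernel's actual
interrupt sampling):

    import time

    # Even back-to-back reads of a nanosecond clock jitter
    # unpredictably in the low bits, and the kernel is sampling
    # interrupt timings the attacker can't observe at all.
    samples = [time.perf_counter_ns() for _ in range(10000)]
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    print("distinct gaps seen:", len(set(deltas)))
    print("low 4 bits of a few gaps:", [d & 0xF for d in deltas[:16]])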

The question here reduces to "can the attacker drain the entropy pool
so that the same seed is used for a sufficiently long sequence of
output, and the possible successor seeds are restricted to a 2^N set
for N small (say, < 64)?"  I.e., it's not so much about the quality
of the generator as about the quality of key management.

 > The algorithms Knuth introduces in his book are all deterministic.
 > The sequence of random numbers generated by such algorithms depends
 > on the seed.  That's why we have to add (non-deterministic) noise
 > to the seed.

If you're looking at the algorithms, you're missing a clue.  The point
about reading Knuth is to understand how such algorithms are analyzed,
at a very basic level.  The algorithms themselves are way weak by
modern standards.

Adding noise is not sufficient, either.  It needs to be done
correctly.  The point is to introduce (mathematical) chaos, so that
even if the attacker manages to sync with the generator at some point,
he will very quickly fall hopelessly out of sync if he misses one or
two bits.
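
A toy version of that desync, with hash chaining standing in for the
real pool mixing:

    import hashlib

    def step(state, noise):
        # One generator step: hash the new noise into the state.
        return hashlib.sha256(state + noise).digest()

    real = guess = b"\x00" * 32
    for i in range(100):
        real = step(real, bytes([i]))        # the true noise byte
        heard = i ^ 1 if i == 50 else i      # attacker misses one bit
        guess = step(guess, bytes([heard]))

    print("attacker still in sync:", real == guess)  # False, permanently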

Finally, remember the key distribution problem.  You have a bootstrap
issue here; even if you are using the quantum randomness device that
Mauro mentioned to generate a one-time pad (which is provably secure),
how do you securely communicate the key, which is of the same length
as the text?  So real cryptography becomes a question of using a
chaotic generator based on a "small" (e.g., 2048-bit) but truly random
key, and changing keys sufficiently frequently.
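
A minimal illustration of why the one-time pad doesn't get you off
the hook:

    import os

    # Provably secure, but the pad is exactly as long as the message,
    # so distributing it securely is the problem you started with.
    text = b"attack at dawn"
    pad = os.urandom(len(text))       # truly random, used exactly once
    cipher = bytes(t ^ k for t, k in zip(text, pad))
    assert bytes(c ^ k for c, k in zip(cipher, pad)) == text
    print(len(text), "byte message needs a", len(pad), "byte key")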

For example, WEP uses ARC4, but it is insufficiently secure.  Thus,
WPA was developed, which (surprise!) also uses ARC4 (AIUI, I haven't
actually read either standard---this is from Wikipedia).  The alleged
difference?  Proper management of keys.

So proper use of /dev/random is to generate keys, not for actual use
in encryption or whatever.
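
Which, in code, is about this simple:

    # Pull a 256-bit key from the kernel's pool, then hand it to a
    # real cipher for the bulk work.  (/dev/random may block until
    # enough entropy has accumulated.)
    with open("/dev/random", "rb") as f:
        key = f.read(32)
    print("key:", key.hex())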

Footnotes: 
[1]  To all BOFHs and ASR denizens.  Sorry about the brick ... not. :-)


