Separate /dev/random pseudodevice implementation from kernel entropy pool

implementation.  Rewrite pseudodevice code to use cprng_strong(9).

The new pseudodevice is cloning, so each caller gets bits from a stream
generated with its own key.  Users of /dev/urandom get their generators
keyed on a "best effort" basis -- the kernel will rekey generators
whenever the entropy pool hits the high water mark -- while users of
/dev/random get their generators rekeyed every time key-length bits
are output.

The underlying cprng_strong API can use AES-256 or AES-128, but we use
AES-128 because of concerns about related-key attacks on AES-256.  This
improves performance (and reduces entropy pool depletion) significantly
for users of /dev/urandom but does cause users of /dev/random to rekey
twice as often.

Also fixes various bugs (including some missing locking and a reseed-counter
overflow in the CTR_DRBG code) found while testing this.

For long reads, this generator is approximately 20 times as fast as the
old generator (dd with bs=64K yields 53MB/sec on a 2GHz Core2 instead of
2.5MB/sec) and also uses a separate mutex per instance so concurrency
is greatly improved.  For reads of typical key sizes for modern
cryptosystems (16-32 bytes) performance is about the same as the old
code: a little better for 32 bytes, a little worse for 16 bytes.
This commit is contained in:
tls 2011-12-17 20:05:38 +00:00
parent 9a36cd1663
commit 6e1dd068e9
20 changed files with 1121 additions and 802 deletions

View File

@ -1,4 +1,4 @@
.\" $NetBSD: rnd.4,v 1.16 2010/03/22 18:58:31 joerg Exp $
.\" $NetBSD: rnd.4,v 1.17 2011/12/17 20:05:38 tls Exp $
.\"
.\" Copyright (c) 1997 Michael Graff
.\" All rights reserved.
@ -37,93 +37,133 @@
.Sh DESCRIPTION
The
.Nm
pseudo-device has three purposes. On read, it returns cryptographically
strong random data from a generator keyed from the kernel entropy pool.
On write, data may be added to the entropy pool. By ioctl, the behavior
of the entropy pool (which sources are used, how their entropy is
estimated, etc.) may be controlled.
.Pp
The kernel uses event timing information collected from many
devices, and mixes this into an entropy pool. This pool is used to
key a stream generator (the CTR_DRBG generator specified by NIST
SP 800-90) which is used to generate values returned to userspace when
the pseudo-device is read.
.Pp
The pseudodevice is cloning, which means that each time it is opened,
a new instance of the stream generator is created. Interposing a stream
generator between the entropy pool and readers in this manner protects
readers from each other (each reader's random stream is generated from a
unique key) and protects all users of the entropy pool from any attack
which might correlate its successive outputs to each other, such as
iterative guessing attacks.
.Sh USER ACCESS
User code can obtain random values from the kernel in two ways.
.Pp
Reading from
.Pa /dev/random
provides information-theoretic properties desirable for some callers:
it will guarantee that the stream generator never outputs more bits
than the length of its key, which may in some sense mean that all the
entropy provided to it by the entropy pool is "preserved" in its output.
.Pp
Reading from
.Pa /dev/random
may return
.Er EAGAIN
(for non-blocking reads), block, or return less data than requested, if
the pool does not have sufficient entropy
to provide a new key for the stream generator when sufficient bits have
been read to require rekeying.
.Pp
Reading from
.Pa /dev/urandom
will return as many values as requested. The stream generator may be
initially keyed from the entropy pool even if the pool's estimate of
its own entropy is less than the number of bits in the stream generator's
key. If this occurs, the generator will be rekeyed with fresh entropy
from the pool as soon as sufficient entropy becomes available. The
generator will also be rekeyed whenever the pool's entropy estimate
exceeds the size of the pool's internal state (when the pool "is full").
.Pp
In some sense, this data is not as good as reading from
.Pa /dev/random ,
for at least two reasons. First, the generator may initially be keyed
from a pool that has never had as many bits of entropy mixed into it as
there are bits in the generator's key. Second, the generator may produce
many more bits of output than are contained in its own key, though it
will never produce more output on one key than is allowed by the
CTR_DRBG specification.
.Pp
However, reading large amounts of data from a single opened instance of
.Pa /dev/urandom
will
.Em not
deplete the kernel entropy pool, as it would with some other
implementations. This preserves entropy for other callers and will
produce a more fair distribution of the available entropy over many
potential readers on the same system.
.Pp
Users of these interfaces must carefully consider their application's
actual security requirements and the characteristics of the system
on which they are reading from the pseudodevice. For many applications,
the depletion of the entropy pool caused by the
.Pa /dev/random
pseudodevice's continual rekeying of the stream generator will cause
application behavior (or, perhaps more precisely, nonbehavior) which
is less secure than relying on the
.Pa /dev/urandom
interface, which is guaranteed to rekey the stream generator as often
as it can.
.Pp
Excessive use of
.Pa /dev/random
can deplete the entropy pool (or, at least, its estimate of how many
bits of entropy it "contains") and reduce security for other consumers
of randomness both in userspace
.Em and within the kernel.
Some system administrators may wish therefore to remove the /dev/random
device node and replace it with a second copy of the node for the
nonblocking /dev/urandom device.
.Pp
In any event, as the Linux manual page notes, one should
be very suspicious of any application which attempts to read more than
32 bytes (256 bits) from the blocking
.Pa /dev/random
pseudodevice, since no practical cryptographic algorithm in current
use is believed to have a security strength greater than 256 bits.
.Pp
Writing to either device node will mix the data written into the
entropy pool, but will have no effect on the pool's entropy estimate.
The
.Xr ioctl 2
interface to the device may be used -- once only, and only when the
system is in insecure mode at security level 0 or lower -- to add
data with an explicit entropy estimate.
.Sh IOCTL INTERFACE
Various
.Xr ioctl 2
functions are available to control device behavior, gather statistics,
and add data to the entropy pool.
These are all defined in the
.In sys/rnd.h
file, along with the data types and constants. The structures and
ioctl functions are also listed below.
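For illustration, a hypothetical NetBSD-only userland program using one
of the ioctls listed below (RNDGETENTCNT; an untested sketch, assuming
the entropy count is reported in bits via a uint32_t argument):

```c
/* Hypothetical sketch: query the entropy pool's current estimate
 * via the RNDGETENTCNT ioctl from sys/rnd.h. NetBSD-specific. */
#include <sys/ioctl.h>
#include <sys/rnd.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint32_t bits;
	int fd = open("/dev/urandom", O_RDONLY);

	if (fd == -1 || ioctl(fd, RNDGETENTCNT, &bits) == -1) {
		perror("RNDGETENTCNT");
		return 1;
	}
	printf("pool entropy estimate: %u bits\n", bits);
	close(fd);
	return 0;
}
```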
.Sh DATA TYPES
Each source has a state structure which summarizes the kernel's state
for that entropy source.
.Bd -literal -offset indent
typedef struct {
        char name[16];
        uint32_t total;
        uint32_t type;
        uint32_t flags;
} rndsource_t;
.Ed
.Pp
The
.Va name
field holds the device name, as known to the kernel.
The
.Va type
field holds the device type.
@ -152,14 +192,6 @@ Do not assume any entropy is in the timing information.
.It Dv RND_FLAG_NO_COLLECT
Do not even add timing information to the pool.
.El
.Sh IOCTL
Various
.Xr ioctl 2
functions are available to control device behavior, gather statistics,
and add data to the entropy pool.
These are all defined in the
.In sys/rnd.h
file, along with the data types and constants.
.Pp
.Bl -tag -width RNDADDTOENTCNT
.It Dv RNDGETENTCNT
@ -248,9 +280,9 @@ are to be set or cleared.
.Pq Li "rnddata_t"
.Bd -literal -offset indent
typedef struct {
        uint32_t len;
        uint32_t entropy;
        u_char data[RND_SAVEWORDS * sizeof(uint32_t)];
} rnddata_t;
.Ed
.El
@ -259,7 +291,7 @@ typedef struct {
.It Pa /dev/random
Returns ``good'' values only
.It Pa /dev/urandom
Always returns data.
.El
.Sh SEE ALSO
.Xr rndctl 8 ,
@ -268,6 +300,6 @@ Always returns data, degenerates to a pseudo-random generator
The random device was first made available in
.Nx 1.3 .
.Sh AUTHORS
This implementation was written by Thor Lancelot Simon. It retains
some code (particularly for the ioctl interface) from the earlier
implementation by Michael Graff \*[Lt]explorer@flame.org\*[Gt].

View File

@ -1,4 +1,4 @@
.\" $NetBSD: cprng.9,v 1.3 2011/11/28 23:29:45 wiz Exp $
.\" $NetBSD: cprng.9,v 1.4 2011/12/17 20:05:38 tls Exp $
.\"
.\" Copyright (c) 2011 The NetBSD Foundation, Inc.
.\" All rights reserved.
@ -38,6 +38,7 @@
.Nm cprng_strong64 ,
.Nm cprng_strong_getflags ,
.Nm cprng_strong_setflags ,
.Nm cprng_strong_ready ,
.Nm cprng_strong_destroy ,
.Nm cprng_fast ,
.Nm cprng_fast32 ,
@ -50,7 +51,7 @@
.Ft void
.Fn cprng_strong_destroy "cprng_strong_t *cprng"
.Ft size_t
.Fn cprng_strong "cprng_strong_t *const cprng" "void *buf" "size_t len"
.Fn cprng_strong "cprng_strong_t *const cprng" "void *buf" "size_t len" "int blocking"
.Ft size_t
.Fn cprng_fast "void *buf" "size_t len"
.Ft uint32_t
@ -69,13 +70,14 @@
#define CPRNG_MAX_LEN 524288
typedef struct _cprng_strong {
        kmutex_t mtx;
        kcondvar_t cv;
        struct selinfo selq;
        NIST_CTR_DRBG drbg;
        int flags;
        char name[16];
        int reseed_pending;
        rndsink_t reseed;
} cprng_strong_t;
.Ed
.Sh DESCRIPTION
@ -170,6 +172,8 @@ Perform a
.Xr cv_broadcast 9
operation on the "cv" member of the returned cprng_strong_t each time
the generator is successfully rekeyed.
.Em If this flag is set, the generator will sleep when rekeying is needed,
.Em and will therefore always return the requested number of bytes.
.El
.Pp
Creation will succeed even if key material for the generator is not
@ -179,7 +183,7 @@ may cause rekeying.
.It Fn cprng_strong_destroy "cprng"
.Pp
Destroy an instance of the cprng_strong generator.
.It Fn cprng_strong "cprng" "buf" "len"
.It Fn cprng_strong "cprng" "buf" "len" "blocking"
.Pp
Fill memory location
.Fa buf
@ -187,9 +191,13 @@ with
.Fa len
bytes from the generator
.Fa cprng .
The
.Fa blocking
argument controls the blocking/non-blocking behavior of the
generator: if it is set to FNONBLOCK, the generator may return
less than
.Fa len
bytes if it requires rekeying.
If the
.Dv CPRNG_USE_CV
flag is set on the generator, the caller can wait on
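A minimal in-kernel usage sketch, assuming the signatures shown in the
SYNOPSIS above (illustrative only, not a tested driver; the function
name is invented):

```c
/* Sketch: draw key material from a private cprng_strong(9) instance,
 * tolerating a short return when FNONBLOCK is passed. */
static size_t
example_get_key(cprng_strong_t *rng, uint8_t *key, size_t keylen)
{
	/* With FNONBLOCK, fewer than keylen bytes may be returned if
	 * the generator needs rekeying and the entropy pool cannot
	 * supply a new key right now; the caller must cope. */
	return cprng_strong(rng, key, keylen, FNONBLOCK);
}
```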

View File

@ -1,4 +1,4 @@
.\" $NetBSD: rnd.9,v 1.18 2011/11/28 20:19:28 tls Exp $
.\" $NetBSD: rnd.9,v 1.19 2011/12/17 20:05:38 tls Exp $
.\"
.\" Copyright (c) 1997 The NetBSD Foundation, Inc.
.\" All rights reserved.
@ -51,12 +51,16 @@
These
.Nm
functions make a device available for entropy collection for
the kernel entropy pool, which provides key material for the
.Xr cprng 9
and
.Xr rnd 4
.Pa (/dev/random) interfaces.
.Pp
Ideally the first argument
.Fa rnd_source
of these functions gets included in the devices' entity struct,
but any means to permanently (statically) attach one such argument
to one incarnation of the device is ok.
Do not share
.Fa rnd_source
@ -167,6 +171,41 @@ for
.Va rnd_source
is permitted, and the device does not need to be attached.
.El
.Sh INTERNAL ENTROPY POOL MANAGEMENT
When a hardware event occurs (such as completion of a hard drive
transfer or an interrupt from a network device) a timestamp is
generated.
This timestamp is compared to the previous timestamp
recorded for the device, and the first, second, and third order
differentials are calculated.
.Pp
If any of these differentials is zero, no entropy is assumed to
have been gathered.
If all are non-zero, one bit is assumed.
Next, data is mixed into the entropy pool using an LFSR (linear
feedback shift register).
.Pp
To extract data from the entropy pool, a cryptographically strong hash
function is used.
The output of this hash is mixed back into the pool using the LFSR,
and then folded in half before being returned to the caller.
.Pp
Mixing the actual hash into the pool causes the next extraction to
return a different value, even if no timing events were added to the
pool.
Folding the data in half prevents the caller from deriving the
actual hash of the pool, defeating some attacks.
.Pp
In the NetBSD kernel, values should be extracted from the entropy
pool
.Em only
via the
.Xr cprng 9
interface. Direct access to the entropy pool is unsupported and
may be dangerous. There is no supported API for direct access to
the output of the entropy pool.
.Pp
.\" .Sh ERRORS
.Sh FILES
These functions are declared in src/sys/sys/rnd.h and defined in

View File

@ -1,4 +1,4 @@
# $NetBSD: files,v 1.1032 2011/11/19 22:51:21 tls Exp $
# $NetBSD: files,v 1.1033 2011/12/17 20:05:38 tls Exp $
# @(#)files.newconf 7.5 (Berkeley) 5/10/93
version 20100430
@ -1439,8 +1439,9 @@ file dev/mm.c
file dev/mulaw.c mulaw needs-flag
file dev/nullcons_subr.c nullcons needs-flag
file dev/radio.c radio needs-flag
file dev/rnd.c rnd needs-flag
file dev/rndpool.c rnd needs-flag
file dev/rnd.c
file dev/rndpool.c
file dev/rndpseudo.c rnd needs-flag
file dev/sequencer.c sequencer needs-flag
file dev/video.c video needs-flag
file dev/vnd.c vnd needs-flag

View File

@ -1,4 +1,4 @@
/* $NetBSD: nist_ctr_drbg_aes128.h,v 1.1 2011/11/19 22:51:22 tls Exp $ */
/* $NetBSD: nist_ctr_drbg_aes128.h,v 1.2 2011/12/17 20:05:38 tls Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
@ -73,8 +73,8 @@ typedef NIST_AES_ENCRYPT_CTX NIST_Key;
* 10.2 DRBG Mechanism Based on Block Ciphers
*
* Table 3 specifies the reseed interval as
* <= 2^48. We use 2^32 so we can always be sure it'll fit in an int.
* <= 2^48. We use 2^31 so we can always be sure it'll fit in an int.
*/
#define NIST_CTR_DRBG_RESEED_INTERVAL (0xffffffffU)
#define NIST_CTR_DRBG_RESEED_INTERVAL (0x7fffffffU)
#endif /* NIST_CTR_DRBG_AES128_H */

View File

@ -1,4 +1,4 @@
/* $NetBSD: nist_ctr_drbg_aes256.h,v 1.1 2011/11/19 22:51:22 tls Exp $ */
/* $NetBSD: nist_ctr_drbg_aes256.h,v 1.2 2011/12/17 20:05:38 tls Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
@ -73,8 +73,8 @@ typedef NIST_AES_ENCRYPT_CTX NIST_Key;
* 10.2 DRBG Mechanism Based on Block Ciphers
*
* Table 3 specifies the reseed interval as
* <= 2^48. We use 2^32 so we can always be sure it'll fit in an int.
* <= 2^48. We use 2^31 so we can always be sure it'll fit in an int.
*/
#define NIST_CTR_DRBG_RESEED_INTERVAL (0xffffffffU)
#define NIST_CTR_DRBG_RESEED_INTERVAL (0x7fffffffU)
#endif /* NIST_CTR_DRBG_AES256_H */

View File

@ -1,4 +1,4 @@
/* $NetBSD: iscsi_text.c,v 1.2 2011/11/29 03:50:31 tls Exp $ */
/* $NetBSD: iscsi_text.c,v 1.3 2011/12/17 20:05:39 tls Exp $ */
/*-
* Copyright (c) 2005,2006,2011 The NetBSD Foundation, Inc.
@ -1459,7 +1459,7 @@ assemble_security_parameters(connection_t *conn, ccb_t *ccb, pdu_t *rx_pdu,
cprng_strong(kern_cprng,
&state->temp_buf[CHAP_MD5_SIZE],
CHAP_CHALLENGE_LEN + 1);
CHAP_CHALLENGE_LEN + 1, 0);
set_key_n(state, K_Auth_CHAP_Identifier,
state->temp_buf[CHAP_MD5_SIZE]);
cpar = set_key_s(state, K_Auth_CHAP_Challenge,

View File

@ -1,4 +1,4 @@
/* $NetBSD: rnd.c,v 1.88 2011/11/29 03:50:31 tls Exp $ */
/* $NetBSD: rnd.c,v 1.89 2011/12/17 20:05:38 tls Exp $ */
/*-
* Copyright (c) 1997-2011 The NetBSD Foundation, Inc.
@ -32,7 +32,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: rnd.c,v 1.88 2011/11/29 03:50:31 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: rnd.c,v 1.89 2011/12/17 20:05:38 tls Exp $");
#include <sys/param.h>
#include <sys/ioctl.h>
@ -117,15 +117,10 @@ kmutex_t rnd_mtx;
*/
TAILQ_HEAD(, rndsink) rnd_sinks;
/*
* our select/poll queue
*/
struct selinfo rnd_selq;
/*
* Memory pool for sample buffers
*/
static struct pool rnd_mempool;
static pool_cache_t rnd_mempc;
/*
* Our random pool. This is defined here rather than using the general
@ -159,34 +154,25 @@ static krndsource_t rnd_source_no_collect = {
struct callout rnd_callout;
void rndattach(int);
dev_type_open(rndopen);
dev_type_read(rndread);
dev_type_write(rndwrite);
dev_type_ioctl(rndioctl);
dev_type_poll(rndpoll);
dev_type_kqfilter(rndkqfilter);
const struct cdevsw rnd_cdevsw = {
rndopen, nullclose, rndread, rndwrite, rndioctl,
nostop, notty, rndpoll, nommap, rndkqfilter, D_OTHER|D_MPSAFE,
};
static inline void rnd_wakeup_readers(void);
void rnd_wakeup_readers(void);
static inline u_int32_t rnd_estimate_entropy(krndsource_t *, u_int32_t);
static inline u_int32_t rnd_counter(void);
static void rnd_timeout(void *);
static void rnd_process_events(void *);
static u_int32_t rnd_extract_data_locked(void *, u_int32_t, u_int32_t);
u_int32_t rnd_extract_data_locked(void *, u_int32_t, u_int32_t); /* XXX */
static int rnd_ready = 0;
int rnd_ready = 0;
static int rnd_have_entropy = 0;
#ifdef DIAGNOSTIC
static int rnd_tested = 0;
static rngtest_t rnd_rt;
static uint8_t rnd_testbits[sizeof(rnd_rt.rt_b)];
#endif
LIST_HEAD(, krndsource) rnd_sources;
static rndsave_t *boot_rsp;
rndsave_t *boot_rsp;
/*
* Generate a 32-bit counter. This should be more machine dependent,
* using cycle counters and the like when possible.
@ -211,7 +197,7 @@ rnd_counter(void)
/*
* Check to see if there are readers waiting on us. If so, kick them.
*/
static inline void
void
rnd_wakeup_readers(void)
{
rndsink_t *sink, *tsink;
@ -252,9 +238,7 @@ rnd_wakeup_readers(void)
rndpool_get_entropy_count(&rnd_pool));
#endif
rnd_have_entropy = 1;
cv_broadcast(&rndpool_cv);
mutex_spin_exit(&rndpool_mtx);
selnotify(&rnd_selq, 0, 0);
} else {
mutex_spin_exit(&rndpool_mtx);
}
@ -324,39 +308,6 @@ rnd_estimate_entropy(krndsource_t *rs, u_int32_t t)
return (1);
}
static int
rnd_mempool_init(void)
{
pool_init(&rnd_mempool, sizeof(rnd_sample_t), 0, 0, 0, "rndsample",
NULL, IPL_VM);
return 0;
}
static ONCE_DECL(rnd_mempoolinit_ctrl);
/*
* "Attach" the random device. This is an (almost) empty stub, since
* pseudo-devices don't get attached until after config, after the
* entropy sources will attach. We just use the timing of this event
* as another potential source of initial entropy.
*/
void
rndattach(int num)
{
u_int32_t c;
RUN_ONCE(&rnd_mempoolinit_ctrl, rnd_mempool_init);
/* Trap unwary players who don't call rnd_init() early */
KASSERT(rnd_ready);
/* mix in another counter */
c = rnd_counter();
mutex_spin_enter(&rndpool_mtx);
rndpool_add_data(&rnd_pool, &c, sizeof(u_int32_t), 1);
mutex_spin_exit(&rndpool_mtx);
}
/*
* initialize the global random pool for our use.
* rnd_init() must be called very early on in the boot process, so
@ -384,12 +335,14 @@ rnd_init(void)
LIST_INIT(&rnd_sources);
SIMPLEQ_INIT(&rnd_samples);
TAILQ_INIT(&rnd_sinks);
selinit(&rnd_selq);
rndpool_init(&rnd_pool);
mutex_init(&rndpool_mtx, MUTEX_DEFAULT, IPL_VM);
cv_init(&rndpool_cv, "rndread");
rnd_mempc = pool_cache_init(sizeof(rnd_sample_t), 0, 0, 0,
"rndsample", NULL, IPL_VM,
NULL, NULL, NULL);
/* Mix *something*, *anything* into the pool to help it get started.
* However, it's not safe for rnd_counter() to call microtime() yet,
* so on some platforms we might just end up with zeros anyway.
@ -428,496 +381,12 @@ rnd_init(void)
}
}
int
rndopen(dev_t dev, int flags, int ifmt,
struct lwp *l)
{
if (rnd_ready == 0)
return (ENXIO);
if (minor(dev) == RND_DEV_URANDOM || minor(dev) == RND_DEV_RANDOM)
return (0);
return (ENXIO);
}
int
rndread(dev_t dev, struct uio *uio, int ioflag)
{
u_int8_t *bf;
u_int32_t entcnt, mode, n, nread;
int ret;
DPRINTF(RND_DEBUG_READ,
("Random: Read of %zu requested, flags 0x%08x\n",
uio->uio_resid, ioflag));
if (uio->uio_resid == 0)
return (0);
switch (minor(dev)) {
case RND_DEV_RANDOM:
mode = RND_EXTRACT_GOOD;
break;
case RND_DEV_URANDOM:
mode = RND_EXTRACT_ANY;
break;
default:
/* Can't happen, but this is cheap */
return (ENXIO);
}
ret = 0;
bf = kmem_alloc(RND_TEMP_BUFFER_SIZE, KM_SLEEP);
while (uio->uio_resid > 0) {
n = min(RND_TEMP_BUFFER_SIZE, uio->uio_resid);
/*
* Make certain there is data available. If there
* is, do the I/O even if it is partial. If not,
* sleep unless the user has requested non-blocking
* I/O.
*
* If not requesting strong randomness, we can always read.
*/
mutex_spin_enter(&rndpool_mtx);
if (mode != RND_EXTRACT_ANY) {
for (;;) {
/*
* How much entropy do we have?
* If it is enough for one hash, we can read.
*/
entcnt = rndpool_get_entropy_count(&rnd_pool);
if (entcnt >= RND_ENTROPY_THRESHOLD * 8)
break;
/*
* Data is not available.
*/
if (ioflag & IO_NDELAY) {
mutex_spin_exit(&rndpool_mtx);
ret = EWOULDBLOCK;
goto out;
}
ret = cv_wait_sig(&rndpool_cv, &rndpool_mtx);
if (ret) {
mutex_spin_exit(&rndpool_mtx);
goto out;
}
}
}
nread = rnd_extract_data_locked(bf, n, mode);
mutex_spin_exit(&rndpool_mtx);
/*
* Copy (possibly partial) data to the user.
* If an error occurs, or this is a partial
* read, bail out.
*/
ret = uiomove((void *)bf, nread, uio);
if (ret != 0 || nread != n)
goto out;
}
out:
kmem_free(bf, RND_TEMP_BUFFER_SIZE);
return (ret);
}
int
rndwrite(dev_t dev, struct uio *uio, int ioflag)
{
u_int8_t *bf;
int n, ret = 0, estimate_ok = 0, estimate = 0, added = 0;
ret = kauth_authorize_device(curlwp->l_cred,
KAUTH_DEVICE_RND_ADDDATA, NULL, NULL, NULL, NULL);
if (ret) {
return (ret);
}
estimate_ok = !kauth_authorize_device(curlwp->l_cred,
KAUTH_DEVICE_RND_ADDDATA_ESTIMATE, NULL, NULL, NULL, NULL);
DPRINTF(RND_DEBUG_WRITE,
("Random: Write of %zu requested\n", uio->uio_resid));
if (uio->uio_resid == 0)
return (0);
ret = 0;
bf = kmem_alloc(RND_TEMP_BUFFER_SIZE, KM_SLEEP);
while (uio->uio_resid > 0) {
/*
* Don't flood the pool.
*/
if (added > RND_POOLWORDS * sizeof(int)) {
printf("rnd: added %d already, adding no more.\n",
added);
break;
}
n = min(RND_TEMP_BUFFER_SIZE, uio->uio_resid);
ret = uiomove((void *)bf, n, uio);
if (ret != 0)
break;
if (estimate_ok) {
/*
* Don't cause samples to be discarded by taking
* the pool's entropy estimate to the max.
*/
if (added > RND_POOLWORDS / 2)
estimate = 0;
else
estimate = n * NBBY / 2;
printf("rnd: adding on write, %d bytes, estimate %d\n",
n, estimate);
} else {
printf("rnd: kauth says no entropy.\n");
}
/*
* Mix in the bytes.
*/
mutex_spin_enter(&rndpool_mtx);
rndpool_add_data(&rnd_pool, bf, n, estimate);
mutex_spin_exit(&rndpool_mtx);
added += n;
DPRINTF(RND_DEBUG_WRITE, ("Random: Copied in %d bytes\n", n));
}
kmem_free(bf, RND_TEMP_BUFFER_SIZE);
return (ret);
}
static void
krndsource_to_rndsource(krndsource_t *kr, rndsource_t *r)
{
memset(r, 0, sizeof(*r));
strlcpy(r->name, kr->name, sizeof(r->name));
r->total = kr->total;
r->type = kr->type;
r->flags = kr->flags;
}
int
rndioctl(dev_t dev, u_long cmd, void *addr, int flag,
struct lwp *l)
{
krndsource_t *kr;
rndstat_t *rst;
rndstat_name_t *rstnm;
rndctl_t *rctl;
rnddata_t *rnddata;
u_int32_t count, start;
int ret = 0;
int estimate_ok = 0, estimate = 0;
switch (cmd) {
case FIONBIO:
case FIOASYNC:
case RNDGETENTCNT:
break;
case RNDGETPOOLSTAT:
case RNDGETSRCNUM:
case RNDGETSRCNAME:
ret = kauth_authorize_device(l->l_cred,
KAUTH_DEVICE_RND_GETPRIV, NULL, NULL, NULL, NULL);
if (ret)
return (ret);
break;
case RNDCTL:
ret = kauth_authorize_device(l->l_cred,
KAUTH_DEVICE_RND_SETPRIV, NULL, NULL, NULL, NULL);
if (ret)
return (ret);
break;
case RNDADDDATA:
ret = kauth_authorize_device(l->l_cred,
KAUTH_DEVICE_RND_ADDDATA, NULL, NULL, NULL, NULL);
if (ret)
return (ret);
estimate_ok = !kauth_authorize_device(l->l_cred,
KAUTH_DEVICE_RND_ADDDATA_ESTIMATE, NULL, NULL, NULL, NULL);
break;
default:
return (EINVAL);
}
switch (cmd) {
/*
* Handled in upper layer really, but we have to return zero
* for it to be accepted by the upper layer.
*/
case FIONBIO:
case FIOASYNC:
break;
case RNDGETENTCNT:
mutex_spin_enter(&rndpool_mtx);
*(u_int32_t *)addr = rndpool_get_entropy_count(&rnd_pool);
mutex_spin_exit(&rndpool_mtx);
break;
case RNDGETPOOLSTAT:
mutex_spin_enter(&rndpool_mtx);
rndpool_get_stats(&rnd_pool, addr, sizeof(rndpoolstat_t));
mutex_spin_exit(&rndpool_mtx);
break;
case RNDGETSRCNUM:
rst = (rndstat_t *)addr;
if (rst->count == 0)
break;
if (rst->count > RND_MAXSTATCOUNT)
return (EINVAL);
/*
* Find the starting source by running through the
* list of sources.
*/
kr = rnd_sources.lh_first;
start = rst->start;
while (kr != NULL && start >= 1) {
kr = kr->list.le_next;
start--;
}
/*
* Return up to as many structures as the user asked
* for. If we run out of sources, a count of zero
* will be returned, without an error.
*/
for (count = 0; count < rst->count && kr != NULL; count++) {
krndsource_to_rndsource(kr, &rst->source[count]);
kr = kr->list.le_next;
}
rst->count = count;
break;
case RNDGETSRCNAME:
/*
* Scan through the list, trying to find the name.
*/
rstnm = (rndstat_name_t *)addr;
kr = rnd_sources.lh_first;
while (kr != NULL) {
if (strncmp(kr->name, rstnm->name,
MIN(sizeof(kr->name),
sizeof(*rstnm))) == 0) {
krndsource_to_rndsource(kr, &rstnm->source);
return (0);
}
kr = kr->list.le_next;
}
ret = ENOENT; /* name not found */
break;
case RNDCTL:
/*
* Set flags to enable/disable entropy counting and/or
* collection.
*/
rctl = (rndctl_t *)addr;
kr = rnd_sources.lh_first;
/*
* Flags set apply to all sources of this type.
*/
if (rctl->type != 0xff) {
while (kr != NULL) {
if (kr->type == rctl->type) {
kr->flags &= ~rctl->mask;
kr->flags |=
(rctl->flags & rctl->mask);
}
kr = kr->list.le_next;
}
return (0);
}
/*
* scan through the list, trying to find the name
*/
while (kr != NULL) {
if (strncmp(kr->name, rctl->name,
MIN(sizeof(kr->name),
sizeof(rctl->name))) == 0) {
kr->flags &= ~rctl->mask;
kr->flags |= (rctl->flags & rctl->mask);
return (0);
}
kr = kr->list.le_next;
}
ret = ENOENT; /* name not found */
break;
case RNDADDDATA:
/*
* Don't seed twice if our bootloader has
* seed loading support.
*/
if (!boot_rsp) {
rnddata = (rnddata_t *)addr;
if (rnddata->len > sizeof(rnddata->data))
return EINVAL;
if (estimate_ok) {
/*
* Do not accept absurd entropy estimates, and
* do not flood the pool with entropy such that
* new samples are discarded henceforth.
*/
estimate = MIN((rnddata->len * NBBY) / 2,
MIN(rnddata->entropy,
RND_POOLBITS / 2));
} else {
estimate = 0;
}
mutex_spin_enter(&rndpool_mtx);
rndpool_add_data(&rnd_pool, rnddata->data,
rnddata->len, estimate);
mutex_spin_exit(&rndpool_mtx);
rnd_wakeup_readers();
}
#ifdef RND_VERBOSE
else {
printf("rnd: already seeded by boot loader\n");
}
#endif
break;
default:
return (EINVAL);
}
return (ret);
}
int
rndpoll(dev_t dev, int events, struct lwp *l)
{
u_int32_t entcnt;
int revents;
/*
* We are always writable.
*/
revents = events & (POLLOUT | POLLWRNORM);
/*
* Save some work if not checking for reads.
*/
if ((events & (POLLIN | POLLRDNORM)) == 0)
return (revents);
/*
* If the minor device is not /dev/random, we are always readable.
*/
if (minor(dev) != RND_DEV_RANDOM) {
revents |= events & (POLLIN | POLLRDNORM);
return (revents);
}
/*
* Make certain we have enough entropy to be readable.
*/
mutex_spin_enter(&rndpool_mtx);
entcnt = rndpool_get_entropy_count(&rnd_pool);
mutex_spin_exit(&rndpool_mtx);
if (entcnt >= RND_ENTROPY_THRESHOLD * 8)
revents |= events & (POLLIN | POLLRDNORM);
else
selrecord(l, &rnd_selq);
return (revents);
}
static void
filt_rnddetach(struct knote *kn)
{
mutex_spin_enter(&rndpool_mtx);
SLIST_REMOVE(&rnd_selq.sel_klist, kn, knote, kn_selnext);
mutex_spin_exit(&rndpool_mtx);
}
static int
filt_rndread(struct knote *kn, long hint)
{
uint32_t entcnt;
mutex_spin_enter(&rndpool_mtx);
entcnt = rndpool_get_entropy_count(&rnd_pool);
mutex_spin_exit(&rndpool_mtx);
if (entcnt >= RND_ENTROPY_THRESHOLD * 8) {
kn->kn_data = RND_TEMP_BUFFER_SIZE;
return (1);
}
return (0);
}
static const struct filterops rnd_seltrue_filtops =
{ 1, NULL, filt_rnddetach, filt_seltrue };
static const struct filterops rndread_filtops =
{ 1, NULL, filt_rnddetach, filt_rndread };
int
rndkqfilter(dev_t dev, struct knote *kn)
{
struct klist *klist;
switch (kn->kn_filter) {
case EVFILT_READ:
klist = &rnd_selq.sel_klist;
if (minor(dev) == RND_DEV_URANDOM)
kn->kn_fop = &rnd_seltrue_filtops;
else
kn->kn_fop = &rndread_filtops;
break;
case EVFILT_WRITE:
klist = &rnd_selq.sel_klist;
kn->kn_fop = &rnd_seltrue_filtops;
break;
default:
return (EINVAL);
}
kn->kn_hook = NULL;
mutex_spin_enter(&rndpool_mtx);
SLIST_INSERT_HEAD(klist, kn, kn_selnext);
mutex_spin_exit(&rndpool_mtx);
return (0);
}
static rnd_sample_t *
rnd_sample_allocate(krndsource_t *source)
{
rnd_sample_t *c;
c = pool_get(&rnd_mempool, PR_WAITOK);
c = pool_cache_get(rnd_mempc, PR_WAITOK);
if (c == NULL)
return (NULL);
@ -936,7 +405,7 @@ rnd_sample_allocate_isr(krndsource_t *source)
{
rnd_sample_t *c;
c = pool_get(&rnd_mempool, PR_NOWAIT);
c = pool_cache_get(rnd_mempc, PR_NOWAIT);
if (c == NULL)
return (NULL);
@ -951,7 +420,7 @@ static void
rnd_sample_free(rnd_sample_t *c)
{
memset(c, 0, sizeof(*c));
pool_put(&rnd_mempool, c);
pool_cache_put(rnd_mempc, c);
}
/*
@ -963,8 +432,6 @@ rnd_attach_source(krndsource_t *rs, const char *name, u_int32_t type,
{
u_int32_t ts;
RUN_ONCE(&rnd_mempoolinit_ctrl, rnd_mempool_init);
ts = rnd_counter();
strlcpy(rs->name, name, sizeof(rs->name));
@ -1214,16 +681,8 @@ rnd_hwrng_test(rnd_sample_t *sample)
v2 = (uint8_t *)sample->values + cmplen;
if (__predict_false(!memcmp(v1, v2, cmplen))) {
int *dump;
printf("rnd: source \"%s\" failed continuous-output test.\n",
source->name);
printf("rnd: bad buffer: ");
for (dump = (int *)sample->values;
dump < (int *)((uint8_t *)sample->values +
sizeof(sample->values)); dump += sizeof(int)) {
printf("%x ", *dump);
}
printf("\n");
return 1;
}
@ -1365,7 +824,7 @@ rnd_timeout(void *arg)
rnd_process_events(arg);
}
static u_int32_t
u_int32_t
rnd_extract_data_locked(void *p, u_int32_t len, u_int32_t flags)
{
@ -1381,9 +840,9 @@ rnd_extract_data_locked(void *p, u_int32_t len, u_int32_t flags)
c = rnd_counter();
rndpool_add_data(&rnd_pool, &c, sizeof(u_int32_t), 1);
}
if (!rnd_tested) {
rngtest_t rt;
uint8_t testbits[sizeof(rt.rt_b)];
#ifdef DIAGNOSTIC
while (!rnd_tested) {
int entropy_count;
entropy_count = rndpool_get_entropy_count(&rnd_pool);
@ -1391,8 +850,9 @@ rnd_extract_data_locked(void *p, u_int32_t len, u_int32_t flags)
printf("rnd: starting statistical RNG test, entropy = %d.\n",
entropy_count);
#endif
if (rndpool_extract_data(&rnd_pool, rt.rt_b,
sizeof(rt.rt_b), RND_EXTRACT_ANY) != sizeof(rt.rt_b)) {
if (rndpool_extract_data(&rnd_pool, rnd_rt.rt_b,
sizeof(rnd_rt.rt_b), RND_EXTRACT_ANY)
!= sizeof(rnd_rt.rt_b)) {
panic("rnd: could not get bits for statistical test");
}
/*
@ -1402,27 +862,30 @@ rnd_extract_data_locked(void *p, u_int32_t len, u_int32_t flags)
* up adding back non-random data claiming it was pure
* entropy.
*/
memcpy(testbits, rt.rt_b, sizeof(rt.rt_b));
strlcpy(rt.rt_name, "entropy pool", sizeof(rt.rt_name));
if (rngtest(&rt)) {
memcpy(rnd_testbits, rnd_rt.rt_b, sizeof(rnd_rt.rt_b));
strlcpy(rnd_rt.rt_name, "entropy pool", sizeof(rnd_rt.rt_name));
if (rngtest(&rnd_rt)) {
/*
* The probability of a Type I error is 3/10000,
* but note this can only happen at boot time.
* The relevant standard says to reset the module,
* so that's what we do.
* but developers objected...
*/
panic("rnd: entropy pool failed statistical test");
printf("rnd: WARNING, ENTROPY POOL FAILED "
"STATISTICAL TEST!\n");
continue;
}
memset(&rt, 0, sizeof(rt));
rndpool_add_data(&rnd_pool, testbits, sizeof(testbits),
memset(&rnd_rt, 0, sizeof(rnd_rt));
rndpool_add_data(&rnd_pool, rnd_testbits, sizeof(rnd_testbits),
entropy_count);
memset(testbits, 0, sizeof(testbits));
memset(rnd_testbits, 0, sizeof(rnd_testbits));
#ifdef RND_VERBOSE
printf("rnd: statistical RNG test done, entropy = %d.\n",
rndpool_get_entropy_count(&rnd_pool));
#endif
rnd_tested++;
}
#endif
return rndpool_extract_data(&rnd_pool, p, len, flags);
}


@ -1,4 +1,4 @@
/* $NetBSD: rndpool.c,v 1.21 2011/11/29 03:50:31 tls Exp $ */
/* $NetBSD: rndpool.c,v 1.22 2011/12/17 20:05:38 tls Exp $ */
/*-
* Copyright (c) 1997 The NetBSD Foundation, Inc.
@ -31,7 +31,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: rndpool.c,v 1.21 2011/11/29 03:50:31 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: rndpool.c,v 1.22 2011/12/17 20:05:38 tls Exp $");
#include <sys/param.h>
#include <sys/systm.h>
@ -49,6 +49,11 @@ __KERNEL_RCSID(0, "$NetBSD: rndpool.c,v 1.21 2011/11/29 03:50:31 tls Exp $");
#define TAP4 9
#define TAP5 7
/*
* Let others know: the pool is full.
*/
int rnd_full;
static inline void rndpool_add_one_word(rndpool_t *, u_int32_t);
void
@ -173,25 +178,6 @@ rndpool_add_one_word(rndpool_t *rp, u_int32_t val)
}
}
#if 0
/*
* Stir a 32-bit value (with possibly less entropy than that) into the pool.
* Update entropy estimate.
*/
void
rndpool_add_uint32(rndpool_t *rp, u_int32_t val, u_int32_t entropy)
{
rndpool_add_one_word(rp, val);
rp->entropy += entropy;
rp->stats.added += entropy;
if (rp->entropy > RND_POOLBITS) {
rp->stats.discarded += (rp->entropy - RND_POOLBITS);
rp->entropy = RND_POOLBITS;
}
}
#endif
/*
* Add a buffer's worth of data to the pool.
*/
@ -230,6 +216,7 @@ rndpool_add_data(rndpool_t *rp, void *p, u_int32_t len, u_int32_t entropy)
if (rp->stats.curentropy > RND_POOLBITS) {
rp->stats.discarded += (rp->stats.curentropy - RND_POOLBITS);
rp->stats.curentropy = RND_POOLBITS;
rnd_full = 1;
}
}
@ -259,6 +246,8 @@ rndpool_extract_data(rndpool_t *rp, void *p, u_int32_t len, u_int32_t mode)
buf = p;
remain = len;
rnd_full = 0;
if (mode == RND_EXTRACT_ANY)
good = 1;
else

sys/dev/rndpseudo.c (new file, 738 lines)

@ -0,0 +1,738 @@
/* $NetBSD: rndpseudo.c,v 1.1 2011/12/17 20:05:39 tls Exp $ */
/*-
* Copyright (c) 1997-2011 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Michael Graff <explorer@flame.org> and Thor Lancelot Simon.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: rndpseudo.c,v 1.1 2011/12/17 20:05:39 tls Exp $");
#include <sys/param.h>
#include <sys/ioctl.h>
#include <sys/fcntl.h>
#include <sys/file.h>
#include <sys/filedesc.h>
#include <sys/select.h>
#include <sys/poll.h>
#include <sys/kmem.h>
#include <sys/atomic.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/kernel.h>
#include <sys/conf.h>
#include <sys/systm.h>
#include <sys/rnd.h>
#include <sys/vnode.h>
#include <sys/pool.h>
#include <sys/kauth.h>
#include <sys/cprng.h>
#include <sys/stat.h>
#include <dev/rnd_private.h>
#if defined(__HAVE_CPU_COUNTER) && !defined(_RUMPKERNEL) /* XXX: bad pooka */
#include <machine/cpu_counter.h>
#endif
#ifdef RND_DEBUG
#define DPRINTF(l,x) if (rnd_debug & (l)) printf x
int rnd_debug = 0;
#else
#define DPRINTF(l,x)
#endif
#define RND_DEBUG_WRITE 0x0001
#define RND_DEBUG_READ 0x0002
#define RND_DEBUG_IOCTL 0x0004
#define RND_DEBUG_SNOOZE 0x0008
/*
* list devices attached
*/
#if 0
#define RND_VERBOSE
#endif
/*
* The size of a temporary buffer, kmem_alloc()ed when needed, and used for
* reading and writing data.
*/
#define RND_TEMP_BUFFER_SIZE 512
static pool_cache_t rp_pc;
static pool_cache_t rp_cpc;
/*
* A context. cprng plus a smidge.
*/
typedef struct {
cprng_strong_t *cprng;
int hard;
int bytesonkey;
kmutex_t interlock;
} rp_ctx_t;
/*
* Our random pool. This is defined here rather than using the general
* purpose one defined in rndpool.c.
*
* Samples are collected and queued into a separate mutex-protected queue
* (rnd_samples, see above), and processed in a timeout routine; therefore,
* the mutex protecting the random pool is at IPL_SOFTCLOCK() as well.
*/
extern rndpool_t rnd_pool;
extern kmutex_t rndpool_mtx;
void rndattach(int);
dev_type_open(rndopen);
const struct cdevsw rnd_cdevsw = {
rndopen, noclose, noread, nowrite, noioctl, nostop,
notty, nopoll, nommap, nokqfilter, D_OTHER|D_MPSAFE
};
static int rnd_read(struct file *, off_t *, struct uio *, kauth_cred_t, int);
static int rnd_write(struct file *, off_t *, struct uio *, kauth_cred_t, int);
static int rnd_ioctl(struct file *, u_long, void *);
static int rnd_poll(struct file *, int);
static int rnd_stat(struct file *, struct stat *);
static int rnd_close(struct file *);
static int rnd_kqfilter(struct file *, struct knote *);
const struct fileops rnd_fileops = {
.fo_read = rnd_read,
.fo_write = rnd_write,
.fo_ioctl = rnd_ioctl,
.fo_fcntl = fnullop_fcntl,
.fo_poll = rnd_poll,
.fo_stat = rnd_stat,
.fo_close = rnd_close,
.fo_kqfilter = rnd_kqfilter,
.fo_restart = fnullop_restart
};
void rnd_wakeup_readers(void); /* XXX */
extern int rnd_ready; /* XXX */
extern rndsave_t *boot_rsp; /* XXX */
extern LIST_HEAD(, krndsource) rnd_sources; /* XXX */
/*
* Generate a 32-bit counter. This should be more machine dependent,
* using cycle counters and the like when possible.
*/
static inline u_int32_t
rndpseudo_counter(void)
{
struct timeval tv;
#if defined(__HAVE_CPU_COUNTER) && !defined(_RUMPKERNEL) /* XXX: bad pooka */
if (cpu_hascounter())
return (cpu_counter32());
#endif
microtime(&tv);
return (tv.tv_sec * 1000000 + tv.tv_usec);
}
/*
* "Attach" the random device. This is an (almost) empty stub, since
* pseudo-devices don't get attached until after config, after the
* entropy sources will attach. We just use the timing of this event
* as another potential source of initial entropy.
*/
void
rndattach(int num)
{
u_int32_t c;
/* Trap unwary players who don't call rnd_init() early */
KASSERT(rnd_ready);
rp_pc = pool_cache_init(RND_TEMP_BUFFER_SIZE, 0, 0, 0,
"rndtemp", NULL, IPL_NONE,
NULL, NULL, NULL);
rp_cpc = pool_cache_init(sizeof(rp_ctx_t), 0, 0, 0,
"rndctx", NULL, IPL_NONE,
NULL, NULL, NULL);
/* mix in another counter */
c = rndpseudo_counter();
mutex_spin_enter(&rndpool_mtx);
rndpool_add_data(&rnd_pool, &c, sizeof(u_int32_t), 1);
mutex_spin_exit(&rndpool_mtx);
}
int
rndopen(dev_t dev, int flag, int ifmt,
struct lwp *l)
{
rp_ctx_t *ctx;
file_t *fp;
int fd, hard, error = 0;
switch (minor(dev)) {
case RND_DEV_URANDOM:
hard = 0;
break;
case RND_DEV_RANDOM:
hard = 1;
break;
default:
return ENXIO;
}
ctx = pool_cache_get(rp_cpc, PR_WAITOK);
ctx->cprng = NULL;
ctx->hard = hard;
mutex_init(&ctx->interlock, MUTEX_DEFAULT, IPL_NONE);
if ((error = fd_allocfile(&fp, &fd)) != 0) {
pool_cache_put(rp_cpc, ctx);
return error;
}
return fd_clone(fp, fd, flag, &rnd_fileops, ctx);
}
static void
rnd_alloc_cprng(rp_ctx_t *ctx)
{
char personalization_buf[64];
struct lwp *l = curlwp;
int cflags = ctx->hard ? CPRNG_USE_CV :
CPRNG_INIT_ANY|CPRNG_REKEY_ANY;
mutex_enter(&ctx->interlock);
if (__predict_true(ctx->cprng == NULL)) {
snprintf(personalization_buf,
sizeof(personalization_buf),
"%d%llud%d", l->l_proc->p_pid,
(unsigned long long int)l->l_ncsw, l->l_cpticks);
ctx->cprng = cprng_strong_create(personalization_buf,
IPL_NONE, cflags);
}
membar_sync();
mutex_exit(&ctx->interlock);
}
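`rnd_alloc_cprng()` above uses a check / lock / re-check pattern: callers test `ctx->cprng` without the lock, and the allocator re-tests it under `ctx->interlock` so that two racing openers do not both create a generator. A generic userland sketch of that pattern, with plain pthreads standing in for kmutex(9) and `malloc` standing in for `cprng_strong_create()` (all names here are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct ctx {
	pthread_mutex_t interlock;
	void *cprng;		/* lazily created generator */
};

static void
lazy_alloc(struct ctx *c)
{
	pthread_mutex_lock(&c->interlock);
	if (c->cprng == NULL)		/* re-check under the lock */
		c->cprng = malloc(1);	/* stand-in for cprng_strong_create() */
	pthread_mutex_unlock(&c->interlock);
}
```

A second call is a no-op, so concurrent first reads on the same open file get one shared generator.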
static int
rnd_read(struct file * fp, off_t *offp, struct uio *uio,
kauth_cred_t cred, int flags)
{
rp_ctx_t *ctx = fp->f_data;
u_int8_t *bf;
int strength, ret;
DPRINTF(RND_DEBUG_READ,
("Random: Read of %zu requested, flags 0x%08x\n",
uio->uio_resid, flags));
if (uio->uio_resid == 0)
return (0);
if (ctx->cprng == NULL) {
rnd_alloc_cprng(ctx);
if (__predict_false(ctx->cprng == NULL)) {
return EIO;
}
}
strength = cprng_strong_strength(ctx->cprng);
ret = 0;
bf = pool_cache_get(rp_pc, PR_WAITOK);
while (uio->uio_resid > 0) {
int n, nread, want;
want = MIN(RND_TEMP_BUFFER_SIZE, uio->uio_resid);
/* XXX is this _really_ what's wanted? */
if (ctx->hard) {
n = MIN(want, strength - ctx->bytesonkey);
ctx->bytesonkey += n;
} else {
n = want;
}
nread = cprng_strong(ctx->cprng, bf, n,
(fp->f_flag & FNONBLOCK) ? FNONBLOCK : 0);
if (nread != n) {
if (fp->f_flag & FNONBLOCK) {
ret = EWOULDBLOCK;
} else {
ret = EINTR;
}
goto out;
}
ret = uiomove((void *)bf, nread, uio);
if (ret != 0 || n < want) {
goto out;
}
}
out:
if (ctx->bytesonkey >= strength) {
/* Force reseed of underlying DRBG (prediction resistance) */
cprng_strong_deplete(ctx->cprng);
ctx->bytesonkey = 0;
}
pool_cache_put(rp_pc, bf);
return (ret);
}
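The chunking policy in `rnd_read()` above is what enforces the "rekey every key-length bits" behavior for /dev/random: a "hard" reader may take at most `strength - bytesonkey` bytes before the generator is depleted and forced to reseed. A small model of that accounting (function names are illustrative; 16 is the AES-128 strength mentioned in the commit message):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes a "hard" (/dev/random) reader may take in one chunk before
 * exhausting the current key. */
static size_t
hard_chunk(size_t want, size_t strength, size_t bytesonkey)
{
	size_t room = strength - bytesonkey;
	return want < room ? want : room;
}

/* Nonzero once key-length bytes have been output, i.e. when the caller
 * should invoke cprng_strong_deplete() and reset bytesonkey. */
static int
must_rekey(size_t strength, size_t bytesonkey)
{
	return bytesonkey >= strength;
}
```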
static int
rnd_write(struct file *fp, off_t *offp, struct uio *uio,
kauth_cred_t cred, int flags)
{
u_int8_t *bf;
int n, ret = 0, estimate_ok = 0, estimate = 0, added = 0;
ret = kauth_authorize_device(cred,
KAUTH_DEVICE_RND_ADDDATA, NULL, NULL, NULL, NULL);
if (ret) {
return (ret);
}
estimate_ok = !kauth_authorize_device(cred,
KAUTH_DEVICE_RND_ADDDATA_ESTIMATE, NULL, NULL, NULL, NULL);
DPRINTF(RND_DEBUG_WRITE,
("Random: Write of %zu requested\n", uio->uio_resid));
if (uio->uio_resid == 0)
return (0);
ret = 0;
bf = pool_cache_get(rp_pc, PR_WAITOK);
while (uio->uio_resid > 0) {
/*
* Don't flood the pool.
*/
if (added > RND_POOLWORDS * sizeof(int)) {
#ifdef RND_VERBOSE
printf("rnd: added %d already, adding no more.\n",
added);
#endif
break;
}
n = min(RND_TEMP_BUFFER_SIZE, uio->uio_resid);
ret = uiomove((void *)bf, n, uio);
if (ret != 0)
break;
if (estimate_ok) {
/*
* Don't cause samples to be discarded by taking
* the pool's entropy estimate to the max.
*/
if (added > RND_POOLWORDS / 2)
estimate = 0;
else
estimate = n * NBBY / 2;
#ifdef RND_VERBOSE
printf("rnd: adding on write, %d bytes, estimate %d\n",
n, estimate);
#endif
} else {
#ifdef RND_VERBOSE
printf("rnd: kauth says no entropy.\n");
#endif
}
/*
* Mix in the bytes.
*/
mutex_spin_enter(&rndpool_mtx);
rndpool_add_data(&rnd_pool, bf, n, estimate);
mutex_spin_exit(&rndpool_mtx);
added += n;
DPRINTF(RND_DEBUG_WRITE, ("Random: Copied in %d bytes\n", n));
}
pool_cache_put(rp_pc, bf);
return (ret);
}
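The estimate policy in `rnd_write()` above credits a privileged writer at most half a bit of entropy per bit written, and stops crediting entirely once `RND_POOLWORDS / 2` bytes have been added, so the pool's estimate is not pinned at the maximum and real samples keep being accepted. A sketch under assumed constants (not the kernel's actual `RND_POOLWORDS`):

```c
#include <assert.h>

#define NBBY 8
#define RND_POOLWORDS 128	/* illustrative pool size in words */

/* Entropy bits credited for an n-byte chunk from a writer who has
 * already added `added` bytes; mirrors the branch in rnd_write(). */
static int
write_estimate(int estimate_ok, int added, int n)
{
	if (!estimate_ok)
		return 0;
	if (added > RND_POOLWORDS / 2)
		return 0;
	return n * NBBY / 2;
}
```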
static void
krndsource_to_rndsource(krndsource_t *kr, rndsource_t *r)
{
memset(r, 0, sizeof(*r));
strlcpy(r->name, kr->name, sizeof(r->name));
r->total = kr->total;
r->type = kr->type;
r->flags = kr->flags;
}
int
rnd_ioctl(struct file *fp, u_long cmd, void *addr)
{
krndsource_t *kr;
rndstat_t *rst;
rndstat_name_t *rstnm;
rndctl_t *rctl;
rnddata_t *rnddata;
u_int32_t count, start;
int ret = 0;
int estimate_ok = 0, estimate = 0;
switch (cmd) {
case FIONBIO:
case FIOASYNC:
case RNDGETENTCNT:
break;
case RNDGETPOOLSTAT:
case RNDGETSRCNUM:
case RNDGETSRCNAME:
ret = kauth_authorize_device(curlwp->l_cred,
KAUTH_DEVICE_RND_GETPRIV, NULL, NULL, NULL, NULL);
if (ret)
return (ret);
break;
case RNDCTL:
ret = kauth_authorize_device(curlwp->l_cred,
KAUTH_DEVICE_RND_SETPRIV, NULL, NULL, NULL, NULL);
if (ret)
return (ret);
break;
case RNDADDDATA:
ret = kauth_authorize_device(curlwp->l_cred,
KAUTH_DEVICE_RND_ADDDATA, NULL, NULL, NULL, NULL);
if (ret)
return (ret);
estimate_ok = !kauth_authorize_device(curlwp->l_cred,
KAUTH_DEVICE_RND_ADDDATA_ESTIMATE, NULL, NULL, NULL, NULL);
break;
default:
return (EINVAL);
}
switch (cmd) {
/*
* Handled in upper layer really, but we have to return zero
* for it to be accepted by the upper layer.
*/
case FIONBIO:
case FIOASYNC:
break;
case RNDGETENTCNT:
mutex_spin_enter(&rndpool_mtx);
*(u_int32_t *)addr = rndpool_get_entropy_count(&rnd_pool);
mutex_spin_exit(&rndpool_mtx);
break;
case RNDGETPOOLSTAT:
mutex_spin_enter(&rndpool_mtx);
rndpool_get_stats(&rnd_pool, addr, sizeof(rndpoolstat_t));
mutex_spin_exit(&rndpool_mtx);
break;
case RNDGETSRCNUM:
rst = (rndstat_t *)addr;
if (rst->count == 0)
break;
if (rst->count > RND_MAXSTATCOUNT)
return (EINVAL);
mutex_spin_enter(&rndpool_mtx);
/*
* Find the starting source by running through the
* list of sources.
*/
kr = rnd_sources.lh_first;
start = rst->start;
while (kr != NULL && start >= 1) {
kr = kr->list.le_next;
start--;
}
/*
* Return up to as many structures as the user asked
* for. If we run out of sources, a count of zero
* will be returned, without an error.
*/
for (count = 0; count < rst->count && kr != NULL; count++) {
krndsource_to_rndsource(kr, &rst->source[count]);
kr = kr->list.le_next;
}
rst->count = count;
mutex_spin_exit(&rndpool_mtx);
break;
case RNDGETSRCNAME:
/*
* Scan through the list, trying to find the name.
*/
mutex_spin_enter(&rndpool_mtx);
rstnm = (rndstat_name_t *)addr;
kr = rnd_sources.lh_first;
while (kr != NULL) {
if (strncmp(kr->name, rstnm->name,
MIN(sizeof(kr->name),
sizeof(*rstnm))) == 0) {
krndsource_to_rndsource(kr, &rstnm->source);
mutex_spin_exit(&rndpool_mtx);
return (0);
}
kr = kr->list.le_next;
}
mutex_spin_exit(&rndpool_mtx);
ret = ENOENT; /* name not found */
break;
case RNDCTL:
/*
* Set flags to enable/disable entropy counting and/or
* collection.
*/
mutex_spin_enter(&rndpool_mtx);
rctl = (rndctl_t *)addr;
kr = rnd_sources.lh_first;
/*
* Flags set apply to all sources of this type.
*/
if (rctl->type != 0xff) {
while (kr != NULL) {
if (kr->type == rctl->type) {
kr->flags &= ~rctl->mask;
kr->flags |=
(rctl->flags & rctl->mask);
}
kr = kr->list.le_next;
}
mutex_spin_exit(&rndpool_mtx);
return (0);
}
/*
* scan through the list, trying to find the name
*/
while (kr != NULL) {
if (strncmp(kr->name, rctl->name,
MIN(sizeof(kr->name),
sizeof(rctl->name))) == 0) {
kr->flags &= ~rctl->mask;
kr->flags |= (rctl->flags & rctl->mask);
mutex_spin_exit(&rndpool_mtx);
return (0);
}
kr = kr->list.le_next;
}
mutex_spin_exit(&rndpool_mtx);
ret = ENOENT; /* name not found */
break;
case RNDADDDATA:
/*
* Don't seed twice if our bootloader has
* seed loading support.
*/
if (!boot_rsp) {
rnddata = (rnddata_t *)addr;
if (rnddata->len > sizeof(rnddata->data))
return EINVAL;
if (estimate_ok) {
/*
* Do not accept absurd entropy estimates, and
* do not flood the pool with entropy such that
* new samples are discarded henceforth.
*/
estimate = MIN((rnddata->len * NBBY) / 2,
MIN(rnddata->entropy,
RND_POOLBITS / 2));
} else {
estimate = 0;
}
mutex_spin_enter(&rndpool_mtx);
rndpool_add_data(&rnd_pool, rnddata->data,
rnddata->len, estimate);
mutex_spin_exit(&rndpool_mtx);
rnd_wakeup_readers();
}
#ifdef RND_VERBOSE
else {
printf("rnd: already seeded by boot loader\n");
}
#endif
break;
default:
return (EINVAL);
}
return (ret);
}
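The `RNDADDDATA` path above clamps the caller-supplied entropy estimate three ways: never more than half a bit per bit of data, never more than the caller claims, and never more than half the pool size. A model of that clamp (the `RND_POOLBITS` value here is an assumption for the test, not the kernel's configured size):

```c
#include <assert.h>

#define NBBY 8
#define RND_POOLBITS 4096	/* illustrative */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Entropy bits credited for len bytes with a claimed estimate of
 * `entropy` bits; mirrors the RNDADDDATA ioctl clamp. */
static unsigned
adddata_estimate(int estimate_ok, unsigned len, unsigned entropy)
{
	if (!estimate_ok)
		return 0;
	return MIN((len * NBBY) / 2, MIN(entropy, RND_POOLBITS / 2));
}
```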
static int
rnd_poll(struct file *fp, int events)
{
int revents;
rp_ctx_t *ctx = fp->f_data;
/*
* We are always writable.
*/
revents = events & (POLLOUT | POLLWRNORM);
/*
* Save some work if not checking for reads.
*/
if ((events & (POLLIN | POLLRDNORM)) == 0)
return (revents);
if (ctx->cprng == NULL) {
rnd_alloc_cprng(ctx);
if (__predict_false(ctx->cprng == NULL)) {
return EIO;
}
}
if (cprng_strong_ready(ctx->cprng)) {
revents |= events & (POLLIN | POLLRDNORM);
} else {
mutex_enter(&ctx->cprng->mtx);
selrecord(curlwp, &ctx->cprng->selq);
mutex_exit(&ctx->cprng->mtx);
}
return (revents);
}
static int
rnd_stat(struct file *fp, struct stat *st)
{
rp_ctx_t *ctx = fp->f_data;
/* XXX lock, if cprng allocated? why? */
memset(st, 0, sizeof(*st));
st->st_dev = makedev(cdevsw_lookup_major(&rnd_cdevsw),
ctx->hard ? RND_DEV_RANDOM :
RND_DEV_URANDOM);
/* XXX leave atimespec, mtimespec, ctimespec = 0? */
st->st_uid = kauth_cred_geteuid(fp->f_cred);
st->st_gid = kauth_cred_getegid(fp->f_cred);
st->st_mode = S_IFCHR;
return 0;
}
static int
rnd_close(struct file *fp)
{
rp_ctx_t *ctx = fp->f_data;
if (ctx->cprng) {
cprng_strong_destroy(ctx->cprng);
}
fp->f_data = NULL;
mutex_destroy(&ctx->interlock);
pool_cache_put(rp_cpc, ctx);
return 0;
}
static void
filt_rnddetach(struct knote *kn)
{
cprng_strong_t *c = kn->kn_hook;
mutex_enter(&c->mtx);
SLIST_REMOVE(&c->selq.sel_klist, kn, knote, kn_selnext);
mutex_exit(&c->mtx);
}
static int
filt_rndread(struct knote *kn, long hint)
{
cprng_strong_t *c = kn->kn_hook;
if (cprng_strong_ready(c)) {
kn->kn_data = RND_TEMP_BUFFER_SIZE;
return 1;
}
return 0;
}
static const struct filterops rnd_seltrue_filtops =
{ 1, NULL, filt_rnddetach, filt_seltrue };
static const struct filterops rndread_filtops =
{ 1, NULL, filt_rnddetach, filt_rndread };
static int
rnd_kqfilter(struct file *fp, struct knote *kn)
{
rp_ctx_t *ctx = fp->f_data;
struct klist *klist;
if (ctx->cprng == NULL) {
rnd_alloc_cprng(ctx);
if (__predict_false(ctx->cprng == NULL)) {
return EIO;
}
}
mutex_enter(&ctx->cprng->mtx);
switch (kn->kn_filter) {
case EVFILT_READ:
klist = &ctx->cprng->selq.sel_klist;
kn->kn_fop = &rndread_filtops;
break;
case EVFILT_WRITE:
klist = &ctx->cprng->selq.sel_klist;
kn->kn_fop = &rnd_seltrue_filtops;
break;
default:
mutex_exit(&ctx->cprng->mtx);
return EINVAL;
}
kn->kn_hook = ctx->cprng;
SLIST_INSERT_HEAD(klist, kn, kn_selnext);
mutex_exit(&ctx->cprng->mtx);
return (0);
}


@ -1,5 +1,5 @@
/* $OpenBSD: tcp_subr.c,v 1.98 2007/06/25 12:17:43 markus Exp $ */
/* $NetBSD: tcp_rndiss.c,v 1.3 2011/11/19 22:51:24 tls Exp $ */
/* $NetBSD: tcp_rndiss.c,v 1.4 2011/12/17 20:05:39 tls Exp $ */
/*
* Copyright (c) 1982, 1986, 1988, 1990, 1993
@ -69,7 +69,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: tcp_rndiss.c,v 1.3 2011/11/19 22:51:24 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: tcp_rndiss.c,v 1.4 2011/12/17 20:05:39 tls Exp $");
#include <sys/param.h>
#include <sys/cprng.h>
@ -104,7 +104,7 @@ tcp_rndiss_encrypt(u_int16_t val)
void
tcp_rndiss_init(void)
{
cprng_strong(kern_cprng, tcp_rndiss_sbox, sizeof(tcp_rndiss_sbox));
cprng_strong(kern_cprng, tcp_rndiss_sbox, sizeof(tcp_rndiss_sbox), 0);
tcp_rndiss_reseed = time_second + TCP_RNDISS_OUT;
tcp_rndiss_msb = tcp_rndiss_msb == 0x8000 ? 0 : 0x8000;


@ -1,4 +1,4 @@
/* $NetBSD: init_sysctl.c,v 1.185 2011/11/20 01:09:14 tls Exp $ */
/* $NetBSD: init_sysctl.c,v 1.186 2011/12/17 20:05:39 tls Exp $ */
/*-
* Copyright (c) 2003, 2007, 2008, 2009 The NetBSD Foundation, Inc.
@ -30,7 +30,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: init_sysctl.c,v 1.185 2011/11/20 01:09:14 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: init_sysctl.c,v 1.186 2011/12/17 20:05:39 tls Exp $");
#include "opt_sysv.h"
#include "opt_compat_netbsd.h"
@ -1396,7 +1396,7 @@ sysctl_kern_urnd(SYSCTLFN_ARGS)
{
int v, rv;
rv = cprng_strong(sysctl_prng, &v, sizeof(v));
rv = cprng_strong(sysctl_prng, &v, sizeof(v), 0);
if (rv == sizeof(v)) {
struct sysctlnode node = *rnode;
node.sysctl_data = &v;


@ -1,4 +1,4 @@
/* $NetBSD: subr_cprng.c,v 1.4 2011/11/29 21:48:22 njoly Exp $ */
/* $NetBSD: subr_cprng.c,v 1.5 2011/12/17 20:05:39 tls Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
@ -46,7 +46,7 @@
#include <sys/cprng.h>
__KERNEL_RCSID(0, "$NetBSD: subr_cprng.c,v 1.4 2011/11/29 21:48:22 njoly Exp $");
__KERNEL_RCSID(0, "$NetBSD: subr_cprng.c,v 1.5 2011/12/17 20:05:39 tls Exp $");
void
cprng_init(void)
@ -71,6 +71,17 @@ cprng_counter(void)
return (tv.tv_sec * 1000000 + tv.tv_usec);
}
static void
cprng_strong_sched_reseed(cprng_strong_t *const c)
{
KASSERT(mutex_owned(&c->mtx));
if (!(c->reseed_pending)) {
c->reseed_pending = 1;
c->reseed.len = NIST_BLOCK_KEYLEN_BYTES;
rndsink_attach(&c->reseed);
}
}
static void
cprng_strong_reseed(void *const arg)
{
@ -91,6 +102,7 @@ cprng_strong_reseed(void *const arg)
if (c->flags & CPRNG_USE_CV) {
cv_broadcast(&c->cv);
}
selnotify(&c->selq, 0, 0);
mutex_exit(&c->mtx);
}
@ -99,7 +111,7 @@ cprng_strong_create(const char *const name, int ipl, int flags)
{
cprng_strong_t *c;
uint8_t key[NIST_BLOCK_KEYLEN_BYTES];
int r, getmore = 0;
int r, getmore = 0, hard = 0;
uint32_t cc;
c = kmem_alloc(sizeof(*c), KM_NOSLEEP);
@ -119,15 +131,19 @@ cprng_strong_create(const char *const name, int ipl, int flags)
cv_init(&c->cv, name);
}
selinit(&c->selq);
r = rnd_extract_data(key, sizeof(key), RND_EXTRACT_GOOD);
if (r != sizeof(key)) {
if (c->flags & CPRNG_INIT_ANY) {
#ifdef DEBUG
printf("cprng %s: WARNING insufficient "
"entropy at creation.\n", name);
#endif
rnd_extract_data(key + r, sizeof(key) - r,
RND_EXTRACT_ANY);
} else {
return NULL;
hard++;
}
getmore++;
}
@ -138,46 +154,30 @@ cprng_strong_create(const char *const name, int ipl, int flags)
}
if (getmore) {
int wr = 0;
/* Ask for more. */
c->reseed_pending = 1;
c->reseed.len = sizeof(key);
rndsink_attach(&c->reseed);
if (c->flags & CPRNG_USE_CV) {
mutex_enter(&c->mtx);
do {
wr = cv_wait_sig(&c->cv, &c->mtx);
if (__predict_true(wr == 0)) {
break;
}
if (wr == ERESTART) {
continue;
} else {
cv_destroy(&c->cv);
mutex_exit(&c->mtx);
mutex_destroy(&c->mtx);
kmem_free(c, sizeof(*c));
return NULL;
}
} while (1);
mutex_exit(&c->mtx);
/* Cause readers to wait for rekeying. */
if (hard) {
c->drbg.reseed_counter =
NIST_CTR_DRBG_RESEED_INTERVAL + 1;
} else {
c->drbg.reseed_counter =
(NIST_CTR_DRBG_RESEED_INTERVAL / 2) + 1;
}
}
return c;
}
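The initial reseed-counter trick in `cprng_strong_create()` above replaces the old blocking wait: a generator keyed without sufficient entropy starts with its counter already past the limit ("hard", so readers block until a real reseed) or halfway to it ("soft", so it works now but reseeds at the first opportunity). A model of that choice (interval value is illustrative):

```c
#include <assert.h>

#define NIST_CTR_DRBG_RESEED_INTERVAL (1u << 20)	/* assumed */

/* Starting value of drbg.reseed_counter after creation: getmore means
 * the initial key lacked full entropy; hard means block until rekeyed. */
static unsigned
initial_reseed_counter(int getmore, int hard)
{
	if (!getmore)
		return 1;	/* freshly keyed with good entropy */
	return hard ? NIST_CTR_DRBG_RESEED_INTERVAL + 1
		    : NIST_CTR_DRBG_RESEED_INTERVAL / 2 + 1;
}
```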
size_t
cprng_strong(cprng_strong_t *const c, void *const p, size_t len)
cprng_strong(cprng_strong_t *const c, void *const p, size_t len, int flags)
{
uint32_t cc = cprng_counter();
#ifdef DEBUG
int testfail = 0;
#endif
if (len > CPRNG_MAX_LEN) { /* XXX should we loop? */
len = CPRNG_MAX_LEN; /* let the caller loop if desired */
}
mutex_enter(&c->mtx);
again:
if (nist_ctr_drbg_generate(&c->drbg, p, len, &cc, sizeof(cc))) {
/* A generator failure really means we hit the hard limit. */
if (c->flags & CPRNG_REKEY_ANY) {
@ -192,16 +192,15 @@ again:
panic("cprng %s: nist_ctr_drbg_reseed "
"failed.", c->name);
}
if (c->flags & CPRNG_USE_CV) {
cv_broadcast(&c->cv); /* XXX unnecessary? */
}
} else {
if (c->flags & CPRNG_USE_CV) {
if (!(flags & FNONBLOCK) &&
(c->flags & CPRNG_USE_CV)) {
int wr;
cprng_strong_sched_reseed(c);
do {
wr = cv_wait_sig(&c->cv, &c->mtx);
if (wr == EINTR) {
wr = cv_wait_sig(&c->cv, &c->mtx);
if (wr == ERESTART) {
mutex_exit(&c->mtx);
return 0;
}
@ -209,49 +208,55 @@ again:
len, &cc,
sizeof(cc)));
} else {
mutex_exit(&c->mtx);
return 0;
len = 0;
}
}
}
#ifdef DIAGNOSTIC
#ifdef DEBUG
/*
* If the generator has just been keyed, perform
* the statistical RNG test.
*/
if (__predict_false(c->drbg.reseed_counter == 1)) {
rngtest_t rt;
rngtest_t *rt = kmem_alloc(sizeof(*rt), KM_NOSLEEP);
strncpy(rt.rt_name, c->name, sizeof(rt.rt_name));
if (rt) {
if (nist_ctr_drbg_generate(&c->drbg, rt.rt_b,
sizeof(rt.rt_b), NULL, 0)) {
panic("cprng %s: nist_ctr_drbg_generate failed!",
c->name);
strncpy(rt->rt_name, c->name, sizeof(rt->rt_name));
if (nist_ctr_drbg_generate(&c->drbg, rt->rt_b,
sizeof(rt->rt_b), NULL, 0)) {
panic("cprng %s: nist_ctr_drbg_generate "
"failed!", c->name);
}
if (rngtest(&rt)) {
printf("cprng %s: failed statistical RNG test.\n",
c->name);
c->drbg.reseed_counter =
NIST_CTR_DRBG_RESEED_INTERVAL + 1;
}
}
testfail = rngtest(rt);
memset(&rt, 0, sizeof(rt));
if (testfail) {
printf("cprng %s: failed statistical RNG "
"test.\n", c->name);
c->drbg.reseed_counter =
NIST_CTR_DRBG_RESEED_INTERVAL + 1;
len = 0;
}
memset(rt, 0, sizeof(*rt));
kmem_free(rt, sizeof(*rt));
}
}
#endif
if (__predict_false(c->drbg.reseed_counter >
(NIST_CTR_DRBG_RESEED_INTERVAL / 2))) {
if (!(c->reseed_pending)) {
c->reseed_pending = 1;
c->reseed.len = NIST_BLOCK_KEYLEN_BYTES;
rndsink_attach(&c->reseed);
}
if (__predict_false(c->drbg.reseed_counter >
NIST_CTR_DRBG_RESEED_INTERVAL)) {
goto again; /* statistical test failure */
(NIST_CTR_DRBG_RESEED_INTERVAL / 2))) {
cprng_strong_sched_reseed(c);
}
if (rnd_full) {
if (!c->rekeyed_on_full) {
c->rekeyed_on_full++;
cprng_strong_sched_reseed(c);
}
} else {
c->rekeyed_on_full = 0;
}
mutex_exit(&c->mtx);
@ -261,17 +266,22 @@ again:
void
cprng_strong_destroy(cprng_strong_t *c)
{
KASSERT(!mutex_owned(&c->mtx));
mutex_enter(&c->mtx);
if (c->flags & CPRNG_USE_CV) {
KASSERT(!cv_has_waiters(&c->cv));
cv_destroy(&c->cv);
}
seldestroy(&c->selq);
mutex_destroy(&c->mtx);
if (c->reseed_pending) {
rndsink_detach(&c->reseed);
}
nist_ctr_drbg_destroy(&c->drbg);
mutex_exit(&c->mtx);
mutex_destroy(&c->mtx);
memset(c, 0, sizeof(*c));
kmem_free(c, sizeof(*c));
}
@ -302,6 +312,7 @@ cprng_strong_setflags(cprng_strong_t *const c, int flags)
if (c->flags & CPRNG_USE_CV) {
cv_broadcast(&c->cv);
}
selnotify(&c->selq, 0, 0);
}
}
c->flags = flags;


@ -1,4 +1,4 @@
/* $NetBSD: if_spppsubr.c,v 1.124 2011/11/19 22:51:25 tls Exp $ */
/* $NetBSD: if_spppsubr.c,v 1.125 2011/12/17 20:05:39 tls Exp $ */
/*
* Synchronous PPP/Cisco link level subroutines.
@ -41,7 +41,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: if_spppsubr.c,v 1.124 2011/11/19 22:51:25 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: if_spppsubr.c,v 1.125 2011/12/17 20:05:39 tls Exp $");
#if defined(_KERNEL_OPT)
#include "opt_inet.h"
@ -4298,7 +4298,7 @@ sppp_chap_scr(struct sppp *sp)
/* Compute random challenge. */
ch = (uint32_t *)sp->myauth.challenge;
cprng_strong(kern_cprng, ch, clen);
cprng_strong(kern_cprng, ch, clen, 0);
sp->confid[IDX_CHAP] = ++sp->pp_seq[IDX_CHAP];


@ -1,4 +1,4 @@
/* $NetBSD: tcp_subr.c,v 1.243 2011/11/19 22:51:26 tls Exp $ */
/* $NetBSD: tcp_subr.c,v 1.244 2011/12/17 20:05:39 tls Exp $ */
/*
* Copyright (C) 1995, 1996, 1997, and 1998 WIDE Project.
@ -91,7 +91,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: tcp_subr.c,v 1.243 2011/11/19 22:51:26 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: tcp_subr.c,v 1.244 2011/12/17 20:05:39 tls Exp $");
#include "opt_inet.h"
#include "opt_ipsec.h"
@ -2220,7 +2220,7 @@ tcp_new_iss1(void *laddr, void *faddr, u_int16_t lport, u_int16_t fport,
*/
if (tcp_iss_gotten_secret == false) {
cprng_strong(kern_cprng,
tcp_iss_secret, sizeof(tcp_iss_secret));
tcp_iss_secret, sizeof(tcp_iss_secret), 0);
tcp_iss_gotten_secret = true;
}


@ -1,11 +1,11 @@
# $NetBSD: Makefile,v 1.2 2010/02/16 20:42:45 pooka Exp $
# $NetBSD: Makefile,v 1.3 2011/12/17 20:05:39 tls Exp $
#
.PATH: ${.CURDIR}/../../../../dev
LIB= rumpdev_rnd
SRCS= rnd.c rndpool.c
SRCS= rnd.c rndpseudo.c rndpool.c
SRCS+= component.c


@ -1,4 +1,4 @@
/* $NetBSD: cprng_stub.c,v 1.3 2011/11/28 08:05:07 tls Exp $ */
/* $NetBSD: cprng_stub.c,v 1.4 2011/12/17 20:05:40 tls Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
@ -59,11 +59,13 @@ cprng_init(void)
return;
}
cprng_strong_t *cprng_strong_create(const char *const name, int ipl, int flags)
cprng_strong_t *
cprng_strong_create(const char *const name, int ipl, int flags)
{
cprng_strong_t *c;
c = kmem_alloc(sizeof(*c), KM_NOSLEEP);
/* zero struct to zero counters we won't ever set with no DRBG */
c = kmem_zalloc(sizeof(*c), KM_NOSLEEP);
if (c == NULL) {
return NULL;
}
@ -72,11 +74,13 @@ cprng_strong_t *cprng_strong_create(const char *const name, int ipl, int flags)
if (c->flags & CPRNG_USE_CV) {
cv_init(&c->cv, name);
}
selinit(&c->selq);
return c;
}
size_t cprng_strong(cprng_strong_t *c, void *p, size_t len)
size_t
cprng_strong(cprng_strong_t *c, void *p, size_t len, int blocking)
{
mutex_enter(&c->mtx);
cprng_fast(p, len); /* XXX! */
@ -84,7 +88,8 @@ size_t cprng_strong(cprng_strong_t *c, void *p, size_t len)
return len;
}
void cprng_strong_destroy(cprng_strong_t *c)
void
cprng_strong_destroy(cprng_strong_t *c)
{
mutex_destroy(&c->mtx);
cv_destroy(&c->cv);


@ -1,4 +1,4 @@
/* $NetBSD: cprng.h,v 1.3 2011/12/13 08:00:36 mlelstv Exp $ */
/* $NetBSD: cprng.h,v 1.4 2011/12/17 20:05:40 tls Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
@ -32,10 +32,12 @@
#define _CPRNG_H
#include <sys/types.h>
#include <sys/fcntl.h>
#include <lib/libkern/libkern.h>
#include <sys/rnd.h>
#include <crypto/nist_ctr_drbg/nist_ctr_drbg.h>
#include <sys/condvar.h>
#include <sys/select.h>
/*
* NIST SP800-90 says 2^19 bytes per request for the CTR_DRBG.
@ -76,13 +78,15 @@ uint64_t cprng_fast64(void);
#endif
typedef struct _cprng_strong {
kmutex_t mtx;
kcondvar_t cv;
NIST_CTR_DRBG drbg;
int flags;
char name[16];
int reseed_pending;
rndsink_t reseed;
kmutex_t mtx;
kcondvar_t cv;
struct selinfo selq;
NIST_CTR_DRBG drbg;
int flags;
char name[16];
int reseed_pending;
int rekeyed_on_full;
rndsink_t reseed;
} cprng_strong_t;
#define CPRNG_INIT_ANY 0x00000001
@ -91,7 +95,7 @@ typedef struct _cprng_strong {
cprng_strong_t *cprng_strong_create(const char *const, int, int);
size_t cprng_strong(cprng_strong_t *const, void *const, size_t);
size_t cprng_strong(cprng_strong_t *const, void *const, size_t, int);
void cprng_strong_destroy(cprng_strong_t *);
@ -101,7 +105,7 @@ static inline uint32_t
cprng_strong32(void)
{
uint32_t r;
cprng_strong(kern_cprng, &r, sizeof(r));
cprng_strong(kern_cprng, &r, sizeof(r), 0);
return r;
}
@ -109,10 +113,37 @@ static inline uint64_t
cprng_strong64(void)
{
uint64_t r;
cprng_strong(kern_cprng, &r, sizeof(r));
cprng_strong(kern_cprng, &r, sizeof(r), 0);
return r;
}
static inline int
cprng_strong_ready(cprng_strong_t *c)
{
int ret = 0;
mutex_enter(&c->mtx);
if (c->drbg.reseed_counter < NIST_CTR_DRBG_RESEED_INTERVAL) {
ret = 1;
}
mutex_exit(&c->mtx);
return ret;
}
static inline void
cprng_strong_deplete(cprng_strong_t *c)
{
mutex_enter(&c->mtx);
c->drbg.reseed_counter = NIST_CTR_DRBG_RESEED_INTERVAL + 1;
mutex_exit(&c->mtx);
}
static inline int
cprng_strong_strength(cprng_strong_t *c)
{
return NIST_BLOCK_KEYLEN_BYTES;
}
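The three inlines above all lean on one convention: the CTR_DRBG refuses to generate once its reseed counter passes `NIST_CTR_DRBG_RESEED_INTERVAL`, so "ready" is just a counter comparison and "deplete" pushes the counter past the limit to force prediction resistance. A minimal model (interval value is illustrative):

```c
#include <assert.h>

#define NIST_CTR_DRBG_RESEED_INTERVAL (1u << 20)	/* assumed */

struct drbg { unsigned reseed_counter; };

/* Mirrors cprng_strong_ready(): output available without blocking. */
static int
ready(const struct drbg *d)
{
	return d->reseed_counter < NIST_CTR_DRBG_RESEED_INTERVAL;
}

/* Mirrors cprng_strong_deplete(): force a reseed before further output. */
static void
deplete(struct drbg *d)
{
	d->reseed_counter = NIST_CTR_DRBG_RESEED_INTERVAL + 1;
}
```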
void cprng_init(void);
int cprng_strong_getflags(cprng_strong_t *const);
void cprng_strong_setflags(cprng_strong_t *const, int);


@ -1,4 +1,4 @@
/* $NetBSD: param.h,v 1.397 2011/11/28 08:05:07 tls Exp $ */
/* $NetBSD: param.h,v 1.398 2011/12/17 20:05:40 tls Exp $ */
/*-
* Copyright (c) 1982, 1986, 1989, 1993
@ -63,7 +63,7 @@
* 2.99.9 (299000900)
*/
#define __NetBSD_Version__ 599005800 /* NetBSD 5.99.58 */
#define __NetBSD_Version__ 599005900 /* NetBSD 5.99.59 */
#define __NetBSD_Prereq__(M,m,p) (((((M) * 100000000) + \
(m) * 1000000) + (p) * 100) <= __NetBSD_Version__)


@ -1,4 +1,4 @@
/* $NetBSD: rnd.h,v 1.27 2011/12/17 12:59:21 apb Exp $ */
/* $NetBSD: rnd.h,v 1.28 2011/12/17 20:05:40 tls Exp $ */
/*-
* Copyright (c) 1997 The NetBSD Foundation, Inc.
@ -165,6 +165,8 @@ void rndsink_detach(rndsink_t *);
void rnd_seed(void *, size_t);
extern int rnd_full;
#endif /* _KERNEL */
#define RND_MAXSTATCOUNT 10 /* 10 sources at once max */