Rework rndsink(9) abstraction and adapt arc4random(9) and cprng(9).

rndsink(9):
- Simplify API.
- Simplify locking scheme.
- Add a man page.
- Avoid races in destruction.
- Avoid races in requesting entropy now and scheduling entropy later.

Periodic distribution of entropy to sinks reduces the need for the
last one, but this way we don't need to rely on periodic distribution
(e.g., in a future tickless NetBSD).

rndsinks_lock should probably eventually merge with the rndpool lock,
but we'll put that off for now.
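
For reference, the simplified API surface, as declared in the new
sys/sys/rndsink.h:

	struct rndsink *
		rndsink_create(size_t, rndsink_callback_t *, void *);
	void	rndsink_destroy(struct rndsink *);
	bool	rndsink_request(struct rndsink *, void *, size_t);
	void	rndsink_schedule(struct rndsink *);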

cprng(9):
- Make struct cprng_strong opaque.
- Move rndpseudo.c parts that futz with cprng guts to subr_cprng.c.
- Fix kevent locking.  (Is kevent locking documented anywhere?)
- Stub out rump cprng further until we can rumpify rndsink instead.
- Strip code to grovel through struct cprng_strong in fstat.
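
The /dev/random code now reaches the cprng through real entry points
instead of its guts (see the XXX-marked declarations in
sys/sys/cprng.h):

	int	cprng_strong_kqfilter(cprng_strong_t *, struct knote *);
	int	cprng_strong_poll(cprng_strong_t *, int);
	void	cprng_strong_deplete(cprng_strong_t *);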
Author: riastradh
Date:   2013-06-23 02:35:23 +00:00
Parent: 9f0b5be4ef
Commit: 6290b0987e

13 changed files with 1091 additions and 777 deletions

share/man/man9/rndsink.9 Normal file (147 lines)

@ -0,0 +1,147 @@
.\" $NetBSD: rndsink.9,v 1.1 2013/06/23 02:35:23 riastradh Exp $
.\"
.\" Copyright (c) 2013 The NetBSD Foundation, Inc.
.\" All rights reserved.
.\"
.\" This documentation is derived from text contributed to The NetBSD
.\" Foundation by Taylor R. Campbell.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
.\" PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.Dd April 10, 2013
.Dt RNDSINK 9
.Os
.Sh NAME
.Nm rndsink ,
.Nm rndsink_create ,
.Nm rndsink_destroy ,
.Nm rndsink_request ,
.Nm rndsink_schedule
.Nd functions to asynchronously request entropy from the system entropy pool
.Sh SYNOPSIS
.In sys/rndsink.h
.Ft struct rndsink *
.Fn rndsink_create "size_t bytes" "void (*callback)(void *, const void *, size_t)" "void *arg"
.Ft void
.Fn rndsink_destroy "struct rndsink *rndsink"
.Ft bool
.Fn rndsink_request "struct rndsink *rndsink" "void *buffer" "size_t bytes"
.Ft void
.Fn rndsink_schedule "struct rndsink *rndsink"
.Sh DESCRIPTION
The
.Nm
functions support asynchronous requests for entropy from the system
entropy pool.
Users must call
.Fn rndsink_create
to create an rndsink which they may then pass to
.Fn rndsink_request
to request data from the system entropy pool.
If full entropy is not available, the system will call the rndsink's
callback when enough entropy is next available.
Users can schedule the callback without requesting data immediately by
calling
.Fn rndsink_schedule .
When users no longer need an rndsink, they must pass it to
.Fn rndsink_destroy .
.Pp
This API provides direct access to the system entropy pool.
Most users should use the
.Xr cprng 9
API instead, which interposes a cryptographic pseudorandom number
generator between the user and the entropy pool.
.Sh FUNCTIONS
.Bl -tag -width abcd
.It Fn rndsink_create bytes callback arg
Create an rndsink for requests of
.Fa bytes
bytes of entropy, which must be no more than
.Dv RNDSINK_MAX_BYTES .
When entropy has been requested and enough is available, the system
will call
.Fa callback
with three arguments:
.Bl -item -offset indent
.It
.Fa arg ,
an arbitrary user-supplied pointer;
.It
a pointer to a buffer containing the bytes of entropy; and
.It
the number of bytes in the buffer, which will always be
.Fa bytes .
.El
.Pp
The callback will be called in soft interrupt context.
.Pp
.Fn rndsink_create
may sleep to allocate memory.
.It Fn rndsink_destroy rndsink
Destroy an rndsink.
.Fn rndsink_destroy
may sleep to wait for pending callbacks to complete and to deallocate
memory.
.It Fn rndsink_request rndsink buffer bytes
Store
.Fa bytes
bytes derived from the system entropy pool in
.Fa buffer .
If the bytes have full entropy, return true.
Otherwise, schedule a callback as if with
.Fn rndsink_schedule
and return false.
In either case,
.Fn rndsink_request
will store data in
.Fa buffer .
The argument
.Fa bytes
must be the same as the argument to
.Fn rndsink_create
that was used to create
.Fa rndsink .
May be called at
.Dv IPL_VM
or lower.
The caller should use
.Xr explicit_bzero 3
to clear
.Fa buffer
once it has used the data stored there.
.It Fn rndsink_schedule rndsink
Schedule a callback when the system entropy pool has enough entropy.
If a callback is already scheduled, it remains scheduled.
May be called at
.Dv IPL_VM
or lower.
.El
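.Sh EXAMPLES
The following sketch shows how a hypothetical driver might key and
rekey itself from the system entropy pool.
The
.Va mydriver
structure and its functions are illustrative only and are not part of
the
.Nm
API; a real driver would also need locking around its key material,
since the callback runs in soft interrupt context.
.Bd -literal -offset indent
struct mydriver {
	struct rndsink	*md_rndsink;
	uint8_t		 md_key[16];
};

/* Called in soft interrupt context when full entropy is available. */
static void
mydriver_rekey(void *arg, const void *seed, size_t bytes)
{
	struct mydriver *md = arg;

	KASSERT(bytes == sizeof(md->md_key));
	memcpy(md->md_key, seed, bytes);
}

void
mydriver_attach(struct mydriver *md)
{

	md->md_rndsink = rndsink_create(sizeof(md->md_key),
	    &mydriver_rekey, md);

	/*
	 * Key with whatever is available now; if it was not full
	 * entropy, mydriver_rekey will be called when it is.
	 */
	if (!rndsink_request(md->md_rndsink, md->md_key,
		sizeof(md->md_key)))
		printf("mydriver: keyed with partial entropy\en");
}

void
mydriver_detach(struct mydriver *md)
{

	rndsink_destroy(md->md_rndsink);
	explicit_bzero(md->md_key, sizeof(md->md_key));
}
.Ed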
.Sh CODE REFERENCES
The rndsink API is implemented in
.Pa sys/kern/kern_rndsink.c
and
.Pa sys/sys/rndsink.h .
.Sh SEE ALSO
.Xr explicit_bzero 3 ,
.Xr cprng 9 ,
.Xr rnd 9
.Sh HISTORY
The rndsink API first appeared in
.Nx 7.0 .

sys/conf/files

@ -1,4 +1,4 @@
# $NetBSD: files,v 1.1073 2013/06/10 20:28:36 christos Exp $
# $NetBSD: files,v 1.1074 2013/06/23 02:35:24 riastradh Exp $
# @(#)files.newconf 7.5 (Berkeley) 5/10/93
version 20100430
@ -1524,6 +1524,7 @@ file kern/kern_rate.c
file kern/kern_resource.c
file kern/kern_rndpool.c
file kern/kern_rndq.c
file kern/kern_rndsink.c
file kern/kern_runq.c
file kern/kern_rwlock.c
file kern/kern_rwlock_obj.c

sys/dev/rndpseudo.c

@ -1,4 +1,4 @@
/* $NetBSD: rndpseudo.c,v 1.12 2013/06/13 00:55:01 tls Exp $ */
/* $NetBSD: rndpseudo.c,v 1.13 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 1997-2011 The NetBSD Foundation, Inc.
@ -30,7 +30,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: rndpseudo.c,v 1.12 2013/06/13 00:55:01 tls Exp $");
__KERNEL_RCSID(0, "$NetBSD: rndpseudo.c,v 1.13 2013/06/23 02:35:24 riastradh Exp $");
#if defined(_KERNEL_OPT)
#include "opt_compat_netbsd.h"
@ -679,12 +679,10 @@ rnd_poll(struct file *fp, int events)
}
}
if (cprng_strong_ready(ctx->cprng)) {
revents |= events & (POLLIN | POLLRDNORM);
if (ctx->hard) {
revents |= cprng_strong_poll(ctx->cprng, events);
} else {
mutex_enter(&ctx->cprng->mtx);
selrecord(curlwp, &ctx->cprng->selq);
mutex_exit(&ctx->cprng->mtx);
revents |= (events & (POLLIN | POLLRDNORM));
}
return (revents);
@ -723,39 +721,10 @@ rnd_close(struct file *fp)
return 0;
}
static void
filt_rnddetach(struct knote *kn)
{
cprng_strong_t *c = kn->kn_hook;
mutex_enter(&c->mtx);
SLIST_REMOVE(&c->selq.sel_klist, kn, knote, kn_selnext);
mutex_exit(&c->mtx);
}
static int
filt_rndread(struct knote *kn, long hint)
{
cprng_strong_t *c = kn->kn_hook;
if (cprng_strong_ready(c)) {
kn->kn_data = RND_TEMP_BUFFER_SIZE;
return 1;
}
return 0;
}
static const struct filterops rnd_seltrue_filtops =
{ 1, NULL, filt_rnddetach, filt_seltrue };
static const struct filterops rndread_filtops =
{ 1, NULL, filt_rnddetach, filt_rndread };
static int
rnd_kqfilter(struct file *fp, struct knote *kn)
{
rp_ctx_t *ctx = fp->f_data;
struct klist *klist;
if (ctx->cprng == NULL) {
rnd_alloc_cprng(ctx);
@ -764,27 +733,5 @@ rnd_kqfilter(struct file *fp, struct knote *kn)
}
}
mutex_enter(&ctx->cprng->mtx);
switch (kn->kn_filter) {
case EVFILT_READ:
klist = &ctx->cprng->selq.sel_klist;
kn->kn_fop = &rndread_filtops;
break;
case EVFILT_WRITE:
klist = &ctx->cprng->selq.sel_klist;
kn->kn_fop = &rnd_seltrue_filtops;
break;
default:
mutex_exit(&ctx->cprng->mtx);
return EINVAL;
}
kn->kn_hook = ctx->cprng;
SLIST_INSERT_HEAD(klist, kn, kn_selnext);
mutex_exit(&ctx->cprng->mtx);
return (0);
return cprng_strong_kqfilter(ctx->cprng, kn);
}

sys/kern/kern_rndq.c

@ -1,4 +1,4 @@
/* $NetBSD: kern_rndq.c,v 1.13 2013/06/20 23:21:41 christos Exp $ */
/* $NetBSD: kern_rndq.c,v 1.14 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 1997-2013 The NetBSD Foundation, Inc.
@ -32,7 +32,7 @@
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: kern_rndq.c,v 1.13 2013/06/20 23:21:41 christos Exp $");
__KERNEL_RCSID(0, "$NetBSD: kern_rndq.c,v 1.14 2013/06/23 02:35:24 riastradh Exp $");
#include <sys/param.h>
#include <sys/ioctl.h>
@ -48,6 +48,7 @@ __KERNEL_RCSID(0, "$NetBSD: kern_rndq.c,v 1.13 2013/06/20 23:21:41 christos Exp
#include <sys/callout.h>
#include <sys/intr.h>
#include <sys/rnd.h>
#include <sys/rndsink.h>
#include <sys/vnode.h>
#include <sys/pool.h>
#include <sys/kauth.h>
@ -103,17 +104,6 @@ typedef struct _rnd_sample_t {
SIMPLEQ_HEAD(, _rnd_sample_t) rnd_samples;
kmutex_t rnd_mtx;
/*
* Entropy sinks: usually other generators waiting to be rekeyed.
*
* A sink's callback MUST NOT re-add the sink to the list, or
* list corruption will occur. The list is protected by the
* rndsink_mtx, which must be released before calling any sink's
* callback.
*/
TAILQ_HEAD(, rndsink) rnd_sinks;
kmutex_t rndsink_mtx;
/*
* Memory pool for sample buffers
*/
@ -240,7 +230,7 @@ rnd_schedule_wakeup(void)
/*
* Tell any sources with "feed me" callbacks that we are hungry.
*/
static void
void
rnd_getmore(size_t byteswanted)
{
krndsource_t *rs;
@ -266,85 +256,30 @@ rnd_getmore(size_t byteswanted)
void
rnd_wakeup_readers(void)
{
rndsink_t *sink, *tsink;
size_t entropy_count;
TAILQ_HEAD(, rndsink) sunk = TAILQ_HEAD_INITIALIZER(sunk);
/*
* XXX This bookkeeping shouldn't be here -- this is not where
* the rnd_empty/rnd_initial_entropy state change actually
* happens.
*/
mutex_spin_enter(&rndpool_mtx);
entropy_count = rndpool_get_entropy_count(&rnd_pool);
const size_t entropy_count = rndpool_get_entropy_count(&rnd_pool);
if (entropy_count < RND_ENTROPY_THRESHOLD * 8) {
rnd_empty = 1;
mutex_spin_exit(&rndpool_mtx);
return;
} else {
#ifdef RND_VERBOSE
if (__predict_false(!rnd_initial_entropy)) {
printf("rnd: have initial entropy (%u)\n",
(unsigned int)entropy_count);
}
if (__predict_false(!rnd_initial_entropy))
printf("rnd: have initial entropy (%zu)\n",
entropy_count);
#endif
rnd_empty = 0;
rnd_initial_entropy = 1;
}
/*
* First, take care of consumers needing rekeying.
*/
mutex_spin_enter(&rndsink_mtx);
TAILQ_FOREACH_SAFE(sink, &rnd_sinks, tailq, tsink) {
if (!mutex_tryenter(&sink->mtx)) {
#ifdef RND_VERBOSE
printf("rnd_wakeup_readers: "
"skipping busy rndsink\n");
#endif
continue;
}
KASSERT(RSTATE_PENDING == sink->state);
if (sink->len * 8 < rndpool_get_entropy_count(&rnd_pool)) {
/* We have enough entropy to sink some here. */
if (rndpool_extract_data(&rnd_pool, sink->data,
sink->len, RND_EXTRACT_GOOD)
!= sink->len) {
panic("could not extract estimated "
"entropy from pool");
}
sink->state = RSTATE_HASBITS;
/* Move this sink to the list of pending callbacks */
TAILQ_REMOVE(&rnd_sinks, sink, tailq);
TAILQ_INSERT_HEAD(&sunk, sink, tailq);
} else {
#ifdef RND_VERBOSE
printf("sink wants %d, we have %d, asking for more\n",
(int)sink->len * 8,
(int)rndpool_get_entropy_count(&rnd_pool));
#endif
mutex_spin_exit(&sink->mtx);
rnd_getmore(sink->len * 8);
}
}
mutex_spin_exit(&rndsink_mtx);
mutex_spin_exit(&rndpool_mtx);
/*
* Now that we have dropped the mutexes, we can run sinks' callbacks.
* Since we have reused the "tailq" member of the sink structure for
* this temporary on-stack queue, the callback must NEVER re-add
* the sink to the main queue, or our on-stack queue will become
* corrupt.
*/
while ((sink = TAILQ_FIRST(&sunk))) {
#ifdef RND_VERBOSE
printf("supplying %d bytes to entropy sink \"%s\""
" (cb %p, arg %p).\n",
(int)sink->len, sink->name, sink->cb, sink->arg);
#endif
sink->state = RSTATE_HASBITS;
sink->cb(sink->arg);
TAILQ_REMOVE(&sunk, sink, tailq);
mutex_spin_exit(&sink->mtx);
}
rndsinks_distribute();
}
/*
@ -445,7 +380,7 @@ rnd_init(void)
return;
mutex_init(&rnd_mtx, MUTEX_DEFAULT, IPL_VM);
mutex_init(&rndsink_mtx, MUTEX_DEFAULT, IPL_VM);
rndsinks_init();
/*
* take a counter early, hoping that there's some variance in
@ -455,7 +390,6 @@ rnd_init(void)
LIST_INIT(&rnd_sources);
SIMPLEQ_INIT(&rnd_samples);
TAILQ_INIT(&rnd_sinks);
rndpool_init(&rnd_pool);
mutex_init(&rndpool_mtx, MUTEX_DEFAULT, IPL_VM);
@ -900,7 +834,6 @@ rnd_process_events(void)
SIMPLEQ_HEAD_INITIALIZER(dq_samples);
SIMPLEQ_HEAD(, _rnd_sample_t) df_samples =
SIMPLEQ_HEAD_INITIALIZER(df_samples);
TAILQ_HEAD(, rndsink) sunk = TAILQ_HEAD_INITIALIZER(sunk);
/*
* Sample queue is protected by rnd_mtx, drain to onstack queue
@ -1110,42 +1043,6 @@ rnd_extract_data(void *p, u_int32_t len, u_int32_t flags)
return retval;
}
void
rndsink_attach(rndsink_t *rs)
{
#ifdef RND_VERBOSE
printf("rnd: entropy sink \"%s\" wants %d bytes of data.\n",
rs->name, (int)rs->len);
#endif
KASSERT(mutex_owned(&rs->mtx));
KASSERT(rs->state = RSTATE_PENDING);
mutex_spin_enter(&rndsink_mtx);
TAILQ_INSERT_TAIL(&rnd_sinks, rs, tailq);
mutex_spin_exit(&rndsink_mtx);
rnd_schedule_process();
}
void
rndsink_detach(rndsink_t *rs)
{
rndsink_t *sink, *tsink;
#ifdef RND_VERBOSE
printf("rnd: entropy sink \"%s\" no longer wants data.\n", rs->name);
#endif
KASSERT(mutex_owned(&rs->mtx));
mutex_spin_enter(&rndsink_mtx);
TAILQ_FOREACH_SAFE(sink, &rnd_sinks, tailq, tsink) {
if (sink == rs) {
TAILQ_REMOVE(&rnd_sinks, rs, tailq);
}
}
mutex_spin_exit(&rndsink_mtx);
}
void
rnd_seed(void *base, size_t len)
{

sys/kern/kern_rndsink.c Normal file (321 lines)

@ -0,0 +1,321 @@
/* $NetBSD: kern_rndsink.c,v 1.1 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2013 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Taylor R. Campbell.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: kern_rndsink.c,v 1.1 2013/06/23 02:35:24 riastradh Exp $");
#include <sys/param.h>
#include <sys/types.h>
#include <sys/condvar.h>
#include <sys/kmem.h>
#include <sys/mutex.h>
#include <sys/queue.h>
#include <sys/rnd.h>
#include <sys/rndsink.h>
#include <dev/rnd_private.h> /* XXX provisional, for rnd_extract_data */
struct rndsink {
/* Callback state. */
enum {
RNDSINK_IDLE, /* no callback in progress */
RNDSINK_QUEUED, /* queued for callback */
RNDSINK_IN_FLIGHT, /* callback called */
RNDSINK_REQUEUED, /* queued again before callback done */
RNDSINK_DEAD, /* destroyed */
} rsink_state;
/* Entry on the queue of rndsinks, iff in the RNDSINK_QUEUED state. */
TAILQ_ENTRY(rndsink) rsink_entry;
/*
* Notifies rndsink_destroy when rsink_state transitions to
* RNDSINK_IDLE or RNDSINK_QUEUED.
*/
kcondvar_t rsink_cv;
/* rndsink_create parameters. */
unsigned int rsink_bytes;
rndsink_callback_t *rsink_callback;
void *rsink_arg;
};
static kmutex_t rndsinks_lock __cacheline_aligned;
static TAILQ_HEAD(, rndsink) rndsinks = TAILQ_HEAD_INITIALIZER(rndsinks);
void
rndsinks_init(void)
{
/*
* This mutex must be at an ipl as high as the highest ipl of
* anyone who wants to call rndsink_request.
*
* XXX Call this IPL_RND, perhaps.
*/
mutex_init(&rndsinks_lock, MUTEX_DEFAULT, IPL_VM);
}
/*
* XXX Provisional -- rndpool_extract and rndpool_maybe_extract should
* move into kern_rndpool.c.
*/
extern rndpool_t rnd_pool;
extern kmutex_t rndpool_mtx;
/*
* Fill the buffer with as much entropy as we can. Return true if it
* has full entropy and false if not.
*/
static bool
rndpool_extract(void *buffer, size_t bytes)
{
const size_t extracted = rnd_extract_data(buffer, bytes,
RND_EXTRACT_GOOD);
if (extracted < bytes) {
(void)rnd_extract_data((uint8_t *)buffer + extracted,
bytes - extracted, RND_EXTRACT_ANY);
mutex_spin_enter(&rndpool_mtx);
rnd_getmore(bytes - extracted);
mutex_spin_exit(&rndpool_mtx);
return false;
}
return true;
}
/*
* If we have as much entropy as is requested, fill the buffer with it
* and return true. Otherwise, leave the buffer alone and return
* false.
*/
static bool
rndpool_maybe_extract(void *buffer, size_t bytes)
{
bool ok;
KASSERT(bytes <= RNDSINK_MAX_BYTES);
CTASSERT(RND_ENTROPY_THRESHOLD <= 0xffffffffUL);
CTASSERT(RNDSINK_MAX_BYTES <= (0xffffffffUL - RND_ENTROPY_THRESHOLD));
CTASSERT((RNDSINK_MAX_BYTES + RND_ENTROPY_THRESHOLD) <=
(0xffffffffUL / NBBY));
const uint32_t bits_needed = ((bytes + RND_ENTROPY_THRESHOLD) * NBBY);
mutex_spin_enter(&rndpool_mtx);
if (bits_needed <= rndpool_get_entropy_count(&rnd_pool)) {
const uint32_t extracted __unused =
rndpool_extract_data(&rnd_pool, buffer, bytes,
RND_EXTRACT_GOOD);
KASSERT(extracted == bytes);
ok = true;
} else {
ok = false;
rnd_getmore(howmany(bits_needed -
rndpool_get_entropy_count(&rnd_pool), NBBY));
}
mutex_spin_exit(&rndpool_mtx);
return ok;
}
void
rndsinks_distribute(void)
{
uint8_t buffer[RNDSINK_MAX_BYTES];
struct rndsink *rndsink;
explicit_bzero(buffer, sizeof(buffer)); /* paranoia */
mutex_spin_enter(&rndsinks_lock);
while ((rndsink = TAILQ_FIRST(&rndsinks)) != NULL) {
KASSERT(rndsink->rsink_state == RNDSINK_QUEUED);
/* Bail if we can't get some entropy for this rndsink. */
if (!rndpool_maybe_extract(buffer, rndsink->rsink_bytes))
break;
/*
* Got some entropy. Take the sink off the queue and
* feed the entropy to the callback, with rndsinks_lock
* dropped. While running the callback, lock out
* rndsink_destroy by marking the sink in flight.
*/
TAILQ_REMOVE(&rndsinks, rndsink, rsink_entry);
rndsink->rsink_state = RNDSINK_IN_FLIGHT;
mutex_spin_exit(&rndsinks_lock);
(*rndsink->rsink_callback)(rndsink->rsink_arg, buffer,
rndsink->rsink_bytes);
explicit_bzero(buffer, rndsink->rsink_bytes);
mutex_spin_enter(&rndsinks_lock);
/*
* If, while the callback was running, anyone requested
* it be queued up again, do so now. Otherwise, idle.
* Either way, it is now safe to destroy, so wake the
* pending rndsink_destroy, if there is one.
*/
if (rndsink->rsink_state == RNDSINK_REQUEUED) {
TAILQ_INSERT_TAIL(&rndsinks, rndsink, rsink_entry);
rndsink->rsink_state = RNDSINK_QUEUED;
} else {
KASSERT(rndsink->rsink_state == RNDSINK_IN_FLIGHT);
rndsink->rsink_state = RNDSINK_IDLE;
}
cv_broadcast(&rndsink->rsink_cv);
}
mutex_spin_exit(&rndsinks_lock);
explicit_bzero(buffer, sizeof(buffer)); /* paranoia */
}
static void
rndsinks_enqueue(struct rndsink *rndsink)
{
KASSERT(mutex_owned(&rndsinks_lock));
/* XXX Kick any on-demand entropy sources too. */
switch (rndsink->rsink_state) {
case RNDSINK_IDLE:
/* Not on the queue and nobody is handling it. */
TAILQ_INSERT_TAIL(&rndsinks, rndsink, rsink_entry);
rndsink->rsink_state = RNDSINK_QUEUED;
break;
case RNDSINK_QUEUED:
/* Already on the queue. */
break;
case RNDSINK_IN_FLIGHT:
/* Someone is handling it. Ask to queue it up again. */
rndsink->rsink_state = RNDSINK_REQUEUED;
break;
case RNDSINK_REQUEUED:
/* Already asked to queue it up again. */
break;
case RNDSINK_DEAD:
panic("requesting entropy from dead rndsink: %p", rndsink);
default:
panic("rndsink %p in unknown state: %d", rndsink,
(int)rndsink->rsink_state);
}
}
struct rndsink *
rndsink_create(size_t bytes, rndsink_callback_t *callback, void *arg)
{
struct rndsink *const rndsink = kmem_alloc(sizeof(*rndsink), KM_SLEEP);
KASSERT(bytes <= RNDSINK_MAX_BYTES);
rndsink->rsink_state = RNDSINK_IDLE;
cv_init(&rndsink->rsink_cv, "rndsink");
rndsink->rsink_bytes = bytes;
rndsink->rsink_callback = callback;
rndsink->rsink_arg = arg;
return rndsink;
}
void
rndsink_destroy(struct rndsink *rndsink)
{
/*
* Make sure the rndsink is off the queue, and if it's already
* in flight, wait for the callback to complete.
*/
mutex_spin_enter(&rndsinks_lock);
while (rndsink->rsink_state != RNDSINK_IDLE) {
switch (rndsink->rsink_state) {
case RNDSINK_QUEUED:
TAILQ_REMOVE(&rndsinks, rndsink, rsink_entry);
rndsink->rsink_state = RNDSINK_IDLE;
break;
case RNDSINK_IN_FLIGHT:
case RNDSINK_REQUEUED:
cv_wait(&rndsink->rsink_cv, &rndsinks_lock);
break;
case RNDSINK_DEAD:
panic("destroying dead rndsink: %p", rndsink);
default:
panic("rndsink %p in unknown state: %d", rndsink,
(int)rndsink->rsink_state);
}
}
rndsink->rsink_state = RNDSINK_DEAD;
mutex_spin_exit(&rndsinks_lock);
cv_destroy(&rndsink->rsink_cv);
kmem_free(rndsink, sizeof(*rndsink));
}
void
rndsink_schedule(struct rndsink *rndsink)
{
/* Optimistically check without the lock whether we're queued. */
if ((rndsink->rsink_state != RNDSINK_QUEUED) &&
(rndsink->rsink_state != RNDSINK_REQUEUED)) {
mutex_spin_enter(&rndsinks_lock);
rndsinks_enqueue(rndsink);
mutex_spin_exit(&rndsinks_lock);
}
}
bool
rndsink_request(struct rndsink *rndsink, void *buffer, size_t bytes)
{
KASSERT(bytes == rndsink->rsink_bytes);
mutex_spin_enter(&rndsinks_lock);
const bool full_entropy = rndpool_extract(buffer, bytes);
if (!full_entropy)
rndsinks_enqueue(rndsink);
mutex_spin_exit(&rndsinks_lock);
return full_entropy;
}

sys/kern/subr_cprng.c

@ -1,11 +1,11 @@
/* $NetBSD: subr_cprng.c,v 1.17 2013/06/13 00:55:01 tls Exp $ */
/* $NetBSD: subr_cprng.c,v 1.18 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
* Copyright (c) 2011-2013 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Thor Lancelot Simon.
* by Thor Lancelot Simon and Taylor R. Campbell.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@ -29,24 +29,42 @@
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/types.h>
#include <sys/time.h>
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: subr_cprng.c,v 1.18 2013/06/23 02:35:24 riastradh Exp $");
#include <sys/param.h>
#include <sys/types.h>
#include <sys/condvar.h>
#include <sys/cprng.h>
#include <sys/errno.h>
#include <sys/event.h> /* XXX struct knote */
#include <sys/fcntl.h> /* XXX FNONBLOCK */
#include <sys/kernel.h>
#include <sys/systm.h>
#include <sys/kmem.h>
#include <sys/mutex.h>
#include <sys/rngtest.h>
#include <sys/poll.h> /* XXX POLLIN/POLLOUT/&c. */
#include <sys/select.h>
#include <sys/systm.h>
#include <sys/rnd.h>
#include <dev/rnd_private.h>
#include <sys/rndsink.h>
#if DEBUG
#include <sys/rngtest.h>
#endif
#include <crypto/nist_ctr_drbg/nist_ctr_drbg.h>
#if defined(__HAVE_CPU_COUNTER)
#include <machine/cpu_counter.h>
#endif
#include <sys/cprng.h>
static void cprng_strong_generate(struct cprng_strong *, void *, size_t);
static void cprng_strong_reseed(struct cprng_strong *);
static void cprng_strong_reseed_from(struct cprng_strong *, const void *,
size_t, bool);
#if DEBUG
static void cprng_strong_rngtest(struct cprng_strong *);
#endif
__KERNEL_RCSID(0, "$NetBSD: subr_cprng.c,v 1.17 2013/06/13 00:55:01 tls Exp $");
static rndsink_callback_t cprng_strong_rndsink_callback;
void
cprng_init(void)
@ -71,343 +89,378 @@ cprng_counter(void)
return (tv.tv_sec * 1000000 + tv.tv_usec);
}
static void
cprng_strong_doreseed(cprng_strong_t *const c)
struct cprng_strong {
char cs_name[16];
int cs_flags;
kmutex_t cs_lock;
kcondvar_t cs_cv;
struct selinfo cs_selq;
struct rndsink *cs_rndsink;
bool cs_ready;
NIST_CTR_DRBG cs_drbg;
};
struct cprng_strong *
cprng_strong_create(const char *name, int ipl, int flags)
{
uint32_t cc = cprng_counter();
const uint32_t cc = cprng_counter();
struct cprng_strong *const cprng = kmem_alloc(sizeof(*cprng),
KM_SLEEP);
KASSERT(mutex_owned(&c->mtx));
KASSERT(mutex_owned(&c->reseed.mtx));
KASSERT(c->reseed.len == NIST_BLOCK_KEYLEN_BYTES);
if (nist_ctr_drbg_reseed(&c->drbg, c->reseed.data, c->reseed.len,
&cc, sizeof(cc))) {
panic("cprng %s: nist_ctr_drbg_reseed failed.", c->name);
}
memset(c->reseed.data, 0, c->reseed.len);
#ifdef RND_VERBOSE
printf("cprng %s: reseeded with rnd_filled = %d\n", c->name,
rnd_filled);
#endif
c->entropy_serial = rnd_filled;
c->reseed.state = RSTATE_IDLE;
if (c->flags & CPRNG_USE_CV) {
cv_broadcast(&c->cv);
}
selnotify(&c->selq, 0, 0);
}
static void
cprng_strong_sched_reseed(cprng_strong_t *const c)
{
KASSERT(mutex_owned(&c->mtx));
if (mutex_tryenter(&c->reseed.mtx)) {
switch (c->reseed.state) {
case RSTATE_IDLE:
c->reseed.state = RSTATE_PENDING;
c->reseed.len = NIST_BLOCK_KEYLEN_BYTES;
rndsink_attach(&c->reseed);
break;
case RSTATE_HASBITS:
/* Just rekey the underlying generator now. */
cprng_strong_doreseed(c);
break;
case RSTATE_PENDING:
if (c->entropy_serial != rnd_filled) {
rndsink_detach(&c->reseed);
rndsink_attach(&c->reseed);
}
break;
default:
panic("cprng %s: bad reseed state %d",
c->name, c->reseed.state);
break;
}
mutex_spin_exit(&c->reseed.mtx);
}
#ifdef RND_VERBOSE
else {
printf("cprng %s: skipping sched_reseed, sink busy\n",
c->name);
}
#endif
}
static void
cprng_strong_reseed(void *const arg)
{
cprng_strong_t *c = arg;
KASSERT(mutex_owned(&c->reseed.mtx));
KASSERT(RSTATE_HASBITS == c->reseed.state);
if (!mutex_tryenter(&c->mtx)) {
#ifdef RND_VERBOSE
printf("cprng: sink %s cprng busy, no reseed\n", c->reseed.name);
#endif
if (c->flags & CPRNG_USE_CV) { /* XXX if flags change? */
cv_broadcast(&c->cv);
}
return;
}
cprng_strong_doreseed(c);
mutex_exit(&c->mtx);
}
static size_t
cprng_entropy_try(uint8_t *key, size_t keylen)
{
int r;
r = rnd_extract_data(key, keylen, RND_EXTRACT_GOOD);
if (r != keylen) { /* Always fill in, for safety */
rnd_extract_data(key + r, keylen - r, RND_EXTRACT_ANY);
}
return r;
}
cprng_strong_t *
cprng_strong_create(const char *const name, int ipl, int flags)
{
cprng_strong_t *c;
uint8_t key[NIST_BLOCK_KEYLEN_BYTES];
int r, getmore = 0, hard = 0;
uint32_t cc;
c = kmem_alloc(sizeof(*c), KM_NOSLEEP);
if (c == NULL) {
return NULL;
}
c->flags = flags;
strlcpy(c->name, name, sizeof(c->name));
c->reseed.state = RSTATE_IDLE;
c->reseed.cb = cprng_strong_reseed;
c->reseed.arg = c;
c->entropy_serial = rnd_initial_entropy ? rnd_filled : -1;
mutex_init(&c->reseed.mtx, MUTEX_DEFAULT, IPL_VM);
strlcpy(c->reseed.name, name, sizeof(c->reseed.name));
mutex_init(&c->mtx, MUTEX_DEFAULT, ipl);
if (c->flags & CPRNG_USE_CV) {
cv_init(&c->cv, (const char *)c->name);
}
selinit(&c->selq);
r = cprng_entropy_try(key, sizeof(key));
if (r != sizeof(key)) {
if (c->flags & CPRNG_INIT_ANY) {
#ifdef DEBUG
/*
* If we have ever crossed the pool's
* minimum-entropy threshold, then we are
* providing cryptographically strong
* random output -- if not information-
* theoretically strong. Warn elsewise.
*/
if (!rnd_initial_entropy) {
printf("cprng %s: WARNING insufficient "
"entropy at creation.\n", name);
}
#endif
} else {
hard++;
}
getmore++;
}
if (nist_ctr_drbg_instantiate(&c->drbg, key, sizeof(key),
&cc, sizeof(cc), name, strlen(name))) {
panic("cprng %s: instantiation failed.", name);
}
if (getmore) {
/* Cause readers to wait for rekeying. */
if (hard) {
c->drbg.reseed_counter =
NIST_CTR_DRBG_RESEED_INTERVAL + 1;
} else {
c->drbg.reseed_counter =
(NIST_CTR_DRBG_RESEED_INTERVAL / 2) + 1;
}
}
return c;
}
size_t
cprng_strong(cprng_strong_t *const c, void *const p, size_t len, int flags)
{
uint32_t cc = cprng_counter();
#ifdef DEBUG
int testfail = 0;
#endif
if (len > CPRNG_MAX_LEN) { /* XXX should we loop? */
len = CPRNG_MAX_LEN; /* let the caller loop if desired */
}
mutex_enter(&c->mtx);
/* If we were initialized with the pool empty, rekey ASAP */
if (__predict_false(c->entropy_serial == -1) && rnd_initial_entropy) {
c->entropy_serial = 0;
goto rekeyany; /* We have _some_ entropy, use it. */
}
if (nist_ctr_drbg_generate(&c->drbg, p, len, &cc, sizeof(cc))) {
/* A generator failure really means we hit the hard limit. */
rekeyany:
if (c->flags & CPRNG_REKEY_ANY) {
uint8_t key[NIST_BLOCK_KEYLEN_BYTES];
if (cprng_entropy_try(key, sizeof(key)) !=
sizeof(key)) {
printf("cprng %s: WARNING "
"pseudorandom rekeying.\n", c->name);
}
cc = cprng_counter();
if (nist_ctr_drbg_reseed(&c->drbg, key, sizeof(key),
&cc, sizeof(cc))) {
panic("cprng %s: nist_ctr_drbg_reseed "
"failed.", c->name);
}
memset(key, 0, sizeof(key));
} else {
int wr;
do {
cprng_strong_sched_reseed(c);
if ((flags & FNONBLOCK) ||
!(c->flags & CPRNG_USE_CV)) {
len = 0;
break;
}
/*
* XXX There's a race with the cv_broadcast
* XXX in cprng_strong_sched_reseed, because
* XXX of the use of tryenter in that function.
* XXX This "timedwait" hack works around it,
* XXX at the expense of occasionally polling
* XXX for success on a /dev/random rekey.
*/
wr = cv_timedwait_sig(&c->cv, &c->mtx,
mstohz(100));
if (wr == ERESTART) {
mutex_exit(&c->mtx);
return 0;
}
} while (nist_ctr_drbg_generate(&c->drbg, p,
len, &cc,
sizeof(cc)));
}
}
#ifdef DEBUG
/*
* If the generator has just been keyed, perform
* the statistical RNG test.
* rndsink_request takes a spin lock at IPL_VM, so we can be no
* higher than that.
*/
if (__predict_false(c->drbg.reseed_counter == 1) &&
(flags & FASYNC) == 0) {
rngtest_t *rt = kmem_intr_alloc(sizeof(*rt), KM_NOSLEEP);
KASSERT(ipl <= IPL_VM);
if (rt) {
/* Initialize the easy fields. */
(void)strlcpy(cprng->cs_name, name, sizeof(cprng->cs_name));
cprng->cs_flags = flags;
mutex_init(&cprng->cs_lock, MUTEX_DEFAULT, ipl);
cv_init(&cprng->cs_cv, cprng->cs_name);
selinit(&cprng->cs_selq);
cprng->cs_rndsink = rndsink_create(NIST_BLOCK_KEYLEN_BYTES,
&cprng_strong_rndsink_callback, cprng);
strncpy(rt->rt_name, c->name, sizeof(rt->rt_name));
/* Get some initial entropy. Record whether it is full entropy. */
uint8_t seed[NIST_BLOCK_KEYLEN_BYTES];
cprng->cs_ready = rndsink_request(cprng->cs_rndsink, seed,
sizeof(seed));
if (nist_ctr_drbg_instantiate(&cprng->cs_drbg, seed, sizeof(seed),
&cc, sizeof(cc), cprng->cs_name, sizeof(cprng->cs_name)))
/* XXX Fix nist_ctr_drbg API so this can't happen. */
panic("cprng %s: NIST CTR_DRBG instantiation failed",
cprng->cs_name);
explicit_bzero(seed, sizeof(seed));
if (nist_ctr_drbg_generate(&c->drbg, rt->rt_b,
sizeof(rt->rt_b), NULL, 0)) {
panic("cprng %s: nist_ctr_drbg_generate "
"failed!", c->name);
}
testfail = rngtest(rt);
if (!cprng->cs_ready && !ISSET(flags, CPRNG_INIT_ANY))
printf("cprng %s: creating with partial entropy\n",
cprng->cs_name);
if (testfail) {
printf("cprng %s: failed statistical RNG "
"test.\n", c->name);
c->drbg.reseed_counter =
NIST_CTR_DRBG_RESEED_INTERVAL + 1;
len = 0;
}
memset(rt, 0, sizeof(*rt));
kmem_intr_free(rt, sizeof(*rt));
}
}
#endif
if (__predict_false(c->drbg.reseed_counter >
(NIST_CTR_DRBG_RESEED_INTERVAL / 2))) {
cprng_strong_sched_reseed(c);
} else if (rnd_full) {
if (c->entropy_serial != rnd_filled) {
#ifdef RND_VERBOSE
printf("cprng %s: reseeding from full pool "
"(serial %d vs pool %d)\n", c->name,
c->entropy_serial, rnd_filled);
#endif
cprng_strong_sched_reseed(c);
}
}
mutex_exit(&c->mtx);
return len;
return cprng;
}
void
cprng_strong_destroy(cprng_strong_t *c)
cprng_strong_destroy(struct cprng_strong *cprng)
{
mutex_enter(&c->mtx);
mutex_spin_enter(&c->reseed.mtx);
if (c->flags & CPRNG_USE_CV) {
KASSERT(!cv_has_waiters(&c->cv));
cv_destroy(&c->cv);
/*
* Destroy the rndsink first to prevent calls to the callback.
*/
rndsink_destroy(cprng->cs_rndsink);
KASSERT(!cv_has_waiters(&cprng->cs_cv));
#if 0
KASSERT(!select_has_waiters(&cprng->cs_selq)) /* XXX ? */
#endif
nist_ctr_drbg_destroy(&cprng->cs_drbg);
seldestroy(&cprng->cs_selq);
cv_destroy(&cprng->cs_cv);
mutex_destroy(&cprng->cs_lock);
explicit_bzero(cprng, sizeof(*cprng)); /* paranoia */
kmem_free(cprng, sizeof(*cprng));
}
/*
* Generate some data from cprng. Block or return zero bytes,
* depending on flags & FNONBLOCK, if cprng was created without
* CPRNG_REKEY_ANY.
*/
size_t
cprng_strong(struct cprng_strong *cprng, void *buffer, size_t bytes, int flags)
{
size_t result;
/* Caller must loop for more than CPRNG_MAX_LEN bytes. */
bytes = MIN(bytes, CPRNG_MAX_LEN);
mutex_enter(&cprng->cs_lock);
if (ISSET(cprng->cs_flags, CPRNG_REKEY_ANY)) {
if (!cprng->cs_ready)
cprng_strong_reseed(cprng);
} else {
while (!cprng->cs_ready) {
if (ISSET(flags, FNONBLOCK) ||
!ISSET(cprng->cs_flags, CPRNG_USE_CV) ||
cv_wait_sig(&cprng->cs_cv, &cprng->cs_lock)) {
result = 0;
goto out;
}
}
}
seldestroy(&c->selq);
if (RSTATE_PENDING == c->reseed.state) {
rndsink_detach(&c->reseed);
cprng_strong_generate(cprng, buffer, bytes);
result = bytes;
out: mutex_exit(&cprng->cs_lock);
return result;
}
static void filt_cprng_detach(struct knote *);
static int filt_cprng_event(struct knote *, long);
static const struct filterops cprng_filtops =
{ 1, NULL, filt_cprng_detach, filt_cprng_event };
int
cprng_strong_kqfilter(struct cprng_strong *cprng, struct knote *kn)
{
switch (kn->kn_filter) {
case EVFILT_READ:
kn->kn_fop = &cprng_filtops;
kn->kn_hook = cprng;
mutex_enter(&cprng->cs_lock);
SLIST_INSERT_HEAD(&cprng->cs_selq.sel_klist, kn, kn_selnext);
mutex_exit(&cprng->cs_lock);
return 0;
case EVFILT_WRITE:
default:
return EINVAL;
}
mutex_spin_exit(&c->reseed.mtx);
mutex_destroy(&c->reseed.mtx);
}
nist_ctr_drbg_destroy(&c->drbg);
static void
filt_cprng_detach(struct knote *kn)
{
struct cprng_strong *const cprng = kn->kn_hook;
mutex_exit(&c->mtx);
mutex_destroy(&c->mtx);
mutex_enter(&cprng->cs_lock);
SLIST_REMOVE(&cprng->cs_selq.sel_klist, kn, knote, kn_selnext);
mutex_exit(&cprng->cs_lock);
}
memset(c, 0, sizeof(*c));
kmem_free(c, sizeof(*c));
static int
filt_cprng_event(struct knote *kn, long hint)
{
struct cprng_strong *const cprng = kn->kn_hook;
int ret;
if (hint == NOTE_SUBMIT)
KASSERT(mutex_owned(&cprng->cs_lock));
else
mutex_enter(&cprng->cs_lock);
if (cprng->cs_ready) {
kn->kn_data = CPRNG_MAX_LEN; /* XXX Too large? */
ret = 1;
} else {
ret = 0;
}
if (hint == NOTE_SUBMIT)
KASSERT(mutex_owned(&cprng->cs_lock));
else
mutex_exit(&cprng->cs_lock);
return ret;
}
int
cprng_strong_getflags(cprng_strong_t *const c)
cprng_strong_poll(struct cprng_strong *cprng, int events)
{
KASSERT(mutex_owned(&c->mtx));
return c->flags;
int revents;
if (!ISSET(events, (POLLIN | POLLRDNORM)))
return 0;
mutex_enter(&cprng->cs_lock);
if (cprng->cs_ready) {
revents = (events & (POLLIN | POLLRDNORM));
} else {
selrecord(curlwp, &cprng->cs_selq);
revents = 0;
}
mutex_exit(&cprng->cs_lock);
return revents;
}
/*
* XXX Kludge for the current /dev/random implementation.
*/
void
cprng_strong_setflags(cprng_strong_t *const c, int flags)
cprng_strong_deplete(struct cprng_strong *cprng)
{
KASSERT(mutex_owned(&c->mtx));
if (flags & CPRNG_USE_CV) {
if (!(c->flags & CPRNG_USE_CV)) {
cv_init(&c->cv, (const char *)c->name);
mutex_enter(&cprng->cs_lock);
cprng->cs_ready = false;
rndsink_schedule(cprng->cs_rndsink);
mutex_exit(&cprng->cs_lock);
}
/*
* XXX Move nist_ctr_drbg_reseed_advised_p and
* nist_ctr_drbg_reseed_needed_p into the nist_ctr_drbg API and make
* the NIST_CTR_DRBG structure opaque.
*/
static bool
nist_ctr_drbg_reseed_advised_p(NIST_CTR_DRBG *drbg)
{
return (drbg->reseed_counter > (NIST_CTR_DRBG_RESEED_INTERVAL / 2));
}
static bool
nist_ctr_drbg_reseed_needed_p(NIST_CTR_DRBG *drbg)
{
return (drbg->reseed_counter >= NIST_CTR_DRBG_RESEED_INTERVAL);
}
/*
* Generate some data from the underlying generator.
*/
static void
cprng_strong_generate(struct cprng_strong *cprng, void *buffer,
size_t bytes)
{
const uint32_t cc = cprng_counter();
KASSERT(bytes <= CPRNG_MAX_LEN);
KASSERT(mutex_owned(&cprng->cs_lock));
/*
* Generate some data from the NIST CTR_DRBG. Caller
* guarantees reseed if we're not ready, and if we exhaust the
* generator, we mark ourselves not ready. Consequently, this
* call to the CTR_DRBG should not fail.
*/
if (__predict_false(nist_ctr_drbg_generate(&cprng->cs_drbg, buffer,
bytes, &cc, sizeof(cc))))
panic("cprng %s: NIST CTR_DRBG failed", cprng->cs_name);
/*
* If we've been seeing a lot of use, ask for some fresh
* entropy soon.
*/
if (__predict_false(nist_ctr_drbg_reseed_advised_p(&cprng->cs_drbg)))
rndsink_schedule(cprng->cs_rndsink);
/*
* If we just exhausted the generator, inform the next user
* that we need a reseed.
*
* XXX For /dev/random CPRNGs, the criterion is supposed to be
* `Has this seeding generated 32 bytes?'.
*/
if (__predict_false(nist_ctr_drbg_reseed_needed_p(&cprng->cs_drbg))) {
cprng->cs_ready = false;
rndsink_schedule(cprng->cs_rndsink); /* paranoia */
}
}
/*
* Reseed with whatever we can get from the system entropy pool right now.
*/
static void
cprng_strong_reseed(struct cprng_strong *cprng)
{
uint8_t seed[NIST_BLOCK_KEYLEN_BYTES];
KASSERT(mutex_owned(&cprng->cs_lock));
const bool full_entropy = rndsink_request(cprng->cs_rndsink, seed,
sizeof(seed));
cprng_strong_reseed_from(cprng, seed, sizeof(seed), full_entropy);
explicit_bzero(seed, sizeof(seed));
}
/*
* Reseed with the given seed. If we now have full entropy, notify waiters.
*/
static void
cprng_strong_reseed_from(struct cprng_strong *cprng,
const void *seed, size_t bytes, bool full_entropy)
{
const uint32_t cc = cprng_counter();
KASSERT(bytes == NIST_BLOCK_KEYLEN_BYTES);
KASSERT(mutex_owned(&cprng->cs_lock));
/*
* Notify anyone interested in the partiality of entropy in our
* seed -- anyone waiting for full entropy, or any system
* operators interested in knowing when the entropy pool is
* running on fumes.
*/
if (full_entropy) {
if (!cprng->cs_ready) {
cprng->cs_ready = true;
cv_broadcast(&cprng->cs_cv);
selnotify(&cprng->cs_selq, (POLLIN | POLLRDNORM),
NOTE_SUBMIT);
}
} else {
if (c->flags & CPRNG_USE_CV) {
KASSERT(!cv_has_waiters(&c->cv));
cv_destroy(&c->cv);
}
/*
* XXX Is there any harm in reseeding with partial
* entropy when we had full entropy before? If so,
* remove the conditional on this message.
*/
if (!cprng->cs_ready &&
!ISSET(cprng->cs_flags, CPRNG_REKEY_ANY))
printf("cprng %s: reseeding with partial entropy\n",
cprng->cs_name);
}
if (flags & CPRNG_REKEY_ANY) {
if (!(c->flags & CPRNG_REKEY_ANY)) {
if (c->flags & CPRNG_USE_CV) {
cv_broadcast(&c->cv);
}
selnotify(&c->selq, 0, 0);
}
}
c->flags = flags;
if (nist_ctr_drbg_reseed(&cprng->cs_drbg, seed, bytes, &cc, sizeof(cc)))
/* XXX Fix nist_ctr_drbg API so this can't happen. */
panic("cprng %s: NIST CTR_DRBG reseed failed", cprng->cs_name);
#if DEBUG
cprng_strong_rngtest(cprng);
#endif
}
#if DEBUG
/*
* Generate some output and apply a statistical RNG test to it.
*/
static void
cprng_strong_rngtest(struct cprng_strong *cprng)
{
KASSERT(mutex_owned(&cprng->cs_lock));
/* XXX Switch to a pool cache instead? */
rngtest_t *const rt = kmem_intr_alloc(sizeof(*rt), KM_NOSLEEP);
if (rt == NULL)
/* XXX Warn? */
return;
(void)strlcpy(rt->rt_name, cprng->cs_name, sizeof(rt->rt_name));
if (nist_ctr_drbg_generate(&cprng->cs_drbg, rt->rt_b, sizeof(rt->rt_b),
NULL, 0))
panic("cprng %s: NIST CTR_DRBG failed after reseed",
cprng->cs_name);
if (rngtest(rt)) {
printf("cprng %s: failed statistical RNG test\n",
cprng->cs_name);
/* XXX Not clear that this does any good... */
cprng->cs_ready = false;
rndsink_schedule(cprng->cs_rndsink);
}
explicit_bzero(rt, sizeof(*rt)); /* paranoia */
kmem_intr_free(rt, sizeof(*rt));
}
#endif
/*
* Feed entropy from an rndsink request into the CPRNG for which the
* request was issued.
*/
static void
cprng_strong_rndsink_callback(void *context, const void *seed, size_t bytes)
{
struct cprng_strong *const cprng = context;
mutex_enter(&cprng->cs_lock);
/* Assume that rndsinks provide only full-entropy output. */
cprng_strong_reseed_from(cprng, seed, bytes, true);
mutex_exit(&cprng->cs_lock);
}

sys/lib/libkern/arc4random.c

@ -1,4 +1,4 @@
/* $NetBSD: arc4random.c,v 1.32 2012/04/10 14:02:28 tls Exp $ */
/* $NetBSD: arc4random.c,v 1.33 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2002, 2011 The NetBSD Foundation, Inc.
@ -43,39 +43,39 @@
#include <sys/cdefs.h>
#ifdef _KERNEL
#define NRND 1
#else
#define NRND 0
#endif
#include <sys/types.h>
#include <sys/time.h>
#include <sys/param.h>
#ifdef _KERNEL
#include <sys/kernel.h>
#endif
#include <sys/systm.h>
#ifdef _KERNEL
#include <sys/mutex.h>
#include <sys/types.h>
#include <sys/rngtest.h>
#else
#define mutex_spin_enter(x) ;
#define mutex_spin_exit(x) ;
#define mutex_init(x, y, z) ;
#endif
#include <sys/systm.h>
#include <sys/time.h>
#ifdef _STANDALONE
/*
* XXX This is a load of bollocks. Standalone has no entropy source.
* This module should be removed from libkern once we confirm nobody is
* using it.
*/
#define time_uptime 1
typedef struct kmutex *kmutex_t;
#define MUTEX_DEFAULT 0
#define IPL_VM 0
static void mutex_init(kmutex_t *m, int t, int i) {}
static void mutex_spin_enter(kmutex_t *m) {}
static void mutex_spin_exit(kmutex_t *m) {}
typedef void rndsink_callback_t(void *, const void *, size_t);
struct rndsink;
static struct rndsink *rndsink_create(size_t n, rndsink_callback_t c, void *a)
{ return NULL; }
static bool rndsink_request(struct rndsink *s, void *b, size_t n)
{ return true; }
#else /* !_STANDALONE */
#include <sys/kernel.h>
#include <sys/mutex.h>
#include <sys/rndsink.h>
#endif /* _STANDALONE */
#include <lib/libkern/libkern.h>
#if NRND > 0
#include <sys/rnd.h>
#include <dev/rnd_private.h>
static rndsink_t rs;
#endif
/*
* The best known attack that distinguishes RC4 output from a random
* bitstream requires 2^25 bytes. (see Paul and Preneel, Analysis of
@ -97,9 +97,8 @@ static rndsink_t rs;
#define ARC4_RESEED_SECONDS 300
#define ARC4_KEYBYTES 16 /* 128 bit key */
#ifdef _STANDALONE
#define time_uptime 1 /* XXX ugly! */
#endif /* _STANDALONE */
static kmutex_t arc4_mtx;
static struct rndsink *arc4_rndsink;
static u_int8_t arc4_i, arc4_j;
static int arc4_initialized = 0;
@ -107,10 +106,10 @@ static int arc4_numbytes = 0;
static u_int8_t arc4_sbox[256];
static time_t arc4_nextreseed;
#ifdef _KERNEL
kmutex_t arc4_mtx;
#endif
static rndsink_callback_t arc4_rndsink_callback;
static void arc4_randrekey(void);
static void arc4_randrekey_from(const uint8_t[ARC4_KEYBYTES], bool);
static void arc4_init(void);
static inline u_int8_t arc4_randbyte(void);
static inline void arc4randbytes_unlocked(void *, size_t);
void _arc4randbytes(void *, size_t);
@ -126,133 +125,75 @@ arc4_swap(u_int8_t *a, u_int8_t *b)
*b = c;
}
static void
arc4_rndsink_callback(void *context __unused, const void *seed, size_t bytes)
{
KASSERT(bytes == ARC4_KEYBYTES);
arc4_randrekey_from(seed, true);
}
/*
* Stir our S-box.
* Stir our S-box with whatever we can get from the system entropy pool
* now.
*/
static void
arc4_randrekey(void *arg)
arc4_randrekey(void)
{
u_int8_t key[256];
int n, ask_for_more = 0;
#ifdef _KERNEL
#ifdef DIAGNOSTIC
#if 0 /* XXX rngtest_t is too large and could cause stack overflow */
rngtest_t rt;
#endif
#endif
#endif
#if NRND > 0
static int callback_pending;
int r;
#endif
uint8_t seed[ARC4_KEYBYTES];
const bool full_entropy = rndsink_request(arc4_rndsink, seed,
sizeof(seed));
arc4_randrekey_from(seed, full_entropy);
explicit_bzero(seed, sizeof(seed));
}
/*
* Stir our S-box with what's in seed.
*/
static void
arc4_randrekey_from(const uint8_t seed[ARC4_KEYBYTES], bool full_entropy)
{
uint8_t key[256];
size_t n;
mutex_spin_enter(&arc4_mtx);
(void)memcpy(key, seed, ARC4_KEYBYTES);
/* Rekey the arc4 state. */
for (n = ARC4_KEYBYTES; n < sizeof(key); n++)
key[n] = key[n % ARC4_KEYBYTES];
for (n = 0; n < 256; n++) {
arc4_j = (arc4_j + arc4_sbox[n] + key[n]) % 256;
arc4_swap(&arc4_sbox[n], &arc4_sbox[arc4_j]);
}
arc4_i = arc4_j;
explicit_bzero(key, sizeof(key));
/*
* The first time through, we must take what we can get,
* so schedule ourselves for callback no matter what.
* Throw away the first N words of output, as suggested in the
* paper "Weaknesses in the Key Scheduling Algorithm of RC4" by
* Fluhrer, Mantin, and Shamir. (N = 256 in our case.)
*/
if (__predict_true(arc4_initialized)) {
mutex_spin_enter(&arc4_mtx);
}
#if NRND > 0 /* XXX without rnd, we will key from the stack, ouch! */
else {
ask_for_more = 1;
r = rnd_extract_data(key, ARC4_KEYBYTES, RND_EXTRACT_ANY);
goto got_entropy;
}
for (n = 0; n < 256 * 4; n++)
arc4_randbyte();
if (arg == NULL) {
if (callback_pending) {
if (arc4_numbytes > ARC4_HARDMAX) {
printf("arc4random: WARNING, hit 2^29 bytes, "
"forcibly rekeying.\n");
r = rnd_extract_data(key, ARC4_KEYBYTES,
RND_EXTRACT_ANY);
mutex_spin_enter(&rs.mtx);
rndsink_detach(&rs);
mutex_spin_exit(&rs.mtx);
callback_pending = 0;
goto got_entropy;
} else {
mutex_spin_exit(&arc4_mtx);
return;
}
}
r = rnd_extract_data(key, ARC4_KEYBYTES, RND_EXTRACT_GOOD);
if (r < ARC4_KEYBYTES) {
ask_for_more = 1;
}
} else {
ask_for_more = 0;
callback_pending = 0;
if (rs.len != ARC4_KEYBYTES) {
panic("arc4_randrekey: rekey callback bad length");
}
memcpy(key, rs.data, rs.len);
memset(rs.data, 0, rs.len);
}
got_entropy:
if (!ask_for_more) {
callback_pending = 0;
} else if (!callback_pending) {
callback_pending = 1;
mutex_spin_enter(&rs.mtx);
strlcpy(rs.name, "arc4random", sizeof(rs.name));
rs.cb = arc4_randrekey;
rs.arg = &rs;
rs.len = ARC4_KEYBYTES;
rndsink_attach(&rs);
mutex_spin_exit(&rs.mtx);
}
#endif
/*
* If it's the first time, or we got a good key, actually rekey.
* Reset for next reseed cycle. If we don't have full entropy,
* caller has scheduled a reseed already.
*/
if (!ask_for_more || !arc4_initialized) {
for (n = ARC4_KEYBYTES; n < sizeof(key); n++)
key[n] = key[n % ARC4_KEYBYTES];
arc4_nextreseed = time_uptime +
(full_entropy? ARC4_RESEED_SECONDS : 0);
arc4_numbytes = 0;
for (n = 0; n < 256; n++) {
arc4_j = (arc4_j + arc4_sbox[n] + key[n]) % 256;
arc4_swap(&arc4_sbox[n], &arc4_sbox[arc4_j]);
}
arc4_i = arc4_j;
#if 0 /* XXX */
arc4_rngtest();
#endif
memset(key, 0, sizeof(key));
/*
* Throw away the first N words of output, as suggested in the
* paper "Weaknesses in the Key Scheduling Algorithm of RC4"
* by Fluhrer, Mantin, and Shamir. (N = 256 in our case.)
*/
for (n = 0; n < 256 * 4; n++)
arc4_randbyte();
/* Reset for next reseed cycle. */
arc4_nextreseed = time_uptime + ARC4_RESEED_SECONDS;
arc4_numbytes = 0;
#ifdef _KERNEL
#ifdef DIAGNOSTIC
#if 0 /* XXX rngtest_t is too large and could cause stack overflow */
/*
* Perform the FIPS 140-2 statistical RNG test; warn if our
* output has such poor quality as to fail the test.
*/
arc4randbytes_unlocked(rt.rt_b, sizeof(rt.rt_b));
strlcpy(rt.rt_name, "arc4random", sizeof(rt.rt_name));
if (rngtest(&rt)) {
/* rngtest will scream to the console. */
arc4_nextreseed = time_uptime;
arc4_numbytes = ARC4_MAXBYTES;
/* XXX should keep old context around, *NOT* use new */
}
#endif
#endif
#endif
}
if (__predict_true(arc4_initialized)) {
mutex_spin_exit(&arc4_mtx);
}
mutex_spin_exit(&arc4_mtx);
}
/*
@ -264,12 +205,14 @@ arc4_init(void)
int n;
mutex_init(&arc4_mtx, MUTEX_DEFAULT, IPL_VM);
mutex_init(&rs.mtx, MUTEX_DEFAULT, IPL_VM);
arc4_rndsink = rndsink_create(ARC4_KEYBYTES, &arc4_rndsink_callback,
NULL);
arc4_i = arc4_j = 0;
for (n = 0; n < 256; n++)
arc4_sbox[n] = (u_int8_t) n;
arc4_randrekey(NULL);
arc4_randrekey();
arc4_initialized = 1;
}
@ -316,7 +259,7 @@ _arc4randbytes(void *p, size_t len)
mutex_spin_exit(&arc4_mtx);
if ((arc4_numbytes > ARC4_MAXBYTES) ||
(time_uptime > arc4_nextreseed)) {
arc4_randrekey(NULL);
arc4_randrekey();
}
}

sys/rump/librump/rumpkern/Makefile.rumpkern

@ -1,4 +1,4 @@
# $NetBSD: Makefile.rumpkern,v 1.127 2013/05/01 17:52:34 pooka Exp $
# $NetBSD: Makefile.rumpkern,v 1.128 2013/06/23 02:35:24 riastradh Exp $
#
.include "${RUMPTOP}/Makefile.rump"
@ -79,6 +79,7 @@ SRCS+= init_sysctl_base.c \
kern_resource.c \
kern_rndpool.c \
kern_rndq.c \
kern_rndsink.c \
kern_stub.c \
kern_syscall.c \
kern_sysctl.c \

sys/rump/librump/rumpkern/cprng_stub.c

@ -1,4 +1,4 @@
/* $NetBSD: cprng_stub.c,v 1.6 2013/04/30 00:03:53 pooka Exp $ */
/* $NetBSD: cprng_stub.c,v 1.7 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
@ -29,16 +29,17 @@
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/types.h>
#include <sys/time.h>
#include <sys/param.h>
#include <sys/types.h>
#include <sys/cprng.h>
#include <sys/event.h>
#include <sys/kernel.h>
#include <sys/systm.h>
#include <sys/kmem.h>
#include <sys/mutex.h>
#include <sys/poll.h>
#include <sys/rngtest.h>
#include <sys/cprng.h>
#include <sys/systm.h>
#include <sys/time.h>
#include <rump/rumpuser.h>
@ -60,41 +61,46 @@ cprng_init(void)
}
cprng_strong_t *
cprng_strong_create(const char *const name, int ipl, int flags)
cprng_strong_create(const char *const name __unused, int ipl __unused,
int flags __unused)
{
cprng_strong_t *c;
/* zero struct to zero counters we won't ever set with no DRBG */
c = kmem_zalloc(sizeof(*c), KM_NOSLEEP);
if (c == NULL) {
return NULL;
}
strlcpy(c->name, name, sizeof(c->name));
mutex_init(&c->mtx, MUTEX_DEFAULT, ipl);
if (c->flags & CPRNG_USE_CV) {
cv_init(&c->cv, name);
}
selinit(&c->selq);
return c;
return NULL;
}
size_t
cprng_strong(cprng_strong_t *c, void *p, size_t len, int blocking)
cprng_strong(cprng_strong_t *c __unused, void *p, size_t len,
int blocking __unused)
{
mutex_enter(&c->mtx);
KASSERT(c == NULL);
cprng_fast(p, len); /* XXX! */
mutex_exit(&c->mtx);
return len;
}
void
cprng_strong_destroy(cprng_strong_t *c)
int
cprng_strong_kqfilter(cprng_strong_t *c __unused, struct knote *kn __unused)
{
mutex_destroy(&c->mtx);
cv_destroy(&c->cv);
memset(c, 0, sizeof(*c));
kmem_free(c, sizeof(*c));
KASSERT(c == NULL);
kn->kn_data = CPRNG_MAX_LEN;
return 1;
}
int
cprng_strong_poll(cprng_strong_t *c __unused, int events)
{
KASSERT(c == NULL);
return (events & (POLLIN | POLLRDNORM));
}
void
cprng_strong_deplete(struct cprng_strong *c __unused)
{
KASSERT(c == NULL);
}
void
cprng_strong_destroy(cprng_strong_t *c __unused)
{
KASSERT(c == NULL);
}
size_t
@ -103,6 +109,7 @@ cprng_fast(void *p, size_t len)
size_t randlen;
rumpuser_getrandom(p, len, 0, &randlen);
KASSERT(randlen == len);
return len;
}
@ -113,6 +120,7 @@ cprng_fast32(void)
uint32_t ret;
rumpuser_getrandom(&ret, sizeof(ret), 0, &randlen);
KASSERT(randlen == sizeof(ret));
return ret;
}
@ -123,5 +131,6 @@ cprng_fast64(void)
size_t randlen;
rumpuser_getrandom(&ret, sizeof(ret), 0, &randlen);
KASSERT(randlen == sizeof(ret));
return ret;
}

sys/sys/cprng.h

@ -1,11 +1,11 @@
/* $NetBSD: cprng.h,v 1.6 2012/11/25 15:29:45 christos Exp $ */
/* $NetBSD: cprng.h,v 1.7 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2011 The NetBSD Foundation, Inc.
* Copyright (c) 2011-2013 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Thor Lancelot Simon.
* by Thor Lancelot Simon and Taylor R. Campbell.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@ -28,16 +28,19 @@
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
/*
* XXX Don't change this to _SYS_CPRNG_H or anything -- code outside
* this file relies on its name... (I'm looking at you, ipf!)
*/
#ifndef _CPRNG_H
#define _CPRNG_H
#include <sys/types.h>
#include <sys/fcntl.h>
#include <lib/libkern/libkern.h>
#include <sys/rnd.h>
#include <sys/fcntl.h> /* XXX users bogusly transitively need this */
#include <sys/rnd.h> /* XXX users bogusly transitively need this */
#include <crypto/nist_ctr_drbg/nist_ctr_drbg.h>
#include <sys/condvar.h>
#include <sys/select.h>
/*
* NIST SP800-90 says 2^19 bytes per request for the CTR_DRBG.
@ -77,17 +80,9 @@ uint32_t cprng_fast32(void);
uint64_t cprng_fast64(void);
#endif
typedef struct _cprng_strong {
kmutex_t mtx;
kcondvar_t cv;
struct selinfo selq;
NIST_CTR_DRBG drbg;
int flags;
char name[16];
int reseed_pending;
int entropy_serial;
rndsink_t reseed;
} cprng_strong_t;
typedef struct cprng_strong cprng_strong_t;
void cprng_init(void);
#define CPRNG_INIT_ANY 0x00000001
#define CPRNG_REKEY_ANY 0x00000002
@ -97,59 +92,38 @@ b\0INIT_ANY\0\
b\1REKEY_ANY\0\
b\2USE_CV\0"
cprng_strong_t *cprng_strong_create(const char *const, int, int);
cprng_strong_t *
cprng_strong_create(const char *, int, int);
void cprng_strong_destroy(cprng_strong_t *);
size_t cprng_strong(cprng_strong_t *, void *, size_t, int);
size_t cprng_strong(cprng_strong_t *const, void *const, size_t, int);
struct knote; /* XXX temp, for /dev/random */
int cprng_strong_kqfilter(cprng_strong_t *, struct knote *); /* XXX " */
int cprng_strong_poll(cprng_strong_t *, int); /* XXX " */
void cprng_strong_deplete(cprng_strong_t *); /* XXX " */
void cprng_strong_destroy(cprng_strong_t *);
extern cprng_strong_t * kern_cprng;
extern cprng_strong_t *kern_cprng;
static inline uint32_t
cprng_strong32(void)
{
uint32_t r;
cprng_strong(kern_cprng, &r, sizeof(r), 0);
return r;
return r;
}
static inline uint64_t
cprng_strong64(void)
{
uint64_t r;
uint64_t r;
cprng_strong(kern_cprng, &r, sizeof(r), 0);
return r;
return r;
}
static inline int
cprng_strong_ready(cprng_strong_t *c)
{
int ret = 0;
mutex_enter(&c->mtx);
if (c->drbg.reseed_counter < NIST_CTR_DRBG_RESEED_INTERVAL) {
ret = 1;
}
mutex_exit(&c->mtx);
return ret;
}
static inline void
cprng_strong_deplete(cprng_strong_t *c)
{
mutex_enter(&c->mtx);
c->drbg.reseed_counter = NIST_CTR_DRBG_RESEED_INTERVAL + 1;
mutex_exit(&c->mtx);
}
static inline int
static inline unsigned int
cprng_strong_strength(cprng_strong_t *c)
{
return NIST_BLOCK_KEYLEN_BYTES;
}
void cprng_init(void);
int cprng_strong_getflags(cprng_strong_t *const);
void cprng_strong_setflags(cprng_strong_t *const, int);
#endif
#endif /* _CPRNG_H */

sys/sys/rnd.h

@ -1,4 +1,4 @@
/* $NetBSD: rnd.h,v 1.37 2013/06/20 23:21:42 christos Exp $ */
/* $NetBSD: rnd.h,v 1.38 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 1997 The NetBSD Foundation, Inc.
@ -139,23 +139,6 @@ rndsource_setcb(struct krndsource *const rs, void *const cb, void *const arg)
rs->getarg = arg;
}
enum rsink_st {
RSTATE_IDLE = 0,
RSTATE_PENDING,
RSTATE_HASBITS
};
typedef struct rndsink {
TAILQ_ENTRY(rndsink) tailq; /* the queue */
kmutex_t mtx; /* lock to seed or unregister */
enum rsink_st state; /* in-use? filled? */
void (*cb)(void *); /* callback function when ready */
void *arg; /* callback function argument */
char name[16]; /* sink name */
size_t len; /* how many bytes wanted/supplied */
uint8_t data[64]; /* random data returned here */
} rndsink_t;
typedef struct {
uint32_t cursor; /* current add point in the pool */
uint32_t rotate; /* how many bits to rotate by */
@ -184,8 +167,7 @@ void rnd_attach_source(krndsource_t *, const char *,
uint32_t, uint32_t);
void rnd_detach_source(krndsource_t *);
void rndsink_attach(rndsink_t *);
void rndsink_detach(rndsink_t *);
void rnd_getmore(size_t);
void rnd_seed(void *, size_t);
@ -257,7 +239,7 @@ typedef struct {
* A context. cprng plus a smidge.
*/
typedef struct {
struct _cprng_strong *cprng;
struct cprng_strong *cprng;
int hard;
int bytesonkey;
kmutex_t interlock;

sys/sys/rndsink.h Normal file (53 lines)

@ -0,0 +1,53 @@
/* $NetBSD: rndsink.h,v 1.1 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2013 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Taylor R. Campbell.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _SYS_RNDSINK_H
#define _SYS_RNDSINK_H
#ifndef _KERNEL /* XXX */
#error <sys/rndsink.h> is meant for kernel consumers only.
#endif
#define RNDSINK_MAX_BYTES 32
struct rndsink;
typedef void rndsink_callback_t(void *, const void *, size_t);
void rndsinks_init(void);
void rndsinks_distribute(void);
struct rndsink *
rndsink_create(size_t, rndsink_callback_t *, void *);
void rndsink_destroy(struct rndsink *);
bool rndsink_request(struct rndsink *, void *, size_t);
void rndsink_schedule(struct rndsink *);
#endif /* _SYS_RNDSINK_H */

usr.bin/fstat/misc.c

@ -1,4 +1,4 @@
/* $NetBSD: misc.c,v 1.11 2012/11/25 15:30:28 christos Exp $ */
/* $NetBSD: misc.c,v 1.12 2013/06/23 02:35:24 riastradh Exp $ */
/*-
* Copyright (c) 2008 The NetBSD Foundation, Inc.
@ -30,7 +30,7 @@
*/
#include <sys/cdefs.h>
__RCSID("$NetBSD: misc.c,v 1.11 2012/11/25 15:30:28 christos Exp $");
__RCSID("$NetBSD: misc.c,v 1.12 2013/06/23 02:35:24 riastradh Exp $");
#define _KMEMUSER
#include <stdbool.h>
@ -205,23 +205,9 @@ p_rnd(struct file *f)
dprintf("can't read rnd at %p for pid %d", f->f_data, Pid);
return 0;
}
(void)printf("* rnd ");
(void)printf("* rnd");
if (rp.hard)
printf("bytesonkey=%d, ", rp.bytesonkey);
if (rp.cprng) {
cprng_strong_t cprng;
if (!KVM_READ(rp.cprng, &cprng, sizeof(cprng))) {
dprintf("can't read rnd cprng at %p for pid %d",
rp.cprng, Pid);
} else {
char buf[128];
snprintb(buf, sizeof(buf), CPRNG_FMT, cprng.flags);
(void)printf("name=%s, serial=%d%s, flags=%s\n",
cprng.name, cprng.entropy_serial,
cprng.reseed_pending ? ", reseed" : "", buf);
return 0;
}
}
printf(" bytesonkey=%d", rp.bytesonkey);
printf("\n");
return 0;
}