NetBSD/sys/dev/random.c


/* $NetBSD: random.c,v 1.10 2021/12/28 13:22:43 riastradh Exp $ */
Rewrite entropy subsystem.

Primary goals:

1. Use cryptography primitives designed and vetted by cryptographers.
2. Be honest about entropy estimation.
3. Propagate full entropy as soon as possible.
4. Simplify the APIs.
5. Reduce overhead of rnd_add_data and cprng_strong.
6. Reduce side channels of HWRNG data and human input sources.
7. Improve visibility of operation with sysctl and event counters.

Caveat: rngtest is no longer used generically for RND_TYPE_RNG
rndsources.  Hardware RNG devices should have hardware-specific health
tests.  For example, checking for two repeated 256-bit outputs works to
detect AMD's 2019 RDRAND bug.  Not all hardware RNGs are necessarily
designed to produce exactly uniform output.

ENTROPY POOL

- A Keccak sponge, with test vectors, replaces the old LFSR/SHA-1
  kludge as the cryptographic primitive.

- `Entropy depletion' is available for testing purposes with a sysctl
  knob kern.entropy.depletion; otherwise it is disabled, and once the
  system reaches full entropy it is assumed to stay there as far as
  modern cryptography is concerned.

- No `entropy estimation' based on sample values.  Such `entropy
  estimation' is a contradiction in terms, dishonest to users, and a
  potential source of side channels.  It is the responsibility of the
  driver author to study the entropy of the process that generates the
  samples.

- Per-CPU gathering pools avoid contention on a global queue.

- Entropy is occasionally consolidated into the global pool -- as soon
  as it's ready, if we've never reached full entropy, and with a rate
  limit afterward.  Operators can force consolidation now by running
  sysctl -w kern.entropy.consolidate=1.

- The rndsink(9) API has been replaced by an epoch counter which
  changes whenever entropy is consolidated into the global pool.

  . Usage: Cache entropy_epoch() when you seed.  If entropy_epoch()
    has changed when you're about to use whatever you seeded, reseed.

  . Epoch is never zero, so initialize cache to 0 if you want to
    reseed on first use.

  . Epoch is -1 iff we have never reached full entropy -- in other
    words, the old rnd_initial_entropy is (entropy_epoch() != -1) --
    but it is better if you check for changes rather than for -1, so
    that if the system estimated its own entropy incorrectly, entropy
    consolidation has the opportunity to prevent future compromise.

- Sysctls and event counters provide operator visibility into what's
  happening:

  . kern.entropy.needed - bits of entropy short of full entropy
  . kern.entropy.pending - bits known to be pending in per-CPU pools,
    can be consolidated with sysctl -w kern.entropy.consolidate=1
  . kern.entropy.epoch - number of times consolidation has happened,
    never 0, and -1 iff we have never reached full entropy

CPRNG_STRONG

- A cprng_strong instance is now a collection of per-CPU NIST
  Hash_DRBGs.  There are only two in the system: user_cprng for
  /dev/urandom and sysctl kern.?random, and kern_cprng for kernel
  users which may need to operate in interrupt context up to IPL_VM.
  (Calling cprng_strong in interrupt context does not strike me as a
  particularly good idea, so I added an event counter to see whether
  anything actually does.)

- Event counters provide operator visibility into when reseeding
  happens.

INTEL RDRAND/RDSEED, VIA C3 RNG (CPU_RNG)

- Unwired for now; will be rewired in a subsequent commit.

2020-04-30 06:28:18 +03:00
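/*
 * Illustrative sketch only (not part of random.c): the entropy_epoch()
 * reseed pattern described in the commit message above, for a
 * hypothetical in-kernel consumer.  entropy_epoch() is the real API
 * from <sys/entropy.h>; my_state, my_reseed, and my_generate are
 * made-up names for whatever the consumer seeds and uses.
 */
struct my_state {
	unsigned	ms_epoch;	/* 0 = never seeded, so reseed on first use */
	uint8_t		ms_key[32];
};

static void
my_random(struct my_state *s, void *buf, size_t len)
{
	unsigned epoch = entropy_epoch();

	if (__predict_false(s->ms_epoch != epoch)) {
		my_reseed(s->ms_key, sizeof(s->ms_key));	/* hypothetical */
		s->ms_epoch = epoch;
	}
	my_generate(s->ms_key, buf, len);			/* hypothetical */
}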
/*-
* Copyright (c) 2019 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
* by Taylor R. Campbell.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
* ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
* TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
/*
* /dev/random, /dev/urandom -- stateless version
*
* For short reads from /dev/urandom, up to 256 bytes, read from a
* per-CPU NIST Hash_DRBG instance that is reseeded as soon as the
* system has enough entropy.
*
* For all other reads, instantiate a fresh NIST Hash_DRBG from
* the global entropy pool, and draw from it.
*
* Each read is independent; there is no per-open state.
* Concurrent reads from the same open run in parallel.
*
* Reading from /dev/random may block until entropy is available.
* Either device may return short reads if interrupted.
*/
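/*
 * Illustrative userland sketch (not part of this driver): reading from
 * /dev/urandom with read(2), retrying on EINTR and short reads as the
 * comment above allows.  Assumes an ordinary POSIX userland; the names
 * below are not from this file.
 */
#include <sys/types.h>

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

static int
read_urandom(void *buf, size_t len)
{
	char *p = buf;
	ssize_t n;
	int fd;

	if ((fd = open("/dev/urandom", O_RDONLY)) == -1)
		return -1;
	while (len > 0) {
		n = read(fd, p, len);
		if (n == -1) {
			if (errno == EINTR)	/* interrupted: just retry */
				continue;
			(void)close(fd);
			return -1;
		}
		if (n == 0) {			/* paranoia: should not happen */
			(void)close(fd);
			errno = EIO;
			return -1;
		}
		p += n;
		len -= (size_t)n;
	}
	(void)close(fd);
	return 0;
}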
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: random.c,v 1.10 2021/12/28 13:22:43 riastradh Exp $");
#include <sys/param.h>
#include <sys/types.h>
#include <sys/atomic.h>
#include <sys/conf.h>
#include <sys/cprng.h>
#include <sys/entropy.h>
#include <sys/errno.h>
#include <sys/event.h>
#include <sys/fcntl.h>
#include <sys/kauth.h>
#include <sys/kmem.h>
#include <sys/lwp.h>
#include <sys/poll.h>
#include <sys/random.h>
#include <sys/rnd.h>
#include <sys/rndsource.h>
#include <sys/signalvar.h>
#include <sys/systm.h>
#include <sys/vnode.h> /* IO_NDELAY */
#include "ioconf.h"
static dev_type_open(random_open);
static dev_type_close(random_close);
static dev_type_ioctl(random_ioctl);
static dev_type_poll(random_poll);
static dev_type_kqfilter(random_kqfilter);
static dev_type_read(random_read);
static dev_type_write(random_write);
const struct cdevsw rnd_cdevsw = {
.d_open = random_open,
.d_close = random_close,
.d_read = random_read,
.d_write = random_write,
.d_ioctl = random_ioctl,
.d_stop = nostop,
.d_tty = notty,
.d_poll = random_poll,
.d_mmap = nommap,
.d_kqfilter = random_kqfilter,
.d_discard = nodiscard,
.d_flag = D_OTHER|D_MPSAFE,
};
#define RANDOM_BUFSIZE 512 /* XXX pulled from arse */
/* Entropy source for writes to /dev/random and /dev/urandom */
static krndsource_t user_rndsource;
void
rndattach(int num)
{
rnd_attach_source(&user_rndsource, "/dev/random", RND_TYPE_UNKNOWN,
RND_FLAG_COLLECT_VALUE);
}
static int
random_open(dev_t dev, int flags, int fmt, struct lwp *l)
{
/* Validate minor. */
switch (minor(dev)) {
case RND_DEV_RANDOM:
case RND_DEV_URANDOM:
break;
default:
return ENXIO;
}
return 0;
}
static int
random_close(dev_t dev, int flags, int fmt, struct lwp *l)
{
/* Success! */
return 0;
}
static int
random_ioctl(dev_t dev, unsigned long cmd, void *data, int flag, struct lwp *l)
{
/*
* No non-blocking/async options; otherwise defer to
* entropy_ioctl.
*/
switch (cmd) {
case FIONBIO:
case FIOASYNC:
return 0;
default:
return entropy_ioctl(cmd, data);
}
}
static int
random_poll(dev_t dev, int events, struct lwp *l)
{
/* /dev/random may block; /dev/urandom is always ready. */
switch (minor(dev)) {
case RND_DEV_RANDOM:
return entropy_poll(events);
case RND_DEV_URANDOM:
return events & (POLLIN|POLLRDNORM | POLLOUT|POLLWRNORM);
default:
return 0;
}
}
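/*
 * Illustrative userland sketch (not part of this driver): using
 * poll(2) to wait until /dev/random is readable, rather than blocking
 * inside read(2).  Names are not from this file.
 */
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

static int
wait_for_random(void)
{
	struct pollfd pfd;
	int fd;

	if ((fd = open("/dev/random", O_RDONLY)) == -1)
		return -1;
	pfd.fd = fd;
	pfd.events = POLLIN;
	if (poll(&pfd, 1, -1) == -1) {		/* -1: wait indefinitely */
		(void)close(fd);
		return -1;
	}
	return fd;				/* now readable without blocking */
}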
static int
random_kqfilter(dev_t dev, struct knote *kn)
{
/* Validate the event filter. */
switch (kn->kn_filter) {
case EVFILT_READ:
case EVFILT_WRITE:
break;
default:
return EINVAL;
}
/* /dev/random may block; /dev/urandom never does. */
switch (minor(dev)) {
case RND_DEV_RANDOM:
if (kn->kn_filter == EVFILT_READ)
return entropy_kqfilter(kn);
/* FALLTHROUGH */
case RND_DEV_URANDOM:
kn->kn_fop = &seltrue_filtops;
return 0;
default:
return ENXIO;
}
}
/*
* random_read(dev, uio, flags)
*
* Generate data from a PRNG seeded from the entropy pool.
*
* - If /dev/random, block until we have full entropy, or fail
* with EWOULDBLOCK, and if `depleting' entropy, return at most
* the entropy pool's capacity at once.
*
* - If /dev/urandom, generate data from whatever is in the
* entropy pool now.
*
* On interrupt, return a short read, but not shorter than 256
* bytes (actually, no shorter than RANDOM_BUFSIZE bytes, which is
* 512 for hysterical raisins).
*/
static int
random_read(dev_t dev, struct uio *uio, int flags)
{
int gflags;
/* Set the appropriate GRND_* mode. */
switch (minor(dev)) {
case RND_DEV_RANDOM:
gflags = GRND_RANDOM;
break;
case RND_DEV_URANDOM:
gflags = GRND_INSECURE;
break;
default:
return ENXIO;
}
/*
* Set GRND_NONBLOCK if the user requested IO_NDELAY (i.e., the
* file was opened with O_NONBLOCK).
*/
if (flags & IO_NDELAY)
gflags |= GRND_NONBLOCK;
/* Defer to getrandom. */
return dogetrandom(uio, gflags);
}
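/*
 * Illustrative userland sketch (not part of this driver): the same
 * modes via getrandom(2), mirroring the flag mapping in random_read()
 * above.  Assumes a userland that declares getrandom in
 * <sys/random.h>; names are not from this file.
 */
#include <sys/random.h>

#include <err.h>
#include <errno.h>

static void
getrandom_examples(void)
{
	unsigned char key[32];

	/* Like /dev/urandom: GRND_INSECURE never blocks. */
	if (getrandom(key, sizeof(key), GRND_INSECURE) == -1)
		err(1, "getrandom");

	/* Like /dev/random opened with O_NONBLOCK. */
	if (getrandom(key, sizeof(key), GRND_RANDOM|GRND_NONBLOCK) == -1) {
		if (errno == EAGAIN)
			warnx("not yet at full entropy");
		else
			err(1, "getrandom");
	}
}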
/*
* random_write(dev, uio, flags)
*
* Enter data from uio into the entropy pool.
*
* Assume privileged users provide full entropy, and unprivileged
* users provide no entropy. If you have a nonuniform source of
* data with n bytes of min-entropy, hash it with an XOF like
* SHAKE128 into exactly n bytes first.
*/
static int
random_write(dev_t dev, struct uio *uio, int flags)
{
kauth_cred_t cred = kauth_cred_get();
uint8_t *buf;
bool privileged = false, any = false;
int error = 0;
/* Verify user's authorization to affect the entropy pool. */
error = kauth_authorize_device(cred, KAUTH_DEVICE_RND_ADDDATA,
NULL, NULL, NULL, NULL);
if (error)
return error;
/*
* Check whether user is privileged. If so, assume user
* furnishes full-entropy data; if not, accept user's data but
* assume it has zero entropy when we do accounting. If you
* want to specify less entropy, use ioctl(RNDADDDATA).
*/
if (kauth_authorize_device(cred, KAUTH_DEVICE_RND_ADDDATA_ESTIMATE,
NULL, NULL, NULL, NULL) == 0)
privileged = true;
/* Get a buffer for transfers. */
buf = kmem_alloc(RANDOM_BUFSIZE, KM_SLEEP);
/* Consume data. */
while (uio->uio_resid) {
size_t n = MIN(uio->uio_resid, RANDOM_BUFSIZE);
/* Transfer n bytes in and enter them into the pool. */
error = uiomove(buf, n, uio);
if (error)
break;
rnd_add_data(&user_rndsource, buf, n, privileged ? n*NBBY : 0);
any = true;
/* Now's a good time to yield if needed. */
preempt_point();
/* Check for interruption. */
if (__predict_false(curlwp->l_flag & LW_PENDSIG) &&
sigispending(curlwp, 0)) {
error = EINTR;
break;
}
}
/* Zero the buffer and free it. */
explicit_memset(buf, 0, RANDOM_BUFSIZE);
kmem_free(buf, RANDOM_BUFSIZE);
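
/*
 * The man page advertises that writing to either /dev/random or
 * /dev/urandom influences subsequent output of both devices,
 * guaranteed to take effect at next open, so consolidate the
 * per-CPU entropy pools into the global pool now rather than
 * waiting for them to fill on their own.
 */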
/* If we added anything, consolidate entropy now. */
if (any)
entropy_consolidate();
return error;
}