Commit Graph

66 Commits

ad a355028fa4 Eliminate l->l_ncsw and l->l_nivcsw. From memory, I think they were added
before we had per-LWP struct rusage; the same is now tracked there.
2023-10-04 20:28:05 +00:00
riastradh 763d441de3 cprng(9): Drop and retake percpu reference across entropy_extract.
entropy_extract may sleep on an adaptive lock, which invalidates
percpu(9) references.

Add a note in the comment over entropy_extract about this.

Discovered by stumbling upon this panic during a test run:

[   1.0200050] panic: kernel diagnostic assertion "(cprng == percpu_getref(cprng_fast_percpu)) && (percpu_putref(cprng_fast_percpu), true)" failed: file "/home/riastradh/netbsd/current/src/sys/rump/librump/rumpkern/../../../crypto/cprng_fast/cprng_fast.c", line 117

XXX pullup-10
2023-08-05 11:21:24 +00:00
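
An illustrative sketch of the rule described in the commit above, with a
hypothetical per-CPU consumer standing in for cprng(9)'s internals: the
percpu(9) reference is dropped before entropy_extract (which may sleep)
and retaken afterwards.

	#include <sys/types.h>
	#include <sys/entropy.h>
	#include <sys/percpu.h>
	#include <sys/systm.h>

	struct example_cpu {		/* hypothetical per-CPU state */
		bool	ec_needs_reseed;
		/* ... per-CPU cipher state ... */
	};

	static percpu_t *example_percpu;

	static void
	example_maybe_reseed(void)
	{
		struct example_cpu *ec;
		uint8_t seed[32];

		ec = percpu_getref(example_percpu);
		if (ec->ec_needs_reseed) {
			/*
			 * entropy_extract may sleep on an adaptive lock,
			 * which invalidates percpu(9) references -- so
			 * drop the reference before calling it.
			 */
			percpu_putref(example_percpu);
			entropy_extract(seed, sizeof seed, 0);
			/* Retake; we may now be on a different CPU.  */
			ec = percpu_getref(example_percpu);
			/* ... mix seed into ec's state ... */
			explicit_memset(seed, 0, sizeof seed);
			ec->ec_needs_reseed = false;
		}
		percpu_putref(example_percpu);
	}
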
riastradh 72c927ccb4 entropy(9): Disable !cold assertion in rump for now.
Evidently rump starts threads before it sets cold = 0, and deferring
the call to rump_thread_allow(NULL) in rump.c rump_init_callback
until after setting cold = 0 doesn't work either -- rump kernels just
hang.  To be investigated -- for now, let's just stop breaking
thousands of tests.
2023-08-04 16:02:01 +00:00
riastradh 3586ae1d3b entropy(9): Simplify stages. Split interrupt vs non-interrupt paths.
- Nix the entropy stage (cold, warm, hot).  Just use the usual kernel
  `cold' (cold: single-core, single-thread; interrupts may happen),
  and don't make any three-way distinction about whether interrupts
  or threads or other CPUs can be running.

  Instead, while cold, use splhigh/splx or forbid paths to come from
  interrupt context, and while warm, use mutex or the per-CPU hard
  and soft interrupt paths for low latency.  This comes at a small
  cost to some interrupt latency, since we may stir the pool in
  interrupt context -- but only for a very short window early at boot
  between configure and configure2, so it's hard to imagine it
  matters much.

- Allow rnd_add_uint32 to run in hard interrupt context or with spin
  locks held, but defer processing to a softint and drop samples on
  the floor if the buffer is full.  This is mainly used for cheaply
  tossing samples from drivers for non-HWRNG devices into the entropy
  pool, so it is often used from interrupt context and/or under spin
  locks.

- New rnd_add_data_intr provides the interrupt-like data entry path
  for arbitrary buffers and driver-specified entropy estimates: defer
  processing to a softint and drop samples on the floor if the buffer
  is full.

- Document that rnd_add_data is forbidden under spin locks outside
  interrupt context (will crash in LOCKDEBUG), and inadvisable in
  interrupt context (but technically permitted just in case there are
  compatibility issues for now); later we can forbid it altogether in
  interrupt context or under spin locks.

- Audit all uses of rnd_add_data to use rnd_add_data_intr where it
  might be used in interrupt context or under a spin lock.

This fixes a regression from last year when the global entropy lock
was changed from IPL_VM (spin) to IPL_SOFTSERIAL (adaptive).  I thought
I'd caught all the problems from that, but another one bit three
different people this week, presumably because of recent changes that
led to more non-HWRNG drivers entering the entropy consolidation
path from rnd_add_uint32.

In my attempt to preserve the rnd(9) API for the (now long-since
abandoned) prospect of pullup to netbsd-9 in my rewrite of the
entropy subsystem in 2020, I didn't introduce a separate entry point
for entering entropy from interrupt context or equivalent, i.e., spin
locks held, and instead made rnd_add_data rely on cpu_intr_p() to
decide whether to process the whole sample under a lock or only take
as much as there's buffer space for before scheduling a softint.  In
retrospect, that was a mistake (though perhaps not as much of a
mistake as other entropy API decisions...), a mistake which is
finally getting rectified now by rnd_add_data_intr.

XXX pullup-10
2023-08-04 07:38:53 +00:00
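
A hedged sketch of how a driver might use the entry points described in
the commit above.  The driver, its rndsource, and the counter-reading
helper are made up for illustration, and rnd_add_data_intr is assumed to
take the same arguments as rnd_add_data.

	#include <sys/types.h>
	#include <sys/rndsource.h>

	static krndsource_t example_rnd;	/* attached elsewhere with
						 * rnd_attach_source() */
	static uint32_t example_read_counter(void *);	/* hypothetical */

	/* Hard interrupt handler: the cheap path, safe under spin locks.
	 * The sample is deferred to a softint and may be dropped on the
	 * floor if the per-CPU buffer is full. */
	static int
	example_intr(void *arg)
	{

		rnd_add_uint32(&example_rnd, example_read_counter(arg));
		return 1;
	}

	/* Interrupt-like path for a whole buffer with a driver-specified
	 * entropy estimate (here 0 bits: timing only). */
	static void
	example_enter_intr(const void *buf, uint32_t len)
	{

		rnd_add_data_intr(&example_rnd, buf, len, 0);
	}

	/* Thread context, no spin locks held: full processing allowed. */
	static void
	example_enter(const void *buf, uint32_t len)
	{

		rnd_add_data(&example_rnd, buf, len, 0);
	}
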
riastradh 96b2c7de8d entropy(9): Reintroduce netbsd<=9 time-delta estimator for unblocking.
The system will (in a subsequent change) by default block for this
condition before almost all of userland is running (including
/etc/rc.d/sshd key generation).  That way, a never-blocking
getentropy(3) API will never return data to applications without at
least the best-effort entropy that netbsd<=9 provided, except in
single-user mode (where you have to be careful about everything
anyway) or in the few processes that run before a seed can even be
loaded (where blocking indefinitely, e.g. when generating a stack
protector cookie in libc, could pose a severe availability problem
that can't be configured away, but where the security impact is low).

However, (in another subsequent change) we will continue to use
_only_ HWRNG driver estimates and seed estimates, and _not_
time-delta estimator, for _warning_ about security in motd, daily
security report, etc.  And if HWRNG/seed provides enough entropy
before time-delta estimator does, that will unblock /dev/random too.

The result is:

- Machines with HWRNG or seed won't warn about entropy and will
  essentially never block -- even on first boot without a seed, it
  will take only as long as the fastest HWRNG to unblock.

- Machines with neither HWRNG nor seed:
  . will warn about entropy, giving feedback about security;
    and
  . will avoid returning anything more predictable than netbsd<=9;
    but
  . won't block (much) longer than netbsd<=9 would (and won't block
    again after blocking once, except with kern.entropy.depletion=1 for
    testing).

  (The threshold for unblocking is now somewhat higher than before:
  512 samples that pass the time-delta estimator, rather than 80 as
  it used to be.)

  And, of course, adding a seed (or HWRNG) will prevent both warnings
  and blocking.

The mechanism is:

1. /dev/random will block until _either_

   (a) enough bits of entropy (256) from reliable sources have been
       added to the pool, _or_

   (b) enough samples have been added from any sources (512), passing
       the old time-delta entropy estimator, that the possible
       security benefit doesn't justify holding up availability any
       longer (`best effort'), except on systems with higher security
       requirements like securelevel=2 which can disable non-HWRNG,
       non-seed sources with rndctl_flags in rc.conf(5).

2. dmesg will report `entropy: ready' when 1(a) is satisfied, but if
   1(b) is satisfied first, it will report `entropy: best effort', so
   the concise log messages will reflect the timing and whether,
   during any period of time, any part of the system might be relying
   on best-effort entropy.

3. The sysctl knob kern.entropy.needed (and the ioctl RNDGETPOOLSTAT
   variable rndpoolstat_t::added) still reflects the number of bits
   of entropy from reliable sources, so we can still use this to
   suggest regenerating ssh keys.

   This matters on platforms that can only be reached, after flashing
   an installation image, by sshing in over a (private) network, like
   small network appliances or remote virtual machines without
   (interactive) serial consoles.  If we blocked indefinitely at boot
   when generating ssh keys, such platforms would be unusable.  This
   way, platforms are usable, but operators can still be advised at
   login time to regenerate keys as soon as they can actually load
   entropy onto the system, e.g. with rndctl(8) on a seed file copied
   from a local machine over the (private) network.

4. On machines without HWRNG, using a seed file still suppresses
   warnings for users who need more confident security.  But it is no
   longer necessary for availability.

This is a compromise between availability and security:

- The security mechanism of blocking indefinitely on machines without
  HWRNG hurts availability too much, as painful experience over the
  multiple years since I made the mistake of introducing it has
  shown.  (Sorry!)

- The other main alternative, not having a blocking path at all (as I
  pushed for, and as OpenBSD has done for a long time) could
  potentially reduce security vs netbsd<=9, and would run against the
  expectations set by many popular operating systems to the severe
  detriment of public perception of NetBSD security.

Even though we can't _confidently_ assess enough entropy from, e.g.,
sampling interrupt timings, this is the traditional behaviour that
most operating systems provide -- and the result here is a net
nondecrease in security over netbsd<=9, because all paths from the
entropy pool to userland now have at least as high a standard before
returning data as they did in netbsd<=9.

PR kern/55641
PR pkg/55847
PR kern/57185
https://mail-index.netbsd.org/current-users/2020/09/02/msg039470.html
https://mail-index.netbsd.org/current-users/2020/11/21/msg039931.html
https://mail-index.netbsd.org/current-users/2020/12/05/msg040019.html

XXX pullup-10
2023-06-30 21:42:05 +00:00
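
A small userland sketch for item 3 in the commit above: reading the
kern.entropy.needed knob with sysctlbyname(3).  Treating the value as a
plain int is an assumption here.

	#include <sys/types.h>
	#include <sys/sysctl.h>

	#include <err.h>
	#include <stdio.h>

	int
	main(void)
	{
		int needed;
		size_t len = sizeof needed;

		if (sysctlbyname("kern.entropy.needed", &needed, &len,
		    NULL, 0) == -1)
			err(1, "sysctlbyname(kern.entropy.needed)");
		printf("bits of entropy still needed: %d\n", needed);
		return 0;
	}
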
riastradh 28811d5ecf entropy(9): Avoid race between rnd_add_data and ioctl(RNDCTL).
XXX pullup-10
2023-05-24 20:22:23 +00:00
riastradh b8bfbde6e8 entropy(9): On flags change, cancel any scheduled consolidation.
We've been instructed to lose confidence in existing entropy sources,
so let's make sure to re-gather enough entropy before the next
consolidation can happen, in case some of what would be counted in
consolidation is from those entropy sources.

XXX pullup-10
2023-05-24 20:22:12 +00:00
riastradh 30c052bdf9 entropy(9): Allow changing flags on all entropy sources at once.
Entropy sources should all have nonempty names, and this will enable
an operator to, for example, disable all but a specific entropy
source.

XXX pullup-10
2023-03-03 12:52:49 +00:00
riastradh a816c0f978 random(4): Report number of bytes ready to read, not number of bits.
Only affects systems with the diagnostic and testing option
kern.entropy.depletion=1.

XXX pullup-10
2023-03-01 08:13:54 +00:00
riastradh 21ea4580f9 entropy: Don't disclose stack garbage in kern.entropy sysctls.
kern.entropy.consolidate and kern.entropy.gather are supposed to be
write-only -- it doesn't make any sense to read from them, but I
suppose it's better to read-as-zero than read-as-stack-secrets!
2022-08-05 23:43:46 +00:00
riastradh ec85059186 entropy(9): Update comment about where entropy_extract is allowed.
As of last month, it is forbidden in all hard interrupt context.
2022-05-13 09:40:02 +00:00
riastradh 59f579f5f3 entropy(9): Note rules about how to use entropy_extract output. 2022-05-13 09:39:52 +00:00
riastradh b4749e24a2 entropy(9): Call entropy_softintr while bound to CPU.
It looks like we tripped on the new assertion in entropy_account_cpu
when there was pending entropy on cpu0, running lwp0, at the time
xc_broadcast ran -- since xc_broadcast calls the function directly
rather than through softint_schedule, it's not called via the softint
lwp, which would have satisfied the assertion.
2022-03-24 12:58:56 +00:00
riastradh 2b8e15f8fa entropy(9): Include <sys/lwp.h> and <sys/proc.h> explicitly.
Now that we use curlwp, struct lwp::l_pflag, and LP_BOUND, let's not
rely on side-loads from other .h files.
2022-03-23 23:20:52 +00:00
riastradh 8f575ec770 entropy(9): Bind to CPU temporarily to avoid race with lwp migration.
More fallout from the IPL_VM->IPL_SOFTSERIAL change.

In entropy_enter, there is a window when the lwp can be migrated to
another CPU:

	ec = entropy_cpu_get();
	...
	pending = ec->ec_pending + ...;
	...
	entropy_cpu_put();

	/* lwp migration possible here */

	if (pending)
		entropy_account_cpu(ec);

If this happens, we may trip over any of several problems in
entropy_account_cpu because it assumes ec is the current CPU's state
in order to decide whether we have anything to contribute from the
local pool to the global pool.

No need to do this in entropy_softintr because softints are bound to
the CPU anyway.
2022-03-23 23:18:17 +00:00
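
A minimal sketch of the fix, extending the fragment quoted above: bind
the lwp around the whole sequence so it cannot migrate between the
per-CPU update and entropy_account_cpu.  curlwp_bind(9) returns the
previous binding state, which curlwp_bindx restores; the real
entropy_enter differs in detail.

	int bound, pending;
	struct entropy_cpu *ec;

	bound = curlwp_bind();		/* prevent migration to another CPU */

	ec = entropy_cpu_get();
	/* ... entpool_enter, update ec->ec_pending ... */
	pending = ec->ec_pending;
	entropy_cpu_put();

	if (pending)
		entropy_account_cpu(ec);

	curlwp_bindx(bound);		/* restore previous binding */
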
riastradh eae1841cdb entropy(9): Make rnd_lock_sources work while cold.
x86 uses entropy_extract verrrrrry early.  Fixes a mistake in the
previous change that did not manifest in my testing on aarch64, which
does not use it so early.
2022-03-21 00:25:04 +00:00
riastradh dd68197b35 entropy(9): Improve entropy warning messages and documentation.
- For the main warning message, use less jargon, say `security', and
  cite the entropy(7) man page for further reading.  Document this in
  rnd(4) and entropy(7).

- For the debug-only warning message, say `entropy' only once and omit
  it from the rnd(4) man page -- it's not very important unless you're
  debugging the kernel in which case you probably know what you're
  doing enough to not need the text explained in the man page.
2022-03-20 18:19:57 +00:00
riastradh 89444d3fe8 entropy(9): Fix premature optimization deadlock in entropy_request.
- For synchronous queries from /dev/random, which are waiting for
  entropy to be ready, wait for concurrent access -- e.g., concurrent
  rnd_detach_source -- to finish, and make sure to request entropy
  from all sources (unless we're interrupted by a signal).

- When invoked through softint context (e.g., cprng_fast_intr ->
  cprng_strong -> entropy_extract), don't wait, because we're
  forbidden from waiting anyway.

- For entropy_bootrequest, wait but don't bother failing on signal
  because this only happens in kthread context, not in userland
  process context, so there can't be signals.

Nix rnd_trylock_sources; use the same entropy_extract flags
(ENTROPY_WAIT, ENTROPY_SIG) for rnd_lock_sources.
2022-03-20 14:30:56 +00:00
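
A short sketch of the flag convention the commit above settles on; buf,
len, and error are placeholders.  The same ENTROPY_WAIT/ENTROPY_SIG
flags are now passed through to rnd_lock_sources internally.

	#include <sys/entropy.h>

	/* Synchronous read from /dev/random: willing to wait for entropy
	 * and for concurrent source-list access, interruptible by signal. */
	error = entropy_extract(buf, len, ENTROPY_WAIT|ENTROPY_SIG);

	/* Softint context (e.g. cprng_fast_intr -> cprng_strong ->
	 * entropy_extract): not allowed to wait at all. */
	error = entropy_extract(buf, len, 0);

	/* Boot-time request from kthread context: wait, but there are no
	 * signals to worry about. */
	error = entropy_extract(buf, len, ENTROPY_WAIT);
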
riastradh 8ef4287173 Revert "entropy(9): Nix rnd_trylock_sources."
Not a premature optimization after all -- this is necessary because
entropy_request can run in softint context, where the cv_wait_sig in
rnd_lock_sources is forbidden.  Need to do this another way.
2022-03-20 14:05:41 +00:00
riastradh e98aa0c66a entropy(9): Nix rnd_trylock_sources.
This was a premature optimization that turned out to be bogus.  It's
not harmful to request more than we need from drivers, so let's not
go out of our way to avoid that.
2022-03-20 13:44:18 +00:00
riastradh f7b53447aa entropy(9): Fix another new race in entropy_account_cpu.
The consolidation xcall can preempt entropy_enter, between when it
unlocks the per-CPU state and when it calls entropy_account_cpu, with
the effect of setting ec->ec_pending=0.

Previously this was impossible because we called entropy_account_cpu
with the per-CPU state still locked, but that doesn't work now that
the global entropy lock is an adaptive lock, which might sleep --
and sleeping is forbidden while the per-CPU state is locked.
2022-03-20 13:18:11 +00:00
riastradh 5798170187 entropy(9): Shuffle some assertions around.
Tripped over (diff || E->pending == ENTROPY_CAPACITY*NBBY); not sure
why yet -- printing the values will help.

No functional change intended.
2022-03-20 13:17:44 +00:00
riastradh 260710e2b4 entropy(9): Lock the per-CPU state in entropy_account_cpu.
This was previously called with the per-CPU state locked, which
worked fine as long as the global entropy lock was a spin lock so
acquiring it would never sleep.  Now it's an adaptive lock, so it's
not safe to take with the per-CPU state lock -- but we still need to
prevent reentrant access to the per-CPU entropy pool by interrupt
handlers while we're extracting from it.  So now the logic for
entering a sample is:

- lock per-CPU state
- entpool_enter
- unlock per-CPU state
- if anything pending on this CPU and it's time to consolidate:
  - lock global entropy state
  - lock per-CPU state
  - transfer
  - unlock per-CPU state
  - unlock global entropy state
2022-03-20 13:17:32 +00:00
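
The same locking order, rendered as a rough C fragment; the
consolidation predicate and the transfer step are placeholders for what
entropy_enter and entropy_account_cpu actually do.

	/* enter a sample */
	ec = entropy_cpu_get();			/* lock per-CPU state */
	entpool_enter(&ec->ec_pool, buf, len);
	ec->ec_pending += nbits;
	pending = ec->ec_pending;
	entropy_cpu_put();			/* unlock per-CPU state */

	if (pending && example_due_for_consolidation()) {  /* placeholder */
		mutex_enter(&E->lock);		/* lock global entropy state */
		ec = entropy_cpu_get();		/* lock per-CPU state */
		/* ... transfer the local pool and pending count ... */
		entropy_cpu_put();		/* unlock per-CPU state */
		mutex_exit(&E->lock);		/* unlock global entropy state */
	}
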
riastradh 450311ec18 entropy(9): Factor out logic to lock and unlock per-CPU state.
No functional change intended.
2022-03-20 13:17:09 +00:00
riastradh 88beb6d7fa entropy(9): Avoid reentrance to per-CPU state from sleeping on lock.
Changing the global entropy lock from IPL_VM to IPL_SOFTSERIAL meant
it went from being a spin lock, which blocks preemption, to being an
adaptive lock, which might sleep -- and allow other threads to run
concurrently with the softint, even if those threads have softints
blocked with splsoftserial.

This manifested as KASSERT(!ec->ec_locked) triggering in
entropy_consolidate_xc -- presumably entropy_softintr slept on the
global entropy lock while holding the per-CPU state locked with
ec->ec_locked, and then entropy_consolidate_xc ran.

Instead, to protect access to the per-CPU state without taking a
global lock, defer entropy_account_cpu until after ec->ec_locked is
cleared.  This way, we never sleep while holding ec->ec_locked, nor
do we incur any contention on shared memory when entering entropy
unless we're about to distribute it.  To verify this, sprinkle in
assertions that curlwp->l_ncsw hasn't changed by the time we release
ec->ec_locked.
2022-03-20 00:19:11 +00:00
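
A sketch of the verification trick mentioned at the end of the commit
above: snapshot curlwp->l_ncsw (the lwp's context-switch count) while
ec->ec_locked is held, and assert it is unchanged before clearing the
flag, which would catch an accidental sleep.

	uint64_t ncsw;

	ec->ec_locked = true;
	ncsw = curlwp->l_ncsw;		/* context switches so far */

	/* ... stir the per-CPU pool; nothing in here may sleep ... */

	/* Had we slept, a context switch would have bumped l_ncsw. */
	KASSERT(curlwp->l_ncsw == ncsw);
	ec->ec_locked = false;
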
riastradh 66528ec8b6 rnd(9): Delete legacy rnd_initial_entropy symbol.
Use entropy_epoch() instead.

XXX kernel ABI change deleting symbol requires bump
2022-03-19 14:35:08 +00:00
riastradh e2caead148 entropy(9): Count dropped or truncated interrupt samples. 2022-03-18 23:35:28 +00:00
riastradh 4b3ca98c58 entropy(9): Reduce global entropy lock from IPL_VM to IPL_SOFTSERIAL.
This is no longer ever taken in hard interrupt context, so there's no
longer any need to block interrupts while doing crypto operations on
the global entropy pool.
2022-03-18 23:35:19 +00:00
riastradh e4ceb72edc entropy(9): Request entropy after the softint is enabled.
Otherwise, there is a window during which interrupts are running, but
the softint is not, so if many interrupts queue (low-entropy) samples
early at boot, they might get dropped on the floor.  This could
happen, for instance, with a PCI RNG like ubsec(4) or hifn(4) which
requests entropy and processes it in its own hard interrupt handler.
2022-03-18 23:35:07 +00:00
riastradh ceeae26ca4 entropy(9): Use the early-entropy path only while cold.
This way, we never take the global entropy lock from interrupt
handlers (no interrupts while cold), so the global entropy lock need
not block interrupts.

There's an annoying ordering issue here: softint_establish doesn't
work until after CPUs have been detected, which happens inside
configure(), which is also what enables interrupts.  So we have no
opportunity to softint_establish the entropy softint _before_
interrupts are enabled.

To work around this, we have to put a conditional into the interrupt
path, and go out of our way to process any queued samples after
establishing the softint.  If we just made softint_establish work
early, like percpu_create does now, this problem would go away and we
could delete a bit of logic here.

Candidate fix for PR kern/56730.
2022-03-18 23:34:56 +00:00
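
A sketch of the conditional the commit above puts in the interrupt
path; the softint handle name is an assumption.

	if (__predict_false(cold)) {
		/* Early boot, before configure(): one CPU, no softints
		 * yet -- stir the per-CPU pool directly at splhigh. */
		const int s = splhigh();
		/* ... enter the sample into the pool ... */
		splx(s);
	} else {
		/* ... buffer the sample per-CPU ... */
		softint_schedule(entropy_sih);	/* handle name assumed */
	}
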
riastradh 0107837f03 entropy(9): Create per-CPU state earlier.
This will make it possible to use it from interrupts as soon as they
start, which means the global entropy pool lock won't have to block
interrupts.
2022-03-18 23:34:44 +00:00
riastradh a820d532b6 entropy(9): Forbid entropy_extract in hard interrupt context.
With a little additional work, this will let us reduce the global
entropy pool lock so it never blocks interrupts.
2022-03-16 23:56:55 +00:00
andvar 634b965029 fix a few typos in comments for the word "because". 2022-03-04 21:12:03 +00:00
thorpej f54558365c entropy_read_filtops is MPSAFE. 2021-09-26 15:10:51 +00:00
thorpej 12ae65d98c Change the kqueue filterops::f_isfd field to filterops::f_flags, and
define a flag FILTEROP_ISFD that has the meaning of the prior f_isfd.
Field and flag name aligned with OpenBSD.

This does not constitute a functional or ABI change, as the field location
and size, and the value placed in that field, are the same as the previous
code, but we're bumping __NetBSD_Version__ so 3rd-party module source code
can adapt, as needed.

NetBSD 9.99.89
2021-09-26 01:16:07 +00:00
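
A hedged sketch of what the change above looks like in a filterops
definition (the filter functions here are hypothetical).  The old
f_isfd boolean becomes the FILTEROP_ISFD bit in f_flags; an MPSAFE
filter, as in the entropy_read_filtops entry above, would additionally
set FILTEROP_MPSAFE.

	#include <sys/event.h>

	static void example_filt_detach(struct knote *);	/* hypothetical */
	static int  example_filt_event(struct knote *, long);	/* hypothetical */

	/* Before: .f_isfd = 1 */
	static const struct filterops example_filtops = {
		.f_flags = FILTEROP_ISFD | FILTEROP_MPSAFE,
		.f_attach = NULL,
		.f_detach = example_filt_detach,
		.f_event = example_filt_event,
	};
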
christos 813a709df2 don't opencode kauth_cred_get() 2021-09-21 14:54:26 +00:00
jmcneill 7271b77973 entropy: Only print consolidation warning if AB_DEBUG.
The previous fix for PR kern/55458 changed printf to log(LOG_DEBUG, ...) with
the intent of hiding the message unless booted with 'boot -x'.  But this did
not actually suppress the message on the console, as log(LOG_DEBUG, ...) will
print to the console if syslogd is not running yet.

So instead, just check for the AB_DEBUG flag directly in boothowto, and only
printf the message if it is set.
2021-02-12 19:48:26 +00:00
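
The check described above amounts to something like the following; the
message text is illustrative.

	#include <sys/reboot.h>		/* boothowto, AB_DEBUG */

	if (boothowto & AB_DEBUG) {
		/* printf still goes to the console, but this only runs
		 * when the operator booted with -x (AB_DEBUG). */
		printf("entropy: consolidating less than full entropy\n");
	}
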
riastradh 4c8ed8b3ce entropy: Reduce `no seed from bootloader' message to debug level.
This does not necessarily indicate a problem -- only x86 and arm pass
a seed from the bootloader anyway -- so it makes for an always-on
warning on some platforms, including all rump kernels, which is not
helpful.
2021-01-21 17:33:55 +00:00
riastradh 3f5d9c7d23 entropy: Record number of time and data samples for userland.
This more or less follows the semantics of the RNDGETESTNUM and
RNDGETESTNAME ioctls to restore useful `rndctl -lv' output.

Specifically: We count the number of time or data samples entered
with rnd_add_*.  Previously it would count the total number of 32-bit
words in the data, rather than the number of rnd_add_* calls that
enter data, but I think the number of calls makes more sense here.
2021-01-16 02:21:26 +00:00
riastradh 36a480a170 entropy: Use a separate condvar for rndsource list lock.
Otherwise, two processes both waiting for entropy will dance around
waking each other up (by releasing the rndsource list lock) and going
back to sleep (waiting for entropy).

Witnessed on the armv7 testbed when /etc/security presumably ran
twice over a >day-long test, until the metaphorical plug got pulled:

net/if_ipsec/t_ipsec_natt (509/888): 2 test cases
    ipsecif_natt_transport_null: [ 37123.2631856] entropy: pid 1005 (dd) blocking due to lack of entropy
[256.523317s] Failed: atf-check failed; see the output of the test for details
    ipsecif_natt_transport_rijndaelcbc: [274.370791s] Failed: atf-check failed; see the output of the test for details
[532.486697s]
...
    puffs_lstat_symlink: [ 123442.1606517] entropy: pid 9499 (dd) blocking due to lack of entropy
[ 123442.1835067] entropy: pid 1005 (dd) blocking due to lack of entropy
[ 123442.1944600] entropy: pid 9499 (dd) blocking due to lack of entropy
[ 123442.1944600] entropy: pid 1005 (dd) blocking due to lack of entropy
...
2021-01-13 23:53:23 +00:00
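
A rough sketch of the design choice, with hypothetical names: one
condvar for `entropy is ready' and a separate one for `the rndsource
list lock was released', so releasing the list lock no longer wakes
threads that are merely waiting for entropy.

	kcondvar_t entropy_cv;		/* signalled when entropy is ready */
	kcondvar_t sourcelock_cv;	/* signalled when the rndsource list
					 * lock is released */

	/* Releasing the rndsource list lock wakes only its own waiters. */
	mutex_enter(&E->lock);
	E->sourcelock = NULL;		/* field name hypothetical */
	cv_broadcast(&sourcelock_cv);
	mutex_exit(&E->lock);

	/* /dev/random readers keep sleeping on entropy_cv until there
	 * really is enough entropy. */
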
riastradh 0c4aa85688 entropy: Downgrade consolidation warning from printf to LOG_DEBUG.
Candidate fix for PR kern/55458.  This message is rather technical,
and so is unlikely to be acted on by anyone not debugging the kernel
anyway.  Most likely, on any system where it is a real problem, there
will be another (less technical) entropy warning anyway.
2021-01-11 02:18:40 +00:00
thorpej 2ef9bcafb7 Use sel{record,remove}_knote(). 2020-12-11 03:00:09 +00:00
gson 30dac4875b Log a message when a process blocks due to a lack of entropy.
Discussed on tech-kern.
2020-09-29 07:51:01 +00:00
riastradh bdad8b2721 New system call getrandom() compatible with Linux and others.
Three ways to call:

getrandom(p, n, 0)              Blocks at boot until full entropy.
                                Returns up to n bytes at p; guarantees
                                up to 256 bytes even if interrupted
                                after blocking.  getrandom(0,0,0)
                                serves as an entropy barrier: return
                                only after system has full entropy.

getrandom(p, n, GRND_INSECURE)  Never blocks.  Guarantees up to 256
                                bytes even if interrupted.  Equivalent
                                to /dev/urandom.  Safe only after
                                successful getrandom(...,0),
                                getrandom(...,GRND_RANDOM), or read
                                from /dev/random.

getrandom(p, n, GRND_RANDOM)    May block at any time.  Returns up to n
                                bytes at p, but no guarantees about how
                                many -- may return as short as 1 byte.
                                Equivalent to /dev/random.  Legacy.
                                Provided only for source compatibility
                                with Linux.

Can also use flags|GRND_NONBLOCK to fail with EWOULDBLOCK/EAGAIN
without producing any output instead of blocking.

- The combination GRND_INSECURE|GRND_NONBLOCK is the same as
  GRND_INSECURE, since GRND_INSECURE never blocks anyway.

- The combinations GRND_INSECURE|GRND_RANDOM and
  GRND_INSECURE|GRND_RANDOM|GRND_NONBLOCK are nonsensical and fail
  with EINVAL.

As proposed on tech-userlevel, tech-crypto, tech-security, and
tech-kern, and subsequently adopted by core (minus the getentropy part
of the proposal, because other operating systems and participants in
the discussion couldn't come to an agreement about getentropy and
blocking semantics):

https://mail-index.netbsd.org/tech-userlevel/2020/05/02/msg012333.html
2020-08-14 00:53:15 +00:00
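
A userland usage sketch of the three modes described above.

	#include <sys/random.h>

	#include <err.h>
	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint8_t buf[32];

		/* Block at boot until the system has full entropy. */
		if (getrandom(buf, sizeof buf, 0) == -1)
			err(1, "getrandom");

		/* Never blocks; like /dev/urandom.  Only safe after a
		 * successful blocking call or a read from /dev/random. */
		if (getrandom(buf, sizeof buf, GRND_INSECURE) == -1)
			err(1, "getrandom(GRND_INSECURE)");

		/* Fail with EAGAIN instead of blocking; may also return
		 * fewer than the requested bytes. */
		if (getrandom(buf, sizeof buf, GRND_RANDOM|GRND_NONBLOCK) == -1 &&
		    errno != EAGAIN)
			err(1, "getrandom(GRND_RANDOM|GRND_NONBLOCK)");

		printf("ok\n");
		return 0;
	}
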
riastradh a3f52b6ec2 Don't invoke callbacks of rndsources with collection disabled. 2020-05-12 20:50:17 +00:00
riastradh 5b75316950 Make rndctl -E/-C reset entropy accounting.
If we don't trust a source, it's unreasonable to trust any entropy it
previously provided, and we don't have any way to undo only the
effects of that source, so just zero our estimate of the entropy in
the pool and start over.

(However, keep the samples already in the pool -- just treat them as
though they had zero entropy and start gathering more.)
2020-05-10 02:56:12 +00:00
riastradh 2bd92f80a9 Fix comments. 2020-05-10 01:29:40 +00:00
riastradh d5f6e51db3 Use a temporary pool to consolidate entropy atomically.
There was a low-probability race with the entropy consolidation
logic: calls to entropy_extract at the same time as consolidation is
happening might witness partial contributions from the CPUs when
needed=256, say 64 bits at a time.

To avoid this, feed everything from the per-CPU pools into a
temporary pool, and then feed the temporary pool into the global pool
under the lock at the same time as we update needed.
2020-05-10 00:08:12 +00:00
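
A rough shape of the fix above, with placeholder helpers: everything
from the per-CPU pools is gathered into a stack-local temporary pool,
and only the final transfer into the global pool -- together with the
update of `needed' -- happens under the lock.

	struct entpool tmp;
	uint8_t buf[ENTROPY_CAPACITY];

	memset(&tmp, 0, sizeof tmp);

	/* Gather each CPU's pool into the temporary pool, one CPU at a
	 * time, without holding the global lock. */
	example_gather_percpu_into(&tmp);	/* placeholder helper */

	/* Then feed the temporary pool into the global pool and update
	 * `needed' in one critical section, so a concurrent
	 * entropy_extract never sees a partial contribution. */
	entpool_extract(&tmp, buf, sizeof buf);	/* internal API, assumed */
	mutex_enter(&E->lock);
	entpool_enter(&E->pool, buf, sizeof buf);
	/* ... update E->needed under the same lock ... */
	mutex_exit(&E->lock);

	explicit_memset(buf, 0, sizeof buf);
	explicit_memset(&tmp, 0, sizeof tmp);
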
riastradh 998f36ada6 Prune dead branch. 2020-05-09 06:12:32 +00:00
riastradh 9dc4826f31 Make variable unused outside kern_entropy.c static. 2020-05-08 15:54:11 +00:00