Commit Graph

35 Commits

christos
e7ae23fd9e include "ioconf.h" to get the 'void <driver>attach(int count);' prototype. 2015-08-20 14:40:16 +00:00
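
For context, a minimal sketch of what this change looks like for a pseudo-device; "rnd" stands in for the driver touched by this commit and the body is illustrative. config(1) generates ioconf.h containing the 'void <driver>attach(int count);' prototype, so the .c file includes that header instead of declaring the function by hand:

#include <sys/param.h>
#include "ioconf.h"	/* config(1)-generated; declares: void rndattach(int); */

void
rndattach(int num)
{
	/*
	 * Pseudo-device attach hook; 'num' is the count from the
	 * "pseudo-device rnd" line in the kernel configuration.
	 */
}
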
riastradh
b8cae62e30 Mark some variables __read_mostly. 2015-04-21 12:55:57 +00:00
riastradh
808c0b52c7 Sort includes. 2015-04-21 12:47:33 +00:00
riastradh
b1dc061a01 Xor, not ior, to combine bits of binuptime for rnd_counter. 2015-04-21 12:07:31 +00:00
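
A sketch of the idea behind this fix (not the actual rnd_counter() code; the function name is illustrative): fold the binuptime(9) snapshot together with XOR, because inclusive-OR would quickly drive every bit of the result to 1.

#include <sys/types.h>
#include <sys/time.h>	/* struct bintime, binuptime(9) */

/* Illustrative substitute cycle counter for machines without a cheap CPU counter. */
static uint32_t
sample_counter(void)
{
	struct bintime bt;

	binuptime(&bt);
	/* XOR keeps the fast-changing low-order bits visible; '|' would saturate. */
	return (uint32_t)(bt.sec ^ bt.frac ^ (bt.frac >> 32));
}
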
riastradh
e75aac8fa3 Move substantive part of rnd_ioctl to kern_rndq.c. 2015-04-14 12:51:30 +00:00
riastradh
c101c28183 Eliminate last two cases of u_int*_t in rndpseudo.c. 2015-04-14 12:27:02 +00:00
riastradh
0ea00109fe Use binuptime, not microtime/nanotime as substitute cycle counter. 2015-04-14 12:25:41 +00:00
riastradh
550e06a8cd Include rndpool.h, rndsource.h here because we use them. 2015-04-14 12:21:12 +00:00
riastradh
5c52da5236 Use rnd_add_data, not rndpool_mtx and rndpool_add_data. 2015-04-14 12:14:09 +00:00
riastradh
6f03865532 Gather rnd-private declarations into <dev/rnd_private.h>.
Let's try to avoid putting externs in .c files where the compiler
can't check them.
2015-04-13 15:13:50 +00:00
riastradh
45acba1c8a If we're going to use the queue macros, use them as intended. 2015-04-13 14:56:22 +00:00
christos
aa5330adc0 add a couple of event counters. 2014-11-09 20:29:58 +00:00
tls
9b3a62bd20 Fixes and enhancements for polled entropy sources:
Add explicit enable/disable hooks for callout-driven sources (be more
power friendly).

Make "skew" source polled so it runs only when there is entropy
demand.

Adjust entropy collection from polled sources so it's processed
sooner.
2014-10-26 18:22:32 +00:00
matt
a99bf7b907 Try not to use f_data, use f_rndctx to get a correctly typed pointer. 2014-09-05 09:23:14 +00:00
tls
ea6af427bd Merge tls-earlyentropy branch into HEAD. 2014-08-10 16:44:32 +00:00
dholland
f9228f4225 Add d_discard to all struct cdevsw instances I could find.
All have been set to "nodiscard"; some should get a real implementation.
2014-07-25 08:10:31 +00:00
dholland
a68f9396b6 Change (mostly mechanically) every cdevsw/bdevsw I can find to use
designated initializers.

I have not built every extant kernel so I have probably broken at
least one build; however I've also found and fixed some wrong
cdevsw/bdevsw entries so even if so I think we come out ahead.
2014-03-16 05:20:22 +00:00
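
A sketch of the style this entry (and the d_discard entry above) converge on; the member list follows the NetBSD struct cdevsw of that era, and rnd_open stands in for the device's real open routine:

#include <sys/conf.h>

static dev_type_open(rnd_open);		/* placeholder open routine */

const struct cdevsw rnd_cdevsw = {
	.d_open = rnd_open,
	.d_close = noclose,
	.d_read = noread,
	.d_write = nowrite,
	.d_ioctl = noioctl,
	.d_stop = nostop,
	.d_tty = notty,
	.d_poll = nopoll,
	.d_mmap = nommap,
	.d_kqfilter = nokqfilter,
	.d_discard = nodiscard,		/* the member added in the entry above */
	.d_flag = D_OTHER | D_MPSAFE,
};

With designated initializers, omitted members default to NULL and a newly added member such as d_discard cannot silently shift the fields that follow it, which is what makes a tree-wide conversion like this one safe.
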
pooka
70f1780989 kill _RUMPKERNEL ifdef 2014-03-11 20:35:47 +00:00
riastradh
db4eba3c26 Fix spurious kassert on interrupted blocking read from /dev/random.
Return EINTR in this case instead.  While here, clarify comment.

Fixes PR kern/48119, simpler than the patch attached there, per
discussion with tls@, who had this right in the earlier version
of rndpseudo.c before I broke it (oops!).
2013-09-25 03:14:55 +00:00
riastradh
68f2e8ca36 When reading from /dev/random, block at most once in cprng_strong.
We are not obligated to return exactly as many bytes as requested,
and many applications -- notably those that use stdio or otherwise
buffered I/O to read from /dev/random -- try to read many more than
32 bytes at a time from /dev/random even if all they are about to use
is 32 bytes.

In this case, blocking until we have enough entropy to fill a large
buffer causes needless application delays, e.g. causing cgdconfig
(which reads from /dev/random with stdio) to hang at boot when trying
to configure a random-keyed device for swap.

Patch tested by Aran Clauson.  Fixes PR kern/48028.
2013-07-21 22:30:19 +00:00
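
A hedged sketch of the behaviour described here, not the actual rndpseudo.c read routine; the cprng_strong(9) call and its flags argument follow the contemporary interface as best I recall it. The point is that the read path asks once for the remaining bytes, tolerates a short count, and hands back whatever it got instead of looping until the whole request is filled:

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/fcntl.h>
#include <sys/systm.h>
#include <sys/uio.h>
#include <sys/cprng.h>

static int
random_read_sketch(cprng_strong_t *cprng, struct uio *uio, int ioflag)
{
	uint8_t buf[256];	/* size is illustrative */
	size_t want, got;
	int error;

	if (uio->uio_resid == 0)
		return 0;

	want = MIN(uio->uio_resid, sizeof(buf));

	/*
	 * May sleep at most once waiting for entropy and may return
	 * fewer than 'want' bytes; a short read is acceptable here.
	 */
	got = cprng_strong(cprng, buf, want,
	    (ioflag & FNONBLOCK) ? FNONBLOCK : 0);
	if (got == 0)
		error = EWOULDBLOCK;	/* non-blocking read, no entropy yet */
	else
		error = uiomove(buf, got, uio);

	memset(buf, 0, sizeof(buf));	/* don't leave output bits on the stack */
	return error;
}
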
pgoyette
e990cb232e Initialize some variables to make the vax build happy.
XXX Not sure why this problem only showed up on vax builds.
2013-07-02 13:27:42 +00:00
riastradh
a7f90b2fd2 Fix races in /dev/u?random initialization and accounting.
- Push /dev/random `information-theoretic' accounting into cprng(9).
- Use percpu(9) for the per-CPU CPRNGs.
- Use atomics with correct memory barriers for lazy CPRNG creation.
- Remove /dev/random file kmem grovelling from fstat(1).
2013-07-01 15:22:00 +00:00
riastradh
6290b0987e Rework rndsink(9) abstraction and adapt arc4random(9) and cprng(9).
rndsink(9):
- Simplify API.
- Simplify locking scheme.
- Add a man page.
- Avoid races in destruction.
- Avoid races in requesting entropy now and scheduling entropy later.

Periodic distribution of entropy to sinks reduces the need for the
last one, but this way we don't need to rely on periodic distribution
(e.g., in a future tickless NetBSD).

rndsinks_lock should probably eventually merge with the rndpool lock,
but we'll put that off for now.

cprng(9):
- Make struct cprng_strong opaque.
- Move rndpseudo.c parts that futz with cprng guts to subr_cprng.c.
- Fix kevent locking.  (Is kevent locking documented anywhere?)
- Stub out rump cprng further until we can rumpify rndsink instead.
- Strip code to grovel through struct cprng_strong in fstat.
2013-06-23 02:35:23 +00:00
tls
5819ac2839 Convert the entropy pool framework from pseudo-callout-driven to
soft interrupt driven operation.

Add a polling mode of operation -- now we can ask hardware random number
generators to top us up just when we need it (bcm2835_rng and amdpm
converted as examples).

Fix a stall noticed with repeated reads from /dev/random while testing.
2013-06-13 00:55:01 +00:00
christos
3f97768e4e move context struct to a header for the benefit of fstat. 2012-11-25 15:29:24 +00:00
tls
a918f11452 Fix two problems that could cause /dev/random to not wake up readers when entropy became available. 2012-05-19 16:00:41 +00:00
tls
848fd25c3d Fix a bug and a compilation problem. Bug: spin mutexes don't have owners,
so KASSERT(!mutex_owned()) shouldn't be used to assert that the current
LWP does not have the mutex.  Compilation problem: explicitly include
sys/mutex.h from rnd.h so evbarm builds again.
2012-04-20 21:57:33 +00:00
tls
8e1a1c9f45 Address multiple problems with rnd(4)/cprng(9):
1) Add a per-cpu CPRNG to handle short reads from /dev/urandom so that
   programs like perl don't drain the entropy pool dry by repeatedly
   opening, reading 4 bytes, closing.

2) Really fix the locking around reseeds and destroys.

3) Fix the opportunistic-reseed strategy so it actually works, reseeding
   existing RNGs once each (as they are used, so idle RNGs don't get
   reseeded) until the pool is half empty or newly full again.
2012-04-17 02:50:38 +00:00
drochner
8e1ae09c43 reorder initialization to improve error handling in case the system
runs out of file descriptors; avoids a LOCKDEBUG panic due to double
mutex initialization
2012-03-30 20:15:18 +00:00
apb
aed644df58 Revert previous; the #include was already present, and I got confused
by a merge error.
2011-12-20 13:42:19 +00:00
apb
69662c7cbb #include "opt_compat_netbsd.h" 2011-12-20 12:45:00 +00:00
apb
381a814261 Add COMPAT_50 and COMPAT_NETBSD32 compatibility code for rnd(4)
ioctl commands.

Tested with "rndctl -ls" using an old 32-bit version of rndctl(8)
(built for NetBSD-5.99.56/i386) and a new 64-bit kernel
(NetBSD-5.99.59/amd64).
2011-12-19 21:53:52 +00:00
apb
7c0101b055 Return ENOTTY, not EINVAL, when the ioctl command is unrecognised. 2011-12-19 21:44:08 +00:00
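
A minimal illustration of the convention (not the actual rnd_ioctl code): ENOTTY is the answer for a command the device does not understand at all, while EINVAL stays reserved for recognised commands given bad arguments.

#include <sys/types.h>
#include <sys/errno.h>
#include <sys/filio.h>

static int
rnd_ioctl_sketch(u_long cmd)
{
	switch (cmd) {
	case FIONBIO:		/* recognised; real handling omitted in this sketch */
	case FIOASYNC:
		return 0;
	default:
		return ENOTTY;	/* unknown command for this device, not EINVAL */
	}
}
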
drochner
f8ac16bb44 make this build with RND_DEBUG 2011-12-19 11:10:08 +00:00
tls
6e1dd068e9 Separate /dev/random pseudodevice implementation from kernel entropy pool
implementation.  Rewrite pseudodevice code to use cprng_strong(9).

The new pseudodevice is cloning, so each caller gets bits from a stream
generated with its own key.  Users of /dev/urandom get their generators
keyed on a "best effort" basis -- the kernel will rekey generators
whenever the entropy pool hits the high water mark -- while users of
/dev/random get their generators rekeyed every time key-length bits
are output.

The underlying cprng_strong API can use AES-256 or AES-128, but we use
AES-128 because of concerns about related-key attacks on AES-256.  This
improves performance (and reduces entropy pool depletion) significantly
for users of /dev/urandom but does cause users of /dev/random to rekey
twice as often.

Also fixes various bugs (including some missing locking and a reseed-counter
overflow in the CTR_DRBG code) found while testing this.

For long reads, this generator is approximately 20 times as fast as the
old generator (dd with bs=64K yields 53MB/sec on a 2 GHz Core 2 instead of
2.5MB/sec) and also uses a separate mutex per instance so concurrency
is greatly improved.  For reads of typical key sizes for modern
cryptosystems (16-32 bytes) performance is about the same as the old
code: a little better for 32 bytes, a little worse for 16 bytes.
2011-12-17 20:05:38 +00:00
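
A hedged sketch of the cloning-open pattern the new pseudodevice uses: each open gets its own context (and thus its own keyed generator) via fd_clone(9). The fd_allocfile/fd_clone calls are the standard kernel interface; struct rnd_ctx, rnd_fileops, and the minor-number test are stand-ins for whatever rndpseudo.c actually defines:

#include <sys/types.h>
#include <sys/conf.h>
#include <sys/cprng.h>
#include <sys/file.h>
#include <sys/filedesc.h>
#include <sys/kmem.h>

struct rnd_ctx {			/* illustrative per-open state */
	cprng_strong_t	*rc_cprng;	/* generator keyed for this open */
	bool		rc_hard;	/* true for /dev/random-style rekeying */
};

extern const struct fileops rnd_fileops;	/* per-open read/ioctl/close ops */

static int
rnd_open_sketch(dev_t dev, int flags, int fmt, struct lwp *l)
{
	struct rnd_ctx *ctx;
	struct file *fp;
	int fd, error;

	error = fd_allocfile(&fp, &fd);
	if (error)
		return error;

	ctx = kmem_alloc(sizeof(*ctx), KM_SLEEP);
	ctx->rc_cprng = NULL;			/* keyed lazily on first read */
	ctx->rc_hard = (minor(dev) == 0);	/* assumption: minor 0 is /dev/random */

	/* Hand the caller a descriptor backed by this private context. */
	return fd_clone(fp, fd, flags, &rnd_fileops, ctx);
}
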