Commit Graph

1042 Commits

Author SHA1 Message Date
martin
8be1e866ab PR 55239: initialize all RAS sections for non-MP configurations 2020-05-15 15:20:40 +00:00
msaitoh
8012ca3f0e Remove extra semicolon. 2020-05-14 08:34:17 +00:00
maxv
712bef211f Use the hotpatch framework when patching _atomic_cas_64. 2020-05-01 08:32:50 +00:00
riastradh
5084c1b50f Rewrite entropy subsystem.
Primary goals:

1. Use cryptography primitives designed and vetted by cryptographers.
2. Be honest about entropy estimation.
3. Propagate full entropy as soon as possible.
4. Simplify the APIs.
5. Reduce overhead of rnd_add_data and cprng_strong.
6. Reduce side channels of HWRNG data and human input sources.
7. Improve visibility of operation with sysctl and event counters.

Caveat: rngtest is no longer used generically for RND_TYPE_RNG
rndsources.  Hardware RNG devices should have hardware-specific
health tests.  For example, checking for two repeated 256-bit outputs
works to detect AMD's 2019 RDRAND bug.  Not all hardware RNGs are
necessarily designed to produce exactly uniform output.

ENTROPY POOL

- A Keccak sponge, with test vectors, replaces the old LFSR/SHA-1
  kludge as the cryptographic primitive.

- `Entropy depletion' is available for testing purposes with a sysctl
  knob kern.entropy.depletion; otherwise it is disabled, and once the
  system reaches full entropy it is assumed to stay there as far as
  modern cryptography is concerned.

- No `entropy estimation' based on sample values.  Such `entropy
  estimation' is a contradiction in terms, dishonest to users, and a
  potential source of side channels.  It is the responsibility of the
  driver author to study the entropy of the process that generates
  the samples.

- Per-CPU gathering pools avoid contention on a global queue.

- Entropy is occasionally consolidated into the global pool -- as soon as
  it's ready, if we've never reached full entropy, and with a rate
  limit afterward.  Operators can force consolidation now by running
  sysctl -w kern.entropy.consolidate=1.

- rndsink(9) API has been replaced by an epoch counter which changes
  whenever entropy is consolidated into the global pool.
  . Usage: Cache entropy_epoch() when you seed.  If entropy_epoch()
    has changed when you're about to use whatever you seeded, reseed
    (see the sketch after this list).
  . Epoch is never zero, so initialize cache to 0 if you want to reseed
    on first use.
  . Epoch is -1 iff we have never reached full entropy -- in other
    words, the old rnd_initial_entropy is (entropy_epoch() != -1) --
    but it is better if you check for changes rather than for -1, so
    that if the system estimated its own entropy incorrectly, entropy
    consolidation has the opportunity to prevent future compromise.

- Sysctls and event counters provide operator visibility into what's
  happening:
  . kern.entropy.needed - bits of entropy short of full entropy
  . kern.entropy.pending - bits known to be pending in per-CPU pools,
    can be consolidated with sysctl -w kern.entropy.consolidate=1
  . kern.entropy.epoch - number of times consolidation has happened,
    never 0, and -1 iff we have never reached full entropy
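
A minimal sketch of the reseed-on-epoch-change usage described in the
rndsink(9) item above; the softc layout and mydrv_reseed() helper are
hypothetical, and only entropy_epoch() and its header come from the API
described here:

#include <sys/entropy.h>

struct mydrv_softc {
        unsigned        sc_entropy_epoch;       /* epoch cached at last (re)seed */
        /* ... key material seeded from the entropy pool ... */
};

static void
mydrv_maybe_reseed(struct mydrv_softc *sc)
{
        unsigned epoch = entropy_epoch();

        /*
         * sc_entropy_epoch starts at 0, which is never a valid epoch,
         * so the first call always reseeds.
         */
        if (sc->sc_entropy_epoch != epoch) {
                mydrv_reseed(sc);               /* hypothetical helper */
                sc->sc_entropy_epoch = epoch;
        }
}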

CPRNG_STRONG

- A cprng_strong instance is now a collection of per-CPU NIST
  Hash_DRBGs.  There are only two in the system: user_cprng for
  /dev/urandom and sysctl kern.?random, and kern_cprng for kernel
  users which may need to operate in interrupt context up to IPL_VM
  (see the usage sketch at the end of this section).

  (Calling cprng_strong in interrupt context does not strike me as a
  particularly good idea, so I added an event counter to see whether
  anything actually does.)

- Event counters provide operator visibility into when reseeding
  happens.
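
A minimal usage sketch of the shared kern_cprng instance per cprng(9);
the surrounding function is hypothetical:

#include <sys/types.h>
#include <sys/cprng.h>

static void
example_genkey(uint8_t *key, size_t keylen)
{
        /* kern_cprng may be used from interrupt context up to IPL_VM. */
        cprng_strong(kern_cprng, key, keylen, 0);
}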

INTEL RDRAND/RDSEED, VIA C3 RNG (CPU_RNG)

- Unwired for now; will be rewired in a subsequent commit.
2020-04-30 03:28:18 +00:00
maxv
129e4c2b33 Use the hotpatch framework for LFENCE/MFENCE. 2020-04-26 14:49:17 +00:00
maxv
f012eec2fe Remove unused argument in macro. 2020-04-26 13:59:44 +00:00
maxv
e18b0a4638 Remove unused. 2020-04-26 13:54:02 +00:00
maxv
88b0d179cd Drop the hardcoded array, use the hotpatch section. 2020-04-26 13:37:14 +00:00
bouyer
c24c993fe4 Merge the bouyer-xenpvh branch, bringing in Xen PV driver support under HVM
guests in GENERIC.
Xen support can be disabled at runtime with
boot -c
disable hypervisor
2020-04-25 15:26:16 +00:00
rin
27f1060c62 Restrict usage of m68k assembler versions of {,u}divsi3 and {,u}modsi3 to
kernel and bootloader for 68010.

They require a special calling convention for udivsi3 and cannot be
mixed with the normal routines provided by libgcc or compiler_rt.
However, there is no problem using them in a controlled situation,
i.e., the kernel and standalone programs.

Note that this does not affect m68k ports other than sun2 at all, since
code generated by gcc does not call these routines.

Assembler files are moved from common/lib/libc/arch/m68k/gen to
sys/lib/libkern/arch/m68k so that they are not compiled into libc.

Revert hack introduced to lib/libc/compiler_rt/Makefile.inc rev 1.37:
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/compiler_rt/Makefile.inc#rev1.37

Proposed on port-sun2@ with no response...
(Again, this does not affect m68k ports other than sun2.)
http://mail-index.netbsd.org/port-sun2/2020/03/10/msg000102.html
2020-04-22 11:28:56 +00:00
ryo
adc5085fcd Fixed to not use the "br" instruction. Branch Target Identification (BTI) doesn't like "br".
requested by maxv@
2020-04-11 05:12:52 +00:00
ad
5ff779fe83 Match the naming convention in the file. 2020-04-11 01:46:47 +00:00
ad
f1ed81b8fc PR kern/54979 (radixtree might misbehave if ENOMEM)
- radix_tree_insert_node(): if the insert failed due to ENOMEM, roll back
  any updates made to the tree.

- radix_tree_grow(): either succeed or fail, never make partial adjustments
  to the tree.

- radix_tree_await_memory(): allocate & free the maximum possible number of
  nodes required by any insertion.
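
A hedged sketch of the retry pattern these changes enable; not the
committed code, and the wrapper function is illustrative only:

#include <sys/errno.h>
#include <sys/radixtree.h>

static int
insert_with_retry(struct radix_tree *t, uint64_t key, void *node)
{
        int error;

        for (;;) {
                error = radix_tree_insert_node(t, key, node);
                if (error != ENOMEM)
                        return error;   /* 0 on success, or another error */
                /* The tree is left unchanged on ENOMEM; wait and retry. */
                radix_tree_await_memory();
        }
}
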
2020-04-10 23:43:05 +00:00
ad
5e0de1ec0e Rename radix_tree_node_clean_p() to radix_tree_node_sum() and have it return
the computed sum.  Use it to replace any_children_tagmask().  Simpler & faster. 2020-04-10 21:56:41 +00:00
2020-04-10 21:56:41 +00:00
skrll
a6a8f0073d Fix KASAN build on aarch64 2020-04-07 08:07:58 +00:00
wiz
81e8a3b48e Teach dk(4) about ZFS.
"looks ok" mlelstv
2020-03-30 08:36:09 +00:00
rin
880f104786 For the kernel, rename ffs to __ffssi2 rather than having a weak symbol.
This enables us to load modules that depend on __ffssi2.

It is difficult to deal with weak symbols consistently in the in-kernel
linker. See the explanation by pgoyette on tech-kern:

    http://mail-index.netbsd.org/tech-kern/2020/03/09/msg026148.html

Also, we do not currently provide ffs(9) as a kernel routine.
2020-03-10 08:15:44 +00:00
rin
ee127cc58e Add missing END() for coldfire. 2020-03-09 13:36:10 +00:00
skrll
7680719e6b Give the thumb atomic ops a chance of working 2020-03-09 11:21:54 +00:00
rin
da092eb7d2 Remove wrong comment (copy-paste from somewhere);
__mulsi3 does not depend on __udivsi3.
2020-03-09 08:29:11 +00:00
kamil
7f5eec67a9 Add support for alignment_assumptions in uubsan
Cherry-pick from FreeBSD:

From 7c1bc5ffc2fa68ddc76e5ea8a3a1a6fdfeee57f0 Mon Sep 17 00:00:00 2001
From: andrew <andrew@FreeBSD.org>
Date: Tue, 28 May 2019 09:12:15 +0000
Subject: [PATCH] Teach the kernel KUBSAN runtime about alignment_assumption

This checks that the alignment of a given pointer is sufficient for the
requested alignment. This fixes the build with a recent
llvm/clang.

Sponsored by:	DARPA, AFRL
2020-03-08 21:35:03 +00:00
rin
fd255ae543 Implement workaround for IBM405 Errata 77 (aka CPU_210), where
interrupted stwcx. may errantly write data to memory:

    https://elinux.org/images/1/1d/Ppc405gp-errata.pdf

This is because stwcx. is split into two pieces in the pipeline.

We need to
(1) insert dcbt before every stwcx. instruction, as well as
(2) insert sync before every rfi/rfci instruction.
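
For illustration only (not the committed workaround), item (1) applied to
a compare-and-swap loop, with dcbt touching the line immediately before
the stwcx.:

#include <sys/types.h>

static inline int
atomic_cas_32_405(volatile uint32_t *p, uint32_t old, uint32_t new)
{
        uint32_t cur;

        __asm volatile(
        "1:     lwarx   %0,0,%2         \n"
        "       cmpw    %0,%3           \n"
        "       bne-    2f              \n"
        "       dcbt    0,%2            \n"     /* Errata 77 workaround */
        "       stwcx.  %4,0,%2         \n"
        "       bne-    1b              \n"
        "2:"
            : "=&r"(cur), "+m"(*p)
            : "r"(p), "r"(old), "r"(new)
            : "cr0", "memory");

        return cur == old;
}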

It is unclear which processors are affected, but according to Linux,
all 405-based cores up until 405GPR and 405EP are affected:

    https://github.com/torvalds/linux/blob/master/arch/powerpc/platforms/40x/Kconfig#L140

For the kernel, this workaround can be restricted to affected processors.
However, for kernel modules and userland, we have to enable it for all
32-bit powerpc archs in order to share common binaries as before.

Proposed on port-powerpc:

    http://mail-index.netbsd.org/port-powerpc/2020/02/21/msg003583.html
2020-03-01 23:23:36 +00:00
fox
819b6be2db common/lib/libc/stdlib: Fix possible signed integer overflow.
common/lib/libc/stdlib/random.c:482:6 can result in signed integer overflow.

This bug was reported by UBSan runs.

The change has been tested using the following program, which generates random
numbers with both the old and the new library and can be used to verify the
correctness of the library after the change.

#include <stdio.h>
#include <stdlib.h>

#define COUNT 1000 * 1000

int
main(void)
{
        int i;
        FILE *fp = fopen("numbers.txt", "w");

        if (fp == NULL)
                return 1;

        srandom(0xdeadbeef);

        for(i = 0; i < COUNT; i++) {
                fprintf(fp, "%ld\n", random());
        }

        fclose(fp);

        return 0;
}

Reviewed by: riastradh@ , kamil@
2020-02-22 14:47:29 +00:00
ad
e7cb9801ce Some boot blocks are too big now; only compare in big chunks if !_STANDALONE. 2020-01-29 09:18:26 +00:00
ad
81d0e040cc gang_lookup_scan(): if a dense scan and the first sibling doesn't match,
the scan is finished.
2020-01-28 22:20:45 +00:00
ad
e4d889e5b3 Add a radix_tree_await_memory(), for kernel use. 2020-01-28 16:33:34 +00:00
ad
038210787f Drop the alignment check if __NO_STRICT_ALIGNMENT (x86, m68k, vax). 2020-01-27 22:22:03 +00:00
ad
42a88f8ef1 bcmp() / memcmp(): compare in uintptr_t sized chunks when it's easy to. 2020-01-27 22:13:39 +00:00
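
A hedged sketch of what the bcmp()/memcmp() changes in the surrounding
commits describe: compare word-at-a-time when both pointers are suitably
aligned (or unconditionally under __NO_STRICT_ALIGNMENT), then finish
bytewise. Illustrative only, not the committed code:

#include <stddef.h>
#include <stdint.h>

int
memcmp_sketch(const void *s1, const void *s2, size_t n)
{
        const unsigned char *p1 = s1, *p2 = s2;

        /* Compare in uintptr_t-sized chunks while the words match. */
        if ((((uintptr_t)p1 | (uintptr_t)p2) & (sizeof(uintptr_t) - 1)) == 0) {
                while (n >= sizeof(uintptr_t) &&
                    *(const uintptr_t *)p1 == *(const uintptr_t *)p2) {
                        p1 += sizeof(uintptr_t);
                        p2 += sizeof(uintptr_t);
                        n -= sizeof(uintptr_t);
                }
        }

        /* Finish, or locate the differing byte, one byte at a time. */
        for (; n > 0; n--, p1++, p2++) {
                if (*p1 != *p2)
                        return *p1 - *p2;
        }
        return 0;
}
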
ad
78780d1e41 x86 uses the C versions of bcmp() and memcmp() now. 2020-01-27 22:09:21 +00:00
ad
6c03ec4c5b Back out previous, it's broken. 2020-01-16 09:23:43 +00:00
ad
198cbac382 Rewrite bcmp() & memcmp() to not use REP CMPS. Seems about 5-10x faster for
small strings on modern hardware.
2020-01-15 10:56:49 +00:00
para
0041cdea95 initialize radix_tree_node_cache with PR_LARGECACHE.
This increases the cache groups from 15 to 63 items in order
to reduce traffic between pool cache layers.
This is the same as for other highly frequented pool caches, such as the pvpool and anonpool.
2020-01-12 20:00:41 +00:00
skrll
79fe88a4f8 Trailing whitespace 2020-01-06 13:21:18 +00:00
ad
6ee4781f70 proc_compare(): weed out zombies before doing anything else. From skrll@. 2020-01-06 11:16:35 +00:00
christos
a340b0e513 Formalize that the printf formats should be uintmax_t so we can
uniformly use 'j' in the user-provided formatting strings instead
of depending on _LP64 to use 'll' or 'l' (and the PRI macros). The
alternative is to parse the printf format manually to determine
which modifier to apply, which would make this transparent to the
user (they could still always use '%u' or '%x'), but that's too
painful.
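
A minimal illustration of the convention (hypothetical call site, not from
the commit): cast the value to uintmax_t and always use the 'j' length
modifier, independent of _LP64:

#include <inttypes.h>
#include <stdio.h>

static void
print_count(uint64_t count)
{
        printf("count=%ju\n", (uintmax_t)count);
}
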
2019-12-06 19:36:21 +00:00
ad
b8255b9f0e Fix warning that appears when compiling in kernel. 2019-12-05 19:03:39 +00:00
ad
9afd1ce310 Delete the counter from "struct radix_tree_node", and in the one place we
need a non-zero check, substitute with a deterministic bitwise OR of all
values in the node.  The structure then becomes cache line aligned.

For each node we now need only touch 2 cache lines instead of 3, which makes
all the operations faster (measured), amortises the cost of not having a
counter, and will avoid intra-pool-page false sharing on MP.
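
A hedged sketch of the substitution: a bitwise OR across the node's slots
serves the non-zero check without a stored counter. Names and the fanout
below are illustrative, not the real ones:

#include <stdint.h>

#define SKETCH_PTR_PER_NODE     16      /* illustrative fanout */

struct sketch_node {
        void *n_ptrs[SKETCH_PTR_PER_NODE];
};

static uintptr_t
sketch_node_sum(const struct sketch_node *n)
{
        uintptr_t sum = 0;
        unsigned i;

        /* Deterministic bitwise OR of all slots: zero iff the node is empty. */
        for (i = 0; i < SKETCH_PTR_PER_NODE; i++)
                sum |= (uintptr_t)n->n_ptrs[i];
        return sum;
}
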
2019-12-05 18:50:41 +00:00
ad
0558f52127 Merge radixtree changes from yamt-pagecache. 2019-12-05 18:32:25 +00:00
roy
8569e6d26f Make it easier to use strtoi and strtou in downstream applications
without the need to define HAVE_NBTOOL_CONFIG_H and yet allow -Wundef
not to log any warnings.
2019-11-28 12:33:23 +00:00
kamil
0b65214dd3 uubsan: Implement function_type_mismatch_v1
RTTI is not supported by micro-UBSan (by design) and this is now a stub
handler.
2019-11-01 14:54:07 +00:00
kamil
aeb81341f9 uubsan: Handle implicit_conversion 2019-10-30 00:13:46 +00:00
maya
a11c5ab81a Remove htonll and ntohll as symbols from aarch64 libc.
Other architectures do not define them, and so we don't provide a
function declaration in any header.

This means a package may detect it with a link-test and then fail
due to the missing declaration, like sysutils/collectd currently does.

Done this way as aarch64 has not had a release yet. Discussed with releng.
2019-10-12 09:22:36 +00:00
mrg
8c38a0de66 work around a GCC 8 warning:
- code that will be unreachable on platforms with
  sizeof(double) != sizeof(unsigned long) triggered a valid out
  of bounds warning.  avoid the error by using sizeof ul.
- also assert that the sizes are the same if entering here.

both from kamil@.
2019-10-04 12:12:47 +00:00
skrll
a58be5d164 Trailing whitespace. 2019-09-16 12:40:40 +00:00
skrll
2ce102b5b1 __sync_{,x}or_and_fetch_8 should return new value... make it so. 2019-09-15 14:55:04 +00:00
skrll
8453e2f7da __sync_or_and_fetch_8 should return new value... make it do that. 2019-09-15 11:14:15 +00:00
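
The contract being fixed in the two __sync_*_8 commits above, shown with
the generic GCC builtins (hedged userland illustration):

#include <assert.h>
#include <stdint.h>

int
main(void)
{
        uint64_t x = 0x1;

        assert(__sync_or_and_fetch(&x, 0x2) == 0x3);    /* returns the new value */
        x = 0x1;
        assert(__sync_fetch_and_or(&x, 0x2) == 0x1);    /* returns the old value */
        return 0;
}
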
kamil
1a5f018b01 Enhance the support of LLVM sanitizers
Define _REENTRANT for MKSANITIZER build. This is needed for at least stdio
code. This caused new build issues with duplicated symbols in a few places
and rump kernel code picking different code paths borrowed from libc.
Handle all this in one go.

Add bsd.sanitizer.mk to share common code used by programs and libraries.

Switch from realall to beforeinstall target in .syms files. This is more
reliable in MKSANITIZER.
2019-08-27 22:48:53 +00:00
para
50fe1a8b2a add the now-required includes for memcpy prototypes, analogous to other hash functions
(fix the build)
2019-08-20 15:17:02 +00:00
riastradh
23d950dc47 Fix byte order bug in murmurhash and pacify sanitizers. 2019-08-20 12:33:26 +00:00
joerg
3dbc6e4c72 ARMv6KZ has been misspelled by GCC since forever, but clang only
provides the correct name. Support both.
2019-08-02 12:07:24 +00:00