Commit Graph

278258 Commits

ryo
561087550e Switch the Icache sync operation to the necessary and sufficient one according to the CTR_EL0.DIC and CTR_EL0.IDC flags.
If CTR_EL0.DIC=1, Icache invalidation is not required.
If CTR_EL0.IDC=1, Dcache clean before Icache invalidation is not required.
If CLIDR_EL1.LoC is 0, or CLIDR_EL1.LoUIS and CLIDR_EL1.LoUU are 0, Dcache clean is not required either.

SEE ALSO ARMARM, "CTR_EL0 Cache Type Register", and "CLIDR_EL1 Cache Level ID Register"
2020-07-01 07:59:16 +00:00
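A minimal C sketch of the decision above (not the committed code): dcache_clean() and icache_inval() are hypothetical helpers, the CLIDR_EL1 checks and the trailing DSB/ISB barriers are omitted, and the DIC/IDC bit positions follow the ARMARM.

    #include <stddef.h>
    #include <stdint.h>

    #define CTR_EL0_DIC (1UL << 29) /* IC IVAU not required for coherence */
    #define CTR_EL0_IDC (1UL << 28) /* DC CVAU not required for coherence */

    void dcache_clean(uintptr_t, size_t);  /* hypothetical */
    void icache_inval(uintptr_t, size_t);  /* hypothetical */

    static inline uint64_t
    read_ctr_el0(void)
    {
            uint64_t ctr;

            __asm volatile("mrs %0, ctr_el0" : "=r"(ctr));
            return ctr;
    }

    static void
    icache_sync_range(uintptr_t va, size_t len)
    {
            uint64_t ctr = read_ctr_el0();

            if ((ctr & CTR_EL0_IDC) == 0)
                    dcache_clean(va, len);  /* clean Dcache to PoU first */
            if ((ctr & CTR_EL0_DIC) == 0)
                    icache_inval(va, len);  /* invalidate Icache to PoU */
    }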
lukem
b252a0e204 use ggc-none.c not ggc-none.o in SRCS
not tested; based on a similar change to
 external/gpl3/gcc/usr.bin/lto-wrapper/Makefile
2020-07-01 07:54:24 +00:00
lukem
1defdf0961 bsd.dep.mk: fix "make tags" (again)
[repeat revision 1.85]

Fix "make tags" to actually build a tags file:
- Use !commands() instead of !target(), so that the rule actually works
- Write to ${.OBJDIR}/tags for read-only source (don't know why ${.TARGET}
  isn't sufficient).
- Only match *.[cly] from ${.ALLSRC} - just excluding *.h causes failures
  because of ${targ}: subdir-${targ} in bsd.subdir.mk.

Thanks to uwe@ for assistance.
2020-07-01 07:38:29 +00:00
jruoho
55abcd082f Add basic checks for a64l(3), l64a(3), and l64a_r(3). 2020-07-01 07:16:37 +00:00
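In the tests' spirit, a round-trip sketch (not the committed test code; l64a_r(3), the reentrant variant, is exercised the same way):

    #include <assert.h>
    #include <stdlib.h>

    int
    main(void)
    {
            long v = 123456L;

            /* l64a(3) encodes a nonnegative long in radix-64
             * ([./0-9A-Za-z]); a64l(3) decodes it again. */
            assert(a64l(l64a(v)) == v);
            assert(a64l("") == 0);  /* the empty string decodes to 0 */
            return 0;
    }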
martin
bf0d51809d Forbid gcc from whining about intended format truncation 2020-07-01 06:31:18 +00:00
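For context, a hedged sketch of what such a fix usually looks like (illustrative, not the committed change): the diagnostic is gcc's -Wformat-truncation, and one way to mark the truncation as intended is a pragma around the offending function.

    #include <stdio.h>

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wformat-truncation"
    void
    abbreviate(char buf[9], const char *longname)
    {
            /* Truncating to the buffer size is the point here. */
            snprintf(buf, 9, "%s", longname);
    }
    #pragma GCC diagnostic pop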
jruoho
7fed543511 Add basic tests for the rest of the mktemp(3) family of functions, including
a case for PR lib/55441.
2020-07-01 05:37:25 +00:00
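Typical use of the family's safe member, mkstemp(3), as such tests exercise it (a sketch, not the committed test code):

    #include <err.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
            char path[] = "/tmp/t_mktemp.XXXXXX";
            int fd;

            /* mkstemp(3) replaces the trailing Xs in place and
             * returns an open descriptor on the new file. */
            if ((fd = mkstemp(path)) == -1)
                    err(1, "mkstemp");
            (void)close(fd);
            (void)unlink(path);
            return 0;
    }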
uwe
1765a45b05 hline, vline - don't lose attributes when using default character.
Make default (wide) and non-wide behavior match.  If the character
argument has (only) attributes set, use them with the default line
character.

In the wide case, don't do the fallback in hline - it just calls
hline_set, which needs to do it anyway.  Fix the latter to check the
wcwidth of the right character and avoid division by zero.
2020-07-01 02:57:01 +00:00
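The fixed behavior in a small, hypothetical example: an attribute-only character argument now draws the default line character with that attribute instead of dropping it.

    #include <curses.h>

    int
    main(void)
    {
            initscr();
            /* Attribute-only argument: default ACS_HLINE, reversed. */
            mvhline(0, 0, A_REVERSE, 20);
            /* Explicit character plus attribute, for comparison. */
            mvhline(1, 0, ACS_HLINE | A_BOLD, 20);
            refresh();
            getch();
            endwin();
            return 0;
    }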
uwe
4eb5fad78b Oops. Fix y/x typo in the previous whline() fix for !HAVE_WCHAR. 2020-07-01 02:14:41 +00:00
riastradh
8747f41571 copystr is now in libkern; don't redefine it in rumpcopy.c.
Should fix build breakage from the copystr changes.
2020-07-01 00:42:13 +00:00
lukem
41b765e18c fix sets for MKKYUA 2020-06-30 23:51:47 +00:00
riastradh
aac1a7e566 Reallocate registers to avoid abusing callee-saves registers, v8-v15.
Forgot to consult the AAPCS before committing this earlier -- oops!

While here, take advantage of the 32 aarch64 simd registers to avoid
all stack spills.
2020-06-30 23:06:02 +00:00
riastradh
6d5a7eed7d Use `.arch_extension aes' for aese/aesmc/aesd/aesimc.
Unlike `.arch_extension crypto', this works with clang; both work
with gas, so we'll go with this.

Clang still can't handle aes_armv8_64.S -- it gets confused by
dup and mov on lanes, but this makes progress.
2020-06-30 21:53:39 +00:00
riastradh
b54ccdd478 Use .p2align rather than .align.
Apparently on arm, .align is actually an alias for .p2align, taking a
power of two rather than a number of bytes, so aes_armv8_64.o was
bloated to 32KB with obscene alignment when it only needed to be
barely past 4KB.

Do the same for the x86 aes_ni_64.S -- even though .align takes a
number of bytes rather than a power of two on x86, let's just stay
away from the temptations of the evil .align directive.
2020-06-30 21:41:03 +00:00
uwe
d9a8ae84b1 Fix indentation in the copyright.
Make it match its siblings in other files.
2020-06-30 21:27:18 +00:00
riastradh
aedb0d4e40 Tweak clang neon intrinsics so they build.
(this file is still a kludge)
2020-06-30 21:24:00 +00:00
riastradh
8039b48b5b New build.sh option: -c <compiler>
Could never remember what the incantation is to do a clang build, so
now it's just `build.sh -c clang'.
2020-06-30 21:22:19 +00:00
uwe
9c6a61e7ae whline - save/restore the y coordinate too.
Reaching the right side of the screen can cause a line wrap.
Forgot to apply the fix to the !HAVE_WCHAR case.
PR lib/55434
2020-06-30 21:10:13 +00:00
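The shape of the fix, sketched (not the committed code): save both coordinates around the drawing loop, because wrapping at the right edge changes y as well as x.

    #include <curses.h>

    static void
    draw_line(WINDOW *win, chtype ch, int n)
    {
            int oy, ox, i;

            getyx(win, oy, ox);             /* save y and x */
            for (i = 0; i < n; i++)
                    waddch(win, ch);        /* may wrap to the next line */
            wmove(win, oy, ox);             /* restore y too, not just x */
    }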
uwe
1eecb61d77 whline_set - save/restore the y coordinate too.
Reaching the right side of the screen can cause a line wrap.
PR lib/55434
2020-06-30 21:02:24 +00:00
riastradh
bd9707e06e New test sys/crypto/aes/t_aes.
Runs aes_selftest on all kernel AES implementations supported on the
current hardware, not just the preferred one.
2020-06-30 20:32:10 +00:00
msaitoh
24cca43843 If an error occurs in the sme_refresh function, pass ENVSYS_SINVALID.
OK'd by pgoyette.
2020-06-30 19:02:42 +00:00
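The pattern in a hypothetical refresh callback (struct foo_softc and foo_read() are made up for illustration; the envsys fields are the usual ones):

    #include <dev/sysmon/sysmonvar.h>

    struct foo_softc;
    int foo_read(struct foo_softc *, uint32_t, int32_t *); /* hypothetical */

    static void
    foo_refresh(struct sysmon_envsys *sme, envsys_data_t *edata)
    {
            struct foo_softc *sc = sme->sme_cookie;
            int32_t raw;

            if (foo_read(sc, edata->sensor, &raw) != 0) {
                    edata->state = ENVSYS_SINVALID; /* reading is invalid */
                    return;
            }
            edata->value_cur = raw;
            edata->state = ENVSYS_SVALID;
    }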
riastradh
c8c5c422ac Limit aes_neon to cpu_cortex | aarch64.
We won't use it on any other systems, and it doesn't build without
NEON anyway.  Verified earmv7hf GENERIC, aarch64 GENERIC64, and
earmv6 RPI2 all build with this.
2020-06-30 17:03:13 +00:00
maxv
82287798e3 be one-shot by default, with room for circular 2020-06-30 16:28:17 +00:00
maxv
64f849a4c1 fix file path 2020-06-30 16:22:55 +00:00
riastradh
1c86761fac New sysctl node hw.aes_impl for selected AES implementation. 2020-06-30 16:21:17 +00:00
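From userland that's `sysctl hw.aes_impl', or programmatically (illustrative, assuming the node returns the implementation name as a string):

    #include <sys/sysctl.h>

    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
            char buf[128];
            size_t len = sizeof(buf);

            if (sysctlbyname("hw.aes_impl", buf, &len, NULL, 0) == -1)
                    err(1, "hw.aes_impl");
            printf("%s\n", buf);
            return 0;
    }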
maxv
ca08b3e761 Make copystr() a MI C function, part of libkern and shared on all
architectures.

Notes:

 - On alpha and ia64 the function is kept but gets renamed locally to avoid
   a symbol collision. This is because on these two arches I am not sure
   whether the ASM callers rely on fixed registers, so I prefer to keep
   the ASM body for now.
 - On VAX, only the symbol is removed, because the body is used from other
   functions.
 - On RISC-V, this change fixes a bug: copystr() was just a wrapper around
   strlcpy(), but strlcpy() makes the operation less safe (strlen on the
   source beyond its size).
 - The kASan, kCSan and kMSan wrappers are removed, because now that
   copystr() is in C, the compiler transformations are applied to it,
   without the need for manual wrappers.

Could test on amd64 only, but should be fine.
2020-06-30 16:20:00 +00:00
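A sketch of the contract in C (not the committed code): copy at most len bytes including the terminating NUL, report the copied length through done, and never read the source past the first NUL or past len bytes.

    #include <sys/types.h>

    #include <errno.h>

    int
    copystr_sketch(const void *kfaddr, void *kdaddr, size_t len, size_t *done)
    {
            const char *src = kfaddr;
            char *dst = kdaddr;
            size_t i;

            for (i = 0; i < len; i++) {
                    if ((dst[i] = src[i]) == '\0') {
                            if (done != NULL)
                                    *done = i + 1;  /* includes the NUL */
                            return 0;
                    }
            }
            if (done != NULL)
                    *done = i;
            return ENAMETOOLONG;    /* no NUL within len bytes */
    }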
jruoho
2ba250a115 After a comedy of errors, move t_mbtowc to its final resting place. 2020-06-30 16:09:40 +00:00
kim
f61899471d Remove local domain always, not just when looking up addresses 2020-06-30 15:02:55 +00:00
kim
98df278e6f Compute a value for domain before comparing against it 2020-06-30 14:57:25 +00:00
jruoho
8b2d29b6bf Check that DTrace's execsnoop and opensnoop work (cf. PR kern/53417). 2020-06-30 14:30:49 +00:00
sborrill
a39749b012 Set the brightness only when reading the initial state fails,
to keep the firmware and the driver in sync. Avoids a black screen at boot time.
Thanks to jmcneill@
2020-06-30 13:14:21 +00:00
jruoho
6d91546d37 Skip a few more nodes, and enable this test for Qemu runs. 2020-06-30 11:49:26 +00:00
jruoho
e643f0ea97 Add a couple of tests for sequential ifconfig(8) options, incl. PR kern/41912. 2020-06-30 11:48:20 +00:00
mbalmer
1ad28b1314 www.lua.org uses https. 2020-06-30 07:37:32 +00:00
riastradh
49b86377f2 NetBSD 6.99.69 welcomes you, and hopes you enjoy your new AES API. 2020-06-30 06:25:15 +00:00
sevan
544b2f2613 Lua 5.4.0 is out 2020-06-30 05:19:19 +00:00
riastradh
64af5d547a Missed a spot -- one more 32-bit sign-compare issue. 2020-06-30 04:17:31 +00:00
riastradh
6a40410cdc Fix sign-compare issue on 32-bit systems.
Built fine on amd64, where all unsigned values are representable in
ssize_t, but I didn't try building on i386, where they're not.
2020-06-30 04:15:46 +00:00
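The class of bug in miniature: comparing a 32-bit unsigned value with ssize_t converts losslessly to signed where ssize_t is 64 bits (amd64), but where it is 32 bits (i386) the signed side is converted to unsigned instead and -Wsign-compare fires. A hedged sketch of the usual fix:

    #include <sys/types.h>

    int
    fits(ssize_t avail, unsigned int need)
    {
            /* On i386, `avail >= need' would convert avail to
             * unsigned, making a negative avail compare as huge;
             * cast the side known to be small instead. */
            return avail >= (ssize_t)need;
    }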
riastradh
5766dd4aa9 Rename enc_xform_rijndael128 -> enc_xform_aes.
Update netipsec dependency.
2020-06-30 04:14:55 +00:00
riastradh
a296c51503 Note kernel AES rework. 2020-06-30 00:26:12 +00:00
riastradh
96d271ec30 Make padlock(4) compile on amd64. 2020-06-29 23:58:44 +00:00
riastradh
a220774a13 Provide hand-written AES NEON assembly for arm32.
gcc does a lousy job at compiling 128-bit NEON intrinsics on arm32;
hand-writing it made it about 12x faster, by avoiding a zillion loads
and stores to spill everything and the kitchen sink onto the stack.
(But gcc does fine on aarch64, presumably because it has twice as
many registers and doesn't have to deal with q2=d4/d5 overlapping.)
2020-06-29 23:57:56 +00:00
riastradh
0a776e17e0 New permutation-based AES implementation using ARM NEON.
Also derived from Mike Hamburg's public-domain vpaes code.
2020-06-29 23:56:30 +00:00
riastradh
c41eed1f74 Implement fpu_kern_enter/leave for arm32. 2020-06-29 23:54:05 +00:00
riastradh
9f4370e773 Move aarch64/fpu.h to arm/fpu.h. 2020-06-29 23:53:12 +00:00
riastradh
c057901613 New permutation-based AES implementation using SSSE3.
This covers a lot of CPUs -- particularly lower-end CPUs over the
past decade which lack AES-NI.

Derived from Mike Hamburg's public domain vpaes software; see
<https://crypto.stanford.edu/vpaes/> for details.
2020-06-29 23:51:35 +00:00
riastradh
4809cab8b6 Split SSE2 logic into separate units.
Ensure that there are no paths into files compiled with -msse -msse2
at all except via fpu_kern_enter.

I didn't run into a practical problem with this, but let's not leave
a ticking time bomb for subsequent toolchain changes in case the mere
declaration of local __m128i variables causes trouble.
2020-06-29 23:50:05 +00:00
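The call discipline, sketched with illustrative names (not the committed routines): the only path into the unit compiled with -msse -msse2 is a wrapper in an ordinary unit that brackets the call with fpu_kern_enter/fpu_kern_leave.

    #include <sys/types.h>

    #include <x86/fpu.h>    /* assuming it declares fpu_kern_enter/leave */

    void aes_sse2_enc_impl(const void *, const uint8_t *, uint8_t *);

    void
    aes_sse2_enc(const void *enc, const uint8_t in[16], uint8_t out[16])
    {
            fpu_kern_enter();       /* make FPU state safe to clobber */
            aes_sse2_enc_impl(enc, in, out);        /* in the -msse2 unit */
            fpu_kern_leave();
    }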
riastradh
336b5650c6 New SSE2-based bitsliced AES implementation.
This should work on essentially all x86 CPUs of the last two decades,
and may improve throughput over the portable C aes_ct implementation
from BearSSL by

(a) reducing the number of vector operations in sequence, and
(b) batching four rather than two blocks in parallel.

Derived from BearSSL's aes_ct64 implementation adjusted so that where
aes_ct64 uses 64-bit q[0],...,q[7], aes_sse2 uses (q[0], q[4]), ...,
(q[3], q[7]), each tuple representing a pair of 64-bit quantities
stacked in a single 128-bit register.  This translation was done very
naively, and mostly reduces the cost of ShiftRows and data movement
without doing anything to address the S-box or (Inv)MixColumns, which
spread all 64-bit quantities across separate registers and ignore the
upper halves.

Unfortunately, SSE2 -- which is all that is guaranteed on all amd64
CPUs -- doesn't have PSHUFB, which would help out a lot more.  For
example, vpaes relies on that.  Perhaps there are enough CPUs out
there with PSHUFB but not AES-NI to make it worthwhile to import or
adapt vpaes too.

Note: This includes local definitions of various Intel compiler
intrinsics for gcc and clang in terms of their __builtin_* &c.,
because the necessary header files are not available during the
kernel build.  This is a kludge -- we should fix it properly; the
present approach is expedient but not ideal.
2020-06-29 23:47:54 +00:00
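The flavor of that kludge, in a reduced sketch (not the committed definitions): the vector type and simple operations can be spelled with gcc/clang vector extensions, no <emmintrin.h> required.

    /* 128-bit vector of two 64-bit lanes, as gcc and clang define it. */
    typedef long long __m128i __attribute__((__vector_size__(16)));

    static inline __m128i
    _mm_xor_si128(__m128i a, __m128i b)
    {
            return a ^ b;           /* lanewise XOR */
    }

    static inline __m128i
    _mm_add_epi64(__m128i a, __m128i b)
    {
            return a + b;           /* lanewise 64-bit addition */
    }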
riastradh
04a6492d1e New cgd cipher adiantum.
Adiantum is a wide-block cipher, built out of AES, XChaCha12,
Poly1305, and NH, defined in

   Paul Crowley and Eric Biggers, `Adiantum: length-preserving
   encryption for entry-level processors', IACR Transactions on
   Symmetric Cryptology 2018(4), pp. 39--61.

Adiantum provides better security than a narrow-block cipher with CBC
or XTS, because every bit of each sector affects every other bit,
whereas with CBC each block of plaintext only affects the following
blocks of ciphertext in the disk sector, and with XTS each block of
plaintext only affects its own block of ciphertext and nothing else.

Adiantum generally provides much better performance than
constant-time AES-CBC or AES-XTS software do without hardware
support, and performance comparable to or better than the
variable-time (i.e., leaky) AES-CBC and AES-XTS software we had
before.  (Note: Adiantum also uses AES as a subroutine, but only once
per disk sector.  It takes only a small fraction of the time spent by
Adiantum, so there's relatively little performance impact to using
constant-time AES software over using variable-time AES software for
it.)

Adiantum naturally scales to essentially arbitrary disk sector sizes;
sizes >=1024 bytes take the most advantage of Adiantum's design for
performance, so 4096-byte sectors would be a natural choice if we
taught cgd to change the disk sector size.  (However, it's a
different cipher for each disk sector size, so it _must_ be a cgd
parameter.)

The paper presents a similar construction HPolyC.  The salient
difference is that HPolyC uses Poly1305 directly, whereas Adiantum
uses Poly1305(NH(...)).  NH is annoying because it requires a
1072-byte key, which means the test vectors are ginormous, and
changing keys is costly; HPolyC avoids these shortcomings by using
Poly1305 directly, but HPolyC is measurably slower, costing about
1.5x what Adiantum costs on 4096-byte sectors.

For the purposes of cgd, we will reuse each key for many messages,
and there will be very few keys in total (one per cgd volume) so --
except for the annoying verbosity of test vectors -- the tradeoff
weighs in the favour of Adiantum, especially if we teach cgd to do
>>512-byte sectors.

For now, everything that Adiantum needs beyond what's already in the
kernel is gathered into a single file, including NH, Poly1305, and
XChaCha12.  We can split those out -- and reuse them, and provide MD
tuned implementations, and so on -- as needed; this is just a first
pass to get Adiantum implemented for experimentation.
2020-06-29 23:44:01 +00:00
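For reference, a schematic of the construction as the paper defines it (my paraphrase, with notation compressed): split the sector into P_L (all but the last 16 bytes) and P_R (the last 16 bytes), take the tweak T, and compute

    P_M = P_R \boxplus H_{K_H}(T, P_L)
    C_M = \mathrm{AES}_{K_E}(P_M)
    C_L = P_L \oplus \mathrm{XChaCha12}_{K_S}(C_M)
    C_R = C_M \boxminus H_{K_H}(T, C_L)

where H is NH followed by Poly1305, the stream cipher uses C_M as its nonce, \boxplus/\boxminus are addition/subtraction mod 2^128, and the ciphertext is C_L || C_R.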
riastradh
1f8a993cb5 VIA AES: Batch AES-XTS computation into eight blocks at a time.
Experimental -- performance improvement is not clearly worth the
complexity.
2020-06-29 23:41:35 +00:00
riastradh
937bd5f179 uvm: Make sure swap encryption IV is 128-bit-aligned on stack.
Will help hardware-assisted AES.
2020-06-29 23:40:28 +00:00
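The change's shape, sketched (hypothetical function; __aligned comes from <sys/cdefs.h>):

    #include <sys/cdefs.h>

    #include <stdint.h>
    #include <string.h>

    static void
    swap_encrypt_sketch(void)
    {
            /* 128-bit alignment lets AES-NI and friends load the IV
             * with aligned vector moves. */
            uint8_t iv[16] __aligned(16);

            memset(iv, 0, sizeof(iv));      /* ... per-page IV setup ... */
    }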