1949 Commits

Author SHA1 Message Date
chs
df8845cafb fix the build for when UVMHIST is enabled. 2019-12-02 20:02:02 +00:00
uwe
45b09b5d95 Add missing #include <sys/atomic.h> 2019-12-01 23:14:47 +00:00
ad
f36df6629d Avoid calling pmap_page_protect() while under uvm_pageqlock. 2019-12-01 20:31:40 +00:00
ad
b33d8c3694 Free pages in batch instead of taking uvm_pageqlock for each one. 2019-12-01 17:02:50 +00:00
ad
5ce257a95c __cacheline_aligned on a lock. 2019-12-01 16:44:11 +00:00
ad
2fa8dbd876 Minor correction to previous. 2019-12-01 14:43:26 +00:00
ad
221d5f982e - Adjust uvmexp.swpgonly with atomics, and make uvm_swap_data_lock static.
- A bit more __cacheline_aligned on mutexes.
2019-12-01 14:40:31 +00:00
ad
0aaea1d84e Deactivate pages in batch instead of acquiring uvm_pageqlock repeatedly. 2019-12-01 14:30:01 +00:00
ad
6e176d2434 Give each of the page queue locks their own cache line. 2019-12-01 14:28:01 +00:00
ad
24e75c17af Activate pages in batch instead of acquiring uvm_pageqlock a zillion times. 2019-12-01 14:24:43 +00:00
martin
ef330e1237 Add missing <sys/atomic.h> include 2019-12-01 10:19:59 +00:00
maxv
f0ea087db1 Use atomic_{load,store}_relaxed() on global counters. 2019-12-01 08:19:09 +00:00
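
For reference, a minimal sketch of what the relaxed accessors from
<sys/atomic.h> look like in use on a global counter; the counter and
function names are made up for illustration and are not from the commit.

    #include <sys/atomic.h>

    static unsigned long example_counter;   /* hypothetical global counter */

    static void
    example_counter_update(void)
    {
        /* A single untorn read: no load tearing and no surprising
         * reordering by the compiler. */
        unsigned long snap = atomic_load_relaxed(&example_counter);

        /* A single untorn write.  Note that this load/store pair is not
         * an atomic increment; it only makes each access itself safe. */
        atomic_store_relaxed(&example_counter, snap + 1);
    }
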
ad
5b83f9471b Use lwp_changepri(). 2019-11-21 17:47:53 +00:00
pgoyette
1d577fe379 Move all non-emulation-specific coredump code into the coredump module,
and remove all #ifdef COREDUMP conditional compilation.  Now, the
coredump module is completely separated from the emulation modules, and
they can all be independently loaded and unloaded.

Welcome to 9.99.18 !
2019-11-20 19:37:51 +00:00
maxv
8965522558 Don't include "opt_kasan.h" when there's already <sys/asan.h> included. 2019-11-14 16:48:51 +00:00
maxv
10c5b02320 Add support for Kernel Memory Sanitizer (kMSan). It detects uninitialized
memory used by the kernel at run time, and just like kASan and kCSan, it
is an excellent feature. It has already detected 38 uninitialized variables
in the kernel during my testing, which I have since discreetly fixed.

We use two shadows:
 - "shad", to track uninitialized memory with a bit granularity (1:1).
   Each bit set to 1 in the shad corresponds to one uninitialized bit of
   real kernel memory.
 - "orig", to track the origin of the memory with a 4-byte granularity
   (1:1). Each uint32_t cell in the orig indicates the origin of the
   associated uint32_t of real kernel memory.

The memory consumption of these shadows is considerable, so at least 4GB of
RAM is recommended to run kMSan.

The compiler inserts calls to specific __msan_* functions on each memory
access, to manage both the shad and the orig and detect uninitialized
memory accesses that change the execution flow (like an "if" on an
uninitialized variable).

We mark several types of memory buffers (stack, pools, kmem, malloc,
uvm_km) as uninitialized, and check each buffer passed to copyout,
copyoutstr, bwrite, if_transmit_lock and DMA operations, to detect
uninitialized memory that leaves the system. This allows us to detect
kernel info leaks in a way that is more efficient and also more
user-friendly than KLEAK.

Unlike kASan, kMSan requires comprehensive coverage: we cannot tolerate
having even one non-instrumented function, because that could cause false
positives. kMSan cannot instrument ASM functions, so I converted most of
them to __asm__ inlines, which kMSan is able to instrument. Those that
remain receive special treatment.

Again unlike kASan, kMSan uses a TLS, so we must context-switch this TLS
during interrupts. We use different contexts depending on the interrupt
level.

The orig tracks precisely the origin of a buffer. We use a special encoding
for the orig values, and pack together in each uint32_t cell of the orig:
 - a code designating the type of memory (Stack, Pool, etc), and
 - a compressed pointer, which points either (1) to a string containing
   the name of the variable associated with the cell, or (2) to an area
   in the kernel .text section which we resolve to a symbol name + offset.

This encoding avoids consuming extra memory to associate information with
each cell, and produces precise output: it can report, for example, the
name of an uninitialized variable on the stack, the function in which it
was pushed on the stack, and the function where the uninitialized variable
was accessed.

kMSan is available with LLVM, but not with GCC.

The code is organized similarly to kASan and kCSan, which means that
architectures other than amd64 can also be supported.
2019-11-14 16:23:52 +00:00
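
As a rough, hedged illustration of the "orig" packing described above: one
uint32_t cell holds a small type code plus a compressed pointer. The field
widths, macro and function names, and the 'base' parameter below are
assumptions made for the sketch, not the actual kMSan encoding.

    #include <stdint.h>

    #define ORIG_PTR_BITS   28u     /* illustrative split: 4 type bits + 28 pointer bits */
    #define ORIG_PTR_MASK   ((1u << ORIG_PTR_BITS) - 1)

    static inline uint32_t
    orig_encode(uint32_t type, uintptr_t ptr, uintptr_t base)
    {
        /* Compress the pointer into a 4-byte-granular offset from 'base'
         * (e.g. the start of the kernel .text or of a string table). */
        uint32_t off = (uint32_t)((ptr - base) >> 2);
        return (type << ORIG_PTR_BITS) | (off & ORIG_PTR_MASK);
    }

    static inline uint32_t
    orig_type(uint32_t cell)
    {
        return cell >> ORIG_PTR_BITS;
    }

    static inline uintptr_t
    orig_ptr(uint32_t cell, uintptr_t base)
    {
        return base + ((uintptr_t)(cell & ORIG_PTR_MASK) << 2);
    }
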
chs
9ea67c54e5 in uvm_fault_lower_io(), fetch all the map entry values that we need
before we unlock everything.

Reported-by: syzbot+bb6f0092562222b489a3@syzkaller.appspotmail.com
2019-11-10 20:38:33 +00:00
skrll
f563530044 Fix a UVMHIST_LOG format broken in 1.91 2019-11-07 07:45:14 +00:00
rin
427ac38b08 Fix previous; the semantics of the align argument of uvm_map() are different
when UVM_FLAG_COLORMATCH is specified.

Should fix PR kern/54669.
2019-11-01 13:04:22 +00:00
rin
dc365f90f5 PR kern/54395
- Align hint for virtual address at the beginning of uvm_map() if
  required. Otherwise, it will be rounded up/down in an unexpected
  way by uvm_map_space_avail(), which results in assertion failure.

  Fix kernel panic when executing earm binary (8KB pages) on aarch64
  (4KB pages), which relies on mmap(2) with MAP_ALIGNED flag.

- Use inline functions/macros consistently.

- Add some more KASSERT's.

For more details, see the PR as well as discussion on port-kern:
http://mail-index.netbsd.org/tech-kern/2019/10/27/msg025629.html
2019-11-01 08:26:18 +00:00
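
A hedged sketch of the up-front rounding this fix describes: align the
caller's hint to the requested power-of-two alignment before the free-space
search uses it (a top-down search would round down instead). The names here
are illustrative, not the actual uvm_map() code.

    #include <stdint.h>

    static inline uintptr_t
    align_hint_up(uintptr_t hint, uintptr_t align)
    {
        if (align <= 1)
            return hint;
        /* 'align' is assumed to be a power of two. */
        return (hint + align - 1) & ~(align - 1);
    }
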
skrll
af0cb0a34c Define and use VM_PAGEMD_PVLIST_EMPTY_P 2019-10-20 08:29:38 +00:00
skrll
0126296dc0 Whitespace 2019-10-20 07:58:21 +00:00
skrll
5f7d8e837b Re-order _P() macros to match bit definitions. NFCI 2019-10-20 07:54:29 +00:00
skrll
b6e3ab3307 Whitespace 2019-10-20 07:22:51 +00:00
skrll
8535470345 Remove KASSERT(!VM_PAGEMD_PVLIST_LOCKED_P(mdpg)) - we can only assert that
the lock is owned
2019-10-20 07:18:22 +00:00
mlelstv
cd01aa02e1 Defer to synchronous I/O before the aiodone work queue exists. 2019-10-06 05:48:00 +00:00
kamil
6c69d9fad1 Avoid left shift changing the signedness flag
Reviewed by <mrg>

Reported-by: syzbot+25ac03024cedf27f3368@syzkaller.appspotmail.com
2019-10-04 22:48:45 +00:00
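
The class of problem, in generic C rather than the changed code: shifting a
1 into the sign bit of a signed int is undefined behavior, so the shift
should be done on an unsigned operand.

    #include <stdint.h>

    uint32_t
    set_bit_bad(unsigned n)
    {
        return 1 << n;      /* undefined behavior when n == 31 */
    }

    uint32_t
    set_bit_good(unsigned n)
    {
        return 1U << n;     /* well-defined for n <= 31 */
    }
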
chs
e880b3aa3c in uvm_wait(), panic if the pagedaemon thread does not exist.
this avoids a hang if the system runs out of memory before
the mechanisms for reclaiming memory have been set up.
2019-10-01 17:40:22 +00:00
skrll
01e9893f42 Use "segmap" for uvm_wait message in pmap_segtab_alloc 2019-09-23 18:20:07 +00:00
maxv
0fd1f118ce Fix a programming mistake: 'paddrp' is a pointer passed as an argument;
setting it to NULL in the called function does not set it to NULL in the caller.

Actually, the callers of these functions do not do anything with the
special error handling, so drop the unused checks and the NULL assignments
altogether.

Found by the lgtm bot.
2019-09-20 11:09:43 +00:00
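
The underlying C rule, as a standalone sketch (function names are
hypothetical): a pointer argument is passed by value, so assigning NULL to
the parameter only changes the callee's local copy.

    #include <stddef.h>

    static void
    callee(int *paddrp)
    {
        paddrp = NULL;      /* no effect on the caller's pointer */
    }

    static void
    callee_fixed(int **paddrpp)
    {
        *paddrpp = NULL;    /* clears the pointer the caller handed in */
    }
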
skrll
d2a9676ecc s/pte/ptep/ in pmap_pte_process for consistency with other code. NFCI. 2019-09-18 18:29:58 +00:00
skrll
471755a1a7 Whitespace 2019-09-18 18:18:44 +00:00
mrg
8fda897a24 KASSERT -> KASSERTMSG so we actually display the overflowed values. 2019-08-10 01:06:45 +00:00
maxv
c394531319 Change 'npgs' from int to size_t. Otherwise the 64bit->32bit conversion
could lead to npgs=0, which is not expected. It later triggers a panic
in uvm_vsunlock().

Found by TriforceAFL (Akul Pillai).
2019-08-06 08:10:27 +00:00
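
A minimal sketch of the truncation being guarded against (the function name
and the 4 KiB page size are assumptions):

    #include <stddef.h>
    #include <stdint.h>

    void
    truncation_example(uint64_t bytes)
    {
        uint64_t pages = bytes >> 12;           /* assuming 4 KiB pages */

        int    npgs_int  = (int)pages;          /* 0x100000000 pages -> 0 */
        size_t npgs_size = (size_t)pages;       /* value preserved on LP64 */

        (void)npgs_int;
        (void)npgs_size;
    }
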
chs
95ce9a69b4 fix two bugs reported in
https://syzkaller.appspot.com/bug?id=8840dce484094a926e1ec388ffb83acb2fa291c9

 - in uvm_fault_check(), if the map entry is wired, handle the fault the same way
   that we would handle UVM_FAULT_WIRE.  faulting on wired mappings is valid
   if the mapped object was truncated and then later grown again.

 - in uvm_fault_unwire_locked(), we must hold the locks for the vm_map_entry
   while calling pmap_extract() in order to avoid races with the mapped object
   being truncated while we are unwiring it.

Reported-by: syzbot+2e0ae2fc35ab7301c7b8@syzkaller.appspotmail.com
2019-08-05 17:36:42 +00:00
riastradh
f1a5ceb12e Remove last trace of never-used map_attrib. 2019-08-01 02:28:55 +00:00
msaitoh
8ccde42c84 Avoid undefined behavior in uao_pagein_page(). Found by kUBSan. OK'd by
riastradh. I think this is a real bug on amd64 at least.
2019-07-28 05:28:53 +00:00
skrll
930577a55f Provide and use PV_ISKENTER_P. NFCI. 2019-07-12 10:39:12 +00:00
mlelstv
b5e559801c Add missing lock around pmap_protect.
ok, chs@

Reported-by: syzbot+6bfd0be70896fc9e9a3d@syzkaller.appspotmail.com
2019-07-12 06:27:13 +00:00
maxv
1fb326db46 Fix info leak: 'map_attrib' is not used in UVM, and contains uninitialized
heap garbage. Return zero. Maybe we should remove the field completely.
2019-07-11 17:07:10 +00:00
christos
cc833a4d7d use __nothing 2019-06-19 12:55:01 +00:00
skrll
f144d2a709 One more short line to unwrap 2019-06-19 10:04:40 +00:00
skrll
dd7deb8f88 Unwrap short lines. NFCI. 2019-06-19 10:00:19 +00:00
skrll
ba6c36b14b Make a comment generic and not MIPS specific 2019-06-19 09:56:17 +00:00
chs
bedbd941f8 in uvm_map_protect(), do a pmap_update() before possibly switching from
removing pmap entries to creating them.  this fixes the problem reported in
https://syzkaller.appspot.com/bug?id=cc89e47f05e4eea2fd69bcccb5e837f8d1ab4d60
2019-06-08 23:48:33 +00:00
maxv
5bd7eba201 Misc changes in RISC-V. Start changing the memory layout, too. 2019-06-01 12:42:27 +00:00
msaitoh
39c3181ae1 s/recieve/receive/ 2019-05-28 08:59:33 +00:00
skrll
08ae7ed332 Use __BIT() 2019-05-20 17:00:57 +00:00
skrll
bcfef4b20f Trailing whitespace 2019-05-20 16:58:49 +00:00
skrll
f77ca71313 Avoid KASSERT(!cpu_intr_p()) when breaking into ddb and issuing
show uvmexp
2019-05-09 08:16:14 +00:00