Commit Graph

2101 Commits

Author SHA1 Message Date
ad
103c607c83 UVM_PAGE_TRKOWN: print the LID too 2020-05-19 20:46:39 +00:00
ad
ff872804dc Start trying to reduce cache misses on vm_page during fault processing.
- Make PGO_LOCKED getpages imply PGO_NOBUSY and remove the latter.  Mark
  pages busy only when there's actually I/O to do.

- When doing COW on a uvm_object, don't mess with neighbouring pages.  In
  all likelihood they're already entered.

- Don't mess with neighbouring VAs that have existing mappings, as replacing
  those mappings with the same can be quite costly.

- Don't enqueue pages for neighbour faults if they're already enqueued, and
  don't activate centre pages unless uvmpdpol says it's useful.

Also:

- Make PGO_LOCKED getpages on UAOs work more like vnodes: do gang lookup in
  the radix tree (sketched below), and don't allocate new pages.

- Fix many assertion failures around faults/loans with tmpfs.
2020-05-17 19:38:16 +00:00
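
A minimal sketch of the gang lookup mentioned above, inside something
like uao_get(); the radixtree(9) signature and the uo_pages member are
assumptions here, not confirmed by the commit:

    /* Assumed signature of the batched lookup primitive. */
    unsigned int radix_tree_gang_lookup_node(struct radix_tree *,
        uint64_t, void **, unsigned int, bool);

    struct vm_page *pgs[16];
    unsigned int i, found;

    /* One batched query for a run of pages instead of per-page lookups. */
    found = radix_tree_gang_lookup_node(&uobj->uo_pages,
        startoff >> PAGE_SHIFT, (void **)pgs, 16, false);
    for (i = 0; i < found; i++) {
        /* PGO_LOCKED: hand back only resident pages; allocate nothing. */
    }
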
ad
c28f10c162 Don't set PG_AOBJ on a page unless UVM_OBJ_IS_AOBJ(); otherwise it can
catch pages from e.g. uvm_loanzero_object.
2020-05-17 17:12:28 +00:00
ad
8545b637a5 - If the hardware provided NUMA info, then use it to decide how to set up
  the allocator's buckets, instead of doing round-robin distribution.  There
  are open questions here but this is better than doing nothing.

- Kernel reserve pages are for the kernel, not realtime threads.
2020-05-17 15:11:57 +00:00
ad
f080bee543 Mark amappl with PR_LARGECACHE. 2020-05-17 15:07:22 +00:00
ad
4cd60295b3 Reported-by: syzbot+3e3c7cfa8093f8de047e@syzkaller.appspotmail.com
Comment out an assertion that's now bogus and add a comment.
2020-05-15 22:35:05 +00:00
ad
9c9ebb954c PR kern/55268: tmpfs is slow
uao_get(): in the PGO_LOCKED case, we're okay to allocate a new page as long
as the caller holds a write lock.  PGO_NOBUSY doesn't put a stop to that.
2020-05-15 22:27:04 +00:00
ad
939a94f3ce uvm_pagemarkdirty(): no need to set radix tree tag unless page is currently
marked clean.
2020-05-15 22:25:18 +00:00
pgoyette
9dd096f8e8 Add missing dependency.
Fixes builds with VM_SWAP but no other users of the rijndael crypto code.
2020-05-10 22:28:09 +00:00
riastradh
cdc9c12fff Rename things so the symbol better matches the sysctl name.
No functional change intended, except that the symbol that was
previously `uvm_swap_encryption' is now `uvm_swap_encrypt', backing
the sysctl knob `vm.swap_encrypt'.
2020-05-10 02:38:10 +00:00
riastradh
2e4a8ba1a5 Avoid overflow if a very large number of pages are swapped at once.
Unlikely, but let's make sure we don't hit this ever.
2020-05-09 22:00:48 +00:00
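
Illustration of the general pattern behind such a fix (not necessarily
the exact change here): a page count must be widened before it is
shifted into a byte count, or the shift can overflow a 32-bit type.

    #include <stdint.h>

    #define PAGE_SHIFT 12                   /* assumed 4 KiB pages */

    static inline uint64_t
    pages_to_bytes(uint32_t npages)
    {
        /* Widen first; npages << PAGE_SHIFT may not fit in 32 bits. */
        return (uint64_t)npages << PAGE_SHIFT;
    }
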
riastradh
373ada04c3 Implement swap encryption.
Enabled by sysctl -w vm.swap_encrypt=1.  Key is generated lazily when
we first need to swap a page.  Key is chosen independently for each
swap device.  The ith swap page is encrypted with AES256-CBC using
AES256_k(le32enc(i) || 0^96) as the initialization vector.  Can be
changed at any time; no need for compatibility with on-disk formats.
Costs one bit of memory per page in each swapdev, plus a few hundred
bytes per swapdev to store the expanded AES key.

Shoulda done this decades ago!  Plan to enable this by default;
performance impact is unlikely to matter because it only happens when
you're already swapping anyway.  Much easier to set up than cgd, so
we can rip out all the documentation about carefully setting up
random-keyed cgd at the right time.
2020-05-09 21:50:39 +00:00
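
A minimal sketch of the per-page IV derivation described above,
assuming a hypothetical one-block primitive aes256_enc(); this is not
the actual NetBSD implementation:

    #include <stdint.h>
    #include <string.h>

    struct aes256_key;                      /* opaque expanded key */
    /* Hypothetical primitive: encrypt one 16-byte block under key k. */
    void aes256_enc(const struct aes256_key *k, const uint8_t in[16],
        uint8_t out[16]);

    /* IV for the ith swap page: AES256_k(le32enc(i) || 0^96). */
    static void
    swap_page_iv(const struct aes256_key *k, uint32_t i, uint8_t iv[16])
    {
        uint8_t blk[16];

        blk[0] = i & 0xff;                  /* le32enc(i) */
        blk[1] = (i >> 8) & 0xff;
        blk[2] = (i >> 16) & 0xff;
        blk[3] = (i >> 24) & 0xff;
        memset(&blk[4], 0, 12);             /* || 0^96 */
        aes256_enc(k, blk, iv);
    }

Deriving the IV from the page index keeps IVs unique per page without
storing them, which is also why no on-disk compatibility is needed.
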
thorpej
790ddc0b33 Make the uvm_voaddr structure more compact, only occupying 2 pointers
worth of space, by encoding the type in the lower bits of the object
pointer.
2020-05-09 15:13:19 +00:00
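
A generic sketch of the low-bit tagging technique described above (not
the actual uvm_voaddr definitions): object pointers are at least
word-aligned, so their low bits are free to carry a small type code.

    #include <stdint.h>

    #define VOADDR_TYPE_MASK ((uintptr_t)0x3)   /* low 2 bits */

    struct tagged_voaddr {
        uintptr_t object;   /* object pointer | type code */
        uintptr_t offset;   /* second pointer-sized word */
    };

    static inline void *
    voaddr_object(const struct tagged_voaddr *v)
    {
        return (void *)(v->object & ~VOADDR_TYPE_MASK);
    }

    static inline unsigned
    voaddr_type(const struct tagged_voaddr *v)
    {
        return (unsigned)(v->object & VOADDR_TYPE_MASK);
    }

    static inline void
    voaddr_set(struct tagged_voaddr *v, void *obj, unsigned type)
    {
        /* Alignment guarantees the low bits of obj are zero. */
        v->object = (uintptr_t)obj | ((uintptr_t)type & VOADDR_TYPE_MASK);
    }
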
thorpej
1a07681a27 - In uvm_voaddr_acquire(), take an extra hold on the anon lock obj.
- In uvm_voaddr_release(), if the anon ref count drops to 0, call
  uvm_anfree() rather than uvm_anon_release().  Unconditionally drop
  the anon lock, and release the extra hold on the anon lock obj.

Fixes a panic that occurs if the backing store for a futex backed by
an anon memory location is unmapped while a thread is waiting in the
futex.

Add a test case that reproduced the panic to verify that it's fixed.
2020-04-30 04:18:07 +00:00
rin
545b324688 Add missing \ to fix build for PMAP_CACHE_VIVT, i.e., ARMv4 and prior. 2020-04-27 02:47:26 +00:00
thorpej
11d794387e Disable ubc_direct by default again. There are still stability issues
(e.g. panic during 2020.04.25.00.07.27 amd64 releng test run).
2020-04-26 16:16:13 +00:00
ad
18391da5bf ubc_alloc_direct(): for a write make sure pages are always marked dirty
because there's no managed mapping.
2020-04-24 19:47:03 +00:00
ad
f6da483c1a Enable ubc_direct by default, but only on systems with no more than 2 CPUs
for now.
2020-04-23 21:53:01 +00:00
ad
f5ad84fdb3 PR kern/54759 (vm.ubc_direct deadlock when read()/write() into mapping of itself)
- Add new flag UBC_ISMAPPED which tells ubc_uiomove() the object is mmap()ed
  somewhere.  Use it to decide whether to do direct-mapped copy, rather than
  poking around directly in the vnode in ubc_uiomove(), which is ugly and
  doesn't work for tmpfs.  It would be nicer to contain all this in UVM but
  the filesystem provides the needed locking here (VV_MAPPED) and to
  reinvent that would suck more.

- Rename UBC_UNMAP_FLAG() to UBC_VNODE_FLAGS().  Pass in UBC_ISMAPPED where
  appropriate.
2020-04-23 21:47:07 +00:00
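
A sketch of the resulting call in a filesystem write path; UBC_ISMAPPED,
UBC_VNODE_FLAGS() and ubc_uiomove() are named in the commit, everything
else here is an assumption:

    /*
     * UBC_VNODE_FLAGS() is assumed to fold the vnode's "mapped" state
     * (VV_MAPPED) into UBC_ISMAPPED, so ubc_uiomove() can refuse the
     * direct-mapped copy path for vnodes that are mmap()ed somewhere.
     */
    error = ubc_uiomove(&vp->v_uobj, uio, bytes,
        IO_ADV_DECODE(ioflag), UBC_WRITE | UBC_VNODE_FLAGS(vp));
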
ad
e4cdabc9f4 ubc_direct_release(): unbusy the pages directly since pg->interlock is
being taken.
2020-04-23 21:12:06 +00:00
ad
940e505e51 uvm_aio_aiodone_pages(): only call uvm_pageout_done() if work was done for
the page daemon.
2020-04-19 21:53:38 +00:00
skrll
e5ed078588 Fix UVMHIST_LOG compile on 32-bit platforms 2020-04-19 08:59:53 +00:00
riastradh
b9b3063225 Fix trailing whitespace. 2020-04-18 17:22:26 +00:00
thorpej
9fc3fff218 Add an API to get a reference on the identity of an individual byte of
virtual memory, a "virtual object address".  This is not a reference to
a physical byte of memory, per se, but a reference to a byte residing
in a page, owned by a unique UVM object (either a uobj or an anon).  Two
separate address + address space tuples that reference the same byte in
an object (such as a location in a shared memory segment) will resolve
to equivalent virtual object addresses.  Even if the residency status
of the page changes, the virtual object address remains unchanged.

struct uvm_voaddr -- a structure that encapsulates this address reference.

uvm_voaddr_acquire() -- a function to acquire this address reference,
given a vm_map and a vaddr_t.

uvm_voaddr_release() -- a function to release this address reference.

uvm_voaddr_compare() -- a function to compare two such address references.

uvm_voaddr_acquire() resolves the COW status of the object address before
acquiring.

In collaboration with riastradh@ and chs@.
2020-04-18 03:27:13 +00:00
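
A usage sketch of the API described above; the function names come from
the commit message, but the exact signatures are assumptions:

    /* Assumed prototypes. */
    bool uvm_voaddr_acquire(struct vm_map *, vaddr_t, struct uvm_voaddr *);
    void uvm_voaddr_release(struct uvm_voaddr *);
    int  uvm_voaddr_compare(const struct uvm_voaddr *,
        const struct uvm_voaddr *);

    /* Do two user addresses name the same byte of the same object? */
    static bool
    same_backing_byte(struct vm_map *map, vaddr_t a, vaddr_t b)
    {
        struct uvm_voaddr va, vb;
        bool same = false;

        if (!uvm_voaddr_acquire(map, a, &va))
            return false;
        if (uvm_voaddr_acquire(map, b, &vb)) {
            /* Equal even if the pages are currently swapped out. */
            same = (uvm_voaddr_compare(&va, &vb) == 0);
            uvm_voaddr_release(&vb);
        }
        uvm_voaddr_release(&va);
        return same;
    }

This is the shape a futex implementation needs: two mappings of the
same shared memory location must resolve to equal references.
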
skrll
76cb9a0523 Fix UVMHIST build 2020-04-14 05:43:57 +00:00
ad
f708c498bf uvm_fault_check(): if MADV_SEQUENTIAL, change lower lock type to RW_WRITER
in case many threads are concurrently doing "sequential" access, to avoid
excessive mixing of read/write lock holds.
2020-04-13 22:22:19 +00:00
maxv
983fd9ccfe hardclock_ticks -> getticks() 2020-04-13 15:54:45 +00:00
ad
8338008968 Comments 2020-04-13 15:16:14 +00:00
skrll
d2599c324c Trailing whitespace 2020-04-13 08:05:22 +00:00
skrll
fe8087d6bb Oops, forgot the empty macro version of UVMHIST_CALLARGS 2020-04-13 07:11:08 +00:00
skrll
e4535b97c1 Use UVMHIST_CALLARGS 2020-04-12 15:36:18 +00:00
tsutsui
edf49f7cd3 Update a link to "CLOCK-Pro" paper. 2020-04-10 18:17:56 +00:00
ad
960b41883d uvmspace_exec(): set VM_MAP_DYING for the duration, so pmap_update() is not
called until the pmap has been totally cleared out after pmap_remove_all();
otherwise it can confuse some pmap implementations.
2020-04-10 17:26:46 +00:00
skrll
4aca4be1c8 Make a comment less MIPS-specific 2020-04-09 08:55:45 +00:00
skrll
1d725ebb5c Provide UVMHIST_CALLARGS 2020-04-08 07:56:34 +00:00
ad
e7b964a876 For single page I/O, use direct mapping if available. 2020-04-07 19:15:23 +00:00
ad
5294ba607b ubc_direct_release(): remove spurious call to uvm_pagemarkdirty(). 2020-04-07 19:12:25 +00:00
ad
f3fdb8c6cb PR kern/54759: vm.ubc_direct deadlock when read()/write() into mapping of itself
Prevent ubc_uiomove_direct() on mapped vnodes.
2020-04-07 19:11:13 +00:00
ad
c70743cce9 Mark uvm_map_entry_cache with PR_LARGECACHE. 2020-04-04 21:17:02 +00:00
maxv
fd2e91e6b8 Hide 'hardclock_ticks' behind a new getticks() function, and use relaxed
atomics internally. Only one caller is converted for now.

Discussed with riastradh@ and ad@.
2020-04-02 16:29:30 +00:00
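
A sketch of the accessor pattern described above, written with C11
atomics rather than the kernel's own atomic_loadstore(9) macros; the
details beyond the getticks() name are assumptions:

    #include <stdatomic.h>

    static _Atomic int hardclock_ticks;  /* bumped by the clock interrupt */

    int
    getticks(void)
    {
        /* Relaxed load: callers want a recent value, not ordering. */
        return atomic_load_explicit(&hardclock_ticks,
            memory_order_relaxed);
    }
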
skrll
e077f38610 Fix UVMHIST build 2020-03-23 10:35:56 +00:00
skrll
1cf687a322 Trailing whitespace 2020-03-23 10:35:08 +00:00
ad
1d7848ad43 Process concurrent page faults on individual uvm_objects / vm_amaps in
parallel, where the relevant pages are already in-core.  Proposed on
tech-kern.

Temporarily disabled on MP architectures with __HAVE_UNLOCKED_PMAP until
adjustments are made to their pmaps.
2020-03-22 18:32:41 +00:00
ad
0622217a01 Go back to freeing struct vm_anon one by one. There may have been an
advantage circa 2008, but there isn't now.
2020-03-20 19:08:54 +00:00
ad
fbf93ce6a4 uvm_fault_upper_lookup(): don't call pmap_extract() and pmap_update() more
often than needed.
2020-03-20 18:50:09 +00:00
ad
d305fcb599 sysctl_vm_uvmexp2(): some counters were needlessly truncated. 2020-03-19 20:23:19 +00:00
ad
1912643ff9 Tweak the March 14th change to make page waits interlocked by pg->interlock.
Remove unneeded changes and only deal with the PQ_WANTED flag, to exclude
possible bugs.
2020-03-17 18:31:38 +00:00
ad
a1a4ef596e Fix a comment. 2020-03-17 00:30:17 +00:00
ad
cd4b207ac9 Use C99-ism to reduce ifdefs. Pointed out by christos@. 2020-03-16 20:07:44 +00:00
ad
db42bf9228 pmap_pv_track(): use PMAP_PAGE_INIT() otherwise the x86 pmap pukes. 2020-03-16 19:56:39 +00:00