- Make PGO_LOCKED getpages imply PGO_NOBUSY and remove the latter. Mark
pages busy only when there's actually I/O to do (see the caller-side
sketch after this list).
- When doing COW on a uvm_object, don't mess with neighbouring pages. In
all likelihood they're already entered.
- Don't mess with neighbouring VAs that have existing mappings, as
replacing those mappings with the same ones can be quite costly.
- Don't enqueue pages for neighbour faults if they're already enqueued,
and don't activate centre pages unless uvmpdpol says it's useful.
Also:
- Make PGO_LOCKED getpages on UAOs work more like vnodes: do a gang
lookup in the radix tree, and don't allocate new pages.
- Fix many assertion failures around faults/loans with tmpfs.
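A hedged caller-side sketch of a PGO_LOCKED probe, roughly as the fault
path might issue it ("uobj", "offset" and the rwlock-style locking are
assumptions for illustration, not code from this change):

    struct vm_page *pg = NULL;
    int npages = 1;
    int error;

    /*
     * PGO_LOCKED: the caller holds the object lock, and the pager
     * must neither sleep nor start I/O; it only reports resident
     * pages.  With this change those pages come back unbusied,
     * since busying is now reserved for actual I/O.
     */
    rw_enter(uobj->vmobjlock, RW_WRITER);
    error = (*uobj->pgops->pgo_get)(uobj, offset, &pg, &npages,
        0, VM_PROT_READ, UVM_ADV_NORMAL, PGO_LOCKED);
    /* In the PGO_LOCKED case the object lock is still held here. */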
the allocator's buckets, instead of doing round-robin distribution. There
are open questions here, but this is better than doing nothing.
- Kernel reserve pages are for the kernel, not realtime threads.
No functional change intended, except that the symbol that was
previously `uvm_swap_encryption' is now `uvm_swap_encrypt', backing
the sysctl knob `vm.swap_encrypt'.
Enabled by sysctl -w vm.swap_encrypt=1. The key is generated lazily
when we first need to swap a page, and is chosen independently for
each swap device. The ith swap page is encrypted with AES256-CBC
using AES256_k(le32enc(i) || 0^96) as the initialization vector. The
setting can be changed at any time; there is no need for compatibility
with on-disk formats. Costs one bit of memory per page in each
swapdev, plus a few hundred bytes per swapdev to store the expanded
AES key.
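A hedged userland sketch of the per-page IV construction (OpenSSL is
used here purely for illustration; the kernel uses its own AES code,
and SWAP_PAGE_SIZE and swap_encrypt_page are made-up names):

    #include <openssl/aes.h>
    #include <stdint.h>
    #include <string.h>

    #define SWAP_PAGE_SIZE 4096         /* illustrative page size */

    /*
     * Encrypt the ith swap page: IV = AES256_k(le32enc(i) || 0^96),
     * then AES256-CBC over the page under the same key k.
     */
    static void
    swap_encrypt_page(const AES_KEY *k, uint32_t i,
        const uint8_t in[SWAP_PAGE_SIZE], uint8_t out[SWAP_PAGE_SIZE])
    {
        uint8_t iv[16];

        memset(iv, 0, sizeof(iv));      /* the 0^96 tail */
        iv[0] = i & 0xff;               /* le32enc(i) head */
        iv[1] = (i >> 8) & 0xff;
        iv[2] = (i >> 16) & 0xff;
        iv[3] = (i >> 24) & 0xff;
        AES_encrypt(iv, iv, k);         /* IV = AES256_k(block) */

        AES_cbc_encrypt(in, out, SWAP_PAGE_SIZE, k, iv, AES_ENCRYPT);
    }

The key k would be set up once per swap device, e.g. with
AES_set_encrypt_key(keybytes, 256, &k) over 32 bytes of random key
material.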
Shoulda done this decades ago! Plan to enable this by default;
performance impact is unlikely to matter because it only happens when
you're already swapping anyway. Much easier to set up than cgd, so
we can rip out all the documentation about carefully setting up
random-keyed cgd at the right time.
- In uvm_voaddr_release(), if the anon ref count drops to 0, call
uvm_anfree() rather than uvm_anon_release(). Unconditionally drop
the anon lock, and release the extra hold on the anon lock obj
(sketched below).
Fixes a panic that occurs if the backing store for a futex backed by
an anon memory location is unmapped while a thread is waiting in the
futex.
Add a test case that reproduced the panic to verify that it's fixed.
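A hedged sketch of the corrected anon path (the helper shape is
hypothetical; the calls follow the description above):

    static void
    voaddr_release_anon(struct vm_anon *anon)
    {
        krwlock_t * const lock = anon->an_lock;

        rw_enter(lock, RW_WRITER);
        if (--anon->an_ref == 0) {
            /* uvm_anfree(), not uvm_anon_release(). */
            uvm_anfree(anon);
        }
        /* Unconditionally drop the anon lock... */
        rw_exit(lock);
        /* ...and release the extra hold on the lock obj. */
        rw_obj_free(lock);
    }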
- Add a new flag, UBC_ISMAPPED, which tells ubc_uiomove() that the object
is mmap()ed somewhere. Use it to decide whether to do a direct-mapped
copy, rather than poking around directly in the vnode in ubc_uiomove(),
which is ugly and doesn't work for tmpfs. It would be nicer to contain
all this in UVM, but the filesystem provides the needed locking here
(VV_MAPPED) and reinventing that would suck more.
- Rename UBC_UNMAP_FLAG() to UBC_VNODE_FLAGS(). Pass in UBC_ISMAPPED where
appropriate (see the sketch below).
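A hedged sketch of a filesystem write path using the renamed macro
(vp, uio, bytes and advice are assumed locals):

    /*
     * UBC_VNODE_FLAGS(vp) supplies UBC_ISMAPPED when the vnode is
     * mmap()ed (VV_MAPPED), so ubc_uiomove() can decide about the
     * direct-mapped copy without poking at the vnode itself.
     */
    error = ubc_uiomove(&vp->v_uobj, uio, bytes, advice,
        UBC_WRITE | UBC_VNODE_FLAGS(vp));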
virtual memory, a "virtual object address". This is not a reference to
a physical byte of memory, per se, but a reference to a byte residing
in a page, owned by a unique UVM object (either a uobj or an anon). Two
separate address + address space tuples that reference the same byte in
an object (such as a location in a shared memory segment) will resolve
to equivalent virtual object addresses. Even if the residency status
of the page changes, the virtual object address remains unchanged.
struct uvm_voaddr -- a structure that encapsulates this address reference.
uvm_voaddr_acquire() -- a function to acquire this address reference,
given a vm_map and a vaddr_t.
uvm_voaddr_release() -- a function to release this address reference.
uvm_voaddr_compare() -- a function to compare two such address references.
uvm_voaddr_acquire() resolves the COW status of the object address before
acquiring.
In collaboration with riastradh@ and chs@.
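A hedged usage sketch of the API (the same_byte helper is hypothetical;
uvm_voaddr_acquire() is assumed to return false on failure, and
uvm_voaddr_compare() to return 0 for equal references):

    /* Do (map1, addr1) and (map2, addr2) name the same byte? */
    static int
    same_byte(struct vm_map *map1, vaddr_t addr1,
        struct vm_map *map2, vaddr_t addr2, bool *samep)
    {
        struct uvm_voaddr va1, va2;

        /* COW status is resolved before the reference is taken. */
        if (!uvm_voaddr_acquire(map1, addr1, &va1))
            return EFAULT;
        if (!uvm_voaddr_acquire(map2, addr2, &va2)) {
            uvm_voaddr_release(&va1);
            return EFAULT;
        }
        *samep = (uvm_voaddr_compare(&va1, &va2) == 0);
        uvm_voaddr_release(&va2);
        uvm_voaddr_release(&va1);
        return 0;
    }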
parallel, where the relevant pages are already in-core. Proposed on
tech-kern.
Temporarily disabled on MP architectures with __HAVE_UNLOCKED_PMAP until
adjustments are made to their pmaps.