File                    | Date (UTC)          | Last commit subject
------------------------+---------------------+--------------------
pmap/                   | 2020-04-14 05:43:57 | Fix UVMHIST bulid
files.uvm               | 2020-06-29 23:33:46 | uvm(9): Switch from legacy rijndael API to new aes API.
Makefile                |                     |
uvm_amap.c              | 2020-05-17 15:07:22 | Mark amappl with PR_LARGECACHE.
uvm_amap.h              | 2020-03-20 19:08:54 | Go back to freeing struct vm_anon one by one. There may have been an
uvm_anon.c              | 2020-03-22 18:32:41 | Process concurrent page faults on individual uvm_objects / vm_amaps in
uvm_anon.h              | 2020-03-20 19:08:54 | Go back to freeing struct vm_anon one by one. There may have been an
uvm_aobj.c              | 2020-05-25 22:04:51 | uao_get(): in the PGO_SYNCIO case use uvm_page_array and simplify control
uvm_aobj.h              |                     |
uvm_bio.c               | 2020-06-25 14:04:30 | make ubc_winshift / ubc_winsize constant, and based on whatever is bigger
uvm_coredump.c          | 2020-02-23 15:46:38 | UVM locking changes, proposed on tech-kern:
uvm_ddb.h               | 2019-12-27 12:51:56 | Redo the page allocator to perform better, especially on multi-core and
uvm_device.c            | 2020-02-24 12:38:57 | 0x%#x --> %#x for non-external codes.
uvm_device.h            |                     |
uvm_extern.h            | 2020-06-14 22:25:15 | g/c vm_page_zero_enable
uvm_fault_i.h           | 2020-02-23 15:46:38 | UVM locking changes, proposed on tech-kern:
uvm_fault.c             | 2020-05-17 19:38:16 | Start trying to reduce cache misses on vm_page during fault processing.
uvm_fault.h             |                     |
uvm_glue.c              | 2020-06-14 21:41:42 | Remove PG_ZERO. It worked brilliantly on x86 machines from the mid-90s but
uvm_glue.h              |                     |
uvm_init.c              | 2020-03-06 20:46:12 | Fix a comment.
uvm_io.c                |                     |
uvm_km.c                | 2020-03-14 20:23:51 | Make page waits (WANTED vs BUSY) interlocked by pg->interlock. Gets RW
uvm_km.h                |                     |
uvm_loan.c              | 2020-06-11 22:21:05 | Counter tweaks:
uvm_loan.h              |                     |
uvm_map.c               | 2020-05-30 08:50:31 | Avoid passing file paths in panic strings, this results in extra long
uvm_map.h               | 2020-05-26 00:50:53 | Catch up with the usage of struct vmspace::vm_refcnt
uvm_meter.c             | 2020-06-14 21:41:42 | Remove PG_ZERO. It worked brilliantly on x86 machines from the mid-90s but
uvm_mmap.c              | 2020-02-23 15:46:38 | UVM locking changes, proposed on tech-kern:
uvm_mremap.c            | 2020-02-23 15:46:38 | UVM locking changes, proposed on tech-kern:
uvm_object.c            | 2020-05-25 21:15:10 | - Alter the convention for uvm_page_array slightly, so the basic search
uvm_object.h            | 2020-03-14 20:45:23 | Make uvm_pagemarkdirty() responsible for putting vnodes onto the syncer
uvm_page_array.c        | 2020-05-26 21:52:12 | uvm_page_array_fill(): return ENOENT in all cases when nothing's left.
uvm_page_array.h        | 2020-05-25 21:15:10 | - Alter the convention for uvm_page_array slightly, so the basic search
uvm_page_status.c       | 2020-05-15 22:25:18 | uvm_pagemarkdirty(): no need to set radix tree tag unless page is currently
uvm_page.c              | 2020-06-17 06:24:15 | <sys/extent.h> not needed here.
uvm_page.h              | 2020-06-14 21:41:42 | Remove PG_ZERO. It worked brilliantly on x86 machines from the mid-90s but
uvm_pager.c             | 2020-06-25 09:58:44 | use maximum-size fixed size array instead of variable-length array
uvm_pager.h             | 2020-05-19 22:22:15 | PR kern/32166: pgo_get protocol is ambiguous
uvm_param.h             | 2020-06-25 18:20:18 | uvm_emap_size was removed a while ago
uvm_pdaemon.c           | 2020-06-11 22:21:05 | Counter tweaks:
uvm_pdaemon.h           | 2020-02-23 15:46:38 | UVM locking changes, proposed on tech-kern:
uvm_pdpolicy_clock.c    | 2020-06-11 22:21:05 | Counter tweaks:
uvm_pdpolicy_clockpro.c | 2020-05-17 19:38:16 | Start trying to reduce cache misses on vm_page during fault processing.
uvm_pdpolicy_impl.h     |                     |
uvm_pdpolicy.h          | 2020-05-17 19:38:16 | Start trying to reduce cache misses on vm_page during fault processing.
uvm_pgflcache.c         | 2020-06-14 21:41:42 | Remove PG_ZERO. It worked brilliantly on x86 machines from the mid-90s but
uvm_pgflcache.h         | 2019-12-27 12:51:56 | Redo the page allocator to perform better, especially on multi-core and
uvm_pglist.c            | 2020-06-14 21:41:42 | Remove PG_ZERO. It worked brilliantly on x86 machines from the mid-90s but
uvm_pglist.h            | 2020-04-13 15:16:14 | Comments
uvm_physseg.c           | 2020-03-15 21:06:30 | uvm_physseg: cluster fields used during RB tree lookup for PHYS_TO_VM_PAGE().
uvm_physseg.h           |                     |
uvm_pmap.h              | 2020-03-14 14:05:42 | pmap_remove_all(): Return a boolean value to indicate the behaviour. If
uvm_prot.h              |                     |
uvm_readahead.c         | 2020-05-19 21:45:35 | Drop & re-acquire vmobjlock less often.
uvm_readahead.h         |                     |
uvm_stat.c              | 2020-06-14 21:41:42 | Remove PG_ZERO. It worked brilliantly on x86 machines from the mid-90s but
uvm_stat.h              | 2020-04-13 07:11:08 | Oops, forgot the empty macro version of UVMHIST_CALLARGS
uvm_swap.c              | 2020-06-29 23:40:28 | uvm: Make sure swap encryption IV is 128-bit-aligned on stack.
uvm_swap.h              |                     |
uvm_swapstub.c          |                     |
uvm_unix.c              |                     |
uvm_user.c              |                     |
uvm_vnode.c             | 2020-05-25 21:15:10 | - Alter the convention for uvm_page_array slightly, so the basic search
uvm.h                   | 2020-05-17 15:11:57 | - If the hardware provided NUMA info, then use it to decide how to set up