Commit Graph

94 Commits

Author SHA1 Message Date
ad 59d979c5f1 Pass an ipl argument to pool_init/POOL_INIT to be used when initializing
the pool's lock.
2007-03-12 18:18:22 +00:00
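A minimal sketch of the resulting calling convention, assuming the IPL is passed as the trailing argument; the pool, item type, and wait-channel names here are illustrative, not from the tree:

    #include <sys/pool.h>

    struct foo { int f_dummy; };            /* illustrative item type */
    static struct pool foo_pool;

    void
    foo_init(void)
    {
            /*
             * The trailing IPL_* argument is used when initializing the
             * pool's lock: IPL_NONE for pools never touched from interrupt
             * context, a higher level such as IPL_VM otherwise.
             */
            pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0, "foopl",
                NULL, IPL_VM);
    }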
thorpej 712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
yamt 1a7bc55dcc remove some __unused from function parameters. 2006-11-01 10:17:58 +00:00
uwe bb003f9487 More __unused (in cpp conditionals not touched by i386). 2006-10-12 21:35:00 +00:00
christos 4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00
drochner ef8848c74a Introduce a UVM_KMF_EXEC flag for uvm_km_alloc() which enforces an
executable mapping. Up to now, only R+W was requested from pmap_kenter_pa.
On most CPUs, we get an executable mapping anyway, due to lack of
hardware support or due to laziness in the pmap implementation. Only
alpha obeys VM_PROT_EXECUTE, afaics.
2006-07-05 14:26:42 +00:00
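A hedged usage sketch of the new flag, assuming the uvm_km_alloc(map, size, align, flags) interface introduced by the yamt-km merge below; the helper name and flag combination are illustrative:

    #include <uvm/uvm_extern.h>

    vaddr_t
    alloc_exec_page(void)                   /* hypothetical helper */
    {
            /* Wired, zeroed, and guaranteed executable (not merely R+W). */
            return uvm_km_alloc(kernel_map, PAGE_SIZE, 0,
                UVM_KMF_WIRED | UVM_KMF_ZERO | UVM_KMF_EXEC);
    }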
yamt c24f70bcad move wait points for kva from upper layers to vm_map. PR/33185 #1.
XXX there is a concern about interaction with kva fragmentation.
see: http://mail-index.NetBSD.org/tech-kern/2006/05/11/0000.html
2006-05-25 14:27:28 +00:00
yamt 38ae305f09 uvm_km_suballoc: consider kva overhead of "kmapent".
fixes PR/31275 (me) and PR/32287 (Christian Biere).
2006-05-03 14:12:01 +00:00
yamt 9f6a649d14 uvm_km_pgremove/uvm_km_pgremove_intrsafe: fix assertions. 2006-04-05 21:56:24 +00:00
yamt 4038c2995b uvm_km_check_empty: fix an assertion. 2006-03-17 09:37:55 +00:00
christos 95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
thorpej e569facced Use ANSI function decls. 2005-06-27 02:19:48 +00:00
christos e715d2ee98 avoid shadow variables.
remove unneeded casts.
2005-05-29 21:06:33 +00:00
simonb eddbeffa06 Use a cast to (long long) and 0x%llx to print out a paddr_t instead
of casting to (void *).  Fixes compile problems with 64-bit paddr_t
on 32-bit platforms.
2005-04-20 14:10:03 +00:00
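The pattern the commit describes, as a small sketch (the helper name is hypothetical):

    void
    print_paddr(paddr_t pa)
    {
            /*
             * Widen explicitly so the format string is correct whether
             * paddr_t is 32 or 64 bits wide on this platform.
             */
            printf("paddr = 0x%llx\n", (long long)pa);
    }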
yamt 01c07ef7bd fix unreasonably frequent "killed: out of swap" on systems which have
little or no swap.
- even under a severe swap shortage, if we have some amount of file-backed pages,
  don't bother to kill processes.
- if all pages in the queue are likely to be reactivated, just give up on
  page type balancing rather than spinning unnecessarily.
2005-04-12 13:11:45 +00:00
yamt cde0e21683 unwrap short lines. 2005-04-01 12:37:27 +00:00
yamt 6b2d8b66a4 merge yamt-km branch.
- don't use managed mappings/backing objects for wired memory allocations.
  save some resources like pv_entry.  also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
2005-04-01 11:59:21 +00:00
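A sketch of the simplified allocate/free pairing after the merge, assuming the uvm_km_alloc()/uvm_km_free() signatures the branch introduced; the function name and request size are illustrative:

    #include <uvm/uvm_extern.h>

    void
    wired_example(void)                     /* hypothetical, for illustration */
    {
            vsize_t size = round_page(65536);
            vaddr_t va;

            /*
             * Wired kernel memory: no managed mapping and no backing
             * object behind it, so no pv_entry cost.
             */
            va = uvm_km_alloc(kernel_map, size, 0,
                UVM_KMF_WIRED | UVM_KMF_ZERO);
            if (va != 0) {
                    /* ... use the memory ... */
                    uvm_km_free(kernel_map, va, size, UVM_KMF_WIRED);
            }
    }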
perry bcfcddbac1 nuke trailing whitespace 2005-02-26 22:31:44 +00:00
yamt 22099ab744 in uvm_unmap_remove, always wakeup va waiters if any.
uvm_km_free_wakeup is now a synonym of uvm_km_free.
2005-01-13 11:50:32 +00:00
yamt e4666bf785 don't reserve (uvm_mapent_reserve) entries for malloc/pool backends
because it isn't necessary or safe.
reported and tested by Denis Lagno.  PR/28897.
2005-01-12 09:34:35 +00:00
yamt 22cf117f61 km_vacache_alloc: UVM_PROT_ALL rather than UVM_PROT_NONE
so that uvm_kernacc works.  PR/28861. (FUKAUMI Naoki)
2005-01-05 00:58:57 +00:00
yamt 0c5ceb72e4 km_vacache_alloc: specify va hint correctly rather than
using stack garbage.  PR/28845.
2005-01-03 04:01:13 +00:00
yamt a880e5e2b5 in the case of !PMAP_MAP_POOLPAGE, gather pool backend allocations to
large chunks for kernel_map and kmem_map to ease kva fragmentation.
2005-01-01 21:08:02 +00:00
yamt 95c82bfee4 introduce vm_map_kernel, a subclass of vm_map, and
move some kernel-only members of vm_map to it.
2005-01-01 21:02:12 +00:00
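Roughly the shape of the subclassing described above; the member names are assumptions for illustration, not the actual field names:

    struct vm_map_kernel {
            struct vm_map vmk_map;          /* must come first, so a
                                               vm_map_kernel * can stand in
                                               for a vm_map * */
            /* ... kernel-only members moved out of struct vm_map ... */
    };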
yamt 1207308b90 for in-kernel maps,
- allocate kva for vm_map_entry from the map itself and
  remove the static limit, MAX_KMAPENT.
- keep merged entries for later splitting to fix allocate-to-free problem.
  PR/24039.
2005-01-01 21:00:06 +00:00
junyoung 7e0c058612 Drop trailing spaces. 2004-03-24 07:47:32 +00:00
matt a78a1b0777 Back out the changes in
http://mail-index.netbsd.org/source-changes/2004/01/29/0027.html
since they don't really fix the problem.

Incorporate one fix:  Mark uvm_map_entry's that were created with
UVM_FLAG_NOMERGE so that they will not be used as future merge
candidates.
2004-02-10 01:30:49 +00:00
yamt 20c5bc5099 - split uvm_map() into two functions for the following:
- for in-kernel maps, disable map entry merging so that
  unmap operations won't block. (workaround for PR/24039)
- for in-kernel maps, allocate kva for vm_map_entry from
  the map itself and eliminate MAX_KMAPENT and
  uvm_map_entry_kmem_pool.
2004-01-29 12:06:02 +00:00
pk 3c96ae431b * Introduce uvm_km_kmemalloc1() which allows alignment and preferred offset
to be passed to uvm_map().

* Turn all uvm_km_valloc*() macros back into (inlined) functions to retain
  binary compatibility with any 3rd party modules.
2003-12-18 15:02:04 +00:00
pk 60181444ca Condense all existing variants of uvm_km_valloc into a single function:
uvm_km_valloc1(), and use it to express all of
	uvm_km_valloc()
	uvm_km_valloc_wait()
	uvm_km_valloc_prefer()
	uvm_km_valloc_prefer_wait()
	uvm_km_valloc_align()
in terms of it by macro expansion.
2003-12-18 08:15:42 +00:00
pk 9a4aea0127 When retiring a swap device with marked bad blocks on it we should update
the `# swap page in use' and `# swap page only' counters.  However, at the
time of swap device removal we can no longer figure out how many of the
bad swap pages are actually also `swap only' pages.

So, on swap I/O errors arrange things to not include the bad swap pages in
the `swpgonly' counter as follows: uvm_swap_markbad() decrements `swpgonly'
by the number of bad pages, and the various VM object deallocation routines
do not decrement `swpgonly' for swap slots marked as SWSLOT_BAD.
2003-08-28 13:12:17 +00:00
pk 5869d91cb9 Introduce uvm_swapisfull(), which computes the available swap space by
taking into account swap devices that are in the process of being removed.
2003-08-11 16:33:30 +00:00
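A hedged sketch of how such a predicate is typically consulted, for example in the pagedaemon's out-of-memory check; the surrounding condition and helper name are illustrative:

    bool
    memory_is_exhausted(void)               /* hypothetical predicate */
    {
            /*
             * Swap is counted as full only across devices that are not
             * in the middle of being removed.
             */
            return uvm_swapisfull() && uvmexp.free < uvmexp.freetarg;
    }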
thorpej 36da248c07 Back out the following change:
http://mail-index.netbsd.org/source-changes/2003/05/08/0068.html

There were some side-effects that I didn't anticipate, and fixing them
is proving to be more difficult than I thought, so just eject for now.
Maybe one day we can look at this again.

Fixes PR kern/21517.
2003-05-10 21:10:23 +00:00
thorpej b77900c3c2 Simplify the way the bounds of the managed kernel virtual address
space are advertised to UVM by making virtual_avail and virtual_end
first-class exported variables by UVM.  Machine-dependent code is
responsible for initializing them before main() is called.  Anything
that steals KVA must adjust these variables accordingly.

This reduces the number of instances of this info from 3 to 1, and
simplifies the pmap(9) interface by removing the pmap_virtual_space()
function call, and removing two arguments from pmap_steal_memory().

This also eliminates some kludges such as having to burn kernel_map
entries on space used by the kernel and stolen KVA.

This also eliminates use of VM_{MIN,MAX}_KERNEL_ADDRESS from MI code,
thus giving MD code greater flexibility over the bounds of the managed
kernel virtual address space if a given port's specific platforms can
vary in this regard (this is especially true of the evb* ports).
2003-05-08 18:13:12 +00:00
bouyer d986226518 Change uvm_km_kmemalloc() to accept flag UVM_KMF_NOWAIT and pass it to
uvm_map(). Change uvm_map() to honor UVM_KMF_NOWAIT. For this, change
amap_extend() to take a flags parameter instead of just boolean for
direction, and introduce AMAP_EXTEND_FORWARDS and AMAP_EXTEND_NOWAIT flags
(AMAP_EXTEND_BACKWARDS is still defined as 0x0, to keep the code easier to
read).
Add a flag parameter to uvm_mapent_alloc().
This solves a problem where a pool_get(PR_NOWAIT) could trigger a pool_get(PR_WAITOK)
in uvm_mapent_alloc().
Thanks to Chuck Silvers, enami tsugutomo, Andrew Brown and Jason R Thorpe
for feedback.
2002-11-30 18:28:04 +00:00
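A sketch of a non-blocking allocation under the new flag, assuming the historical uvm_km_kmemalloc(map, obj, size, flags) signature; the helper name and error handling are illustrative:

    int
    my_alloc(vsize_t size, vaddr_t *vap)    /* hypothetical helper */
    {
            /*
             * UVM_KMF_NOWAIT: fail rather than sleep for kva or map
             * entries, so the call is safe where blocking is forbidden.
             */
            vaddr_t va = uvm_km_kmemalloc(kmem_map, NULL,
                round_page(size), UVM_KMF_NOWAIT);
            if (va == 0)
                    return ENOMEM;
            *vap = va;
            return 0;
    }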
oster 7eac5bf44e Garbage collect some leftover (and unneeded) code. OK'ed by chs. 2002-10-05 17:26:06 +00:00
chs 9672ac098f add a new km flag UVM_KMF_CANFAIL, which causes uvm_km_kmemalloc() to
return failure if swap is full and there are no free physical pages.
have malloc() use this flag if M_CANFAIL is passed to it.
use M_CANFAIL to allow amap_extend() to fail when memory is scarce.
this should prevent most of the remaining hangs in low-memory situations.
2002-09-15 16:54:26 +00:00
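A sketch of the malloc(9) side of this change; the type tag, helper name, and error path are illustrative:

    int
    my_grow(size_t size, void **pp)         /* hypothetical helper */
    {
            /*
             * M_CANFAIL turns a would-be hang (or panic) when swap is
             * full and no pages are free into a NULL return that the
             * caller must handle.
             */
            void *p = malloc(size, M_TEMP, M_WAITOK | M_CANFAIL);
            if (p == NULL)
                    return ENOMEM;
            *pp = p;
            return 0;
    }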
thorpej 95cb683cfb Don't pass VM_PROT_EXEC to pmap_kenter_pa(). 2002-08-14 15:21:31 +00:00
thorpej 26b2d2217b If the bootstrapping process didn't actually use any KVA space, don't
reserve size of 0 in kernel_map.

From OpenBSD.
2002-03-07 20:15:32 +00:00
lukem b616d1ca1d add RCSIDs, and in some cases, slightly cleanup #include order 2001-11-10 07:36:59 +00:00
chs ac48df1681 only acquire the lock for swpgonly if we actually need to adjust it. 2001-11-07 08:43:32 +00:00
chs 2ed88fe090 several changes prompted by loaning problems:
- fix the loaned case in uvm_pagefree().
 - redo uvmexp.swpgonly accounting to work with page loaning.
   add an assertion before each place we adjust uvmexp.swpgonly.
 - fix uvm_km_pgremove() to always free any swap space associated with
   the range being removed.
 - get rid of UVM_LOAN_WIRED flag.  instead, we just make sure that
   pages loaned to the kernel are never on the page queues.
   this allows us to assert that pages are not loaned and wired
   at the same time.
 - add yet more assertions.
2001-11-06 08:07:49 +00:00
chs 64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to eg. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
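The layout rule for the new struct genfs_node, sketched with a hypothetical filesystem node type:

    struct myfs_node {                      /* hypothetical fs-specific data */
            struct genfs_node mn_gnode;     /* must be the first member so
                                               genfs_{get,put}pages() can
                                               find it from the vnode data */
            /* ... filesystem-specific fields ... */
    };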
chris 0e7661f023 Update pmap_update() to take the pmap being updated as an argument.
This will allow improvements to the pmaps so that they can more easily defer
expensive operations, e.g. TLB/cache flushes, until the last possible moment.

Currently this is a no-op on most platforms, so they should see no difference.

Reviewed by Jason.
2001-09-10 21:19:08 +00:00
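A sketch of the new calling convention; the helper name and mapping arguments are illustrative:

    void
    map_one_page(pmap_t pmap, vaddr_t va, paddr_t pa)       /* illustrative */
    {
            pmap_enter(pmap, va, pa, VM_PROT_READ | VM_PROT_WRITE,
                PMAP_WIRED);
            /* Complete any deferred TLB/cache work for exactly this pmap. */
            pmap_update(pmap);
    }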
thorpej a279b0973b Reduce some complexity in the fault path -- Rather than maintaining
an spl-protected "interrupt safe map" list, simply require that callers
of uvm_fault() never call us in interrupt context (MD code must make
the assertion), and check for interrupt-safe maps in uvmfault_lookup()
before we lock the map.
2001-06-26 17:55:14 +00:00
chs 821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
chs 118ddca24a replace {simple_,}lock{_data,}_t with struct {simple,}lock {,*}. 2001-05-26 16:32:40 +00:00
chs 3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
thorpej 1c3a62e066 Sprinkle pmap_update() calls after calls to:
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).

These calls are relatively conservative.  It may be possible to
optimize these a little more.
2001-04-24 04:30:50 +00:00
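A sketch of the pattern this commit applies throughout: finish the pmap(9) operations, then issue one pmap_update(). (At the time of this commit pmap_update() took no argument; the pmap parameter was added by the later change further up.) The helper name and loop are illustrative:

    void
    kenter_range(vaddr_t va, paddr_t pa, vsize_t len)       /* illustrative */
    {
            vsize_t off;

            for (off = 0; off < len; off += PAGE_SIZE)
                    pmap_kenter_pa(va + off, pa + off,
                        VM_PROT_READ | VM_PROT_WRITE);
            /* One flush for the whole batch of kernel mappings. */
            pmap_update();
    }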
thorpej 9e0af4a217 Add a __predict_true() to an extremely common case. 2001-04-12 21:11:47 +00:00
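For reference, the annotation is only a branch-prediction hint and changes no semantics; a minimal illustration with a hypothetical argument:

    void
    hint_example(struct vm_page *pg)        /* illustrative */
    {
            if (__predict_true(pg != NULL)) {
                    /* common case: laid out as the fall-through path */
            } else {
                    /* rare case */
            }
    }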