Commit Graph

106 Commits

Author SHA1 Message Date
cegger
dfa8eb2f81 Move PMAP_KMPAGE to be passed in the pmap_kenter_pa flags argument.
'Looks good to me' gimpy@
2010-05-14 05:02:05 +00:00
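
A minimal sketch of a call site after this change, assuming the four-argument form of pmap_kenter_pa(9) that the flags-argument commit further down this log introduces; va and pa stand in for a wired kernel page:

    /* enter a wired kernel mapping; PMAP_KMPAGE now rides in the flags argument */
    pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE, PMAP_KMPAGE);
    pmap_update(pmap_kernel());
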
joerg
d621e29eca Remove the separate mb_map. nmbclusters is now computed at boot time based
on the amount of physical memory and limited by NMBCLUSTERS if present.
Architectures without direct mapping also limit it based on the kmem_map
size, which is used as backing store. On i386 and ARM, the maximum KVA
used for mbuf clusters is limited to 64MB by default.

The old default limits and limits based on GATEWAY have been removed.
key_registered_sb_max is hard-wired to a value derived from 2048
clusters.
2010-02-08 19:02:25 +00:00
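
Roughly, the sizing policy described above amounts to the sketch below; this is illustrative only, and the helper names physmem_cluster_estimate() and cluster_kva_limit() are hypothetical, not the actual code:

    /* illustrative sketch: size the mbuf cluster pool at boot */
    nmbclusters = physmem_cluster_estimate();       /* scaled from physical memory */
    #ifdef NMBCLUSTERS
    nmbclusters = MIN(nmbclusters, NMBCLUSTERS);    /* static cap, if configured */
    #endif
    /* no direct map: clusters are backed by kmem_map, so also cap by KVA
       (64MB by default on i386 and ARM) */
    nmbclusters = MIN(nmbclusters, cluster_kva_limit() / MCLBYTES);
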
cegger
9480c51b04 Add a flags argument to pmap_kenter_pa(9).
Patch showed on tech-kern@ http://mail-index.netbsd.org/tech-kern/2009/11/04/msg006434.html
No objections.
2009-11-07 07:27:40 +00:00
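
The interface change itself, roughly as it reads in the pmap(9) headers afterwards (the exact type of the flags parameter may vary):

    /* old: void pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot); */
    void pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot, u_int flags);
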
ad
b5413f0358 It's easier for kernel reserve pages to be consumed because the pagedaemon
serves as less of a barrier these days. Restrict provision of kernel reserve
pages to kmem and one of these cases:

- doing a NOWAIT allocation
- caller is a realtime thread
- caller is a kernel thread
- explicitly requested, for example by the pmap
2008-12-13 11:34:43 +00:00
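
An illustrative predicate for the policy in the commit above; this is not the committed code, and the helper and argument names are assumptions:

    /* reserve pages go only to kmem allocations matching one of the listed cases */
    static bool
    may_use_kernel_reserve(bool kmem_alloc, bool nowait, bool requested, struct lwp *l)
    {
        if (!kmem_alloc)
            return false;
        if (requested)                          /* explicitly requested, e.g. by the pmap */
            return true;
        if (nowait)                             /* doing a NOWAIT allocation */
            return true;
        if ((l->l_flag & LW_SYSTEM) != 0)       /* caller is a kernel thread */
            return true;
        return lwp_is_realtime(l);              /* hypothetical realtime-priority check */
    }
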
ad
a371a02d26 PR port-amd64/32816 amd64 can not load lkms
Change some assertions to partially allow for VM_MAP_IS_KERNEL(map) where
map is outside the range of kernel_map.
2008-12-01 10:54:57 +00:00
pooka
c7b9619f12 the most karmic commit of all: fix tyop in comment 2008-08-04 13:37:33 +00:00
matt
ad65eb54bc Add PMAP_KMPAGE flag for pmap_kenter_pa. This allows pmaps to know that
the page being entered is being used by the kernel memory allocator.  Such pages
should have no references and don't need bookkeeping.
2008-07-16 00:11:27 +00:00
yamt
d886e611bd remove a redundant pmap_update and add a comment instead. 2008-03-24 08:52:55 +00:00
chris
855792073c Add some more missing pmap_update()s following pmap_kremove()s. 2008-02-23 17:27:58 +00:00
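
The pattern these fixes restore, roughly: pmap_kremove(9) may defer the actual pmap/TLB work, so it must be followed by pmap_update(9) before the mappings can be assumed gone:

    pmap_kremove(va, len);          /* tear down the wired kernel mappings */
    pmap_update(pmap_kernel());     /* flush deferred pmap state; these calls were missing */
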
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad
6e948e8304 Fix DEBUG build. 2007-07-21 20:52:59 +00:00
ad
4688843d2b Merge unobtrusive locking changes from the vmlocking branch. 2007-07-21 19:21:53 +00:00
ad
59d979c5f1 Pass an ipl argument to pool_init/POOL_INIT to be used when initializing
the pool's lock.
2007-03-12 18:18:22 +00:00
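
A hedged sketch of a pool_init() call site after this change; the pool, structure, and wait-channel names are placeholders, but the trailing IPL argument is the new part:

    pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0, "foopl",
        &pool_allocator_nointr, IPL_NONE);
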
thorpej
712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
yamt
1a7bc55dcc remove some __unused from function parameters. 2006-11-01 10:17:58 +00:00
uwe
bb003f9487 More __unused (in cpp conditionals not touched by i386). 2006-10-12 21:35:00 +00:00
christos
4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00
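
For reference, __unused is the <sys/cdefs.h> attribute being sprinkled here; a tiny hypothetical example of marking a parameter that a given configuration does not use:

    #include <sys/cdefs.h>

    static void
    frob_intr(void *arg, int level __unused)    /* 'level' unused on this port */
    {
        frob_handle(arg);                       /* hypothetical handler */
    }
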
drochner
ef8848c74a Introduce a UVM_KMF_EXEC flag for uvm_km_alloc() which enforces an
executable mapping. Up to now, only R+W was requested from pmap_kenter_pa.
On most CPUs, we get an executable mapping anyway, due to lack of
hardware support or due to laziness in the pmap implementation. Only
alpha obeys VM_PROT_EXECUTE, afaics.
2006-07-05 14:26:42 +00:00
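
A hedged sketch of requesting an executable kernel mapping with the new flag; combining it with UVM_KMF_WIRED here is an assumption about the caller, not part of the change:

    vaddr_t va;

    /* wired, executable kernel memory, e.g. for code generated at run time */
    va = uvm_km_alloc(kernel_map, round_page(sz), 0, UVM_KMF_WIRED | UVM_KMF_EXEC);
    if (va == 0)
        return ENOMEM;
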
yamt
c24f70bcad move wait points for kva from upper layers to vm_map. PR/33185 #1.
XXX there is a concern about interaction with kva fragmentation.
see: http://mail-index.NetBSD.org/tech-kern/2006/05/11/0000.html
2006-05-25 14:27:28 +00:00
yamt
38ae305f09 uvm_km_suballoc: consider kva overhead of "kmapent".
fixes PR/31275 (me) and PR/32287 (Christian Biere).
2006-05-03 14:12:01 +00:00
yamt
9f6a649d14 uvm_km_pgremove/uvm_km_pgremove_intrsafe: fix assertions. 2006-04-05 21:56:24 +00:00
yamt
4038c2995b uvm_km_check_empty: fix an assertion. 2006-03-17 09:37:55 +00:00
christos
95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
thorpej
e569facced Use ANSI function decls. 2005-06-27 02:19:48 +00:00
christos
e715d2ee98 avoid shadow variables.
remove unneeded casts.
2005-05-29 21:06:33 +00:00
simonb
eddbeffa06 Use a cast to (long long) and 0x%llx to print out a paddr_t instead
of casting to (void *).  Fixes compile problems with 64-bit paddr_t
on 32-bit platforms.
2005-04-20 14:10:03 +00:00
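
The idiom in question, sketched: paddr_t can be 64-bit even on 32-bit platforms, so widen it explicitly instead of squeezing it into a pointer:

    /* wrong with a 64-bit paddr_t on a 32-bit platform: printf("pa=%p\n", (void *)pa); */
    printf("pa=0x%llx\n", (long long)pa);
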
yamt
01c07ef7bd fix unreasonably frequent "killed: out of swap" on systems which have
little or no swap.
- even under a severe swap shortage, if we have some file-backed pages,
  don't bother to kill processes.
- if all pages in the queue are likely to be reactivated, just give up
  page type balancing rather than spinning unnecessarily.
2005-04-12 13:11:45 +00:00
yamt
cde0e21683 unwrap short lines. 2005-04-01 12:37:27 +00:00
yamt
6b2d8b66a4 merge yamt-km branch.
- don't use managed mappings/backing objects for wired memory allocations.
  save some resources like pv_entry.  also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
2005-04-01 11:59:21 +00:00
perry
bcfcddbac1 nuke trailing whitespace 2005-02-26 22:31:44 +00:00
yamt
22099ab744 in uvm_unmap_remove, always wakeup va waiters if any.
uvm_km_free_wakeup is now a synonym of uvm_km_free.
2005-01-13 11:50:32 +00:00
yamt
e4666bf785 don't reserve (uvm_mapent_reserve) entries for malloc/pool backends
because it isn't necessary or safe.
reported and tested by Denis Lagno.  PR/28897.
2005-01-12 09:34:35 +00:00
yamt
22cf117f61 km_vacache_alloc: UVM_PROT_ALL rather than UVM_PROT_NONE
so that uvm_kernacc works.  PR/28861. (FUKAUMI Naoki)
2005-01-05 00:58:57 +00:00
yamt
0c5ceb72e4 km_vacache_alloc: specify va hint correctly rather than
using stack garbage.  PR/28845.
2005-01-03 04:01:13 +00:00
yamt
a880e5e2b5 in the case of !PMAP_MAP_POOLPAGE, gather pool backend allocations into
large chunks for kernel_map and kmem_map to ease kva fragmentation.
2005-01-01 21:08:02 +00:00
yamt
95c82bfee4 introduce vm_map_kernel, a subclass of vm_map, and
move some kernel-only members of vm_map to it.
2005-01-01 21:02:12 +00:00
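
The usual C idiom for such a subclass, sketched under the assumption that the base map is embedded as the first member so a vm_map_kernel pointer can be passed wherever a struct vm_map is expected; the kernel-only member shown is illustrative:

    struct vm_map_kernel {
        struct vm_map vmk_map;                  /* base "class"; must come first */
        /* kernel-only members, e.g. merged entries kept for later splitting */
        struct vm_map_entry *vmk_merged_entries;
    };
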
yamt
1207308b90 for in-kernel maps,
- allocate kva for vm_map_entry from the map itself and
  remove the static limit, MAX_KMAPENT.
- keep merged entries for later splitting to fix the allocate-to-free problem.
  PR/24039.
2005-01-01 21:00:06 +00:00
junyoung
7e0c058612 Drop trailing spaces. 2004-03-24 07:47:32 +00:00
matt
a78a1b0777 Back out the changes in
http://mail-index.netbsd.org/source-changes/2004/01/29/0027.html
since they don't really fix the problem.

Incorporate one fix:  Mark uvm_map_entry's that were created with
UVM_FLAG_NOMERGE so that they will not be used as future merge
candidates.
2004-02-10 01:30:49 +00:00
yamt
20c5bc5099 - split uvm_map() into two functions for the following.
- for in-kernel maps, disable map entry merging so that
  unmap operations won't block. (workaround for PR/24039)
- for in-kernel maps, allocate kva for vm_map_entry from
the map itself and eliminate MAX_KMAPENT and
  uvm_map_entry_kmem_pool.
2004-01-29 12:06:02 +00:00
pk
3c96ae431b * Introduce uvm_km_kmemalloc1() which allows alignment and preferred offset
to be passed to uvm_map().

* Turn all uvm_km_valloc*() macros back into (inlined) functions to retain
  binary compatibility with any 3rd party modules.
2003-12-18 15:02:04 +00:00
pk
60181444ca Condense all existing variants of uvm_km_valloc into a single function:
uvm_km_valloc1(), and use it to express all of
	uvm_km_valloc()
	uvm_km_valloc_wait()
	uvm_km_valloc_prefer()
	uvm_km_valloc_prefer_wait()
	uvm_km_valloc_align()
in terms of it by macro expansion.
2003-12-18 08:15:42 +00:00
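
The shape of the consolidation, hedged: one real function plus macro wrappers. The parameter list and flag values below are recalled from context and may not match the committed code exactly:

    vaddr_t uvm_km_valloc1(struct vm_map *, vsize_t /*size*/, vsize_t /*align*/,
                voff_t /*prefer*/, uvm_flag_t /*flags*/);

    #define uvm_km_valloc_align(map, size, align) \
            uvm_km_valloc1((map), (size), (align), UVM_UNKNOWN_OFFSET, UVM_KMF_NOWAIT)
    #define uvm_km_valloc_prefer_wait(map, size, prefer) \
            uvm_km_valloc1((map), (size), 0, (prefer), 0)
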
pk
9a4aea0127 When retiring a swap device with marked bad blocks on it we should update
the `# swap page in use' and `# swap page only' counters.  However, at the
time of swap device removal we can no longer figure out how many of the
bad swap pages are actually also `swap only' pages.

So, on swap I/O errors arrange things to not include the bad swap pages in
the `swpgonly' counter as follows: uvm_swap_markbad() decrements `swpgonly'
by the number of bad pages, and the various VM object deallocation routines
do not decrement `swpgonly' for swap slots marked as SWSLOT_BAD.
2003-08-28 13:12:17 +00:00
pk
5869d91cb9 Introduce uvm_swapisfull(), which computes the available swap space by
taking into account swap devices that are in the process of being removed.
2003-08-11 16:33:30 +00:00
thorpej
36da248c07 Back out the following change:
http://mail-index.netbsd.org/source-changes/2003/05/08/0068.html

There were some side-effects that I didn't anticipate, and fixing them
is proving to be more difficult than I thought, so just eject for now.
Maybe one day we can look at this again.

Fixes PR kern/21517.
2003-05-10 21:10:23 +00:00
thorpej
b77900c3c2 Simplify the way the bounds of the managed kernel virtual address
space are advertised to UVM by making virtual_avail and virtual_end
first-class variables exported by UVM.  Machine-dependent code is
responsible for initializing them before main() is called.  Anything
that steals KVA must adjust these variables accordingly.

This reduces the number of instances of this info from 3 to 1, and
simplifies the pmap(9) interface by removing the pmap_virtual_space()
function call, and removing two arguments from pmap_steal_memory().

This also eliminates some kludges such as having to burn kernel_map
entries on space used by the kernel and stolen KVA.

This also eliminates use of VM_{MIN,MAX}_KERNEL_ADDRESS from MI code,
thus giving MD code greater flexibility over the bounds of the managed
kernel virtual address space if a given port's specific platforms can
vary in this regard (this is especially true of the evb* ports).
2003-05-08 18:13:12 +00:00
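
What this means for machine-dependent code, sketched: the port's pmap_bootstrap() fills in the two UVM-exported variables before main() runs, and any later KVA theft bumps virtual_avail. The first_free_kva value is a placeholder:

    /* in MD pmap_bootstrap(), before main() is called */
    virtual_avail = first_free_kva;             /* start of managed kernel VA */
    virtual_end   = VM_MAX_KERNEL_ADDRESS;      /* end of managed kernel VA */

    /* anything stealing KVA afterwards must adjust the bounds */
    va = virtual_avail;
    virtual_avail += round_page(sz);
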
bouyer
d986226518 Change uvm_km_kmemalloc() to accept the flag UVM_KMF_NOWAIT and pass it to
uvm_map(). Change uvm_map() to honor UVM_KMF_NOWAIT. For this, change
amap_extend() to take a flags parameter instead of just boolean for
direction, and introduce AMAP_EXTEND_FORWARDS and AMAP_EXTEND_NOWAIT flags
(AMAP_EXTEND_BACKWARDS is still defined as 0x0, to keep the code easier to
read).
Add a flag parameter to uvm_mapent_alloc().
This solves a problem where a pool_get(PR_NOWAIT) could trigger a pool_get(PR_WAITOK)
in uvm_mapent_alloc().
Thanks to Chuck Silvers, enami tsugutomo, Andrew Brown and Jason R Thorpe
for feedback.
2002-11-30 18:28:04 +00:00
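
A hedged sketch of the new amap_extend() calling convention described above; the old boolean direction argument becomes a flags word, and with AMAP_EXTEND_NOWAIT the call may now fail instead of sleeping in pool_get():

    /* old form was roughly: amap_extend(entry, addsize, TRUE);  -- boolean "forwards" */
    error = amap_extend(entry, addsize, AMAP_EXTEND_FORWARDS | AMAP_EXTEND_NOWAIT);
    if (error)
        return error;
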
oster
7eac5bf44e Garbage collect some leftover (and unneeded) code. OK'ed by chs. 2002-10-05 17:26:06 +00:00
chs
9672ac098f add a new km flag UVM_KMF_CANFAIL, which causes uvm_km_kmemalloc() to
return failure if swap is full and there are no free physical pages.
have malloc() use this flag if M_CANFAIL is passed to it.
use M_CANFAIL to allow amap_extend() to fail when memory is scarce.
this should prevent most of the remaining hangs in low-memory situations.
2002-09-15 16:54:26 +00:00
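
The malloc(9) side of this, sketched: with M_CANFAIL a waiting allocation returns NULL when memory and swap are exhausted instead of hanging, so the caller must check the result:

    p = malloc(size, M_TEMP, M_WAITOK | M_CANFAIL);
    if (p == NULL)
        return ENOMEM;          /* back out gracefully instead of hanging */
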
thorpej
95cb683cfb Don't pass VM_PROT_EXEC to pmap_kenter_pa(). 2002-08-14 15:21:31 +00:00