Commit Graph

72 Commits

Author SHA1 Message Date
para
9a22f48db4 get rid of the unused uvm_map flag (UVM_MAP_KMAPENT) 2012-10-29 16:00:05 +00:00
rmind
a4c86c1b45 Remove VM_MAP_INTRSAFE and related code. Not used since the "kmem changes". 2012-02-19 00:05:55 +00:00
para
e62ee4d475 extending vmem(9) to be able to allocate resources for its own needs.
simplifying uvm_map handling (no special kernel entries anymore, no relocking)
make malloc(9) a thin wrapper around kmem(9)
(with private interface for interrupt safety reasons)

releng@ acknowledged
2012-01-27 19:48:38 +00:00
chs
caaf3a9421 fix UVM_MAP_CLIP_* to only clip if the clip address is within the entry
(which would only not be true if the clip address is at one of the boundaries
of the entry).  fixes PR 44788.
2012-01-21 16:51:38 +00:00
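
A minimal sketch of the boundary check described in the commit above; the type and function names are invented for illustration and are not the real UVM_MAP_CLIP_START/UVM_MAP_CLIP_END macros.

#include <stdbool.h>
#include <stdint.h>

struct entry {
	uintptr_t start, end;	/* the range [start, end) covered by this entry */
};

/*
 * Clipping (splitting the entry at addr) is only needed when addr falls
 * strictly inside the entry; if it coincides with either boundary there
 * is nothing to split.
 */
static bool
clip_needed(const struct entry *e, uintptr_t addr)
{
	return e->start < addr && addr < e->end;
}
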
reinoud
ea698f7362 Oops, forgot the uvm_map.h 2011-12-20 15:41:01 +00:00
rmind
e225b7bd09 Welcome to 5.99.53! Merge rmind-uvmplock branch:
- Reorganize locking in UVM and provide extra serialisation for pmap(9).
  New lock order: [vmpage-owner-lock] -> pmap-lock.

- Simplify locking in some pmap(9) modules by removing P->V locking.

- Use lock object on vmobjlock (and thus vnode_t::v_interlock) to share
  the locks amongst UVM objects where necessary (tmpfs, layerfs, unionfs).

- Rewrite and optimise x86 TLB shootdown code, make it simpler and cleaner.
  Add TLBSTATS option for x86 to collect statistics about TLB shootdowns.

- Unify /dev/mem et al in MI code and provide required locking (removes
  kernel-lock on some ports).  Also, avoid cache-aliasing issues.

Thanks to Andrew Doran and Joerg Sonnenberger, as their initial patches
formed the core changes of this branch.
2011-06-12 03:35:36 +00:00
chuck
40ec801a13 update license clauses on my code to match the new-style BSD licenses.
based on second diff that rmind@ sent me.

no functional change with this commit.
2011-02-02 15:25:27 +00:00
matt
19e6c76b2d Rename rb.h to rbtree.h, as it is more appropriate (c.f. ptree.h). Also
helps find code that hasn't been updated to use the new rbtree API.
2010-09-25 01:42:38 +00:00
yamt
5e1c2c932d - uvm_map_extract: update map->size correctly for !UVM_EXTRACT_CONTIG.
- uvm_map_extract: panic on zero-sized entries.
- make uvm_map_replace static.
2009-08-01 16:35:51 +00:00
yamt
16babfa6fb on MADV_WILLNEED, start prefetching backing object's pages. 2009-06-10 01:55:33 +00:00
matt
5698938787 Make uvm_map.? use <sys/rb.h> instead of <sys/tree.h>. Change the
ambiguous members ownspace/space to gap/maxgap.  Add some evcnt for
evaluation of lookups using tree/list.  Drop threshold of using
tree for lookups from > 30 to > 15.

Bump kernel version to 4.99.71
2008-07-29 00:03:06 +00:00
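
The gap/maxgap naming in the commit above can be illustrated with a small augmented-tree sketch. This is not the actual sys/rb.h or uvm_map code, only the idea: each entry caches the free space after it ("gap"), every node caches the largest gap in its subtree ("maxgap"), and a lookup can then skip whole subtrees that cannot hold the requested size.

#include <stddef.h>

struct node {
	struct node *left, *right;
	size_t gap;	/* free space immediately after this entry */
	size_t maxgap;	/* largest gap anywhere in the subtree rooted here */
};

static size_t
subtree_maxgap(const struct node *n)
{
	return n != NULL ? n->maxgap : 0;
}

/* Recompute maxgap after this node or one of its children changes. */
static void
fixup_maxgap(struct node *n)
{
	size_t m = n->gap;

	if (subtree_maxgap(n->left) > m)
		m = subtree_maxgap(n->left);
	if (subtree_maxgap(n->right) > m)
		m = subtree_maxgap(n->right);
	n->maxgap = m;
}

/* Find some entry whose following gap can hold "size" bytes, or NULL. */
static struct node *
find_gap(struct node *n, size_t size)
{
	while (n != NULL && n->maxgap >= size) {
		if (subtree_maxgap(n->left) >= size)
			n = n->left;
		else if (n->gap >= size)
			return n;
		else
			n = n->right;
	}
	return NULL;
}
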
yamt
fb0cfeb7dd fix a locking botch. PR/38415 from Wolfgang Solfrank. 2008-04-26 13:44:00 +00:00
yamt
74e0872eca simplify locking and remove vm_map_upgrade/downgrade.
this fixes a deadlock due to read-lock recursion of map->lock.
2008-01-08 13:09:55 +00:00
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
he
e7a46bb8e3 When _KERNEL is defined, we have now grown a dependency on
<sys/proc.h>, since one of the inline functions now refers to curlwp.
Fix this by including <sys/proc.h> when _KERNEL is defined.
2007-07-22 21:07:47 +00:00
ad
4688843d2b Merge unobtrusive locking changes from the vmlocking branch. 2007-07-21 19:21:53 +00:00
thorpej
b3667ada6d TRUE -> true, FALSE -> false 2007-02-22 06:05:00 +00:00
thorpej
712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
yamt
c24f70bcad move wait points for kva from upper layers to vm_map. PR/33185 #1.
XXX there is a concern about interaction with kva fragmentation.
see: http://mail-index.NetBSD.org/tech-kern/2006/05/11/0000.html
2006-05-25 14:27:28 +00:00
yamt
38ae305f09 uvm_km_suballoc: consider kva overhead of "kmapent".
fixes PR/31275 (me) and PR/32287 (Christian Biere).
2006-05-03 14:12:01 +00:00
perry
fbae48b901 Change "inline" back to "__inline" in .h files -- C99 is still too
new, and some apps compile things in C89 mode. C89 keywords stay.

As per core@.
2006-02-16 20:17:12 +00:00
yamt
a3af4c1530 remove the following options. no objections on tech-kern@.
	UVM_PAGER_INLINE
	UVM_AMAP_INLINE
	UVM_PAGE_INLINE
	UVM_MAP_INLINE
2006-02-11 12:45:07 +00:00
yamt
e8e7ab8637 implement compat_linux mremap. 2006-01-21 13:34:15 +00:00
perry
0f0296d88a Remove leading __ from __(const|inline|signed|volatile) -- it is obsolete. 2005-12-24 20:45:08 +00:00
christos
95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
yamt
50a25acc50 (try to) merge map entries in fault handler. 2005-05-17 13:55:33 +00:00
yamt
6b2d8b66a4 merge yamt-km branch.
- don't use managed mappings/backing objects for wired memory allocations.
  save some resources like pv_entry.  also fix (most of) PR/27030.
- simplify kernel memory management API.
- simplify pmap bootstrap of some ports.
- some related cleanups.
2005-04-01 11:59:21 +00:00
chs
467487d274 use vm_map_{min,max}() instead of dereferencing the vm_map pointer directly.
define and use vm_map_set{min,max}() for modifying these values.
remove the {min,max}_offset aliases for these vm_map fields to be more
namespace-friendly.  PR 26475.
2005-02-11 02:12:03 +00:00
yamt
22099ab744 in uvm_unmap_remove, always wakeup va waiters if any.
uvm_km_free_wakeup is now a synonym of uvm_km_free.
2005-01-13 11:50:32 +00:00
yamt
e4666bf785 don't reserve (uvm_mapent_reserve) entries for malloc/pool backends
because it isn't necessary or safe.
reported and tested by Denis Lagno.  PR/28897.
2005-01-12 09:34:35 +00:00
yamt
a880e5e2b5 in the case of !PMAP_MAP_POOLPAGE, gather pool backend allocations into
large chunks for kernel_map and kmem_map to ease kva fragmentation.
2005-01-01 21:08:02 +00:00
yamt
95c82bfee4 introduce vm_map_kernel, a subclass of vm_map, and
move some kernel-only members of vm_map to it.
2005-01-01 21:02:12 +00:00
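
The "subclass" in the commit above is plain C struct embedding; a rough sketch of the shape, with invented field names rather than the real headers:

struct map_sketch {			/* stands in for struct vm_map */
	int	ms_flags;
	/* ... members common to all maps ... */
};

struct kernel_map_sketch {		/* stands in for struct vm_map_kernel */
	struct map_sketch kms_map;	/* generic part, kept as the first member */
	/* ... kernel-only members ... */
};

/* A kernel map can then be handed to code expecting the generic type. */
static struct map_sketch *
to_generic(struct kernel_map_sketch *km)
{
	return &km->kms_map;
}
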
yamt
1207308b90 for in-kernel maps,
- allocate kva for vm_map_entry from the map itself and
  remove the static limit, MAX_KMAPENT.
- keep merged entries for later splitting to fix allocate-to-free problem.
  PR/24039.
2005-01-01 21:00:06 +00:00
matt
a78a1b0777 Back out the changes in
http://mail-index.netbsd.org/source-changes/2004/01/29/0027.html
since they don't really fix the problem.

Incorporate one fix:  Mark uvm_map_entry's that were created with
UVM_FLAG_NOMERGE so that they will not be used as future merge
candidates.
2004-02-10 01:30:49 +00:00
yamt
20c5bc5099 - split uvm_map() into two functions, for the following purposes:
- for in-kernel maps, disable map entry merging so that
  unmap operations won't block. (workaround for PR/24039)
- for in-kernel maps, allocate kva for vm_map_entry from
the map itself and eliminate MAX_KMAPENT and
  uvm_map_entry_kmem_pool.
2004-01-29 12:06:02 +00:00
yamt
57e554da69 track map entries and free spaces using red-black tree
to improve scalability of operations on the map.

originally done by Niels Provos for OpenBSD.
tweaked for NetBSD by me with some advice from enami tsugutomo.
discussed on tech-kern@ and tech-perform@.
2003-11-01 11:09:02 +00:00
enami
aa87bee0c5 ansi'fy. 2003-10-01 22:50:15 +00:00
enami
a396bb4713 Swap where the vm map's max and min offset are stored so that they can be
used during map traversal.
2003-09-10 13:38:20 +00:00
atatat
df0a9badc6 Introduce "top down" memory management for mmap()ed allocations. This
means that the dynamic linker gets mapped in at the top of available
user virtual memory (typically just below the stack), shared libraries
get mapped downwards from that point, and calls to mmap() that don't
specify a preferred address will get mapped in below those.

This means that the heap and the mmap()ed allocations will grow
towards each other, allowing one or the other to grow larger than
before.  Previously, the heap was limited to MAXDSIZ by the placement
of the dynamic linker (and the process's rlimits) and the space
available to mmap was hobbled by this reservation.

This is currently only enabled via an *option* for the i386 platform
(though other platforms are expected to follow).  Add "options
USE_TOPDOWN_VM" to your kernel config file, rerun config, and rebuild
your kernel to take advantage of this.

Note that the pmap_prefer() interface has not yet been modified to
play nicely with this, so those platforms require a bit more work
(most notably the sparc) before they can use this new memory
arrangement.

This change also introduces a VM_DEFAULT_ADDRESS() macro that picks
the appropriate default address based on the size of the allocation or
the size of the process's text segment accordingly.  Several drivers
and the SYSV SHM address assignment were changed to use this instead
of each one picking their own "default".
2003-02-20 22:16:05 +00:00
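
The placement difference described in the commit above can be sketched roughly as follows. The constants and helper names below are invented for illustration; the real selection happens behind VM_DEFAULT_ADDRESS() and the map's top-down setting.

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE	4096UL			/* example page size */
#define USER_TOP	0xbfc00000UL		/* example top of the user address space */
#define STACK_ROOM	(32UL << 20)		/* example room kept free for the stack */

/* Bottom-up (traditional): unhinted mappings go above the heap and grow upwards. */
static uintptr_t
hint_bottomup(uintptr_t heap_end, size_t len)
{
	(void)len;
	return (heap_end + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Top-down: unhinted mappings start just below the stack area and grow downwards. */
static uintptr_t
hint_topdown(size_t len)
{
	return (USER_TOP - STACK_ROOM - len) & ~(PAGE_SIZE - 1);
}
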
perry
bbad42171f /*CONTCOND*/ while (0)'ed macros 2002-11-02 07:40:47 +00:00
chs
94a62d45d6 add a new flag VM_MAP_DYING, which is set before we start
tearing down a vm_map.  use this to skip the pmap_update()
at the end of all the removes, which allows pmaps to optimize
pmap tear-down.  also, use the new pmap_remove_all() hook to
let the pmap implementation know what we're up to.
2002-09-22 07:21:29 +00:00
christos
7e19baba28 protect against traditional macro expansion. 2001-10-03 13:32:23 +00:00
chs
2133049a7c create a new pool for map entries, allocated from kmem_map instead of
kernel_map.  use this instead of the static map entries when allocating
map entries for kernel_map.  this greatly reduces the number of static
map entries used and should eliminate the problems with running out.
2001-09-09 19:38:22 +00:00
thorpej
a279b0973b Reduce some complexity in the fault path -- Rather than maintaining
an spl-protected "interrupt safe map" list, simply require that callers
of uvm_fault() never call us in interrupt context (MD code must make
the assertion), and check for interrupt-safe maps in uvmfault_lookup()
before we lock the map.
2001-06-26 17:55:14 +00:00
chs
821ec03ed9 replace vm_map{,_entry}_t with struct vm_map{,_entry} *. 2001-06-02 18:09:08 +00:00
chs
118ddca24a replace {simple_,}lock{_data,}_t with struct {simple,}lock {,*}. 2001-05-26 16:32:40 +00:00
chs
3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
chs
ac3bc537bd eliminate the KERN_* error codes in favor of the traditional E* codes.
the mapping is:

KERN_SUCCESS			0
KERN_INVALID_ADDRESS		EFAULT
KERN_PROTECTION_FAILURE		EACCES
KERN_NO_SPACE			ENOMEM
KERN_INVALID_ARGUMENT		EINVAL
KERN_FAILURE			various, mostly turn into KASSERTs
KERN_RESOURCE_SHORTAGE		ENOMEM
KERN_NOT_RECEIVER		<unused>
KERN_NO_ACCESS			<unused>
KERN_PAGES_LOCKED		<unused>
2001-03-15 06:10:32 +00:00
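
The table above can be restated as a small lookup helper. The KERN_* constants below are local placeholders (the real ones were removed by this change), and KERN_FAILURE is handled only by the default case since, per the commit, it had no single errno equivalent and mostly turned into KASSERTs.

#include <errno.h>

enum {
	KERN_SUCCESS, KERN_INVALID_ADDRESS, KERN_PROTECTION_FAILURE,
	KERN_NO_SPACE, KERN_INVALID_ARGUMENT, KERN_RESOURCE_SHORTAGE
};

static int
kern_to_errno(int kern)
{
	switch (kern) {
	case KERN_SUCCESS:		return 0;
	case KERN_INVALID_ADDRESS:	return EFAULT;
	case KERN_PROTECTION_FAILURE:	return EACCES;
	case KERN_NO_SPACE:		return ENOMEM;
	case KERN_INVALID_ARGUMENT:	return EINVAL;
	case KERN_RESOURCE_SHORTAGE:	return ENOMEM;
	default:			return EINVAL;	/* unexpected code */
	}
}
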
chs
19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
enami
4625dcde2e Use a single const char array instead of over 200 string constants. 2000-12-13 08:06:11 +00:00