Commit Graph

1598 Commits

Author SHA1 Message Date
riastradh
bf188c97ed Set bp->b_resid to bp->b_bcount on error in swstrategy as required. 2013-05-07 15:49:09 +00:00
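
A minimal sketch of the buffer error-path convention this refers to, assuming a hypothetical strategy routine (not the actual swstrategy code): on failure, b_error is set and b_resid is set to b_bcount so the caller sees that nothing was transferred.

#include <sys/param.h>
#include <sys/buf.h>
#include <sys/errno.h>

static bool example_lookup_failed(struct buf *);        /* hypothetical check */

static void
example_strategy(struct buf *bp)
{
        if (example_lookup_failed(bp)) {
                bp->b_error = EIO;
                bp->b_resid = bp->b_bcount;     /* nothing was transferred */
                biodone(bp);
                return;
        }
        /* ... otherwise hand the buffer to the underlying device ... */
}
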
matt
c54c5d3aee Remove __BEGIN_DECLS/__END_DECLS
Allow pmap_kenter_pa to be a macro.
2013-02-02 14:06:58 +00:00
para
5bb32f7811 improve on comments 2013-01-29 21:37:04 +00:00
para
f2aa565376 bring file up to date for previous vmem changes. 2013-01-29 21:29:40 +00:00
para
39dafdefa9 revert previous commit not yet fully functional, sorry 2013-01-26 15:18:00 +00:00
para
cca299e0a3 make vmem(9) ready to be used early during bootstrap to replace extent(9).
pass memory for vmem structs into the initialization functions and
do away with the static pools for this.
factor out the vmem internal structures into a private header.
remove special bootstrapping of the kmem_va_arena as all necessary memory
comes from pool_allocator_meta which is fully operational at this point.
2013-01-26 13:50:33 +00:00
jakllsch
793070a537 Until such time as the swap subsystem can be converted to use The One True
Allocator, prevent panics if (MAXPHYS/PAGE_SIZE) > BLIST_MAX_ALLOC.
From Wolfgang Stukenbrock in PR#41765.
2012-11-27 20:15:55 +00:00
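
A minimal sketch of the kind of clamp described above, assuming the surrounding swap I/O code; only MAXPHYS, PAGE_SIZE and BLIST_MAX_ALLOC are real names here.

#include <sys/param.h>
#include <sys/blist.h>

static int
example_swap_io_pages(void)
{
        int npages = MAXPHYS / PAGE_SIZE;       /* pages per maximal I/O */

        /* never ask the blist allocator for more than it supports per call */
        if (npages > BLIST_MAX_ALLOC)
                npages = BLIST_MAX_ALLOC;
        return npages;
}
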
matt
abccd7c658 When uvm_io reserves kernel address space, make sure it starts with the
same color as the user address space being copied.
2012-11-02 16:43:16 +00:00
para
9a22f48db4 get rid of the unused uvm_map flag (UVM_MAP_KMAPENT) 2012-10-29 16:00:05 +00:00
christos
b1425120c0 move from common/pmap/tlb -> uvm/pmap 2012-10-03 00:51:45 +00:00
matt
76a03311f2 #include <sys/atomic.h> 2012-09-15 06:25:47 +00:00
rmind
1e84067639 - Manage anonymous UVM object reference count with atomic ops.
- Fix an old bug of possible lock against oneself (uao_detach_locked() is
  called from uao_swap_off() with uao_list_lock acquired).  Also removes
  the try-lock dance in uao_swap_off(), since the lock order changes.
2012-09-14 22:20:50 +00:00
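
A minimal sketch of reference counting with the <sys/atomic.h> primitives this refers to; the structure and functions below are illustrative, not the real uvm_aobj code.

#include <sys/atomic.h>
#include <sys/kmem.h>

struct example_obj {
        volatile unsigned int   eo_refs;        /* reference count */
        /* ... object state ... */
};

static void
example_reference(struct example_obj *obj)
{
        atomic_inc_uint(&obj->eo_refs);         /* no lock needed for the count */
}

static void
example_detach(struct example_obj *obj)
{
        if (atomic_dec_uint_nv(&obj->eo_refs) > 0)
                return;
        /*
         * Last reference: the object can be torn down without first taking
         * a global list lock, which is what avoids the lock-against-oneself
         * scenario described above.
         */
        kmem_free(obj, sizeof(*obj));
}
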
rmind
8c26068070 - Describe uvm_aobj and the lock order.
- Remove unnecessary uao_dropswap_range1() wrapper.
- KNF.  Sprinkle some __cacheline_aligned.
2012-09-14 18:56:15 +00:00
para
75ebc1f88a call pmap_growkernel once after the kmem_arena is created
to make the pmap cover its address space
assert on the growth in uvm_km_kmem_alloc

for the 3rd uvm_map_entry, uvm_map_prepare will grow the kernel,
but we might call into uvm_km_kmem_alloc earlier, through imports
to the kmem_meta_arena

while here, guard uvm_km_va_starved_p against the kmem_arena not
yet being created

thanks to everyone involved for tracking this down
2012-09-07 06:45:04 +00:00
matt
3413f0dfee Remove locking since it isn't needed. As soon as the 2nd uvm_map_entry in kernel_map
is created, uvm_map_prepare will call pmap_growkernel, and the pmap_growkernel call in
uvm_km_kmem_alloc will never be made again.
2012-09-04 13:37:41 +00:00
matt
d22fabd15b Switch to a spin lock (uvm_kentry_lock) which, fortunately, was sitting there
unused.
2012-09-03 19:53:42 +00:00
matt
f98c747f9e Cleanup comment. Change panic to KASSERTMSG.
Use kernel_map->misc_lock to make sure we don't call pmap_growkernel
concurrently and possibly mess up uvm_maxkaddr.
2012-09-03 17:30:04 +00:00
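
A minimal sketch of the panic-to-KASSERTMSG conversion mentioned above; the condition and message are illustrative fragments, not the actual uvm_km.c code.

/* before: always compiled in, fires even in release kernels */
if (uvm_maxkaddr < endva)
        panic("pmap_growkernel did not grow the kernel VA space");

/* after: checked in DIAGNOSTIC kernels only, with a formatted message */
KASSERTMSG(uvm_maxkaddr >= endva,
    "uvm_maxkaddr %#jx < endva %#jx",
    (uintmax_t)uvm_maxkaddr, (uintmax_t)endva);
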
matt
b1534669df Shut up gcc printf warning. 2012-09-03 16:07:17 +00:00
matt
2825f5a997 Don't try to grow the entire kmem space, just grow as needed in uvm_km_kmem_alloc 2012-09-03 15:55:42 +00:00
matt
9e35a51eac Fix a bug where the kernel was never grown to accommodate the kmem VA space
since that happens before the kernel_map is set.
2012-09-03 14:21:24 +00:00
matt
a462d18984 Add a __HAVE_CPU_UAREA_IDLELWP hook so that the MD code can allocate
special UAREAs for idle lwps.
2012-09-01 00:26:37 +00:00
chs
215c8cfa7e avoid leaking a uvm_object reference when merging a new map entry
with the entries on both sides.  fixes PR 46807.
2012-08-18 14:28:04 +00:00
matt
fd2366536d -fno-common broke kernhist since it used commons.
Add a KERNHIST_DEFINE which defines the kernel history.
Change UVM to deal with the new usage.
2012-07-30 23:56:48 +00:00
matt
05e601aafa Convert a KASSERT to a KASSERTMSG. Expand one KASSERTMSG a little bit. 2012-07-09 11:19:34 +00:00
jym
57d7988f76 Now that pool_cache_invalidate() is synchronous and can handle per-CPU
caches, merge together pool_drain_start() and pool_drain_end() into

bool pool_drain(struct pool **ppp);

"bool" value indicates whether reclaiming was fully done (true) or not (false)
"ppp" will contain a pointer to the pool that was drained (optional).

See http://mail-index.netbsd.org/tech-kern/2012/06/04/msg013287.html
2012-06-05 22:51:47 +00:00
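
A minimal sketch of a caller of the merged interface, using the prototype quoted above; the surrounding function is illustrative, not the actual pagedaemon code.

#include <sys/pool.h>
#include <sys/systm.h>

static void
example_reclaim(void)
{
        struct pool *pp = NULL;
        bool drained;

        drained = pool_drain(&pp);      /* drains one pool per call */
        if (!drained && pp != NULL)
                printf("pool \"%s\" not fully drained\n", pp->pr_wchan);
}
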
rmind
227075769a Improve the wording slightly. 2012-06-03 17:12:49 +00:00
dsl
e21a34c25e Add some pre-processor magic to verify that the type of the data item
passed to sysctl_createv() actually matches the declared type for
  the item itself.
In the places where the caller specifies a function and a structure
  address (typically the 'softc') an explicit (void *) cast is now needed.
Fixes bugs in sys/dev/acpi/asus_acpi.c sys/dev/bluetooth/bcsp.c
  sys/kern/vfs_bio.c sys/miscfs/syncfs/sync_subr.c and setting
  AcpiGbl_EnableAmlDebugObject.
(mostly passing the address of a uint64_t when typed as CTLTYPE_INT).
I've test built quite a few kernels, but there may be some unfixed MD
  fallout. Most likely passing &char[] to char *.
Also add CTLFLAG_UNSIGNED for unsigned decimals - not set yet.
2012-06-02 21:36:41 +00:00
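
A minimal sketch of the general compile-time trick that this kind of pre-processor check relies on (not the actual sysctl_createv() implementation): a conditional expression makes the compiler diagnose a pointer whose type does not match the declared one.

#include <sys/types.h>

/* evaluates to (p), but only compiles cleanly when (p) is an int * */
#define EXAMPLE_CHECK_INT_PTR(p)        (1 ? (p) : (int *)0)

static int      example_int;
static uint64_t example_u64;

static void
example_usage(void)
{
        int *ok = EXAMPLE_CHECK_INT_PTR(&example_int);  /* fine */
        /* int *bad = EXAMPLE_CHECK_INT_PTR(&example_u64);  -- diagnosed */
        (void)ok;
}
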
para
c8f20a5780 add some description about the vmem arenas, how they stack up and their purpose 2012-06-02 08:42:37 +00:00
martin
c5f76e4cca Only use generic readahead on VREG vnodes; the space used to store the
context is not valid on other types.
Prevents the crash reported in PR kern/38889, but does not fix the
mmap of block devices, more work is needed (no size on VBLK vnodes).
2012-06-01 14:52:48 +00:00
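
A minimal sketch of the vnode-type guard described above; the helper name and arguments are hypothetical.

/* only regular files carry a valid readahead context */
if (vp->v_type == VREG)
        example_generic_readahead(vp, off, len);
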
rmind
c6170bf370 Describe PG_ flags (for struct vm_page). Reviewed by yamt@. 2012-05-05 20:45:35 +00:00
chs
8306a9eddf change vflushbuf() to take the full FSYNC_* flags.
translate FSYNC_LAZY into PGO_LAZY for VOP_PUTPAGES() so that
genfs_do_io() can set the appropriate io priority for the I/O.
this is the first part of addressing PR 46325.
2012-04-29 22:53:59 +00:00
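
A minimal sketch of the flag translation described above; the companion flags are illustrative and the surrounding code is not the real vflushbuf().

int pflags = PGO_ALLPAGES | PGO_CLEANIT;

if (flags & FSYNC_LAZY)
        pflags |= PGO_LAZY;     /* let genfs_do_io() pick a lower I/O priority */

mutex_enter(vp->v_interlock);   /* VOP_PUTPAGES consumes the interlock */
error = VOP_PUTPAGES(vp, 0, 0, pflags);
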
yamt
36347bfd16 uvm_km_kmem_alloc: don't hardcode kmem_va_arena 2012-04-13 15:34:42 +00:00
yamt
a630250ee1 comments 2012-04-13 15:33:38 +00:00
chs
26b2b74f75 initialize amap per-page reference counts before changing the amap's
overall reference count.  this fixes the crashes seen for the last 9 months
with web browsers and plugins, which was also the cause of PR 46193.
2012-04-08 20:47:10 +00:00
martin
94b761b6aa Rework posix_spawn locking and memory management:
 - always provide a vmspace for the new proc, initially borrowing from proc0
   (this part fixes PR 46286)
 - increase parallelism between parent and child if arguments allow this,
   avoiding a potential deadlock on exec_lock
 - add a new flag for userland to request old (lockstepped) behaviour for
   better error reporting
 - adapt test cases to the previous two and add a new variant to test the
   diagnostics flag
 - fix a few memory (and lock) leaks
 - provide netbsd32 compat
2012-04-08 11:27:44 +00:00
chs
16c2ba7402 fix uarea_system_poolpage_free() to handle freeing a uarea
that was not allocated by cpu_uarea_alloc() (i.e. on platforms
where cpu_uarea_alloc() failing is not fatal).
fixes PR 46284.
2012-04-06 17:16:30 +00:00
chs
6042004c72 adjust amap_cow_now() to make UVM_PAGE_TRKOWN happy. 2012-03-30 02:25:24 +00:00
uebayasi
1e0c5c59e0 Expose vm_inherit/voff_t/pgoff_t to userland to fix build. 2012-03-19 00:17:08 +00:00
uebayasi
57e974fbee Move base type definitions from uvm_extern.h to uvm_param.h so that
other sources can easily include part of UVM headers without the whole
uvm_extern.h (e.g. sys/vnode.h wants only uvm_object.h).
2012-03-18 13:31:14 +00:00
elad
0c9d8d15c9 Replace the remaining KAUTH_GENERIC_ISSUSER authorization calls with
something meaningful. All relevant documentation has been updated or
written.

Most of these changes were brought up in the following messages:

    http://mail-index.netbsd.org/tech-kern/2012/01/18/msg012490.html
    http://mail-index.netbsd.org/tech-kern/2012/01/19/msg012502.html
    http://mail-index.netbsd.org/tech-kern/2012/02/17/msg012728.html

Thanks to christos, manu, njoly, and jmmv for input.

Huge thanks to pgoyette for spinning these changes through some build
cycles and ATF.
2012-03-13 18:40:26 +00:00
bouyer
1cc4347583 uvm_km_pgremove_intrsafe(): properly compute the size to pmap_kremove()
(do not truncate it to the first __PGRM_BATCH pages per batch): if we were
given a sparse mapping, we could leave mappings in place.
Note that this doesn't seem to be a problem right now: I added a KASSERT
in my private tree to see if uvm_km_pgremove_intrsafe() would use a
too short size, and it didn't fire.
2012-03-12 21:37:12 +00:00
he
85a5a5ec09 __uvmexp_pagesize is also needed for non-modular builds, as
witnessed by the otherwise failing sparc build.
2012-02-27 01:39:58 +00:00
rmind
3a78ca9333 uvm_km_kmem_alloc: return ENOMEM on failure in PMAP_MAP_POOLPAGE case. 2012-02-25 22:28:06 +00:00
matt
081df64308 Add "opt_modular.h"
#define __uvmexp_pagesize
if MIN_PAGE_SIZE != MAX_PAGE_SIZE && MODULAR is defined
2012-02-23 20:49:46 +00:00
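
Read literally, the description above corresponds to a conditional along these lines (a sketch of the stated condition, not necessarily the exact header text):

#include "opt_modular.h"

#if MIN_PAGE_SIZE != MAX_PAGE_SIZE && defined(MODULAR)
#define __uvmexp_pagesize       /* page size must be looked up at run time */
#endif
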
bouyer
aa488cdf00 When using uvm_km_pgremove_intrsafe() make sure mappings are removed
before returning the pages to the free pool. Otherwise, under Xen,
a page which still has a writable mapping could be allocated for
a PDP by another CPU and the hypervisor would refuse it (this is
PR port-xen/45975).
For this, move the pmap_kremove() calls inside uvm_km_pgremove_intrsafe(),
and do pmap_kremove()/uvm_pagefree() in batches of (at most) 16 entries
(as suggested by Chuck Silvers on tech-kern@, see also
http://mail-index.netbsd.org/tech-kern/2012/02/17/msg012727.html and
followups).
2012-02-20 19:14:23 +00:00
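
A minimal sketch of the batching pattern described above; the function and names are illustrative, not the real uvm_km_pgremove_intrsafe(), and page-aligned start/end are assumed.

#include <sys/param.h>
#include <uvm/uvm.h>

#define EXAMPLE_BATCH   16      /* mirrors the "(at most) 16 entries" above */

static void
example_pgremove(vaddr_t start, vaddr_t end)
{
        struct vm_page *pgs[EXAMPLE_BATCH];
        vaddr_t va, batch_start = start;
        paddr_t pa;
        int i, n = 0;

        for (va = start; va < end; va += PAGE_SIZE) {
                if (pmap_extract(pmap_kernel(), va, &pa))
                        pgs[n++] = PHYS_TO_VM_PAGE(pa);
                if (n == 0)
                        continue;
                if (n == EXAMPLE_BATCH || va + PAGE_SIZE == end) {
                        /* unmap the whole VA span of the batch first, so the
                         * pages cannot be reused while still mapped */
                        pmap_kremove(batch_start, va + PAGE_SIZE - batch_start);
                        pmap_update(pmap_kernel());
                        /* only now hand the pages back to the free list */
                        for (i = 0; i < n; i++)
                                uvm_pagefree(pgs[i]);
                        n = 0;
                        batch_start = va + PAGE_SIZE;
                }
        }
}
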
martin
94602e1cf5 Solve previous fix (for early posix_spawn children exiting on error)
differently.
2012-02-20 12:21:23 +00:00
rmind
a4c86c1b45 Remove VM_MAP_INTRSAFE and related code. Not used since the "kmem changes". 2012-02-19 00:05:55 +00:00
matt
a199401390 Make sure to export uvmexp_* if MODULAR is defined.
Make uvmexp_page* a pointer to a const int, and make the pointer itself
const as well.
2012-02-17 23:41:02 +00:00
matt
abc292211d Add KASSERTs to uvm_pagealloc_pgfl to verify the page is actually free and has
the contents that it should.
Redo the KASSERTs for the pageq in uvm_pagefree.
2012-02-16 11:46:14 +00:00
martin
7e9aef18d8 Fix another merge botch - bracket the vm space assignment with
kpreempt_disable/enable. 2012-02-12 20:28:14 +00:00
2012-02-12 20:28:14 +00:00