Commit Graph

138 Commits

Author SHA1 Message Date
para
f2aa565376 bring file up to date for previous vmem changes. 2013-01-29 21:29:40 +00:00
para
39dafdefa9 revert previous commit: not yet fully functional, sorry 2013-01-26 15:18:00 +00:00
para
cca299e0a3 make vmem(9) ready to be used early during bootstrap to replace extent(9).
pass memory for vmem structs into the initialization functions and
do away with the static pools for this.
factor out the vmem internal structures into a private header.
remove special bootstrapping of the kmem_va_arena as all necessary memory
comes from pool_allocator_meta which is fully operational at this point.
2013-01-26 13:50:33 +00:00
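A hedged sketch of the bootstrap idea above, assuming a vmem_init()-style entry point that takes caller-supplied storage; the exact parameter list is an assumption for illustration only:

    #include <sys/vmem.h>
    #include <sys/vmem_impl.h>  /* the private header with struct vmem, per the commit */

    /* Storage supplied by the caller instead of coming from a static pool. */
    static vmem_t kmem_arena_store;
    vmem_t *kmem_arena;

    void
    kmem_bootstrap_sketch(vmem_addr_t base, vmem_size_t size)
    {
        /* Initialize the arena in place; nothing is allocated this early. */
        kmem_arena = vmem_init(&kmem_arena_store, "kmem",
            base, size, PAGE_SIZE,      /* base, size, quantum */
            NULL, NULL, NULL,           /* import fn, release fn, source */
            0,                          /* qcache_max */
            VM_NOSLEEP, IPL_VM);
    }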
para
75ebc1f88a call pmap_growkernel once after the kmem_arena is created
to make the pmap cover its address space
assert on the growth in uvm_km_kmem_alloc

for the 3rd uvm_map_entry, uvm_map_prepare will grow the kernel,
but we might call into uvm_km_kmem_alloc through imports to
the kmem_meta_arena earlier

while here, guard uvm_km_va_starved_p against the kmem_arena not yet being created

thanks to everyone involved for tracking this down
2012-09-07 06:45:04 +00:00
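A minimal sketch of the two pieces described above; the starvation threshold and surrounding names are illustrative:

    /* Once, right after the kmem_arena is created: grow the kernel
     * page tables so the pmap covers the arena's whole VA range. */
    uvm_maxkaddr = pmap_growkernel(kmembase + kmemsize);
    KASSERT(uvm_maxkaddr >= kmembase + kmemsize);

    /* And the guard: report "not starved" while the arena is absent. */
    bool
    uvm_km_va_starved_p(void)
    {
        vmem_size_t total, free;

        if (kmem_arena == NULL)
            return false;           /* not yet created during early boot */
        total = vmem_size(kmem_arena, VMEM_ALLOC | VMEM_FREE);
        free = vmem_size(kmem_arena, VMEM_FREE);
        return (free < (total / 10));   /* threshold is illustrative */
    }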
matt
3413f0dfee Remove locking since it isn't needed. As soon as the 2nd uvm_map_entry in kernel_map
is created, uvm_map_prepare will call pmap_growkernel, and the pmap_growkernel call in
uvm_km_kmem_alloc will never be reached again.
2012-09-04 13:37:41 +00:00
matt
d22fabd15b Switch to a spin lock (uvm_kentry_lock) which, fortunately, was sitting there
unused.
2012-09-03 19:53:42 +00:00
matt
f98c747f9e Clean up a comment. Change panic to KASSERTMSG.
Use kernel_map->misc_lock to make sure we don't call pmap_growkernel
concurrently and possibly mess up uvm_maxkaddr.
2012-09-03 17:30:04 +00:00
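A hedged sketch of the locking pattern; names follow the commit message, and the surrounding context is assumed:

    /* Serialize growers so two concurrent calls cannot leave
     * uvm_maxkaddr behind the actual page-table coverage. */
    mutex_enter(&kernel_map->misc_lock);
    if (uvm_maxkaddr < va + size)
        uvm_maxkaddr = pmap_growkernel(va + size);
    mutex_exit(&kernel_map->misc_lock);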
matt
b1534669df Shut up gcc printf warning. 2012-09-03 16:07:17 +00:00
matt
2825f5a997 Don't try to grow the entire kmem space; just grow as needed in uvm_km_kmem_alloc 2012-09-03 15:55:42 +00:00
matt
9e35a51eac Fix a bug where the kernel was never grown to accommodate the kmem VA space
since that happens before the kernel_map is set.
2012-09-03 14:21:24 +00:00
matt
05e601aafa Convert a KASSERT to a KASSERTMSG. Expand one KASSERTMSG a little bit. 2012-07-09 11:19:34 +00:00
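For illustration, the difference between the two forms; the asserted condition here is hypothetical:

    /* Before: no context in the panic message when this fires. */
    KASSERT(addr >= base);

    /* After: the operands are printed when the assertion fails. */
    KASSERTMSG(addr >= base, "addr %#jx < base %#jx",
        (uintmax_t)addr, (uintmax_t)base);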
rmind
227075769a Improve the wording slightly. 2012-06-03 17:12:49 +00:00
para
c8f20a5780 add some description of the vmem arenas, how they stack up, and their purpose 2012-06-02 08:42:37 +00:00
yamt
36347bfd16 uvm_km_kmem_alloc: don't hardcode kmem_va_arena 2012-04-13 15:34:42 +00:00
bouyer
1cc4347583 uvm_km_pgremove_intrsafe(): properly compute the size to pmap_kremove()
(do not truncate it to the first __PGRM_BATCH pages per batch): if we were
given a sparse mapping, we could leave mappings in place.
Note that this doesn't seem to be a problem right now: I added a KASSERT
in my private tree to see if uvm_km_pgremove_intrsafe() would use a
too short size, and it didn't fire.
2012-03-12 21:37:12 +00:00
rmind
3a78ca9333 uvm_km_kmem_alloc: return ENOMEM on failure in PMAP_MAP_POOLPAGE case. 2012-02-25 22:28:06 +00:00
bouyer
aa488cdf00 When using uvm_km_pgremove_intrsafe() make sure mappings are removed
before returning the pages to the free pool. Otherwise, under Xen,
a page which still has a writable mapping could be allocated for
a PDP by another CPU and the hypervisor would refuse it (this is
PR port-xen/45975).
For this, move the pmap_kremove() calls inside uvm_km_pgremove_intrsafe(),
and do pmap_kremove()/uvm_pagefree() in batches of (at most) 16 entries
(as suggested by Chuck Silvers on tech-kern@, see also
http://mail-index.netbsd.org/tech-kern/2012/02/17/msg012727.html and
followups).
2012-02-20 19:14:23 +00:00
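A hedged sketch of the batching scheme, not the actual uvm_km_pgremove_intrsafe(); the size computation fixed in the 2012-03-12 entry above is marked in the comments:

    #define PGRM_BATCH 16   /* the batch size named in the commit */

    static void
    km_pgremove_sketch(vaddr_t start, vaddr_t end)
    {
        struct vm_page *pg[PGRM_BATCH];
        paddr_t pa;
        vaddr_t va, batch_start = start;
        int npg = 0;

        for (va = start; va < end; va += PAGE_SIZE) {
            if (!pmap_extract(pmap_kernel(), va, &pa))
                continue;               /* hole in a sparse mapping */
            pg[npg++] = PHYS_TO_VM_PAGE(pa);
            if (npg == PGRM_BATCH) {
                /* Unmap the whole span covered so far, not just
                 * PGRM_BATCH pages, so holes in a sparse mapping
                 * cannot leave stale entries behind. */
                pmap_kremove(batch_start, va + PAGE_SIZE - batch_start);
                pmap_update(pmap_kernel());
                /* Only now is it safe to give the pages back. */
                while (npg > 0)
                    uvm_pagefree(pg[--npg]);
                batch_start = va + PAGE_SIZE;
            }
        }
        if (npg > 0) {
            pmap_kremove(batch_start, end - batch_start);
            pmap_update(pmap_kernel());
            while (npg > 0)
                uvm_pagefree(pg[--npg]);
        }
    }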
rmind
a4c86c1b45 Remove VM_MAP_INTRSAFE and related code. Not used since the "kmem changes". 2012-02-19 00:05:55 +00:00
para
4c23b30cff proper sizing of kmem_arena on different ports
PR port-i386/45946: Kernel locks up in VMEM system
2012-02-10 17:35:47 +00:00
para
5f319369a0 improve sizing of kmem_arena now that more allocations are made from it
don't enforce limits if not required

ok: riz@
2012-02-04 17:56:16 +00:00
matt
a6a91141ef Always allocate the kmem region. Add UVMHIST support. Approved by releng. 2012-02-03 19:25:07 +00:00
para
3eed4a3cfa - bring kmeminit_nkmempages back and revert pmaps that called this early
- use nkmempages to scale the kmem_arena
- reduce the diff to before the kmem/vmem change
   (NKMEMPAGES_MAX_DEFAULT will need adjusting on some archs)
2012-02-02 18:59:44 +00:00
para
e253ed8e30 allocate uareas and buffers from kernel_map again
add code to drain pools if kmem_arena runs out of space
2012-02-01 23:43:49 +00:00
matt
4444dd3f00 Use the right UVM_xxx_COLORMATCH flag (even though both use the same value). 2012-02-01 02:22:27 +00:00
matt
bf4e528611 Deal with the case when kmembase == kmemstart.
Use KASSERTMSG for a few KASSERTs.
Make sure to match the color of the VA when we are allocating a physical page.
2012-01-31 00:30:52 +00:00
para
cf91957c71 size kmem_arena more sanely for small memory machines 2012-01-29 12:37:01 +00:00
para
e62ee4d475 extend vmem(9) to be able to allocate resources for its own needs.
simplify uvm_map handling (no special kernel entries anymore, no relocking)
make malloc(9) a thin wrapper around kmem(9)
(with private interface for interrupt safety reasons)

releng@ acknowledged
2012-01-27 19:48:38 +00:00
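A hedged sketch of the wrapper idea, ignoring the size bookkeeping that free(9) needs and assuming the private kmem_intr_* entry points alluded to above:

    void *
    malloc_sketch(unsigned long size, int flags)
    {
        /* Interrupt-safe private entry point; sleep only if allowed. */
        return kmem_intr_alloc(size,
            (flags & M_NOWAIT) ? KM_NOSLEEP : KM_SLEEP);
    }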
matt
207bff18bc Forward some UVM changes from matt-nb5-mips64. Add UVM_KMF_COLORMATCH flag.
When uvm_map gets passed UVM_FLAG_COLORMATCH, the align argument contains
the color of the starting address to be allocated (0..colormask).
When uvm_km_alloc is passed UVM_KMF_COLORMATCH (which can only be used with
UVM_KMF_VAONLY), the align argument contains the color of the starting address
to be allocated.
Change uvm_pagermapin to use this.  When mapping user pages in the kernel,
if colormatch is used with the color of the starting user page then the kernel
mapping will be congruent with the existing user mappings.
2011-09-01 06:40:28 +00:00
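A hedged usage sketch: with UVM_KMF_COLORMATCH the align argument carries a color, not an alignment:

    static vaddr_t
    pagermapin_va_sketch(vsize_t size, u_int color)
    {
        /* color is interpreted as 0..colormask, and the flag is only
         * legal together with UVM_KMF_VAONLY. */
        return uvm_km_alloc(kernel_map, size, color,
            UVM_KMF_VAONLY | UVM_KMF_COLORMATCH | UVM_KMF_WAITVA);
    }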
yamt
b456aa6a56 - fix a use-after-free bug in uvm_km_free.
(after uvm_km_pgremove frees pages, the following pmap_remove touches them.)
- acquire the object lock for operations on pmap_kernel as they can actually
  race with P->V operations, e.g. the pagedaemon.
2011-07-05 14:03:06 +00:00
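A hedged sketch of the corrected ordering; the uvm_km_pgremove() signature is assumed for illustration:

    /* Remove the kernel mappings first... */
    pmap_remove(pmap_kernel(), va, va + size);
    pmap_update(pmap_kernel());
    /* ...then free the backing pages, so pmap_remove() never touches
     * vm_page structures that have already been freed. */
    uvm_km_pgremove(va, va + size);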
rmind
e225b7bd09 Welcome to 5.99.53! Merge rmind-uvmplock branch:
- Reorganize locking in UVM and provide extra serialisation for pmap(9).
  New lock order: [vmpage-owner-lock] -> pmap-lock.

- Simplify locking in some pmap(9) modules by removing P->V locking.

- Use lock object on vmobjlock (and thus vnode_t::v_interlock) to share
  the locks amongst UVM objects where necessary (tmpfs, layerfs, unionfs).

- Rewrite and optimise x86 TLB shootdown code, make it simpler and cleaner.
  Add TLBSTATS option for x86 to collect statistics about TLB shootdowns.

- Unify /dev/mem et al in MI code and provide required locking (removes
  kernel-lock on some ports).  Also, avoid cache-aliasing issues.

Thanks to Andrew Doran and Joerg Sonnenberger, as their initial patches
formed the core changes of this branch.
2011-06-12 03:35:36 +00:00
chuck
40ec801a13 update license clauses on my code to match the new-style BSD licenses.
based on second diff that rmind@ sent me.

no functional change with this commit.
2011-02-02 15:25:27 +00:00
matt
4d8e8a7e36 Add better color matching when selecting free pages. KM pages will now be allocated
so that VA and PA have the same color.  On a page fault, choose a physical
page that has the same color as the virtual address.

When allocating kernel memory pages, allow the MD to specify a preferred
VM_FREELIST from which to choose pages.  For machines with large amounts
of memory (> 4GB), allow kernel memory to come from <4GB to reduce the amount
of bounce buffering needed with 32bit DMA devices.
2011-01-04 08:26:33 +00:00
cegger
dfa8eb2f81 Move PMAP_KMPAGE to be used in pmap_kenter_pa flags argument.
'Looks good to me' gimpy@
2010-05-14 05:02:05 +00:00
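A hedged call sketch; the flags argument itself is introduced in the 2009-11-07 entry below:

    /* Tell the pmap this mapping backs the kernel memory allocator,
     * so it can skip reference counting and other bookkeeping. */
    pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE, PMAP_KMPAGE);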
joerg
d621e29eca Remove separate mb_map. nmbclusters is now computed at boot time based
on the amount of physical memory and limited by NMBCLUSTERS if present.
Architectures without direct mapping also limit it based on the kmem_map
size, which is used as backing store. On i386 and ARM, the maximum KVA
used for mbuf clusters is limited to 64MB by default.

The old default limits and limits based on GATEWAY have been removed.
key_registered_sb_max is hard-wired to a value derived from 2048
clusters.
2010-02-08 19:02:25 +00:00
cegger
9480c51b04 Add a flags argument to pmap_kenter_pa(9).
Patch shown on tech-kern@ http://mail-index.netbsd.org/tech-kern/2009/11/04/msg006434.html
No objections.
2009-11-07 07:27:40 +00:00
ad
b5413f0358 It's easier for kernel reserve pages to be consumed because the pagedaemon
serves as less of a barrier these days. Restrict provision of kernel reserve
pages to kmem and one of these cases:

- doing a NOWAIT allocation
- caller is a realtime thread
- caller is a kernel thread
- explicitly requested, for example by the pmap
2008-12-13 11:34:43 +00:00
ad
a371a02d26 PR port-amd64/32816 amd64 can not load lkms
Change some assertions to partially allow for VM_MAP_IS_KERNEL(map) where
map is outside the range of kernel_map.
2008-12-01 10:54:57 +00:00
pooka
c7b9619f12 the most karmic commit of all: fix tyop in comment 2008-08-04 13:37:33 +00:00
matt
ad65eb54bc Add PMAP_KMPAGE flag for pmap_kenter_pa. This allows pmaps to know that
the page being entered is being used for the kernel memory allocator.  Such pages
should have no references and don't need bookkeeping.
2008-07-16 00:11:27 +00:00
yamt
d886e611bd remove a redundant pmap_update and add a comment instead. 2008-03-24 08:52:55 +00:00
chris
855792073c Add some more missing pmap_update()s following pmap_kremove()s. 2008-02-23 17:27:58 +00:00
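The pattern being completed, as a minimal sketch:

    pmap_kremove(va, size);         /* queue removal of the mappings */
    pmap_update(pmap_kernel());     /* ...and make it take effect */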
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad
6e948e8304 Fix DEBUG build. 2007-07-21 20:52:59 +00:00
ad
4688843d2b Merge unobtrusive locking changes from the vmlocking branch. 2007-07-21 19:21:53 +00:00
ad
59d979c5f1 Pass an ipl argument to pool_init/POOL_INIT to be used when initializing
the pool's lock.
2007-03-12 18:18:22 +00:00
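A hedged sketch of the extended call; the pool and its item type are hypothetical:

    struct foo { int f_dummy; };            /* hypothetical item type */
    static struct pool foo_pool;

    void
    foo_pool_init_sketch(void)
    {
        pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0,
            "foopl", &pool_allocator_nointr,
            IPL_NONE);          /* new: the IPL used to init the pool's lock */
    }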
thorpej
712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
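For illustration:

    boolean_t old_style = FALSE;    /* Mach-derived type being retired */
    bool new_style = true;          /* C99 bool (via <sys/types.h> in the kernel) */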
yamt
1a7bc55dcc remove some __unused from function parameters. 2006-11-01 10:17:58 +00:00
uwe
bb003f9487 More __unused (in cpp conditionals not touched by i386). 2006-10-12 21:35:00 +00:00
christos
4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00
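A minimal example of the annotation; the handler is hypothetical:

    /* The argument is intentionally unused; __unused silences the
     * warning for just this parameter. */
    static int
    foo_intr(void *arg __unused)
    {
        return 1;       /* claimed */
    }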
drochner
ef8848c74a Introduce a UVM_KMF_EXEC flag for uvm_km_alloc() which enforces an
executable mapping. Up to now, only R+W was requested from pmap_kenter_pa.
On most CPUs, we get an executable mapping anyway, due to lack of
hardware support or due to laziness in the pmap implementation. Only
alpha obeys VM_PROT_EXECUTE, afaics.
2006-07-05 14:26:42 +00:00
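A hedged usage sketch:

    static vaddr_t
    alloc_exec_sketch(vsize_t size)
    {
        /* Ask for wired, executable kernel memory explicitly instead
         * of relying on a pmap that ignores VM_PROT_EXECUTE. */
        return uvm_km_alloc(kernel_map, size, 0,
            UVM_KMF_WIRED | UVM_KMF_EXEC | UVM_KMF_WAITVA);
    }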