Commit Graph

172 Commits

rmind
5940ea9676 uvm_pageidlezero: use a try-lock so as not to hold uvm_fpageqlock, which may
be wanted by other CPUs.  Reduces lock contention in some workloads
on many-CPU (8+) systems.

Tested by tls@.
2011-04-01 00:47:11 +00:00
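
A minimal sketch of the try-lock idea described above, assuming NetBSD's mutex(9) API; the two idlezero_* helpers are placeholders, not the real uvm_pageidlezero() internals:

    #include <sys/mutex.h>

    struct vm_page;
    extern kmutex_t uvm_fpageqlock;                 /* free page queue lock */
    struct vm_page *idlezero_pick_page(void);       /* placeholder helper */
    void idlezero_fill_page(struct vm_page *);      /* placeholder helper */

    static void
    idlezero_one(void)
    {
            struct vm_page *pg;

            /*
             * Back off instead of waiting: if another CPU wants the free
             * page queue lock, idle zeroing is not worth contending for it.
             */
            if (!mutex_tryenter(&uvm_fpageqlock))
                    return;
            pg = idlezero_pick_page();
            mutex_exit(&uvm_fpageqlock);
            if (pg != NULL)
                    idlezero_fill_page(pg);
    }
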
chuck
beb929a933 Update license clauses on my code to match the new-style BSD licenses,
based on a diff that rmind@ sent me.

no functional change with this commit.
2011-02-02 17:53:41 +00:00
chuck
40ec801a13 Update license clauses on my code to match the new-style BSD licenses,
based on a second diff that rmind@ sent me.

no functional change with this commit.
2011-02-02 15:25:27 +00:00
matt
4d8e8a7e36 Add better color matching when selecting free pages.  KM pages will now be
allocated so that VA and PA have the same color.  On a page fault, choose a
physical page that has the same color as the virtual address.

When allocating kernel memory pages, allow the MD code to specify a preferred
VM_FREELIST from which to choose pages.  For machines with large amounts
of memory (> 4GB), this lets all kernel memory come from below 4GB, reducing
the amount of bounce buffering needed with 32-bit DMA devices.
2011-01-04 08:26:33 +00:00
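
Here "color" means the low bits of the page frame number that select a cache bin; a hedged illustration of the matching rule (the helper below is made up, and uvmexp.colormask is assumed to hold the mask of valid colors):

    /* Illustrative only: the cache color of an address. */
    static inline u_int
    addr_color(vaddr_t va, u_int colormask)
    {
            return (u_int)(va >> PAGE_SHIFT) & colormask;
    }

    /*
     * Matching rule: on a fault at va, prefer a free page whose color,
     * atop(VM_PAGE_TO_PHYS(pg)) & colormask, equals addr_color(va, colormask),
     * so that VA and PA index the same cache bins.
     */
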
matt
9898b7c4fd When panicking due to a non-power-of-2 page size, include the page size in
the panic message.
2010-12-11 22:34:03 +00:00
uebayasi
e3b768e416 Revert the vm_physseg allocation changes.  A report says that they cause
panics when used with mplayer under heavy load.
2010-11-25 04:45:30 +00:00
uebayasi
2bc2c4def2 ... and another. 2010-11-14 15:18:07 +00:00
uebayasi
12d4db54c0 Fix build breakage caused by a last-minute change. 2010-11-14 15:16:53 +00:00
uebayasi
26dd1e598f Be a little more friendly to dynamic physical segment registration:
maintain an array of pointers to struct vm_physseg, instead of an array of
structs, so that the VM subsystem can safely keep pointers to them.  Pointers
to this struct will replace raw paddr_t usage in the future.

Dynamic removal is not supported yet.

Only MD data structure changes, no kernel bump needed.

Tested on i386, amd64, powerpc/ibm40x, arm11.
2010-11-14 15:06:34 +00:00
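
The data-structure change above, sketched with simplified declarations (array names and sizes are illustrative):

    /* Before: a contiguous array of structs.  Entries can move when the
     * table is re-sorted or compacted, so callers must not keep pointers
     * into it and instead pass around indices or raw paddr_t values. */
    struct vm_physseg vm_physmem_structs[VM_PHYSSEG_MAX];

    /* After: an array of pointers.  Each segment is allocated separately,
     * so a pointer to a segment stays valid and can safely replace raw
     * paddr_t usage elsewhere in the VM subsystem. */
    struct vm_physseg *vm_physmem_ptrs[VM_PHYSSEG_MAX];
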
uebayasi
77d80f38cd Abstraction fix; move the physical address -> per-page metadata (struct
vm_page *) "reverse" lookup code from uvm_page.h to uvm_page.c, to
help the migration away from doing such lookups.

Likewise, move the per-page metadata (struct vm_page *) -> physical
address "forward" conversion code into *.c too.  This is called
only by low-level VM and MD code.
2010-11-12 05:23:41 +00:00
uebayasi
aa803dbb9d Abstraction fix; move physical address -> physical segment "reverse"
lookup code from uvm_page.h to uvm_page.c.

This code is used by some pmaps to look up per-page state (PV) from
per-segment metadata (struct vm_physseg).  This is not needed if
UVM looks up the physical segment once in the fault handler and then
passes it directly to the pmap.  This change helps the transition to that model.

The only users of vm_physseg_find() are pmap_motorola.c and
powerpc/ibm4xx/pmap.c.

Tested By:	Compiling and running powerpc/ibm4xx/pmap.c
		(evbppc/conf/OPENBLOCKS266)
2010-11-12 03:21:04 +00:00
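
For context, vm_physseg_find() maps a page frame number to its segment (and offset within it); a simplified linear version of that lookup, with field names following the struct-array layout of the time (the real function may binary-search depending on VM_PHYSSEG_STRAT):

    /* Simplified sketch of the physical-address -> segment lookup. */
    static int
    physseg_find_linear(paddr_t pframe, int *offp)
    {
            int lcv;

            for (lcv = 0; lcv < vm_nphysseg; lcv++) {
                    if (pframe >= vm_physmem[lcv].start &&
                        pframe < vm_physmem[lcv].end) {
                            if (offp != NULL)
                                    *offp = pframe - vm_physmem[lcv].start;
                            return lcv;         /* segment index */
                    }
            }
            return -1;                          /* not a managed page */
    }
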
uebayasi
7e090d7604 C style; make a sentinel pointer have an exclusive value; no
functional changes.
2010-11-11 15:59:27 +00:00
uebayasi
4e73439bf8 Typo in a comment. 2010-11-11 15:51:05 +00:00
uebayasi
2d8b28a059 Minor clean up. 2010-11-11 15:47:43 +00:00
uebayasi
e2e0c29553 Minor clean up. 2010-11-11 14:50:54 +00:00
uebayasi
41e5df6d3e Remove the incomplete, never-working dynamic run-time memory registration
(uvm_page_physload(9)).  This functionality will be re-added later.
2010-11-06 15:42:43 +00:00
rmind
879d5dfb5e Fixes/improvements to RB-tree implementation:
1. Fix inverted node order, so that a negative value from the comparison operator
   represents the lower (left) node, and a positive value the higher (right) node.
2. Add an argument (i.e. "context"), passed to comparison operators.
3. Change rb_tree_insert_node() to return a node - either inserted one or
   already existing one.
4. Amend the interface to manipulate the actual object, instead of the
   rb_node (in a similar way as Patricia-tree interface does).
5. Update all RB-tree users accordingly.

XXX: Perhaps rename rb.h to rbtree.h, since cleaning-up..

1-3 address the PR/43488 by Jeremy Huddleston.

Passes RB-tree regression tests.
Reviewed by: matt@, christos@
2010-09-24 22:51:50 +00:00
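
A hedged sketch of what items 1-4 look like at a call site, following the reworked interface (the header was still <sys/rb.h> at the time of this commit and was later renamed <sys/rbtree.h>; the mynode type is made up):

    #include <sys/rbtree.h>

    struct mynode {
            struct rb_node  mn_rbnode;      /* embedded tree linkage */
            int             mn_key;
    };

    /* Item 2: comparators now receive a context argument.
     * Item 1: negative return means "lower/left", positive "higher/right". */
    static signed int
    mynode_compare_nodes(void *ctx, const void *a, const void *b)
    {
            const struct mynode *na = a, *nb = b;

            if (na->mn_key < nb->mn_key)
                    return -1;
            return na->mn_key > nb->mn_key;
    }

    static signed int
    mynode_compare_key(void *ctx, const void *n, const void *key)
    {
            const struct mynode *mn = n;
            const int k = *(const int *)key;

            if (mn->mn_key < k)
                    return -1;
            return mn->mn_key > k;
    }

    static const rb_tree_ops_t mynode_rb_ops = {
            .rbto_compare_nodes = mynode_compare_nodes,
            .rbto_compare_key   = mynode_compare_key,
            .rbto_node_offset   = offsetof(struct mynode, mn_rbnode),
            .rbto_context       = NULL,
    };

    /* Items 3 and 4: the interface manipulates the object itself, and
     * rb_tree_insert_node() returns either the inserted node or the
     * already-existing node with an equal key:
     *
     *     rb_tree_init(&tree, &mynode_rb_ops);
     *     struct mynode *ret = rb_tree_insert_node(&tree, mn);
     *     if (ret != mn) { ... a duplicate key was already present ... }
     */
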
ad
b746352090 Reduce memory spent on bookkeeping for large values of MAXCPUS. 2010-04-25 15:54:14 +00:00
jym
7bf36164a7 - Use ctob() instead of ptoa() to obtain physical addresses from frame
numbers.  Using ptoa() will cast to vaddr_t, which might not be adequate
for architectures where sizeof(paddr_t) > sizeof(vaddr_t) (like i386 PAE).

- A small fix in the AGP heuristics to avoid masking high-order bits on
systems with more than 4GB of memory.

Reviewed by bouyer@.

See also http://mail-index.netbsd.org/tech-kern/2010/02/22/msg007373.html
2010-02-24 00:01:11 +00:00
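
A worked example of the truncation being avoided, assuming 4 KiB pages, 32-bit vaddr_t and 64-bit paddr_t as on i386 PAE (the macro bodies shown are typical expansions, not quoted from any particular machine/param.h):

    /*
     * Typical (illustrative) expansions:
     *     #define ptoa(x)   ((vaddr_t)(x) << PAGE_SHIFT)   -- 32-bit result
     *     #define ctob(x)   ((x) << PAGE_SHIFT)            -- keeps operand width
     *
     * Frame number 0x100000 is the page at physical address 4 GiB:
     *     ptoa(0x100000)            == 0x00000000      (truncated to 32 bits)
     *     ctob((paddr_t)0x100000)   == 0x100000000     (correct 64-bit result)
     */
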
uebayasi
5620716c87 uvm_pageinsert, uvm_pageremove: Pass the uobj, to/from which a pg is
inserted/removed, as an argument, because looking up the back-reference from
pg is redundant.  No functional changes.
2010-01-27 03:56:33 +00:00
cegger
9480c51b04 Add a flags argument to pmap_kenter_pa(9).
Patch shown on tech-kern@: http://mail-index.netbsd.org/tech-kern/2009/11/04/msg006434.html
No objections.
2009-11-07 07:27:40 +00:00
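
After this change the call takes a fourth argument; a minimal usage sketch (va and pa are assumed to be a kernel virtual address and a physical address already in hand, and 0 means no special flags):

    #include <uvm/uvm_extern.h>

    static void
    map_one_page(vaddr_t va, paddr_t pa)
    {
            /* Old form: pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE); */
            pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE, 0);
            pmap_update(pmap_kernel());
    }
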
thorpej
21d14bd56b Move uvm_page-related DDB hooks into uvm_page.c. 2009-08-18 19:08:39 +00:00
thorpej
97a2657a66 Add a real API for testing if a page is a managed page, and adjust callers
to stop relying on vm_physseg_find() for this purpose.
2009-08-18 18:06:53 +00:00
matt
4efef68d70 Fix brain fart: physmem was int, not long. 2009-08-11 16:27:08 +00:00
matt
6328246ec5 Add back declaration of physmem but use the existing type (long). 2009-08-11 16:07:24 +00:00
haad
ced21e5799 Remove a stray physmem definition as uintptr_t that came in from another patch. 2009-08-11 09:16:53 +00:00
haad
b760fc6e71 Add uvm_reclaim_hooks support for reclaiming kernel KVA space and memory.
This is currently used only by ZFS, where a uvm_reclaim hook is added by the ARC cache.

OKed by ad@.
2009-08-10 23:17:29 +00:00
abs
fbcfe9c7af Clarify free_list usage in uvm_page_physload() regarding faster/slower RAM.
Slower RAM should be assigned a higher free_list id.
No functional change to code, just comments and the manpage.
2009-03-12 12:55:16 +00:00
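
A hedged example of the documented convention (addresses are made up; arguments to uvm_page_physload(9) are page frame numbers plus the free_list id):

    #include <sys/param.h>
    #include <uvm/uvm_extern.h>

    static void
    load_memory_segments(void)
    {
            /* Fast RAM: lower free_list id, preferred by the page allocator. */
            uvm_page_physload(atop(0x00000000), atop(0x10000000),
                atop(0x00000000), atop(0x10000000), VM_FREELIST_DEFAULT);

            /* Slower RAM: a higher free_list id, so it is used only after the
             * faster memory is exhausted (the "+ 1" id is illustrative). */
            uvm_page_physload(atop(0x10000000), atop(0x20000000),
                atop(0x10000000), atop(0x20000000), VM_FREELIST_DEFAULT + 1);
    }
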
drochner
605d3094c4 Oops - missed a case with PMAP_PAGEIDLEZERO where the MD code aborts the
zeroing process; from Nicolas Joly.
2009-02-27 23:29:08 +00:00
drochner
e66cf328ab - Fix two conditions where PQ_FREE was still/already set while the page
was no longer/not yet on the freelist and uvm_fpageqlock was not held.
- Clear PQ_FREE while the page is being worked on by pageidlezero.
This prevents the DMA memory allocator (pglistalloc) from grabbing a page
which is not on the freelist, leading to a diagnostic panic (with DEBUG)
or freelist corruption (mostly on X server activation after a VT
switch or suspend/resume, because this can allocate megabytes of AGP
memory).
This might fix PR port-i386/38989 by Alan Barrett (in case this was
a multiprocessor).
2009-02-26 18:18:14 +00:00
yamt
20c094eb67 uvm_page_unbusy: add an assertion 2009-01-16 07:01:28 +00:00
ad
b5413f0358 It's easier for kernel reserve pages to be consumed because the pagedaemon
serves as less of a barrier these days. Restrict provision of kernel reserve
pages to kmem and one of these cases:

- doing a NOWAIT allocation
- caller is a realtime thread
- caller is a kernel thread
- explicitly requested, for example by the pmap
2008-12-13 11:34:43 +00:00
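
A sketch of that policy as a single predicate (the first two arguments stand in for allocation flags the real code inspects; LW_SYSTEM is the real lwp flag marking kernel threads, and the realtime check is left as a placeholder):

    #include <sys/lwp.h>

    bool caller_is_realtime(struct lwp *);          /* placeholder */

    static bool
    may_dip_into_reserve(bool nowait_alloc, bool explicitly_requested)
    {
            struct lwp *l = curlwp;

            return nowait_alloc ||
                explicitly_requested ||             /* e.g. by the pmap */
                (l->l_flag & LW_SYSTEM) != 0 ||     /* kernel thread */
                caller_is_realtime(l);              /* realtime caller */
    }
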
ad
a9c686e81a Scale the number of kernel reserve pages by the number of CPUs. 2008-07-04 10:56:59 +00:00
ad
16a991e560 uvm_pageidlezero: fix a broken test which made it give up too easily. 2008-07-02 17:47:53 +00:00
matt
1906aa3e59 Switch from KASSERT to CTASSERT for those asserts testing sizes of types. 2008-07-02 14:47:34 +00:00
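
The distinction: KASSERT() is checked at run time (and only in DIAGNOSTIC kernels), while CTASSERT() makes the build fail immediately; the size expression below is illustrative:

    /* Old style, inside a function (only fires when that code runs):
     *     KASSERT(sizeof(struct vm_page) % sizeof(long) == 0);
     * New style, at file scope (the build fails immediately if untrue): */
    CTASSERT(sizeof(struct vm_page) % sizeof(long) == 0);
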
matt
5a4f0c6b2b Change tree op members/typedefs to rbto_compare_* from rb_compare_* 2008-06-30 20:14:09 +00:00
yamt
52d8b786fc - uvm_pagereplace: don't try to insert multiple pages with the same offset
into uvm_object rbtree.
- inline static -> static inline
2008-06-17 02:30:57 +00:00
he
7151c31712 Delete what appears to be a spurious assignment to an undeclared
'cpu' variable added in revision 1.133.  Restores buildability for this file.
2008-06-05 08:16:01 +00:00
ad
7a34cb95f0 Replace the global vm_page hash with a per vm_object rbtree.
Proposed on tech-kern@.
2008-06-04 15:06:04 +00:00
ad
cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
  from the local CPU.  If none are available, take from the global list as
  is done now.  Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
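
A heavily simplified sketch of the free-side half of that scheme; the struct and function names are placeholders, and pg->pageq.queue refers to the TAILQ half of the union this commit introduces (member name assumed):

    #include <sys/mutex.h>
    #include <uvm/uvm_page.h>

    struct cpu_freelist {
            kmutex_t        cfl_lock;
            struct pglist   cfl_pages;      /* this CPU's free pages */
    };

    /* Freeing: put the page on the local CPU's list (it is also accounted
     * globally, so another CPU can fall back to it when its own list and
     * the global list are empty). */
    static void
    page_free_local(struct cpu_freelist *cfl, struct vm_page *pg)
    {
            mutex_spin_enter(&cfl->cfl_lock);
            TAILQ_INSERT_HEAD(&cfl->cfl_pages, pg, pageq.queue);
            mutex_spin_exit(&cfl->cfl_lock);
    }
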
ad
6dd8a2b97b uvm_pageidlezero:
- Use high and low water marks to try and reduce power consumption.
- Do trylock on uvm_fpageqlock, and bail if we can't get it.
- Only run on one CPU at a time.
2008-06-02 11:11:14 +00:00
yamt
d886e611bd remove a redundant pmap_update and add a comment instead. 2008-03-24 08:52:55 +00:00
ad
0fe23ea49b Assert uvm_fpageqlock is held in a few more places. 2008-02-27 14:24:24 +00:00
chris
855792073c Add some more missing pmap_update()s following pmap_kremove()s. 2008-02-23 17:27:58 +00:00
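
The rule being enforced: pmap_kremove(9) may defer the actual TLB/MMU update until pmap_update(9) is called, so every kremove needs a matching update before the mappings can be assumed gone; a minimal sketch:

    static void
    unmap_kva(vaddr_t va, vsize_t len)
    {
            pmap_kremove(va, len);          /* removal may be deferred */
            pmap_update(pmap_kernel());     /* now the mappings are really gone */
    }
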
yamt
729c3a185b unwrap short lines. 2008-01-13 16:46:47 +00:00
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad
e9e11b98df Use atomics to maintain uvmexp.{anon,exec,file}pages. 2007-11-29 18:07:11 +00:00
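
A sketch of the counter-update style this switches to, using the atomic_ops(3) primitives on a stand-in counter (the real fields are uvmexp.anonpages, uvmexp.execpages and uvmexp.filepages):

    #include <sys/atomic.h>

    /* Illustrative stand-in for one of the uvmexp counters. */
    static volatile unsigned int file_pages;

    static void
    page_became_file_page(void)
    {
            /* No lock needed: the increment is a single atomic RMW. */
            atomic_inc_uint(&file_pages);
    }

    static void
    page_stopped_being_file_page(void)
    {
            atomic_dec_uint(&file_pages);
    }
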
ad
82f39f6568 Fix merge error. 2007-10-08 14:14:55 +00:00
ad
4de14a3313 Pad the hashlocks to 32-byte boundaries. 2007-10-08 14:06:15 +00:00
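
The padding trick in question, sketched with a made-up wrapper: each lock gets its own 32-byte slot so that adjacent locks in the array are less likely to bounce the same cache line between CPUs (the hash these locks guarded was later replaced by the per-object rbtree above):

    #include <sys/types.h>
    #include <sys/mutex.h>

    /* Illustrative wrapper, not the actual declaration. */
    union padded_hashlock {
            kmutex_t        lock;
            uint8_t         pad[32];
    } __aligned(32);
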
ad
4688843d2b Merge unobtrusive locking changes from the vmlocking branch. 2007-07-21 19:21:53 +00:00