Commit Graph

67 Commits

Author SHA1 Message Date
uebayasi
26dd1e598f Be a little more friendly to dynamic physical segment registration.
Maintain an array of pointers to struct vm_physseg, instead of an
array of structs, so that the VM subsystem can safely hold a pointer
to a segment.  Pointers to this struct will replace raw paddr_t usage
in the future.

Dynamic removal is not supported yet.

Only MD data structure changes, no kernel bump needed.

Tested on i386, amd64, powerpc/ibm40x, arm11.
2010-11-14 15:06:34 +00:00
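
A minimal userland sketch of that idea, with invented names (seg,
seg_table, seg_register) rather than the NetBSD ones: because the table
holds pointers, a cached struct seg * stays valid even when the table
itself is regrown to register another segment at run time.

    /* Sketch only: the pointer table can grow, cached segment pointers survive. */
    #include <stdio.h>
    #include <stdlib.h>

    struct seg {
        unsigned long start;            /* first page frame number */
        unsigned long end;              /* last page frame number + 1 */
    };

    static struct seg **seg_table;      /* array of pointers, may be regrown */
    static int seg_count;

    /* Register a new segment; only the pointer table moves, not the segs. */
    static struct seg *
    seg_register(unsigned long start, unsigned long end)
    {
        struct seg *s = malloc(sizeof(*s));
        struct seg **newtab;

        if (s == NULL)
            return NULL;
        s->start = start;
        s->end = end;
        newtab = realloc(seg_table, (seg_count + 1) * sizeof(*newtab));
        if (newtab == NULL) {
            free(s);
            return NULL;
        }
        seg_table = newtab;
        seg_table[seg_count++] = s;
        return s;
    }

    int
    main(void)
    {
        struct seg *cached = seg_register(0x100, 0x200);

        if (cached == NULL)
            return 1;
        /* Growing the table does not invalidate the cached pointer. */
        seg_register(0x300, 0x400);
        printf("cached segment: %#lx-%#lx\n", cached->start, cached->end);
        return 0;
    }
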
uebayasi
4f1dd4067f Put VM_PAGE_TO_MD() definition in one place. No functional changes. 2010-11-12 07:59:24 +00:00
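
A sketch of the usual shape of such a definition (the member and field
names here are assumptions, not necessarily the real layout): the
machine-dependent data is embedded in struct vm_page, and the single
macro hands back a pointer to it so callers never spell out the member.

    /* Sketch; the struct vm_page_md contents are port-specific. */
    struct vm_page_md {
        int pvh_attrs;                  /* illustrative MD field */
    };

    struct vm_page {
        /* ... machine-independent fields ... */
        struct vm_page_md mdpage;       /* per-page machine-dependent data */
    };

    /* One central definition; callers never name the member directly. */
    #define VM_PAGE_TO_MD(pg)   (&(pg)->mdpage)
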
uebayasi
77d80f38cd Abstraction fix; move the physical address -> per-page metadata
(struct vm_page *) "reverse" lookup code from uvm_page.h to
uvm_page.c, to help the migration away from doing such lookups.

Likewise move the per-page metadata (struct vm_page *) -> physical
address "forward" conversion code into *.c too.  This is called
only by low-layer VM and MD code.
2010-11-12 05:23:41 +00:00
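
A userland sketch of the two conversions, with invented names
(phys_seg, phys_to_page, page_to_phys) and an assumed 4 KiB page size;
it shows only the arithmetic, not the real uvm_page.c code.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12               /* assumed 4 KiB pages */

    struct vm_page { int dummy; };

    struct phys_seg {
        uintptr_t start;                /* first page frame number */
        uintptr_t end;                  /* last page frame number + 1 */
        struct vm_page *pgs;            /* vm_page[] for this segment */
    };

    /* "Reverse": physical address -> per-page metadata. */
    static struct vm_page *
    phys_to_page(const struct phys_seg *seg, uintptr_t pa)
    {
        uintptr_t pf = pa >> PAGE_SHIFT;

        if (pf < seg->start || pf >= seg->end)
            return NULL;
        return &seg->pgs[pf - seg->start];
    }

    /* "Forward": per-page metadata -> physical address. */
    static uintptr_t
    page_to_phys(const struct phys_seg *seg, const struct vm_page *pg)
    {
        return (seg->start + (uintptr_t)(pg - seg->pgs)) << PAGE_SHIFT;
    }

    int
    main(void)
    {
        struct vm_page pages[16];
        struct phys_seg seg = { .start = 0x100, .end = 0x110, .pgs = pages };
        struct vm_page *pg = phys_to_page(&seg, (uintptr_t)0x103 << PAGE_SHIFT);

        printf("round trip: %#lx\n", (unsigned long)page_to_phys(&seg, pg));
        return 0;
    }
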
uebayasi
aa803dbb9d Abstraction fix; move physical address -> physical segment "reverse"
lookup code from uvm_page.h to uvm_page.c.

This code is used by some pmaps to look up per-page state (PV) from
per-segment metadata (struct vm_physseg).  This is not needed if
UVM looks up the physical segment once in the fault handler and then
passes it directly to pmap.  This change helps the transition to that
model.

The only users of vm_physseg_find() are pmap_motorola.c and
powerpc/ibm4xx/pmap.c.

Tested By:	Compiling and running powerpc/ibm4xx/pmap.c
		(evbppc/conf/OPENBLOCKS266)
2010-11-12 03:21:04 +00:00
uebayasi
04cf143fd6 Use more VM_PHYSMEM_*() accessors. No functional changes. 2010-11-10 09:27:21 +00:00
uebayasi
e367a5a854 Prepare for the vm_physmem[] -> (*vm_physmem)[] migration, so that
physical segments can be changed at run-time.  Pointers are easier to update.
2010-11-10 01:24:46 +00:00
matt
19e6c76b2d Rename rb.h to rbtree.h, as it is more appropriate (c.f. ptree.h). Also
helps find code that hasn't been updated to use the new rbtree API.
2010-09-25 01:42:38 +00:00
hannken
c84e81cad1 Add vm page flag PG_MARKER and use it to tag dummy marker pages
in genfs_do_putpages() and uao_put().
Use 'v_uobj.uo_npages' to check for an empty memq.
Put some assertions where these marker pages may not appear.

Ok: YAMAMOTO Takashi <yamt@netbsd.org>
2010-07-29 10:54:50 +00:00
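
A sketch of the marker idiom this describes, using sys/queue.h lists
and invented names; the real users are genfs_do_putpages() and
uao_put().  The caller allocates a dummy page with PG_MARKER set,
parks it after the current page so the position survives a sleep, and
skips any entries carrying the flag when it resumes.

    #include <sys/queue.h>
    #include <stddef.h>

    #define PG_MARKER 0x01              /* dummy placeholder entry */

    struct page {
        int flags;
        TAILQ_ENTRY(page) listq;
    };
    TAILQ_HEAD(pglist, page);

    /*
     * Advance past "curpg": park the marker after it, possibly sleep,
     * then resume from the marker, skipping other walkers' markers.
     */
    static struct page *
    next_real_page(struct pglist *memq, struct page *curpg, struct page *marker)
    {
        struct page *nextpg;

        TAILQ_INSERT_AFTER(memq, curpg, marker, listq);
        /* ... the real code may unlock the object and sleep here ... */
        nextpg = TAILQ_NEXT(marker, listq);
        TAILQ_REMOVE(memq, marker, listq);

        while (nextpg != NULL && (nextpg->flags & PG_MARKER) != 0)
            nextpg = TAILQ_NEXT(nextpg, listq);
        return nextpg;
    }
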
uebayasi
2903e6d834 __inline -> inline 2010-02-06 12:10:59 +00:00
uebayasi
6f78b42add Make vm_physseg lookup routines take the target vm_physseg. This is for the
coming "managed" device segments.
2010-02-06 02:56:17 +00:00
thorpej
97a2657a66 Add a real API for testing if a page is a managed page, and adjust callers
to stop relying on vm_physseg_find() for this purpose.
2009-08-18 18:06:53 +00:00
yamt
09ff411cf6 - g/c stale function prototypes.
- rename UVM_PAGE_HASH_PENALTY to UVM_PAGE_TREE_PENALTY.
2009-01-16 02:33:14 +00:00
ad
7a34cb95f0 Replace the global vm_page hash with a per vm_object rbtree.
Proposed on tech-kern@.
2008-06-04 15:06:04 +00:00
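
A userland sketch of the structural change: each object owns a tree of
its pages keyed by their offset in that object, instead of one global
hash.  POSIX tsearch()/tfind() stand in here for the kernel's
red-black tree, and the struct names are illustrative.

    #include <search.h>
    #include <stdio.h>

    struct page {
        long offset;                    /* offset within the owning object */
    };

    struct object {
        void *pg_tree;                  /* root of this object's page tree */
    };

    static int
    page_cmp(const void *a, const void *b)
    {
        const struct page *pa = a, *pb = b;

        return (pa->offset > pb->offset) - (pa->offset < pb->offset);
    }

    int
    main(void)
    {
        struct object obj = { .pg_tree = NULL };
        struct page pg = { .offset = 4096 };
        struct page key = { .offset = 4096 };
        struct page **found;

        tsearch(&pg, &obj.pg_tree, page_cmp);   /* insert into this object */
        found = tfind(&key, &obj.pg_tree, page_cmp);
        printf("found page at offset %ld\n", found ? (*found)->offset : -1L);
        return 0;
    }
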
ad
cbbf514e2c - vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can
use both types of list.

- Make page coloring and idle zero state per-CPU.

- Maintain per-CPU page freelists. When freeing, put pages onto the local
  CPU's lists and the global lists. When allocating, prefer to take pages
  from the local CPU. If none are available take from the global list as
  done now. Proposed on tech-kern@.
2008-06-04 12:45:28 +00:00
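
A sketch of the allocation side of the per-CPU free list policy in the
entry above, with invented names and no locking: freeing puts the page
on the freeing CPU's local list, and allocation prefers that list,
falling back to the global one only when the local list is empty.

    #include <sys/queue.h>
    #include <stddef.h>

    struct page {
        SLIST_ENTRY(page) freelink;
    };
    SLIST_HEAD(pagelist, page);

    struct cpu_local {
        struct pagelist pl_free;        /* this CPU's free pages */
    };

    static struct pagelist global_free = SLIST_HEAD_INITIALIZER(global_free);

    /* Free: put the page back on the freeing CPU's local list. */
    static void
    page_free(struct cpu_local *cl, struct page *pg)
    {
        SLIST_INSERT_HEAD(&cl->pl_free, pg, freelink);
    }

    /* Allocate: prefer the local list, fall back to the global one. */
    static struct page *
    page_alloc(struct cpu_local *cl)
    {
        struct pagelist *pl = &cl->pl_free;
        struct page *pg;

        if (SLIST_EMPTY(pl))
            pl = &global_free;
        pg = SLIST_FIRST(pl);
        if (pg != NULL)
            SLIST_REMOVE_HEAD(pl, freelink);
        return pg;
    }
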
ad
6dd8a2b97b uvm_pageidlezero:
- Use high and low water marks to try and reduce power consumption.
- Do trylock on uvm_fpageqlock, and bail if we can't get it.
- Only run on one CPU at a time.
2008-06-02 11:11:14 +00:00
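
One plausible reading of the water-mark logic, as a sketch (the marks,
names and hysteresis are assumptions; the consumer side and the
one-CPU-at-a-time rule are not modeled): take the free-page queue lock
only with a trylock, stop zeroing once enough pages are banked, and do
not resume until the stock falls below the low mark again.

    #include <pthread.h>
    #include <stdbool.h>

    #define ZERO_LOWAT   32             /* assumed low water mark */
    #define ZERO_HIWAT  128             /* assumed high water mark */

    static pthread_mutex_t fpageqlock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int nzeropages;     /* pages already pre-zeroed */
    static bool zeroing_paused;

    /* Stand-in for zeroing one page taken off the free list. */
    static void zero_one_free_page(void) { nzeropages++; }

    static void
    idle_zero(void)
    {
        /* Don't contend with real work: trylock and bail if busy. */
        if (pthread_mutex_trylock(&fpageqlock) != 0)
            return;

        /* Hysteresis: stop at the high mark, resume below the low mark. */
        if (zeroing_paused && nzeropages < ZERO_LOWAT)
            zeroing_paused = false;
        while (!zeroing_paused && nzeropages < ZERO_HIWAT)
            zero_one_free_page();
        if (nzeropages >= ZERO_HIWAT)
            zeroing_paused = true;

        pthread_mutex_unlock(&fpageqlock);
    }
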
matt
41b4feae9a Convert two inlines from old-style-definitions to ansi. 2008-02-27 19:38:57 +00:00
ad
185d25c158 Minor corrections to comments. 2008-02-27 14:23:33 +00:00
ad
4a780c9ae2 Merge vmlocking2 to head. 2008-01-02 11:48:20 +00:00
ad
4688843d2b Merge unobtrusive locking changes from the vmlocking branch. 2007-07-21 19:21:53 +00:00
perseant
9bab95b2ae Track lwp as well as proc owner with UVM_PAGE_TRKOWN 2007-04-14 07:01:33 +00:00
thorpej
712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
yamt
9d3e3eab23 merge yamt-pdpolicy branch.
- separate page replacement policy from the rest of kernel
	- implement an alternative replacement policy
2006-09-15 15:51:12 +00:00
uebayasi
c515049d02 Update comment to match reality (vm_physmemseg -> vm_physseg). 2006-04-06 07:18:23 +00:00
perry
fbae48b901 Change "inline" back to "__inline" in .h files -- C99 is still too
new, and some apps compile things in C89 mode. C89 keywords stay.

As per core@.
2006-02-16 20:17:12 +00:00
yamt
a3af4c1530 remove the following options. no objections on tech-kern@.
UVM_PAGER_INLINE
	UVM_AMAP_INLINE
	UVM_PAGE_INLINE
	UVM_MAP_INLINE
2006-02-11 12:45:07 +00:00
perry
0f0296d88a Remove leading __ from __(const|inline|signed|volatile) -- it is obsolete. 2005-12-24 20:45:08 +00:00
yamt
52f0a62851 read-ahead statistics. 2005-11-29 15:45:28 +00:00
chs
48e4eb59a3 adapt to const changes. 2005-06-04 13:48:35 +00:00
yamt
ffaa06d53c g/c stale declarations of page queues. 2004-10-07 10:56:26 +00:00
yamt
5469c2b7c1 add assertions. 2004-05-12 20:09:50 +00:00
junyoung
325f5482a8 Nuke __P(). 2004-03-24 07:55:01 +00:00
rearnsha
31a79ddeb0 In vm_physseg_find, use u_int for start, len and try when doing a
binary search.  Avoids the need for signed division by 2.  Approved
by thorpej.
2003-11-10 16:13:05 +00:00
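
A sketch of such a search loop with unsigned index arithmetic (segment
layout and names are illustrative, not the real vm_physseg_find): with
u_int operands, len / 2 is a plain right shift, so the compiler emits
no signed-division fixup.

    #include <sys/types.h>              /* u_int */

    struct seg { unsigned long start, end; };   /* [start, end) page frames */

    /* Return the index of the segment containing pframe, or -1. */
    static int
    seg_find(const struct seg *segs, u_int nsegs, unsigned long pframe)
    {
        u_int start, len, try;

        for (start = 0, len = nsegs; len != 0; len = len / 2) {
            try = start + len / 2;      /* unsigned: compiles to a shift */
            if (pframe >= segs[try].start && pframe < segs[try].end)
                return (int)try;
            if (pframe >= segs[try].end) {
                start = try + 1;        /* continue in the upper half */
                len--;
            }
        }
        return -1;
    }
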
yamt
70538d0c22 add a DEBUG check if freed PG_ZERO pages are really zero-filled. 2003-11-03 03:58:28 +00:00
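
A sketch of the kind of DEBUG check meant here (page size and names
are assumed): before a page marked PG_ZERO goes back on the free list,
walk it and assert that every word really is zero.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096              /* assumed page size */

    /* Verify that a supposedly pre-zeroed page is in fact all zeroes. */
    static void
    check_zero_page(const void *va)
    {
        const uint64_t *p = va;
        size_t i;

        for (i = 0; i < PAGE_SIZE / sizeof(*p); i++)
            assert(p[i] == 0 && "freed PG_ZERO page is not zero-filled");
    }
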
thorpej
36da248c07 Back out the following change:
http://mail-index.netbsd.org/source-changes/2003/05/08/0068.html

There were some side-effects that I didn't anticipate, and fixing them
is proving to be more difficult than I thought, so just eject for now.
Maybe one day we can look at this again.

Fixes PR kern/21517.
2003-05-10 21:10:23 +00:00
thorpej
b77900c3c2 Simplify the way the bounds of the managed kernel virtual address
space are advertised to UVM by making virtual_avail and virtual_end
first-class variables exported by UVM.  Machine-dependent code is
responsible for initializing them before main() is called.  Anything
that steals KVA must adjust these variables accordingly.

This reduces the number of instances of this info from 3 to 1, and
simplifies the pmap(9) interface by removing the pmap_virtual_space()
function call, and removing two arguments from pmap_steal_memory().

This also eliminates some kludges such as having to burn kernel_map
entries on space used by the kernel and stolen KVA.

This also eliminates use of VM_{MIN,MAX}_KERNEL_ADDRESS from MI code,
thus giving MD code greater flexibility over the bounds of the managed
kernel virtual address space if a given port's specific platforms can
vary in this regard (this is especially true of the evb* ports).
2003-05-08 18:13:12 +00:00
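
A sketch of the contract described above (virtual_avail and
virtual_end are the names from the entry; vaddr_t and the helpers are
stand-ins): MD startup code sets both bounds before main(), and
anything that steals KVA afterwards bumps virtual_avail so the
advertised range stays accurate.

    #include <stdint.h>

    typedef uintptr_t vaddr_t;          /* stand-in for the kernel type */

    /* Exported by UVM; MD code must initialize these before main(). */
    vaddr_t virtual_avail;
    vaddr_t virtual_end;

    /* Called from MD early startup with platform-specific bounds. */
    static void
    md_set_kva_bounds(vaddr_t lo, vaddr_t hi)
    {
        virtual_avail = lo;
        virtual_end = hi;
    }

    /* Anything stealing KVA must keep the advertised bounds honest. */
    static vaddr_t
    steal_kva(vaddr_t size)
    {
        vaddr_t va = virtual_avail;

        virtual_avail += size;
        return va;
    }
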
enami
b7ac697dae s/than than/than/. 2002-11-08 02:05:16 +00:00
chs
64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to eg. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages() (sketched below).
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
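
A sketch of the "genfs_node must be first" layout rule from the entry
above (every name except genfs_node and genfs_{get,put}pages is
invented): because the generic node is the first member of the
filesystem-specific node, the generic code can treat a pointer to one
as a pointer to the other.

    /* Stand-in for the real struct genfs_node. */
    struct genfs_node {
        int gn_placeholder;
    };

    /* A filesystem's per-vnode data; the genfs_node must come first. */
    struct examplefs_node {
        struct genfs_node ef_gnode;
        int ef_private;                 /* filesystem-specific fields follow */
    };

    /* With that layout, the two pointers differ only by a cast. */
    #define EXAMPLEFS_TO_GENFS(np)  ((struct genfs_node *)(np))
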
thorpej
ae69446f0f Back out previous -- christos needs to update his lint(1). 2001-07-25 23:05:04 +00:00
christos
8228035453 fix non-portable bitmap warning. 2001-07-25 22:41:10 +00:00
wiz
a9356936b4 seperate -> separate 2001-07-22 13:33:58 +00:00
thorpej
c3cd2c3cfc Rather than using u_shorts, use u_ints and bitfields in the vm_page. This
provides us more flexibility with pageq-locked fields, and clarifies the
locking semantics for platforms which cannot address shorts.

From Ross Harvey.
2001-06-28 00:26:38 +00:00
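
One plausible reading of the resulting layout, as a sketch with
invented field names: making the 16-bit fields explicit bitfields of
an unsigned int pins down which fields share a machine word, so fields
updated under different locks can be kept in different words on
platforms where a 16-bit store is really a word-sized read-modify-write.

    /* Sketch only; the real vm_page fields and their grouping may differ. */
    struct vm_page_flags_sketch {
        unsigned int obj_flags:16,      /* this word: object-locked fields */
                     obj_version:16;
        unsigned int pageq_flags:16,    /* this word: pageq-locked fields */
                     pageq_count:16;
    };
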
chs
3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
ross
d8840def52 Expand on the locking notes comment with a XXX warning about u_short fields. 2001-05-16 00:16:01 +00:00
thorpej
31fafb678f Support dynamic sizing of the page color bins. We also support
dynamically re-coloring pages; as machine-dependent code discovers
the size of the system's caches, it may call uvm_page_recolor() with
the new number of colors to use.  If the new number of colors is smaller
than (or equal to) the current number of colors, then uvm_page_recolor()
is a no-op.

The system defaults to one bucket if machine-dependent code does not
initialize uvmexp.ncolors before uvm_page_init() is called.

Note that the number of color bins should be initialized to something
reasonable as early as possible -- for many early memory allocations,
we live with the consequences of the page choice for the lifetime of
the boot.
2001-05-02 01:22:19 +00:00
thorpej
220bcf69ac Garbage-collect a comment that has not been applicable since Mach. 2001-05-01 03:01:18 +00:00
thorpej
cf67ac7122 Per discussion w/ chuck and chuck, restructure the md page stuff
to use a structure called "vm_page_md", and use __HAVE_VM_PAGE_MD
and __HAVE_PMAP_PHYSSEG.
2001-05-01 02:19:13 +00:00
thorpej
2b27ac7a99 Add a VM_MDPAGE_MEMBERS macro that defines pmap-specific data for
each vm_page structure.  Add a VM_MDPAGE_INIT() macro to init this
data when pages are initialized by UVM.  These macros are mandatory,
but ports may #define them to nothing if they are not needed/used.

This deprecates struct pmap_physseg.  As a transitional measure,
allow a port to #define PMAP_PHYSSEG so that it can continue to
use it until its pmap is converted to use VM_MDPAGE_MEMBERS.

Use all this stuff to eliminate a lot of extra work in the Alpha
pmap module (it's smaller and faster now).  Changes to other pmap
modules will follow.
2001-04-29 22:44:31 +00:00
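
A sketch of how such macros plug MD data into the MI structure (the
pmap-side member and field names here are invented): the port defines
VM_MDPAGE_MEMBERS and VM_MDPAGE_INIT(), and machine-independent code
expands them inside struct vm_page and during page initialization.

    /* A port's <machine/vmparam.h> might provide, for example: */
    #define VM_MDPAGE_MEMBERS \
        struct { \
            int mdpg_attrs;             /* illustrative pmap data */ \
        } mdpage;

    #define VM_MDPAGE_INIT(pg) \
        do { \
            (pg)->mdpage.mdpg_attrs = 0; \
        } while (0)

    /* MI code then simply expands the macros: */
    struct vm_page {
        /* ... machine-independent members ... */
        VM_MDPAGE_MEMBERS
    };

    static void
    page_init(struct vm_page *pg)
    {
        VM_MDPAGE_INIT(pg);
    }
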
thorpej
cda7baa0d5 Implement page coloring, using a round-robin bucket selection
algorithm (Solaris calls this "Bin Hopping").

This implementation currently relies on MD code to define a
constant defining the number of buckets.  This will change
reasonably soon (MD code will be able to dynamically size
the bucket array).
2001-04-29 04:23:20 +00:00
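
A sketch of round-robin bucket selection ("bin hopping") with an
assumed power-of-two bucket count, reflecting the MD constant the
entry mentions: each allocation simply takes the next color in turn,
spreading pages evenly across the cache bins.

    #define PGCOLORS 8u                 /* assumed MD constant, power of two */

    static unsigned int nextcolor;

    /* Pick the free-list bucket (color) to allocate from next. */
    static unsigned int
    pick_color(void)
    {
        unsigned int color = nextcolor;

        nextcolor = (nextcolor + 1) & (PGCOLORS - 1);
        return color;
    }
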
chs
1e651a1688 remove some more leftovers from Mach. 2000-12-28 08:24:55 +00:00
chs
aeda8d3b77 Initial integration of the Unified Buffer Cache project. 2000-11-27 08:39:39 +00:00