Commit Graph

38 Commits

Author SHA1 Message Date
ad 4688843d2b Merge unobtrusive locking changes from the vmlocking branch. 2007-07-21 19:21:53 +00:00
thorpej 712239e366 Replace the Mach-derived boolean_t type with the C99 bool type. A
future commit will replace use of TRUE and FALSE with true and false.
2007-02-21 22:59:35 +00:00
yamt 9d3e3eab23 merge yamt-pdpolicy branch.
- separate page replacement policy from the rest of the kernel
- implement an alternative replacement policy
2006-09-15 15:51:12 +00:00
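
An illustrative C sketch of what such a split can look like; all names below are invented for illustration and are not the branch's actual interface. The idea is that the pagedaemon and fault path call only through a small ops vector, so an alternative policy can be plugged in without touching the rest of the kernel.

struct vm_page;				/* opaque to callers of the policy */

/* Invented ops vector illustrating the separation. */
struct pdpolicy_ops {
	void		 (*pdpol_init)(void);
	void		 (*pdpol_pageactivate)(struct vm_page *);
	void		 (*pdpol_pagedeactivate)(struct vm_page *);
	struct vm_page	*(*pdpol_selectvictim)(void);
};

extern const struct pdpolicy_ops pdpolicy_default;	/* classic active/inactive queues */
extern const struct pdpolicy_ops pdpolicy_alternative;	/* the branch's alternative policy */
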
christos 103d2f520c XXX: GCC uninitialized. 2006-05-14 05:30:31 +00:00
christos 95e1ffb156 merge ktrace-lwp. 2005-12-11 12:16:03 +00:00
thorpej e569facced Use ANSI function decls. 2005-06-27 02:19:48 +00:00
yamt 9555030270 make the free page queue FILO rather than FIFO.
data in pages freed more recently is more likely to still be in the cpu cache.
2004-09-17 20:46:03 +00:00
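
A minimal userspace sketch (not the NetBSD code itself) of the FILO idea: free to the head of the queue and allocate from the head, so the most recently freed, and therefore most likely cache-warm, page is handed out first.

#include <sys/queue.h>
#include <stddef.h>

struct fpage {
	TAILQ_ENTRY(fpage) pageq;
};
TAILQ_HEAD(fpagelist, fpage);

static struct fpagelist freeq = TAILQ_HEAD_INITIALIZER(freeq);

/* FILO free: the page goes onto the head of the queue... */
void
fpage_free(struct fpage *pg)
{
	TAILQ_INSERT_HEAD(&freeq, pg, pageq);	/* FIFO would use TAILQ_INSERT_TAIL */
}

/* ...and allocation also takes from the head, so the most recently freed
   (likely cache-warm) page is reused first. */
struct fpage *
fpage_alloc(void)
{
	struct fpage *pg = TAILQ_FIRST(&freeq);

	if (pg != NULL)
		TAILQ_REMOVE(&freeq, pg, pageq);
	return pg;
}
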
junyoung 7e0c058612 Drop trailing spaces. 2004-03-24 07:47:32 +00:00
yamt 70538d0c22 add a DEBUG check if freed PG_ZERO pages are really zero-filled. 2003-11-03 03:58:28 +00:00
yamt d6dc30aeba in uvm_pagefree and friends, if freed pages have been marked with the
PG_ZERO flag, put them on the PGFL_ZEROS queue rather than the default one
so that we can re-use zero-filled pages efficiently.
2003-11-01 15:18:42 +00:00
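
An illustrative sketch of the two-queue arrangement; the flag value and struct names are invented, only the PGFL_ZEROS/PGFL_UNKNOWN split follows the commit message.

#include <sys/queue.h>

#define PG_ZERO_SKETCH	0x0100		/* illustrative flag value only */
#define PGFL_UNKNOWN	0
#define PGFL_ZEROS	1
#define PGFL_NQUEUES	2

struct zpage {
	int flags;
	TAILQ_ENTRY(zpage) listq;
};
TAILQ_HEAD(zpglist, zpage);

struct zpgfreelist {
	struct zpglist queues[PGFL_NQUEUES];
};

/* A page already known to be zero-filled goes on the zeros queue, so a
   later allocation that wants a zeroed page can skip the zeroing. */
void
zpgfl_free(struct zpgfreelist *fl, struct zpage *pg)
{
	int q = (pg->flags & PG_ZERO_SKETCH) ? PGFL_ZEROS : PGFL_UNKNOWN;

	TAILQ_INSERT_HEAD(&fl->queues[q], pg, listq);
}
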
yamt 91161caf3c use VM_PAGE_TO_PHYS macro instead of using phys_addr directly. 2003-08-26 15:12:18 +00:00
drochner 9c0942bc88 sync comments with reality 2003-08-02 14:12:51 +00:00
thorpej 2d5e311009 Make PGALLOC_VERBOSE compile where size_t != int. 2003-03-10 19:52:24 +00:00
perry bbad42171f /*CONTCOND*/ while (0)'ed macros 2002-11-02 07:40:47 +00:00
drochner e14af78731 Big cleanup and speed improvements to pglist_alloc code:
-pass vm_physseg* instead of physseg index, and PFN (int) instead
 of physical address (could be done even more)
-simplify detection of boundary crossing and behave more intelligently
 in this case
-take stuff out of the inner loops, or put into "#ifdef DEBUG"
 (because we move along physsegs we don't need to check that the
  pages are physically contiguous)
-make the "simple" and "contiguous" branches look more uniform; at
 least the outer loops might coalesce one day
2002-06-27 18:05:29 +00:00
enami d76e8e25bc Shift by PAGE_SHIFT instead of dividing by PAGE_SIZE. 2002-06-20 08:24:22 +00:00
drochner cb01228cf4 Make the DMA memory allocators (uvm_pglistalloc())
obey the preferences expressed by freelist assignment,
to avoid wasting valuable "low memory" on devices which
don't really need it.
comments:
-I'm not sure searching the physsegs within a freelist
 beginning with the biggest is the right thing. This is
 what the "memory steal" code in uvm_page.c does, so
 keep it consistent.
-There seems to be some confusion whether the
 address limit passed is inclusive or not. Stay on
 the safe side, possibly leaving one page out.
-The boundary/pagemask check can be simplified, also some
 arguments passed are only used for diagnostic checks.
-Integration with UVM_PAGE_TRKOWN???
2002-06-18 15:49:48 +00:00
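
A sketch of the control flow this describes, with invented names: walk the free lists in priority order (index 0 assumed to be the most preferred), so scarce "low memory" lists are touched only when the preferred lists cannot satisfy the request.

#include <errno.h>
#include <stddef.h>

#define NFREELIST_SKETCH	4	/* VM_NFREELIST is machine-dependent */

struct pglist_sketch;			/* stand-in for the caller's result list */

/* Assumed helper: try to satisfy the request from one free list only. */
int pglistalloc_from_freelist(int fl, size_t npages, struct pglist_sketch *res);

int
pglistalloc_sketch(size_t npages, struct pglist_sketch *result)
{
	int fl, error = ENOMEM;

	/* stop at the first free list that can provide all the pages */
	for (fl = 0; fl < NFREELIST_SKETCH && error != 0; fl++)
		error = pglistalloc_from_freelist(fl, npages, result);
	return error;
}
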
drochner d2b9876081 move initialization of the "struct pglist" returned by uvm_pglistalloc()
from the calling code into uvm_pglistalloc() itself for consistency
and easier error handling
2002-06-02 14:44:35 +00:00
drochner f452b252a8 Add another allocator to uvm_pglistalloc() which is used in the case where
no alignment / boundary / nsegs restrictions apply.
This one doesn't insist on a contiguous range, and it honours the "waitok"
flag, thus succeeds in situations which were hopeless with the existing one.

(A solution which searches for a minimum number of contiguous ranges using
some best-fit or so algorithm would be expensive to implement; I believe the
"either-or" done here does reflect the current use by bus_dma quite well.)

Now agp memory allocation is robust for me. (tested on i810)
2002-05-29 19:20:11 +00:00
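
A sketch of the "either-or" dispatch described above; the helper names and the exact test are assumptions. With no alignment/boundary/segment-count restrictions, the simple allocator (which may wait and need not find a contiguous range) is enough; otherwise the strict contiguous search is used.

#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE_SKETCH	4096

struct pglist_sketch2;

/* Assumed helpers standing in for the two allocators. */
int pglistalloc_simple(size_t npages, struct pglist_sketch2 *res, bool waitok);
int pglistalloc_contig(size_t npages, unsigned long alignment,
    unsigned long boundary, int nsegs, struct pglist_sketch2 *res);

int
pglistalloc_dispatch(size_t npages, unsigned long alignment,
    unsigned long boundary, int nsegs, struct pglist_sketch2 *result,
    bool waitok)
{
	if (alignment <= PAGE_SIZE_SKETCH && boundary == 0 &&
	    (size_t)nsegs >= npages)
		return pglistalloc_simple(npages, result, waitok);
	return pglistalloc_contig(npages, alignment, boundary, nsegs, result);
}
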
lukem b616d1ca1d add RCSIDs, and in some cases, slightly cleanup #include order 2001-11-10 07:36:59 +00:00
chs 64c6d1d2dc a whole bunch of changes to improve performance and robustness under load:
- remove special treatment of pager_map mappings in pmaps.  this is
   required now, since I've removed the globals that expose the address range.
   pager_map now uses pmap_kenter_pa() instead of pmap_enter(), so there's
   no longer any need to special-case it.
 - eliminate struct uvm_vnode by moving its fields into struct vnode.
 - rewrite the pageout path.  the pager is now responsible for handling the
   high-level requests instead of only getting control after a bunch of work
   has already been done on its behalf.  this will allow us to UBCify LFS,
   which needs tighter control over its pages than other filesystems do.
   writing a page to disk no longer requires making it read-only, which
   allows us to write wired pages without causing all kinds of havoc.
 - use a new PG_PAGEOUT flag to indicate that a page should be freed
   on behalf of the pagedaemon when it's unlocked.  this flag is very similar
   to PG_RELEASED, but unlike PG_RELEASED, PG_PAGEOUT can be cleared if the
   pageout fails due to e.g. an indirect-block buffer being locked.
   this allows us to remove the "version" field from struct vm_page,
   and together with shrinking "loan_count" from 32 bits to 16,
   struct vm_page is now 4 bytes smaller.
 - no longer use PG_RELEASED for swap-backed pages.  if the page is busy
   because it's being paged out, we can't release the swap slot to be
   reallocated until that write is complete, but unlike with vnodes we
   don't keep a count of in-progress writes so there's no good way to
   know when the write is done.  instead, when we need to free a busy
   swap-backed page, just sleep until we can get it busy ourselves.
 - implement a fast-path for extending writes which allows us to avoid
   zeroing new pages.  this substantially reduces cpu usage.
 - encapsulate the data used by the genfs code in a struct genfs_node,
   which must be the first element of the filesystem-specific vnode data
   for filesystems which use genfs_{get,put}pages().
 - eliminate many of the UVM pagerops, since they aren't needed anymore
   now that the pager "put" operation is a higher-level operation.
 - enhance the genfs code to allow NFS to use the genfs_{get,put}pages
   instead of a modified copy.
 - clean up struct vnode by removing all the fields that used to be used by
   the vfs_cluster.c code (which we don't use anymore with UBC).
 - remove kmem_object and mb_object since they were useless.
   instead of allocating pages to these objects, we now just allocate
   pages with no object.  such pages are mapped in the kernel until they
   are freed, so we can use the mapping to find the page to free it.
   this allows us to remove splvm() protection in several places.

The sum of all these changes improves write throughput on my
decstation 5000/200 to within 1% of the rate of NetBSD 1.5
and reduces the elapsed time for "make release" of a NetBSD 1.5
source tree on my 128MB pc to 10% less than a 1.5 kernel took.
2001-09-15 20:36:31 +00:00
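
A small sketch of the genfs_node layout rule mentioned above: the genfs node must be the first member of the per-filesystem vnode data so generic code can view v_data as a genfs node with a plain cast. The field and struct names here are placeholders, not the actual NetBSD definitions.

struct genfs_node_sketch {
	int g_flags;				/* placeholder for genfs state */
};

struct myfs_node {
	struct genfs_node_sketch n_gnode;	/* must be first */
	long n_size;				/* fs-specific fields follow */
};

struct genfs_node_sketch *
vp_data_to_genfs(void *v_data)
{
	/* Valid precisely because n_gnode is the first member. */
	return (struct genfs_node_sketch *)v_data;
}
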
thorpej 27f66d3bf7 Macro'ize the code that checks the free and inactive thresholds and
wakes the pagedaemon.
2001-06-27 21:18:34 +00:00
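
A hypothetical form of such a macro; the uvmexp field names are real but the exact test is an assumption, not the committed code. The point is that one place decides whether free/inactive pages are below target and wakes the pagedaemon if so.

#define UVM_KICK_PDAEMON_SKETCH()					\
do {									\
	if (uvmexp.free < uvmexp.freemin ||				\
	    (uvmexp.free < uvmexp.freetarg &&				\
	     uvmexp.inactive < uvmexp.inactarg))			\
		wakeup(&uvm.pagedaemon);				\
} while (/*CONSTCOND*/ 0)
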
chs 11a9651c8f replace vm_page_t with struct vm_page *. 2001-05-26 21:27:10 +00:00
chs 3845302904 remove trailing whitespace. 2001-05-25 04:06:11 +00:00
thorpej cda7baa0d5 Implement page coloring, using a round-robin bucket selection
algorithm (Solaris calls this "Bin Hopping").

This implementation currently relies on MD code to define a
constant defining the number of buckets.  This will change
reasonably soon (MD code will be able to dynamically size
the bucket array).
2001-04-29 04:23:20 +00:00
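
A minimal sketch of round-robin ("bin hopping") bucket selection with invented names: start at the next colour and fall back to the other buckets, so successive allocations tend to land in different cache bins. The bucket count is a fixed constant here, standing in for the MD constant the commit mentions.

#include <sys/queue.h>
#include <stddef.h>

#define PGCOLOR_BUCKETS	8	/* an MD constant in the real code */

struct cpage {
	TAILQ_ENTRY(cpage) listq;
};
TAILQ_HEAD(cbucket, cpage);

static struct cbucket buckets[PGCOLOR_BUCKETS];
static unsigned int nextcolor;

void
cbuckets_init(void)
{
	unsigned int i;

	for (i = 0; i < PGCOLOR_BUCKETS; i++)
		TAILQ_INIT(&buckets[i]);
}

struct cpage *
cpage_alloc(void)
{
	struct cpage *pg = NULL;
	unsigned int i, color;

	/* try the round-robin colour first, then the remaining buckets */
	for (i = 0; i < PGCOLOR_BUCKETS && pg == NULL; i++) {
		color = (nextcolor + i) % PGCOLOR_BUCKETS;
		pg = TAILQ_FIRST(&buckets[color]);
		if (pg != NULL)
			TAILQ_REMOVE(&buckets[color], pg, listq);
	}
	nextcolor = (nextcolor + 1) % PGCOLOR_BUCKETS;
	return pg;
}
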
chs 19b7b64642 clean up DIAGNOSTIC checks, use KASSERT(). 2001-02-18 21:19:08 +00:00
chs 2ed28d2c7a lots of cleanup:
use queue.h macros and KASSERT().
address amap offsets in pages instead of bytes.
make amap_ref() and amap_unref() take an amap, offset and length
  instead of a vm_map_entry_t.
improve whitespace and comments.
2000-11-25 06:27:59 +00:00
mrg dea44a9ec4 remove include of <vm/vm.h> 2000-06-27 17:29:17 +00:00
thorpej 1410091b4e Clean up a comment. 2000-05-20 19:54:01 +00:00
thorpej 9ec517a68e Changes necessary to implement pre-zero'ing of pages in the idle loop:
- Make page free lists have two actual queues: known-zero pages and
  pages with unknown contents.
- Implement uvm_pageidlezero().  This function attempts to zero up to
  the target number of pages until the target has been reached (currently
  target is `all free pages') or until whichqs becomes non-zero (indicating
  that a process is ready to run).
- Define a new hook for the pmap module for pre-zero'ing pages.  This is
  used to zero the pages using uncached access.  This allows us to zero
  as many pages as we want without polluting the cache.

In order to use this feature, each platform must add the appropriate
glue in their idle loop.
2000-04-24 17:12:00 +00:00
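
A sketch of the idle-loop flow described above; every name is a placeholder, and process_became_runnable() stands in for the "whichqs became non-zero" check. Pages are taken from the unknown-contents free queue, zeroed, and moved to the known-zero queue until a process becomes runnable or no unknown pages remain.

#include <stdbool.h>
#include <string.h>

#define IDLEZERO_PAGE_SIZE	4096

struct zp;				/* a page with unknown contents */

/* Assumed helpers: queue handling, the pmap pre-zero mapping hook, and
   the runnable-process check. */
struct zp *unknownq_get(void);
void	   zeroq_put(struct zp *);
void	  *page_map_uncached(struct zp *);
void	   page_unmap(struct zp *);
bool	   process_became_runnable(void);

void
pageidlezero_sketch(void)
{
	struct zp *pg;

	while (!process_became_runnable() &&
	    (pg = unknownq_get()) != NULL) {
		/* Zero through an uncached mapping so the zeroing does not
		   push useful data out of the cache. */
		memset(page_map_uncached(pg), 0, IDLEZERO_PAGE_SIZE);
		page_unmap(pg);
		zeroq_put(pg);
	}
}
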
thorpej 3f176180d5 Garbage collect thread_sleep()/thread_wakeup() left over from the old
Mach VM code.  Also nuke iprintf(), which was no longer used anywhere.

Add proclist locking where appropriate.
1999-07-22 22:58:38 +00:00
thorpej 6eb9ee7cd8 - Change uvm_{lock,unlock}_fpageq() to return/take the previous interrupt
level directly, instead of making the caller wrap the calls in
  splimp()/splx().
- Add a comment documenting that interrupts that cause memory allocation
  must be blocked while the free page queue is locked.

Since interrupts must be blocked while this lock is asserted, tying them
together like this helps to prevent mistakes.
1999-05-24 19:10:57 +00:00
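
A usage sketch of the new interface, assuming the prototypes implied by the commit message: the lock routine raises the interrupt level itself and hands back the previous level, so the caller can no longer forget to block interrupts.

int  uvm_lock_fpageq(void);	/* assumed prototype: returns previous spl */
void uvm_unlock_fpageq(int);	/* assumed prototype: restores it */

void
free_queue_example(void)
{
	int s;

	s = uvm_lock_fpageq();	/* raises the interrupt level and takes the lock */
	/* ... allocate or free pages on the free page queues ... */
	uvm_unlock_fpageq(s);	/* drops the lock and restores the level */
}
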
eeh a2dd74ed79 Merge paddr_t changes into the main branch. 1998-08-13 02:10:37 +00:00
thorpej 7fd701e0fa Add support for multiple memory free lists. There is at least one
default free list, and 0 - N additional free lists, in order of descending
priority.

A new page allocation function, uvm_pagealloc_strat(), has been added,
providing three page allocation strategies:

	- normal: high -> low priority free list walk, taking the
	  page off the first free list that has one.

	- only: attempt to allocate a page only from the specified free
	  list, failing if that free list has none available.

	- fallback: if `only' fails, fall back on `normal'.

uvm_pagealloc(...) is provided for normal use (and is a synonym for
uvm_pagealloc_strat(..., UVM_PGA_STRAT_NORMAL, 0); the free list argument
is ignored for the `normal' case).

uvm_page_physload() now specifies which free list the pages will be
loaded onto.  This means that some platforms which have multiple physical
memory segments may define additional vm_physsegs if they wish to break
individual physical segments into differing priorities.

Machine-dependent code must define _at least_ the following constants
in <machine/vmparam.h>:

	VM_NFREELIST: the number of free lists the system will have

	VM_FREELIST_DEFAULT: the default freelist (should always be 0,
	but is defined in machdep code so that it's with all of the
	other free list-related constants).

Additional free list names may be defined by machine-dependent code, but
they will only be used by machine-dependent code (e.g. for loading the
vm_physsegs).
1998-07-08 04:28:27 +00:00
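
A sketch of the three strategies with invented helper names; the real uvm_pagealloc_strat() also handles object/anon bookkeeping and locking, none of which is shown here.

#include <stddef.h>

#define NFREELIST_STRAT_SKETCH	4	/* stands in for VM_NFREELIST */

enum pga_strat_sketch { STRAT_NORMAL, STRAT_ONLY, STRAT_FALLBACK };

struct spage;
struct spage *take_from_freelist(int freelist);	/* assumed helper */

struct spage *
pagealloc_strat_sketch(enum pga_strat_sketch strat, int free_list)
{
	struct spage *pg;
	int lcv;

	switch (strat) {
	case STRAT_NORMAL:
		/* high -> low priority walk over all free lists */
		for (lcv = 0; lcv < NFREELIST_STRAT_SKETCH; lcv++)
			if ((pg = take_from_freelist(lcv)) != NULL)
				return pg;
		return NULL;
	case STRAT_ONLY:
		/* use exactly the requested free list, or fail */
		return take_from_freelist(free_list);
	case STRAT_FALLBACK:
		/* try `only' first, then fall back on `normal' */
		if ((pg = take_from_freelist(free_list)) != NULL)
			return pg;
		return pagealloc_strat_sketch(STRAT_NORMAL, 0);
	}
	return NULL;
}
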
kleink 182e12f413 Remove inclusions of syscall (and syscall argument) related header files;
we don't need them here.
1998-05-05 20:51:04 +00:00
mrg 8106d13596 KNF. 1998-03-09 00:58:55 +00:00
thorpej 9eb328b495 RCS ID police. 1998-02-06 22:26:13 +00:00
mrg f2caacc717 initial import of the new virtual memory system, UVM, into -current.
UVM was written by chuck cranor <chuck@maria.wustl.edu>, with some
minor portions derived from the old Mach code.  i provided some help
getting swap and paging working, and other bug fixes/ideas.  chuck
silvers <chuq@chuq.com> also provided some other fixes.

this is the UVM kernel code portion.


this will be KNF'd shortly.  :-)
1998-02-05 06:25:08 +00:00