Commit Graph

610 Commits

Author SHA1 Message Date
simonb
a426ccb272 Fix whitespace bogon. 2002-10-30 02:48:28 +00:00
chs
e60ad901b2 examine the B_ERROR flag instead of the b_error field to determine
whether or not an error has occurred.  pointed out by Stephan Uphoff.
2002-10-27 16:53:20 +00:00
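
For illustration, a hedged fragment of the idiom described above (variable names are generic; the EIO fallback mirrors common buffer-completion code and is an assumption, not part of this commit):

	/* Test the flag first, then read the error code, defaulting to EIO. */
	if (bp->b_flags & B_ERROR)
		error = bp->b_error ? bp->b_error : EIO;
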
atatat
68277bb301 In the case of a double amap_extend() (during a forward merge after a
back merge), don't abort the allocation if the second extend fails,
just abort the forward merge and finish the allocation.

Code reviewed by thorpej.
2002-10-24 22:22:28 +00:00
atatat
2d6863ada3 Call amap_extend() a second time in the case of a bimerge (both
backwards and forwards) if the previous entry was backed by an amap.

Fixes pr kern/18789, where netscape 7 + a java applet actually manage
to incur forward and bimerges in userspace.

Code reviewed by fvdl and thorpej.
2002-10-24 20:37:59 +00:00
jdolecek
e0cc03a09b merge kqueue branch into -current
kqueue provides a stateful and efficient event notification framework.
Currently supported events include socket, file, directory, fifo,
pipe, tty and device changes, and monitoring of processes and signals.

kqueue is supported by all writable filesystems in the NetBSD tree
(with the exception of Coda) and by all device drivers supporting poll(2).

Based on work done by Jonathan Lemon for FreeBSD.
Initial NetBSD port done by Luke Mewburn and Jason Thorpe.
2002-10-23 09:10:23 +00:00
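
A minimal userland sketch of the kqueue(2)/kevent(2) usage described above; the watched path is invented for illustration and error checking is omitted:

	#include <sys/types.h>
	#include <sys/event.h>
	#include <sys/time.h>
	#include <fcntl.h>
	#include <stdio.h>

	int
	main(void)
	{
		struct kevent ev;
		int fd, kq;

		fd = open("/tmp/watched", O_RDONLY);	/* hypothetical file */
		kq = kqueue();

		/* Register interest in writes to, and deletion of, the file. */
		EV_SET(&ev, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
		    NOTE_WRITE | NOTE_DELETE, 0, 0);
		kevent(kq, &ev, 1, NULL, 0, NULL);

		/* Block until one of the registered events fires. */
		if (kevent(kq, NULL, 0, &ev, 1, NULL) == 1)
			printf("vnode event 0x%x on fd %d\n",
			    (unsigned)ev.fflags, (int)ev.ident);
		return 0;
	}
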
atatat
94ef8e0795 Add an implementation of forward merging of new map entries. Most new
allocations can be merged either forwards or backwards, meaning no new
entries will be added to the list, and some can even be merged in both
directions, resulting in a surplus entry.

This code typically reduces the number of map entries in the
kernel_map by an order of magnitude or more.  It also makes possible
recovery from the pathological case of "5000 processes created and
then killed", which leaves behind a large number of map entries.

The only forward merge case not covered is the instance of an amap
that has to be extended backwards (WIP).  Note that this only affects
processes, not the kernel (the kernel doesn't use amaps), and that
merge opportunities like this come up *very* rarely, if at all.  E.g.,
after being up for eight days, I see only three failures in this
regard, and even those are most likely due to programs I'm developing
to exercise this case.

Code reviewed by thorpej, matt, christos, mrg, chuq, chuck, perry,
tls, and probably others.  I'd like to thank my mother, the Hollywood
Foreign Press...
2002-10-18 13:18:42 +00:00
oster
7eac5bf44e Garbage collect some leftover (and unneeded) code. OK'ed by chs. 2002-10-05 17:26:06 +00:00
chs
b1e734dd18 uao_find_swslot()'s second argument is in units of pages, not bytes.
spotted by Doug Donsbach.
2002-10-01 07:52:30 +00:00
mycroft
3c7847ff41 #if 0 the call to uvm_map_checkprot() in sys_munmap() -- it's not documented,
and programs do not expect it.  Also fixes memory leaks in dlopen()/dlclose().
2002-09-27 19:13:29 +00:00
provos
0f09ed48a5 remove trailing \n in panic(). approved perry. 2002-09-27 15:35:29 +00:00
chs
94a62d45d6 add a new flag VM_MAP_DYING, which is set before we start
tearing down a vm_map.  use this to skip the pmap_update()
at the end of all the removes, which allows pmaps to optimize
pmap tear-down.  also, use the new pmap_remove_all() hook to
let the pmap implementation know what we're up to.
2002-09-22 07:21:29 +00:00
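
A rough, hypothetical sketch of the teardown order this describes (condensed, not the literal uvm code; vm_map_pmap()/vm_map_min()/vm_map_max() are the usual accessor macros):

	/* Hypothetical sketch: the whole address space is going away. */
	map->flags |= VM_MAP_DYING;		/* skip pmap_update() from here on */
	pmap_remove_all(vm_map_pmap(map));	/* let the pmap tear down wholesale */
	uvm_unmap(map, vm_map_min(map), vm_map_max(map));
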
chs
2b73cf7ece encapsulate knowledge of uarea allocation in some new functions. 2002-09-22 07:20:29 +00:00
chs
55e1f79335 add pmap_remove_all() hook (empty on most platforms so far). 2002-09-22 07:17:08 +00:00
chs
208b369512 add missing anon lock around call to uvm_anon_lockloanpg(). 2002-09-21 06:16:07 +00:00
chs
9672ac098f add a new km flag UVM_KMF_CANFAIL, which causes uvm_km_kmemalloc() to
return failure if swap is full and there are no free physical pages.
have malloc() use this flag if M_CANFAIL is passed to it.
use M_CANFAIL to allow amap_extend() to fail when memory is scarce.
this should prevent most of the remaining hangs in low-memory situations.
2002-09-15 16:54:26 +00:00
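
A hedged kernel-side sketch of the behaviour described above (the function name and the M_TEMP bucket are illustrative): with M_CANFAIL, an M_WAITOK allocation may return NULL when memory and swap are exhausted, so the caller backs out instead of hanging:

	#include <sys/param.h>
	#include <sys/errno.h>
	#include <sys/malloc.h>

	static int
	example_alloc(size_t sz, void **pp)	/* hypothetical helper */
	{
		void *p;

		p = malloc(sz, M_TEMP, M_WAITOK | M_CANFAIL);
		if (p == NULL)
			return ENOMEM;		/* back out instead of hanging */
		*pp = p;
		return 0;
	}
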
thorpej
3479cf6ba9 Protect "struct uvm" with _KERNEL. 2002-09-15 01:01:32 +00:00
gehenna
77a6b82b27 Merge the gehenna-devsw branch into the trunk.
This merge changes the device switch tables from static arrays to
tables dynamically generated by config(8).

- All device switches are defined as constant structures in the device drivers.

- The new grammar ``device-major'' is introduced to ``files''.

	device-major <prefix> char <num> [block <num>] [<rules>]

- All device major numbers must be listed in the port-dependent majors.<arch>
  using this grammar.

- A new naming convention is added.
  The device switch must be named <prefix>_[bc]devsw for auto-generation
  of the device switch tables.

- Backward compatibility for loading block/character device switches
  via the LKM framework is broken.  This is necessary to convert
  between block/character device majors and device names at runtime.

- The restriction on assigning device majors via LKM is completely removed.
  We no longer need to reserve LKM entries for dynamic loading of device switches.

- At compile time, the list of device major numbers is packed into the kernel,
  and the LKM framework refers to it when assigning device major numbers
  dynamically.
2002-09-06 13:18:43 +00:00
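
As a hypothetical illustration of the grammar above (the name and numbers are invented), a majors.<arch> entry for a driver with both character and block devices might read:

	device-major	foo	char 200 block 120

and, following the naming convention, that driver would export foo_cdevsw and foo_bdevsw.
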
thorpej
ae8d1b60df When breaking a loan due to a page fault, check to see if the other
kind of reference-holder (anon or object) is referencing the page.  If
not, then the page must be removed from the pageq's.

Reviewed by Chuck Silvers.
2002-09-02 21:09:50 +00:00
drochner
77944bfa08 call cpu_dumpconf() after dumpdev change, so that
the global dumpsize/dumplo get updated
2002-08-31 17:07:59 +00:00
chs
d510857ed4 be sure that the page we allocate to break a loan is put on a paging queue.
fixes PR 18037.
2002-08-29 05:03:30 +00:00
matt
78581fe411 In amap_ref, only increment the amap's refcnt after we have established
the ppref array.  Otherwise, the newly ref'ed pages will be doubly
counted and thus never freed because the pprefcnt can't fall to 0.
2002-08-22 23:39:37 +00:00
thorpej
95cb683cfb Don't pass VM_PROT_EXEC to pmap_kenter_pa(). 2002-08-14 15:21:31 +00:00
chs
ebe4c850ef allocate the bufq after zeroing the swapdev structure, not before. 2002-07-27 14:37:00 +00:00
hannken
7de36862a8 Rename bufq_init() to bufq_alloc().
Add bufq_free() to remove a buffer queue.
Avoid MALLOC while holding a spinlock.

From Chuck Silvers.
2002-07-21 15:32:17 +00:00
hannken
d4c062b4cc Convert to new device buffer queue interface. 2002-07-19 16:26:01 +00:00
chs
b1b5af79c2 when dropping a kernel loan, if this was the last loan-to-kernel but
the page is still loaned to an anon, we should put the page back on a
paging queue.  this is because while pages loaned to the kernel really
do need to stay resident (since the kernel is accessing the physical
memory directly), pages loaned to anons can be paged out just fine.
(the page will be paged out twice, first to the object and then again
to the anon, but after that the page can be reused.)
2002-07-14 23:53:41 +00:00
yamt
d96bff0e27 add KSTACK_CHECK_MAGIC. discussed on tech-kern. 2002-07-02 20:27:44 +00:00
chs
cfefc92864 rearrange a few lines to appease an assertion. 2002-06-29 18:27:30 +00:00
drochner
e14af78731 Big cleanup and speed improvements to pglist_alloc code:
-pass vm_physseg* instead of physseg index, and PFN (int) instead
 of physical address (could be done even more)
-simplify detection of boundary crossing and behave more intelligently
 in this case
-take stuff out of the inner loops, or put into "#ifdef DEBUG"
 (because we move along physsegs we don't need to check that the
  pages are physically contiguous)
-make the "simple" and "contiguous" branches look more uniform; at
 least the outer loops might coalesce one day
2002-06-27 18:05:29 +00:00
chs
faab7dbb46 count aobj pages (most notably kernel stack pages) as anon pages
for memory usage-balancing purposes.
2002-06-20 15:05:29 +00:00
enami
d76e8e25bc Shift by PAGE_SHIFT instead of dividing by PAGE_SIZE. 2002-06-20 08:24:22 +00:00
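
Illustrative fragment (the two forms yield the same page count, since PAGE_SIZE == 1 << PAGE_SHIFT):

	npages = len >> PAGE_SHIFT;	/* rather than: npages = len / PAGE_SIZE */
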
wrstuden
2eb3dc82d8 Fix recent bugs seen on Performa 4400 macppc's by
Makoto Fujiwara <makoto@ki.nu> and Manuel Bouyer <bouyer@netbsd.org>.
Help from Allen Briggs, Jason Thorpe, and Matt Thomas.

We need to call cpu_cache_probe() early in boot (machdep.c).
Add 603 info for completeness, and use NBPG not PAGESIZE, as the
latter relies on uvm being set up (cpu_subr.c).
Let uvm_page_recolor() be called before uvm has been set up; just
note the page coloring value (uvm_page.c).
2002-06-19 17:01:18 +00:00
drochner
cb01228cf4 Make the DMA memory allocators (uvm_pglistalloc())
obey the preferences expressed by freelist assignment,
to avoid wasting valuable "low memory" on devices which
don't really need it.
comments:
-I'm not sure searching the physsegs within a freelist
 beginning with the biggest is the right thing. This is
 what the "memory steal" code in uvm_page.c does, so
 keep it consistent.
-There seems to be some confusion about whether the upper
 address limit passed is inclusive or not. Stay on
 the safe side, possibly leaving one page out.
-The boundary/pagemask check can be simplified, also some
 arguments passed are only used for diagnostic checks.
-Integration with UVM_PAGE_TRKOWN???
2002-06-18 15:49:48 +00:00
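
A hypothetical caller sketch for uvm_pglistalloc() (argument order assumed to be size, low, high, alignment, boundary, result list, nsegs, waitok; the 16 MB limit is an invented ISA-DMA-style example):

	#include <sys/param.h>
	#include <uvm/uvm_extern.h>

	static int
	example_dma_pages(struct pglist *mlist)	/* hypothetical helper */
	{
		int error;

		/* Four page-aligned pages below 16 MB, one segment, may sleep. */
		error = uvm_pglistalloc(4 * PAGE_SIZE, 0, 0x1000000,
		    PAGE_SIZE, 0, mlist, 1, 1);
		if (error)
			return error;
		/* ... map and use the pages queued on *mlist ... */
		uvm_pglistfree(mlist);
		return 0;
	}
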
drochner
d2b9876081 move initialization of the "struct pglist" returned by uvm_pglistalloc()
from the calling code into uvm_pglistalloc() itself for consistency
and easier error handling
2002-06-02 14:44:35 +00:00
atatat
6c03c181d2 "offest" -> "offset" in a comment 2002-05-31 16:49:50 +00:00
drochner
f452b252a8 Add another allocator to uvm_pglistalloc() which is used in the case where
no alignment / boundary / nsegs restrictions apply.
This one doesn't insist on a contiguous range, and it honours the "waitok"
flag, thus succeeds in situations which were hopeless with the existing one.

(A solution which searches for a minimum number of contiguous ranges using
some best-fit or similar algorithm would be expensive to implement; I believe the
"either-or" done here does reflect the current use by bus_dma quite well.)

Now agp memory allocation is robust for me. (tested on i810)
2002-05-29 19:20:11 +00:00
enami
9e1deeab34 Add missing pageq lock while uvm_pagefree() is called (either directly
or indirectly).  Reviewed by chuq.
2002-05-29 11:04:39 +00:00
enami
2afb4efc4c Make uvn_findpages() return the number of pages found so that the caller
can easily check whether all requested pages were found.
2002-05-17 22:00:50 +00:00
matt
357945ce6f When core dumping a process, don't dump maps backed by the device pager.
(move the pagerops externs to uvm_object.h and out of the C files).
2002-05-15 06:57:49 +00:00
enami
694a0fec54 When a loaned page becomes ownerless as a result of freeing, it should be
dequeued from the pageq.  Fix provided by chuq.
2002-05-15 00:19:12 +00:00
fredette
c857df5775 When preparing to swap to a miniroot partition, add a little
padding to our estimate of the miniroot's size, to avoid
overwriting it.
2002-05-09 21:43:44 +00:00
enami
fabaf9a730 - In genfs_putpages(), no need to restrict the cluster within the given
region.
- In uvm_aio_aiodone(), remove assertions no longer true.
2002-05-09 07:14:37 +00:00
enami
6ceef3fc14 In uao_put(), if we wait for a busy page owned by someone else,
we can't simply reuse the pointer to the page.  Instead, we need to
acquire it again.  So, rearrange the loop like genfs_putpages() does.
Reviewed by chuq.
2002-05-09 07:04:23 +00:00
enami
c4e1385f55 Fetch the right page from a file even if it is mapped from the middle of it.
This makes `tail -<N> <FILE> | cat > file' work correctly, where <FILE> is
a regular file larger than 10 Mbytes (which makes tail map only part of the
file) and <N> is big enough to produce output larger than 8 kbytes (which
makes the pipe use the page loan facility).  Problem reported by FUKAUMI
Naoki on a Japanese local mailing list.
2002-05-07 02:29:52 +00:00
chs
988df8394c look in the right flags field for PQ_INACTIVE.
make uvmpd_scan_inactive() return void since its return value is ignored.
2002-05-05 16:26:17 +00:00
thorpej
338e636672 Allow pmap_copy_page() and pmap_zero_page() to be #define'd
in <machine/pmap.h>.
2002-04-10 00:40:45 +00:00
manu
8645636b04 Updated comment to reflect the creation of uvm_swap_stats() 2002-04-01 12:24:11 +00:00
nathanw
a1be32226e In amap_pp_adjref(), avoid incorrectly merging the first two chunks in
a ppref array when the range being adjusted includes the beginning of
the array.
2002-03-28 06:06:29 +00:00
manu
da6e8ccbe8 Don't allocate struct swapent when we only need a struct oswapent. 2002-03-26 11:50:26 +00:00
chs
f80ed5892c remove PGO_WEAK, it isn't needed anymore. 2002-03-25 02:08:09 +00:00