between a bug in the receive path (buffer length used instead of actual
received frame length) and the ARP code not correctly setting the length
of the reused mbuf, which all told caused the transmit logic in the chip
to wedge.
Also, make sure to count received packets.
Found and fixed by Zdenek Salvet <salvet@ics.muni.cz>, PR #7809.
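[Editor's sketch] A minimal illustration of the two receive-path fixes, assuming a
typical driver of that era; "len" is a stand-in for the frame length reported by
the chip, and the helper itself is illustrative, not the driver's actual code:

#include <sys/mbuf.h>
#include <net/if.h>

/*
 * Hedged sketch: set the mbuf to the length actually received (the
 * buggy code used the buffer length), and count the packet.  "len"
 * is hypothetical, standing in for the chip's reported frame length.
 */
static void
rx_fixup(struct ifnet *ifp, struct mbuf *m, int len)
{
	m->m_pkthdr.len = m->m_len = len;	/* not the buffer size */
	m->m_pkthdr.rcvif = ifp;
	ifp->if_ipackets++;			/* count received packets */
}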
revision 1.41
date: 1996/09/07 22:26:50; author: mycroft; state: Exp; lines: +5 -20
Use SIGBUS iff we get a legitimate bus fault. Use SIGSEGV for page protection
violations (per Solaris, SVR4, AIX, Linux, Irix, and SunOS).
...which happens to eliminate an unsafe call to uvm_map_checkprot().
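[Editor's sketch] The signal-selection policy, reduced to its essence; the
"bus_error" flag is hypothetical, standing in for however the trap handler
distinguishes the two cases:

#include <sys/signal.h>

/*
 * Hedged sketch: a legitimate bus fault raises SIGBUS; a page
 * protection violation raises SIGSEGV (matching Solaris, SVR4, AIX,
 * Linux, Irix, and SunOS).  "bus_error" is hypothetical.
 */
static int
fault_signal(int bus_error)
{
	return (bus_error ? SIGBUS : SIGSEGV);
}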
starts running can occur on both DMA in and DMA out. I missed the little comment
in the Mach driver for the DMA out case. Also, only ignore the interrupt if the
TC is non-zero (to match the Mach driver).
- rather than treating MAP_COPY like MAP_PRIVATE by sheer virtue of it not
being MAP_SHARED, actually convert the MAP_COPY flag into MAP_PRIVATE.
- return EINVAL if MAP_SHARED and MAP_PRIVATE are both included in flags.
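[Editor's sketch] The two rules above, as a hedged helper; MAP_COPY's value here
is the historical 4.4BSD one (an assumption for illustration), and the function
is illustrative, not the actual mmap(2) code:

#include <sys/mman.h>
#include <errno.h>

#ifndef MAP_COPY
#define MAP_COPY	0x0004	/* historical 4.4BSD value; assumption */
#endif

/*
 * Hedged sketch: reject SHARED+PRIVATE, and rewrite COPY into an
 * explicit PRIVATE rather than relying on "not SHARED".
 */
static int
mmap_fixup_flags(int *flagsp)
{
	int flags = *flagsp;

	if ((flags & MAP_SHARED) && (flags & MAP_PRIVATE))
		return (EINVAL);
	if (flags & MAP_COPY)
		flags = (flags & ~MAP_COPY) | MAP_PRIVATE;
	*flagsp = flags;
	return (0);
}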
which use uvm_vslock() should now test the return value. If it's not
KERN_SUCCESS, wiring the pages failed, so the operation which is using
uvm_vslock() should error out.
XXX We currently just EFAULT a failed uvm_vslock(). We may want to do
more about translating error codes in the future.
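[Editor's sketch] The new calling convention for a caller wiring a user buffer,
with signatures that approximate the UVM interfaces of that era (including the
access-type argument); treat the details as assumptions:

#include <sys/proc.h>
#include <sys/errno.h>
#include <uvm/uvm_extern.h>

/*
 * Hedged sketch: wire a user buffer before I/O, erroring out with
 * EFAULT when the wiring fails (per the XXX note above).
 */
static int
wire_user_buffer(struct proc *p, void *uaddr, size_t len)
{
	if (uvm_vslock(p, uaddr, len, VM_PROT_READ | VM_PROT_WRITE)
	    != KERN_SUCCESS)
		return (EFAULT);	/* wiring failed; abort the operation */
	return (0);
}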
Don't (ab)use uvm_map_pageable() to allocate PT pages. Instead, do
some internal reference counting on PT pages. We still allocate them
with the page fault routine (a wire-fault, now), but no longer free
PT pages from pmap_pageable().
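[Editor's sketch] The reference-counting idea in miniature; every name here is
hypothetical, not the pmap's actual identifiers:

/*
 * Hedged sketch: each PT page carries a count of the mappings that go
 * through it, and is freed only when that count drops to zero, rather
 * than from pmap_pageable().
 */
struct ptpage {
	int	ptp_refcnt;		/* live mappings via this PT page */
};

static void	ptp_free(struct ptpage *);	/* hypothetical */

static void
ptp_reference(struct ptpage *ptp)
{
	ptp->ptp_refcnt++;
}

static void
ptp_release(struct ptpage *ptp)
{
	if (--ptp->ptp_refcnt == 0)
		ptp_free(ptp);
}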
Anyway, just because a drive doesn't support the LOAD (to BOT) command does
not mean that the drive doesn't support the UNLOAD command. Also note and
print errors in rewinds and unloads (and errors in writing closing filemarks
for same).
allocation strategy no longer works at all. Move pmap.new.* to pmap.*.
To read the revision history of PMAP_NEW up until this merge, use cvs
rlog of the old pmap.new.* files.
pmap_change_wiring(...,FALSE) unless the map entry claims the address
is unwired. This fixes the following scenario, as described on
tech-kern@netbsd.org on Wed 6/16/1999 12:25:23:
- User mlock(2)'s a buffer, to guarantee it will never become
non-resident while he is using it.
- User then does physio to that buffer. Physio calls uvm_vslock()
to lock down the pages and ensure that page faults do not happen
while the I/O is in progress (possibly in interrupt context).
- Physio does the I/O.
- Physio calls uvm_vsunlock(). This calls uvm_fault_unwire().
>>> HERE IS WHERE THE PROBLEM OCCURS <<<
uvm_fault_unwire() calls pmap_change_wiring(..., FALSE),
which now gives the pmap free rein to recycle the mapping
information for that page, which is illegal; the mapping is
still wired (due to the mlock(2)), but now access of the
page could cause a non-protection page fault (disallowed).
NOTE: This could eventually lead to a panic when the user
subsequently munlock(2)'s the buffer and the mapping info
has been recycled for use by another mapping!
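[Editor's sketch] The fix described above, with entry lookup and locking
simplified; uvm_map_lookup_entry() and pmap_change_wiring() follow the era's
interfaces, the rest is illustrative:

#include <uvm/uvm.h>

/*
 * Hedged sketch: during unwire, skip pmap_change_wiring(..., FALSE)
 * while the map entry still claims the address is wired (e.g. by
 * mlock(2)); only drop the pmap-level wiring once no map-level
 * wiring remains.  Locking is elided.
 */
static void
fault_unwire_sketch(vm_map_t map, vaddr_t start, vaddr_t end)
{
	vm_map_entry_t entry;
	vaddr_t va;

	for (va = start; va < end; va += PAGE_SIZE) {
		if (uvm_map_lookup_entry(map, va, &entry) &&
		    entry->wired_count == 0)
			pmap_change_wiring(vm_map_pmap(map), va, FALSE);
	}
}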
the map be at least read-locked to call this function. This requirement
will be taken advantage of in a future commit.
* Write a uvm_fault_unwire() wrapper which read-locks the map and calls
uvm_fault_unwire_locked().
* Update the comments describing the locking constraints of uvm_fault_wire()
and uvm_fault_unwire().
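[Editor's sketch] The wrapper, assuming the locked variant shares the same
signature:

/*
 * Hedged sketch: the unlocked entry point takes the read lock and
 * defers to the locked variant.
 */
void
uvm_fault_unwire(vm_map_t map, vaddr_t start, vaddr_t end)
{
	vm_map_lock_read(map);
	uvm_fault_unwire_locked(map, start, end);
	vm_map_unlock_read(map);
}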
semantics. That is, regardless of the number of mlock/mlockall calls,
an munlock/munlockall actually unlocks the region (i.e. sets wiring count
to 0).
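[Editor's sketch] A small userland illustration of the non-counting semantics
(return values elided for brevity):

#include <stddef.h>
#include <sys/mman.h>

/*
 * Illustration: mlock(2) calls do not nest; a single munlock(2)
 * takes the region's wiring count straight to 0.
 */
void
example(void *buf, size_t len)
{
	(void)mlock(buf, len);
	(void)mlock(buf, len);		/* second lock of the same region */
	(void)munlock(buf, len);	/* region is now fully unlocked */
}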
Add a comment describing why uvm_map_pageable() should not be used for
transient page wirings (e.g. for physio) -- note, it's currently only
(ab)used in this way by a few pieces of code which are known to be
broken, i.e. the Amiga and Atari pmaps, and i386 and pc532 if PMAP_NEW is
not used. The i386 GDT code uses uvm_map_pageable(), but in a safe
way, and could be trivially converted to use uvm_fault_wire() instead.