each vm_page structure. Add a VM_MDPAGE_INIT() macro to init this
data when pages are initialized by UVM. These macros are mandatory,
but ports may #define them to nothing if they are not needed/used.
This deprecates struct pmap_physseg. As a transitional measure,
allow a port to #define PMAP_PHYSSEG so that it can continue to
use it until its pmap is converted to use VM_MDPAGE_MEMBERS.
Use all this stuff to eliminate a lot of extra work in the Alpha
pmap module (it's smaller and faster now). Changes to other pmap
modules will follow.
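As an illustration of the new hooks (not the Alpha code itself), a port's
<machine/vmparam.h> might look roughly like this; the member names used
here are hypothetical:

    #define	VM_MDPAGE_MEMBERS					\
    	struct {						\
    		struct pv_entry *mdpg_pvhead;	/* pv entries */	\
    		int mdpg_attrs;			/* page attributes */	\
    	} mdpage;

    #define	VM_MDPAGE_INIT(pg)					\
    do {								\
    	(pg)->mdpage.mdpg_pvhead = NULL;			\
    	(pg)->mdpage.mdpg_attrs = 0;				\
    } while (/*CONSTCOND*/ 0)

A port with no per-page MD data simply defines both macros to expand to
nothing.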
based on the existing net/if_spppsubr.c stuff.
While there are completely userland (bpf-based) implementations
available, they have a vastly larger per-packet overhead, causing major
CPU load and higher latency. On an i386-based router running a 486DX at
50MHz, my line (768kBit/s downstream) was limited to a varying 10 to
20 kByte/s effective download rate. With this implementation I get the
full bandwidth (~85kByte/s).
This is client side only. Arguably the right way to add full PPPoE
support (including the server side) would be a variation of the ppp line
discipline and appropriate modifications to pppd. I promise any help I
can give to anyone doing that - but I needed this really fast. Besides,
on low-memory NAT boxes with typically a single PPPoE connection, this
implementation is more lightweight than a pppd-based one, which nicely
fits my needs.
algorithm (Solaris calls this "Bin Hopping").
This implementation currently relies on MD code to define a
constant defining the number of buckets. This will change
reasonably soon (MD code will be able to dynamically size
the bucket array).
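The bucket selection itself is a simple round-robin hop; a minimal
sketch (the constant and variable names below are placeholders for
whatever the MD code provides):

    #define VM_NFREE_BUCKETS	8	/* stand-in for the MD constant */

    static unsigned int next_bucket;	/* advances on every allocation */

    static unsigned int
    bucket_select(void)
    {
    	unsigned int b = next_bucket;

    	next_bucket = (next_bucket + 1) % VM_NFREE_BUCKETS;
    	return b;
    }

Hopping to a different bucket on each allocation spreads consecutive
allocations across the buckets, which is the point of the algorithm.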
to <sys/types.h> and <sys/stdint.h>.
* Add a new C99 <stdint.h> header, which provides integer types of
explicit width, related limits and integer constant macros.
* Extend <inttypes.h> to provide <stdint.h> definitions and format
macros for printf() and scanf().
* Add C99 strtoimax() and strtoumax() functions.
* Use the latter within scanf().
* Add C99 %j, %t and %z printf()/scanf() conversions for
intmax_t, ptrdiff_t and size_t arguments, respectively.
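A small userland example exercising the new interfaces (purely
illustrative, not code from the commit):

    #include <inttypes.h>	/* format macros; also exposes <stdint.h> types */
    #include <stddef.h>	/* ptrdiff_t */
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
    	intmax_t v = strtoimax("-12345678901234", NULL, 10);
    	size_t len = sizeof(uint64_t);
    	ptrdiff_t off = 4;

    	printf("v = %" PRIdMAX ", INTMAX_MAX = %" PRIdMAX "\n", v, INTMAX_MAX);
    	printf("v = %jd, len = %zu, off = %td\n", v, len, off);
    	printf("1 << 40 = %" PRIu64 "\n", (uint64_t)(UINT64_C(1) << 40));
    	return EXIT_SUCCESS;
    }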
using the cycle counter. MP-safeness is achieved by giving each
CPU its own PCC frequency variables, and kicking the non-primary
processors via an IPI once per second.
Based on the sample code from David Mills' "A Kernel Model for
Precision Timekeeping".
Both models tested and seem to be quite stable and fast.
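The interpolation idea, in a much-simplified sketch; the structure
members and the cpu_rpcc() helper are placeholders, not the committed
code:

    #include <sys/types.h>
    #include <sys/time.h>

    extern uint32_t cpu_rpcc(void);		/* read the cycle counter */

    struct pcc_state {
    	uint32_t pcc_base;	/* PCC value at the last hardclock tick */
    	uint32_t pcc_freq;	/* cycles per second, tracked per CPU */
    	struct timeval time;	/* wall time at the last tick */
    };

    void
    pcc_microtime(struct pcc_state *ps, struct timeval *tvp)
    {
    	uint32_t delta = cpu_rpcc() - ps->pcc_base;	/* mod 2^32 */

    	*tvp = ps->time;
    	tvp->tv_usec += (uint64_t)delta * 1000000 / ps->pcc_freq;
    	while (tvp->tv_usec >= 1000000) {
    		tvp->tv_sec++;
    		tvp->tv_usec -= 1000000;
    	}
    }

Keeping pcc_base/pcc_freq per CPU is what makes this MP-safe; the
once-per-second IPI lets the non-primary processors refresh their
copies.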
Thanks to:
- Hans Hubner <hans@Huebner.org> for giving me the cards for testing
- Georg Klug of Syskonnect, who provided me with hw docs for these cards,
very promptly and willingly - I wish all vendors would be like this
- Alfred Arnold, Linux SKNET driver author, for giving me valuable Syskonnect
contact :)
reasonable transmit/receive buffer count.
This is needed for e.g. SKNET adapters, which use the top 30 bytes of
their 16KB memory window to map registers and PROM, so not all of the
memory is available for buffers.
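The sizing arithmetic is straightforward; a sketch with illustrative
numbers (the per-buffer size and names here are hypothetical):

    #define MEM_TOTAL	(16 * 1024)	/* 16KB shared-memory window */
    #define MEM_RESERVED	30		/* top 30 bytes: registers + PROM */
    #define BUF_SIZE	1536		/* hypothetical per-packet buffer */

    static int
    usable_buffers(void)
    {
    	/* only the memory below the register/PROM area holds buffers */
    	return (MEM_TOTAL - MEM_RESERVED) / BUF_SIZE;
    }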
to 16 or 4 (depending on the capabilities of the adapter), as it was
before the thorpej_scsipi integration.
Waiting for feedback to learn whether the problem with openings set to
AHC_SCB_MAX existed before.
- Add the missing powerhook_disestablish() call in ex_detach().
- Sync with below. Original commit log message:
Add new powerhook argument values, PWR_SOFTSUSPEND, PWR_SOFTSTANDBY and
PWR_SOFTRESUME. APM calls the power hooks with these values at normal
interrupt priority level, while the other values are delivered under
splhigh() protection.
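A sketch of how a driver power hook might split its work across the new
values (the function and softc names are illustrative):

    static void
    ex_power(int why, void *arg)
    {
    	struct ex_softc *sc = arg;

    	switch (why) {
    	case PWR_SUSPEND:
    	case PWR_STANDBY:
    		/* called at splhigh(): quick hardware quiescing only */
    		break;
    	case PWR_RESUME:
    		/* called at splhigh(): quick hardware wake-up only */
    		break;
    	case PWR_SOFTSUSPEND:
    	case PWR_SOFTSTANDBY:
    		/* normal interrupt priority: stop the interface, etc. */
    		break;
    	case PWR_SOFTRESUME:
    		/* normal interrupt priority: reinitialize the chip */
    		break;
    	}
    }

Such a hook would be registered with powerhook_establish(ex_power, sc)
at attach time and removed with powerhook_disestablish() in ex_detach(),
per the first item above.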
- We need to skip PGO_DONTCARE pages as well.
- ``npages'' returned by VOP_GETPAGES for the lower vp doesn't count
those pages in this case, so just looping ``npages'' times is
insufficient. Loop until that many real pages have been seen instead
(see the sketch below).
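The loop shape, roughly (a fragment with illustrative names;
PGO_DONTCARE is the pager's "don't care" sentinel):

    /* walk the array until ``npages'' real pages have been handled */
    for (i = 0; npages > 0; i++) {
    	pg = pgs[i];
    	if (pg == NULL || pg == PGO_DONTCARE)
    		continue;		/* not a real page: skip it */
    	/* ... process the real page here ... */
    	npages--;
    }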
This is a completely rewritten scsipi_xfer execution engine, and the
associated changes to HBA drivers. Overview of changes & features:
- All xfers are queued in the mid-layer, rather than doing so in an
ad-hoc fashion in individual adapter drivers.
- Adapter/channel resource management in the mid-layer, avoids even trying
to start running an xfer if the adapter/channel doesn't have the resources.
- Better communication between the mid-layer and the adapters.
- Asynchronous event notification mechanism from adapter to mid-layer and
peripherals.
- Better peripheral queue management: freeze/thaw, sorted requeueing during
recovery, etc.
- Clean separation of peripherals, adapters, and adapter channels (no more
scsipi_link).
- Kernel thread for each scsipi_channel makes error recovery much easier
(no more dealing with interrupt context when recovering from an error).
- Mid-layer support for tagged queueing: commands can have the tag type
set explicitly, tag IDs are allocated in the mid-layer (thus eliminating
the need to use buggy tag ID allocation schemes in many adapter drivers).
- Support for QUEUE FULL and CHECK CONDITION status in the mid-layer; the command
will be requeued, or a REQUEST SENSE will be sent as appropriate.
Just before the merge, syssrc was tagged with thorpej_scsipi_beforemerge.
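To give a feel for the adapter-side interface described in the list
above, here is a rough sketch of a request entry point in the style of
scsipi(9); the driver name is made up and all chip handling is omitted:

    static void
    foo_scsipi_request(struct scsipi_channel *chan,
        scsipi_adapter_req_t req, void *arg)
    {
    	switch (req) {
    	case ADAPTER_REQ_RUN_XFER:
    		/*
    		 * arg is a struct scsipi_xfer *; the mid-layer has
    		 * already queued it and checked channel resources.
    		 * Start it on the hardware and call scsipi_done()
    		 * when it completes.
    		 */
    		break;
    	case ADAPTER_REQ_GROW_RESOURCES:
    		/* optionally allocate more openings/control blocks */
    		break;
    	case ADAPTER_REQ_SET_XFER_MODE:
    		/* negotiate sync/wide/tagged parameters for a target */
    		break;
    	}
    }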
there are some pathological cases that cause pmap_page_protect()
to be called on a wired page to revoke all mappings. We
need to skip mappings that are actually wired to prevent Bad
things from happening later.
THIS IS JUST A WORK-AROUND. We need to prevent the pathological
behavior from happening in UVM to begin with. But it's unclear
what the right solution is there, right now.
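The shape of the work-around, roughly (the pv-list fields and helper
names here are placeholders for the Alpha pmap's real ones):

    for (pv = pvh->pvh_list; pv != NULL; pv = nextpv) {
    	nextpv = pv->pv_next;
    	if (pmap_pte_w(pv->pv_pte)) {
    		/* wired mapping: leave it intact (work-around) */
    		continue;
    	}
    	remove_one_mapping(pv->pv_pmap, pv->pv_va);	/* placeholder */
    }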
need to explicitly relatch the interrupt when firing it up again. So, in the
trigger routines, explicitly disable and reenable the interrupt to relatch it,
like we do in the interrupt routine.
Also clean up some broken loop overrun checks.
My ES1371 seems to be more reliable now, but I'm not going to pretend to fully
understand this chip.
array of those structures at the end of the pmap structure. We compute
the size of the pmap structure based on the maximum CPU ID for a
particular machine. This gives us better cache behavior and better
memory footprint for the ASN info.
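Schematically (member names illustrative, not the committed structure):

    struct pmap_asn_info {
    	unsigned int	pma_asn;	/* this CPU's ASN for the pmap */
    	unsigned long	pma_asngen;	/* ASN generation number */
    };

    struct pmap_sketch {
    	/* ... the usual pmap members ... */
    	struct pmap_asn_info pm_asni[1];	/* grows with the CPU count */
    };

    /* size to allocate, given the maximum CPU ID on this machine */
    #define PMAP_SIZEOF(maxcpuid)					\
    	(sizeof(struct pmap_sketch) +				\
    	 (maxcpuid) * sizeof(struct pmap_asn_info))

Each CPU then indexes pm_asni[] with its own CPU ID.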
Three different IRQs can be selected for each event: 9, 11, or 13
(the choice also selects the hardware priority). More events will be
added as they are discovered. Do not use shb_intr_establish() to
register IRQ 9, 11
or 13 anymore.
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).
These calls are relatively conservative. It may be possible to
optimize these a little more.
so do so. This allows us to call uvm_pageboot_alloc() before
pmap_bootstrap().
Also, the virtual_start variable is unneeded in the Alpha pmap
module, and virtual_end (and the mostly-unused-except-by-bus_dma
variables avail_start and avail_end) can be `computed' at the
same time.
interrupts are properly reset on PS/2 now.
Handle the slightly different PS/2 CMOS layout and get/set the century
byte as appropriate. The check for a valid CMOS CRC checksum is not
implemented yet; I don't currently know the algorithm they use.
The info about the PS/2 CMOS was taken from Padgett Peterson's
x86/MSDOS Interrupt List, release 60.
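The century handling amounts to a BCD get/set of one NVRAM byte; a
sketch (the offset and the cmos_read()/cmos_write() accessors are
placeholders, not the actual driver code):

    extern unsigned int cmos_read(int);		/* placeholder accessors */
    extern void cmos_write(int, unsigned int);

    #define NVRAM_CENTURY	0x37	/* hypothetical PS/2 century-byte offset */

    static int
    clock_get_century(void)
    {
    	unsigned int bcd = cmos_read(NVRAM_CENTURY);

    	return (bcd >> 4) * 10 + (bcd & 0x0f);	/* BCD -> binary */
    }

    static void
    clock_set_century(int century)
    {
    	cmos_write(NVRAM_CENTURY, ((century / 10) << 4) | (century % 10));
    }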
calling pmap_steal_memory() directly. On these platforms, since
uvm_pageboot_alloc() is a wrapper around pmap_steal_memory(), there
is no functional change. This is merely for API consistency.
which have pmap_steal_memory(). This is to reduce the API differences
between pmaps that implement pmap_steal_memory() and pmaps which do
not.
Note that pmap_steal_memory() needs to adjust *vstartp and/or
*vendp only if it used addresses within the range provided to UVM
via the pmap_virtual_space() call. I.e. it is not necessary to do
so in any current pmap_steal_memory() implementation.
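In other words, a typical implementation looks roughly like this (the
helper name is a placeholder):

    extern vaddr_t steal_from_physsegs(vsize_t);	/* placeholder helper */

    vaddr_t
    pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp)
    {
    	vaddr_t va;

    	size = round_page(size);
    	va = steal_from_physsegs(size);

    	/*
    	 * Nothing was taken from the [*vstartp, *vendp) range handed
    	 * to UVM by pmap_virtual_space(), so no adjustment is needed.
    	 */
    	return va;
    }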
to always be cleared in edmcadone(); otherwise, if a write via the
bounce buffer is followed by a read directly into the buf, the read
would return trashed data (the buf data would get overwritten by the
contents of the bounce buffer in edmcadone()).
Reset b_resid as necessary when the i/o is done, too.
g/c some unneeded stuff, use lockmgr()-style locking in ed_[un]lock(),
and better avoid some deadlocks.
These changes make the driver quite a bit more stable. It's now
reliable enough to newfs the drive and use it with a read/write
filesystem.
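The intended completion-path behaviour for the bounce-buffer case, as a
sketch (a fragment; the flag and field names are illustrative):

    if (sc->sc_flags & EDF_BOUNCEBUF) {
    	if (bp->b_flags & B_READ)
    		memcpy(bp->b_data, sc->sc_bouncebuf, bp->b_bcount);
    	sc->sc_flags &= ~EDF_BOUNCEBUF;	/* clear even after a write */
    }
    bp->b_resid = bp->b_bcount - bytes_done;	/* reset b_resid, too */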
call overhead is incurred as we start sprinkling pmap_update() calls
throughout the source tree (no pmaps currently defer operations, but
we are adding the infrastructure to allow them to do so).
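The usage pattern being introduced looks like this (sketch only; the
exact pmap_update() argument list is as documented in pmap(9) for the
release in question):

    /* batch a series of mapping operations ... */
    for (va = sva; va < eva; va += PAGE_SIZE, pa += PAGE_SIZE)
    	pmap_enter(pmap, va, pa, VM_PROT_READ | VM_PROT_WRITE, 0);

    /* ... then flush any deferred pmap work once at the end */
    pmap_update(pmap);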
prevents us from losing the locked state of the old vnode.
fvdl thinks the old vnode is certain to be locked at this point. I've put in
a KASSERT to be on the safe side.
This seems to fix PR kern/12661.
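The safety check, in spirit (exact placement in the real change may
differ):

    KASSERT(VOP_ISLOCKED(vp));	/* fvdl: the old vnode should be locked here */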