This is a complete rewrite of the scsipi_xfer execution engine, together
with the associated changes to HBA drivers. Overview of changes & features:
- All xfers are queued in the mid-layer, rather than doing so in an
ad-hoc fashion in individual adapter drivers.
- Adapter/channel resource management in the mid-layer avoids even trying
to start an xfer if the adapter/channel doesn't have the resources to run
it.
- Better communication between the mid-layer and the adapters.
- Asynchronous event notification mechanism from adapter to mid-layer and
peripherals.
- Better peripheral queue management: freeze/thaw, sorted requeueing during
recovery, etc.
- Clean separation of peripherals, adapters, and adapter channels (no more
scsipi_link).
- Kernel thread for each scsipi_channel makes error recovery much easier
(no more dealing with interrupt context when recovering from an error).
- Mid-layer support for tagged queueing: commands can have the tag type
set explicitly, and tag IDs are allocated in the mid-layer (thus eliminating
the need for the buggy tag ID allocation schemes used in many adapter
drivers).
- Support for QUEUE FULL and CHECK CONDITION status in the mid-layer; the
command is requeued, or a REQUEST SENSE is sent, as appropriate (a rough
sketch of this completion path follows this entry).
Just before the merge, syssrc was tagged with thorpej_scsipi_beforemerge.
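
A rough, hypothetical sketch of that completion path -- the type,
constant, and function names here (struct xfer, STS_*, requeue_xfer(),
send_request_sense()) are made up for illustration and are not the
actual scsipi interfaces:

    /*
     * Hypothetical completion handler: decide what to do based on the
     * SCSI status byte returned with the xfer.
     */
    #define STS_GOOD            0x00
    #define STS_CHECK_CONDITION 0x02
    #define STS_QUEUE_FULL      0x28

    struct xfer {
        int status;             /* SCSI status byte from the device */
        int done;               /* set once the xfer has completed OK */
    };

    void requeue_xfer(struct xfer *);        /* put back on the periph queue */
    void send_request_sense(struct xfer *);  /* issue REQUEST SENSE for it */

    void
    xfer_complete(struct xfer *xs)
    {
        switch (xs->status) {
        case STS_QUEUE_FULL:
            /* Device queue is full: requeue and retry later. */
            requeue_xfer(xs);
            break;
        case STS_CHECK_CONDITION:
            /* Device reported an error: fetch the sense data first. */
            send_request_sense(xs);
            break;
        default:
            xs->done = 1;
            break;
        }
    }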
array of those structures at the end of the pmap structure. We compute
the size of the pmap structure based on the maximum CPU ID for a
particular machine. This gives us better cache behavior and better
memory footprint for the ASN info.
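
A minimal sketch of the layout being described; the field and macro
names below are illustrative approximations of the Alpha pmap's, not
copied from it:

    struct pmap_asn_info {
        unsigned int  pma_asn;     /* this CPU's ASN for the pmap */
        unsigned long pma_asngen;  /* ASN generation number */
    };

    struct pmap {
        /* ... the rest of the pmap fields ... */
        int pm_count;                        /* stand-in for those fields */
        struct pmap_asn_info pm_asni[1];     /* really one entry per CPU */
    };

    /*
     * Bytes to allocate for a pmap on a machine whose largest CPU ID is
     * (ncpuids - 1): the base structure already holds one ASN slot, so
     * add (ncpuids - 1) more.
     */
    #define PMAP_SIZEOF(ncpuids) \
        (sizeof(struct pmap) + \
         sizeof(struct pmap_asn_info) * ((ncpuids) - 1))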
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).
These calls are relatively conservative. It may be possible to
optimize these a little more.
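
To illustrate what "relatively conservative" means here, a hypothetical
version of one entry point that simply holds the pmap's lock across the
whole operation (pmap_lock(), pmap_unlock(), and pmap_do_remove() are
made-up names):

    struct pmap;
    typedef unsigned long vaddr_t;           /* stand-in for the kernel type */

    void pmap_lock(struct pmap *);           /* hypothetical: take pmap's lock */
    void pmap_unlock(struct pmap *);         /* hypothetical: release it */
    void pmap_do_remove(struct pmap *, vaddr_t, vaddr_t);

    void
    pmap_remove(struct pmap *pmap, vaddr_t sva, vaddr_t eva)
    {
        /*
         * Conservative: hold the per-pmap lock for the whole range
         * removal.  Dropping and re-taking it between PT pages would be
         * the sort of finer-grained optimization alluded to above.
         */
        pmap_lock(pmap);
        pmap_do_remove(pmap, sva, eva);
        pmap_unlock(pmap);
    }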
so do so. This allows us to call uvm_pageboot_alloc() before
pmap_bootstrap().
Also, the virtual_start variable is unneeded in the Alpha pmap
module, and virtual_end (and the mostly-unused-except-by-bus_dma
variables avail_start and avail_end) can be `computed' at the
same time.
calling pmap_steal_memory() directly. On these platforms, since
uvm_pageboot_alloc() is a wrapper around pmap_steal_memory(), there
is no functional change. This is merely for API consistency.
which have pmap_steal_memory(). This is to reduce the API differences
between pmaps that implement pmap_steal_memory() and pmaps which do
not.
Note that pmap_steal_memory() needs to adjust *vstartp and/or
*vendp only if it used addresses within the range provided to UVM
via the pmap_virtual_space() call. I.e. it is not necessary to do
so in any current pmap_steal_memory() implementation.
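
A hedged sketch of that contract, using the pmap(9) prototype for
pmap_steal_memory() but with a made-up, direct-mapped body
(steal_physical_pages(), direct_map_address(), and round_page_size()
are hypothetical helpers):

    typedef unsigned long vaddr_t, vsize_t, paddr_t;  /* kernel-type stand-ins */

    paddr_t steal_physical_pages(vsize_t);   /* hypothetical: grab boot-time pages */
    vaddr_t direct_map_address(paddr_t);     /* hypothetical: PA -> direct-mapped VA */
    vsize_t round_page_size(vsize_t);        /* hypothetical: round up to page size */

    vaddr_t
    pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp)
    {
        paddr_t pa;

        size = round_page_size(size);

        /* Take physical pages UVM has not yet been told about... */
        pa = steal_physical_pages(size);

        /*
         * ...and return a mapping for them.  This sketch uses a direct
         * mapping, so it consumes nothing from the VA range that will
         * later be reported via pmap_virtual_space(); *vstartp and
         * *vendp are therefore left untouched, exactly the case the
         * note above describes.
         */
        (void)vstartp;
        (void)vendp;
        return (direct_map_address(pa));
    }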
call overhead is incurred as we start sprinkling pmap_update() calls
throughout the source tree (no pmaps currently defer operations, but
we are adding the infrastructure to allow them to do so).
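
The caller-side pattern being introduced looks roughly like the sketch
below; it assumes the standard pmap(9) entry points and the
pmap_update(pmap) form, and the exact headers and prototypes of the era
may have differed slightly:

    #include <sys/param.h>
    #include <uvm/uvm_extern.h>     /* pmap_enter(), pmap_update(), VM_PROT_* */

    void
    map_some_pages(pmap_t pmap, vaddr_t va, paddr_t pa, int npages)
    {
        int i;

        /* Make a batch of mapping changes... */
        for (i = 0; i < npages; i++)
            (void)pmap_enter(pmap, va + i * PAGE_SIZE, pa + i * PAGE_SIZE,
                VM_PROT_READ | VM_PROT_WRITE, PMAP_WIRED);

        /*
         * ...none of which is guaranteed to be visible to the MMU until
         * this point; a pmap that defers its work flushes it here.
         */
        pmap_update(pmap);
    }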
SPINLOCK_SPIN_HOOK, so that we actually check for
pending IPIs on the Alpha more than once. Also,
when we call alpha_ipi_process(), make sure to go
to splipi().
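
Condensed sketch of how the hook fits together; the Alpha names
(curcpu(), ci_ipis, splipi(), alpha_ipi_process()) are real, but their
exact signatures and the macro body here are approximations:

    #define SPINLOCK_SPIN_HOOK                                        \
    do {                                                              \
        struct cpu_info *__ci = curcpu();                             \
                                                                      \
        if (__ci->ci_ipis != 0) {                                     \
            int __s = splipi();         /* block further IPIs */      \
            alpha_ipi_process(__ci);    /* handle the pending ones */ \
            splx(__s);                                                \
        }                                                             \
    } while (/*CONSTCOND*/ 0)

    /* A spinning acquire path then keeps servicing IPIs while it waits: */
    void
    spin_until_unlocked(volatile int *lock_data)
    {
        while (*lock_data != 0)
            SPINLOCK_SPIN_HOOK;
    }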
to arrive here referencing the kernel_lev1map without having the
RESERVED ASN -- another CPU may have caused pmap_lev1map_destroy()
to be called, and that routine only invalidates the ASN for the
CPU that called it. So, in the MULTIPROCESSOR case, simply assign
the RESERVED ASN if we reference the kernel_lev1map rather than
asserting that we already have the RESERVED ASN. Thanks to Bill
Sommerfeld for helping me track down the problem.
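
A condensed, illustrative version of that adjustment (the names
pm_lev1map, kernel_lev1map, pm_asni, and PMAP_ASN_RESERVED follow the
Alpha pmap's conventions, but the body is abridged):

    unsigned int
    pmap_asn_alloc(struct pmap *pmap, long cpu_id)
    {
        if (pmap->pm_lev1map == kernel_lev1map) {
    #ifdef MULTIPROCESSOR
            /*
             * Another CPU may have called pmap_lev1map_destroy() on
             * this pmap and invalidated only its own ASN, so we can
             * legitimately get here without already holding the
             * reserved ASN -- just assign it.
             */
            pmap->pm_asni[cpu_id].pma_asn = PMAP_ASN_RESERVED;
    #else
            KASSERT(pmap->pm_asni[cpu_id].pma_asn == PMAP_ASN_RESERVED);
    #endif
            return (PMAP_ASN_RESERVED);
        }

        /* ... otherwise allocate (or reuse) an ASN for this CPU ... */
        return (pmap->pm_asni[cpu_id].pma_asn);
    }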
Also add a new IPI that causes a CPU to re-activate its address
space if the pmap it's using changes level 1 maps (this probably
won't happen very often, but it's correct to have it).
This makes Alpha MP kernels boot multiuser. In fact, this commit
is being made from my dual-CPU AlphaServer 1200 running an MP kernel.
that became apparent when UBC was added: store a pointer to
the process itself, not a pointer to ci->ci_curproc.
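
A generic illustration of the pointer mix-up, with simplified stand-in
structures rather than the code that was actually fixed:

    struct proc;

    struct cpu_info {
        struct proc *ci_curproc;  /* process currently running on this CPU */
    };

    struct remembered {
        struct proc **buggy;      /* tracks whatever later runs on the CPU */
        struct proc *fixed;       /* the process itself */
    };

    void
    remember(struct remembered *r, struct cpu_info *ci)
    {
        r->buggy = &ci->ci_curproc;   /* wrong: aliases the per-CPU slot */
        r->fixed = ci->ci_curproc;    /* right: pins this process */
    }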
This gets us back to where we were before UBC went in: MP
kernels get to single-user mode, and can run processes on
both CPUs, but things go south when we try to come into
multi-user mode.
reserving RAM in the bus_mem extent map. Problem pointed
out by Artur Grabowski.
- Work around a slightly annoying bit of behavior exhibited by
the UP1000 firmware: it lumps the space consumed by the "ISA hole"
together with the two chunks of RAM on either side of the hole that
are used by the PALcode into a single "reserved for PALcode" MDDT
entry. We must take this into account when reserving RAM in the
bus_mem extent map.
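
A sketch of the reservation step using the extent(9) interface;
extent_alloc_region() and EX_NOWAIT are the real API, while struct
ram_chunk and the surrounding loop are illustrative:

    #include <sys/param.h>
    #include <sys/systm.h>    /* printf() */
    #include <sys/extent.h>   /* extent_alloc_region(), EX_NOWAIT */

    /* Hypothetical descriptor for a chunk of RAM discovered at boot. */
    struct ram_chunk {
        u_long start;         /* first byte of the chunk */
        u_long size;          /* size in bytes */
    };

    /*
     * Mark each RAM chunk as already allocated in the bus_mem extent
     * map, so bus_space users can never map device space on top of RAM.
     */
    void
    reserve_ram_in_busmem(struct extent *busmem_ex,
        const struct ram_chunk *ram, int nram)
    {
        int i, error;

        for (i = 0; i < nram; i++) {
            error = extent_alloc_region(busmem_ex,
                ram[i].start, ram[i].size, EX_NOWAIT);
            if (error)
                printf("WARNING: unable to reserve RAM at 0x%lx-0x%lx "
                    "in bus_mem map (error %d)\n", ram[i].start,
                    ram[i].start + ram[i].size - 1, error);
        }
    }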
spawned kthread/process runs at IPL_0 instead of whatever IPL the
parent was running at.
This appears to fix the NTP clock stability problems observed on some
alpha systems; the clock appears stable even when there's heavy
raidframe (i.e., kthread-intensive) I/O under way.
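
Not the actual fix (which, as noted above, concerns the IPL the spawned
thread starts with), but an illustration of the invariant in a kthread
body, assuming the usual spl(9) interface is in scope:

    void
    example_kthread(void *arg)
    {
        (void)arg;

        /* Establish a known interrupt priority level before working. */
        (void)spl0();

        for (;;) {
            /* ... the thread's normal work loop ... */
        }
    }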