Any problems reported by testers have been fixed, and massive
cross-compiling of kernels has shown that any problems that remain
with actually building kernels are not related to this.
This will allow improvements to the pmaps so that they can more easily defer expensive operations, e.g. TLB/cache flushes, until the last possible moment.
Currently this is a no-op on most platforms, so they should see no difference.
Reviewed by Jason.
Add commented out 'TCP_DEBUG # Record last TCP_NDEBUG packets with SO_DEBUG',
along with the comment '4.2BSD TCP/IP bug compat. Not recommended'.
(All hail amiga and atari which make some attempt to automate the
multiplicity of config files...)
Add commented-out tunable options for System V semaphores. It appears that
there are no overrides in the code, and each file has the following added:
options SYSVSEM # System V semaphores
+#options SEMMNI=10 # number of semaphore identifiers
+#options SEMMNS=60 # number of semaphores in system
+#options SEMUME=10 # max number of undo entries per process
+#options SEMMNU=30 # number of undo structures in system
options SYSVSHM # System V shared memory
If anyone thinks that this is incorrect for any of these files, please
correct it.
Note - the i386 port was not forgotten. It was done separately.
This is a completely rewritten scsipi_xfer execution engine, and the
associated changes to HBA drivers. Overview of changes & features:
- All xfers are queued in the mid-layer, rather than doing so in an
ad-hoc fashion in individual adapter drivers.
- Adapter/channel resource management in the mid-layer, avoids even trying
to start running an xfer if the adapter/channel doesn't have the resources.
- Better communication between the mid-layer and the adapters.
- Asynchronous event notification mechanism from adapter to mid-layer and
peripherals.
- Better peripheral queue management: freeze/thaw, sorted requeueing during
recovery, etc.
- Clean separation of peripherals, adapters, and adapter channels (no more
scsipi_link).
- Kernel thread for each scsipi_channel makes error recovery much easier
(no more dealing with interrupt context when recovering from an error).
- Mid-layer support for tagged queueing: commands can have the tag type
set explicitly, tag IDs are allocated in the mid-layer (thus eliminating
the need to use buggy tag ID allocation schemes in many adapter drivers).
- Support for QUEUE FULL and CHECK CONDITION status in mid-layer; the command
will be requeued, or a REQUEST SENSE will be sent as appropriate.
Just before the merge, syssrc was tagged with thorpej_scsipi_beforemerge.
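To make the mid-layer queueing and freeze/thaw behaviour described above concrete, here is a hedged sketch. It is an illustration only, not the actual scsipi code; the structure and function names (xfer_sketch, periph_sketch, periph_enqueue, etc.) are hypothetical.

#include <sys/param.h>
#include <sys/queue.h>

struct xfer_sketch {
    TAILQ_ENTRY(xfer_sketch) x_q;
    int x_tagtype;              /* explicit tag message type, if any */
    int x_tagid;                /* tag ID handed out by the mid-layer */
};

struct periph_sketch {
    TAILQ_HEAD(, xfer_sketch) p_queue;  /* xfers queued in the mid-layer */
    int p_freezecnt;                    /* >0: queue is frozen */
};

static void
periph_init(struct periph_sketch *p)
{
    TAILQ_INIT(&p->p_queue);
    p->p_freezecnt = 0;
}

/* Queue an xfer; the mid-layer, not the HBA driver, owns the queue. */
static void
periph_enqueue(struct periph_sketch *p, struct xfer_sketch *xs)
{
    TAILQ_INSERT_TAIL(&p->p_queue, xs, x_q);
}

/* Freeze the queue, e.g. while error recovery is in progress. */
static void
periph_freeze(struct periph_sketch *p)
{
    p->p_freezecnt++;
}

/* Thaw; only when the count drops to zero may queued xfers run again. */
static void
periph_thaw(struct periph_sketch *p)
{
    if (p->p_freezecnt > 0)
        p->p_freezecnt--;
}

/* Pull the next runnable xfer, or NULL if the queue is frozen or empty. */
static struct xfer_sketch *
periph_dequeue(struct periph_sketch *p)
{
    struct xfer_sketch *xs;

    if (p->p_freezecnt > 0 || (xs = TAILQ_FIRST(&p->p_queue)) == NULL)
        return NULL;
    TAILQ_REMOVE(&p->p_queue, xs, x_q);
    return xs;
}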
- pmap_enter()
- pmap_remove()
- pmap_protect()
- pmap_kenter_pa()
- pmap_kremove()
as described in pmap(9).
These calls are relatively conservative. It may be possible to
optimize these a little more.
Use uvm_pageboot_alloc() instead of calling pmap_steal_memory() directly. On these platforms, since
uvm_pageboot_alloc() is a wrapper around pmap_steal_memory(), there
is no functional change. This is merely for API consistency.
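For illustration, a hedged sketch of what the substitution looks like in MD bootstrap code; the function name bootstrap_alloc_example is hypothetical.

#include <sys/param.h>
#include <uvm/uvm_extern.h>

/* Allocate boot-time pages through the MI wrapper instead of the pmap. */
static vaddr_t
bootstrap_alloc_example(vsize_t npages)
{
    /*
     * Before: pmap_steal_memory(npages << PAGE_SHIFT, NULL, NULL);
     * After:  the MI wrapper, which on these platforms simply calls
     *         pmap_steal_memory() itself.
     */
    return uvm_pageboot_alloc(npages << PAGE_SHIFT);
}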
and link it directly to db_command_table[] so that it's not necessary
to do this at runtime. Make db_machine_command_table[] const on all ports.
g/c now unneeded stuff, like db_machine_commands_install() and db_machine_init().
Patch written by enami.
rather than assigning to the whole field, set or clear individual flags,
which implies that the B_BUSY and B_INVAL flags will remain set.
This allows us to make the assertion in brelse() that B_BUSY is set,
which is the purpose of all this.
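As a hedged illustration of the flag discipline (the helper name mark_buf_invalid is hypothetical; struct buf and the B_* flags are the usual <sys/buf.h> definitions):

#include <sys/param.h>
#include <sys/buf.h>

static void
mark_buf_invalid(struct buf *bp)
{
    /*
     * Not "bp->b_flags = B_INVAL;" -- that would clobber B_BUSY and
     * every other flag.  Touch only the bits we mean to change, so
     * brelse() can assert that B_BUSY is still set.
     */
    bp->b_flags |= B_INVAL;
    bp->b_flags &= ~(B_DONE | B_DELWRI);
}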
without MII. It supports 100BaseTX only. Half duplex/full duplex can
be specified manually, but there is no autonegotiation functionality.
XXX: It takes 34 seconds before sending/receiving packets on the wire
after initial setup. It is obviously a bug because the board
just works fine on NEWS-OS, but I cannot find what's wrong...
Once it starts working, it seems there are no problems.
for each driver to indicate whether the interrupt has been handled or not.
Add a prototype for the apbus_dmatag_init() function, which allocates memory
and initializes its members, including the bus space handle used to flush the
DMA write cache. Note that the allocated pointer must be freed.
Clear the interrupt mask for the specified interrupt in apbus_intr_establish().
Add DMA support; define apbus_dmamap_sync() to flush the DMA write cache by
accessing the corresponding I/O port.
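The write-cache flush amounts to something like the following hedged sketch; the register offset and the names here are hypothetical, not the real newsmips code.

#include <sys/param.h>
#include <machine/bus.h>

#define DMA_FLUSH_REG   0x00    /* hypothetical flush register offset */

static void
example_dmamap_sync(bus_space_tag_t iot, bus_space_handle_t ioh,
    bus_dmamap_t map, bus_addr_t offset, bus_size_t len, int ops)
{
    /* ... normal cache maintenance for the map would go here ... */

    /*
     * The bus interface caches DMA writes from devices; before the CPU
     * looks at freshly DMA'd data, touch the flush register so the
     * cached writes reach memory.
     */
    if (ops & BUS_DMASYNC_POSTREAD)
        (void)bus_space_read_4(iot, ioh, DMA_FLUSH_REG);
}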
pseudo-device pty 2 # pseudo-terminals (Sysinst needs two)
(Some installers may not be using sysinst, in which case this just reduces
the number of unused ptys from 16 to 2.)
For i386 conf files, no change other than comments.
based on it working already for macppc.
Also add commented out:
#options VNODE_OP_NOINLINE # Don't inline vnode op calls
#options NFS_V2_ONLY # Exclude NFS3 and NQNFS code
as suggestions for additional savings
routine. Works similarly to pmap_prefer(), but allows callers
to specify a minimum power-of-two alignment of the region.
How we ever got along without this for so long is beyond me.
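A hedged sketch of the alignment adjustment such a routine performs (the name align_hint is hypothetical); a caller wanting, say, a 64 KB-aligned region would pass align = 0x10000.

#include <sys/types.h>

/* Round a candidate virtual address up to a power-of-two boundary. */
static vaddr_t
align_hint(vaddr_t va, vsize_t align)
{
    if (align > 1)
        va = (va + (align - 1)) & ~(align - 1);
    return va;
}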
- for sizeof(void *) == 8 archs, this is mandatory. MHLEN is too small
already (less than 80) and there is a chance of unwanted packet loss due
to the m_pullup() restriction.
- for other cases, the change should avoid allocating clusters in most cases
(even when you have an IPv4 IPsec tunnel, or IPv6 with a moderate number of
extension headers).
portmasters: if your arch chokes on the change (high memory usage or
whatever), please back out the change for your arch.
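The m_pullup() restriction referred to above is that the requested contiguous length cannot exceed what fits in a single header mbuf; a hedged sketch (the helper pullup_headers is hypothetical):

#include <sys/param.h>
#include <sys/mbuf.h>

static struct mbuf *
pullup_headers(struct mbuf *m, int hdrlen)
{
    if (hdrlen > MHLEN) {
        /* m_pullup() cannot make this much contiguous; drop the packet. */
        m_freem(m);
        return NULL;
    }
    if (m->m_len < hdrlen && (m = m_pullup(m, hdrlen)) == NULL)
        return NULL;    /* chain was freed by m_pullup() on failure */
    return m;
}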
<vm/pglist.h> -> <uvm/uvm_pglist.h>
<vm/vm_inherit.h> -> <uvm/uvm_inherit.h>
<vm/vm_kern.h> -> into <uvm/uvm_extern.h>
<vm/vm_object.h> -> nothing
<vm/vm_pager.h> -> into <uvm/uvm_pager.h>
also includes a bunch of <vm/vm_page.h> include removals (due to redundancy
with <vm/vm.h>), and a scattering of other similar headers.
"off_t" and the return value is a "paddr_t" to allow mappings
at offsets past 2^31 bytes. Somewhat inspired by FreeBSD, which
only changed the offset to a "vm_offset_t".
Includes updates for the i386, pc532 and sh3 mmmmap from Jason Thorpe.
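A hedged sketch of a device mmap entry point using the widened types; the device name, bounds, and base address are hypothetical, and the failure convention is simplified.

#include <sys/types.h>

#define MYDEV_PADDR 0x80000000UL        /* hypothetical device base */
#define MYDEV_SIZE  0x100000000ULL      /* > 2^31, now expressible in off_t */

paddr_t
mydev_mmap(dev_t dev, off_t off, int prot)
{
    if (off < 0 || off >= (off_t)MYDEV_SIZE)
        return (paddr_t)-1;             /* out of range */
    return (paddr_t)(MYDEV_PADDR + off);
}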
Split the scheduler's state into global and per-CPU scheduler state:
- Global state: sched_qs (run queues), sched_whichqs (bitmap
of non-empty run queues), sched_slpque (sleep queues).
NOTE: These may collectively move into a struct schedstate
at some point in the future.
- Per-CPU state, struct schedstate_percpu: spc_runtime
(time process on this CPU started running), spc_flags
(replaces struct proc's p_schedflags), and
spc_curpriority (usrpri of processes on this CPU).
- Every platform must now supply a struct cpu_info and
a curcpu() macro. Simplify existing cpu_info declarations
where appropriate.
- All references to per-CPU scheduler state now made through
curcpu(). NOTE: this will likely be adjusted in the future
after further changes to struct proc are made.
Tested on i386 and Alpha. Changes are mostly mechanical, but apologies
in advance if it doesn't compile on a particular platform.
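A hedged sketch of the arrangement described in the list above; the member layout is illustrative and simplified, not the exact NetBSD definitions.

#include <sys/types.h>
#include <sys/time.h>

struct schedstate_percpu {
    struct timeval spc_runtime;     /* time the current process started running */
    int spc_flags;                  /* replaces struct proc's p_schedflags */
    u_char spc_curpriority;         /* usrpri of the process on this CPU */
};

struct cpu_info {
    struct schedstate_percpu ci_schedstate; /* per-CPU scheduler state */
    /* ... MD members ... */
};

/*
 * Every port supplies curcpu(); a uniprocessor port can simply return
 * the address of a single statically allocated cpu_info.
 */
extern struct cpu_info cpu_info_store;
#define curcpu()    (&cpu_info_store)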
change these from bp->b_un.b_addr to bp->b_data, as well. This also
allows us more flexibility to experiment with other data buffer types
hung off of struct buf.
it to determine the boot device: mvme68k, pc532, macppc, ofppc. Those
platforms should be changed to use device_register(). In the mean time,
those ports defined __BROKEN_DK_ESTABLISH.
contains the values __SIMPLELOCK_LOCKED and __SIMPLELOCK_UNLOCKED, which
replace the old SIMPLELOCK_LOCKED and SIMPLELOCK_UNLOCKED. These files
are also required to supply inline functions __cpu_simple_lock(),
__cpu_simple_lock_try(), and __cpu_simple_unlock() if locking is to be
supported on that platform (i.e. if MULTIPROCESSOR is defined in the
_KERNEL case). Change these functions to take an int * (&alp->lock_data)
rather than the struct simplelock * itself.
These changes make it possible for userland to use the locking primitives
by including <machine/lock.h>.
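A hedged sketch of what a port's <machine/lock.h> provides; the spin loop here uses GCC __sync builtins as a stand-in for the port's real atomic test-and-set sequence.

#define __SIMPLELOCK_LOCKED     1
#define __SIMPLELOCK_UNLOCKED   0

static __inline void
__cpu_simple_lock(int *alp)             /* called with &alp->lock_data */
{
    while (__sync_lock_test_and_set(alp, __SIMPLELOCK_LOCKED) ==
        __SIMPLELOCK_LOCKED)
        continue;                       /* spin until the lock is ours */
}

static __inline int
__cpu_simple_lock_try(int *alp)
{
    return (__sync_lock_test_and_set(alp, __SIMPLELOCK_LOCKED) ==
        __SIMPLELOCK_UNLOCKED);
}

static __inline void
__cpu_simple_unlock(int *alp)
{
    __sync_lock_release(alp);           /* stores __SIMPLELOCK_UNLOCKED */
}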
cpu_fork() mistakenly created processes forked by proc0, including
kthreads, at splhigh, because [1] proc0's PCB was zero-cleared
during initialization, and [2] a value of 0 in the status register
field caused processes to run at splhigh when a CPU tick was
assigned to them. This mostly doesn't matter, as forked processes
dive immediately into user mode through the proc_trampoline code path;
kthreads, however, never do that and remain at splhigh.
Reported by Ethan Solomita <ethan@geocast.com>.
New callout facility, replacing the old timeout()/untimeout() API:
- Clients supply callout handle storage, thus eliminating problems of
resource allocation.
- Insertion and removal of callouts is constant time, important as
this facility is used quite a lot in the kernel.
The old timeout()/untimeout() API has been removed from the kernel.
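A hedged sketch of a driver using the callout(9) interface (the softc layout and the mydev_* names are hypothetical; callout_init() is shown in its original single-argument form):

#include <sys/callout.h>
#include <sys/kernel.h>     /* for hz */

struct mydev_softc {
    struct callout sc_tick_ch;  /* caller supplies the handle storage */
    /* ... */
};

static void mydev_tick(void *);

static void
mydev_start(struct mydev_softc *sc)
{
    callout_init(&sc->sc_tick_ch);
    /* Arm the callout: fires in hz ticks; insertion is constant time. */
    callout_reset(&sc->sc_tick_ch, hz, mydev_tick, sc);
}

static void
mydev_stop(struct mydev_softc *sc)
{
    /* Constant-time removal; no allocation or lookup needed. */
    callout_stop(&sc->sc_tick_ch);
}

static void
mydev_tick(void *arg)
{
    struct mydev_softc *sc = arg;

    /* ... periodic work, then re-arm ... */
    callout_reset(&sc->sc_tick_ch, hz, mydev_tick, sc);
}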
so that the right entries get added to dev_name2blk[]. Needed for / on RAID.
(Whoops! I missed checking these in when adding the RAID_AUTOCONFIG stuff.)