- due to a merge error, 2 lines were missing, making the entire kernel data
area cache-inhibited.
- due to a misunderstanding of "kstsize" units, all but the first page of
the kernel segment table was copyback-cached on the '040/'060, which
should have caused sporadic user process segmentation faults or
endless kernel loops on the '060 under heavy load (when lots of
userland page tables are in-core), although the problem has not yet been
observed.
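
The cache-mode bug above comes down to a units mix-up. As a purely
hypothetical illustration of that pattern (kstsize_pages, kstbase and
set_page_cache_inhibit are invented names, not the real m68k pmap code),
a loop that treats a page count as a byte count only ever covers the
first page:

    #include <stddef.h>

    #define PAGE_SIZE 4096                  /* illustrative value */

    void set_page_cache_inhibit(size_t pa); /* invented helper */

    /* WRONG: treats the page count as a byte count; the loop body
     * runs once, so only the first page ends up cache-inhibited. */
    void
    ci_segtab_broken(size_t kstbase, size_t kstsize_pages)
    {
            size_t off;

            for (off = 0; off < kstsize_pages; off += PAGE_SIZE)
                    set_page_cache_inhibit(kstbase + off);
    }

    /* RIGHT: scale the page count to bytes before walking the table. */
    void
    ci_segtab_fixed(size_t kstbase, size_t kstsize_pages)
    {
            size_t off;

            for (off = 0; off < kstsize_pages * PAGE_SIZE; off += PAGE_SIZE)
                    set_page_cache_inhibit(kstbase + off);
    }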
char is sign-extended before it is assigned to an unsigned int. This
fix, which has been tested with a different test case, adds casts to the
signed chars, which results in proper behavior.
char is sign-extended before it is assigned to an unsigned int. This
fix, which has been tested with a different test case, adds explicit
casts to unsigned char before the value of a character is copied.
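
Both entries describe the same C pitfall: plain char has
implementation-defined signedness, so on targets where it is signed,
assigning a high-bit character straight to an unsigned int sign-extends
it first. A minimal standalone illustration (not the patched code
itself):

    #include <stdio.h>

    int
    main(void)
    {
            char c = '\xff';                /* plain char may be signed */
            unsigned int bad = c;           /* sign-extends to 0xffffffff
                                               on signed-char targets */
            unsigned int good = (unsigned char)c;   /* stays 0xff */

            printf("bad = %#x, good = %#x\n", bad, good);
            return 0;
    }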
an unrelated bug report. This will make kernel startups a bit more readable
in the presence of unsupported hardware.
Information contributed by Andreas Bussjaeger.
- adjust txhiwat and mindma params a bit
- fixed a couple of incorrectly labeled panic calls
- the "location" was being calculated incorrectly in some cases (forgot
to subtract off MID_RAMBASE). this only caused problems when trying
to change the size of the tx/rx buffers (e.g. to 64KB).
- fixed possible non-aligned DMA burst in the starting byte burst case.
(e.g. if we could DMA 3 bytes, but only have 2, it is not legal
for us to use MIDDMA_BYTE2 mode).
- opt: on tx: try to avoid flushing the internal buffer by padding out the
length of the last mbuf a bit (if possible)
- merged multiple DRQ/DTQ ADD macros into a single DRQ and a single DTQ
macro with a uniform interface to make the code simpler and easier to read.
- en_start: only update atm_flags if EN_MBUF_OPT is enabled (which it
should be)
- for alburst: make sure we don't DMA more bytes than we need (on both
tx and rx). if the alburst is larger than we need, drop to
MIDDMA_WORD mode.
- major change: enable the use of byte and 2 byte DMA on the transmit side.
this allows us to DMA from non-word sized/aligned mbufs directly.
[the old code would always call en_mfix which would copy (or move) the
data in order to ensure proper alignment... it turns out TCP gives
us non-word sized/aligned mbufs when it is retransmitting, so we needed
to handle this case more efficiently.] the following functions
were changed to make this work (see the sketch after this log entry):
- en_dqneed: add an arg to let us know if we are transmitting or not.
if we are TX, then we must take into account byte DMAs when
estimating the number of DTQs we will need for a buffer
- en_start: only mfix mbufs if DMA is disabled
- en_txdma: only set launch.nodma if we have en_mfix'd the mbuf chain.
also, we may need a DTQ to flush the chip's internal byte buffer
- en_txlaunch: only attempt a copy if we have the proper alignment.
add byte DMA code for the front and end of the buffer.
make sure the internal DMA buffer is flushed out.
- stats: keep track of how many times we have to use byte sized DMA
midwayreg:
- add byte/2byte DMA defines
midwayvar:
- add a new stat counter to monitor less-than-word-sized DMA
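
Regarding the byte/2-byte DMA change above: the general idea is to split
a possibly misaligned transmit buffer into a leading byte run up to the
next word boundary, an aligned word-sized middle, and a trailing byte
run, so non-word sized/aligned mbufs can be DMA'd without en_mfix
copying them first. The sketch below only illustrates that split under
invented names (struct dma_run, split_for_dma); it is not the midway
driver's code and it ignores the MIDDMA_* burst mode selection details.

    #include <stddef.h>
    #include <stdint.h>

    struct dma_run {
            size_t lead;    /* leading bytes, moved by byte/2-byte DMA */
            size_t words;   /* aligned 32-bit words, moved by word DMA */
            size_t tail;    /* trailing bytes, moved by byte/2-byte DMA */
    };

    /* Split [addr, addr+len) so the middle run is word aligned/sized. */
    struct dma_run
    split_for_dma(uintptr_t addr, size_t len)
    {
            struct dma_run r;

            r.lead = (4 - (addr & 3)) & 3;  /* bytes to next word boundary */
            if (r.lead > len)
                    r.lead = len;
            len -= r.lead;

            r.words = len / 4;              /* aligned middle of the buffer */
            r.tail = len & 3;               /* leftover bytes at the end */
            return r;
    }

Each mbuf in a chain would get such a split, which is roughly why
en_dqneed has to count the extra byte DMAs when estimating DTQs and why
a final DTQ may be needed to flush the chip's internal byte buffer.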
start adding tracing printfs back in. add support for the virtual page
table. Now it gets to user-land code, but fails because I've not
added support to the context switch code to activate and deactivate pmaps.
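
For reference, the missing piece is normally a pair of pmap(9) calls
around the low-level switch. The fragment below is only a sketch of
where those hooks go: switch_to() and cpu_switch_context() are invented
stand-ins for the port's real (largely assembly) switch path, and the
pre-LWP struct proc signatures are assumed.

    struct proc;

    void pmap_activate(struct proc *);     /* pmap(9), assumed pre-LWP form */
    void pmap_deactivate(struct proc *);
    void cpu_switch_context(struct proc *, struct proc *);  /* invented */

    /* Hypothetical wrapper showing where the pmap hooks belong. */
    void
    switch_to(struct proc *old, struct proc *new)
    {
            pmap_deactivate(old);           /* old address space no longer live */
            pmap_activate(new);             /* install new pmap's MMU context */
            cpu_switch_context(old, new);   /* the actual register/stack switch */
    }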