937a7a3ed9
This is a completely rewritten scsipi_xfer execution engine, and the
associated changes to HBA drivers.  Overview of changes & features:

- All xfers are queued in the mid-layer, rather than doing so in an
  ad-hoc fashion in individual adapter drivers.

- Adapter/channel resource management in the mid-layer; avoids even
  trying to start running an xfer if the adapter/channel doesn't have
  the resources.

- Better communication between the mid-layer and the adapters.

- Asynchronous event notification mechanism from adapter to mid-layer
  and peripherals.

- Better peripheral queue management: freeze/thaw, sorted requeueing
  during recovery, etc.

- Clean separation of peripherals, adapters, and adapter channels (no
  more scsipi_link).

- Kernel thread for each scsipi_channel makes error recovery much
  easier (no more dealing with interrupt context when recovering from
  an error).

- Mid-layer support for tagged queueing: commands can have the tag type
  set explicitly, and tag IDs are allocated in the mid-layer (thus
  eliminating the need to use buggy tag ID allocation schemes in many
  adapter drivers).

- Support for QUEUE FULL and CHECK CONDITION status in the mid-layer;
  the command will be requeued, or a REQUEST SENSE will be sent, as
  appropriate.

Just before the merge, syssrc was tagged with thorpej_scsipi_beforemerge.
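The central idea above is that the mid-layer owns the xfer queue and only hands
an xfer to the adapter when the channel actually has a free opening (and, for a
tagged command, a free tag ID), and that a frozen channel simply stops the run
loop. The sketch below illustrates that pattern in miniature. It is not the real
NetBSD scsipi code or API: the structure names, fields, and functions
(xchannel, chan_openings, tag_alloc, channel_run, adapter_start) are illustrative
assumptions made up for this example.

```c
/*
 * Hypothetical sketch of mid-layer xfer dispatch: queue everything,
 * and only start an xfer when the channel has resources.  Names are
 * illustrative, not the real NetBSD scsipi API.
 */
#include <stddef.h>
#include <stdint.h>

#define MAX_TAGS 32

struct xfer {
	struct xfer *x_next;
	int	x_tagged;		/* wants a queue tag */
	int	x_tag;			/* tag ID assigned by mid-layer */
};

struct xchannel {
	int	chan_openings;		/* adapter slots still free */
	int	chan_freezecnt;		/* >0: queue frozen for recovery */
	uint32_t chan_tagmap;		/* bitmap of tag IDs in use */
	struct xfer *chan_queue;	/* simple singly-linked run queue */
};

/* Allocate a tag ID in the mid-layer so adapters don't have to. */
static int
tag_alloc(struct xchannel *chan)
{
	for (int t = 0; t < MAX_TAGS; t++) {
		if ((chan->chan_tagmap & (1U << t)) == 0) {
			chan->chan_tagmap |= 1U << t;
			return t;
		}
	}
	return -1;		/* no tag free; caller leaves xfer queued */
}

/* Hand queued xfers to the adapter; stop when resources run out. */
static void
channel_run(struct xchannel *chan,
    void (*adapter_start)(struct xchannel *, struct xfer *))
{
	struct xfer *xs;

	while (chan->chan_freezecnt == 0 && chan->chan_openings > 0 &&
	    (xs = chan->chan_queue) != NULL) {
		if (xs->x_tagged) {
			xs->x_tag = tag_alloc(chan);
			if (xs->x_tag < 0)
				break;	/* out of tags; leave it queued */
		}
		chan->chan_queue = xs->x_next;
		chan->chan_openings--;
		adapter_start(chan, xs);
	}
}
```

In this simplified picture, a completion (or a QUEUE FULL / CHECK CONDITION that
forces a requeue) would return the opening and tag to the channel and re-enter
the run loop, which is roughly why the error handling described above can live in
the mid-layer instead of being duplicated in each adapter driver.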
Directory contents:

  compile
  conf
  dev
  docs
  include
  mvme68k
  stand
  Makefile
  README.VMEbus-RAM
$NetBSD: README.VMEbus-RAM,v 1.2 1997/11/01 19:18:39 scw Exp $

NetBSD/mvme68k port: VMEbus RAM card configuration

As of version 1.3, NetBSD-mvme68k (for VME147) officially supports
additional RAM in A32 space over the VMEbus. This file describes where
to configure your VMEbus RAM and how to point the kernel in the
direction of it.

The MVME147 board has a fairly primitive VMEbus controller chip. The
mapping of cpu address to VMEbus address is hardwired, and so dictates
what can be seen where by the 68030.

From the cpu's perspective, A24 space spans 0x00000000 to 0x00ffffff.
However, onboard RAM also spans this space. With 8Mb of onboard RAM,
only the top 8Mb of VMEbus A24 space can be seen. With 16Mb onboard,
there is no easy way to get at A24 space at all!

The cpu sees VMEbus A32 space starting from 0x01000000. The lowest 16Mb
of A32 space is not accessible (it's covered either by onboard RAM or
A24 space). With 32Mb of onboard RAM, the cpu can address A32 space
from 0x02000000 upwards. This still leaves a considerable chunk of A32
space visible, so in practice this isn't really a problem.

The best place for VMEbus RAM cards is in A32 space, at the address
given by:

	max(0x01000000, onboard_ram_size)

In my case, this would locate my 8Mb VMEbus RAM card at 0x01000000.

This starting address needs to be written to the MVME147's NVRAM at
address 0xfffe0764, as follows:

	147Bug> mm fffe0764 ;L
	FFFE0764 00000000? 01000000 <cr>   <--- you type 01000000
	FFFE0768 00000000? . <cr>
	147Bug>

Next, you need to configure the end address of VMEbus RAM. Assuming
your RAM card is 8Mb in size, this would be 0x017fffff. You need to
write this value to NVRAM address 0xfffe0768, as follows:

	147Bug> mm fffe0768 ;L
	FFFE0768 00000000? 017fffff <cr>   <--- you type 017fffff
	FFFE076c 00000000? . <cr>
	147Bug>

You could obviously combine the above two steps.

If you have more than one VMEbus RAM card, you must configure them so
that they appear physically contiguous in A32 address space. So, to add
another 8Mb card in addition to the card above, it should be jumpered
to start at 0x01800000. In this case, you would change NVRAM location
0xfffe0768 to be 0x01ffffff.

If NVRAM location 0xfffe0764 is zero, the kernel assumes you only have
onboard RAM and will not attempt to use any VMEbus RAM.

Some extra notes on VMEbus RAM cards
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So... You've got your nice shiny VMEbus RAM card up and running with
NetBSD, and you're wondering why your system runs slower than it did
with less RAM!

The simple answer is "Motorola got it wrong". (Or at least that's my
opinion. If anyone can cure the following, let me know!)

In their infinite wisdom, the designers of the MVME147 decided that
they would disable the 68030's cache on *any* access to the VMEbus.
The upshot is that the cache only works for onboard RAM, not VMEbus
RAM, hence your system runs slower. As far as I can see, the only way
to cure this is to physically cut a trace on the circuit board and use
the MMU to control caching on a page-by-page basis...

Anyhow, hopefully the above instructions have finally put to rest the
most asked question about the mvme68k port.

Cheers, Steve Woodford: scw@netbsd.org
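As a small addendum to the README above, the following is a hedged C sketch of
the address arithmetic it describes: the VMEbus RAM start address is
max(0x01000000, onboard RAM size), multiple cards must be jumpered contiguously,
and NVRAM takes the start address at 0xfffe0764 and the address of the last byte
at 0xfffe0768. The program, its name, and its usage are purely illustrative; it
only prints the values you would then type into 147Bug with `mm`, and does not
touch NVRAM or the hardware in any way.

```c
/*
 * Illustrative helper (not part of NetBSD): compute the 147Bug NVRAM
 * values for VMEbus RAM, per the rules in README.VMEbus-RAM.
 *   start = max(0x01000000, onboard RAM size)
 *   end   = start + total size of all contiguous VMEbus RAM cards - 1
 * The results are what you would enter at "mm fffe0764 ;L" and
 * "mm fffe0768 ;L"; this program does not write NVRAM itself.
 */
#include <stdio.h>
#include <stdlib.h>

#define MB(x)	((unsigned long)(x) << 20)

int
main(int argc, char **argv)
{
	unsigned long onboard, vme_total = 0, start, end;

	if (argc < 3) {
		fprintf(stderr,
		    "usage: %s <onboard-MB> <card1-MB> [card2-MB ...]\n",
		    argv[0]);
		return 1;
	}

	onboard = MB(strtoul(argv[1], NULL, 0));
	for (int i = 2; i < argc; i++)
		vme_total += MB(strtoul(argv[i], NULL, 0));

	/* VMEbus A32 space starts at 0x01000000 from the CPU's view. */
	start = onboard > 0x01000000UL ? onboard : 0x01000000UL;
	end = start + vme_total - 1;

	printf("NVRAM 0xfffe0764 (start): %08lx\n", start);
	printf("NVRAM 0xfffe0768 (end):   %08lx\n", end);
	return 0;
}
```

With 8Mb onboard and one 8Mb card this prints 01000000 and 017fffff, matching the
worked example in the README; adding a second 8Mb card gives an end address of
01ffffff, matching the two-card case.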