/* $NetBSD: aic79xx_osm.c,v 1.36 2021/08/07 16:19:12 thorpej Exp $ */
/*
* Bus independent NetBSD shim for the aic7xxx based adaptec SCSI controllers
*
* Copyright (c) 1994-2002 Justin T. Gibbs.
* Copyright (c) 2001-2002 Adaptec Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU Public License ("GPL").
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* //depot/aic7xxx/freebsd/dev/aic7xxx/aic79xx_osm.c#26 $
*
* $FreeBSD: src/sys/dev/aic7xxx/aic79xx_osm.c,v 1.11 2003/05/04 00:20:07 gibbs Exp $
*/
/*
* Ported from FreeBSD by Pascal Renauld, Network Storage Solutions, Inc.
* - April 2003
*/
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: aic79xx_osm.c,v 1.36 2021/08/07 16:19:12 thorpej Exp $");
#include <dev/ic/aic79xx_osm.h>
#include <dev/ic/aic79xx_inline.h>
#ifndef AHD_TMODE_ENABLE
#define AHD_TMODE_ENABLE 0
#endif
static int ahd_ioctl(struct scsipi_channel *channel, u_long cmd,
void *addr, int flag, struct proc *p);
static void ahd_action(struct scsipi_channel *chan,
scsipi_adapter_req_t req, void *arg);
static void ahd_execute_scb(void *arg, bus_dma_segment_t *dm_segs,
int nsegments);
static int ahd_poll(struct ahd_softc *ahd, int wait);
static void ahd_setup_data(struct ahd_softc *ahd, struct scsipi_xfer *xs,
struct scb *scb);
#if NOT_YET
static void ahd_set_recoveryscb(struct ahd_softc *ahd, struct scb *scb);
#endif
static bool ahd_pmf_suspend(device_t, const pmf_qual_t *);
static bool ahd_pmf_resume(device_t, const pmf_qual_t *);
static bool ahd_pmf_shutdown(device_t, int);
/*
* Attach all the sub-devices we can find
*/
int
ahd_attach(struct ahd_softc *ahd)
{
int s;
char ahd_info[256];
ahd_controller_info(ahd, ahd_info, sizeof(ahd_info));
aprint_normal("%s: %s\n", ahd_name(ahd), ahd_info);
ahd_lock(ahd, &s);
ahd->sc_adapter.adapt_dev = ahd->sc_dev;
ahd->sc_adapter.adapt_nchannels = 1;
ahd->sc_adapter.adapt_openings = ahd->scb_data.numscbs - 1;
ahd->sc_adapter.adapt_max_periph = 32;
ahd->sc_adapter.adapt_ioctl = ahd_ioctl;
ahd->sc_adapter.adapt_minphys = ahd_minphys;
ahd->sc_adapter.adapt_request = ahd_action;
ahd->sc_channel.chan_adapter = &ahd->sc_adapter;
ahd->sc_channel.chan_bustype = &scsi_bustype;
ahd->sc_channel.chan_channel = 0;
ahd->sc_channel.chan_ntargets = AHD_NUM_TARGETS;
ahd->sc_channel.chan_nluns = 8 /*AHD_NUM_LUNS*/;
ahd->sc_channel.chan_id = ahd->our_id;
ahd->sc_channel.chan_flags |= SCSIPI_CHAN_CANGROW;
ahd->sc_child = config_found(ahd->sc_dev, &ahd->sc_channel, scsiprint,
CFARGS_NONE);
ahd_intr_enable(ahd, TRUE);
if (ahd->flags & AHD_RESET_BUS_A)
ahd_reset_channel(ahd, 'A', TRUE);
if (!pmf_device_register1(ahd->sc_dev,
ahd_pmf_suspend, ahd_pmf_resume, ahd_pmf_shutdown))
aprint_error_dev(ahd->sc_dev,
"couldn't establish power handler\n");
ahd_unlock(ahd, &s);
return (1);
}
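/*
* Power management hooks.  Suspend and shutdown simply reset the
* controller to quiesce it; the real ahd_suspend()/ahd_resume() paths
* are currently disabled (#if 0) below.
*/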
static bool
ahd_pmf_suspend(device_t dev, const pmf_qual_t *qual)
{
struct ahd_softc *sc = device_private(dev);
#if 0
return (ahd_suspend(sc) == 0);
#else
ahd_shutdown(sc);
return true;
#endif
}
static bool
ahd_pmf_resume(device_t dev, const pmf_qual_t *qual)
{
#if 0
struct ahd_softc *sc = device_private(dev);
return (ahd_resume(sc) == 0);
#else
return true;
#endif
}
static bool
ahd_pmf_shutdown(device_t dev, int howto)
{
struct ahd_softc *sc = device_private(dev);
/* Disable all interrupt sources by resetting the controller */
ahd_shutdown(sc);
return true;
}
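/*
* Adapter ioctl handler; only SCBUSIORESET (SCSI bus reset) is
* currently supported.
*/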
static int
ahd_ioctl(struct scsipi_channel *channel, u_long cmd,
void *addr, int flag, struct proc *p)
{
struct ahd_softc *ahd;
int s, ret = ENOTTY;
ahd = device_private(channel->chan_adapter->adapt_dev);
switch (cmd) {
case SCBUSIORESET:
s = splbio();
ahd_reset_channel(ahd, channel->chan_channel == 1 ? 'B' : 'A', TRUE);
splx(s);
ret = 0;
break;
default:
break;
}
return ret;
}
/*
* Catch an interrupt from the adapter
*/
void
ahd_platform_intr(void *arg)
{
struct ahd_softc *ahd;
ahd = arg;
printf("%s; ahd_platform_intr\n", ahd_name(ahd));
ahd_intr(ahd);
}
/*
* We have an scb which has been processed by the
* adaptor, now we look to see how the operation
* went.
*/
void
ahd_done(struct ahd_softc *ahd, struct scb *scb)
{
struct scsipi_xfer *xs;
struct scsipi_periph *periph;
int s;
LIST_REMOVE(scb, pending_links);
xs = scb->xs;
periph = xs->xs_periph;
callout_stop(&scb->xs->xs_callout);
if (xs->datalen) {
int op;
if (xs->xs_control & XS_CTL_DATA_IN)
op = BUS_DMASYNC_POSTREAD;
else
op = BUS_DMASYNC_POSTWRITE;
bus_dmamap_sync(ahd->parent_dmat, scb->dmamap, 0,
scb->dmamap->dm_mapsize, op);
bus_dmamap_unload(ahd->parent_dmat, scb->dmamap);
}
/*
* If the recovery SCB completes, we have to be
* out of our timeout.
*/
if ((scb->flags & SCB_RECOVERY_SCB) != 0) {
struct scb *list_scb;
/*
* We were able to complete the command successfully,
* so reinstate the timeouts for all other pending
* commands.
*/
LIST_FOREACH(list_scb, &ahd->pending_scbs, pending_links) {
struct scsipi_xfer *txs = list_scb->xs;
if (!(txs->xs_control & XS_CTL_POLL)) {
callout_reset(&txs->xs_callout,
(txs->timeout > 1000000) ?
(txs->timeout / 1000) * hz :
(txs->timeout * hz) / 1000,
ahd_timeout, list_scb);
}
}
if (ahd_get_transaction_status(scb) != XS_NOERROR)
ahd_set_transaction_status(scb, XS_TIMEOUT);
scsipi_printaddr(xs->xs_periph);
printf("%s: no longer in timeout, status = %x\n",
ahd_name(ahd), xs->status);
}
if (xs->error != XS_NOERROR) {
/* Don't clobber any existing error state */
} else if ((xs->status == SCSI_STATUS_BUSY) ||
(xs->status == SCSI_STATUS_QUEUE_FULL)) {
ahd_set_transaction_status(scb, XS_BUSY);
printf("%s: drive (ID %d, LUN %d) queue full (SCB 0x%x)\n",
ahd_name(ahd), SCB_GET_TARGET(ahd,scb), SCB_GET_LUN(scb), SCB_GET_TAG(scb));
} else if ((scb->flags & SCB_SENSE) != 0) {
/*
* We performed autosense retrieval.
*
* Zero the sense data before having
* the drive fill it. The SCSI spec mandates
* that any untransferred data should be
* assumed to be zero. Complete the 'bounce'
* of sense information through buffers accessible
* via bus-space by copying it into the client's
* csio.
*/
memset(&xs->sense.scsi_sense, 0, sizeof(xs->sense.scsi_sense));
memcpy(&xs->sense.scsi_sense, ahd_get_sense_buf(ahd, scb),
sizeof(struct scsi_sense_data));
ahd_set_transaction_status(scb, XS_SENSE);
} else if ((scb->flags & SCB_PKT_SENSE) != 0) {
struct scsi_status_iu_header *siu;
u_int sense_len;
#ifdef AHD_DEBUG
int i;
#endif
/*
* Copy only the sense data into the provided buffer.
*/
siu = (struct scsi_status_iu_header *)scb->sense_data;
sense_len = MIN(scsi_4btoul(siu->sense_length),
sizeof(xs->sense.scsi_sense));
memset(&xs->sense.scsi_sense, 0, sizeof(xs->sense.scsi_sense));
memcpy(&xs->sense.scsi_sense,
scb->sense_data + SIU_SENSE_OFFSET(siu), sense_len);
#ifdef AHD_DEBUG
printf("Copied %d bytes of sense data offset %d:", sense_len,
SIU_SENSE_OFFSET(siu));
for (i = 0; i < sense_len; i++)
printf(" 0x%x", ((uint8_t *)&xs->sense.scsi_sense)[i]);
printf("\n");
#endif
ahd_set_transaction_status(scb, XS_SENSE);
}
if (scb->flags & SCB_FREEZE_QUEUE) {
scsipi_periph_thaw(periph, 1);
scb->flags &= ~SCB_FREEZE_QUEUE;
}
if (scb->flags & SCB_REQUEUE)
ahd_set_transaction_status(scb, XS_REQUEUE);
ahd_lock(ahd, &s);
ahd_free_scb(ahd, scb);
ahd_unlock(ahd, &s);
scsipi_done(xs);
}
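/*
* scsipi adapter request entry point: start transfers
* (ADAPTER_REQ_RUN_XFER), grow the SCB pool on demand
* (ADAPTER_REQ_GROW_RESOURCES) and update sync/wide/tag settings
* (ADAPTER_REQ_SET_XFER_MODE).
*/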
static void
ahd_action(struct scsipi_channel *chan, scsipi_adapter_req_t req, void *arg)
{
struct ahd_softc *ahd;
struct ahd_initiator_tinfo *tinfo;
struct ahd_tmode_tstate *tstate;
ahd = device_private(chan->chan_adapter->adapt_dev);
switch(req) {
case ADAPTER_REQ_RUN_XFER:
{
struct scsipi_xfer *xs;
struct scsipi_periph *periph;
struct scb *scb;
struct hardware_scb *hscb;
u_int target_id;
u_int our_id;
u_int col_idx;
char channel;
int s;
xs = arg;
periph = xs->xs_periph;
SC_DEBUG(periph, SCSIPI_DB3, ("ahd_action\n"));
target_id = periph->periph_target;
our_id = ahd->our_id;
channel = (chan->chan_channel == 1) ? 'B' : 'A';
/*
* get an scb to use.
*/
ahd_lock(ahd, &s);
tinfo = ahd_fetch_transinfo(ahd, channel, our_id,
target_id, &tstate);
if (xs->xs_tag_type != 0 ||
(tinfo->curr.ppr_options & MSG_EXT_PPR_IU_REQ) != 0)
col_idx = AHD_NEVER_COL_IDX;
else
col_idx = AHD_BUILD_COL_IDX(target_id,
periph->periph_lun);
if ((scb = ahd_get_scb(ahd, col_idx)) == NULL) {
xs->error = XS_RESOURCE_SHORTAGE;
ahd_unlock(ahd, &s);
scsipi_done(xs);
return;
}
ahd_unlock(ahd, &s);
hscb = scb->hscb;
SC_DEBUG(periph, SCSIPI_DB3, ("start scb(%p)\n", scb));
scb->xs = xs;
/*
* Put all the arguments for the xfer in the scb
*/
hscb->control = 0;
hscb->scsiid = BUILD_SCSIID(ahd, sim, target_id, our_id);
hscb->lun = periph->periph_lun;
if (xs->xs_control & XS_CTL_RESET) {
hscb->cdb_len = 0;
scb->flags |= SCB_DEVICE_RESET;
hscb->control |= MK_MESSAGE;
hscb->task_management = SIU_TASKMGMT_LUN_RESET;
ahd_execute_scb(scb, NULL, 0);
} else {
hscb->task_management = 0;
}
ahd_setup_data(ahd, xs, scb);
break;
}
case ADAPTER_REQ_GROW_RESOURCES:
#ifdef AHC_DEBUG
printf("%s: ADAPTER_REQ_GROW_RESOURCES\n", ahd_name(ahd));
#endif
chan->chan_adapter->adapt_openings += ahd_alloc_scbs(ahd);
if (ahd->scb_data.numscbs >= AHD_SCB_MAX_ALLOC)
chan->chan_flags &= ~SCSIPI_CHAN_CANGROW;
break;
case ADAPTER_REQ_SET_XFER_MODE:
{
struct scsipi_xfer_mode *xm = arg;
struct ahd_devinfo devinfo;
int target_id, our_id, first;
u_int width;
int s;
char channel;
u_int ppr_options = 0, period, offset;
uint16_t old_autoneg;
target_id = xm->xm_target;
our_id = chan->chan_id;
channel = 'A';
s = splbio();
tinfo = ahd_fetch_transinfo(ahd, channel, our_id, target_id,
&tstate);
ahd_compile_devinfo(&devinfo, our_id, target_id,
0, channel, ROLE_INITIATOR);
old_autoneg = tstate->auto_negotiate;
/*
* XXX since the period and offset are not provided here,
* fake things by forcing a renegotiation using the user
* settings if this is called for the first time (i.e.
* during probe). Also, cap various values at the user
* values, assuming that the user set it up that way.
*/
if (ahd->inited_target[target_id] == 0) {
period = tinfo->user.period;
offset = tinfo->user.offset;
ppr_options = tinfo->user.ppr_options;
width = tinfo->user.width;
tstate->tagenable |=
(ahd->user_tagenable & devinfo.target_mask);
tstate->discenable |=
(ahd->user_discenable & devinfo.target_mask);
ahd->inited_target[target_id] = 1;
first = 1;
} else
first = 0;
if (xm->xm_mode & (PERIPH_CAP_WIDE16 | PERIPH_CAP_DT))
width = MSG_EXT_WDTR_BUS_16_BIT;
else
width = MSG_EXT_WDTR_BUS_8_BIT;
ahd_validate_width(ahd, NULL, &width, ROLE_UNKNOWN);
if (width > tinfo->user.width)
width = tinfo->user.width;
ahd_set_width(ahd, &devinfo, width, AHD_TRANS_GOAL, FALSE);
if (!(xm->xm_mode & (PERIPH_CAP_SYNC | PERIPH_CAP_DT))) {
period = 0;
offset = 0;
ppr_options = 0;
}
if ((xm->xm_mode & PERIPH_CAP_DT) &&
(tinfo->user.ppr_options & MSG_EXT_PPR_DT_REQ))
ppr_options |= MSG_EXT_PPR_DT_REQ;
else
ppr_options &= ~MSG_EXT_PPR_DT_REQ;
if ((tstate->discenable & devinfo.target_mask) == 0 ||
(tstate->tagenable & devinfo.target_mask) == 0)
ppr_options &= ~MSG_EXT_PPR_IU_REQ;
if ((xm->xm_mode & PERIPH_CAP_TQING) &&
(ahd->user_tagenable & devinfo.target_mask))
tstate->tagenable |= devinfo.target_mask;
else
tstate->tagenable &= ~devinfo.target_mask;
ahd_find_syncrate(ahd, &period, &ppr_options, AHD_SYNCRATE_MAX);
ahd_validate_offset(ahd, NULL, period, &offset,
MSG_EXT_WDTR_BUS_8_BIT, ROLE_UNKNOWN);
if (offset == 0) {
period = 0;
ppr_options = 0;
}
if (ppr_options != 0
&& tinfo->user.transport_version >= 3) {
tinfo->goal.transport_version =
tinfo->user.transport_version;
tinfo->curr.transport_version =
tinfo->user.transport_version;
}
ahd_set_syncrate(ahd, &devinfo, period, offset,
ppr_options, AHD_TRANS_GOAL, FALSE);
/*
* If this is the first request, and no negotiation is
* needed, just confirm the state to the scsipi layer,
* so that it can print a message.
*/
if (old_autoneg == tstate->auto_negotiate && first) {
xm->xm_mode = 0;
xm->xm_period = tinfo->curr.period;
xm->xm_offset = tinfo->curr.offset;
if (tinfo->curr.width == MSG_EXT_WDTR_BUS_16_BIT)
xm->xm_mode |= PERIPH_CAP_WIDE16;
if (tinfo->curr.period)
xm->xm_mode |= PERIPH_CAP_SYNC;
if (tstate->tagenable & devinfo.target_mask)
xm->xm_mode |= PERIPH_CAP_TQING;
if (tinfo->curr.ppr_options & MSG_EXT_PPR_DT_REQ)
xm->xm_mode |= PERIPH_CAP_DT;
scsipi_async_event(chan, ASYNC_EVENT_XFER_MODE, xm);
}
splx(s);
}
}
return;
}
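/*
* Second half of command start-up, called once the data buffer (if
* any) has been DMA-mapped: copy the DMA segments into the SG list,
* apply the negotiation and tagged-queuing flags, queue the SCB to
* the controller and, for polled requests, spin until completion.
*/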
static void
ahd_execute_scb(void *arg, bus_dma_segment_t *dm_segs, int nsegments)
{
struct scb *scb;
struct scsipi_xfer *xs;
struct ahd_softc *ahd;
struct ahd_initiator_tinfo *tinfo;
struct ahd_tmode_tstate *tstate;
u_int mask;
int s;
scb = arg;
xs = scb->xs;
xs->error = 0;
xs->status = 0;
xs->xs_status = 0;
ahd = device_private(
xs->xs_periph->periph_channel->chan_adapter->adapt_dev);
scb->sg_count = 0;
if (nsegments != 0) {
void *sg;
int op;
u_int i;
ahd_setup_data_scb(ahd, scb);
/* Copy the segments into our SG list */
for (i = nsegments, sg = scb->sg_list; i > 0; i--) {
sg = ahd_sg_setup(ahd, scb, sg, dm_segs->ds_addr,
dm_segs->ds_len,
/*last*/i == 1);
dm_segs++;
}
if (xs->xs_control & XS_CTL_DATA_IN)
op = BUS_DMASYNC_PREREAD;
else
op = BUS_DMASYNC_PREWRITE;
bus_dmamap_sync(ahd->parent_dmat, scb->dmamap, 0,
scb->dmamap->dm_mapsize, op);
}
ahd_lock(ahd, &s);
/*
* This is our last chance to check whether this
* SCB needs to be aborted.
*/
if (ahd_get_scsi_status(scb) == XS_STS_DONE) {
if (nsegments != 0)
bus_dmamap_unload(ahd->parent_dmat,
scb->dmamap);
ahd_free_scb(ahd, scb);
ahd_unlock(ahd, &s);
return;
}
tinfo = ahd_fetch_transinfo(ahd, SCSIID_CHANNEL(ahd, scb->hscb->scsiid),
SCSIID_OUR_ID(scb->hscb->scsiid),
SCSIID_TARGET(ahd, scb->hscb->scsiid),
&tstate);
mask = SCB_GET_TARGET_MASK(ahd, scb);
if ((tstate->discenable & mask) != 0)
scb->hscb->control |= DISCENB;
if ((tstate->tagenable & mask) != 0)
scb->hscb->control |= xs->xs_tag_type|TAG_ENB;
if ((tinfo->curr.ppr_options & MSG_EXT_PPR_IU) != 0) {
scb->flags |= SCB_PACKETIZED;
if (scb->hscb->task_management != 0)
scb->hscb->control &= ~MK_MESSAGE;
}
#if 0 /* This looks like it makes sense at first, but it can loop */
if ((xs->xs_control & XS_CTL_DISCOVERY) &&
(tinfo->goal.width != 0
|| tinfo->goal.period != 0
|| tinfo->goal.ppr_options != 0)) {
scb->flags |= SCB_NEGOTIATE;
scb->hscb->control |= MK_MESSAGE;
} else
#endif
if ((tstate->auto_negotiate & mask) != 0) {
scb->flags |= SCB_AUTO_NEGOTIATE;
scb->hscb->control |= MK_MESSAGE;
}
LIST_INSERT_HEAD(&ahd->pending_scbs, scb, pending_links);
scb->flags |= SCB_ACTIVE;
if (!(xs->xs_control & XS_CTL_POLL)) {
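/*
* xs->timeout is in milliseconds; for very large values,
* divide before multiplying by hz so the conversion to
* ticks cannot overflow.
*/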
callout_reset(&scb->xs->xs_callout, xs->timeout > 1000000 ?
(xs->timeout / 1000) * hz : (xs->timeout * hz) / 1000,
ahd_timeout, scb);
}
if ((scb->flags & SCB_TARGET_IMMEDIATE) != 0) {
/* Define a mapping from our tag to the SCB. */
ahd->scb_data.scbindex[SCB_GET_TAG(scb)] = scb;
ahd_pause(ahd);
ahd_set_scbptr(ahd, SCB_GET_TAG(scb));
ahd_outb(ahd, RETURN_1, CONT_MSG_LOOP_TARG);
ahd_unpause(ahd);
} else {
ahd_queue_scb(ahd, scb);
}
if (!(xs->xs_control & XS_CTL_POLL)) {
ahd_unlock(ahd, &s);
return;
}
/*
* If we can't use interrupts, poll for completion
*/
SC_DEBUG(xs->xs_periph, SCSIPI_DB3, ("cmd_poll\n"));
do {
if (ahd_poll(ahd, xs->timeout)) {
if (!(xs->xs_control & XS_CTL_SILENT))
printf("cmd fail\n");
ahd_timeout(scb);
break;
}
} while (!(xs->xs_status & XS_STS_DONE));
ahd_unlock(ahd, &s);
}
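/*
* Busy-wait up to 'wait' milliseconds for the controller to post an
* interrupt, then service it.
*/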
static int
ahd_poll(struct ahd_softc *ahd, int wait)
{
while (--wait) {
DELAY(1000);
if (ahd_inb(ahd, INTSTAT) & INT_PEND)
break;
}
if (wait == 0) {
printf("%s: board is not responding\n", ahd_name(ahd));
return (EIO);
}
ahd_intr(ahd);
return (0);
}
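/*
* Copy the CDB into the hardware SCB and map the data buffer (if any)
* for DMA before handing the SCB to ahd_execute_scb().
*/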
static void
ahd_setup_data(struct ahd_softc *ahd, struct scsipi_xfer *xs,
struct scb *scb)
{
struct hardware_scb *hscb;
hscb = scb->hscb;
xs->resid = xs->status = 0;
hscb->cdb_len = xs->cmdlen;
if (hscb->cdb_len > MAX_CDB_LEN) {
int s;
/*
* Should CAM start to support CDB sizes
* greater than 16 bytes, we could use
* the sense buffer to store the CDB.
*/
ahd_set_transaction_status(scb,
XS_DRIVER_STUFFUP);
ahd_lock(ahd, &s);
ahd_free_scb(ahd, scb);
ahd_unlock(ahd, &s);
scsipi_done(xs);
/* The scb has been freed; do not fall through and use it. */
return;
}
memcpy(hscb->shared_data.idata.cdb, xs->cmd, hscb->cdb_len);
/* Only use S/G if there is a transfer */
if (xs->datalen) {
int error;
error = bus_dmamap_load(ahd->parent_dmat,
scb->dmamap, xs->data,
xs->datalen, NULL,
((xs->xs_control & XS_CTL_NOSLEEP) ?
BUS_DMA_NOWAIT : BUS_DMA_WAITOK) |
BUS_DMA_STREAMING |
((xs->xs_control & XS_CTL_DATA_IN) ?
BUS_DMA_READ : BUS_DMA_WRITE));
if (error) {
#ifdef AHD_DEBUG
printf("%s: in ahd_setup_data(): bus_dmamap_load() "
"= %d\n",
ahd_name(ahd), error);
#endif
xs->error = XS_RESOURCE_SHORTAGE;
scsipi_done(xs);
return;
}
ahd_execute_scb(scb,
scb->dmamap->dm_segs,
scb->dmamap->dm_nsegs);
} else {
ahd_execute_scb(scb, NULL, 0);
}
}
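/*
* A command timed out: pause the controller, dump its state and
* recover by resetting the channel.
*/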
void
ahd_timeout(void *arg)
{
struct scb *scb;
struct ahd_softc *ahd;
int s;
scb = arg;
ahd = scb->ahd_softc;
printf("%s: ahd_timeout\n", ahd_name(ahd));
ahd_lock(ahd, &s);
ahd_pause_and_flushwork(ahd);
(void)ahd_save_modes(ahd);
#if 0
ahd_set_modes(ahd, AHD_MODE_SCSI, AHD_MODE_SCSI);
ahd_outb(ahd, SCSISIGO, ACKO);
printf("set ACK\n");
ahd_outb(ahd, SCSISIGO, 0);
printf("clearing Ack\n");
ahd_restore_modes(ahd, saved_modes);
#endif
if ((scb->flags & SCB_ACTIVE) == 0) {
/* Previous timeout took care of me already */
printf("%s: Timedout SCB already complete. "
"Interrupts may not be functioning.\n", ahd_name(ahd));
ahd_unpause(ahd);
ahd_unlock(ahd, &s);
return;
}
ahd_print_path(ahd, scb);
printf("SCB 0x%x - timed out\n", SCB_GET_TAG(scb));
ahd_dump_card_state(ahd);
ahd_reset_channel(ahd, SIM_CHANNEL(ahd, sim),
/*initiate reset*/TRUE);
ahd_unlock(ahd, &s);
return;
}
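/*
* Allocate the NetBSD-specific platform data hung off the softc.
*/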
int
ahd_platform_alloc(struct ahd_softc *ahd, void *platform_arg)
{
ahd->platform_data = malloc(sizeof(struct ahd_platform_data), M_DEVBUF,
M_WAITOK | M_ZERO);
return (0);
}
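/*
* Release the platform data allocated by ahd_platform_alloc().
*/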
void
ahd_platform_free(struct ahd_softc *ahd)
{
free(ahd->platform_data, M_DEVBUF);
}
int
ahd_softc_comp(struct ahd_softc *lahd, struct ahd_softc *rahd)
{
/* We don't sort softcs under NetBSD so report equal always */
return (0);
}
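/*
* Detach the child scsibus, deregister the power handler and release
* the controller's resources.
*/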
int
ahd_detach(struct ahd_softc *ahd, int flags)
{
int rv = 0;
if (ahd->sc_child != NULL)
rv = config_detach(ahd->sc_child, flags);
pmf_device_deregister(ahd->sc_dev);
ahd_free(ahd);
return rv;
}
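/*
* Record whether tagged queuing is enabled for the target described
* by devinfo.
*/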
void
ahd_platform_set_tags(struct ahd_softc *ahd,
struct ahd_devinfo *devinfo, ahd_queue_alg alg)
{
struct ahd_tmode_tstate *tstate;
ahd_fetch_transinfo(ahd, devinfo->channel, devinfo->our_scsiid,
devinfo->target, &tstate);
if (alg != AHD_QUEUE_NONE)
tstate->tagenable |= devinfo->target_mask;
else
tstate->tagenable &= ~devinfo->target_mask;
}
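/*
* Forward asynchronous events from the core driver to the scsipi
* layer: transfer negotiation updates and bus resets.
*/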
void
ahd_send_async(struct ahd_softc *ahd, char channel, u_int target, u_int lun,
ac_code code, void *opt_arg)
{
struct ahd_tmode_tstate *tstate;
struct ahd_initiator_tinfo *tinfo;
struct ahd_devinfo devinfo;
struct scsipi_channel *chan;
struct scsipi_xfer_mode xm;
#ifdef DIAGNOSTIC
if (channel != 'A')
panic("ahd_send_async: not channel A");
#endif
chan = &ahd->sc_channel;
switch (code) {
case AC_TRANSFER_NEG:
tinfo = ahd_fetch_transinfo(ahd, channel, ahd->our_id, target,
&tstate);
ahd_compile_devinfo(&devinfo, ahd->our_id, target, lun,
channel, ROLE_UNKNOWN);
/*
* Don't bother if negotiating. XXX?
*/
if (tinfo->curr.period != tinfo->goal.period
|| tinfo->curr.width != tinfo->goal.width
|| tinfo->curr.offset != tinfo->goal.offset
|| tinfo->curr.ppr_options != tinfo->goal.ppr_options)
break;
xm.xm_target = target;
xm.xm_mode = 0;
xm.xm_period = tinfo->curr.period;
xm.xm_offset = tinfo->curr.offset;
if (tinfo->goal.ppr_options & MSG_EXT_PPR_DT_REQ)
xm.xm_mode |= PERIPH_CAP_DT;
if (tinfo->curr.width == MSG_EXT_WDTR_BUS_16_BIT)
xm.xm_mode |= PERIPH_CAP_WIDE16;
if (tinfo->curr.period)
xm.xm_mode |= PERIPH_CAP_SYNC;
if (tstate->tagenable & devinfo.target_mask)
xm.xm_mode |= PERIPH_CAP_TQING;
scsipi_async_event(chan, ASYNC_EVENT_XFER_MODE, &xm);
break;
case AC_BUS_RESET:
scsipi_async_event(chan, ASYNC_EVENT_RESET, NULL);
case AC_SENT_BDR:
default:
break;
}
}