Currently the libqos PCI layer includes accessor helpers for 8-, 16- and
32-bit reads and writes. It's likely that we'll want 64-bit accesses in the
future (plenty of modern peripherals have 64-bit registers). This patch
adds them.
For PIO (not MMIO) accesses on the PC backend, this is implemented as two
32-bit ins or outs. That's not ideal but AFAICT x86 doesn't have 64-bit
versions of in and out.
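Roughly, the PC backend can do that along these lines (a minimal sketch
with hypothetical helper names, not the exact code added here):

    /* Sketch: 64-bit PIO access built from two 32-bit accesses, low
     * word first (little-endian).  qpci_pc_pio_readl()/writel() stand
     * in for whatever 32-bit PIO helpers the backend provides. */
    static uint64_t qpci_pc_pio_readq(QPCIBus *bus, uint32_t addr)
    {
        return (uint64_t)qpci_pc_pio_readl(bus, addr)
               | ((uint64_t)qpci_pc_pio_readl(bus, addr + 4) << 32);
    }

    static void qpci_pc_pio_writeq(QPCIBus *bus, uint32_t addr, uint64_t val)
    {
        qpci_pc_pio_writel(bus, addr, val & 0xffffffff);
        qpci_pc_pio_writel(bus, addr + 4, val >> 32);
    }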
This patch also converts the single current user of 64-bit accesses -
virtio-pci.c - to use the new mechanism rather than a sequence of eight
single-byte reads.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
In the libqos PCI code we now have accessors both for registers (byte
significance preserving) and for streaming data (byte address order
preserving). These exist in both the interface for qtest drivers and in
the machine specific backends.
However, the register-style accessors aren't actually necessary in the
backend. They can be implemented in terms of the byte address order
preserving accessors by the libqos wrappers. This works because PCI is
always little endian.
This does assume that the back end byte address order preserving accessors
will perform the equivalent of a single bus transaction for short lengths.
This is the case, and in fact they currently end up using the same
cpu_physical_memory_rw() implementation within the qtest accelerator.
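For instance, a 16-bit register read can be built on the backend's
streaming callback plus an endianness conversion, roughly like this
(sketch; the exact signatures may differ):

    /* Sketch: register-style read implemented on top of the byte
     * address order preserving backend accessor.  PCI registers are
     * little-endian, so convert the raw bytes after reading them. */
    uint16_t qpci_io_readw(QPCIDevice *dev, uint64_t addr)
    {
        uint16_t v;

        dev->bus->memread(dev->bus, addr, &v, sizeof(v));
        return le16_to_cpu(v);
    }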
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently PCI memory (aka MMIO) space is accessed via a set of readb/writeb
style accessors. This is what we want for accessing discrete registers of
a certain size. However, there are a few cases where we instead need a
"bag of bytes" style streaming interface to PCI MMIO space. This can be
either for streaming data style registers or when there's actual memory
rather than registers in PCI space, for example frame buffers or ivshmem.
This patch adds backend callbacks and libqos wrappers for this type of
byte address order preserving access.
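Usage from a test then looks something like this (sketch; the exact
wrapper signatures may differ):

    /* Sketch: streaming a chunk of frame-buffer-like PCI memory; byte
     * address order is preserved regardless of host endianness. */
    uint8_t buf[4096];

    qpci_memread(dev, addr, buf, sizeof(buf));
    memset(buf, 0xa5, sizeof(buf));
    qpci_memwrite(dev, addr, buf, sizeof(buf));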
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The PCI backends in libqos each supply iomap() and iounmap() functions
which are used to set up a specified PCI BAR. But PCI BAR allocation takes
place entirely within PCI space, so it doesn't really need per-backend
versions. For example, Linux includes generic BAR allocation code used on
platforms where that isn't done by firmware.
This patch merges the BAR allocation from the two existing backends into a
single simplified copy. The backends just need to set up some parameters
describing the windows of PCI IO and PCI memory addresses which are
available for allocation. Like both the existing versions, the new one uses
a simple bump allocator.
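The common allocator is essentially just a cursor bumped through the
window, along these lines (simplified sketch with assumed field names):

    /* Sketch: bump allocation of a BAR of 'size' bytes within the PCI
     * memory window.  BAR sizes are powers of two, so rounding the
     * cursor up to a multiple of 'size' gives natural alignment. */
    static uint64_t qpci_bar_alloc(QPCIBus *bus, uint64_t size)
    {
        uint64_t addr = (bus->mmio_alloc_ptr + size - 1) & ~(size - 1);

        g_assert(addr + size <= bus->mmio_limit);
        bus->mmio_alloc_ptr = addr + size;
        return addr;
    }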
Note that (again like the existing versions) this doesn't really handle
64-bit memory BARs properly. It is actually used for such a BAR by the
ivshmem test, and apparently the 32-bit MMIO BAR logic is close enough to
work, as long as the BAR isn't too big. Fixing that to properly handle
64-bit BAR allocation is a problem for another time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The PCI IO space (aka PIO, aka legacy IO) and PCI memory space (aka MMIO)
are distinct address spaces according to the PCI spec (although parts of one might be
aliased to parts of the other in some cases).
However, qpci_io_read*() and qpci_io_write*() can perform accesses to
either space depending on the address parameter. That's convenient for test case
drivers, since there are a fair few devices which can be controlled via
either a PIO or MMIO BAR but with an otherwise identical driver.
This is implemented by having addresses below 64kiB treated as PIO, and
those above treated as MMIO. This works because low addresses in memory
space are generally reserved for DMA rather than MMIO.
At the moment, this demultiplexing must be handled by each PCI backend
(pc and spapr, so far). There's no real reason for this - the current
encoding is likely to work for all platforms, and even if it doesn't we
can still use a more complex common encoding, since the values returned
from iomap() are semi-opaque.
This patch moves the demultiplexing into the common part of the libqos PCI
code, with the backends having simpler, separate accessors for PIO and
MMIO space. This also means we have a way of explicitly accessing either
space if it's necessary for some special case.
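With that, a common wrapper can demultiplex roughly as follows (sketch;
actual names may differ):

    /* Sketch: addresses below 64 kiB are routed to the PIO callbacks,
     * everything above to the memory space (MMIO) callbacks. */
    uint32_t qpci_io_readl(QPCIDevice *dev, uint64_t addr)
    {
        if (addr < 0x10000) {           /* 64 kiB: legacy IO space */
            return dev->bus->pio_readl(dev->bus, addr);
        } else {
            uint32_t v;

            dev->bus->memread(dev->bus, addr, &v, sizeof(v));
            return le32_to_cpu(v);
        }
    }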
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently, the MMIO space for accessing PCI on pseries guests begins at
1 TiB in guest address space. Each PCI host bridge (PHB) has a 64 GiB
chunk of address space in which it places its outbound PIO and 32-bit and
64-bit MMIO windows.
This scheme has several problems:
- It limits guest RAM to 1 TiB (though we have a limited fix for this
now)
- It limits the total MMIO window to 64 GiB. This is not always enough
for some of the large nVidia GPGPU cards
- Putting all the windows into a single 64 GiB area means that naturally
aligning things within it wastes more address space.
In addition there was a miscalculation in some of the defaults, which meant
that the MMIO windows for each PHB actually slightly overran the 64 GiB
region for that PHB. We got away without nasty consequences because
the overrun fit within an unused area at the beginning of the next PHB's
region, but it's not pretty.
This patch implements a new scheme which addresses those problems, and is
also closer to what bare metal hardware and pHyp guests generally use.
Because some guest versions (including most current distro kernels) can't
access PCI MMIO above 64 TiB, we put all the PCI windows between 32 TiB and
64 TiB. This is broken into 1 TiB chunks. The first 1 TiB contains the
PIO (64 kiB) and 32-bit MMIO (2 GiB) windows for all of the PHBs. Each
subsequent 1 TiB chunk contains a naturally aligned 64-bit MMIO window for
one PHB.
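For the 64-bit windows that placement boils down to something like the
following (illustrative only; the constants and names in the code may
differ):

    /* Sketch: PHB 'index' gets the (index + 1)'th 1 TiB chunk above
     * 32 TiB for its 64-bit MMIO window; the first chunk is shared by
     * the PIO and 32-bit MMIO windows of all the PHBs. */
    const uint64_t TiB = 1ULL << 40;
    uint64_t mmio64_base = 32 * TiB + (index + 1) * TiB;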
This reduces the number of allowed PHBs (without full manual configuration
of all the windows) from 256 to 31, but this should still be plenty in
practice.
We also change some of the default window sizes for manually configured
PHBs to saner values.
Finally, we adjust some tests and libqos so that they correctly use the
new default locations. Ideally libqos would parse the device tree given to
the guest, but that's a more complex problem for another time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Currently the functions in pci-spapr.c (like pci-pc.c on which it's based)
don't distinguish between 32-bit and 64-bit PCI MMIO. At the moment, the
qemu-side implementation is a bit weird and has a single MMIO window
straddling 32-bit and 64-bit regions, but we're likely to change that in
future.
In any case, pci-pc.c - and therefore the testcases using PCI - only handle
32-bit MMIOs for now. For spapr, despite whatever changes might happen to
the MMIO windows, the 32-bit window is likely to remain at 2..4 GiB in PCI
space.
So, explicitly limit pci-spapr.c to 32-bit MMIOs for now; we can add 64-bit
MMIO support back in when and if we need it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
In pci-spapr.c (as in pci-pc.c from which it was derived), the
pci_hole_start/pci_hole_size and pci_iohole_start/pci_iohole_size pairs[1]
essentially define the region of PCI (not CPU) addresses in which MMIO
or PIO BARs respectively will be allocated.
The size value is relative to the start value, but in pci-spapr.c it is
set to the entire size of the window supported by the (emulated) hardware,
even though the start values are *not* at the beginning of the emulated
windows. That means that if you tried to map enough PCI BARs, we'd messily
overrun the IO windows instead of failing in iomap() as we should.
This patch corrects this by calculating the hole sizes from the location
of the window in PCI space and the hole start.
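Concretely, the fix amounts to something like this (hypothetical window
variables):

    /* Sketch: each allocatable hole runs from the hole start to the end
     * of the corresponding window in PCI space, so the sizes must be
     * computed relative to the hole start rather than set to the full
     * window size. */
    bus->pci_hole_size = mmio32_win_start + mmio32_win_size
                         - bus->pci_hole_start;
    bus->pci_iohole_size = io_win_start + io_win_size
                           - bus->pci_iohole_start;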
[1] Those are bad names, but that's a problem for another time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
The libqos code for accessing PCI on the spapr machine type uses IOBASE()
and MMIOBASE() macros to determine the address in the CPU memory map of
the windows to PCI address space.
This is a detail of the implementation of PCI in the machine type; it's
not specified by the PAPR standard. Real guests would get the addresses of
the
PCI windows from the device tree.
Finding the device tree in libqos would be awkward, but we can at least
localize this knowledge of the implementation to the init function, saving
it in the QPCIBusSPAPR structure for use by the accessors.
That leaves only one place to fix if we alter the location of the PCI
windows, as we're planning to do.
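The result is a structure along these lines (illustrative field names):

    /* Sketch: CPU addresses of the PCI windows, recorded once at init
     * time instead of being hard-coded in IOBASE()/MMIOBASE() macros. */
    typedef struct QPCIBusSPAPR {
        QPCIBus bus;

        uint64_t pio_cpu_base;     /* CPU address of the PIO window */
        uint64_t mmio32_cpu_base;  /* CPU address of the 32-bit MMIO window */
    } QPCIBusSPAPR;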
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
[dwg: Fixed build problem on 32-bit hosts]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>