/*
 * QEMU<->ACPI BIOS PCI hotplug interface
 *
 * QEMU supports PCI hotplug via ACPI. This module
 * implements the interface between QEMU and the ACPI BIOS.
 * Interface specification - see docs/specs/acpi_pci_hotplug.txt
 *
 * Copyright (c) 2013, Red Hat Inc, Michael S. Tsirkin (mst@redhat.com)
 * Copyright (c) 2006 Fabrice Bellard
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License version 2.1 as published by the Free Software Foundation.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, see <http://www.gnu.org/licenses/>
 *
 * Contributions after 2012-01-13 are licensed under the terms of the
 * GNU GPL, version 2 or (at your option) any later version.
 */

#include "qemu/osdep.h"
#include "hw/acpi/pcihp.h"

#include "hw/pci-host/i440fx.h"
#include "hw/pci/pci.h"
#include "hw/pci/pci_bridge.h"
#include "hw/acpi/acpi.h"
#include "exec/address-spaces.h"
#include "hw/pci/pci_bus.h"
#include "migration/vmstate.h"
#include "qapi/error.h"
#include "qom/qom-qobject.h"
#include "trace.h"
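
/*
 * Layout of the hotplug I/O register block (base ACPI_PCIHP_ADDR, length
 * ACPI_PCIHP_SIZE); see docs/specs/acpi_pci_hotplug.txt for the
 * guest-visible semantics. PCI_UP_BASE/PCI_DOWN_BASE report pending
 * plug/unplug slot bitmaps, PCI_EJ_BASE triggers slot ejection,
 * PCI_RMV_BASE reports which slots are removable, PCI_SEL_BASE selects the
 * bus (BSEL) the other registers refer to, and PCI_AIDX_BASE returns a
 * device's acpi-index.
 */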
#define ACPI_PCIHP_ADDR 0xae00
#define ACPI_PCIHP_SIZE 0x0018
#define PCI_UP_BASE 0x0000
#define PCI_DOWN_BASE 0x0004
#define PCI_EJ_BASE 0x0008
#define PCI_RMV_BASE 0x000c
#define PCI_SEL_BASE 0x0010
#define PCI_AIDX_BASE 0x0014

typedef struct AcpiPciHpFind {
    int bsel;
    PCIBus *bus;
} AcpiPciHpFind;
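
/*
 * Bookkeeping for the 'acpi-index' property: indexes currently in use are
 * kept (GINT_TO_POINTER-encoded) in a lazily allocated, sorted GSequence so
 * that duplicates can be rejected at pre-plug time and entries can be
 * released again when a device is unplugged.
 */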
static gint g_cmp_uint32(gconstpointer a, gconstpointer b, gpointer user_data)
{
    return a - b;
}

static GSequence *pci_acpi_index_list(void)
{
    static GSequence *used_acpi_index_list;

    if (!used_acpi_index_list) {
        used_acpi_index_list = g_sequence_new(NULL);
    }
    return used_acpi_index_list;
}
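
/*
 * Return the ACPI_PCIHP_PROP_BSEL value assigned to @bus, or -1 if the
 * property is missing or not a valid hotplug bus index.
 */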
static int acpi_pcihp_get_bsel(PCIBus *bus)
{
    Error *local_err = NULL;
    uint64_t bsel = object_property_get_uint(OBJECT(bus), ACPI_PCIHP_PROP_BSEL,
                                             &local_err);

    if (local_err || bsel >= ACPI_PCIHP_MAX_HOTPLUG_BUS) {
        if (local_err) {
            error_free(local_err);
        }
        return -1;
    } else {
        return bsel;
    }
}

/* Assign BSEL property to all buses. In the future, this can be changed
 * to only assign to buses that support hotplug.
 */
static void *acpi_set_bsel(PCIBus *bus, void *opaque)
{
    unsigned *bsel_alloc = opaque;
    unsigned *bus_bsel;

    if (qbus_is_hotpluggable(BUS(bus))) {
        bus_bsel = g_malloc(sizeof *bus_bsel);

        *bus_bsel = (*bsel_alloc)++;
        object_property_add_uint32_ptr(OBJECT(bus), ACPI_PCIHP_PROP_BSEL,
                                       bus_bsel, OBJ_PROP_FLAG_READ);
    }

    return bsel_alloc;
}
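
/*
 * Hand out BSEL values once, on the first reset: walk all buses below the
 * i440fx host bridge and assign indexes, starting from
 * ACPI_PCIHP_BSEL_DEFAULT, to every hotpluggable bus.
 */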
static void acpi_set_pci_info(void)
{
    static bool bsel_is_set;
    PCIBus *bus;
    unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;

    if (bsel_is_set) {
        return;
    }
    bsel_is_set = true;

    bus = find_i440fx(); /* TODO: Q35 support */
    if (bus) {
        /* Scan all PCI buses. Set property to enable acpi based hotplug. */
        pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
    }
}
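
/*
 * Detach the hotplug handler from the root bus so that it is no longer
 * hotpluggable; called from acpi_pcihp_reset() when ACPI hotplug on the
 * root bus has been turned off.
 */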
static void acpi_pcihp_disable_root_bus(void)
{
    static bool root_hp_disabled;
    PCIBus *bus;

    if (root_hp_disabled) {
        return;
    }

    bus = find_i440fx();
    if (bus) {
        /* setting the hotplug handler to NULL makes the bus non-hotpluggable */
        qbus_set_hotplug_handler(BUS(bus), NULL);
    }
    root_hp_disabled = true;
    return;
}

static void acpi_pcihp_test_hotplug_bus(PCIBus *bus, void *opaque)
{
    AcpiPciHpFind *find = opaque;
    if (find->bsel == acpi_pcihp_get_bsel(bus)) {
        find->bus = bus;
    }
}

static PCIBus *acpi_pcihp_find_hotplug_bus(AcpiPciHpState *s, int bsel)
{
    AcpiPciHpFind find = { .bsel = bsel, .bus = NULL };

    if (bsel < 0) {
        return NULL;
    }

    pci_for_each_bus(s->root, acpi_pcihp_test_hotplug_bus, &find);

    /* Make bsel 0 eject root bus if bsel property is not set,
     * for compatibility with non-ACPI setups.
     * TODO: really needed?
     */
    if (!bsel && !find.bus) {
        find.bus = s->root;
    }

    /*
     * Check if find.bus is actually hotpluggable. If bsel is set to
     * NULL for example on the root bus in order to make it
     * non-hotpluggable, find.bus will match the root bus when bsel
     * is 0. See acpi_pcihp_test_hotplug_bus() above. Since the
     * bus is not hotpluggable however, we should not select the bus.
     * Instead, we should set find.bus to NULL in that case. In the check
     * below, we generalize this case for all buses, not just the root bus.
     * The callers of this function check for a null return value and
     * handle it appropriately.
     */
    if (find.bus && !qbus_is_hotpluggable(BUS(find.bus))) {
        find.bus = NULL;
    }
    return find.bus;
}

static bool acpi_pcihp_pc_no_hotplug(AcpiPciHpState *s, PCIDevice *dev)
{
    PCIDeviceClass *pc = PCI_DEVICE_GET_CLASS(dev);
    DeviceClass *dc = DEVICE_GET_CLASS(dev);
    /*
     * ACPI doesn't allow hotplug of bridge devices. Don't allow
     * hot-unplug of bridge devices unless they were added by hotplug
     * (and so, not described by acpi).
     */
    return (pc->is_bridge && !dev->qdev.hotplugged) || !dc->hotpluggable;
}
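
/*
 * Eject the lowest slot set in @slots on the bus selected by @bsel: clear
 * the pending up/down bits for that slot and unplug every hotpluggable
 * function found in it through its hotplug handler.
 */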
static void acpi_pcihp_eject_slot(AcpiPciHpState *s, unsigned bsel, unsigned slots)
{
    HotplugHandler *hotplug_ctrl;
    BusChild *kid, *next;
    int slot = ctz32(slots);
    PCIBus *bus = acpi_pcihp_find_hotplug_bus(s, bsel);

    trace_acpi_pci_eject_slot(bsel, slot);

    if (!bus || slot > 31) {
        return;
    }

    /* Mark request as complete */
    s->acpi_pcihp_pci_status[bsel].down &= ~(1U << slot);
    s->acpi_pcihp_pci_status[bsel].up &= ~(1U << slot);

    QTAILQ_FOREACH_SAFE(kid, &bus->qbus.children, sibling, next) {
        DeviceState *qdev = kid->child;
        PCIDevice *dev = PCI_DEVICE(qdev);
        if (PCI_SLOT(dev->devfn) == slot) {
            if (!acpi_pcihp_pc_no_hotplug(s, dev)) {
                hotplug_ctrl = qdev_get_hotplug_handler(qdev);
                hotplug_handler_unplug(hotplug_ctrl, qdev, &error_abort);
                object_unparent(OBJECT(qdev));
            }
        }
    }
}
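
/*
 * Reset-time refresh for one hotplug bus: flush any pending removals and
 * recompute the removability mask (hotplug_enable) from the devices that
 * are currently present.
 */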
static void acpi_pcihp_update_hotplug_bus(AcpiPciHpState *s, int bsel)
{
    BusChild *kid, *next;
    PCIBus *bus = acpi_pcihp_find_hotplug_bus(s, bsel);

    /* Execute any pending removes during reset */
    while (s->acpi_pcihp_pci_status[bsel].down) {
        acpi_pcihp_eject_slot(s, bsel, s->acpi_pcihp_pci_status[bsel].down);
    }

    s->acpi_pcihp_pci_status[bsel].hotplug_enable = ~0;

    if (!bus) {
        return;
    }
    QTAILQ_FOREACH_SAFE(kid, &bus->qbus.children, sibling, next) {
        DeviceState *qdev = kid->child;
        PCIDevice *pdev = PCI_DEVICE(qdev);
        int slot = PCI_SLOT(pdev->devfn);

        if (acpi_pcihp_pc_no_hotplug(s, pdev)) {
            s->acpi_pcihp_pci_status[bsel].hotplug_enable &= ~(1U << slot);
        }
    }
}

static void acpi_pcihp_update(AcpiPciHpState *s)
{
    int i;

    for (i = 0; i < ACPI_PCIHP_MAX_HOTPLUG_BUS; ++i) {
        acpi_pcihp_update_hotplug_bus(s, i);
    }
}
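
/*
 * Reset handler: optionally detach ACPI hotplug from the root bus, make
 * sure BSEL values have been assigned, and bring the per-bus slot status
 * back in sync with the devices that are actually present.
 */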
void acpi_pcihp_reset(AcpiPciHpState *s, bool acpihp_root_off)
{
    if (acpihp_root_off) {
        acpi_pcihp_disable_root_bus();
    }
    acpi_set_pci_info();
    acpi_pcihp_update(s);
}

#define ONBOARD_INDEX_MAX (16 * 1024 - 1)
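
/*
 * Pre-plug checks: a hot-plugged device must sit on a bus that has a BSEL
 * assigned, and a non-zero acpi-index must be within ONBOARD_INDEX_MAX and
 * unique among all present PCI devices.
 */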
void acpi_pcihp_device_pre_plug_cb(HotplugHandler *hotplug_dev,
                                   DeviceState *dev, Error **errp)
{
    PCIDevice *pdev = PCI_DEVICE(dev);

    /* Only hotplugged devices need the hotplug capability. */
    if (dev->hotplugged &&
        acpi_pcihp_get_bsel(pci_get_bus(PCI_DEVICE(dev))) < 0) {
        error_setg(errp, "Unsupported bus. Bus doesn't have property '"
                   ACPI_PCIHP_PROP_BSEL "' set");
        return;
    }

    /*
     * The acpi-index range is capped by systemd (see: udev-builtin-net_id.c).
     * As it is the only known consumer, honor its limit so that users don't
     * misconfigure QEMU and then wonder why acpi-index doesn't work.
     */
    if (pdev->acpi_index > ONBOARD_INDEX_MAX) {
        error_setg(errp, "acpi-index should be less than or equal to %u",
                   ONBOARD_INDEX_MAX);
        return;
    }

    /*
     * make sure that acpi-index is unique across all present PCI devices
     */
    if (pdev->acpi_index) {
        GSequence *used_indexes = pci_acpi_index_list();

        if (g_sequence_lookup(used_indexes, GINT_TO_POINTER(pdev->acpi_index),
                              g_cmp_uint32, NULL)) {
            error_setg(errp, "a PCI device with acpi-index = %" PRIu32
                       " already exists", pdev->acpi_index);
            return;
        }
        g_sequence_insert_sorted(used_indexes,
                                 GINT_TO_POINTER(pdev->acpi_index),
                                 g_cmp_uint32, NULL);
    }
}
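
/*
 * Plug handler: cold-plugged bridges get this ACPI hotplug handler installed
 * on their secondary bus; for hot-plugged devices the slot is latched in the
 * 'up' bitmap and an ACPI hotplug event is raised.
 */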
void acpi_pcihp_device_plug_cb(HotplugHandler *hotplug_dev, AcpiPciHpState *s,
                               DeviceState *dev, Error **errp)
{
    PCIDevice *pdev = PCI_DEVICE(dev);
    int slot = PCI_SLOT(pdev->devfn);
    int bsel;

    /* Don't send event when device is enabled during qemu machine creation:
     * it is present on boot, no hotplug event is necessary. We do send an
     * event when the device is disabled later. */
    if (!dev->hotplugged) {
        /*
         * Overwrite the default hotplug handler with the ACPI PCI one
         * for cold plugged bridges only.
         */
        if (!s->legacy_piix &&
            object_dynamic_cast(OBJECT(dev), TYPE_PCI_BRIDGE)) {
            PCIBus *sec = pci_bridge_get_sec_bus(PCI_BRIDGE(pdev));

            qbus_set_hotplug_handler(BUS(sec), OBJECT(hotplug_dev));
            /* We don't have to overwrite any other hotplug handler yet */
            assert(QLIST_EMPTY(&sec->child));
        }

        return;
    }

    bsel = acpi_pcihp_get_bsel(pci_get_bus(pdev));
    g_assert(bsel >= 0);
    s->acpi_pcihp_pci_status[bsel].up |= (1U << slot);
    acpi_send_event(DEVICE(hotplug_dev), ACPI_PCI_HOTPLUG_STATUS);
}
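
/*
 * Unplug completion: release the device's acpi-index (if any) so it can be
 * reused, then unrealize the device.
 */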
void acpi_pcihp_device_unplug_cb(HotplugHandler *hotplug_dev, AcpiPciHpState *s,
                                 DeviceState *dev, Error **errp)
{
    PCIDevice *pdev = PCI_DEVICE(dev);

    trace_acpi_pci_unplug(PCI_SLOT(PCI_DEVICE(dev)->devfn),
                          acpi_pcihp_get_bsel(pci_get_bus(PCI_DEVICE(dev))));

    /*
     * clean up acpi-index so it can be reused by another device
     */
    if (pdev->acpi_index) {
        GSequence *used_indexes = pci_acpi_index_list();

        g_sequence_remove(g_sequence_lookup(used_indexes,
                          GINT_TO_POINTER(pdev->acpi_index),
                          g_cmp_uint32, NULL));
    }

    qdev_unrealize(dev);
}
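
/*
 * Unplug request: mark the slot in the 'down' bitmap and notify the guest
 * with an ACPI hotplug event; the device is actually removed once the guest
 * writes the slot mask to PCI_EJ_BASE.
 */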
void acpi_pcihp_device_unplug_request_cb(HotplugHandler *hotplug_dev,
                                         AcpiPciHpState *s, DeviceState *dev,
                                         Error **errp)
{
    PCIDevice *pdev = PCI_DEVICE(dev);
    int slot = PCI_SLOT(pdev->devfn);
    int bsel = acpi_pcihp_get_bsel(pci_get_bus(pdev));

    trace_acpi_pci_unplug_request(bsel, slot);

    if (bsel < 0) {
        error_setg(errp, "Unsupported bus. Bus doesn't have property '"
                   ACPI_PCIHP_PROP_BSEL "' set");
        return;
    }

    s->acpi_pcihp_pci_status[bsel].down |= (1U << slot);
    acpi_send_event(DEVICE(hotplug_dev), ACPI_PCI_HOTPLUG_STATUS);
}
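
/*
 * Guest accessors for the hotplug register block. All registers are 32 bits
 * wide and operate on the bus previously selected through PCI_SEL_BASE; in
 * non-legacy (bridge-aware) mode a read of PCI_UP_BASE also clears the
 * pending-plug bitmap.
 */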
static uint64_t pci_read(void *opaque, hwaddr addr, unsigned int size)
{
    AcpiPciHpState *s = opaque;
    uint32_t val = 0;
    int bsel = s->hotplug_select;

    if (bsel < 0 || bsel >= ACPI_PCIHP_MAX_HOTPLUG_BUS) {
        return 0;
    }

    switch (addr) {
    case PCI_UP_BASE:
        val = s->acpi_pcihp_pci_status[bsel].up;
        if (!s->legacy_piix) {
            s->acpi_pcihp_pci_status[bsel].up = 0;
        }
        trace_acpi_pci_up_read(val);
        break;
    case PCI_DOWN_BASE:
        val = s->acpi_pcihp_pci_status[bsel].down;
        trace_acpi_pci_down_read(val);
        break;
    case PCI_EJ_BASE:
        trace_acpi_pci_features_read(val);
        break;
    case PCI_RMV_BASE:
        val = s->acpi_pcihp_pci_status[bsel].hotplug_enable;
        trace_acpi_pci_rmv_read(val);
        break;
    case PCI_SEL_BASE:
        val = s->hotplug_select;
        trace_acpi_pci_sel_read(val);
        break;
    case PCI_AIDX_BASE:
        val = s->acpi_index;
        s->acpi_index = 0;
        trace_acpi_pci_acpi_index_read(val);
        break;
    default:
        break;
    }

    return val;
}

static void pci_write(void *opaque, hwaddr addr, uint64_t data,
                      unsigned int size)
{
    int slot;
    PCIBus *bus;
    BusChild *kid, *next;
    AcpiPciHpState *s = opaque;

    s->acpi_index = 0;
    switch (addr) {
    case PCI_AIDX_BASE:
        /*
         * fetch acpi-index for the specified slot so that a follow-up read
         * from PCI_AIDX_BASE can return it to the guest
         */
        slot = ctz32(data);

        if (s->hotplug_select >= ACPI_PCIHP_MAX_HOTPLUG_BUS) {
            break;
        }

        bus = acpi_pcihp_find_hotplug_bus(s, s->hotplug_select);
        QTAILQ_FOREACH_SAFE(kid, &bus->qbus.children, sibling, next) {
            Object *o = OBJECT(kid->child);
            PCIDevice *dev = PCI_DEVICE(o);
            if (PCI_SLOT(dev->devfn) == slot) {
                s->acpi_index = object_property_get_uint(o, "acpi-index", NULL);
                break;
            }
        }
        trace_acpi_pci_acpi_index_write(s->hotplug_select, slot, s->acpi_index);
        break;
    case PCI_EJ_BASE:
        if (s->hotplug_select >= ACPI_PCIHP_MAX_HOTPLUG_BUS) {
            break;
        }
        acpi_pcihp_eject_slot(s, s->hotplug_select, data);
        trace_acpi_pci_ej_write(addr, data);
        break;
    case PCI_SEL_BASE:
        s->hotplug_select = s->legacy_piix ? ACPI_PCIHP_BSEL_DEFAULT : data;
        trace_acpi_pci_sel_write(addr, data);
    default:
        break;
    }
}

static const MemoryRegionOps acpi_pcihp_io_ops = {
    .read = pci_read,
    .write = pci_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .valid = {
        .min_access_size = 4,
        .max_access_size = 4,
    },
};
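
/*
 * Map the hotplug register block at ACPI_PCIHP_ADDR in @address_space_io and
 * expose its base and length as read-only QOM properties on @owner so that
 * other components (e.g. the ACPI table code) can look them up.
 */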
void acpi_pcihp_init(Object *owner, AcpiPciHpState *s, PCIBus *root_bus,
                     MemoryRegion *address_space_io, bool bridges_enabled)
{
    s->io_len = ACPI_PCIHP_SIZE;
    s->io_base = ACPI_PCIHP_ADDR;

    s->root = root_bus;
    s->legacy_piix = !bridges_enabled;

    memory_region_init_io(&s->io, owner, &acpi_pcihp_io_ops, s,
                          "acpi-pci-hotplug", s->io_len);
    memory_region_add_subregion(address_space_io, s->io_base, &s->io);

    object_property_add_uint16_ptr(owner, ACPI_PCIHP_IO_BASE_PROP, &s->io_base,
                                   OBJ_PROP_FLAG_READ);
    object_property_add_uint16_ptr(owner, ACPI_PCIHP_IO_LEN_PROP, &s->io_len,
                                   OBJ_PROP_FLAG_READ);
}
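
/*
 * vmstate test callback: the latched acpi_index value only needs to be
 * migrated while it is non-zero.
 */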
bool vmstate_acpi_pcihp_use_acpi_index(void *opaque, int version_id)
{
    AcpiPciHpState *s = opaque;
    return s->acpi_index;
}

const VMStateDescription vmstate_acpi_pcihp_pci_status = {
    .name = "acpi_pcihp_pci_status",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(up, AcpiPciHpPciStatus),
        VMSTATE_UINT32(down, AcpiPciHpPciStatus),
        VMSTATE_END_OF_LIST()
    }
};