qemu/include/hw/i386/pc.h

#ifndef HW_PC_H
#define HW_PC_H
#include "qemu/notify.h"
#include "qapi/qapi-types-common.h"
#include "qemu/uuid.h"
#include "hw/boards.h"
#include "hw/block/fdc.h"
#include "hw/block/flash.h"
#include "hw/i386/x86.h"
#include "hw/hotplug.h"
#include "qom/object.h"
#include "hw/i386/sgx-epc.h"
#include "hw/cxl/cxl.h"
#define MAX_IDE_BUS 2
/**
* PCMachineState:
* @acpi_dev: link to ACPI PM device that performs ACPI hotplug handling
* @boot_cpus: number of present VCPUs
*/
typedef struct PCMachineState {
    /*< private >*/
    X86MachineState parent_obj;

    /*< public >*/

    /* State for other subsystems/APIs: */
    Notifier machine_done;

    /* Pointers to devices and objects: */
    PCIBus *pcibus;
    I2CBus *smbus;
    PFlashCFI01 *flash[2];
    ISADevice *pcspk;
    DeviceState *iommu;
    BusState *idebus[MAX_IDE_BUS];

    /* Configuration options: */
    uint64_t max_ram_below_4g;
    OnOffAuto vmport;
    SmbiosEntryPointType smbios_entry_point_type;
    const char *south_bridge;

    bool acpi_build_enabled;
    bool smbus_enabled;
    bool sata_enabled;
    bool hpet_enabled;
    bool i8042_enabled;
    bool default_bus_bypass_iommu;
    bool fd_bootchk;
    uint64_t max_fw_size;

    /* ACPI Memory hotplug IO base address */
    hwaddr memhp_io_base;
    SGXEPCState sgx_epc;
    CXLState cxl_devices_state;
} PCMachineState;

#define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
#define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
#define PC_MACHINE_VMPORT "vmport"
#define PC_MACHINE_SMBUS "smbus"
#define PC_MACHINE_SATA "sata"
#define PC_MACHINE_I8042 "i8042"
#define PC_MACHINE_MAX_FW_SIZE "max-fw-size"
#define PC_MACHINE_SMBIOS_EP "smbios-entry-point-type"
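
/*
 * Illustrative sketch (not part of the upstream header): the names above are
 * QOM machine properties, so they can also be set programmatically with the
 * generic object_property_* setters; &error_fatal below is the usual
 * fail-hard error convention:
 *
 *     MachineState *ms = MACHINE(qdev_get_machine());
 *
 *     object_property_set_str(OBJECT(ms), PC_MACHINE_VMPORT, "off",
 *                             &error_fatal);
 *     object_property_set_bool(OBJECT(ms), PC_MACHINE_SATA, false,
 *                              &error_fatal);
 *
 * On the command line the same properties appear as -machine options,
 * e.g. "-machine vmport=off,sata=off".
 */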
/**
 * PCMachineClass:
 *
 * Compat fields:
 *
 * @enforce_aligned_dimm: check that a DIMM's address/size is aligned to
 *                        the backend's alignment value, if provided
 * @acpi_data_size: Size of the chunk of memory at the top of RAM
 *                  for the BIOS ACPI tables and other BIOS
 *                  data structures.
 * @gigabyte_align: Make sure that guest addresses aligned at
 *                  1 GiB boundaries get mapped to host
 *                  addresses aligned at 1 GiB boundaries. This
 *                  way we can use 1 GiB pages in the host.
 */
struct PCMachineClass {
    /*< private >*/
    X86MachineClass parent_class;

    /*< public >*/

    /* Device configuration: */
    bool pci_enabled;
    const char *default_south_bridge;

    /* Compat options: */

    /* Default CPU model version.  See x86_cpu_set_default_version(). */
    int default_cpu_version;

    /* ACPI compat: */
    bool has_acpi_build;
    bool rsdp_in_ram;
    int legacy_acpi_table_size;
    unsigned acpi_data_size;
    int pci_root_uid;

    /* SMBIOS compat: */
    bool smbios_defaults;
    bool smbios_legacy_mode;
    bool smbios_uuid_encoded;
    SmbiosEntryPointType default_smbios_ep_type;

    /* RAM / address space compat: */
    bool gigabyte_align;
    bool has_reserved_memory;
    bool enforce_aligned_dimm;
    bool broken_reserved_end;
    bool enforce_amd_1tb_hole;

    /* generate legacy CPU hotplug AML */
    bool legacy_cpu_hotplug;

    /* use PVH to load kernels that support this feature */
    bool pvh_enabled;

    /* create kvmclock device even when KVM PV features are not exposed */
    bool kvmclock_create_always;

    /* resizable acpi blob compat */
    bool resizable_acpi_blob;

    /*
     * Whether the machine type implements a broken 32-bit address space
     * bound check for memory.
     */
    bool broken_32bit_mem_addr_check;
};

#define TYPE_PC_MACHINE "generic-pc-machine"
OBJECT_DECLARE_TYPE(PCMachineState, PCMachineClass, PC_MACHINE)
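
/*
 * Illustrative sketch (not part of the upstream header): OBJECT_DECLARE_TYPE
 * generates the PC_MACHINE(), PC_MACHINE_CLASS() and PC_MACHINE_GET_CLASS()
 * cast macros, so board code can get from a generic Object to the
 * PC-specific state and class:
 *
 *     PCMachineState *pcms = PC_MACHINE(qdev_get_machine());
 *     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
 *
 *     if (pcmc->pci_enabled && pcms->pcibus) {
 *         ... walk the PCI bus ...
 *     }
 */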
/* ioapic.c */
GSIState *pc_gsi_create(qemu_irq **irqs, bool pci_enabled);
/* pc.c */
void pc_acpi_smi_interrupt(void *opaque, int irq, int level);
#define PCI_HOST_PROP_RAM_MEM "ram-mem"
#define PCI_HOST_PROP_PCI_MEM "pci-mem"
#define PCI_HOST_PROP_SYSTEM_MEM "system-mem"
#define PCI_HOST_PROP_IO_MEM "io-mem"
#define PCI_HOST_PROP_PCI_HOLE_START "pci-hole-start"
#define PCI_HOST_PROP_PCI_HOLE_END "pci-hole-end"
#define PCI_HOST_PROP_PCI_HOLE64_START "pci-hole64-start"
#define PCI_HOST_PROP_PCI_HOLE64_END "pci-hole64-end"
#define PCI_HOST_PROP_PCI_HOLE64_SIZE "pci-hole64-size"
#define PCI_HOST_BELOW_4G_MEM_SIZE "below-4g-mem-size"
#define PCI_HOST_ABOVE_4G_MEM_SIZE "above-4g-mem-size"
void pc_pci_as_mapping_init(MemoryRegion *system_memory,
                            MemoryRegion *pci_address_space);
void xen_load_linux(PCMachineState *pcms);
void pc_memory_init(PCMachineState *pcms,
                    MemoryRegion *system_memory,
                    MemoryRegion *rom_memory,
                    uint64_t pci_hole64_size);
uint64_t pc_pci_hole64_start(void);
DeviceState *pc_vga_init(ISABus *isa_bus, PCIBus *pci_bus);
void pc_basic_device_init(struct PCMachineState *pcms,
                          ISABus *isa_bus, qemu_irq *gsi,
                          ISADevice *rtc_state,
                          bool create_fdctrl,
                          uint32_t hpet_irqs);
void pc_cmos_init(PCMachineState *pcms,
                  ISADevice *s);
void pc_nic_init(PCMachineClass *pcmc, ISABus *isa_bus, PCIBus *pci_bus);
void pc_i8259_create(ISABus *isa_bus, qemu_irq *i8259_irqs);
/* port92.c */
#define PORT92_A20_LINE "a20"
#define TYPE_PORT92 "port92"
/* pc_sysfw.c */
void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
bool pc_system_ovmf_table_find(const char *entry, uint8_t **data,
                               int *data_len);
void pc_system_parse_ovmf_flash(uint8_t *flash_ptr, size_t flash_size);
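
/*
 * Illustrative sketch (not part of the upstream header): after the OVMF
 * image has been parsed with pc_system_parse_ovmf_flash(), callers look up
 * entries in the OVMF GUIDed table by GUID string; on success *data points
 * at the entry's content and *data_len holds its size.  The GUID below is a
 * made-up placeholder:
 *
 *     uint8_t *data;
 *     int data_len;
 *
 *     if (pc_system_ovmf_table_find("01234567-89ab-cdef-0123-456789abcdef",
 *                                   &data, &data_len)) {
 *         ... consume data[0] .. data[data_len - 1] ...
 *     }
 */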
/* sgx.c */
void pc_machine_init_sgx_epc(PCMachineState *pcms);
extern GlobalProperty pc_compat_8_2[];
extern const size_t pc_compat_8_2_len;
extern GlobalProperty pc_compat_8_1[];
extern const size_t pc_compat_8_1_len;
extern GlobalProperty pc_compat_8_0[];
extern const size_t pc_compat_8_0_len;
extern GlobalProperty pc_compat_7_2[];
extern const size_t pc_compat_7_2_len;
extern GlobalProperty pc_compat_7_1[];
extern const size_t pc_compat_7_1_len;
extern GlobalProperty pc_compat_7_0[];
extern const size_t pc_compat_7_0_len;
extern GlobalProperty pc_compat_6_2[];
extern const size_t pc_compat_6_2_len;
extern GlobalProperty pc_compat_6_1[];
extern const size_t pc_compat_6_1_len;
extern GlobalProperty pc_compat_6_0[];
extern const size_t pc_compat_6_0_len;
extern GlobalProperty pc_compat_5_2[];
extern const size_t pc_compat_5_2_len;
extern GlobalProperty pc_compat_5_1[];
extern const size_t pc_compat_5_1_len;
extern GlobalProperty pc_compat_5_0[];
extern const size_t pc_compat_5_0_len;
extern GlobalProperty pc_compat_4_2[];
extern const size_t pc_compat_4_2_len;
extern GlobalProperty pc_compat_4_1[];
extern const size_t pc_compat_4_1_len;
extern GlobalProperty pc_compat_4_0[];
extern const size_t pc_compat_4_0_len;
extern GlobalProperty pc_compat_3_1[];
extern const size_t pc_compat_3_1_len;
extern GlobalProperty pc_compat_3_0[];
extern const size_t pc_compat_3_0_len;
extern GlobalProperty pc_compat_2_12[];
extern const size_t pc_compat_2_12_len;
extern GlobalProperty pc_compat_2_11[];
extern const size_t pc_compat_2_11_len;
extern GlobalProperty pc_compat_2_10[];
extern const size_t pc_compat_2_10_len;
extern GlobalProperty pc_compat_2_9[];
extern const size_t pc_compat_2_9_len;
extern GlobalProperty pc_compat_2_8[];
extern const size_t pc_compat_2_8_len;
extern GlobalProperty pc_compat_2_7[];
extern const size_t pc_compat_2_7_len;
extern GlobalProperty pc_compat_2_6[];
extern const size_t pc_compat_2_6_len;
extern GlobalProperty pc_compat_2_5[];
extern const size_t pc_compat_2_5_len;
extern GlobalProperty pc_compat_2_4[];
extern const size_t pc_compat_2_4_len;
extern GlobalProperty pc_compat_2_3[];
extern const size_t pc_compat_2_3_len;
extern GlobalProperty pc_compat_2_2[];
extern const size_t pc_compat_2_2_len;
extern GlobalProperty pc_compat_2_1[];
extern const size_t pc_compat_2_1_len;
extern GlobalProperty pc_compat_2_0[];
extern const size_t pc_compat_2_0_len;
extern GlobalProperty pc_compat_1_7[];
extern const size_t pc_compat_1_7_len;
extern GlobalProperty pc_compat_1_6[];
extern const size_t pc_compat_1_6_len;
extern GlobalProperty pc_compat_1_5[];
extern const size_t pc_compat_1_5_len;
extern GlobalProperty pc_compat_1_4[];
extern const size_t pc_compat_1_4_len;
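
/*
 * Illustrative sketch (not part of the upstream header): versioned machine
 * types chain these compat arrays in their machine-options functions, each
 * version first applying the next newer version's options and then adding
 * its own compat properties (function names below are hypothetical):
 *
 *     static void pc_example_8_1_machine_options(MachineClass *m)
 *     {
 *         pc_example_8_2_machine_options(m);
 *         compat_props_add(m->compat_props, pc_compat_8_1, pc_compat_8_1_len);
 *     }
 */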
#define DEFINE_PC_MACHINE(suffix, namestr, initfn, optsfn) \
    static void pc_machine_##suffix##_class_init(ObjectClass *oc, void *data) \
    { \
        MachineClass *mc = MACHINE_CLASS(oc); \
        optsfn(mc); \
        mc->init = initfn; \
    } \
    static const TypeInfo pc_machine_type_##suffix = { \
        .name       = namestr TYPE_MACHINE_SUFFIX, \
        .parent     = TYPE_PC_MACHINE, \
        .class_init = pc_machine_##suffix##_class_init, \
    }; \
    static void pc_machine_init_##suffix(void) \
    { \
        type_register(&pc_machine_type_##suffix); \
    } \
    type_init(pc_machine_init_##suffix)
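
/*
 * Illustrative sketch (not part of the upstream header): a board file would
 * typically instantiate a machine type with the macro above roughly like
 * this (names are hypothetical):
 *
 *     static void pc_example_machine_options(MachineClass *mc)
 *     {
 *         mc->desc = "Example PC machine";
 *     }
 *
 *     static void pc_example_init(MachineState *machine)
 *     {
 *         ... set up memory, firmware and devices for this board ...
 *     }
 *
 *     DEFINE_PC_MACHINE(example, "pc-example", pc_example_init,
 *                       pc_example_machine_options);
 */
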
#endif