For the most part we can use the new generic routine,
though exceptions need some post-processing to sort
invalid from integer overflow.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230527141910.1885950-4-richard.henderson@linaro.org>
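As a rough illustration of the post-processing mentioned above, a target can inspect the softfloat exception flags after the non-saturating conversion and use the input value to tell the two cases apart. This is only a sketch: the helper name float64_to_int64_modulo(), its flag behaviour, and the env->fp_status / in context are assumptions, not the literal code of this series.

    /* Sketch: distinguish "invalid" (NaN input) from plain integer overflow. */
    set_float_exception_flags(0, &env->fp_status);
    int64_t res = float64_to_int64_modulo(in, float_round_to_zero,
                                          &env->fp_status);   /* assumed name */
    int flags = get_float_exception_flags(&env->fp_status);

    if (flags & float_flag_invalid) {
        if (float64_is_any_nan(in)) {
            /* invalid operation: NaN input, result is target-defined */
        } else {
            /* integer overflow: result is only meaningful modulo 2**64 */
        }
    }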
Test for invalid, integer overflow, and inexact.
Test for proper result, modulo 2**64.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230527141910.1885950-3-richard.henderson@linaro.org>
Add versions of float64_to_int* which do not saturate the result.
Reviewed-by: Christoph Muellner <christoph.muellner@vrull.eu>
Tested-by: Christoph Muellner <christoph.muellner@vrull.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230527141910.1885950-2-richard.henderson@linaro.org>
Ensure that both the start and last addresses are within
the same guest page.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230629082522.606219-3-mark.cave-ayland@ilande.co.uk>
[rth: Use tcg_debug_assert, simplify the expression]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
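The check itself is small; one plausible shape for it (a sketch, not necessarily the exact simplified expression that was used) is:

    /* Sketch: both addresses must fall within the same guest page. */
    tcg_debug_assert((start & TARGET_PAGE_MASK) == (last & TARGET_PAGE_MASK));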
Due to a copy-paste error in tb_invalidate_phys_range, the wrong
start address was passed to tb_invalidate_phys_page_range__locked.
The correct behaviour is to use the start of each page in turn.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: e506ad6a05 ("accel/tcg: Pass last not end to tb_invalidate_phys_range")
Message-Id: <20230629082522.606219-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
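The intended shape of the loop, with each page passed its own start address, is roughly the following. This is a simplified sketch: 'pages' and 'pd' stand in for the locked page-collection arguments, and the argument list is abbreviated.

    /* Sketch: invalidate page by page, using each page's own start. */
    for (tb_page_addr_t page = start & TARGET_PAGE_MASK;
         page <= last;
         page += TARGET_PAGE_SIZE) {
        tb_page_addr_t page_last = MIN(page | ~TARGET_PAGE_MASK, last);

        tb_invalidate_phys_page_range__locked(pages, pd, page, page_last, 0);
    }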
Add some ifdefs to avoid an unused function and unused variable.
Fixes: de1f8ce0ab ("ui/dbus: use shared D3D11 Texture2D when possible")
Co-developed-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <336f7697-bcfa-1f5f-e411-6859815aa26c@eik.bme.hu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* Create an initial stack frame for the main() function of the s390-ccw bios
* Clean up type definitions in the s390-ccw bios
Merge tag 'pull-request-2023-06-29' of https://gitlab.com/thuth/qemu into staging
* Fix a compilation issue in the s390-ccw bios with Clang + binutils 2.40
* Create an initial stack frame for the main() function of the s390-ccw bios
* Clean up type definitions in the s390-ccw bios
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmSd1MwRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbUNAg//aO7pkzKPIUXG/g8PSzzgjYu9bDTketrQ
# P08wk1jj9CQMLN6dcnVnmzPhC4EqyrZqMYvRH4qFPLJmi0m+Jq3fEEkVzKbI3baO
# 0qQX6DNJVLn6qcgvZ8+ZjkLmuWn/lN4+MH92vdUgpkCcj5y7FB4FjoaG+Z0yZxsS
# YI6gG8D/i6fnq0zsKGMzmzHCswmN4s9qnY9a4nLV0YeMnrZJjUmUUKomWv0FP5jM
# qtLf6pRtgR4u/WD9ktwjISlOn7AKQeCYgZcMu1kBnrSWDjhLytUrv8h2JqRxGOap
# nRtdFzTvgeWKJbCX9v+XLb1bqzFj/LLgoCRzUOqV1CdBKf3JycIXyLMpTJ1+kV4J
# NnzCjnfq/LSDwwCjeg3cRBUFjGkuHBZwQzBh5m4xXBqae07UhMGpWBmhIh7qgPy2
# RXox0xK8Ot/vhYxtNojOiEW0Wp4KJElB9Wxn1Vz0kX4OXRcxHu9CDazZXTKBuBGA
# YWZ9HbsquvwNMV5pgCuXzVWW3FCzrhGgtVYREwYyBIInJaEGCWKCyMAuDXb4fkWL
# eS0Mryp3AMaJ6CidK2ELWygMkKA8xDF8pKm5jgQWRhs5jirydi1B4hPeGFsm1vUI
# TYs08XuC9p66O2Ffn2Sc/uAXbe/FQ7Ce6EbGUUetpafo9FxPhbP28hPUhkcHt68Y
# tmGzqAuwgxc=
# =oWSq
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 29 Jun 2023 09:00:28 PM CEST
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [undefined]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [undefined]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2023-06-29' of https://gitlab.com/thuth/qemu:
pc-bios: Update the s390 bios images with the recent changes
pc-bios/s390-ccw: Don't use __bss_start with the "larl" instruction
pc-bios/s390-ccw: Move the stack array into start.S
pc-bios/s390-ccw: Provide space for initial stack frame in start.S
pc-bios/s390-ccw: Fix indentation in start.S
pc-bios/s390-ccw/Makefile: Use -z noexecstack to silence linker warning
pc-bios/s390-ccw: Get rid of the __u* types
s390-ccw: Getting rid of ulong
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When vfio_realize fails, the mmap_timer used for INTx optimization
isn't freed. As this timer isn't activated yet, the potential impact
is just a piece of leaked memory.
Fixes: ea486926b0 ("vfio-pci: Update slow path INTx algorithm timer related")
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
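The missing cleanup is essentially a one-liner on the error path, roughly as follows (a sketch; the exact placement in vfio_realize differs):

    /* Sketch: free the INTx mmap timer when realize bails out. */
    if (vdev->intx.mmap_timer) {
        timer_free(vdev->intx.mmap_timer);
    }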
The KVM irqchip notifier is only registered if the device supports
INTx; however, it is unconditionally removed in the vfio realize error
path. If the assigned device does not support INTx, this will cause
QEMU to crash when vfio realize fails. Change it to remove the
notifier only if the notify hook has been set up.
Before fix:
(qemu) device_add vfio-pci,host=81:11.1,id=vfio1,bus=root1,xres=1
Connection closed by foreign host.
After fix:
(qemu) device_add vfio-pci,host=81:11.1,id=vfio1,bus=root1,xres=1
Error: vfio 0000:81:11.1: xres and yres properties require display=on
(qemu)
Fixes: c5478fea27 ("vfio/pci: Respond to KVM irqchip change notifier")
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
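In code, the conditional removal described above looks roughly like this (a sketch based on the commit description):

    /* Sketch: only unregister the notifier if its notify hook was set up. */
    if (vdev->irqchip_change_notifier.notify) {
        kvm_irqchip_remove_change_notifier(&vdev->irqchip_change_notifier);
    }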
Cédric has stepped up involvement in vfio, reviewing and managing
patches, as well as pull requests. This work deserves gratitude and
punishment with a promotion to co-maintainer ;)
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
The major parts of VFIO migration are supported today in QEMU. This
includes basic VFIO migration, device dirty page tracking and precopy
support.
Thus, at this point in time, it seems appropriate to make VFIO migration
non-experimental: remove the x prefix from the enable_migration property,
change it to ON_OFF_AUTO and let the default value be AUTO.
In addition, make the following adjustments:
1. When enable_migration is ON and migration is not supported, fail VFIO
device realization.
2. When enable_migration is AUTO (i.e., not explicitly enabled), require
device dirty tracking support. This is because device dirty tracking
is currently the only method to do dirty page tracking, which is
essential for migrating with a reasonable downtime. Setting
enable_migration to ON will not require device dirty tracking.
3. Make migration error and blocker messages more elaborate.
4. Remove error prints in vfio_migration_query_flags().
5. Rename trace_vfio_migration_probe() to
trace_vfio_migration_realize().
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
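A minimal sketch of the resulting ON/OFF/AUTO handling follows; migration_supported and dirty_tracking_supported are purely illustrative placeholders, and the property location on vbasedev is assumed from the description above.

    /* Sketch: how the OnOffAuto property could gate migration support. */
    switch (vbasedev->enable_migration) {
    case ON_OFF_AUTO_ON:
        if (!migration_supported) {
            error_setg(errp, "%s: VFIO migration is not supported",
                       vbasedev->name);
            return -EINVAL;                 /* fail device realization */
        }
        break;
    case ON_OFF_AUTO_AUTO:
        if (!dirty_tracking_supported) {
            /* keep the migration blocker: downtime would be unreasonable */
            return 0;
        }
        break;
    case ON_OFF_AUTO_OFF:
    default:
        return 0;                           /* migration stays blocked */
    }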
Currently, VFIO bytes_transferred is not reset properly:
1. bytes_transferred is not reset after a VM snapshot (so a migration
following a snapshot will report an incorrect value).
2. bytes_transferred is a single counter for all VFIO devices; however,
upon migration failure it is reset multiple times, by each VFIO
device.
Fix it by introducing a new function vfio_reset_bytes_transferred() and
calling it during migration and snapshot start.
Remove existing bytes_transferred reset in VFIO migration state
notifier, which is not needed anymore.
Fixes: 3710586caa ("qapi: Add VFIO devices migration stats in Migration stats")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
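The helper itself is trivial; something along these lines (a sketch, assuming the counter's exact type and location):

    /* Sketch: a single VFIO-wide counter, reset once per migration/snapshot. */
    static unsigned long bytes_transferred;

    void vfio_reset_bytes_transferred(void)
    {
        bytes_transferred = 0;
    }

It is then called once from the generic migration and snapshot start paths instead of being reset per device on failure.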
When vfio_enable_vectors() returns with fewer vectors than the requested
nr_vectors, we retry with what the kernel reported back. But the retry
path doesn't call vfio_prepare_kvm_msi_virq_batch(), and this results in:
qemu-system-aarch64: vfio: Error: Failed to enable 4 MSI vectors, retry with 1
qemu-system-aarch64: ../hw/vfio/pci.c:602: vfio_commit_kvm_msi_virq_batch: Assertion `vdev->defer_kvm_irq_routing' failed
Fixes: dc580d51f7 ("vfio: defer to commit kvm irq routing when enable msi/msix")
Reviewed-by: Longpeng <longpeng2@huawei.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
NVIDIA Turing and newer GPUs implement the MSI-X capability at the offset
previously reserved for use by hypervisors to implement the GPUDirect
Cliques capability. A revised specification provides an alternate
location. Add a config space walk to the quirk to check for conflicts,
allowing us to fall back to the new location or generate an error at the
quirk setup rather than when the real conflicting capability is added
should there be no available location.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Add a common helper implementing the realloc algorithm for handling
capabilities.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Robin Voetter <robin@streamhpc.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
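The general pattern of such a realloc-based growth helper is sketched below; this is a generic illustration, not the actual QEMU helper or its signature.

    /* Generic sketch: grow an array geometrically until it fits 'need'. */
    static void *grow_caps(void *buf, size_t elem_size,
                           size_t *allocated, size_t need)
    {
        if (need > *allocated) {
            size_t n = *allocated ? *allocated : 8;

            while (n < need) {
                n *= 2;
            }
            buf = g_realloc_n(buf, n, elem_size);
            *allocated = n;
        }
        return buf;
    }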
Loading of a VFIO device's data can take a substantial amount of time as
the device may need to allocate resources, prepare internal data
structures, etc. This can increase migration downtime, especially for
VFIO devices with a lot of resources.
To solve this, VFIO migration uAPI defines "initial bytes" as part of
its precopy data stream. Initial bytes can be used in various ways to
improve VFIO migration performance. For example, it can be used to
transfer device metadata to pre-allocate resources in the destination.
However, for this to work we need to make sure that all initial bytes
are sent and loaded in the destination before the source VM is stopped.
Use migration switchover ack capability to make sure a VFIO device's
initial bytes are sent and loaded in the destination before the source
stops the VM and attempts to complete the migration.
This can significantly reduce migration downtime for some devices.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Pre-copy support allows the VFIO device data to be transferred while the
VM is running. This helps to accommodate VFIO devices that have a large
amount of data that needs to be transferred, and it can reduce migration
downtime.
Pre-copy support is optional in VFIO migration protocol v2.
Implement pre-copy of VFIO migration protocol v2 and use it for devices
that support it. A full description can be found in the following
Linux commit: 4db52602a607 ("vfio: Extend the device migration protocol
with PRE_COPY").
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
VFIO migration flags are queried once in vfio_migration_init(). Store
them in VFIOMigration so they can be used later to check the device's
migration capabilities without re-querying them.
This will be used in the next patch to check if the device supports
precopy migration.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Refactor vfio_save_block() to return the size of saved data on success
and -errno on error.
This will be used in the next patch to implement VFIO migration pre-copy
support.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
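In terms of the calling convention this establishes (a sketch; the exact signature and the f / migration arguments are assumed):

    /* Sketch: vfio_save_block() now returns the saved size or -errno. */
    ssize_t sz = vfio_save_block(f, migration);
    if (sz < 0) {
        return sz;      /* propagate -errno */
    }
    /* sz is the amount of device state saved in this block */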
Add a test for the migration switchover ack capability. The test runs
without devices that support this capability, but it is still useful to
make sure the capability doesn't break anything.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Now that switchover ack logic has been implemented, enable the
capability.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Implement switchover ack logic. This prevents the source from stopping
the VM and completing the migration until an ACK is received from the
destination that it's OK to do so.
To achieve this, a new SaveVMHandlers handler switchover_ack_needed()
and a new return path message MIG_RP_MSG_SWITCHOVER_ACK are added.
The switchover_ack_needed() handler is called during migration setup in
the destination to check if switchover ack is used by the migrated
device.
When switchover is approved by all migrated devices in the destination
that support this capability, the MIG_RP_MSG_SWITCHOVER_ACK return path
message is sent to the source to notify it that it's OK to do
switchover.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
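A minimal sketch of what a device's switchover_ack_needed() hook might look like on the destination; the vfio_precopy_supported() helper name is an assumption here, not necessarily the actual code.

    /* Sketch: tell the migration core whether to wait for our ack. */
    static bool vfio_switchover_ack_needed(void *opaque)
    {
        VFIODevice *vbasedev = opaque;

        /* assumed helper: ack only matters when precopy initial bytes are used */
        return vfio_precopy_supported(vbasedev);
    }

The hook is wired into the device's SaveVMHandlers; once every such device reports ready, the destination sends MIG_RP_MSG_SWITCHOVER_ACK back over the return path.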
Migration downtime estimation is calculated based on bandwidth and
remaining migration data. This assumes that loading of migration data in
the destination takes a negligible amount of time and that downtime
depends only on network speed.
While this may be true for RAM, it's not necessarily true for other
migrated devices. For example, loading the data of a VFIO device in the
destination might require from the device to allocate resources, prepare
internal data structures and so on. These operations can take a
significant amount of time which can increase migration downtime.
This patch adds a new capability "switchover ack" that prevents the
source from stopping the VM and completing the migration until an ACK
is received from the destination that it's OK to do so.
This can be used by migrated devices in various ways to reduce downtime.
For example, a device can send initial precopy metadata to pre-allocate
resources in the destination and use this capability to make sure that
the pre-allocation is completed before the source VM is stopped, so it
will have full effect.
This new capability relies on the return path capability to communicate
from the destination back to the source.
The actual implementation of the capability will be added in the
following patches.
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Tested-by: YangHang Liu <yanghliu@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
The startup code of the bios has been slightly changed; apart
from that, there should not be any functional changes this time.
Signed-off-by: Thomas Huth <thuth@redhat.com>
start.S currently cannot be compiled with Clang 16 and binutils 2.40:
ld: start.o(.text+0x8): misaligned symbol `__bss_start' (0xc1e5) for
relocation R_390_PC32DBL
According to the built-in linker script of ld, the symbol __bss_start
can actually point *before* the .bss section and does not need to have
any alignment, so in certain situations (like when using the internal
assembler of Clang), the __bss_start symbol can indeed be unaligned
and thus it is not suitable for being used with the "larl" instruction
that needs an address that is at least aligned to halfwords.
The problem went unnoticed so far since binutils <= 2.39 did not
check the alignment, but starting with binutils 2.40, such unaligned
addresses are now refused.
Fix it by loading the address indirectly instead.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2216662
Reported-by: Miroslav Rezanina <mrezanin@redhat.com>
Suggested-by: Andreas Krebbel <andreas.krebbel@de.ibm.com>
Message-Id: <20230629104821.194859-8-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The stack array is only referenced from the start-up code (which is
shared between the s390-ccw.img and the s390-netboot.img), but it is
currently declared twice, once in main.c and once in netmain.c.
It makes more sense to declare this in start.S instead - which will
also be helpful in the next patch, since we need to mention the .bss
section in start.S in that patch.
While we're at it, let's also drop the huge alignment of the stack,
since there is no technical requirement for aligning it to page
boundaries.
Message-Id: <20230627074703.99608-4-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Providing the space for a stack frame is the duty of the caller,
so we should reserve 160 bytes before jumping into the main function.
Otherwise the main() function might write past the stack array.
While we're at it, add a proper STACK_SIZE macro for the stack size
instead of using magic numbers (this is also required for the following
patch).
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Message-Id: <20230627074703.99608-3-thuth@redhat.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
start.S is currently indented with a mixture of spaces and tabs, which
is quite ugly. QEMU coding style says indentation should be 4 spaces,
and this is also what we are using in the assembler files in the
tests/tcg/s390x/ folder already, so let's adjust start.S accordingly.
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Message-Id: <20230627074703.99608-2-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Recent versions of ld complain when linking the s390-ccw bios:
/usr/bin/ld: warning: start.o: missing .note.GNU-stack section implies
executable stack
/usr/bin/ld: NOTE: This behaviour is deprecated and will be removed in
a future version of the linker
We can silence the warning by telling the linker to mark the stack
as not executable.
Message-Id: <20230622130822.396793-1-thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The types starting with double underscores were likely introduced
into the s390-ccw bios to be able to re-use structs from the Linux
kernel, but the corresponding structs in cio.h were changed in the
kernel a long time ago to no longer use the variants with the double
underscores:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/drivers/s390/cio/cio.h?id=cd6b4f27b9bb2a
So it would be good to replace these in the s390-ccw bios now, too.
Message-Id: <20230627114101.122231-1-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
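The mapping is mechanical, since the double-underscore aliases are just the standard fixed-width types under another name (illustrative):

    /* The __u* aliases correspond one-to-one to the standard types ... */
    typedef uint8_t  __u8;
    typedef uint16_t __u16;
    typedef uint32_t __u32;
    typedef uint64_t __u64;
    /* ... so users can switch to uint*_t directly and the aliases can go. */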
Any good reason why this still exists?
I can understand u* and __u* being Linux-kernel-like, but ulong?
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230629104821.194859-2-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There are timeouts in the cross-i386-tci job that are related to plugins.
Restrict this job to basic TCI testing.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230629130844.151453-1-richard.henderson@linaro.org>
* Fix backwards time with -icount auto
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* Make named CPU models usable for qemu-{i386,x86_64}
* Fix backwards time with -icount auto
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmSdRiQUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroOqcwf9FGAqZ+0V34Y8XeXMu8Es3bFjEKG8
# t3BpVNhTBOYDPvpshnPVx2I29nRT2opc1C4YkjMAv5/1nivj1kDM7hDObOSJQvqy
# 5FgTsJYqRtGj+J7uVBrspWZsP8BYeykKmXR6deBOPvCuw5nnLdDQ3dLV2F26lKUu
# lsFyEVbi4dzf8+TVuNIXEg7mVBYytjBQwBmmHgeOofeikjq9WEudr49mwJMCHyzl
# iXCatnctXGKZYSnp+eHIBiFRdSzjqdgrDRa0ysSqABoBI1pmkhyQKSay6cSjfG4n
# gFlqPF/i9RqAWpsQrM1IMGgPK39SrT2dYlHDJV2P/NEQrS6kLh2HoW/ArQ==
# =oj3B
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 29 Jun 2023 10:51:48 AM CEST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [undefined]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
target/i386: emulate 64-bit ring 0 for linux-user if LM feature is set
target/i386: ignore CPL0-specific features in user mode emulation
target/i386: ignore ARCH_CAPABILITIES features in user mode emulation
target/i386: Export MSR_ARCH_CAPABILITIES bits to guests
icount: don't adjust virtual time backwards after warp
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
32-bit binaries can run on a long mode processor even if the kernel
is 64-bit, of course, and this can have slightly different behavior;
for example, SYSCALL is allowed on Intel processors.
Allow reporting LM to programs running under user mode emulation,
so that "-cpu" can be used with named CPU models even for qemu-i386
and even without disabling LM by hand.
Fortunately, most of the runtime code in QEMU has to depend on HF_LMA_MASK
or on HF_CS64_MASK (which is anyway false for qemu-i386's 32-bit code
segment) rather than TARGET_X86_64; therefore, all that is needed is an
update of linux-user's ring 0 setup.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1534
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
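A rough sketch of what that ring 0 setup amounts to; this is illustrative only, and the actual linux-user code and flag handling may differ.

    /* Sketch: enter "64-bit ring 0" when the CPU model advertises LM. */
    if (env->features[FEAT_8000_0001_EDX] & CPUID_EXT2_LM) {
        env->hflags |= HF_LMA_MASK;      /* long mode active */
    #ifdef TARGET_X86_64
        env->hflags |= HF_CS64_MASK;     /* 64-bit code segment (not for qemu-i386) */
    #endif
    }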
Features such as PCID are only accessible through privileged operations,
and therefore have no impact on any user-mode operation. Allow reporting
them to programs running under user mode emulation, so that "-cpu" can be
used with more named CPU models.
XSAVES would be similar, but it doesn't make sense to provide it until
XSAVEC is implemented.
With this change, all CPUs up to Broadwell-v4 can be emulated. Skylake-Client
requires XSAVEC, while EPYC also requires SHA-NI, MISALIGNSSE and TOPOEXT.
MISALIGNSSE is not hard to implement, but I am not sure it is worth using
a precious hflags bit for it.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1534
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
ARCH_CAPABILITIES is only accessible through a read-only MSR, so it has
no impact on any user-mode operation (user-mode cannot read the MSR).
So do not bother printing warnings about it in user mode emulation.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On Intel CPUs there are certain bits in MSR_ARCH_CAPABILITIES that
indicate that the CPU is not affected by a vulnerability. Without these
bits guests may try to deploy the mitigation even if the CPU is not
affected.
Export to guests the bits that indicate immunity to hardware
vulnerabilities.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Message-ID: <63d85cc76d4cdc51e6c732478b81d8f13be11e5a.1687551881.git.pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- Re-enable the graph lock
- More fixes to coroutine_fn marking
Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging
Block layer patches
- Re-enable the graph lock
- More fixes to coroutine_fn marking
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmScQCQRHGt3b2xmQHJl
# ZGhhdC5jb20ACgkQfwmycsiPL9bNSA//WIzPT45rFhl2U9QgyOJu26ho6ahsgwgI
# Z3QM5kCDB1dAN9USRPxhGboLGo8CyY7eeSwSrR7RtwBGYrWrAoJfGp5gK/7d9s5Q
# o0AGgRPnJGhFkBhRRMytsDsewM6Kk4IRmk4HMK3cOH3rsSM8RHs6KmDSBKesllu0
# QVGf3qW4u8LHyZyGM5OlPVUbtuDuK6/52FGhpXBp+x4oyNegOhjwO4mGOvTG+xIk
# Q5zwWZaPfjxaEDkvW8iahB6/D7Tpt64BmMf1Ydhxcd5eKEp932CiBI36aAlNKoRD
# Al5wztRx1GEh12ekN39jIi7Ypp3JX26keJcieKU0q656pT551UFRYjU0Rk08/Cca
# qv2oiQDu6bHgQ9zCQ1nMfa9+K2MyBwx0b5qfYkvs2RzgCTl8ImgBQANHfw8tz6Bq
# HUo1zsFBXCaK0boUB5iFwdf3rlx3t9UTEuDej/RaHqZjZD5xeG/smCcOlSfHaKUa
# wXfYxvm8ZfefJn1D6io1A+7M956uvIQNtmh13cU44clgFX9Y/bBNMg/5lMRsJKo8
# xxjvqCAyxo/pPfUsVWx4pc8AXbfVa85gyoSiaLEYZnqP54sJ2lFccqykCsTy58Lo
# VDcoPnoSc+LNqBOvtzxXgQbEWFCXU6fe0+TZgVYUvExWFIAOImeDWg2GD1JVrwsX
# e9QrPhL3DXg=
# =ZQcP
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 28 Jun 2023 04:13:56 PM CEST
# gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg: issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (23 commits)
block: use bdrv_co_debug_event in coroutine context
block: use bdrv_co_getlength in coroutine context
qcow2: mark more functions as coroutine_fns and GRAPH_RDLOCK
vhdx: mark more functions as coroutine_fns and GRAPH_RDLOCK
vmdk: mark more functions as coroutine_fns and GRAPH_RDLOCK
dmg: mark more functions as coroutine_fns and GRAPH_RDLOCK
cloop: mark more functions as coroutine_fns and GRAPH_RDLOCK
block: mark another function as coroutine_fns and GRAPH_UNLOCKED
bochs: mark more functions as coroutine_fns and GRAPH_RDLOCK
vpc: mark more functions as coroutine_fns and GRAPH_RDLOCK
qed: mark more functions as coroutine_fns and GRAPH_RDLOCK
file-posix: remove incorrect coroutine_fn calls
Revert "graph-lock: Disable locking for now"
graph-lock: Unlock the AioContext while polling
blockjob: Fix AioContext locking in block_job_add_bdrv()
block: Fix AioContext locking in bdrv_open_backing_file()
block: Fix AioContext locking in bdrv_open_inherit()
block: Fix AioContext locking in bdrv_reopen_parse_file_or_backing()
block: Fix AioContext locking in bdrv_attach_child_common()
block: Fix AioContext locking in bdrv_open_child()
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'qemu-sparc-20230628' of https://github.com/mcayland/qemu into staging
qemu-sparc queue
# -----BEGIN PGP SIGNATURE-----
#
# iQFSBAABCgA8FiEEzGIauY6CIA2RXMnEW8LFb64PMh8FAmScHBkeHG1hcmsuY2F2
# ZS1heWxhbmRAaWxhbmRlLmNvLnVrAAoJEFvCxW+uDzIfuZ8H/3KjLLCaGcO3jnus
# P/ky3wGYx9aah/iNfRDgaaGRkPX18Eabq0BidUt/DN28yQmKgnOcbCwHlIt4QdCt
# PeO9hRNLpCop63LwyQQTrSZEdVZP75CX6dRcN+6h5TsY66/ESZjBsivuJGVHIU6O
# L8zJv2KKg0SKtJHsPGkUppmfyM4btmGTerqSJHv1SJfy4DJdzRMF83/WOZtE5srm
# YvpgZsiztBpHbG/+jLn2mX7iaQiZQCCs+weU0ynszr5WENAnuJderjO+mo0DZkqD
# j+R6LMcHHj6I4uP68eJowdTezOpoZNROh/gdUozCweA1AC/8RotkJa9UcBeEplY/
# +wV8mts=
# =ga0/
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 28 Jun 2023 01:40:09 PM CEST
# gpg: using RSA key CC621AB98E82200D915CC9C45BC2C56FAE0F321F
# gpg: issuer "mark.cave-ayland@ilande.co.uk"
# gpg: Good signature from "Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: CC62 1AB9 8E82 200D 915C C9C4 5BC2 C56F AE0F 321F
* tag 'qemu-sparc-20230628' of https://github.com/mcayland/qemu:
escc: emulate dip switch language layout settings on SUN keyboard
target/sparc: Use tcg_gen_lookup_and_goto_ptr for v9 WRASI
target/sparc: Use DYNAMIC_PC_LOOKUP for v9 RETURN
target/sparc: Use DYNAMIC_PC_LOOKUP for JMPL
target/sparc: Use DYNAMIC_PC_LOOKUP for conditional branches
target/sparc: Introduce DYNAMIC_PC_LOOKUP
target/sparc: Drop inline markers from translate.c
target/sparc: Fix npc comparison in sparc_tr_insn_start
target/sparc: Use tcg_gen_lookup_and_goto_ptr in gen_goto_tb
Revert "hw/sparc64/niagara: Use blk_name() instead of open-coding it"
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A regression was introduced in the last pull request. Fix it up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging
virtio: regression fix
A regression was introduced in the last pull request. Fix it up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmScH0QPHG1zdEByZWRo
# YXQuY29tAAoJECgfDbjSjVRpEPUH/1s424Aerch82tdps+qIhuclf9Jq47oo7Q/Y
# JVeizUsFLtE0Wwmfyna1rIbaILM//Akcq8Y0Ny+GHtYA8NdIaAQfue87uy+k8qbc
# qFXbimZEzjZp7CAC+6tUiv8UDaYF7I9giImZnHkkbPDz22ACQQCzV6nTogoc1pzg
# BkLxbWjYUdSTT8l1h/H7XwGWKsKZ9RUGxxAOpKqdK3NElmy+1I1eeUvhnLZwAc3i
# 9HUMOg2JQBhky0jjkrDHQcyopxlHNBrz7D6/sZKOyua627DgRS1BOAM9h2u2F3rq
# +6Hv258g48764Hl0SYEKCBULI+CrgtpcS/aq8sLW6Tm7Cw2k/N0=
# =y9dL
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 28 Jun 2023 01:53:40 PM CEST
# gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469
# gpg: issuer "mst@redhat.com"
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [undefined]
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* tag 'for_upstream' of https://git.kernel.org/pub/scm/virt/kvm/mst/qemu:
net/vhost-net: do not assert on null pointer return from tap_get_vhost_net()
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add MEMORY_LISTENER_PRIORITY_MIN as the symbolic value for the minimum
memory listener priority instead of the hard-coded magic value 0. Add
explicit initialization.
No functional change intended.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <29f88477fe82eb774bcfcae7f65ea21995f865f2.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Add MEMORY_LISTENER_PRIORITY_DEV_BACKEND as the symbolic value for the
memory listener priority, replacing the hard-coded value 10 for the
device backend.
No functional change intended.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <8314d91688030d7004e96958f12e2c83fb889245.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Add MEMORY_LISTENER_PRIORITY_ACCEL as the symbolic value for the memory
listener priority, replacing the hard-coded value 10 for accel.
No functional change intended.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <feebe423becc6e2aa375f59f6abce9a85bc15abb.1687279702.git.isaku.yamahata@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
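Taken together, the three changes above boil down to definitions along these lines (a sketch of the header additions; values as stated in the commit messages):

    /* Sketch: symbolic memory-listener priorities replacing magic numbers. */
    #define MEMORY_LISTENER_PRIORITY_MIN          0
    #define MEMORY_LISTENER_PRIORITY_ACCEL        10
    #define MEMORY_LISTENER_PRIORITY_DEV_BACKEND  10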
003f230e37 ("machine: Tweak the order of topology members in struct
CpuTopology") changes the meaning of MachineState.smp.cores from "the
number of cores in one package" to "the number of cores in one die"
and doesn't fix other uses of MachineState.smp.cores. And with the
introduction of clusters, smp.cores now just means "the number of
cores in one cluster". This clearly does not fit the semantics here.
And before this error message, WHvSetPartitionProperty() is called to
set prop.ProcessorCount.
So the error message should show prop.ProcessorCount rather than
"cores per cluster" or "cores per package".
Cc: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230529124331.412822-1-zhao1.liu@linux.intel.com>
[PMD: Use '%u' format for ProcessorCount]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
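The reworded message then reports the value actually passed to WHvSetPartitionProperty(), roughly as follows (a sketch of the shape, not the literal string):

    /* Sketch: report the processor count that was actually requested. */
    error_report("WHPX: Failed to set partition processor count to %u,"
                 " hr=%08lx", prop.ProcessorCount, hr);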
These fields shouldn't be accessed when KVM is not available.
Restrict the KVM timer migration state. Rename the KVM timer
post_load() handler accordingly, because cpu_post_load() is
too generic.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230626232007.8933-3-philmd@linaro.org>
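One common way to restrict such state is a migration subsection whose .needed callback checks for KVM. The following is a sketch with illustrative names; the actual subsection, its fields and the renamed post_load handler may differ.

    /* Sketch: the KVM timer subsection is skipped when KVM isn't in use. */
    static bool kvmtimer_needed(void *opaque)
    {
        return kvm_enabled();
    }

    static int kvmtimer_post_load(void *opaque, int version_id)
    {
        /* re-sync the KVM timer from the loaded fields here */
        return 0;
    }

    static const VMStateDescription vmstate_kvmtimer = {
        .name = "cpu/kvmtimer",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = kvmtimer_needed,
        .post_load = kvmtimer_post_load,
        .fields = (VMStateField[]) {
            /* KVM timer fields go here */
            VMSTATE_END_OF_LIST()
        }
    };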
The 'kvm_sw_tlb' and 'tlb_dirty' fields introduced in commit
93dd5e852c ("kvm: ppc: booke206: use MMU API") are specific
to KVM and shouldn't be accessed when it is not available.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230624192645.13680-1-philmd@linaro.org>
These fields shouldn't be accessed when KVM is not available.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230405160454.97436-8-philmd@linaro.org>