docs/devel/nested-papr.txt is entirely (apart from the initial
paragraph) a partial copy of the kernel documentation
https://docs.kernel.org/arch/powerpc/kvm-nested.html
There's no benefit to the QEMU docs in converting this to rST,
so instead delete it. Anybody needing to know the API and
protocol for the guest to communicate with the hypervisor
to create nested VMs should refer to the authoritative
documentation in the kernel docs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-id: 20240816133318.3603114-1-peter.maydell@linaro.org
The Neoverse-V1 TRM is a bit confused about the layout of the
ID_AA64ISAR1_EL1 register, and so its table 3-6 has the wrong value
for this ID register. Trust instead section 3.2.74's list of which
fields are set.
This means that we stop incorrectly reporting FEAT_XS as present, and
now report the presence of FEAT_BF16.
Cc: qemu-stable@nongnu.org
Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240917161337.3012188-1-peter.maydell@linaro.org
The code at the tail end of the loop in kvm_dirty_ring_reaper_thread()
is unreachable, because there is no way for execution to leave the
loop. Replace it with a g_assert_not_reached().
(The code has always been unreachable, right from the start
when the function was added in commit b4420f198dd8.)
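A schematic sketch of the shape of the change (the real reaper loop in accel/kvm is more involved; only the control flow matters here, and the body is elided):

    static void *kvm_dirty_ring_reaper_thread(void *data)
    {
        while (true) {
            /* wait for dirty-ring activity and reap pages; never breaks */
        }

        /*
         * The loop above never exits, so instead of dead tear-down code
         * that Coverity flags as unreachable, assert that fact.
         */
        g_assert_not_reached();
    }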
Resolves: Coverity CID 1547687
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240815131206.3231819-3-peter.maydell@linaro.org
In kvm_init_vcpu() and do_kvm_destroy_vcpu(), the return value from
kvm_ioctl(..., KVM_GET_VCPU_MMAP_SIZE, ...)
is an 'int', but we put it into a 'long' local variable mmap_size.
Coverity then complains that there might be a truncation when we copy
that value into the 'int ret' which we use for returning a value in
an error-exit codepath. This can't ever actually overflow because
the value was in an 'int' to start with, but it makes more sense
to use 'int' for mmap_size so we don't do the widen-then-narrow
sequence in the first place.
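A minimal sketch of the resulting shape (error handling and surrounding context elided; only the types matter):

    int ret;
    int mmap_size;   /* was 'long'; KVM_GET_VCPU_MMAP_SIZE returns an int */

    mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
    if (mmap_size < 0) {
        ret = mmap_size;   /* int-to-int: no narrowing for Coverity to flag */
        goto err;
    }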
Resolves: Coverity CID 1547515
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240815131206.3231819-2-peter.maydell@linaro.org
The OpenBSD 7.3 release we use is EoL. Neither the 7.4 nor the 7.5
release works on anything above Neoverse-N1, due to PAC emulation:
https://marc.info/?l=openbsd-arm&m=171050428327850&w=2
OpenBSD 7.6 is not yet released.
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20240910-b4-move-to-freebsd-v5-4-0fb66d803c93@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
'Test might timeout' means nothing. Replace it with useful information:
it is the emulation of pointer authentication that makes this test run
too long.
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20240910-b4-move-to-freebsd-v5-3-0fb66d803c93@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
FreeBSD has a longer support cycle for its stable release (14.x is EoL
in 2028) than OpenBSD (the 7.3 release we use is already EoL). Bugfixes
are also backported, so we can stay on 14.x for longer.
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20240910-b4-move-to-freebsd-v5-2-0fb66d803c93@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We want to run tests using the default CPU without having to remember
which Arm core it is.
Change the Neoverse-N1 (old default) test to use the default CPU
(Neoverse-N2 at the moment).
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20240910-b4-move-to-freebsd-v5-1-0fb66d803c93@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
SW modifying the USART_CR1 TE bit should cause HW to respond by altering
the USART_ISR TEACK bit, and likewise for the RE bit and the REACK bit.
This resolves some but not all issues necessary for the official STM USART
HAL driver to function as is.
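A hedged sketch of the behaviour being added (the struct, field and bit macro names here are illustrative, not the model's actual identifiers; TE/RE/TEACK/REACK are the hardware bits named above):

    /* On a guest write to USART_CR1, mirror TE/RE into the ISR ack bits. */
    static void usart_update_ack_bits(Stm32Usart *s)
    {
        if (s->cr1 & USART_CR1_TE) {
            s->isr |= USART_ISR_TEACK;
        } else {
            s->isr &= ~USART_ISR_TEACK;
        }
        if (s->cr1 & USART_CR1_RE) {
            s->isr |= USART_ISR_REACK;
        } else {
            s->isr &= ~USART_ISR_REACK;
        }
    }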
Fixes: 87b77e6e01 ("hw/char/stm32l4x5_usart: Enable serial read and write")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2540
Signed-off-by: Jacob Abrams <satur9nine@gmail.com>
Message-id: 20240911043255.51966-1-satur9nine@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While these functions really do return a 32-bit value,
widening the return type means that we need to do less
marshalling between TCG types.
Remove NeonGenNarrowEnvFn typedef; add NeonGenOne64OpEnvFn.
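Roughly, the typedef change looks like this (signatures reconstructed from the description above, so treat the exact argument lists as an approximation):

    /* Old: result narrowed to i32, forcing i64 -> i32 -> i64 marshalling. */
    typedef void NeonGenNarrowEnvFn(TCGv_i32, TCGv_ptr, TCGv_i64);

    /* New: keep the result in an i64 even though only 32 bits are significant. */
    typedef void NeonGenOne64OpEnvFn(TCGv_i64, TCGv_ptr, TCGv_i64);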
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240912024114.1097832-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SHL and SLI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR,
SRSRA, URSRA, SRI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There isn't a lot of commonality among the different paths of
handle_shri_with_rndacc. Split them out to separate functions,
which will be usable during the decodetree conversion.
Simplify 64-bit rounding operations to not require double-word arithmetic.
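One way to read "not require double-word arithmetic" (an illustration of the general trick, not a copy of the new helpers): fold the rounding bit in after the shift instead of first adding 1 << (sh - 1) in 128-bit precision.

    /* Rounded unsigned right shift of a 64-bit value, for 1 <= sh <= 63. */
    static inline uint64_t rshift_round64(uint64_t x, unsigned sh)
    {
        /*
         * (x + (1ull << (sh - 1))) >> sh can overflow 64 bits, so shift
         * first and then add back the bit just below the cut-off point;
         * the two forms give the same result.
         */
        return (x >> sh) + ((x >> (sh - 1)) & 1);
    }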
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We always pass the same value for round; compute it
within common code.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Combine the right shift with the extension via
the tcg extract operations.
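For example, reading a shifted 16-bit field and widening it can be a single extract instead of a shift followed by an explicit extension (destination, source and field position here are placeholders; pos + 16 must not exceed 64):

    /* Unsigned: bits [pos + 15 : pos] of src, zero-extended into dst. */
    tcg_gen_extract_i64(dst, src, pos, 16);

    /* Signed: the same field, sign-extended from its top bit. */
    tcg_gen_sextract_i64(dst, src, pos, 16);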
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SHL and SLI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes SSHR, USHR, SSRA, USRA, SRSHR, URSHR, SRSRA, URSRA, SRI.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle the two special cases within these new
functions instead of higher in the call stack.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use simple shift and add instead of ctpop, ctz, shift and mask.
Unlike SVE, there is no predicate to disable elements.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The extract2 tcg op performs the same operation
as the do_ext64 function.
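For reference, a minimal sketch of the op (operand names illustrative): extract2 reads a 64-bit field out of the 128-bit concatenation hi:lo, which is exactly the shift-across-a-register-pair that do_ext64 computed by hand.

    /* dst = low 64 bits of ((hi:lo) >> pos), with 0 <= pos <= 64. */
    tcg_gen_extract2_i64(dst, lo, hi, pos);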
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of cmp+and or cmp+andc, use cmpsel. This will
be better for hosts that use predicate registers for cmp.
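A sketch of the pattern (condition and operand names chosen for illustration only): where the old expansion built a comparison mask and then applied and/andc, cmpsel folds the select into one operation.

    /* r = (a cond b) ? c : d, elementwise; replaces cmp followed by and/andc. */
    tcg_gen_cmpsel_vec(TCG_COND_GE, vece, r, a, b, c, d);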
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of cmp+and or cmp+andc, use cmpsel. This will
be better for hosts that use predicate registers for cmp.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of copying a constant into a temporary with dupi,
use a vector constant directly.
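Schematically (the AND here is just a stand-in operation and the names are illustrative), the change replaces a dupi into a fresh temporary with a matching constant vector:

    /* Before: broadcast the immediate into a temporary first. */
    TCGv_vec t = tcg_temp_new_vec_matching(d);
    tcg_gen_dupi_vec(vece, t, imm);
    tcg_gen_and_vec(vece, d, d, t);

    /* After: use a vector constant directly, no temporary needed. */
    tcg_gen_and_vec(vece, d, d, tcg_constant_vec_matching(d, vece, imm));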
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of copying a constant into a temporary with dupi,
use a vector constant directly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240912024114.1097832-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge tag 'migration-20240917-pull-request' of https://gitlab.com/peterx/qemu into staging
Migration pull request for 9.2
- Fabiano's patch to move two tests to slow tests.
- Peter's patch to fix qatzip builds
- Stefan's multifd-zstd fix on unsigned diff comparisons
- Fea's bug fix to consistently use memattrs when mapping an address space
- Fabiano's bug fix on multifd race condition against receivedmap
# -----BEGIN PGP SIGNATURE-----
#
# iIgEABYKADAWIQS5GE3CDMRX2s990ak7X8zN86vXBgUCZuscdRIccGV0ZXJ4QHJl
# ZGhhdC5jb20ACgkQO1/MzfOr1wblcQD/amw/nw5LiL8/+QzJtjFTckouyHtCn++I
# x5yVJndm4M4BAOD+b6Pd5byAX1bH7eZ85368ivKLGIZep6qEvICQmw0G
# =wxVI
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 18 Sep 2024 19:31:17 BST
# gpg: using EDDSA key B9184DC20CC457DACF7DD1A93B5FCCCDF3ABD706
# gpg: issuer "peterx@redhat.com"
# gpg: Good signature from "Peter Xu <xzpeter@gmail.com>" [marginal]
# gpg: aka "Peter Xu <peterx@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: B918 4DC2 0CC4 57DA CF7D D1A9 3B5F CCCD F3AB D706
* tag 'migration-20240917-pull-request' of https://gitlab.com/peterx/qemu:
migration/multifd: Fix rb->receivedmap cleanup race
migration/savevm: Remove extra load cleanup calls
softmmu/physmem.c: Keep transaction attribute in address_space_map()
migration/multifd: Fix loop conditions in multifd_zstd_send_prepare and multifd_zstd_recv
migration/multifd: Fix build for qatzip
tests/qtest/migration: Move a couple of slow tests under g_test_slow
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At the moment we run all fp-test tests except for the muladd ones
with the default meson test timeout of 30s. This is plenty for
most of the test cases, but for multiplication and division we
can sometimes hit the timeout if the CI runner is going slow.
Add support to meson.build for overriding the timeout on a
per-test basis, and use it to set the timeout to 60s for
fp-test-rem, fp-test-div and fp-test-mul. We can also use this new
generic mechanism to set the timeout for mulAdd rather
than hardcoding it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240917141641.2836265-1-peter.maydell@linaro.org
Fix a segmentation fault in multifd when rb->receivedmap is cleared
too early.
After commit 5ef7e26bdb ("migration/multifd: solve zero page causing
multiple page faults"), multifd started using the rb->receivedmap
bitmap, which belongs to ram.c and is initialized and *freed* from the
ram SaveVMHandlers.
Multifd threads are live until migration_incoming_state_destroy(),
which is called after qemu_loadvm_state_cleanup(), leading to a crash
when accessing rb->receivedmap.
  process_incoming_migration_co()          ...
    qemu_loadvm_state()                    multifd_nocomp_recv()
      qemu_loadvm_state_cleanup()            ramblock_recv_bitmap_set_offset()
        rb->receivedmap = NULL                 set_bit_atomic(..., rb->receivedmap)
    ...
    migration_incoming_state_destroy()
      multifd_recv_cleanup()
        multifd_recv_terminate_threads(NULL)
Move the loadvm cleanup into migration_incoming_state_destroy(), after
multifd_recv_cleanup() to ensure multifd threads have already exited
when rb->receivedmap is cleared.
Adjust the postcopy listen thread comment to indicate that we still
want to skip the cpu synchronization.
CC: qemu-stable@nongnu.org
Fixes: 5ef7e26bdb ("migration/multifd: solve zero page causing multiple page faults")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240917185802.15619-3-farosas@suse.de
[peterx: added comment in migration_incoming_state_destroy()]
Signed-off-by: Peter Xu <peterx@redhat.com>
There are two qemu_loadvm_state_cleanup() calls that were introduced
when qemu_loadvm_state_setup() was still called before loading the
configuration section, so there was state to be cleaned up if the
header checks failed.
However, commit 9e14b84908 ("migration/savevm: load_header before
load_setup") has moved that configuration section part to
qemu_loadvm_state_header() which now happens before
qemu_loadvm_state_setup().
Remove the cleanup calls that are now misplaced.
Note that we didn't use a Fixes tag because calling cleanup() is benign
even if setup() was not invoked. So this patch is not needed for stable,
as it falls into the cleanup category.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240917185802.15619-2-farosas@suse.de
[peterx: added last paragraph of commit message]
Signed-off-by: Peter Xu <peterx@redhat.com>
Follow-up transactions may use the data in the transaction attributes,
so keep the attribute value from the function parameter, just as the
flatview_translate() call above does.
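Illustrative only (the exact call site in softmmu/physmem.c may differ): the point is that follow-up accesses inside address_space_map() should reuse the caller's MemTxAttrs rather than MEMTXATTRS_UNSPECIFIED.

    /* In address_space_map(): the translate step already uses 'attrs' ... */
    mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
    /* ... so a follow-up transaction should too, e.g.: */
    flatview_read(fv, addr, attrs /* was MEMTXATTRS_UNSPECIFIED */, buf, l);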
Signed-off-by: Fea.Wang <fea.wang@sifive.com>
Cc: qemu-stable@nongnu.org
Fixes: f26404fbee ("Make address_space_map() take a MemTxAttrs argument")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240912070404.2993976-2-fea.wang@sifive.com
Signed-off-by: Peter Xu <peterx@redhat.com>
GitHub's CodeQL reports four critical errors which are fixed by this commit:
    Unsigned difference expression compared to zero
An expression (u - v > 0) with unsigned values u and v is only false if u == v,
so none of the changed expressions worked as expected.
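A self-contained illustration of the finding (not the actual multifd-zstd code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t produced = 4, consumed = 16;

        /* Buggy: the unsigned subtraction wraps, so this is true unless equal. */
        if (produced - consumed > 0) {
            printf("wrapped difference: %u\n", produced - consumed);
        }

        /* Fixed: compare first, subtract only when it cannot wrap. */
        if (produced > consumed) {
            printf("remaining: %u\n", produced - consumed);
        }
        return 0;
    }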
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Link: https://lore.kernel.org/r/20240910054138.1458555-1-sw@weilnetz.de
[peterx: Fix mangled email for author]
Signed-off-by: Peter Xu <peterx@redhat.com>
These were passing a NULL buffer pointer unconditionally, which happens
to behave in a mostly benign way (except for the chance of an excess
memory region unref and a bounce buffer leak). Per the function comment,
this was never meant to be accepted though, and triggers an assertion
with the "softmmu: Support concurrent bounce buffers" change.
Given that the code in question never sets up any mappings, just remove
the unnecessary dma_memory_unmap calls along with the DBDMA_io struct
fields that are now entirely unused.
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
Message-Id: <20240916175708.1829059-1-mnissler@rivosinc.com>
Fixes: be1e343995 ("macio: switch over to new byte-aligned DMA helpers")
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
The qatzip series was based on an older commit; it applied cleanly even
though it had conflicts. Neither CI nor I found that the build would break,
as it's skipped by default when the qatzip library is missing.
Fix the build issues. No need to copy stable, as the feature only just landed in 9.2.
Cc: Yichen Wang <yichen.wang@bytedance.com>
Cc: Bryan Zhang <bryan.zhang@bytedance.com>
Cc: Hao Xiang <hao.xiang@linux.dev>
Cc: Yuan Liu <yuan1.liu@intel.com>
Fixes: 80484f9459 ("migration: Introduce 'qatzip' compression method")
Link: https://lore.kernel.org/r/20240910210450.3835123-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
The xbzrle and vcpu_dirty_limit tests are the two slowest in
migration-test. Move them under g_test_slow() to save about 40s per
run.
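The gating mechanism is GLib's test mode; a sketch (the test path and function below are placeholders, not the real migration-test names):

    if (g_test_slow()) {
        /* Only registered when the harness runs in slow/thorough mode. */
        qtest_add_func("/migration/example/xbzrle-slow", test_xbzrle_example);
    }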
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240911145204.17692-1-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging
Pull request
An integer overflow fix for the last zone on a zoned block device whose
capacity is not a multiple of the zone size.
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmbpa0sACgkQnKSrs4Gr
# c8hdAwgAgp6AJVXHiPo43GbhdSsKJ2bq8DIPrsqgwAxD3rgxxRVwsWzENQgzF8O9
# qoXPmU0eqPp0zTsKTxNrlIgCpsJ3X4Oeg89u4N1xUOAJtADZGlbucUQEkAgIhWMl
# IFLjtFc7EbhWn57FmQGzANeOJOB+OumfQGeC7wbeAtUCn7g08rXtq+5I5GRKqkkP
# u1FlSassd7fyVnlVc+BT2aKANBITKhJGhYqwndvxXzMIi0L54/bQRrarLoy7oJuG
# 1k8zYLi6giUINNwYMtzn5ooXNnOSoxHKKfwcFT8hGZixwBBnCnYHjNkfs/QyvZQ7
# ZuR9mY6pqp/lg5127UlpOR7d6HADLQ==
# =709Z
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 17 Sep 2024 12:43:07 BST
# gpg: using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* tag 'block-pull-request' of https://gitlab.com/stefanha/qemu:
hw/block: fix uint32 overflow
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>