Test an allocating write to a parallels image that has a backing node.
Before HEAD^, doing so used to give me a failed assertion (when the
backing node contains only `42` bytes; the result varies with the value
chosen: for `0` bytes, for example, all I get is EIO).
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20220714132801.72464-3-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Commit a4072543cc has changed the I/O here
from working on a local one-element I/O vector to just using the buffer
directly (using the bdrv_co_pread()/bdrv_co_pwrite() helper functions
introduced shortly before).
However, it only converted the bdrv_co_preadv() call to bdrv_co_pread();
the subsequent bdrv_co_pwritev() call was left untouched, and so is now
passed a plain buffer where it expects a QEMUIOVector pointer. We must
change it to a bdrv_co_pwrite() call.
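For illustration, roughly the shape of the change (the buffer and offset
variable names here are placeholders, not necessarily the ones used in
block/parallels.c):

    /* Before: a plain buffer is passed where a QEMUIOVector * is expected */
    ret = bdrv_co_pwritev(bs->file, position, nb_bytes, buf, 0);
    /* After: the buffer-based helper takes the buffer directly */
    ret = bdrv_co_pwrite(bs->file, position, nb_bytes, buf, 0);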
Fixes: a4072543cc ("block/parallels: use buffer-based io")
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20220714132801.72464-2-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
This patch fixes the dedicated framebuffer mailbox interface by
removing an unneeded offset. This means that we pick the framebuffer
address in the same way that we do if the guest code uses the buffer
allocate mechanism of the bcm2835_property interface (case
0x00040001: /* Allocate buffer */ in bcm2835_property.c).
The documentation of this mailbox interface doesn't say anything
about using parts of the request buffer address to affect the
chosen framebuffer address:
https://github.com/raspberrypi/firmware/wiki/Mailbox-framebuffer-interface
Some bare-metal applications like the Screen01/Screen02 examples from
the Baking Pi tutorial[1] didn't work before this patch.
[1] https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html
Signed-off-by: Alan Jian <alanjian85@outlook.com>
Message-id: 20220725145838.8412-1-alanjian85@outlook.com
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Using '==' with test is a bashism; the standard way to compare
strings is '='. This causes dash to complain:
../../configure: 681: test: linux: unexpected operator
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20220720152631.450903-6-peter.maydell@linaro.org
In commit 823eb01345 we moved the setting of ARCH from configure
to meson.build, but we accidentally left behind one attempt to use
$ARCH in configure, which was trying to add -msmall-data to the
compiler flags on Alpha hosts. Since ARCH is now never set, the test
always fails and we never add the flag.
There isn't actually any need to use this compiler flag on Alpha:
the original intent was that it would allow us to simplify our TCG
codegen on that platform, but we never actually made the TCG changes
that would rely on -msmall-data.
Drop the effectively-dead code from configure, as we don't need it.
This was spotted by shellcheck:
In ./configure line 2254:
case "$ARCH" in
^---^ SC2153: Possible misspelling: ARCH may not be assigned, but arch is.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20220720152631.450903-5-peter.maydell@linaro.org
The variable string-replacement syntax ${var/old/new} is a bashism
(though it is also supported by some other shells), and for instance
does not work with the NetBSD /bin/sh, which complains:
../src/configure: 687: Syntax error: Bad substitution
Replace it with a more portable sed-based approach, similar to
what we already do in quote_sh().
Note that shellcheck also diagnoses this:
In ./configure line 687:
e=${e/'\'/'\\'}
^-----------^ SC2039: In POSIX sh, string replacement is undefined.
^-- SC1003: Want to escape a single quote? echo 'This is how it'\''s done'.
^-- SC1003: Want to escape a single quote? echo 'This is how it'\''s done'.
In ./configure line 688:
e=${e/\"/'\"'}
^----------^ SC2039: In POSIX sh, string replacement is undefined.
Fixes: 8154f5e64b ("meson: Prefix each element of firmware path")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20220720152631.450903-4-peter.maydell@linaro.org
In shell script syntax, $var[something] is not special for variable
expansion: $var is expanded. However, as it can look as if it were
intended to be an array element access (the correct syntax for which
is ${var[something]}), shellcheck recommends using explicit braces
around ${var} to clarify the intended expansion.
This fixes the warning:
In ./configure line 2346:
if "$target_ld" -verbose 2>&1 | grep -q "^[[:space:]]*$emu[[:space:]]*$"; then
^-- SC1087: Use braces when expanding arrays, e.g. ${array[idx]} (or ${var}[.. to quiet).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20220720152631.450903-3-peter.maydell@linaro.org
In commit 7d7dbf9dc1 we added a line to the configure script
which is not valid POSIX shell syntax, because it is missing a space
after a '!' character. shellcheck diagnoses this:
if !(GIT="$git" "$source_path/scripts/git-submodule.sh" "$git_submodules_action" "$git_submodules"); then
^-- SC1035: You are missing a required space after the !.
and the OpenBSD shell will not correctly handle this without the space.
Fixes: 7d7dbf9dc1 ("configure: replace --enable/disable-git-update with --with-git-submodules")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 20220720152631.450903-2-peter.maydell@linaro.org
In commit 7390e0e9ab, we added support for SME loads and stores.
Unlike SVE loads and stores, these include handling of 128-bit
elements. The SME load/store functions call down into the existing
sve_cont_ldst_elements() function, which uses the element size MO_*
value as an index into the pred_esz_masks[] array. Because this code
path now has to handle MO_128, we need to add an extra element to the
array.
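A sketch of the shape of the fix; the MO_128 entry shown simply continues
the every-Nth-bit pattern of the existing entries and is illustrative
rather than copied from the tree:

    /* One predicate mask per element size, indexed by the MO_* value */
    const uint64_t pred_esz_masks[5] = {
        0xffffffffffffffffull, 0x5555555555555555ull,   /* MO_8,  MO_16 */
        0x1111111111111111ull, 0x0101010101010101ull,   /* MO_32, MO_64 */
        0x0001000100010001ull,                          /* MO_128 */
    };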
This bug was spotted by Coverity because it meant we were reading off
the end of the array.
Resolves: Coverity CID 1490539, 1490541, 1490543, 1490544, 1490545,
1490546, 1490548, 1490549, 1490550, 1490551, 1490555, 1490557,
1490558, 1490560, 1490561, 1490563
Fixes: 7390e0e9ab ("target/arm: Implement SME LD1, ST1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220718100144.3248052-1-peter.maydell@linaro.org
Update the regex for the slirp component now that it lives
solely inside /slirp/, and note that it should be ignored in
Coverity analysis (because it's a separate upstream project
now, and they run Coverity on it themselves).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220718142310.16013-3-peter.maydell@linaro.org
Add the component regex for the new loongarch target.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20220718142310.16013-2-peter.maydell@linaro.org
Version: GnuPG v1
iQEcBAABAgAGBQJi36ocAAoJEO8Ells5jWIRuOYH/jtaDNGTBs/h8A041gQaCMmw
jufUXHCdKGgmZMpJ/AoCUWx4USdx8hEGSt/j4kvSmIPX+VLuCfLefHDlTxndiWAv
fnUr4NB7LAz2b5D3d5QX1Np+zHG5mHx95KfDIaWdcz9N1HUHlEOakxTDc2EvR1hF
yh8g2n5xdvzK5kWvPcNgJpU/ezDumOFo04JndBb4fIqDmZfW3hvJQ3IKiS3P1J9C
Kbb/usoXGrdoZ9T1R2cqtn1CxrgfMlF2pKJFWzs3nU+ewD9C6oKS4rDQCZxx+JEx
ZvfnSTUPgBBlT4zqZTTjyFQMQdtis5qK5iAKDEENkqVC1iULPhnM9DN0qxcIoQs=
=SpWG
-----END PGP SIGNATURE-----
Merge tag 'net-pull-request' of https://github.com/jasowang/qemu into staging
# gpg: Signature made Tue 26 Jul 2022 09:47:24 BST
# gpg: using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* tag 'net-pull-request' of https://github.com/jasowang/qemu:
vdpa: Fix memory listener deletions of iova tree
vhost: Get vring base from vq, not svq
e1000e: Fix possible interrupt loss when using MSI
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
vhost_vdpa_listener_region_del is always deleting the first iova entry
of the tree, since it's using the needle iova instead of the result's
one.
This was detected using a VGA virtual device in the VM using vdpa SVQ.
That device performs some extra memory map and unmap operations, so the
wrong entry was mapped / unmapped. This went undetected before since,
without that device, all the memory was mapped and unmapped as a whole,
but other conditions could trigger it too (a sketch of the fix follows
the list):
* mem_region was set up with .iova = 0, .translated_addr = (correct GPA).
* iova_tree_find_iova returned the right result, but did not update
  mem_region.
* iova_tree_remove always removed the region with .iova = 0 (the right
  iova had been sent to the device).
* The next map would fill the first region with .iova = 0, creating a
  mapping with an already-used iova; the device complains if the next
  action is a map.
* The next unmap would try to unmap iova = 0 again, and the device
  complains that no region is mapped at iova = 0.
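A rough sketch of the idea behind the fix (function names are the ones
quoted above; the exact variables and signatures used in
vhost_vdpa_listener_region_del() may differ):

    /* Look up by translated address, then remove the entry that was
     * actually found, not the zero-initialized needle (.iova == 0). */
    const DMAMap *result = iova_tree_find_iova(tree, &mem_region);
    /* ... derive the host IOVA range from result->iova ... */
    iova_tree_remove(tree, result);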
Fixes: 34e3c94eda ("vdpa: Add custom IOTLB translations to SVQ")
Reported-by: Lei Yang <leiyang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
The SVQ vring's used idx usually matches the guest-visible one, as long
as every guest buffer (GPA) maps to exactly one buffer within QEMU's VA.
However, as we can see in virtqueue_map_desc, a single guest buffer
could map to many buffers in the SVQ vring.
It is also a mistake to rewind them at the migration source. Since
VirtQueue is able to migrate the inflight descriptors, it is the
destination's responsibility to perform the rewind in case it cannot
report the inflight descriptors to the device.
This makes it easier to migrate between backends or to recover them in
vhost devices that support setting inflight descriptors.
Fixes: 6d0b222666 ("vdpa: Adapt vhost_vdpa_get_vring_base to SVQ")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Commit "e1000e: Prevent MSI/MSI-X storms" introduced msi_causes_pending
to prevent interrupt storms problem. It was tested with MSI-X.
In case of MSI, the guest can rely solely on interrupts to clear ICR.
Upon clearing all pending interrupts, msi_causes_pending gets cleared.
However, when e1000e_itr_should_postpone() in e1000e_send_msi() returns
true, MSI never gets fired by e1000e_intrmgr_on_throttling_timer()
because msi_causes_pending is still set. This results in interrupt loss.
To prevent this, we need to clear msi_causes_pending when MSI is going
to get fired by the throttling timer. The guest can then receive
interrupts eventually.
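Conceptually, the fix boils down to something like this in the throttling
timer path (a sketch only; the surrounding function and the way the core
state is reached are simplified):

    /* The postponed MSI is about to be sent from the throttling timer;
     * drop the "cause already pending" marker so the send isn't skipped. */
    core->msi_causes_pending = false;
    e1000e_send_msi(core, false /* not MSI-X */);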
Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Signed-off-by: Jason Wang <jasowang@redhat.com>
* Pass random seed to x86 and other FDT platforms
-----BEGIN PGP SIGNATURE-----
iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmLa3dUUHHBib256aW5p
QHJlZGhhdC5jb20ACgkQv/vSX3jHroObpwf/ceqT05kDypSbPmSZPspzfGZimoQL
9egbI27siFGUYhmZ/odiv5YU82Y44FHaElsmGsKZQAFvJ4JhROR8ZrDIejI/mWhk
9yCTW5y+DlFHwZbeAfqMQeK1sfI4TvZ70SnBtpFKsA0bkHmYNAtPJZOSL8SEtZJS
HA0+jOQdk1+ddjQjgy1AOg5R51nHQGELNz29aF2Z3elKN8ZM9BGY2TQzJ+SMfyRW
+iU2r5teqRzHDK005WFZgaH5OtG5f2t/fgRycG9WDQYiYmna9wZQICyCiwEEgFu+
G7lqtPR0YRuVgFwqhhHW7i0wg0GvpEjCRyzc3Gets2j4FjYKn66xy2EPSA==
=OYEp
-----END PGP SIGNATURE-----
Merge tag 'for-upstream2' of https://gitlab.com/bonzini/qemu into staging
* Bug fixes
* Pass random seed to x86 and other FDT platforms
# gpg: Signature made Fri 22 Jul 2022 18:26:45 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream2' of https://gitlab.com/bonzini/qemu:
hw/i386: pass RNG seed via setup_data entry
hw/rx: pass random seed to fdt
hw/mips: boston: pass random seed to fdt
hw/nios2: virt: pass random seed to fdt
oss-fuzz: ensure base_copy is a generic-fuzzer
oss-fuzz: remove binaries from qemu-bundle tree
accel/kvm: Avoid Coverity warning in query_stats()
docs: Add caveats for Windows as the build platform
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When writing back the fd[1] pipe file handle to emulated userspace
memory, use sizeof(abi_int) as the offset instead of the host's int type.
There is no functional change in this patch.
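Roughly the shape of the write-back, assuming the usual put_user_s32()
helper and a guest pointer 'pipedes' (illustrative, not a verbatim
excerpt of the patch):

    /* Store both fds at guest-ABI offsets, not at host 'int'-sized offsets */
    if (put_user_s32(host_pipe[0], pipedes)
        || put_user_s32(host_pipe[1], pipedes + sizeof(abi_int))) {
        return -TARGET_EFAULT;
    }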
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <YtQ3Id6z8slpVr7r@p100>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The pipe2() syscall is available on all Linux platforms since kernel
2.6.27, so use it unconditionally to emulate pipe() and pipe2().
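A minimal sketch of the idea (flag translation and error conversion are
omitted; the helper name is illustrative):

    #define _GNU_SOURCE
    #include <unistd.h>

    /* pipe() is just pipe2() with flags == 0, so one host call covers both */
    static int host_pipe2(int fds[2], int flags)
    {
        return pipe2(fds, flags);   /* available since Linux 2.6.27 */
    }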
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <YtbZ2ojisTnzxN9Y@p100>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This program:
int main(void) { asm("bv %r0(%r0)"); return 0; }
produces on real hppa hardware the expected segfault:
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x3} ---
+++ killed by SIGSEGV +++
Segmentation fault
But when run on linux-user you get instead internal qemu errors:
ERROR: linux-user/hppa/cpu_loop.c:172:cpu_loop: code should not be reached
Bail out! ERROR: linux-user/hppa/cpu_loop.c:172:cpu_loop: code should not be reached
ERROR: accel/tcg/cpu-exec.c:933:cpu_exec: assertion failed: (cpu == current_cpu)
Bail out! ERROR: accel/tcg/cpu-exec.c:933:cpu_exec: assertion failed: (cpu == current_cpu)
Fix it by adding the missing case for the EXCP_IMP trap to cpu_loop()
and raising a segfault.
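A sketch of the kind of case that was missing in cpu_loop() (the exact
signal-raising helper and fault address used by the real patch may
differ):

    case EXCP_IMP:
        /* Instruction memory protection trap: deliver SIGSEGV to the guest */
        force_sig_fault(TARGET_SIGSEGV, TARGET_SEGV_MAPERR, env->iaoq_f);
        break;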
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <YtWNC56seiV6VenA@p100>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Tiny machines optimized for fast boot time generally don't use EFI,
which means a random seed has to be supplied some other way. For this
purpose, Linux (≥5.20) supports passing a seed in the setup_data table
with SETUP_RNG_SEED, specially intended for hypervisors, kexec, and
specialized bootloaders. The linked commit shows the upstream kernel
implementation.
At Paolo's request, we don't pass these to versioned machine types ≤7.0.
Link: https://git.kernel.org/tip/tip/c/68b8e9713c8
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <eduardo@habkost.net>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220721125636.446842-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This FDT node is part of the DT specification.
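The usual pattern looks roughly like this (the 32-byte seed size is an
illustrative choice, not taken from this patch):

    uint8_t rng_seed[32];

    /* Fill the buffer from the guest RNG source and publish it in the DT */
    qemu_guest_getrandom_nofail(rng_seed, sizeof(rng_seed));
    qemu_fdt_setprop(fdt, "/chosen", "rng-seed", rng_seed, sizeof(rng_seed));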
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220719122033.135902-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This FDT node is part of the DT specification.
I'd do the same for other MIPS platforms but boston is the only one that
seems to use FDT.
Cc: Paul Burton <paulburton@kernel.org>
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220719120843.134392-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This FDT node is part of the DT specification.
Cc: Chris Wulff <crwulff@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220719120113.118034-1-Jason@zx2c4.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Depending on how the target list is sorted by qemu, the first target
(used as the base copy of the fuzzer, to which all others are linked)
might not be a generic-fuzzer. Since we are trying to use only
generic-fuzz on oss-fuzz, fix that to ensure the base copy is a
generic-fuzzer.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Message-Id: <20220720180946.2264253-1-alxndr@bu.edu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
oss-fuzz is finding possible fuzzing targets even under qemu-bundle/.../bin, but they
cannot be used because the required shared libraries are missing. Since the
fuzzing targets are already placed manually in $OUT, the bindir and libexecdir
subtrees are not needed; remove them.
Cc: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Coverity complains that there is a codepath in the query_stats()
function where it can leak the memory pointed to by stats_list. This
can only happen if the caller passes something other than
STATS_TARGET_VM or STATS_TARGET_VCPU as the 'target', which no
callsite does. Enforce this assumption using g_assert_not_reached(),
so that if we have a future bug we hit the assert rather than
silently leaking memory.
Resolves: Coverity CID 1490140
Fixes: cc01a3f4ca ("kvm: Support for querying fd-based stats")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220719134853.327059-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit cf60ccc330 ("cutils: Introduce bundle mechanism") introduced
a Python script to populate a bundle directory using os.symlink() to
point to the binaries in the pc-bios directory of the source tree.
Commit 882084a04a ("datadir: Use bundle mechanism") removed previous
logic in pc-bios/meson.build to create a link/copy of pc-bios binaries
in the build tree so os.symlink() is the way to go.
However, os.symlink() may fail [1] on Windows if an unprivileged Windows
user started the QEMU build process, which results in QEMU executables
generated in the build tree being unable to load the default BIOS/firmware
images because the symbolic links are not present in the bundle directory.
This commit updates the documentation by adding such caveats for users
who want to build QEMU on the Windows platform.
[1] https://docs.python.org/3/library/os.html#os.symlink
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220719135014.764981-1-bmeng.cn@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This replaces yesterday's pull and:
a) Fixes some test build errors without TLS
b) Re-enables the zlib acceleration on s390
now that we have Ilya's fix
Hyman's dirty page rate limit set
Ilya's fix for zlib vs migration
Peter's postcopy-preempt
Cleanup from Dan
zero-copy tidy ups from Leo
multifd doc fix from Juan
Revert disable of zlib acceleration on s390x
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEERfXHG0oMt/uXep+pBRYzHrxb/ecFAmLX5KAACgkQBRYzHrxb
/ef8KRAAgUYg+vEPYHXFz48vPBQr+bxcf0G8uzKT4S/afY7xdA1W0nsNR97cTtcW
ofZQAMjDaxMaCR6Xteh4TUiP346AZv7ZeAKbzDMekrqnLJw8x7hcJRTfgHJVFBkd
GGYOx7mYf0BEZJfykRDG1EbcwMGlGKK+WGV/tQ1/zM6/QOUXJow+Mn+JoSwl9fLH
QKvQ5uOxrI9gN0bvWyx94zeoMNQmOrO8jq/48dbDZZebwg4fAmoBEAU3dYJ9obaW
RE9DLywypWJ5ctYbyrwl+gSjo8DRxBUmbqDE8XxoonJQEZ712d66erGT3w4PLyKP
8RqxE3dduM0IfbvlvoOFyqtxTUN6hzb5cBVSuT5ukKyNWCjvxwXqqXgUkzxmc6JD
Mgh3WnM1EZwdInG0zzScVN2WwMYhKoW0gb//35Dy/Z6HaWww6SPm21hIqzoINqzI
FKW41Gp2pdFFl5HAx03IxhZ9aRJKdtKqexvlD5IDPBrBom2QQJ7Bex1najgpK9Y5
jqYQrAFn72U3Dxm0cRjfoSc6aI6kXu44RO3CyTvl65B6bZY+bZvj4fx+4IVrzm8Q
PAsLDp+qbY2YKgtFT21csKQe2rux7QuafsREd3oBXOaUHgNv5xQ5nLIL5LhcSGX5
B9l82uU8ftuD5sMJn5uYQv1/0n5empXTsgl5GXZ3Wfni33v89yA=
=wiSA
-----END PGP SIGNATURE-----
Merge tag 'pull-migration-20220720c' of https://gitlab.com/dagrh/qemu into staging
Migration pull 2022-07-20
This replaces yesterday's pull and:
a) Fixes some test build errors without TLS
b) Re-enables the zlib acceleration on s390
now that we have Ilya's fix
Hyman's dirty page rate limit set
Ilya's fix for zlib vs migration
Peter's postcopy-preempt
Cleanup from Dan
zero-copy tidy ups from Leo
multifd doc fix from Juan
Revert disable of zlib acceleration on s390x
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
# gpg: Signature made Wed 20 Jul 2022 12:18:56 BST
# gpg: using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* tag 'pull-migration-20220720c' of https://gitlab.com/dagrh/qemu: (30 commits)
Revert "gitlab: disable accelerated zlib for s390x"
migration: Avoid false-positive on non-supported scenarios for zero-copy-send
multifd: Document the locking of MultiFD{Send/Recv}Params
migration/multifd: Report to user when zerocopy not working
Add dirty-sync-missed-zero-copy migration stat
QIOChannelSocket: Fix zero-copy flush returning code 1 when nothing sent
migration: remove unreachable code after reading data
tests: Add postcopy preempt tests
tests: Add postcopy tls recovery migration test
tests: Add postcopy tls migration test
tests: Move MigrateCommon upper
migration: Respect postcopy request order in preemption mode
migration: Enable TLS for preempt channel
migration: Export tls-[creds|hostname|authz] params to cmdline too
migration: Add helpers to detect TLS capability
migration: Add property x-postcopy-preempt-break-huge
migration: Create the postcopy preempt channel asynchronously
migration: Postcopy recover with preempt enabled
migration: Postcopy preemption enablement
migration: Postcopy preemption preparation on channel creation
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Version: GnuPG v1
iQEcBAABAgAGBQJi18PHAAoJEO8Ells5jWIRCEQH+wepXDoT6Q56xmUgxVs+hlAD
CXGy71/cNV08Yu3PTTXo8SYaw+KXxsA9ECgIr2hsfPXarAdoOpJFpZR0HoqIzaXd
kpD6bvwN8bEEOlAHxKcb6/VM+VYntZBfkH9m1WLGx3fHILazLblyL8w2Hkp7NK9J
IBpQQ63uU8Xt0+js96Z/sPOKRjrtbKXFT1bhY2CI8MKZpuqNyED0jZYwbNdnRwZN
fuKbpsaaT4Wxx+mQMg7H7a0e/xx3DNi2F6cAtGLH98WYzbLFgExSSK8G8jnwEVfM
EKWfU7N4zmokq7jN99yvGzjIzLrnLX6yn/ifSs+lQOzdtCA9zEbotI+CDCVdPs4=
=9zus
-----END PGP SIGNATURE-----
Merge tag 'net-pull-request' of https://github.com/jasowang/qemu into staging
# gpg: Signature made Wed 20 Jul 2022 09:58:47 BST
# gpg: using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* tag 'net-pull-request' of https://github.com/jasowang/qemu: (25 commits)
net/colo.c: fix segmentation fault when packet is not parsed correctly
net/colo.c: No need to track conn_list for filter-rewriter
net/colo: Fix a "double free" crash to clear the conn_list
softmmu/runstate.c: add RunStateTransition support form COLO to PRELAUNCH
vdpa: Add x-svq to NetdevVhostVDPAOptions
vdpa: Add device migration blocker
vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs
vdpa: Buffer CVQ support on shadow virtqueue
vdpa: manual forward CVQ buffers
vhost-net-vdpa: add stubs for when no virtio-net device is present
vdpa: Export vhost_vdpa_dma_map and unmap calls
vhost: Add svq avail_handler callback
vhost: add vhost_svq_poll
vhost: Expose vhost_svq_add
vhost: add vhost_svq_push_elem
vhost: Track number of descs in SVQDescState
vhost: Add SVQDescState
vhost: Decouple vhost_svq_add from VirtQueueElement
vhost: Check for queue full at vhost_svq_add
vhost: Move vhost_svq_kick call to vhost_svq_add
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 309df6acb2.
With Ilya's 'multifd: Copy pages before compressing them with zlib'
in the latest migration series, this shouldn't be a problem any more.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Migration with zero-copy-send currently has its limitations, as it can't
be used with TLS nor any kind of compression. In such scenarios, it should
output errors during parameter / capability setting.
But currently there are some ways of setting these non-supported scenarios
without printing the error message:
1) For the 'compression' capability, it works by enabling it together with
zero-copy-send. This happens because the validity test for zero-copy uses
the helper function migrate_use_compression(), which checks for the
presence of compression in
s->enabled_capabilities[MIGRATION_CAPABILITY_COMPRESS].
The point here is: the validity test happens before the capability gets
enabled. If all of them get enabled together, this test will not return
an error.
In order to fix that, replace migrate_use_compression() by directly testing
the cap_list parameter of migrate_caps_check().
2) For features enabled by parameters such as TLS & 'multifd_compression',
there was also a possibility of setting non-supported scenarios: setting
zero-copy-send first, then setting the unsupported parameter.
In order to fix that, also add a check for parameters conflicting with
zero-copy-send in migrate_params_check().
3) XBZRLE is also a compression capability, so it makes sense to also add
it to the list of capabilities which are not supported with zero-copy-send.
Fixes: 1abaec9a1b ("migration: Change zero_copy_send from migration parameter to migration capability")
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Message-Id: <20220719122345.253713-1-leobras@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reorder the structures so we can tell whether each field is:
- Read only
- Protected by its own locking (i.e. sems)
- Protected by 'mutex'
- Only for the multifd channel
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20220531104318.7494-2-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
dgilbert: Typo fixes from Chen Zhang
Some errors, like the lack of Scatter-Gather support by the network
interface (NETIF_F_SG), may cause sendmsg(...,MSG_ZEROCOPY) to fail to use
zero-copy, which causes it to fall back to the default copying mechanism.
After each full dirty-bitmap scan there should be a zero-copy flush
happening, which checks for errors in each of the previous calls to
sendmsg(...,MSG_ZEROCOPY). If all of them failed to use zero-copy, then
increment the dirty_sync_missed_zero_copy migration stat to let the user
know about it.
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220711211112.18951-4-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220711211112.18951-3-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
If flush is called when no buffer was sent with MSG_ZEROCOPY, it currently
returns 1. This return code should be used only when Linux fails to use
MSG_ZEROCOPY on a lot of sendmsg() calls.
Fix this by returning early from flush if no sendmsg(...,MSG_ZEROCOPY)
was attempted.
Fixes: 2bc58ffc29 ("QIOChannelSocket: Implement io_writev zero copy flag & io_flush for CONFIG_LINUX")
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220711211112.18951-2-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The code calls qio_channel_read() in a loop when it reports
QIO_CHANNEL_ERR_BLOCK. This code is reported when errno==EAGAIN.
As such the later block of code will always hit the 'errno != EAGAIN'
condition, making the final 'else' unreachable.
Fixes: Coverity CID 1490203
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220627135318.156121-1-berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Four tests are added for preempt mode:
- Postcopy plain
- Postcopy recovery
- Postcopy tls
- Postcopy tls+recovery
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185530.27801-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
dgilbert: Manual merge
It's easy to build this upon the postcopy tls test. Rename the old
postcopy recovery test to postcopy/recovery/plain.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185527.27747-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
dgilbert: Manual merge
We just added TLS tests for precopy but not postcopy. Add the
corresponding test for vanilla postcopy.
Rename the vanilla postcopy test to "postcopy/plain" because all postcopy
tests will only use unix sockets as the channel.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185525.27692-1-peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
dgilbert: Manual merge
So that it can soon be used in postcopy tests too.
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185522.27638-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
With preemption mode on, when we see a postcopy request for exactly the
page that we preempted before (so we've already partially sent the page
via the precopy channel and it got preempted by another postcopy request),
we currently drop the request, so that only after all the other postcopy
requests are serviced do we go back to the precopy stream and handle it.
We dropped the request because we can't send it via the postcopy channel:
the precopy channel already contains part of the data, and we can only
send a huge page via one channel as a whole. We can't split a huge page
into two channels.
That's a real corner case and it works, but it changes the order in which
postcopy requests are handled, since we postpone this (unlucky) postcopy
request until after the other queued postcopy requests. The problem is
that when the guest is very busy, the postcopy queue can stay non-empty,
which means this dropped request may never be handled until the end of
the postcopy migration. So there's a chance that one dest QEMU vcpu
thread waits on a page fault for an extremely long time just because it
unluckily accessed the specific page that was preempted before.
The worst-case time it needs can be as long as the whole postcopy
migration procedure. It's extremely unlikely to happen, but when it
happens it's not good.
The root cause of this problem is that we treat the
pss->postcopy_requested variable as carrying two meanings bound together:
1. Whether this page request is urgent, and,
2. Which channel we should use for this page request.
With the old code, when we set postcopy_requested it means either both (1)
and (2) are true, or both (1) and (2) are false. We can never have (1)
and (2) take different values.
However, it doesn't have to be like that. It's perfectly legal to have
one request that (1) is very urgent, but for which (2) we'd like to use
the precopy channel. Just like the corner case we were discussing above.
To differentiate the two meanings better, introduce a new field called
postcopy_target_channel, showing which channel we should use for this page
request, so as to cover the old meaning (2) only. Then we leave the
postcopy_requested variable to stand only for meaning (1), which is the
urgency of this page request.
With this change, we can easily boost the priority of a preempted precopy
page as long as we know that the page is also requested as a postcopy
page. So, with the new approach, in get_queued_page() we no longer drop
that request: we send it right away via the precopy channel, so we get
back the ordering of the page faults just as they were requested on the
destination.
Reported-by: Manish Mishra <manish.mishra@nutanix.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185520.27583-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This patch is based on the async preempt channel creation. It continues
by wiring up the new channel with a TLS handshake to the destination when
enabled.
Note that only the src QEMU needs such operation; the dest QEMU does not
need any change for TLS support due to the fact that all channels are
established synchronously there, so all the TLS magic is already properly
handled by migration_tls_channel_process_incoming().
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185518.27529-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
It's useful to be able to specify the tls credentials all on the cmdline
(along with the -object tls-creds-*), especially for debugging purposes.
The trick here is that we must remember not to free these fields again in
the finalize() function of the migration object, otherwise it'll cause a
double-free.
The thing is, when destroying an object, we first destroy the properties
bound to the object, then the object itself. To be explicit, when
destroying the object in object_finalize() we have this sequence of
operations:
object_property_del_all(obj);
object_deinit(obj, ti);
So after this change the two fields are already properly released in
object_property_del_all(), even before the finalize() function is
reached; hence we must not free them again in finalize(), or we get a
double free.
This also fixes a trivial memory leak for tls-authz as we forgot to free it
before this patch.
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185515.27475-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Add migrate_channel_requires_tls() to detect whether the specific channel
requires TLS, leveraging the recently introduced migrate_use_tls(). No
functional change intended.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185513.27421-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Add a property field that can conditionally disable the "break sending huge
page" behavior in postcopy preemption. By default it's enabled.
It should only be used for debugging purposes, and we should never remove
the "x-" prefix.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185511.27366-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This patch allows the postcopy preempt channel to be created
asynchronously. The benefit is that when the connection is slow, we won't
hold the BQL for a long time without releasing it (and potentially block
things like QMP).
A function postcopy_preempt_wait_channel() is introduced, allowing the
migration thread to be able to wait on the channel creation. The channel
is always created by the main thread, in which we'll kick a new semaphore
to tell the migration thread that the channel has been created.
We'll need to wait for the new channel in two places: (1) when there's a
new postcopy migration that is starting, or (2) when there's a postcopy
migration to resume.
For the start of migration, we don't need to wait for this channel until
we want to start postcopy, i.e. in postcopy_start(). We'll fail the
migration if we find that the channel creation failed (which should
probably not happen at all in 99% of cases, because the main channel
uses the same network topology).
For a postcopy recovery, we'll need to wait in postcopy_pause(). In that
case, if the channel creation failed, we can't fail the migration or we'd
crash the VM; instead we stay in the PAUSED state, waiting for yet
another recovery.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185509.27311-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>