Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Mon 02 Aug 2021 05:23:19 BST
# gpg: using RSA key EF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>" [marginal]
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
hw/net: e1000e: Don't zero out the VLAN tag in the legacy RX descriptor
hw/net: e1000e: Correct the initial value of VET register
hw/net: e1000: Correct the initial value of VET register
hw/net/can: sja1000 fix buff2frame_bas and buff2frame_pel when dlc is out of std CAN 8 bytes
hw/net/vmxnet3: Do not abort QEMU if guest specified bad queue numbers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the legacy RX descriptor mode, the VLAN tag was saved to d->special
by e1000e_build_rx_metadata() in e1000e_write_lgcy_rx_descr(), but
it was then zeroed out again at the end of the call, which is wrong.
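For illustration, a minimal self-contained sketch of the bug (hypothetical
struct, not the real e1000e descriptor layout or function signature):

    #include <stdint.h>

    struct lgcy_rx_desc { uint16_t special; };

    static void write_lgcy_rx_descr(struct lgcy_rx_desc *d, uint16_t vlan_tag)
    {
        d->special = vlan_tag;  /* metadata step stores the VLAN tag here   */
        /* the buggy code then did "d->special = 0;", wiping the tag again; */
        /* the fix simply drops that final assignment                       */
    }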
Fixes: c89d416a2b ("e1000e: Don't zero out buffer address in rx descriptor")
Reported-by: Markus Carlstedt <markus.carlstedt@windriver.com>
Signed-off-by: Christina Wang <christina.wang@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
The initial value of the VLAN Ether Type (VET) register is 0x8100, as per
the manual and real hardware.
While the Linux e1000e driver always writes the VET register to 0x8100, not
every driver does. Drivers relying on the reset value of VET won't be able
to transmit or receive VLAN frames in QEMU.
Unlike e1000 in QEMU, e1000e uses a field 'vet' in "struct E1000Core"
to cache the value of the VET register, but the cache only gets updated
when the VET register is written. To always get a consistent VET value,
whether VET has been written or still holds its reset value, drop the
'vet' field and use 'core->mac[VET]' directly.
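As a rough sketch of the idea (the register index and the structure are
simplified, not the real E1000ECore layout), reading the register array
directly avoids any stale cached copy:

    #include <stdint.h>

    #define VET_IDX 14                       /* hypothetical register index */

    struct core_sketch { uint32_t mac[256]; };

    static uint16_t current_vet(const struct core_sketch *core)
    {
        return core->mac[VET_IDX] & 0xffff;  /* no separate 'vet' cache     */
    }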
Reported-by: Markus Carlstedt <markus.carlstedt@windriver.com>
Signed-off-by: Christina Wang <christina.wang@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
The initial value of the VLAN Ether Type (VET) register is 0x8100, as per
the manual and real hardware.
While the Linux e1000 driver always writes the VET register to 0x8100, not
every driver does. Drivers relying on the reset value of VET won't be able
to transmit or receive VLAN frames in QEMU.
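A minimal sketch of the reset-value change (simplified, not the actual QEMU
reset code), assuming a flat register array:

    #include <stdint.h>

    #define ETH_P_8021Q 0x8100u                 /* 802.1Q VLAN Ether Type   */

    static void reset_vet(uint32_t *mac_regs, unsigned vet_index)
    {
        mac_regs[vet_index] = ETH_P_8021Q;      /* reset value per the spec */
    }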
Reported-by: Markus Carlstedt <markus.carlstedt@windriver.com>
Signed-off-by: Christina Wang <christina.wang@windriver.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Problem reported by the openEuler fuzz-sig group.
The buff2frame_bas function (hw/net/can/can_sja1000.c) can cause an
information leak (QEMU 5.x to 6.x) or a stack overflow (QEMU 4.x) when
the DLC is out of the standard CAN range of 8 bytes.
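A minimal sketch of the kind of bound that is needed (simplified, not the
actual can_sja1000.c code); the DLC taken from the guest buffer must never
drive a copy larger than the 8 data bytes of a classic CAN frame:

    #include <stdint.h>
    #include <string.h>

    #define CAN_MAX_DLEN 8

    static void buff2frame_sketch(const uint8_t *buff, uint8_t *frame_data)
    {
        uint8_t dlc = buff[0] & 0x0f;       /* DLC field from the buffer    */
        if (dlc > CAN_MAX_DLEN) {
            dlc = CAN_MAX_DLEN;             /* clamp out-of-range DLC       */
        }
        memcpy(frame_data, buff + 1, dlc);  /* bounded copy, no overflow    */
    }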
Reported-by: Qiang Ning <ningqiang1@huawei.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Pavel Pisa <pisa@cmp.felk.cvut.cz>
Signed-off-by: Jason Wang <jasowang@redhat.com>
QEMU should never terminate unexpectedly just because the guest is
doing something wrong, like specifying bad queue numbers. Let's
simply refuse to set the device active in this case.
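Sketched roughly (not the real vmxnet3 code, and the limit is hypothetical),
the idea is to fail activation instead of calling abort():

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_TX_QUEUES 8                  /* hypothetical limit          */

    static bool activate_device(unsigned txq_num)
    {
        if (txq_num == 0 || txq_num > MAX_TX_QUEUES) {
            fprintf(stderr, "bad number of tx queues: %u\n", txq_num);
            return false;                    /* stay inactive, don't abort  */
        }
        /* ... proceed with activation ... */
        return true;
    }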
Buglink: https://bugs.launchpad.net/qemu/+bug/1890160
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Raised exceptions don't return, so mark the helper with noreturn.
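For illustration only (the real helper is declared through QEMU's helper
macros), marking a function that never returns looks like this in plain C11:

    #include <stdlib.h>
    #include <stdnoreturn.h>

    static noreturn void helper_raise_exception_sketch(int index)
    {
        /* record the cause, then leave the cpu loop; control never returns */
        exit(index);
    }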
Fixes: 032c76bc6f ("nios2: Add architecture emulation support")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210729101315.2318714-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This crept in as either a cut-and-paste error or a rebase error.
Fixes: cfec388518 ("atomic_template: add inline trace/plugin helpers")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210729004647.282017-24-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Coverity seems to have issues figuring out the properties of g_malloc0
and other non *_n functions. While this was "fixed" by removing the
custom second argument to __coverity_mark_as_afm_allocated__, inline
the code from the array-based allocation functions to avoid future
issues.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
g_malloc/g_malloc0/g_realloc only return NULL if the size is 0; we do not need
to cover that in the model, and so far have expected __coverity_alloc__
to model a non-NULL return value. But that apparently does not work
anymore, so add some extra conditionals that invoke __coverity_panic__
for NULL pointers.
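A minimal sketch of the kind of model function described above (not the
exact model.c contents; __coverity_alloc__ and __coverity_panic__ are the
modeling primitives already named in this message):

    void *__coverity_alloc__(unsigned long size);
    void __coverity_panic__(void);

    void *g_malloc(unsigned long n_bytes)
    {
        void *ptr = __coverity_alloc__(n_bytes);
        if (!ptr) {
            __coverity_panic__();  /* tell Coverity the result is never NULL */
        }
        return ptr;
    }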
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These models are no longer needed now that Coverity does not check
that the result is used with "g_free". Coverity understands
GCC attributes and uses them to detect leaks.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Recently, Coverity has started complaining about using g_free() to free
memory areas allocated by GLib functions not included in model.c,
such as g_strfreev. This unfortunately goes against the GLib
documentation, which suggests that g_malloc() should be matched
with g_free() and plain malloc() with free(); however, since GLib 2.46
g_malloc() is hardcoded to always use the system malloc implementation,
and g_free() is just free() plus a tracepoint. Therefore, this
should not cause any problem in practice.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use void * for consistency with the actual function; provide a model
for MemoryRegionCache functions and for address_space_rw. These
let Coverity understand the bounds of the data that various functions
read and write even at very high levels of inlining (e.g. pci_dma_read).
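One way such a model can express bounds without any special primitives is to
touch the first and last byte of the buffer; a simplified sketch (not the
exact model.c code):

    static void address_space_read_sketch(void *buf, unsigned long len)
    {
        unsigned char *p = buf;
        if (len) {
            p[0] = 0;           /* Coverity sees the first byte written */
            p[len - 1] = 0;     /* ... and the last byte written        */
        }
    }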
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-readconfig is still recording SMP options in QemuOpts instead of
using machine_opts_dict. This means that SMP options from -readconfig
are ignored.
Just stop using QemuOpts for -smp, making it return false for
is_qemuopts_group. Configuration files will merge the values in
machine_opts_dict using the new function machine_merge_property.
At the same time, fix -mem-prealloc, which looked at QemuOpts to find the
number of guest CPUs and used it as the default number of preallocation
threads.
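As an illustrative sketch only (simplified; the real function handles more
groups), excluding "smp-opts" here is what routes -smp and -readconfig
through machine_opts_dict instead of QemuOpts:

    #include <stdbool.h>
    #include <string.h>

    static bool is_qemuopts_group(const char *group)
    {
        return strcmp(group, "smp-opts") != 0;  /* plus the other exclusions */
    }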
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It will be used to parse smp-opts config groups from configuration
files. The point to note is that it does not steal a reference
from the caller. This is better because this function will be called
from qemu_config_foreach's callback; qemu_config_foreach does not cede
its reference to the qdict to the callback, and wants to free it. To
balance that extra reference, machine_parse_property_opt now needs
a qobject_unref.
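A sketch of the ownership rule described above (simplified; the real
signatures differ): the merge helper takes no reference, so a caller that
created the dict must still unref it.

    #include "qapi/qmp/qdict.h"

    static void parse_property_opt_sketch(const char *propname, const char *value)
    {
        QDict *opts = qdict_new();
        qdict_put_str(opts, propname, value);
        /* machine_merge_property(propname, opts, ...) does not steal a ref */
        qobject_unref(opts);            /* so the caller still drops its own */
    }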
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/stefanha-gitlab/tags/block-pull-request' into staging
Pull request
The main fix here is for io_uring. Spurious -EAGAIN errors can happen and the
request needs to be resubmitted.
The MAINTAINERS changes carry no risk and we might as well include them in QEMU
6.1.
# gpg: Signature made Thu 29 Jul 2021 17:22:20 BST
# gpg: using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha-gitlab/tags/block-pull-request:
MAINTAINERS: Added myself as a reviewer for the NVMe Block Driver
block/io_uring: resubmit when result is -EAGAIN
MAINTAINERS: add Stefano Garzarella as io_uring reviewer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
I'm interested in following the activity around the NVMe bdrv.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210728183340.2018313-1-philmd@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Linux SCSI can throw spurious -EAGAIN in some corner cases in its
completion path, which will end up being the result in the completed
io_uring request.
Resubmitting such requests should allow block jobs to complete, even
if such spurious errors are encountered.
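A minimal sketch of the check involved (not the actual block/io_uring.c
code):

    #include <errno.h>
    #include <stdbool.h>

    /* true when a completion should be resubmitted rather than reported */
    static bool should_resubmit(int cqe_res)
    {
        return cqe_res == -EAGAIN;   /* spurious error from the SCSI layer */
    }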
Co-authored-by: Stefan Hajnoczi <stefanha@gmail.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Message-id: 20210729091029.65369-1-f.ebner@proxmox.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
I've been working with io_uring for a while so I'd like to help
with reviews.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20210728131515.131045-1-sgarzare@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Fold the usb2.txt information on device passthrough into usb.rst;
since this is the last part of the .txt file we can delete it now.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210728141457.14825-5-peter.maydell@linaro.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Fold the usb2.txt documentation about specifying which physical
port a USB device should use into usb.rst.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210728141457.14825-4-peter.maydell@linaro.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Fold the information in docs/usb2.txt about the different
kinds of supported USB controller into the main rST manual.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210728141457.14825-3-peter.maydell@linaro.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
We already have a section on USB in the rST manual; fold
the information in docs/usb-storage.txt into it.
We add 'format=raw' to the various -drive options in the code
examples, because QEMU will print warnings these days if you
omit it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210728141457.14825-2-peter.maydell@linaro.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
'data' might point into the middle of a larger buffer; a separate
free_on_destroy pointer is passed into bufp_alloc() to handle that. It is
only used in the normal workflow though, not when dropping packets because
the queue is full. Fix that.
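Roughly sketched (not the real hw/usb code; names follow the description
above), the drop path must release the separately tracked allocation too:

    #include <stdint.h>
    #include <stdlib.h>

    static void queue_or_drop(uint8_t *data, void *free_on_destroy, int queue_full)
    {
        (void)data;                  /* only the normal path consumes data  */
        if (queue_full) {
            free(free_on_destroy);   /* previously leaked on this path      */
            return;
        }
        /* normal path: bufp_alloc(..., free_on_destroy) takes ownership    */
    }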
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/491
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210722072756.647673-1-kraxel@redhat.com>
On Windows we can't wait on file descriptors, so poll libusb using a
timer instead. This fixes a long-standing FIXME.
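A rough sketch of the timer-driven polling (simplified; QEMU's real code
uses its own timer API and re-arms the timer):

    #include <libusb-1.0/libusb.h>

    static void usb_host_timer_cb(libusb_context *ctx)
    {
        struct timeval tv = { 0, 0 };            /* non-blocking poll       */
        libusb_handle_events_timeout(ctx, &tv);  /* handle completed events */
        /* ... re-arm the timer for the next poll interval ... */
    }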
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/431
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20210623085249.1151901-2-kraxel@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
../subprojects/libvhost-user/libvhost-user.c:1070:12: error: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘__u64’ {aka ‘long long unsigned int’} [-Werror=format=]
1070 | DPRINT(" desc_user_addr: 0x%016" PRIx64 "\n", vra->desc_user_addr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~
| |
| __u64 {aka long long unsigned int}
Rather than using %llx, which may fail if __u64 is declared differently
elsewhere, let's just cast the values. Feel free to propose a better solution!
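A self-contained sketch of the cast-based fix (types simplified):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static void print_desc_addr(unsigned long long desc_user_addr) /* __u64-like */
    {
        printf("    desc_user_addr:   0x%016" PRIx64 "\n",
               (uint64_t)desc_user_addr);  /* cast keeps PRIx64 well-defined */
    }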
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210505151313.203258-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Meson now checks that subprojects do not access files from the parent
project. While we all agree this is best practice, libvhost-user also
wants to share a few headers with QEMU, and libvhost-user isn't really a
standalone project at this point (although this change makes the
dependency a bit more explicit).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210505151313.203258-1-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The shift constant was incorrect, causing int_prio to always be zero.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VMRUN exits with SVM_EXIT_ERR if either (see the sketch below):
* the injected event has a reserved type, or
* the injected event is of type 3 (exception) and the specified vector
does not correspond to an exception.
This does not fix the entire exc_inj test in kvm-unit-tests.
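A minimal sketch of the validity check (the type/vector encodings here are
illustrative, not the exact SVM event-injection layout):

    #include <stdbool.h>
    #include <stdint.h>

    #define EVENT_TYPE_EXCEPTION 3

    static bool event_inj_is_valid(uint8_t type, uint8_t vector)
    {
        if (type == 1 || type > 4) {        /* reserved types (illustrative) */
            return false;                   /* -> exit with SVM_EXIT_ERR     */
        }
        if (type == EVENT_TYPE_EXCEPTION && vector > 31) {
            return false;                   /* vector is not an exception    */
        }
        return true;
    }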
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210725090855.19713-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When using clang, we get
ERROR: configure test passed without -Werror but failed with -Werror.
This is probably a bug in the configure script. The failing command
will be at the bottom of config.log.
You can run configure with --disable-werror to bypass this check.
What we really want from these two tests is whether the
entire code sequence is supported, including pragmas.
Adding -Werror makes the test properly fail for clang.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210719200112.295316-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When searching for options like -n in MAKEFLAGS, current code may result
in a false positive match when make is invoked with long options like
--no-print-directory. This has been observed with certain versions of
host make (e.g. 3.82) while building the QEMU package in buildroot.
Filter out such long options before searching for one-character options.
Signed-off-by: Alexey Neyman <stilor@att.net>
Message-Id: <20210722020846.3678817-1-stilor@att.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Coverity reports potential NULL pointer dereference in
get_supported_hv_cpuid_legacy() when 'cs->kvm_state' is NULL. While
'cs->kvm_state' can indeed be NULL in hv_cpuid_get_host(),
kvm_hyperv_expand_features() makes sure that it only happens when
KVM_CAP_SYS_HYPERV_CPUID is supported and KVM_CAP_SYS_HYPERV_CPUID
implies KVM_CAP_HYPERV_CPUID so get_supported_hv_cpuid_legacy() is
never really called. Add asserts to strengthen the protection against
broken KVM behavior.
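A sketch of the kind of assertion added (not the exact target/i386 code):

    #include <assert.h>

    static void get_supported_hv_cpuid_legacy_sketch(void *kvm_state)
    {
        assert(kvm_state);  /* callers guarantee a VM-level handle exists */
        /* ... query the Hyper-V CPUID leaves through kvm_state ... */
    }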
Coverity: CID 1458243
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210716115852.418293-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Haiku does not support compiling with -fpie. See the discussion here
for details:
https://discuss.haiku-os.org/t/qemu-on-haiku-sdl-issue/10961/6?u=rjzak
Signed-off-by: Richard Zak <richard.j.zak@gmail.com>
Message-Id: <CAOakUfM8zMpYiAEn-_f9s1DHdVB-Bq9fGMM=Hfr8hJW9ra6aWw@mail.gmail.com>
[thuth: Tweaked title and patch description]
Signed-off-by: Thomas Huth <thuth@redhat.com>
Even though <linux/kvm.h> seems to exist for all archs on Linux, including
it with __linux__ defined does not work yet, as it will try to include
asm/kvm.h and that can be missing for archs that do not support KVM.
To fix this (instead of any attempt to fix the Linux headers...), we can
make the inclusion x86_64-only, because so far it is only needed for the
KVM dirty ring test.
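A sketch of the guard described above (simplified):

    #if defined(__linux__) && defined(__x86_64__)
    #include <linux/kvm.h>           /* only needed for the dirty ring test */
    #endif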
Fixes: 1f546b709d ("tests: migration-test: Add dirty ring test")
Reported-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210728214128.206198-1-peterx@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
If maintainers are currently pushing to a branch called "staging"
in their repository, they end up with stuck jobs - unless they have an
s390x CI runner machine available. That's ugly; we should
make sure that the related jobs are really only started if such a
runner is available. So let's only run these jobs if it's the
"staging" branch of the main repository of the QEMU project (where
we can be sure that the s390x runner is available), or if the user
explicitly set a S390X_RUNNER_AVAILABLE variable in their CI configs
to declare that they have such a runner available, too.
Fixes: 4799c21023 ("Jobs based on custom runners: add job definitions ...")
Message-Id: <20210728173857.497523-1-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
These two jobs are currently failing very often - the linker seems to
get killed due to out-of-memory problems. Since apparently nobody
currently has an idea how to fix that nicely, let's mark the jobs as manual
for the time being until someone comes up with a proper fix.
Message-Id: <20210728075141.400816-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The iotests 197 and 215 are occasionally failing in the gitlab-CI now.
According to the log, the failure is "./common.rc: Killed" which might
be an indication that the process has been killed due to out-of-memory
reasons. Both tests are doing a big read with 2G that likely causes
this issue. It used to work fine in the gitlab-CI in the past, but
either the program now requires more free memory, or the CI
containers have changed, so that the OOM condition now sometimes occurs.
Anyway, these two tests are not really suitable for CI containers if
they are doing things like huge reads (which is likely also the reason
why they haven't been added to the "auto" group in the past), so let's
simply disable them in the gitlab-CI now, too.
Message-Id: <20210727162542.318882-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Jobs depending on another should not use the 'when: always'
condition, because if a dependency failed we should not keep
running jobs depending on it. The correct condition is
'when: on_success'.
Fixes: c6fc0fc1a7 ("gitlab-ci.yml: Add jobs to build OpenSBI firmware binaries")
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Message-Id: <20210727142431.1672530-5-philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Jobs depending on another should not use the 'when: always'
condition, because if a dependency failed we should not keep
running jobs depending on it. The correct condition is
'when: on_success'.
Fixes: 71920809ce ("gitlab-ci.yml: Add jobs to build EDK2 firmware binaries")
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210727142431.1672530-4-philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Jobs depending on another should not use the 'when: always'
condition, because if a dependency failed we should not keep
running jobs depending on it. The correct condition is
'when: on_success'.
Fixes: f56bf4caf7 ("gitlab: Run Avocado tests manually (except mainstream CI)")
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210727142431.1672530-3-philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We introduced the QEMU_CI_AVOCADO_TESTING variable in commit f56bf4caf
("gitlab: Run Avocado tests manually (except mainstream CI)"), but
forgot to document it properly. Do it now.
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210727142431.1672530-2-philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
PowerPC has two KVM types (HV, PR) that translate into three kernel
modules:
kvm.ko - common kvm code
kvm_hv.ko - kvm running with MSR_HV=1 or MSR_HV|PR=0 in a nested guest.
kvm_pr.ko - kvm running in usermode MSR_PR=1.
Since the two KVM types can both be running at the same time, this
creates a situation in which it is possible for one or both of the
modules to fail to initialize, leaving the generic one behind. This
leads QEMU to think it can create a guest, but KVM will fail when
calling the type-specific code:
ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
qemu-kvm: failed to initialize KVM: Invalid argument
Ideally this would be solved kernel-side, but it might be a while
until we can get rid of one of the modules. So in the meantime this
patch tries to make this less confusing for the end user by adding a
more informative message:
ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
PPC KVM module is not loaded. Try 'modprobe kvm_hv'.
[dwg: Fixed error in #elif which failed compile on !ppc hosts]
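A rough sketch of the hint (simplified; the real patch hooks into QEMU's
KVM init error path and the exact preprocessor conditions differ):

    #include <stdio.h>
    #include <string.h>

    static void report_kvm_create_vm_failed(int err)
    {
        fprintf(stderr, "ioctl(KVM_CREATE_VM) failed: %d %s\n",
                err, strerror(err));
    #if defined(__powerpc__) || defined(__powerpc64__)
        fprintf(stderr, "PPC KVM module is not loaded. Try 'modprobe kvm_hv'.\n");
    #endif
    }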
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20210722141340.2367905-1-farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Coverity reported issues caused by mixing the signed return codes
from DTC with the unsigned return codes of the client interface.
This introduces PROM_ERROR and makes a distinction between the error types.
This fixes the NEGATIVE_RETURNS and OVERRUN issues reported by Coverity.
This adds a comment about the number of return parameters in the VOF hcall.
The reason for such counting is to keep the numbers looking the same in
vof_client_handle() and in Linux (an OF client).
vmc->client_architecture_support() returns a target_ulong and we want to
propagate this to the client (for example H_MULTI_THREADS_ACTIVE).
The VOF path to do_client_architecture_support() needs the top 32 bits
chopped off, but SLOF's H_CAS does not; either way the return value is
either 0 or a 32-bit negative error code. For now this chops off
the top 32 bits.
This makes "claim" fail if the allocated address is above 4GB, as
the client interface is 32-bit. This still allows claiming memory above
4GB, as an initrd can potentially be put there and the client can read
the address from the FDT's "available" property.
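A rough sketch of the 32-bit handling described above (simplified; not the
actual spapr VOF code, and PROM_ERROR's value here is illustrative):

    #include <stdint.h>

    #define PROM_ERROR ((uint32_t)-1)

    static uint32_t cas_result_for_client(uint64_t cas_result)
    {
        return (uint32_t)cas_result;         /* chop off the top 32 bits    */
    }

    static uint32_t claim_result_for_client(uint64_t claimed_addr)
    {
        if (claimed_addr >= (1ULL << 32)) {
            return PROM_ERROR;               /* "claim" cannot return >4GB  */
        }
        return (uint32_t)claimed_addr;
    }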
Fixes: CID 1458139, 1458138, 1458137, 1458133, 1458132
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20210720050726.2737405-1-aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>