Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- Fix crashes and hangs related to iothreads, bdrv_drain and block jobs:
- Fix some AIO context locking in jobs
- Fix blk->in_flight during blk_wait_while_drained()
- vpc: Don't round up already aligned BAT sizes
# gpg: Signature made Tue 07 Apr 2020 15:25:24 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
vpc: Don't round up already aligned BAT sizes
block: Fix blk->in_flight during blk_wait_while_drained()
block: Increase BB.in_flight for coroutine and sync interfaces
block-backend: Reorder flush/pdiscard function definitions
backup: don't acquire aio_context in backup_clean
replication: assert we own context before job_cancel_sync
job: take each job's lock individually in job_txn_apply
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When %gs cannot be used, we use register offset addressing.
This path is almost never used, so it was clearly not tested.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200406174803.8192-1-richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Without -Werror, the probe may succeed, but then compilation fails
later when -Werror is added for other reasons. This shows up on Windows,
where the compiler complains about -fPIC.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200401214756.6559-1-richard.henderson@linaro.org>
Message-Id: <20200403191150.863-13-alex.bennee@linaro.org>
The https://makecode.microbit.org/#editor site generates slightly
unusual .hex files which work fine on a real micro:bit but cause QEMU to
choke. The reason is extraneous data after the EOF record, which causes
the loader to attempt to write a bigger file than it should to the
"rom". According to the Intel HEX file spec, an EOF record really should
be the last thing we process, so let's do that.
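For illustration, this is the kind of record-type handling the fix implies (a minimal standalone sketch of an Intel HEX parse loop, not the actual loader code in QEMU):

    #include <stdint.h>
    #include <stdio.h>

    enum { IHEX_DATA = 0x00, IHEX_EOF = 0x01 };

    /* Illustrative parse loop: stop at the EOF record (type 0x01) and
     * ignore any trailing bytes a tool may have appended after it. */
    static void parse_ihex(FILE *f)
    {
        char line[600];

        while (fgets(line, sizeof(line), f)) {
            unsigned count, addr, type;

            if (sscanf(line, ":%2x%4x%2x", &count, &addr, &type) != 3) {
                continue;               /* not a record line */
            }
            if (type == IHEX_EOF) {
                return;                 /* spec: EOF ends processing */
            }
            /* handle data / extended-address records here */
        }
    }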
Reported-by: Ursula Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200403191150.863-12-alex.bennee@linaro.org>
Don't use magic spaces; calculate the justification for the file
field the way the kernel does with seq_pad.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200403191150.863-10-alex.bennee@linaro.org>
Unfortunately, reading /proc/self/maps is still considered the gold
standard for a process finding out about its own memory layout. As we
will want this data in other contexts soon, factor out the code to read
and parse the data. Rather than blindly copying the existing
sscanf-based code, we use a more modern glib version of the parsing
code to build a more general-purpose map structure.
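As a rough illustration of the glib approach (a sketch only; the struct and field names here are hypothetical, not the ones introduced by the patch):

    #include <glib.h>

    /* Hypothetical per-mapping record. */
    typedef struct {
        guint64 start, end;
        gchar *perms;
        gchar *path;            /* path parsing elided in this sketch */
    } MapEntry;

    /* Read and split /proc/self/maps with glib helpers instead of sscanf(),
     * building a list of MapEntry records. */
    static GSList *read_self_maps(void)
    {
        g_autofree gchar *contents = NULL;
        GSList *list = NULL;

        if (!g_file_get_contents("/proc/self/maps", &contents, NULL, NULL)) {
            return NULL;
        }

        g_auto(GStrv) lines = g_strsplit(contents, "\n", -1);
        for (gchar **line = lines; *line && **line; line++) {
            /* "start-end perms offset dev inode [path]" */
            g_auto(GStrv) fields = g_strsplit(*line, " ", 0);
            if (g_strv_length(fields) < 2) {
                continue;
            }
            g_auto(GStrv) range = g_strsplit(fields[0], "-", 2);
            if (g_strv_length(range) < 2) {
                continue;
            }
            MapEntry *e = g_new0(MapEntry, 1);
            e->start = g_ascii_strtoull(range[0], NULL, 16);
            e->end   = g_ascii_strtoull(range[1], NULL, 16);
            e->perms = g_strdup(fields[1]);
            e->path  = NULL;
            list = g_slist_prepend(list, e);
        }
        return g_slist_reverse(list);
    }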
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200403191150.863-9-alex.bennee@linaro.org>
All other calls to normalize*Subnormal detect zero input before
the call -- this is the only outlier. This case can happen with
+0.0 + +0.0 = +0.0 or -0.0 + -0.0 = -0.0, so return a zero of
the correct sign.
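As a standalone illustration of the signed-zero rule being preserved here (plain C and IEEE 754 behaviour, not the softfloat code itself):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* IEEE 754: the sum of two zeroes of the same sign keeps that sign,
         * so -0.0 + -0.0 must yield -0.0 and +0.0 + +0.0 must yield +0.0. */
        double neg = -0.0 + -0.0;
        double pos = +0.0 + +0.0;

        printf("-0.0 + -0.0 -> %s0.0\n", signbit(neg) ? "-" : "+");
        printf("+0.0 + +0.0 -> %s0.0\n", signbit(pos) ? "-" : "+");
        return 0;
    }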
Reported-by: Coverity (CID 1421991)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200327232042.10008-1-richard.henderson@linaro.org>
Message-Id: <20200403191150.863-8-alex.bennee@linaro.org>
./gdbstub.c: In function ‘handle_query_thread_extra’:
/usr/include/glib-2.0/glib/glib-autocleanups.h:28:10:
error: ‘cpu_name’ may be used uninitialized in this function
[-Werror=maybe-uninitialized]
g_free (*pp);
^
./gdbstub.c:2063:26: note: ‘cpu_name’ was declared here
g_autofree char *cpu_name;
^
cc1: all warnings being treated as errors
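The pattern behind the fix, as a sketch (the helper below is illustrative, not the gdbstub code): a g_autofree pointer should be initialised at its declaration, otherwise the automatic g_free() at scope exit can run on an indeterminate value, which is what the warning is about.

    #include <glib.h>

    static char *describe_cpu(int index)
    {
        /* Broken form: "g_autofree char *cpu_name;" followed by a later
         * assignment lets any early exit reach g_free() with an
         * uninitialised pointer. Fixed form: initialise at the point of
         * declaration. */
        g_autofree char *cpu_name = g_strdup_printf("cpu%d", index);

        return g_strdup(cpu_name);
    }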
Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
Message-Id: <20200326151407.25046-1-dplotnikov@virtuozzo.com>
Reported-by: Euler Robot <euler.robot@huawei.com>
Reported-by: Chen Qun <kuhn.chenqun@huawei.com>
Reviewed-by: Miroslav Rezanina <mrezanin@redhat.com>
Message-Id: <20200325092137.24020-1-kuhn.chenqun@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200403191150.863-7-alex.bennee@linaro.org>
Dynamically allocating a new structure within the DisasContext can
potentially leak as we can longjmp out of the translation loop (see
test_phys_mem). The proper fix would be to use static allocation
within the DisasContext, but as the Xtensa translator imports its code
from elsewhere I leave that as an exercise for the maintainer.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20200403191150.863-6-alex.bennee@linaro.org>
Searching for memory space can cause problems, so let's extend the
CPU_LOG_PAGE output so you can watch init_guest_space fail to
allocate memory. A more involved fix is actually required to make this
function play nicely with the large guard pages the sanitiser likes to
use.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200403191150.863-5-alex.bennee@linaro.org>
We are not using them and they just get in the way.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200403191150.863-4-alex.bennee@linaro.org>
Checking TARGET_ABI_BITS is sketchy - we should check for the presence
of the define to be sure. Also clean up the white space while we are
there.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200403191150.863-3-alex.bennee@linaro.org>
It's perfectly possible to have no function symbols in your ELF file,
and if that happens the undefined behaviour sanitizer rightly complains
about us passing NULL to qsort. Check nsyms before we go ahead.
While we are at it, let's drop the unchecked return value and clean up
the failure leg by using g_autoptr.
Another fix was proposed 101 weeks ago in:
Message-Id: 20180421232120.22208-1-f4bug@amsat.org
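A minimal sketch of the guard described above (illustrative only; the names are hypothetical rather than the actual elfload.c code):

    #include <glib.h>
    #include <stdlib.h>

    struct elf_sym { unsigned long addr; const char *name; };

    static int sym_cmp(const void *a, const void *b)
    {
        const struct elf_sym *sa = a, *sb = b;
        return (sa->addr > sb->addr) - (sa->addr < sb->addr);
    }

    static void sort_syms(struct elf_sym *syms, size_t nsyms)
    {
        /* UBSan rightly flags qsort(NULL, 0, ...): passing a null base
         * pointer is undefined behaviour, so bail out when there are no
         * symbols at all. */
        if (nsyms == 0) {
            return;
        }
        qsort(syms, nsyms, sizeof(*syms), sym_cmp);
    }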
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200403191150.863-2-alex.bennee@linaro.org>
Some GitHub users try to open pull requests against the GitHub
mirror. Unfortunately these get ignored until eventually someone
notices and closes the request.
Enable the 'Repo Lockdown' [*] 3rd party bot which can autorespond
to pull requests with a friendly comment, close the request, and
then lock it to prevent further comments.
[*] https://github.com/dessant/repo-lockdown
Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-Id: <20200406214125.18538-1-f4bug@amsat.org>
[AJB: s/fill/file/ and point at canonical qemu.org/contribute]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Paul Durrant <paul@xen.org>
Message-Id: <20200406165043.1447837-1-anthony.perard@citrix.com>
Since 7f5d9b206d ("object-add: don't create return value if
failed"), qmp_object_add() doesn't write any value to 'ret_data', which
therefore contains random data. Then qobject_unref() fails and aborts.
Fix this by initialising 'ret_data' properly.
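The shape of the fix, roughly (a sketch; the call site shown and the exact qmp_object_add() signature are assumptions for illustration, not quoted from the patch):

    Error *local_err = NULL;
    QObject *ret_data = NULL;   /* previously left uninitialised */

    qmp_object_add(qdict, &ret_data, &local_err);
    /* ... error handling ... */
    qobject_unref(ret_data);    /* now safe: NULL when the call failed */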
Fixes: 5f07c4d60d ("qapi: Flatten object-add")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200406164207.1446817-1-anthony.perard@citrix.com>
The function usbback_packet_complete() currently takes a USBPacket*,
which must be a pointer to the packet field within a struct
usbback_req; the function uses container_of() to get the struct
usbback_req* given the USBPacket*.
This is unnecessarily confusing (and in particular it confuses the
Coverity Scan analysis, resulting in the false positive CID 1421919
where it thinks that we write off the end of the structure). Since
both callsites already have the pointer to the struct usbback_req,
just pass that in directly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Message-Id: <20200323164318.26567-1-peter.maydell@linaro.org>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
As reported on Launchpad, Azure apparently doesn't accept images for
upload that are not both aligned to 1 MB blocks and have a BAT size that
matches the image size exactly.
As far as I can tell, there is no real reason why we create a BAT that
is one entry longer than necessary for aligned image sizes, so change
that.
(Even though the condition is only mentioned as "should" in the spec and
previous products accepted larger BATs - but we'll try to maintain
compatibility with as many of Microsoft's ever-changing interpretations
of the VHD spec as possible.)
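As a worked illustration of the off-by-one (numbers chosen for the example; the exact expression in block/vpc.c is not quoted here): a 1 GiB image with the standard 2 MiB block size needs exactly 1024 MiB / 2 MiB = 512 BAT entries, but a calculation that unconditionally adds a block before dividing, e.g. (1024 MiB + 2 MiB) / 2 MiB = 513, produces one entry more than the aligned image size justifies, which is the mismatch Azure rejects.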
Fixes: https://bugs.launchpad.net/bugs/1870098
Reported-by: Tobias Witek
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200402093603.2369-1-kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Waiting in blk_wait_while_drained() while blk->in_flight is increased
for the current request is wrong because it will cause the drain
operation to deadlock.
This patch makes sure that blk_wait_while_drained() is called with
blk->in_flight increased exactly once for the current request, and that
it temporarily decreases the counter while it waits.
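In other words, the waiting helper drops its own unit of in_flight around the wait and takes it back afterwards; roughly along these lines (a sketch of the idea, not a verbatim quote of block-backend.c):

    static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
    {
        assert(blk->in_flight > 0);

        if (blk->quiesce_counter && !blk->disable_request_queuing) {
            /* Drop our own in_flight reference so the drain can complete,
             * then re-take it once the drained section ends. */
            blk_dec_in_flight(blk);
            qemu_co_queue_wait(&blk->queued_requests, NULL);
            blk_inc_in_flight(blk);
        }
    }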
Fixes: cf3129323f
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200407121259.21350-4-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
External callers of blk_co_*() and of the synchronous blk_*() functions
don't currently increase the BlockBackend.in_flight counter, but calls
from blk_aio_*() do, so there is an inconsistency in whether the counter
has been increased or not.
This patch moves the actual operations to static functions that can
later know they will always be called with in_flight increased exactly
once, even for external callers using the blk_co_*() coroutine
interfaces.
If the public blk_co_*() interface is unused, remove it.
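The resulting split is, schematically (a sketch of the pattern; the static helper's name and the exact signatures are illustrative):

    /* Internal worker: relies on in_flight having been increased exactly
     * once for this request, so waiting while drained works correctly. */
    static int coroutine_fn blk_do_preadv(BlockBackend *blk, int64_t offset,
                                          unsigned int bytes,
                                          QEMUIOVector *qiov,
                                          BdrvRequestFlags flags)
    {
        blk_wait_while_drained(blk);
        /* ... perform the actual read ... */
        return 0;
    }

    /* Public coroutine interface: owns exactly one in_flight reference. */
    int coroutine_fn blk_co_preadv(BlockBackend *blk, int64_t offset,
                                   unsigned int bytes, QEMUIOVector *qiov,
                                   BdrvRequestFlags flags)
    {
        int ret;

        blk_inc_in_flight(blk);
        ret = blk_do_preadv(blk, offset, bytes, qiov, flags);
        blk_dec_in_flight(blk);
        return ret;
    }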
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200407121259.21350-3-kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Move all variants of the flush/pdiscard functions to a single place and
put the blk_co_*() version first because it is called by all other
variants (and will become static in the next patch).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200407121259.21350-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
All code-paths leading to backup_clean (via job_clean) have the job's
context already acquired. The job's context is guaranteed to be the same
as the one used by backup_top via backup_job_create.
Since the previous logic effectively acquired the lock twice, this
broke cleanup of backups for disks using IO threads: the BDRV_POLL_WHILE
in bdrv_backup_top_drop -> bdrv_do_drained_begin would only release the
lock once, thus deadlocking with the IO thread.
This is a partial revert of 0abf258171.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200407115651.69472-4-s.reiter@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
job_cancel_sync requires the job's lock to be held; all other callers
already do this (replication_stop, drive_backup_abort,
blockdev_backup_abort, job_cancel_sync_all, cancel_common).
In this case we're in a BlockDriver handler, so we already have a lock;
just assert that it is the same as the one used for the commit_job.
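The assertion amounts to something like this (a sketch; the way the commit job is reached in replication.c is assumed for illustration):

    /* We are in a BlockDriver callback, so the caller already holds the
     * relevant AioContext; just check it is the commit job's context. */
    assert(commit_job->aio_context == qemu_get_current_aio_context());
    job_cancel_sync(commit_job);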
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Message-Id: <20200407115651.69472-3-s.reiter@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
All callers of job_txn_apply hold a single job's lock, but different
jobs within a transaction can have different contexts, thus we need to
lock each one individually before applying the callback function.
Similar to job_completed_txn_abort this also requires releasing the
caller's context before and reacquiring it after to avoid recursive
locks which might break AIO_WAIT_WHILE in the callback. This is safe, since
existing code would already have to take this into account; otherwise
job_completed_txn_abort would already have been broken.
This also brings to light a different issue: When a callback function in
job_txn_apply moves its job to a different AIO context, callers will
try to release the wrong lock (now that we re-acquire the lock
correctly, previously it would just continue with the old lock, leaving
the job unlocked for the rest of the return path). Fix this by not caching
the job's context.
This is only necessary for qmp_block_job_finalize, qmp_job_finalize and
job_exit, since everyone else calls through job_exit.
One test needed adapting, since it calls job_finalize directly, so it
manually needs to acquire the correct context.
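Schematically, the per-job locking looks like this (a sketch of the approach described above, not the verbatim patch):

    static int job_txn_apply(Job *job, int fn(Job *))
    {
        Job *other_job, *next;
        int rc = 0;

        /* Keep a reference and drop the caller's context, so that fn() and
         * any AIO_WAIT_WHILE() it runs don't see a recursively taken lock. */
        job_ref(job);
        aio_context_release(job->aio_context);

        QLIST_FOREACH_SAFE(other_job, &job->txn->jobs, txn_list, next) {
            AioContext *ctx = other_job->aio_context;

            aio_context_acquire(ctx);
            rc = fn(other_job);
            aio_context_release(ctx);
            if (rc) {
                break;
            }
        }

        /* Re-read job->aio_context rather than a cached value: fn() may
         * have moved the job to a different context. */
        aio_context_acquire(job->aio_context);
        job_unref(job);
        return rc;
    }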
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Message-Id: <20200407115651.69472-2-s.reiter@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit a31ca6801c ("qemu/queue.h: clear linked list pointers on
remove") revealed that a request was removed twice from a list, once
in xen_block_finish_request() and a second time in
xen_block_release_request(), when both functions are called from
xen_block_complete_aio(). In addition, the `requests_inflight' counter is
decreased twice and thus becomes negative.
This is a bug that was introduced in bfd0d63660 ("xen-block: improve
response latency"), where a `finished' list was removed.
That commit also introduced a request leak in xen_block_do_aio().
That function calls xen_block_finish_request() but the request is
never released after that.
To fix both issues, we make two changes:
- we squash finish_request() and release_request() together, as we want
to remove a request from the 'inflight' list and add it to the 'freelist'.
- before releasing a request, we need to let the other end know the
result, so we call xen_block_send_response() before releasing it.
The first change fixes the double QLIST_REMOVE() as we remove the extra
call. The second change makes the leak go away because if we want to
call finish_request(), we need to call a function that does all of
finish, send response, and release.
Fixes: bfd0d63660 ("xen-block: improve response latency")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Message-Id: <20200406140217.1441858-1-anthony.perard@citrix.com>
Reviewed-by: Paul Durrant <paul@xen.org>
[mreitz: Amended commit message as per Paul's suggestions]
Signed-off-by: Max Reitz <mreitz@redhat.com>
From time to time, my shell decides to replace the bracketed numbers here
with the numbers inside (i.e., "=== Clusters to be compressed [1]" is
printed as "=== Clusters to be compressed 1"). That makes tests that
use common.pattern fail. Prevent that from happening by quoting the
arguments to all echos in common.pattern.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200403101134.805871-1-mreitz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
When issuing a compressed write request, the number of bytes must be a
multiple of the cluster size or reach the end of the last cluster.
The current code allows requests that violate this constraint, and we
hit an assertion:
$ qemu-img create -f qcow2 img.qcow2 1M
$ qemu-io -c 'write -c 0 32k' img.qcow2
qemu-io: block/qcow2.c:4257: qcow2_co_pwritev_compressed_task:
Assertion `bytes == s->cluster_size || (bytes < s->cluster_size &&
(offset + bytes == bs->total_sectors << BDRV_SECTOR_BITS))' failed.
Aborted
This patch fixes a regression introduced in 0d483dce38.
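The fix boils down to rejecting misaligned compressed writes up front instead of asserting later; roughly (a sketch, not the exact qcow2.c hunk):

    /* A compressed write must cover whole clusters, except that the final
     * cluster of the image may be short. Reject anything else cleanly. */
    if (offset_into_cluster(s, offset)) {
        return -EINVAL;
    }
    if (offset_into_cluster(s, bytes) &&
        (offset + bytes) != (bs->total_sectors << BDRV_SECTOR_BITS)) {
        return -EINVAL;
    }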
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200406143401.26854-1-berto@igalia.com>
Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Various qemu-img commands are inconsistent on whether they report
status/errors in terms of bytes or sector offsets. The latter is
confusing (especially as more places move to 4k block sizes), so let's
switch everything to just use bytes everywhere. One iotest is
impacted.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200402135717.476398-1-eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
A discard request deallocates the selected clusters so they read back
as zeroes. This is done by clearing the cluster offset field and
setting QCOW_OFLAG_ZERO in the L2 entry.
This flag is however only supported when qcow_version >= 3. In older
images the cluster is simply deallocated, exposing any possible stale
data from the backing file.
Since discard is an advisory operation it's safer to simply forbid it
in this scenario.
Note that we are adding this check to qcow2_co_pdiscard() and not to
qcow2_cluster_discard() or discard_in_l2_slice() because the last
two are also used by qcow2_snapshot_create() to discard the clusters
used by the VM state. In this case there's no risk of exposing stale
data to the guest and we really want the clusters to always be
discarded.
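The resulting check sits at the top of qcow2_co_pdiscard(); something along these lines (a sketch; the exact condition in the patch may differ):

    if (s->qcow_version < 3 && bs->backing) {
        /* No QCOW_OFLAG_ZERO available in v2 images: discarding could
         * expose stale data from the backing file, so refuse the
         * (advisory) request. */
        return -ENOTSUP;
    }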
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200331114345.29993-1-berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-5.0-20200407' into staging
ppc patch queue 2020-04-07
An assortment of fixes for qemu-5.0, including a number for the FWNMI
feature which is new this release.
# gpg: Signature made Tue 07 Apr 2020 05:35:10 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-5.0-20200407:
ppc/pnv: Create BMC devices only when defaults are enabled
pseries: Update SLOF firmware image
hw/ppc/ppc440_uc.c: Remove incorrect iothread locking from dcr_write_pcie()
spapr: Fix failure path for attempting to hot unplug PCI bridges
ppc/spapr: Don't kill the guest if a recovered FWNMI machine check delivery fails
ppc/spapr: Add FWNMI machine check delivery warnings
ppc/spapr: Improve FWNMI machine check delivery corner case comments
ppc/spapr: KVM FWNMI should not be enabled until guest requests it
vfio/spapr: Fix page size calculation
hw/ppc/e500.c: Handle qemu_find_file() failure
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In some corner cases (that never happen during normal operation but a
malicious guest could program wrong values) pixman functions were
called with parameters that result in a crash. Fix this and add more
checks to disallow such cases.
Reported-by: Ziming Zhang <ezrakiez@gmail.com>
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-id: 20200406204029.19559747D5D@zero.eik.bme.hu
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Commit e2392d4395 ("ppc/pnv: Create BMC devices at machine init")
introduced default BMC devices, which can be a problem when the same
devices are defined on the command line with:
-device ipmi-bmc-sim,id=bmc0 -device isa-ipmi-bt,bmc=bmc0,irq=10
QEMU fails with:
qemu-system-ppc64: error creating device tree: node: FDT_ERR_EXISTS
Use defaults_enabled() when creating the default BMC devices to let
the user provide their own BMC devices using '-nodefaults'. If no BMC
devices are provided, output a warning but let QEMU run, as this is a
supported configuration. However, when multiple BMC devices are
defined, stop QEMU with a clear error as the results are unexpected.
Fixes: e2392d4395 ("ppc/pnv: Create BMC devices at machine init")
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20200404153655.166834-1-clg@kaod.org>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is a single regression fix for 5.0:
Greg Kurz (1):
slof: Only close stdout for virtio-serial devices
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In dcr_write_pcie() we take the iothread lock around a call to
pcie_host_mmcfg_update(). This is an incorrect attempt to deal with
the bug fixed in commit 235352ee6e, where we were not taking
the iothread lock before calling device dcr read/write functions.
(It's not sufficient locking, because although the other cases in the
switch statement won't assert, there is no locking which prevents
multiple guest CPUs from trying to access the PPC460EXPCIEState
struct at the same time and corrupting data.)
Unfortunately with commit 235352ee6e we are now trying
to recursively take the iothread lock, which will assert:
$ qemu-system-ppc -M sam460ex --display none
**
ERROR:/home/petmay01/linaro/qemu-from-laptop/qemu/cpus.c:1830:qemu_mutex_lock_iothread_impl: assertion failed: (!qemu_mutex_iothread_locked())
Aborted (core dumped)
Remove the locking within dcr_write_pcie().
Fixes: 235352ee6e
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20200330125228.24994-1-peter.maydell@linaro.org>
Tested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For various technical reasons we can't currently allow unplugging a PCI
to PCI bridge on the pseries machine. spapr_pci_unplug_request() correctly
generates an error message if that's attempted.
But if the given errp is not error_abort or error_fatal, it doesn't
actually stop trying to unplug the bridge anyway.
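The shape of the fix is simply an early return after reporting the error (a sketch; the condition and the message wording are illustrative):

    if (pc->is_bridge) {
        error_setg(errp, "PCI: Hot unplug of PCI bridges not supported");
        return;   /* previously missing: execution fell through and kept
                   * trying to unplug the bridge */
    }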
Fixes: 14e714900f "spapr: Allow hot plug/unplug of PCI bridges and devices under PCI bridges"
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Try to be tolerant of FWNMI delivery errors if the machine check has
been recovered by the host.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200325142906.221248-5-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
[dwg: Updated comment at Greg's suggestion]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add some messages which explain problems and guest misbehaviour that
may be difficult to diagnose in rare cases of machine checks.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200325142906.221248-4-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Some of the conditions are not as clearly documented as they could be.
Also the non-FWNMI case does not need a large comment.
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200325142906.221248-3-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The KVM FWNMI capability should be enabled with the "ibm,nmi-register"
rtas call. Although MCEs from KVM will be delivered as architected
interrupts to the guest before "ibm,nmi-register" is called, KVM has
different behaviour depending on whether the guest has enabled FWNMI
(it attempts to do more recovery on behalf of a non-FWNMI guest).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200325142906.221248-2-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Coverity detected an issue (CID 1421903) with a potential call of
clz64(0), which returns 64 and makes the code shift ("<<") by a negative
number. This checks the mask and avoids the undefined behaviour.
In practice pgsizes and memory_region_iommu_get_min_page_size() always
have some common page sizes and even if they did not, the resulting page
size would be 0x8000.0000.0000.0000 (gcc 9.2) and
ioctl(VFIO_IOMMU_SPAPR_TCE_CREATE) would fail anyway.
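Schematically, the hazard and the guard look like this (a sketch; the variable names around the mask are assumed from the context above, not quoted from vfio/spapr.c):

    uint64_t pgmask = container->pgsizes & (pagesize | (pagesize - 1));

    /* clz64(0) == 64, so "63 - clz64(0)" would be a negative shift count,
     * which is undefined behaviour. Treat an empty mask as "no usable
     * page size" instead. */
    pagesize = pgmask ? 1ULL << (63 - clz64(pgmask)) : 0;
    if (!pagesize) {
        error_report("Host doesn't support the required IOMMU page sizes");
        return -EINVAL;
    }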
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20200324063912.25063-1-aik@ozlabs.ru>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If qemu_find_file() doesn't find the BIOS it returns NULL; we were
passing that unchecked through to load_elf(), which assumes a non-NULL
pointer and may misbehave. In practice it fails with a weird message:
$ qemu-system-ppc -M ppce500 -display none -kernel nonesuch
Bad address
qemu-system-ppc: could not load firmware '(null)'
Handle the failure case better:
$ qemu-system-ppc -M ppce500 -display none -kernel nonesuch
qemu-system-ppc: could not find firmware/kernel file 'nonesuch'
Spotted by Coverity (CID 1238954).
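The fix amounts to checking the result before handing it to load_elf(); roughly (a sketch; the variable names are assumed for illustration):

    filename = qemu_find_file(QEMU_FILE_TYPE_BIOS, payload_name);
    if (!filename) {
        error_report("could not find firmware/kernel file '%s'",
                     payload_name);
        exit(1);
    }
    /* load_elf(filename, ...) continues as before */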
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20200324121216.23899-1-peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>