Switch the exynos MCT LFRC timers over to the ptimer transaction API.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-13-peter.maydell@linaro.org
We want to switch the exynos MCT code away from bottom-half based ptimers to
the new transaction-based ptimer API. The MCT is complicated
and uses multiple different ptimers, so it's clearer to switch
it a piece at a time. Here we change over only the GFRC.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-12-peter.maydell@linaro.org
Switch the digic-timer.c code away from bottom-half based ptimers to
the new transaction-based ptimer API. This just requires adding
begin/commit calls around the various places that modify the ptimer
state, and using the new ptimer_init() function to create the timer.
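For illustration, a hedged sketch of what such a conversion looks like
(the handler and state names are placeholders, not the literal
digic-timer.c diff):

  /* Sketch only: register decoding is omitted. */
  static void digic_timer_write(void *opaque, hwaddr offset,
                                uint64_t value, unsigned size)
  {
      DigicTimerState *s = opaque;

      ptimer_transaction_begin(s->ptimer);
      ptimer_set_limit(s->ptimer, value, 1);  /* modifies ptimer state */
      ptimer_transaction_commit(s->ptimer);
  }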
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-11-peter.maydell@linaro.org
Switch the cmsdk-apb-timer code away from bottom-half based ptimers
to the new transaction-based ptimer API. This just requires adding
begin/commit calls around the various places that modify the ptimer
state, and using the new ptimer_init() function to create the timer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-10-peter.maydell@linaro.org
Switch the cmsdk-apb-dualtimer code away from bottom-half based
ptimers to the new transaction-based ptimer API. This just requires
adding begin/commit calls around the various places that modify the
ptimer state, and using the new ptimer_init() function to create the
timer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-9-peter.maydell@linaro.org
Switch the arm_mptimer.c code away from bottom-half based ptimers to
the new transaction-based ptimer API. This just requires adding
begin/commit calls around the various places that modify the ptimer
state, and using the new ptimer_init() function to create the timer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-8-peter.maydell@linaro.org
Switch the allwinner-a10-pit code away from bottom-half based ptimers to
the new transaction-based ptimer API. This just requires adding
begin/commit calls around the various places that modify the ptimer
state, and using the new ptimer_init() function to create the timer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-7-peter.maydell@linaro.org
Switch the musicpal code away from bottom-half based ptimers to
the new transaction-based ptimer API. This just requires adding
begin/commit calls around the various places that modify the ptimer
state, and using the new ptimer_init() function to create the timer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-6-peter.maydell@linaro.org
Switch the arm_timer.c code away from bottom-half based ptimers
to the new transaction-based ptimer API. This just requires
adding begin/commit calls around the various arms of
arm_timer_write() that modify the ptimer state, and using the
new ptimer_init() function to create the timer.
Fixes: https://bugs.launchpad.net/qemu/+bug/1777777
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-5-peter.maydell@linaro.org
Convert the ptimer test cases to the transaction-based ptimer API,
by changing to ptimer_init(), dropping the now-unused QEMUBH
variables, and surrounding each set of changes to the ptimer
state in ptimer_transaction_begin/commit calls.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-4-peter.maydell@linaro.org
Provide the new transaction-based API. If a ptimer is created
using ptimer_init() rather than ptimer_init_with_bh(), then
instead of providing a QEMUBH, it provides a pointer to the
callback function directly, and has opted into the transaction
API. All calls to functions which modify ptimer state:
- ptimer_set_period()
- ptimer_set_freq()
- ptimer_set_limit()
- ptimer_set_count()
- ptimer_run()
- ptimer_stop()
must be between matched calls to ptimer_transaction_begin()
and ptimer_transaction_commit(). When ptimer_transaction_commit()
is called it will evaluate the state of the timer after all the
changes in the transaction, and call the callback if necessary.
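A minimal usage sketch (the device state 's' and the callback are
hypothetical, not part of the ptimer API):

  s->timer = ptimer_init(my_timer_cb, s, PTIMER_POLICY_DEFAULT);

  ptimer_transaction_begin(s->timer);
  ptimer_set_freq(s->timer, 1000000);   /* 1 MHz */
  ptimer_set_limit(s->timer, 0xffff, 1);
  ptimer_run(s->timer, 0);              /* periodic mode */
  ptimer_transaction_commit(s->timer);  /* state evaluated, callback may fire */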
In the old API the individual update functions generally would
call ptimer_trigger() immediately, which would schedule the QEMUBH.
In the new API the update functions will instead defer the
"set s->next_event and call ptimer_reload()" work to
ptimer_transaction_commit().
Because ptimer_trigger() can now immediately call into the
device code which may then call other ptimer functions that
update ptimer_state fields, we must be more careful in
ptimer_reload() not to cache fields from ptimer_state across
the ptimer_trigger() call. (This was harmless with the QEMUBH
mechanism as the BH would not be invoked until much later.)
We use assertions to check that:
* the functions modifying ptimer state are not called outside
a transaction block
* ptimer_transaction_begin() and _commit() calls are paired
* the transaction API is not used with a QEMUBH ptimer
There is some slight repetition of code:
* most of the set functions have similar looking "if s->bh
call ptimer_reload, otherwise set s->need_reload" code
* ptimer_init() and ptimer_init_with_bh() have similar code
We deliberately don't try to avoid this repetition, because
it will all be deleted when the QEMUBH version of the API
is removed.
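As a hedged sketch, that repeated setter pattern looks roughly like:

  if (s->bh) {                  /* old QEMUBH-style ptimer: update now */
      ptimer_reload(s, 0);
  } else {                      /* transaction API: defer to commit */
      s->need_reload = true;
  }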
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-3-peter.maydell@linaro.org
Currently the ptimer design uses a QEMU bottom-half as its
mechanism for calling back into the device model using the
ptimer when the timer has expired. Unfortunately this design
is fatally flawed, because it means that there is a lag
between the ptimer updating its own state and the device
callback function updating device state, and guest accesses
to device registers between the two can return inconsistent
device state.
We want to replace the bottom-half design with one where
the guest device's callback is called either immediately
(when the ptimer triggers by timeout) or when the device
model code closes a transaction-begin/end section (when the
ptimer triggers because the device model changed the
ptimer's count value or other state). As the first step,
rename ptimer_init() to ptimer_init_with_bh(), to free up
the ptimer_init() name for the new API. We can then convert
all the ptimer users away from ptimer_init_with_bh() before
removing it entirely.
(Commit created with
git grep -l ptimer_init | xargs sed -i -e 's/ptimer_init/ptimer_init_with_bh/'
and three overlong lines folded by hand.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191008171740.9679-2-peter.maydell@linaro.org
Host kernels within [4.18, 5.3] report an erroneous KVM_MAX_VCPUS=512
for ARM. The actual capability to instantiate more than 256 vcpus
was fixed in 5.4 with the upgrade of the KVM_IRQ_LINE ABI to support
vcpu id encoded on 12 bits instead of 8 and a redistributor consuming
a single KVM IO device instead of 2.
So let's check this capability when attempting to use more than 256
vcpus within any ARM kvm accelerated machine.
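A hedged sketch of the check (its exact location and error handling in
QEMU's KVM init code may differ):

  /* Illustrative: refuse more than 256 vcpus unless the host kernel
   * exposes the fixed KVM_IRQ_LINE ABI. */
  if (ms->smp.cpus > 256 &&
      !kvm_check_extension(s, KVM_CAP_ARM_IRQ_LINE_LAYOUT_2)) {
      error_report("Using more than 256 vcpus requires a host kernel "
                   "with KVM_CAP_ARM_IRQ_LINE_LAYOUT_2");
      return -EINVAL;
  }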
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-4-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Host kernels that expose the KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 capability
allow injection of interrupts along with vcpu ids larger than 255.
Let's encode the vcpu id on 12 bits according to the upgraded KVM_IRQ_LINE
ABI when needed.
Given that we have two call sites that need to assemble
the value for kvm_set_irq(), a new helper routine, kvm_arm_set_irq(),
is introduced.
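A hedged sketch of the helper (the shift/mask macros are from the
kernel UAPI headers; treat the body as illustrative):

  int kvm_arm_set_irq(int cpu, int irqtype, int irq, int level)
  {
      int kvm_irq = (irqtype << KVM_ARM_IRQ_TYPE_SHIFT)
                  | ((cpu % 256) << KVM_ARM_IRQ_VCPU_SHIFT)
                  | ((cpu / 256) << KVM_ARM_IRQ_VCPU2_SHIFT) /* id bits 8..11 */
                  | (irq & KVM_ARM_IRQ_NUM_MASK);

      return kvm_set_irq(kvm_state, kvm_irq, !!level);
  }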
Without this patch, QEMU exits with a "kvm_set_irq: Invalid argument"
message.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-3-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update the headers against commit:
0f1a7b3fac05 ("timer-of: don't use conditional expression
with mixed 'void' types")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191003154640.22451-2-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- block: Fix crash with qcow2 partial cluster COW with small cluster
sizes (misaligned write requests with BDRV_REQ_NO_FALLBACK)
- qcow2: Fix integer overflow potentially causing corruption with huge
requests
- vhdx: Detect truncated image files
- tools: Support help options for --object
- Various block-related replay improvements
- iotests/028: Fix for long $TEST_DIRs
# gpg: Signature made Mon 14 Oct 2019 17:02:54 BST
# gpg: using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
iotests: Test large write request to qcow2 file
qcow2: Limit total allocation range to INT_MAX
qemu-nbd: Support help options for --object
qemu-img: Support help options for --object
qemu-io: Support help options for --object
vl: Split off user_creatable_print_help()
iotests/028: Fix for long $TEST_DIRs
block: Reject misaligned write requests with BDRV_REQ_NO_FALLBACK
replay: add BH oneshot event for block layer
replay: finish record/replay before closing the disks
replay: don't drain/flush bdrv queue while RR is working
replay: update docs for record/replay with block devices
replay: disable default snapshot for record/replay
block: implement bdrv_snapshot_goto for blkreplay
block/vhdx: add check for truncated image files
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The following statement produces a SyntaxWarning with Python 3.8:

  if len(format) is 0:

  scripts/tracetool/__init__.py:459: SyntaxWarning: "is" with a
  literal. Did you mean "=="?

Use the conventional len(x) == 0 syntax instead.
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20191010122154.10553-1-stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
tracetool needs to know the group name ("all", "root", or a specific
subdirectory). Also remove the stdin redirection because tracetool.py
needs the path to the trace-events file. Update the documentation.
Fixes: 2098c56a9b ("trace: move setting of group name into Makefiles")
Buglink: https://bugs.launchpad.net/bugs/1844814
Reported-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20191009135154.10970-1-stefanha@redhat.com>
Without HEAD^, the following happens when you attempt a large write
request to a qcow2 file such that the number of bytes covered by all
clusters involved in a single allocation will exceed INT_MAX:
(A) handle_alloc_space() decides to fill the whole area with zeroes and
fails because bdrv_co_pwrite_zeroes() fails (the request is too
large).
(B) If handle_alloc_space() does not do anything, but merge_cow()
decides that the requests can be merged, it will create a too long
IOV that later cannot be written.
(C) Otherwise, all parts will be written separately, so those requests
will work.
In either B or C, though, qcow2_alloc_cluster_link_l2() will have an
overflow: We use an int (i) to iterate over nb_clusters, and then
calculate the L2 entry based on "i << s->cluster_bits" -- which will
overflow if the range covers more than INT_MAX bytes. This then leads
to image corruption because the L2 entry will be wrong (it will be
recognized as a compressed cluster).
Even if that were not the case, the .cow_end area would be empty
(because handle_alloc() will cap avail_bytes and nb_bytes at INT_MAX, so
their difference (which is the .cow_end size) will be 0).
So this test checks that on such large requests, the image will not be
corrupted. Unfortunately, we cannot check whether COW will be handled
correctly, because that data is discarded when it is written to null-co
(but we have to use null-co, because writing 2 GB of data in a test is
not quite reasonable).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When the COW areas are included, the size of an allocation can exceed
INT_MAX. This is kind of limited by handle_alloc() in that it already
caps avail_bytes at INT_MAX, but the number of clusters still reflects
the original length.
This can have all sorts of effects, ranging from the storage layer write
call failing to image corruption. (If there were no image corruption,
then I suppose there would be data loss because the .cow_end area is
forced to be empty, even though there might be something we need to
COW.)
Fix all of it by limiting nb_clusters so the equivalent number of bytes
will not exceed INT_MAX.
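A hedged sketch of the idea (not necessarily the literal diff):

  /* Cap nb_clusters so that the equivalent byte count still fits
   * in an int. */
  nb_clusters = MIN(nb_clusters, INT_MAX >> s->cluster_bits);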
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of parsing help options as normal object properties and
returning an error, provide the same help functionality as the system
emulator in qemu-nbd, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Instead of parsing help options as normal object properties and
returning an error, provide the same help functionality as the system
emulator in qemu-img, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Instead of parsing help options as normal object properties and
returning an error, provide the same help functionality as the system
emulator in qemu-io, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Printing help for --object is something that we want not only in the
system emulator, but also in tools that support --object. Move it into a
separate function in qom/object_interfaces.c to make the code accessible
for tools.
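A hedged sketch of a resulting call site in a tool's --object handling
(the surrounding context is illustrative):

  if (user_creatable_print_help(type, opts)) {
      exit(EXIT_SUCCESS);
  }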
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
For long test image paths, the order of the "Formatting" line and the
"(qemu)" prompt after a drive_backup HMP command may be reversed. In
fact, the interaction between the prompt and the line may lead to the
"Formatting" line not being greppable at all after "read"-ing it (if
the prompt injects an IFS character into the "Formatting" string).
So just wait until we get a prompt. At that point, the block job must
have been started, so "info block-jobs" will only return "No active
jobs" once it is done.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The BDRV_REQ_NO_FALLBACK flag means that an operation should only be
performed if it can be offloaded or otherwise performed efficiently.
However, a misaligned write request requires a read-modify-write
cycle, so we should return
an error and let the caller decide how to proceed.
This hits an assertion since commit c8bb23cbdb if the required
alignment is larger than the cluster size:
  qemu-img create -f qcow2 -o cluster_size=2k img.qcow2 4G
  qemu-io -c "open -o driver=qcow2,file.align=4k blkdebug::img.qcow2" \
          -c 'write 0 512'

  qemu-io: block/io.c:1127: bdrv_driver_pwritev: Assertion `!(flags & BDRV_REQ_NO_FALLBACK)' failed.
  Aborted
The reason is that when writing to an unallocated cluster we try to
skip the copy-on-write part and zeroize it using BDRV_REQ_NO_FALLBACK
instead, resulting in a write request that is too small (2KB cluster
size vs 4KB required alignment).
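A hedged sketch of the check in the write path (close in spirit to the
fix, but treat the details as illustrative):

  /* In bdrv_co_pwritev(): refuse the RMW when the caller forbade
   * fallbacks; 'align' is the driver's required request alignment. */
  if ((flags & BDRV_REQ_NO_FALLBACK) &&
      !QEMU_IS_ALIGNED(offset | bytes, align)) {
      return -ENOTSUP;
  }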
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Replay is capable of recording normal BH events, but sometimes there
are single-use callbacks scheduled with the aio_bh_schedule_oneshot
function. This patch enables recording and replaying such callbacks.
The block layer uses these events for calling the completion function,
and replaying these calls makes the execution deterministic.
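A hedged sketch of a converted call site (the surrounding context is
illustrative):

  /* Before: aio_bh_schedule_oneshot(bdrv_get_aio_context(bs), cb, opaque);
   * After, so the event is recorded and replayed deterministically: */
  replay_bh_schedule_oneshot_event(bdrv_get_aio_context(bs), cb, opaque);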
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
After recent updates, block devices cannot be closed on QEMU exit.
This happens due to block request polling when replay is not finished.
Therefore we now stop execution recording before closing the block
devices.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In record/replay mode the bdrv queue is controlled by the replay
mechanism. It does not allow saving or loading snapshots while the
bdrv queue is not empty. Stopping the VM is not blocked by a non-empty
queue, but flushing the queue there is still impossible, because it
may cause deadlocks in replay mode.
This patch disables bdrv_drain_all and bdrv_flush_all in
record/replay mode.
Stopping the machine while I/O requests are not finished is needed for
debugging. E.g., a breakpoint may be set at a specified step, and
forcing the I/O requests to finish may break the determinism of the
execution.
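A hedged sketch of the guard (the function body details are
illustrative):

  void bdrv_drain_all_begin(void)
  {
      if (replay_events_enabled()) {
          return;  /* draining could deadlock while record/replay runs */
      }
      /* ... normal drain logic ... */
  }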
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch updates the description of the command lines for using
record/replay with attached block devices.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch stops the '-snapshot' option from being enabled by default
in record/replay mode. This is needed for creating vmstates in record
and replay modes.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch enables making snapshots with blkreplay used in
block devices.
This is required so that bdrv_snapshot_goto can work without calling
.bdrv_open, which is not implemented.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
QEMU is currently not able to detect truncated vhdx image files.
Add a basic check at open time that all allocated blocks are
reachable, and report all errors during bdrv_co_check.
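A hedged sketch of the open-time check (field and variable names are
illustrative, not the literal vhdx.c code):

  /* A payload block extending past end-of-file means the image was
   * truncated. */
  if (block_offset + s->block_size > image_file_size) {
      error_setg(errp, "payload block at offset %" PRIu64 " lies beyond "
                 "the end of the file", block_offset);
      return -EINVAL;
  }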
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20191011a' into staging
Migration pull 2019-10-11
Mostly cleanups and minor fixes
[Note I'm seeing a hang on the aarch64 hosted x86-64 tcg migration
test in xbzrle; but I'm seeing that on current head as well]
# gpg: Signature made Fri 11 Oct 2019 20:14:31 BST
# gpg: using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20191011a: (21 commits)
migration: Support gtree migration
migration/multifd: pages->used would be cleared when attach to multifd_send_state
migration/multifd: initialize packet->magic/version once at setup stage
migration/multifd: use pages->allocated instead of the static max
migration/multifd: fix a typo in comment of multifd_recv_unfill_packet()
migration/postcopy: check PostcopyState before setting to POSTCOPY_INCOMING_RUNNING
migration/postcopy: rename postcopy_ram_enable_notify to postcopy_ram_incoming_setup
migration/postcopy: postpone setting PostcopyState to END
migration/postcopy: mis->have_listen_thread check will never be touched
migration: report SaveStateEntry id and name on failure
migration: pass in_postcopy instead of check state again
migration/postcopy: fix typo in mark_postcopy_blocktime_begin's comment
migration/postcopy: map large zero page in postcopy_ram_incoming_setup()
migration/postcopy: allocate tmp_page in setup stage
migration: Don't try and recover return path in non-postcopy
rcu: Use automatic rc_read unlock in core memory/exec code
migration: Use automatic rcu_read unlock in rdma.c
migration: Use automatic rcu_read unlock in ram.c
migration: Fix missing rcu_read_unlock
rcu: Add automatically released rcu_read_lock variants
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/gkurz/tags/9p-next-2019-10-10' into staging
The most notable change is that we now detect cross-device setups in the
host since it may cause inode number collision and mayhem in the guest.
A new fsdev property is added for the user to choose the appropriate
policy to handle that: either remap all inode numbers or fail I/Os to
another host device or just print out a warning (default behaviour).
This is also my last PR as _active_ maintainer of 9pfs.
# gpg: Signature made Thu 10 Oct 2019 12:14:07 BST
# gpg: using RSA key B4828BAF943140CEF2A3491071D4D5E5822F73D6
# gpg: Good signature from "Greg Kurz <groug@kaod.org>" [full]
# gpg: aka "Gregory Kurz <gregory.kurz@free.fr>" [full]
# gpg: aka "[jpeg image of size 3330]" [full]
# Primary key fingerprint: B482 8BAF 9431 40CE F2A3 4910 71D4 D5E5 822F 73D6
* remotes/gkurz/tags/9p-next-2019-10-10:
MAINTAINERS: Downgrade status of virtio-9p to "Odd Fixes"
9p: Use variable length suffixes for inode remapping
9p: stat_to_qid: implement slow path
9p: Added virtfs option 'multidevs=remap|forbid|warn'
9p: Treat multiple devices on one export as an error
fsdev: Add return value to fsdev_throttle_parse_opts()
9p: Simplify error path of v9fs_device_realize_common()
9p: unsigned type for type, version, path
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/davidhildenbrand/tags/s390x-tcg-2019-10-10' into staging
- MMU DAT translation rewrite and cleanup
- Implement more TCG CPU features related to the MMU (e.g., IEP)
- Add the current instruction length to unwind data and clean up
- Resolve one TODO for the MVCL instruction
# gpg: Signature made Thu 10 Oct 2019 12:25:06 BST
# gpg: using RSA key 1BD9CAAD735C4C3A460DFCCA4DDE10F700FF835A
# gpg: issuer "david@redhat.com"
# gpg: Good signature from "David Hildenbrand <david@redhat.com>" [unknown]
# gpg: aka "David Hildenbrand <davidhildenbrand@gmail.com>" [full]
# Primary key fingerprint: 1BD9 CAAD 735C 4C3A 460D FCCA 4DDE 10F7 00FF 835A
* remotes/davidhildenbrand/tags/s390x-tcg-2019-10-10: (31 commits)
s390x/tcg: MVCL: Exit to main loop if requested
target/s390x: Remove ILEN_UNWIND
target/s390x: Remove ilen argument from trigger_pgm_exception
target/s390x: Remove ilen argument from trigger_access_exception
target/s390x: Remove ILEN_AUTO
target/s390x: Rely on unwinding in s390_cpu_virt_mem_rw
target/s390x: Rely on unwinding in s390_cpu_tlb_fill
target/s390x: Simplify helper_lra
target/s390x: Remove fail variable from s390_cpu_tlb_fill
target/s390x: Return exception from translate_pages
target/s390x: Return exception from mmu_translate
target/s390x: Remove exc argument to mmu_translate_asce
target/s390x: Return exception from mmu_translate_real
target/s390x: Handle tec in s390_cpu_tlb_fill
target/s390x: Push trigger_pgm_exception lower in s390_cpu_tlb_fill
target/s390x: Use tcg_s390_program_interrupt in TCG helpers
target/s390x: Remove ilen parameter from s390_program_interrupt
target/s390x: Remove ilen parameter from tcg_s390_program_interrupt
target/s390x: Add ilen to unwind data
s390x/cpumodel: Add new TCG features to QEMU cpu model
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
tests/test-bdrv-drain can hang in tests/iothread.c:iothread_run():

  while (!atomic_read(&iothread->stopping)) {
      aio_poll(iothread->ctx, true);
  }

The iothread_join() function works as follows:

  void iothread_join(IOThread *iothread)
  {
      iothread->stopping = true;
      aio_notify(iothread->ctx);
      qemu_thread_join(&iothread->thread);
  }
If iothread_run() checks iothread->stopping before the iothread_join()
thread sets stopping to true, then aio_notify() may be optimized away
and iothread_run() hangs forever in aio_poll().
The correct way to change iothread->stopping is from a BH that executes
within iothread_run(). This ensures that iothread->stopping is checked
after we set it to true.
This was already fixed for ./iothread.c (note this is a different
source file!) by commit 2362a28ea1 ("iothread: fix iothread_stop()
race condition"), but not for tests/iothread.c.
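A hedged sketch of the fix, mirroring the earlier ./iothread.c change:

  static void iothread_stop_bh(void *opaque)
  {
      IOThread *iothread = opaque;

      iothread->stopping = true;  /* runs inside iothread_run() */
  }

  void iothread_join(IOThread *iothread)
  {
      aio_bh_schedule_oneshot(iothread->ctx, iothread_stop_bh, iothread);
      qemu_thread_join(&iothread->thread);
  }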
Fixes: 0c330a734b ("aio: introduce aio_co_schedule and aio_co_wake")
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20191003100103.331-1-stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Introduce support for GTree migration. A custom save/restore is
implemented. Each item consists of a key and a data value.
If the key is a pointer to an object, two VMSDs are passed into
the GTree VMStateField.
When putting the items, the tree is traversed in sorted order by
g_tree_foreach.
On the get() path, GTrees must be allocated using the proper key
compare, key destroy and value destroy functions. This must be handled
beforehand, for example in a pre_load method.
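A hedged sketch of such a pre_load hook (the device type and compare
function are hypothetical):

  /* Hypothetical: recreate the tree with the right compare/destroy
   * functions before the incoming data is loaded into it. */
  static int mydev_pre_load(void *opaque)
  {
      MyDevState *s = opaque;

      s->mappings = g_tree_new_full(my_key_compare, NULL,
                                    g_free, g_free);
      return 0;
  }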
Tests are added to test save/dump of structs containing GTrees,
including the virtio-iommu domain/mappings scenario.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-Id: <20191011121724.433-1-eric.auger@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
uintptr_t fixup for the test on 32-bit hosts.
When we find an available channel in multifd_send_pages(), its
pages->used is cleared and the pages structure is then attached to
multifd_send_state. It is not necessary to do this twice.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191011085050.17622-5-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
MultiFDPacket_t's magic and version fields never change during
migration, so initialize these two fields once in the setup stage.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191011085050.17622-4-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
multifd_send_fill_packet() prepares metadata for the following pages
to transfer. It is more proper to use pages->allocated instead of the
static maximum value, especially as we want to support flexible packet
sizes.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191011085050.17622-3-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191011085050.17622-2-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Currently, we blindly set PostcopyState to RUNNING even if we found
that the previous state is not LISTENING. This leads to a corner case.
First let's look at the code flow:

  qemu_loadvm_state_main()
      ret = loadvm_process_command()
                loadvm_postcopy_handle_run()
                    return -1;
      if (ret < 0) {
          if (postcopy_state_get() == POSTCOPY_INCOMING_RUNNING)
              ...
      }
From the above snippet, the corner case is that
loadvm_postcopy_handle_run() always sets the state to RUNNING and only
then checks the previous state. If the previous state is not
LISTENING, it will return -1. But at this moment, PostcopyState has
already been set to RUNNING.
Then ret is checked in qemu_loadvm_state_main(); when it is -1,
PostcopyState is checked. The current logic would pause postcopy and
retry if PostcopyState is RUNNING. This is not what we expect, because
postcopy is not active yet.
This patch makes sure the state is set to RUNNING only if the previous
state is LISTENING, by checking the state first.
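A hedged sketch of the reordering (error reporting details are
illustrative):

  /* Check the previous state before moving to RUNNING. */
  PostcopyState ps = postcopy_state_get();

  if (ps != POSTCOPY_INCOMING_LISTENING) {
      return -1;  /* PostcopyState is left untouched */
  }
  postcopy_state_set(POSTCOPY_INCOMING_RUNNING);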
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Suggested-by: Peter Xu <peterx@redhat.com>
Message-Id: <20191010011316.31363-3-richardw.yang@linux.intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Functions postcopy_ram_incoming_setup and postcopy_ram_incoming_cleanup
are a pair. Rename to make this clear to the audience.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20191010011316.31363-2-richardw.yang@linux.intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>