qemu/tests/qemu-iotests/182.out
commit f389309d29
Author: Stefan Hajnoczi <stefanha@redhat.com>
Date:   2024-01-26 19:31:33 +03:00

monitor: only run coroutine commands in qemu_aio_context
monitor_qmp_dispatcher_co() runs in the iohandler AioContext, which is not
polled during nested event loops. The coroutine currently reschedules
itself into the main loop's qemu_aio_context AioContext, which is polled
during nested event loops. One known problem is that QMP device-add
calls drain_call_rcu(), which temporarily drops the BQL, leading to all
sorts of havoc, such as other vCPU threads re-entering device emulation
code while another vCPU thread is still waiting in device emulation code
with aio_poll(). The rescheduling hop is sketched below.
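
To make the hop concrete, here is a minimal sketch using QEMU's coroutine
and main-loop APIs (aio_co_schedule(), qemu_coroutine_yield(),
qemu_get_aio_context()); the function name is hypothetical and the real
dispatcher does more work around this step:

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "qemu/main-loop.h"

    /*
     * Hypothetical sketch, not the actual dispatcher code: the coroutine
     * hops from the iohandler AioContext into the main loop's
     * qemu_aio_context.
     */
    static void coroutine_fn dispatcher_hop_sketch(void)
    {
        /* Arrange for qemu_aio_context to resume this coroutine... */
        aio_co_schedule(qemu_get_aio_context(), qemu_coroutine_self());
        /*
         * ...and yield. The coroutine is re-entered in qemu_aio_context,
         * which nested event loops poll, so QMP handlers can now run in
         * the middle of another thread's aio_poll().
         */
        qemu_coroutine_yield();
    }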

Paolo Bonzini suggested running non-coroutine QMP handlers in the
iohandler AioContext. This avoids trouble with nested event loops. His
original idea was to move the coroutine rescheduling into
monitor_qmp_dispatch(), but I moved it into qmp_dispatch() instead,
because monitor_qmp_dispatch() does not know whether the QMP handler
needs to run in coroutine context. monitor_qmp_dispatch() would have
been the nicer place, since it is part of the monitor implementation and
not as general as qmp_dispatch(), which is also used by qemu-ga. The
resulting dispatch shape is sketched below.
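
A hedged sketch of the resulting shape of qmp_dispatch(): only coroutine
commands hop to qemu_aio_context around the handler call. The type and
function names (QmpCommand, QCO_COROUTINE, iohandler_get_aio_context())
are QEMU's; the control flow is illustrative, and the pre-existing
bottom-half path that moves non-coroutine handlers out of coroutine
context is omitted:

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "qemu/main-loop.h"
    #include "qapi/error.h"
    #include "qapi/qmp/dispatch.h"
    #include "qapi/qmp/qdict.h"

    static void coroutine_fn qmp_dispatch_sketch(const QmpCommand *cmd,
                                                 QDict *args, QObject **ret,
                                                 Error **errp)
    {
        bool coroutine_cmd = cmd->options & QCO_COROUTINE;

        if (coroutine_cmd && qemu_in_coroutine()) {
            /*
             * Run the coroutine handler in qemu_aio_context so it can
             * make progress inside AIO_WAIT_WHILE() nested event loops.
             */
            aio_co_schedule(qemu_get_aio_context(), qemu_coroutine_self());
            qemu_coroutine_yield();
        }

        cmd->fn(args, ret, errp);

        if (coroutine_cmd && qemu_in_coroutine()) {
            /*
             * Hop back so the rest of dispatch runs in the iohandler
             * AioContext, which nested event loops do not poll.
             */
            aio_co_schedule(iohandler_get_aio_context(),
                            qemu_coroutine_self());
            qemu_coroutine_yield();
        }
    }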

A number of qemu-iotests need updated .out files because the order of
QMP events vs. QMP responses has changed: in this test, for example, the
SHUTDOWN event now precedes the {"return": {}} response to the quit
command, as illustrated below.
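
For this test's quit command, the reordering looks like this (the old
ordering is inferred from the updated expectation; only the new one
appears in the output below):

    Old expectation (response before event):
        {'execute': 'quit'}
        {"return": {}}
        {"timestamp": ..., "event": "SHUTDOWN", ...}

    New expectation (event before response):
        {'execute': 'quit'}
        {"timestamp": ..., "event": "SHUTDOWN", ...}
        {"return": {}}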

Resolves: #1933

Cc: qemu-stable@nongnu.org
Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add")
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2215192
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2214985
Buglink: https://issues.redhat.com/browse/RHEL-17369
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240118144823.1497953-4-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit effd60c878)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

QA output created by 182
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=33554432
Starting QEMU
Starting a second QEMU using the same image should fail
QEMU_PROG: -drive file=TEST_DIR/t.qcow2,if=none,id=drive0,file.locking=on: Failed to get "write" lock
Is another process using the image [TEST_DIR/t.qcow2]?
=== Testing reopen ===
{'execute': 'qmp_capabilities'}
{"return": {}}
{'execute': 'blockdev-add',
'arguments': {
'node-name': 'node0',
'driver': 'file',
'filename': 'TEST_DIR/t.IMGFMT',
'locking': 'on'
} }
{"return": {}}
{'execute': 'blockdev-snapshot-sync',
'arguments': {
'node-name': 'node0',
'snapshot-file': 'TEST_DIR/t.IMGFMT.overlay',
'snapshot-node-name': 'node1'
} }
Formatting 'TEST_DIR/t.qcow2.overlay', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=197120 backing_file=TEST_DIR/t.qcow2 backing_fmt=file lazy_refcounts=off refcount_bits=16
{"return": {}}
{'execute': 'blockdev-add',
'arguments': {
'node-name': 'node1',
'driver': 'file',
'filename': 'TEST_DIR/t.IMGFMT',
'locking': 'on'
} }
{"return": {}}
{'execute': 'nbd-server-start',
'arguments': {
'addr': {
'type': 'unix',
'data': {
'path': 'SOCK_DIR/nbd.socket'
} } } }
{"return": {}}
{'execute': 'nbd-server-add',
'arguments': {
'device': 'node1'
} }
{"return": {}}
=== Testing failure to loosen restrictions ===
{'execute': 'qmp_capabilities'}
{"return": {}}
{'execute': 'quit'}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
{"return": {}}
*** done