QA output created by 184
== checking interface ==
Testing:
{
    QMP_VERSION
}
{
    "return": {
    }
}
{
    "return": {
    }
}
{
    "return": {
    }
}
{
    "return": {
    }
}
{
    "return": [
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {
                "backing-image": {
                    "virtual-size": 1073741824,
                    "filename": "null-co://",
                    "format": "null-co",
                    "actual-size": 0
                },
                "virtual-size": 1073741824,
                "filename": "json:{\"throttle-group\": \"group0\", \"driver\": \"throttle\", \"file\": {\"driver\": \"null-co\"}}",
                "format": "throttle",
                "actual-size": 0
            },
            "iops_wr": 0,
            "ro": false,
            "node-name": "throttle0",
            "backing_file_depth": 1,
            "drv": "throttle",
            "iops": 0,
            "bps_wr": 0,
            "write_threshold": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "cache": {
                "no-flush": false,
                "direct": false,
                "writeback": true
            },
            "file": "json:{\"throttle-group\": \"group0\", \"driver\": \"throttle\", \"file\": {\"driver\": \"null-co\"}}"
        },
        {
            "iops_rd": 0,
            "detect_zeroes": "off",
            "image": {
                "virtual-size": 1073741824,
                "filename": "null-co://",
                "format": "null-co",
                "actual-size": 0
            },
            "iops_wr": 0,
            "ro": false,
            "node-name": "disk0",
            "backing_file_depth": 0,
            "drv": "null-co",
            "iops": 0,
            "bps_wr": 0,
            "write_threshold": 0,
            "encrypted": false,
            "bps": 0,
            "bps_rd": 0,
            "cache": {
                "no-flush": false,
                "direct": false,
                "writeback": true
            },
            "file": "null-co://"
        }
    ]
}
{
    "return": [
    ]
}
{
    "timestamp": {
        "seconds": TIMESTAMP,
        "microseconds": TIMESTAMP
    },
    "event": "SHUTDOWN",
    "data": {
        "guest": false,
        "reason": "host-qmp-quit"
    }
}
{
    "return": {
    }
}
== property changes in ThrottleGroup ==
Testing:
{
    QMP_VERSION
}
{
    "return": {
    }
}
{
    "return": {
    }
}
{
    "return": {
        "bps-read-max-length": 1,
        "iops-read-max-length": 1,
        "bps-read-max": 0,
        "bps-total": 0,
        "iops-total-max-length": 1,
        "iops-total": 1000,
        "iops-write-max": 0,
        "bps-write": 0,
        "bps-total-max": 0,
        "bps-write-max": 0,
        "iops-size": 0,
        "iops-read": 0,
        "iops-write-max-length": 1,
        "iops-write": 0,
        "bps-total-max-length": 1,
        "iops-read-max": 0,
        "bps-read": 0,
        "bps-write-max-length": 1,
        "iops-total-max": 0
    }
}
{
    "return": {
    }
}
{
    "return": {
        "bps-read-max-length": 1,
        "iops-read-max-length": 1,
        "bps-read-max": 0,
        "bps-total": 0,
        "iops-total-max-length": 1,
        "iops-total": 0,
        "iops-write-max": 0,
        "bps-write": 0,
        "bps-total-max": 0,
        "bps-write-max": 0,
        "iops-size": 0,
        "iops-read": 0,
        "iops-write-max-length": 1,
        "iops-write": 0,
        "bps-total-max-length": 1,
        "iops-read-max": 0,
        "bps-read": 0,
        "bps-write-max-length": 1,
        "iops-total-max": 0
    }
}
{
    "timestamp": {
        "seconds": TIMESTAMP,
        "microseconds": TIMESTAMP
    },
    "event": "SHUTDOWN",
    "data": {
        "guest": false,
        "reason": "host-qmp-quit"
    }
}
{
    "return": {
    }
}
== object creation/set errors ==
Testing:
{
    QMP_VERSION
}
{
    "return": {
    }
}
{
    "return": {
    }
}
{
    "error": {
        "class": "GenericError",
        "desc": "Property cannot be set after initialization"
    }
}
{
    "error": {
        "class": "GenericError",
        "desc": "bps/iops/max total values and read/write values cannot be used at the same time"
    }
}
{
    "timestamp": {
        "seconds": TIMESTAMP,
        "microseconds": TIMESTAMP
    },
    "event": "SHUTDOWN",
    "data": {
        "guest": false,
        "reason": "host-qmp-quit"
    }
}
{
    "return": {
    }
}
== don't specify group ==
Testing:
{
    QMP_VERSION
}
{
    "return": {
    }
}
{
    "return": {
    }
}
{
    "error": {
        "class": "GenericError",
        "desc": "Parameter 'throttle-group' is missing"
    }
}
{
    "timestamp": {
        "seconds": TIMESTAMP,
        "microseconds": TIMESTAMP
    },
    "event": "SHUTDOWN",
    "data": {
        "guest": false,
        "reason": "host-qmp-quit"
    }
}
{
    "return": {
    }
}
*** done