Commit Graph

107791 Commits

Author SHA1 Message Date
Hawkins Jiawei
acec5f685c vdpa: Send cvq state load commands in parallel
This patch enables sending CVQ state load commands
in parallel at device startup through the following steps:

  * Refactor vhost_vdpa_net_load_cmd() to iterate through
the control commands shadow buffers. This allows different
CVQ state load commands to use their own unique buffers.

  * Delay the polling and checking of buffers until either
the SVQ is full or the control commands shadow buffers are
full (see the sketch below).

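As a rough, self-contained illustration of the resulting control flow (every name below is invented; this is not the QEMU vhost-vdpa code), commands are queued back to back and pending acks are polled and checked only when the queue runs out of slots:

    /* Sketch only: all of these names are invented. */
    #include <stddef.h>
    #include <stdio.h>

    #define SVQ_SLOTS 8                  /* pretend shadow virtqueue depth */

    static size_t in_flight;             /* commands queued but not yet acked */

    static size_t svq_available_slots(void)
    {
        return SVQ_SLOTS - in_flight;
    }

    static void cvq_enqueue(int cmd)
    {
        in_flight++;                     /* each command gets its own buffer */
        printf("queued command %d\n", cmd);
    }

    static int cvq_poll_pending(void)
    {
        printf("polled and checked %zu acks\n", in_flight);
        in_flight = 0;                   /* all pending buffers consumed */
        return 0;
    }

    int main(void)
    {
        for (int cmd = 0; cmd < 20; cmd++) {
            if (svq_available_slots() == 0 && cvq_poll_pending() < 0) {
                return 1;                /* flush only when the queue is full */
            }
            cvq_enqueue(cmd);
        }
        return cvq_poll_pending() < 0;   /* drain what is still in flight */
    }
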
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1578
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <9350f32278e39f7bce297b8f2d82dac27c6f8c9a.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:50 -04:00
Hawkins Jiawei
99d6a32469 vhost: Expose vhost_svq_available_slots()
Next patches in this series will delay the polling
and checking of buffers until either the SVQ is
full or the control commands shadow buffers are full,
no longer performing an immediate poll and check of
the device's used buffers for each CVQ state load command.

To achieve this, this patch exposes
vhost_svq_available_slots(), allowing QEMU to know
whether the SVQ is full.

Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <25938079f0bd8185fd664c64e205e629f7a966be.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:50 -04:00
Hawkins Jiawei
1d7e2a8fd4 vdpa: Introduce cursors to vhost_vdpa_net_loadx()
This patch introduces two new arguments, `out_cursor`
and `in_cursor`, to vhost_vdpa_net_loadx(). Additionally,
it includes a helper function,
vhost_vdpa_net_load_cursor_reset(), for resetting these
cursors.

Furthermore, this patch refactors vhost_vdpa_net_load_cmd()
so that it prepares buffers for the device using the
cursor arguments, instead of directly accessing the
`s->cvq_cmd_out_buffer` and `s->status` fields.

By making these changes, next patches in this series
can refactor vhost_vdpa_net_load_cmd() directly to
iterate through the control commands shadow buffers,
allowing QEMU to send CVQ state load commands in parallel
at device startup.

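A minimal sketch of the cursor idea, using only the standard struct iovec; the helper names below are invented and do not exist in QEMU:

    /* Sketch: helper names are invented; only struct iovec matches reality. */
    #include <stddef.h>
    #include <sys/uio.h>

    /* Point the cursor at a whole shadow-buffer region again. */
    void cursor_reset(struct iovec *cursor, void *buf, size_t len)
    {
        cursor->iov_base = buf;
        cursor->iov_len = len;
    }

    /* Carve the next command's buffer out of the region and advance the
     * cursor, so consecutive commands never share memory. */
    int cursor_carve(struct iovec *cursor, struct iovec *out, size_t len)
    {
        if (cursor->iov_len < len) {
            return -1;                   /* region exhausted */
        }
        out->iov_base = cursor->iov_base;
        out->iov_len = len;
        cursor->iov_base = (char *)cursor->iov_base + len;
        cursor->iov_len -= len;
        return 0;
    }
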
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <1c6516e233a14cc222f0884e148e4e1adceda78d.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:50 -04:00
Hawkins Jiawei
a864a3219d vdpa: Move vhost_svq_poll() to the caller of vhost_vdpa_net_cvq_add()
This patch moves vhost_svq_poll() to the caller of
vhost_vdpa_net_cvq_add() and introduces a helper function.

By making this change, next patches in this series are
able to refactor vhost_vdpa_net_load_x() to only delay
the polling and checking process until either the SVQ
is full or the control commands shadow buffers are full.

Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <196cadb55175a75275660c6634a538289f027ae3.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:50 -04:00
Hawkins Jiawei
24e59cfe0c vdpa: Check device ack in vhost_vdpa_net_load_rx_mode()
Considering that vhost_vdpa_net_load_rx_mode() is only called
within vhost_vdpa_net_load_rx() now, this patch refactors
vhost_vdpa_net_load_rx_mode() to include a check for the
device's ack, simplifying the code and improving its maintainability.

Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <68811d52f96ae12d68f0d67d996ac1642a623943.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:49 -04:00
Hawkins Jiawei
327dedb8df vdpa: Avoid using vhost_vdpa_net_load_*() outside vhost_vdpa_net_load()
Next patches in this series will refactor vhost_vdpa_net_load_cmd()
to iterate through the control commands shadow buffers, allowing QEMU
to send CVQ state load commands in parallel at device startup.

Considering that QEMU always forwards CVQ commands serially
outside of vhost_vdpa_net_load(), it is more elegant to send those
CVQ commands directly without invoking the vhost_vdpa_net_load_*() helpers.

Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <254f0618efde7af7229ba4fdada667bb9d318991.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:49 -04:00
Hawkins Jiawei
0e6bff0d43 vdpa: Use iovec for vhost_vdpa_net_cvq_add()
Next patches in this series will no longer perform an
immediate poll and check of the device's used buffers
for each CVQ state load command. Consequently, there
will be multiple pending buffers in the shadow VirtQueue,
making it necessary for every control command to have its
own buffer.

To achieve this, this patch refactors vhost_vdpa_net_cvq_add()
to accept `struct iovec`, which eliminates the coupling of
control commands to the `s->cvq_cmd_out_buffer` and `s->status`
fields, allowing them to use their own buffers.

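Conceptually, each command then carries its own pair of buffers; a hypothetical shape (not the actual QEMU structures):

    /* Hypothetical per-command descriptor: the out iovec points at the
     * command payload, the in iovec at the buffer the device writes its
     * ack into. */
    #include <sys/uio.h>

    struct cvq_command {
        struct iovec out;
        struct iovec in;
    };
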
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <8a328f146fb043f34edb75ba6d043d2d6de88f99.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2023-10-18 10:41:49 -04:00
Stefan Hajnoczi
ec6f9f135d Migration Pull request (20231017)

Merge tag 'migration-20231017-pull-request' of https://gitlab.com/juan.quintela/qemu into staging

Migration Pull request (20231017)

Hi

Same as yesterday's, except:
- rebased to latest (clean rebase)
- fixed 64-bit read on big-endian host

CI: https://gitlab.com/juan.quintela/qemu/-/pipelines/1039214198

Please, apply.

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEGJn/jt6/WMzuA0uC9IfvGFhy1yMFAmUuReUACgkQ9IfvGFhy
# 1yO+FQ/+Nx2botbrUVJb3vLeG6f+x5xeWJjB0boOqhk7227cKmAA33Oqwx5l4UtL
# oLOHA6P4ThqacpaluGOMMp44BSr/jOMDC/HUDVJtSplTD+droPiklIIGUfYScLbA
# oYx6lXfSB2jMpSuSU19STbjwBRvd4bjJix3zDGwEIgXYqYt0tY0FY/nnGTmImnM1
# KDjRerf1lg4Rt0vvwg7I0onIDvh3CKX26Sj5a3wSRaLoocUe3jpsuBNH7MMqroHs
# WpocBIsLiBAf/CbeLZsQlhbVeOi1R+kSAR5hDPvvJCPWHIrd2wf8+3NXjcFepb7d
# M4wE2jLjCvHhzwYwSc0ir4n74jwD22IirEPQs8ONHrjLCb5VoBKYV5bqsFUHF55N
# SbFvcZIzJFiOm2anEWiiqiNTLtYAdQCKtUvbyJ7Mq4ck6icIInLdX9zrm4voofYJ
# 02lX/IIGlT3C3dGSz09LBoJ6E82zmQWNHmov8A90+3RYvMF9uSpxi0z40lhj6jWC
# 6Q2AHxrJJ040ZboeOfJQG78BtvZ/9PQ2ORhJ3ceRDND4kSTDtfe/TSNAZ3thM33y
# Sv99o+F/HaqrKnxK8eTJrvIEWxojDu3lnqJERWAm2AOxTnQ+6mgGtsCfLEdrv5D1
# xVsY2QczB1quRjaU2ml/7Cxe4Q1urTtfl82IEXGded6UL+cmF/I=
# =br93
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 17 Oct 2023 04:29:25 EDT
# gpg:                using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
# gpg:                 aka "Juan Quintela <quintela@trasno.org>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03  4B82 F487 EF18 5872 D723

* tag 'migration-20231017-pull-request' of https://gitlab.com/juan.quintela/qemu: (38 commits)
  migration/multifd: Clarify Error usage in multifd_channel_connect
  migration/multifd: Unify multifd_send_thread error paths
  migration/multifd: Remove direct "socket" references
  migration/ram: Merge save_zero_page functions
  migration/ram: Move xbzrle zero page handling into save_zero_page
  migration/ram: Stop passing QEMUFile around in save_zero_page
  migration/ram: Remove RAMState from xbzrle_cache_zero_page
  migration/ram: Refactor precopy ram loading code
  multifd: reset next_packet_len after sending pages
  multifd: fix counters in multifd_send_thread
  migration: check for rate_limit_max for RATE_LIMIT_DISABLED
  migration: Improve json and formatting
  migration/rdma: Remove all "ret" variables that are used only once
  migration/rdma: Declare for index variables local
  migration/rdma: Use i as for index instead of idx
  migration/rdma: Check sooner if we are in postcopy for save_page()
  migration/rdma: Remove qemu_ prefix from exported functions
  migration/rdma: Move rdma constants from qemu-file.h to rdma.h
  qemu-file: Remove QEMUFileHooks
  migration/rdma: Create rdma_control_save_page()
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-17 10:06:21 -04:00
Stefan Hajnoczi
0193b3bc05 virtio-gpu rutabaga support

Merge tag 'gpu-pull-request' of https://gitlab.com/marcandre.lureau/qemu into staging

virtio-gpu rutabaga support

# -----BEGIN PGP SIGNATURE-----
#
# iQJQBAABCAA6FiEEh6m9kz+HxgbSdvYt2ujhCXWWnOUFAmUtP5YcHG1hcmNhbmRy
# ZS5sdXJlYXVAcmVkaGF0LmNvbQAKCRDa6OEJdZac5X9CD/4s1n/GZyDr9bh04V03
# otAqtq2CSyuUOviqBrqxYgraCosUD1AuX8WkDy5cCPtnKC4FxRjgVlm9s7K/yxOW
# xZ78e4oVgB1F3voOq6LgtKK6BRG/BPqNzq9kuGcayCHQbSxg7zZVwa702Y18r2ZD
# pjOhbZCrJTSfASL7C3e/rm7798Wk/hzSrClGR56fbRAVgQ6Lww2L97/g0nHyDsWK
# DrCBrdqFtKjpLeUHmcqqS4AwdpG2SyCgqE7RehH/wOhvGTxh/JQvHbLGWK2mDC3j
# Qvs8mClC5bUlyNQuUz7lZtXYpzCW6VGMWlz8bIu+ncgSt6RK1TRbdEfDJPGoS4w9
# ZCGgcTxTG/6BEO76J/VpydfTWDo1FwQCQ0Vv7EussGoRTLrFC3ZRFgDWpqCw85yi
# AjPtc0C49FHBZhK0l1CoJGV4gGTDtD9jTYN0ffsd+aQesOjcsgivAWBaCOOQWUc8
# KOv9sr4kLLxcnuCnP7p/PuVRQD4eg0TmpdS8bXfnCzLSH8fCm+n76LuJEpGxEBey
# 3KPJPj/1BNBgVgew+znSLD/EYM6YhdK2gF5SNrYsdR6UcFdrPED/xmdhzFBeVym/
# xbBWqicDw4HLn5YrJ4tzqXje5XUz5pmJoT5zrRMXTHiu4pjBkEXO/lOdAoFwSy8M
# WNOtmSyB69uCrbyLw6xE2/YX8Q==
# =5a/Z
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 16 Oct 2023 09:50:14 EDT
# gpg:                using RSA key 87A9BD933F87C606D276F62DDAE8E10975969CE5
# gpg:                issuer "marcandre.lureau@redhat.com"
# gpg: Good signature from "Marc-André Lureau <marcandre.lureau@redhat.com>" [full]
# gpg:                 aka "Marc-André Lureau <marcandre.lureau@gmail.com>" [full]
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276  F62D DAE8 E109 7596 9CE5

* tag 'gpu-pull-request' of https://gitlab.com/marcandre.lureau/qemu:
  docs/system: add basic virtio-gpu documentation
  gfxstream + rutabaga: enable rutabaga
  gfxstream + rutabaga: meson support
  gfxstream + rutabaga: add initial support for gfxstream
  gfxstream + rutabaga prep: added need defintions, fields, and options
  virtio-gpu: blob prep
  virtio-gpu: hostmem
  virtio-gpu: CONTEXT_INIT feature
  virtio: Add shared memory capability

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-17 10:05:51 -04:00
Fabiano Rosas
967e388987 migration/multifd: Clarify Error usage in multifd_channel_connect
The function is currently called from two sites: one always gives it a
NULL Error and the other always gives it a non-NULL Error.

In the non-NULL case, all it does is trace the error and return. One
of the callers already has tracing; add a tracepoint to the other and
stop passing the error into the function.

Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231012134343.23757-4-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Fabiano Rosas
ee8a7c9c46 migration/multifd: Unify multifd_send_thread error paths
The preferred usage of the Error type is to always set both the return
code and the error when a failure happens. As all code called from the
send thread follows this pattern, we'll always have the return code
and the error set at the same time.

Aside from the convention, in this piece of code this must be the
case; otherwise the if (ret != 0) branch would exit the thread without
calling multifd_send_terminate_threads(), which is incorrect.

Unify both paths to make it clear that both are taken when there's an
error.
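
A minimal sketch of that convention (send_one() is a made-up stand-in; error_setg() is QEMU's real error helper, so this only builds inside the QEMU tree): on failure the callee sets both a negative return value and *errp, which is what allows the two exit paths to be unified.

    /* Hypothetical callee following the convention: on failure it sets
     * BOTH a negative return value and *errp. */
    #include "qapi/error.h"

    static int send_one(int fd, Error **errp)
    {
        if (fd < 0) {
            error_setg(errp, "invalid channel fd %d", fd);
            return -1;
        }
        return 0;
    }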

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231012134343.23757-3-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Fabiano Rosas
0e92f64448 migration/multifd: Remove direct "socket" references
We're about to enable support for other transports in multifd, so
remove direct references to sockets.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231012134343.23757-2-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Fabiano Rosas
8697eb8577 migration/ram: Merge save_zero_page functions
We don't need to do this in two pieces. One single function makes it
easier to grasp, especially since it removes the indirection in the
return value handling.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-7-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Fabiano Rosas
ccc09db87c migration/ram: Move xbzrle zero page handling into save_zero_page
It makes a bit more sense to have the zero page handling of xbzrle
right where we save the zero page.

Also invert the exit condition to remove one level of indentation,
which makes the next patch easier to grasp.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-6-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Fabiano Rosas
1e43f165d0 migration/ram: Stop passing QEMUFile around in save_zero_page
We don't need the QEMUFile when we're already passing the
PageSearchStatus.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-5-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Fabiano Rosas
8f47d4ee43 migration/ram: Remove RAMState from xbzrle_cache_zero_page
'rs' is not used in that function. It's a leftover from commit
9360447d34 ("ram: Use MigrationStats for statistics").

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-4-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Nikolay Borisov
2f5ced5b93 migration/ram: Refactor precopy ram loading code
Extract the ramblock parsing code into a routine that operates on the
sequence of headers from the stream and another that parses the
individual ramblock. This makes ram_load_precopy() easier to
comprehend.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-3-farosas@suse.de>
2023-10-17 09:25:14 +02:00
Elena Ufimtseva
1618f55221 multifd: reset next_packet_len after sending pages
Sometimes multifd sends just a sync packet with no pages
(normal_num is 0). In this case the old next_packet_len value is
preserved and accounted for, while only packet_len is actually
transferred.
Reset it to 0 after it has been sent and accounted for.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184358.97349-5-elena.ufimtseva@oracle.com>
2023-10-17 09:25:13 +02:00
Elena Ufimtseva
68b6e00048 multifd: fix counters in multifd_send_thread
Previous commit cbec7eb768
"migration/multifd: Compute transferred bytes correctly"
removed accounting for packet_len in the non-RDMA
case, but next_packet_size only accounts for the pages
(normal_pages * PAGE_SIZE), not for the packet header that is sent
as iov[0]. The packet_len part should be added back to account for
the size of MultiFDPacket and the array of offsets.

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184358.97349-4-elena.ufimtseva@oracle.com>
2023-10-17 09:25:13 +02:00
Elena Ufimtseva
60c7981aa3 migration: check for rate_limit_max for RATE_LIMIT_DISABLED
In migration rate limiting, atomic operations are used
to read the rate limit variables and transferred bytes, and
they are expensive. Check first if rate_limit_max is equal
to RATE_LIMIT_DISABLED and return false immediately if so.

Note that with this patch we will also stop flushing,
by not calling qemu_fflush() from migration_transferred_bytes(),
when the migration rate is not exceeded.
This should be fine since the migration thread calls
migration_update_counters() in its loop via migration_rate_limit(),
which calls migration_transferred_bytes() and flushes there.

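A generic sketch of the cheap-check-first idea (the names and the sentinel value below are placeholders, not QEMU's actual definitions):

    /* Generic sketch; names and the sentinel value are placeholders. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define RATE_LIMIT_DISABLED 0        /* placeholder sentinel */

    static _Atomic uint64_t rate_limit_max;
    static _Atomic uint64_t rate_limit_used;

    bool rate_limit_exceeded(void)
    {
        uint64_t max = atomic_load(&rate_limit_max);

        if (max == RATE_LIMIT_DISABLED) {
            return false;                /* skip the expensive accounting */
        }
        return atomic_load(&rate_limit_used) > max;
    }
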
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184358.97349-2-elena.ufimtseva@oracle.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
e4ceec292f migration: Improve json and formatting
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231013104736.31722-2-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
8f5a7faa4e migration/rdma: Remove all "ret" variables that are used only once
Change code that is:

int ret;
...

ret = foo();
if (ret[ < 0]?) {

to:

if (foo()[ < 0]) {
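
Spelled out with the commit's own placeholder foo() (illustrative only; do_work_before() and do_work_after() are made-up names), the transformation looks like this:

    int foo(void) { return 0; }          /* stand-in for the real call */

    /* Before: 'ret' exists only to be tested once. */
    int do_work_before(void)
    {
        int ret;

        ret = foo();
        if (ret < 0) {
            return -1;
        }
        return 0;
    }

    /* After: the single use is folded into the condition. */
    int do_work_after(void)
    {
        if (foo() < 0) {
            return -1;
        }
        return 0;
    }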

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-14-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
14e2fcbbf8 migration/rdma: Declare for index variables local
Declare all variables that are only used inside a for loop inside the
for statement.

This makes clear that they are not used outside of the for loop.
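
A minimal before/after sketch (count_before() and count_after() are made-up names; the loop body is elided):

    /* Before: the index leaks out of the loop's scope. */
    void count_before(int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            /* ... per-iteration work ... */
        }
        /* i is still visible here although nothing needs it */
    }

    /* After: the index lives only inside the loop. */
    void count_after(int n)
    {
        for (int i = 0; i < n; i++) {
            /* ... per-iteration work ... */
        }
    }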

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-13-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
ebdb85f9d1 migration/rdma: Use i as for index instead of idx
Once there, all the uses are local to the for loop, so declare the variable
inside the for statement.

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-12-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
a4832d299d migration/rdma: Check sooner if we are in postcopy for save_page()
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-11-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
b1b3838722 migration/rdma: Remove qemu_ prefix from exported functions
Functions are long enough even without this.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-10-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
10cb3336b1 migration/rdma: Move rdma constants from qemu-file.h to rdma.h
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-9-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
8b670f48ed qemu-file: Remove QEMUFileHooks
The only user was rdma, and its use is gone.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-8-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
e493008d50 migration/rdma: Create rdma_control_save_page()
The only user of ram_control_save_page() and the save_page() hook was
rdma. Just move the function to rdma.c and rename it to
rdma_control_save_page().

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-7-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
a6323300e8 migration/rdma: Unfold hook_ram_load()
It is only ever called with one flag: RAM_CONTROL_BLOCK_REG.

Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-6-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
f6d6c089b7 migration/rdma: Remove all uses of RAM_CONTROL_HOOK
Instead of going through ram_control_load_hook(), call
qemu_rdma_registration_handle() directly.

Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-5-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
5f5b8858dc migration/rdma: Unfold ram_control_after_iterate()
Once there:
- Remove unused data parameter
- Unfold it in its callers
- Change all callers to call qemu_rdma_registration_stop()
- Call QIO_CHANNEL_RDMA() after we check for migrate_rdma()

Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-4-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
48408174a7 migration/rdma: Unfold ram_control_before_iterate()
Once there:
- Remove unused data parameter
- Unfold it in its callers
- Change all callers to call qemu_rdma_registration_start()
- Call QIO_CHANNEL_RDMA() after we check for migrate_rdma()

Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-3-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
27fd25b0fb migration: Create migrate_rdma()
Helper to say if we are doing a migration over rdma.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-2-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Juan Quintela
d4f34485ca migration: Non multifd migration don't care about multifd flushes
RDMA was having trouble because
migrate_multifd_flush_after_each_section() can only be true or false,
but we don't want to send any flush when we are not in multifd
migration.

CC: Fabiano Rosas <farosas@suse.de>
Fixes: 294e5a4034 ("multifd: Only flush once each full round of memory")

Reported-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011205548.10571-2-quintela@redhat.com>
2023-10-17 09:25:13 +02:00
Fiona Ebner
930e239d11 migration: hold the BQL during setup
This is intended to be a semantic revert of commit 9b09503752
("migration: run setup callbacks out of big lock"). There have been so
many changes since that commit (e.g. a new setup callback
dirty_bitmap_save_setup() that also needs to be adapted now), it's
easier to do the revert manually.

For snapshots, the bdrv_writev_vmstate() function is used during setup
(in QIOChannelBlock backing the QEMUFile), but not holding the BQL
while calling it could lead to an assertion failure. To understand
how, first note the following:

1. Generated coroutine wrappers for block layer functions spawn the
coroutine and use AIO_WAIT_WHILE()/aio_poll() to wait for it.
2. If the host OS switches threads at an inconvenient time, it can
happen that a bottom half scheduled for the main thread's AioContext
is executed as part of a vCPU thread's aio_poll().

An example leading to the assertion failure is as follows:

main thread:
1. A snapshot-save QMP command gets issued.
2. snapshot_save_job_bh() is scheduled.

vCPU thread:
3. aio_poll() for the main thread's AioContext is called (e.g. when
the guest writes to a pflash device, as part of blk_pwrite which is a
generated coroutine wrapper).
4. snapshot_save_job_bh() is executed as part of aio_poll().
5. qemu_savevm_state() is called.
6. qemu_mutex_unlock_iothread() is called. Now
qemu_get_current_aio_context() returns 0x0.
7. bdrv_writev_vmstate() is executed during the usual savevm setup
via qemu_fflush(). But this function is a generated coroutine wrapper,
so it uses AIO_WAIT_WHILE. There, the assertion
assert(qemu_get_current_aio_context() == qemu_get_aio_context());
will fail.

To fix it, ensure that the BQL is held during setup. While it would
only be needed for snapshots, adapting migration too avoids additional
logic for conditional locking/unlocking in the setup callbacks.
Writing the header could (in theory) also trigger qemu_fflush() and
thus bdrv_writev_vmstate(), so the locked section also covers the
qemu_savevm_state_header() call, even for migration, for consistency.

The section around multifd_send_sync_main() needs to be unlocked to
avoid a deadlock. In particular, the multifd_save_setup() function calls
socket_send_channel_create() using multifd_new_send_channel_async() as a
callback and then waits for the callback to signal via the
channels_ready semaphore. The connection happens via
qio_task_run_in_thread(), but the callback is only executed via
qio_task_thread_result(), which is scheduled on the main event loop.
Without unlocking the section, the main thread would never get to
process the task result and run the callback, meaning there would be no
signal via the channels_ready semaphore.

The comment in ram_init_bitmaps() was introduced by 4987783400
("migration: fix incorrect memory_global_dirty_log_start outside BQL")
and is removed, because it referred to the qemu_mutex_lock_iothread()
call.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231013105839.415989-1-f.ebner@proxmox.com>
2023-10-17 09:25:13 +02:00
Fabiano Rosas
3dc35470c8 tests/qtest: migration-test: Add tests for file-based migration
Add basic tests for file-based migration.

Note that we cannot use test_precopy_common because that routine
expects it to be possible to run the migration live. With the file
transport there is no live migration because we must wait for the
source to finish writing the migration data to the file before the
destination can start reading. Add a new migration function
specifically to handle the file migration.

Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230712190742.22294-7-farosas@suse.de>
2023-10-17 09:25:06 +02:00
Fabiano Rosas
d864756e87 tests/qtest/migration: Add a test for the analyze-migration script
Add a smoke test that migrates to a file and gives it to the
script. It should catch the most annoying errors such as changes in
the ram flags.

After code has been merged it becomes much harder to figure out what is
causing the script to fail; the person making the change is the most
likely to know right away what the problem is.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-7-farosas@suse.de>
2023-10-17 09:14:32 +02:00
Fabiano Rosas
caea03279e migration: Fix analyze-migration read operation signedness
The migration code uses unsigned values for 16, 32 and 64-bit
operations. Fix the script to do the same.

This was causing an issue when parsing the migration stream generated
on the ppc64 target because one of the instance_ids was larger than the
32-bit signed maximum:

Traceback (most recent call last):
  File "/home/fabiano/kvm/qemu/build/scripts/analyze-migration.py", line 658, in <module>
    dump.read(dump_memory = args.memory)
  File "/home/fabiano/kvm/qemu/build/scripts/analyze-migration.py", line 592, in read
    classdesc = self.section_classes[section_key]
KeyError: ('spapr_iommu', -2147483648)
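
The script itself is Python, but the underlying pitfall can be shown in a few lines of C (a standalone sketch; the byte values are illustrative): the same 32-bit big-endian field reads back negative once interpreted as signed.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 32-bit field as it appears on the wire (big-endian). */
        uint8_t wire[4] = { 0x80, 0x00, 0x00, 0x00 };
        uint32_t value = ((uint32_t)wire[0] << 24) | ((uint32_t)wire[1] << 16) |
                         ((uint32_t)wire[2] << 8)  |  (uint32_t)wire[3];

        /* On two's-complement targets the signed read comes back negative. */
        printf("unsigned: %" PRIu32 "\n", value);          /* 2147483648 */
        printf("signed:   %" PRId32 "\n", (int32_t)value); /* -2147483648 */
        return 0;
    }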

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-6-farosas@suse.de>
2023-10-17 09:14:32 +02:00
Fabiano Rosas
ff40c7f0b7 migration: Fix analyze-migration.py when ignore-shared is used
The script is currently broken when the x-ignore-shared capability is
used:

Traceback (most recent call last):
  File "./scripts/analyze-migration.py", line 656, in <module>
    dump.read(dump_memory = args.memory)
  File "./scripts/analyze-migration.py", line 593, in read
    section.read()
  File "./scripts/analyze-migration.py", line 163, in read
    self.name = self.file.readstr(len = namelen)
  File "./scripts/analyze-migration.py", line 53, in readstr
    return self.readvar(len).decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 55: invalid start byte

We're currently adding data to the middle of the ram section depending
on the presence of the capability. As a consequence, any code loading
the ram section needs to know about capabilities so it can interpret
the stream.

Skip the byte that's added when x-ignore-shared is used to fix the
script.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-5-farosas@suse.de>
2023-10-17 09:14:32 +02:00
Fabiano Rosas
31499a9dc1 migration: Add capability parsing to analyze-migration.py
The script is broken when the configuration/capabilities section is
present. Add support for parsing the capabilities so we can fix it in
the next patch.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-4-farosas@suse.de>
2023-10-17 09:14:32 +02:00
Fabiano Rosas
c36c31c86b migration: Fix analyze-migration.py 'configuration' parsing
The 'configuration' state subsections are currently not being parsed
and the script fails when analyzing an aarch64 stream:

Traceback (most recent call last):
  File "./scripts/analyze-migration.py", line 625, in <module>
    dump.read(dump_memory = args.memory)
  File "./scripts/analyze-migration.py", line 571, in read
    raise Exception("Unknown section type: %d" % section_type)
Exception: Unknown section type: 5

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-3-farosas@suse.de>
2023-10-17 09:14:32 +02:00
Nikolay Borisov
2aae1eb8da migration: Add the configuration vmstate to the json writer
Make the migration json writer part of the MigrationState struct, allowing
the 'configuration' object to be serialized to json.

This will facilitate the parsing of the 'configuration' object in the
next patch that fixes analyze-migration.py for arm.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-2-farosas@suse.de>
2023-10-17 09:14:32 +02:00
Dmitry Frolov
f75ed59f40 migration: fix RAMBlock add NULL check
qemu_ram_block_from_host() may return NULL, which would then be
dereferenced without a check. The return value of this function is
usually checked.
Found by Linux Verification Center (linuxtesting.org) with SVACE.

Signed-off-by: Dmitry Frolov <frolov@swemel.ru>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231010104851.802947-1-frolov@swemel.ru>
2023-10-17 09:14:32 +02:00
Peter Xu
8b2395970a migration: Allow user to specify available switchover bandwidth
Migration bandwidth is a very important value to live migration.  It's
because it's one of the major factors we use to decide when to switch
over to the destination in a precopy process.

This value is currently estimated by QEMU during the whole live migration
process by monitoring how fast we were sending the data.  This would be the
most accurate bandwidth in an ideal world, where we're always feeding
unlimited data to the migration channel and the send rate is then only
limited by the bandwidth that is available.

However in reality it may be very different, e.g., over a 10Gbps network we
can see query-migrate showing migration bandwidth of only a few tens of
MB/s just because there are plenty of other things the migration thread
might be doing.  For example, the migration thread can be busy scanning
zero pages, or it can be fetching dirty bitmap from other external dirty
sources (like vhost or KVM).  It means we may not be pushing data as much
as possible to the migration channel, so the bandwidth estimated from "how
much data we sent on the channel" can be dramatically inaccurate sometimes.

With that, the decision to switch over will be affected, by assuming that
we may not be able to switch over at all with such a low bandwidth, when in
reality we can.

With that wrong estimation of bandwidth, the migration may not even
converge at all with the specified downtime, iterating forever instead.

The issue is that QEMU itself may not be able to avoid those uncertainties
when measuring the real "available migration bandwidth".  At least not in
any way I can think of so far.

One way to fix this is that, when the user is fully aware of the available
bandwidth, we can allow the user to help by providing an accurate value.

For example, if the user has a dedicated channel of 10Gbps for migration
for this specific VM, the user can specify this bandwidth so QEMU can
always do the calculation based on this fact, trusting the user as long as
specified.  It may not be the exact bandwidth when switching over (in which
case qemu will push migration data as fast as possible), but it is much
better than QEMU trying to wildly guess, especially when that guess is very
wrong.

A new parameter "avail-switchover-bandwidth" is introduced just for this.
So when the user specifies this parameter, instead of trusting the
estimated value from QEMU itself (based on the QEMUFile send speed), QEMU
trusts the user more by using this value to decide when to switch over,
assuming that we'll have such bandwidth available then.

Note that specifying this value will not throttle the bandwidth for
switchover yet, so QEMU will always use the full bandwidth possible for
sending switchover data, assuming that should always be the most important
way to use the network at that time.

This can resolve issues like a migration that never converges, which is
caused by a ridiculously low "migration bandwidth" being detected for
whatever reason.

Reported-by: Zhiyi Guo <zhguo@redhat.com>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231010221922.40638-1-peterx@redhat.com>
2023-10-17 09:14:32 +02:00
Philippe Mathieu-Daudé
1a36e4c9d0 migration: Use g_autofree to simplify ram_dirty_bitmap_reload()
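For reference, a small sketch of what g_autofree buys (load_bitmap() is a made-up function; g_autofree and g_malloc0 are the real GLib facilities): the buffer is released automatically on every return path, so early exits need no explicit g_free().

    #include <glib.h>

    /* load_bitmap() is made up; g_autofree frees 'buf' on every return. */
    int load_bitmap(gsize size)
    {
        g_autofree guint8 *buf = g_malloc0(size);

        if (size == 0) {
            return -1;      /* no explicit g_free() needed on this path */
        }
        /* ... fill and consume buf ... */
        return 0;           /* ... nor on this one */
    }
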
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011023627.86691-1-philmd@linaro.org>
2023-10-17 09:14:32 +02:00
Wei Wang
d7f5a04320 migration: refactor migration_completion
The current migration_completion() function is a bit long. Refactor the
long implementation into different subfunctions:
- migration_completion_precopy: completion code related to precopy
- migration_completion_postcopy: completion code related to postcopy

Rename await_return_path_close_on_source to
close_return_path_on_source, so that it matches
open_return_path_on_source.

This improves readability and is easier for future updates (e.g. add new
subfunctions when completion code related to new features are needed). No
functional changes intended.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20230804093053.5037-1-wei.w.wang@intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2023-10-17 09:14:32 +02:00
Stefan Hajnoczi
800485762e Python Pullreq

Merge tag 'python-pull-request' of https://gitlab.com/jsnow/qemu into staging

Python Pullreq

Python PR:

- Use socketpair for all machine.py connections
- Support Python 3.12
- Switch iotests over to using raise-on-error QMP command interface
  (Thank you very much, Vladimir!)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+ber27ys35W+dsvQfe+BBqr8OQ4FAmUpldkACgkQfe+BBqr8
# OQ4NtRAAnkEmXsECAxQ2ewvf3yK8PTFm4Oq5nqMIw+KB94ATrsGzk3z1rLvatSl3
# 6VLsV2+FWoOEyKrsfu5DIfbuo4d3TZTU7N2DIZpVpvO166K+fXbzp8skAg+n3BMC
# tWkSOcnsT6+8aqyxxyASdHvbbE7pvPw8OA3oIIstsYeZ5/HHpOWXNj1kjCsnL0lW
# 7y5h6UUKGmnCPdixyk042+AvKkT7GAKVjFnjUF5JHv0iR2KpQ+O9H7OEalqQT5w5
# eab4oMGuIYhzYe+MNpyybAB3Xd2pxhcppk+sl4dCE8qmMn7KRoTNw1iu+qhsNQfQ
# JILZoCPtYMhpef4X0ulH8PFBMweBptqOjo4lpz9QIdMWTf86IE0yIT9DCy3aSjpp
# ywwxhFKJS43gz4WHkEJlrY9PHwLsULaV/Cz6HKJAU6h9aFtcNdT4pkCOERnZ8X4C
# yHlNReTG5Dz1sYzKJ/k9LTjAaVDasumR8/yadaUCwalj5zexQ27qlIM6oc5wdIRQ
# up1VHi7odF5KHb6GeqdniuuEF6NBCYRAV5nz+dbd6exfKOaxYRrr48yh9SUm8QS6
# JCvMMFFAZCIrI/nkRVajbLi9L5O3fg5abtlzSzh9o4iyf8Rf/1gtKNxZRK1NZIjQ
# cTYBJXpMulNx7bM2CPNsPWGqCTAjAcu10svqTA8luGj4fqdTNyU=
# =02Bd
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 13 Oct 2023 15:09:13 EDT
# gpg:                using RSA key F9B7ABDBBCACDF95BE76CBD07DEF8106AAFC390E
# gpg: Good signature from "John Snow (John Huston) <jsnow@redhat.com>" [full]
# Primary key fingerprint: FAEB 9711 A12C F475 812F  18F2 88A9 064D 1835 61EB
#      Subkey fingerprint: F9B7 ABDB BCAC DF95 BE76  CBD0 7DEF 8106 AAFC 390E

* tag 'python-pull-request' of https://gitlab.com/jsnow/qemu: (25 commits)
  python: use vm.cmd() instead of vm.qmp() where appropriate
  scripts: add python_qmp_updater.py
  tests/vm/basevm.py: use cmd() instead of qmp()
  iotests.py: pause_job(): drop return value
  iotests: drop some extra ** in qmp() call
  iotests: drop some extra semicolons
  iotests: refactor some common qmp result checks into generic pattern
  iotests: add some missed checks of qmp result
  iotests: QemuStorageDaemon: add cmd() method like in QEMUMachine.
  python/machine.py: upgrade vm.cmd() method
  python/qemu: rename command() to cmd()
  python: rename QEMUMonitorProtocol.cmd() to cmd_raw()
  scripts/cpu-x86-uarch-abi.py: use .command() instead of .cmd()
  qmp_shell.py: _fill_completion() use .command() instead of .cmd()
  python/qemu/qmp/legacy: cmd(): drop cmd_id unused argument
  Python: Enable python3.12 support
  configure: fix error message to say Python 3.8
  python/qmp: remove Server.wait_closed() call for Python 3.12
  Python/iotests: Add type hint for nbd module
  python/machine: remove unused sock_dir argument
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-16 12:37:48 -04:00
Stefan Hajnoczi
9390f0fd3e pull-loongarch-20231013

Merge tag 'pull-loongarch-20231013' of https://gitlab.com/gaosong/qemu into staging

pull-loongarch-20231013

# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEKAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCZSimNQAKCRBAov/yOSY+
# 33XwBADF9ZKlESDBDa/huNFAKD7BsUIdglHfz9lHnLY+kQbCun4HyTLtp2IBsySu
# mZTjdfU/LnaBidFLjEnmZZMPyiI3oV1ruSzT53egSDaxrFUXGpc9oxtMNLsyfk9P
# swdngG13Fc9sWVKC7IJeYDYXgkvHY7NxsiV8U9vdqXOyw2uoHA==
# =ufPc
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 12 Oct 2023 22:06:45 EDT
# gpg:                using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C  6C2C 40A2 FFF2 3926 3EDF

* tag 'pull-loongarch-20231013' of https://gitlab.com/gaosong/qemu:
  LoongArch: step down as general arch maintainer
  hw/loongarch/virt: Remove unused 'loongarch_virt_pm' region
  hw/loongarch/virt: Remove unused ISA Bus
  hw/loongarch/virt: Remove unused ISA UART
  hw/loongarch: remove global loaderparams variable
  target/loongarch: Add preldx instruction
  target/loongarch: fix ASXE flag conflict

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-16 12:37:35 -04:00
Stefan Hajnoczi
2778f754e6 hw/ufs: fixes

Merge tag 'pull-ufs-20231013' of https://gitlab.com/jeuk20.kim/qemu into staging

hw/ufs: fixes

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCgAdFiEEUBfYMVl8eKPZB+73EuIgTA5dtgIFAmUozswACgkQEuIgTA5d
# tgITExAAo0BSNir4I5MfeNIjZTNNdxLXDl0+92JyairB2m/gWH/02jGtrJBYp5On
# ELnixKj2Ntn9IIRr3NwQHNTnDOZHRkUBH+pRVeMbZ+IWLjEoWQdl03ge7e9sHai3
# CLXB4HPSnXddy1SmS9FEkdBWopqxKF4BLZnpAfwh/dj2fzSyDyNIMmGoRimRQhph
# 9A90304ERUdpREAXncTgSdXeDZz+lScadzUJZrPPiG2ZHXL+qzDCX7ojEnNaUFxz
# W1IfriI8oeeORfCQaNEOncLKhSwE1WscGxP0vILPApKOu251tObgSbK90QlQR2qT
# BMl7k4BDfYeksXMGc0BXVFrOfv1ud86NlCE2OokK6HBZVuHio4C6TU/t65MC4Rw5
# mJ8CPgbN+7sgVmAGo0sLYzI6GiRR27VqqLh6KXVAa5c/fAdt5pHSkakwSvxiXsAl
# EqskmOY2em5O//+7CWN1CtY+I2pHyltMXAi3Cb2vjweNx88kuhmxFQWeZVI10/H3
# gNrNfu32+ihDLMqR7uQamdAZV0lnIwp97nCbf3LzpM0btjl70QvGZhsbiCDiLQrG
# mJjnaix4xDb8T21WKrI8DKcwR4rvD8hZsCUp31XJnA8HWtdPnEQldK8NEGNlU5ye
# lrGc6gxiwZLCBBIj9lwbZW3Zv9Vz9jNWISOmY+KWLCIus98DBxQ=
# =XXsQ
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 13 Oct 2023 00:59:56 EDT
# gpg:                using RSA key 5017D831597C78A3D907EEF712E2204C0E5DB602
# gpg: Good signature from "Jeuk Kim <jeuk20.kim@samsung.com>" [unknown]
# gpg:                 aka "Jeuk Kim <jeuk20.kim@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5017 D831 597C 78A3 D907  EEF7 12E2 204C 0E5D B602

* tag 'pull-ufs-20231013' of https://gitlab.com/jeuk20.kim/qemu:
  hw/ufs: Fix incorrect register fields
  hw/ufs: Fix code coverity issues

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2023-10-16 12:37:22 -04:00