Migration Pull request

Hi

This is the migration PULL request. It is the same as yesterday's, now
with proper PULL headers, and it passes CI. It contains:

- Fabiano Rosas' threadinfo cleanups
- Hyman Huang's dirtylimit changes
- Part of my changes
- Peter Xu's documentation
- Tejus' updates to the migration descriptions
- Wei Wang's improvements for postcopy and multifd setup

Please apply.

Thanks, Juan.

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEGJn/jt6/WMzuA0uC9IfvGFhy1yMFAmTBCrgACgkQ9IfvGFhy
1yPCphAAvZr6HqULECPv/g6gYIiNjl2WQxSgaOnJPnxSV3aaDMl4+rn3GowXbj1a
V7xQIxxyYR+4BOBPHc1Ey9z2huB6tr5YhzbHhdpOPOfTdGP4LzQogyBCM9elIGbg
GVnBX4k1yT2bE3qoKkD7FZ8GhQdFTq9NFXg/prAJm5fUnoUVVGhz4YSlWVXcpC19
XJIAC4QA5LtQYKe9TAlLqECNHeOiMDIFa1QHtrz+52OUWgh8WOvAPtj1CK0pm9Qa
AsvN8HvKJ2PlCBct7c+E17O/xVihKVciEgu3KXjGHurUipUSD3XCHXOURlS1IrLK
ShegHFmMQjmS0m9mUy1+2K7DQ+ZcfScqSQCEuEOtTdnzs2him4c6p9VEGyQXa5bc
PChjihbYmxuz1GwrprtjUGyXgqhjnwGi1yRDl9L3mZc41vfO4m2sHnMZpdJZc+dt
5f5oi69cXVmtzSNJqT/4nCa7g5PuaPLg34NdwpbZv7Dt0Hq1yzlkNgUNb9R0XGET
/BIpIuYYcNdmBUEVebMydndrzY8UDq0KC+e35OADSGkg6B6ZNwYaoungCb2gy6hM
WCcv+3UATb/oF7HoPmh1+f1MzUZENAdmDtddXOCvWBZQReByKR7eFZLUHR+yBODH
dVP9zOkPfrm8XVG4fSYhb/4BPK4XhBlibFsxxwOohTttTNHA5ew=
=J74B
-----END PGP SIGNATURE-----

Merge tag 'migration-20230726-pull-request' of https://gitlab.com/juan.quintela/qemu into staging
# gpg: Signature made Wed 26 Jul 2023 04:59:52 AM PDT
# gpg:                using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [undefined]
# gpg:                 aka "Juan Quintela <quintela@trasno.org>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

* tag 'migration-20230726-pull-request' of https://gitlab.com/juan.quintela/qemu: (25 commits)
  migration/rdma: Split qemu_fopen_rdma() into input/output functions
  qemu-file: Make qemu_file_get_error_obj() static
  qemu-file: Simplify qemu_file_shutdown()
  qemu_file: Make qemu_file_is_writable() static
  migration: Change qemu_file_transferred to noflush
  qemu-file: Rename qemu_file_transferred_ fast -> noflush
  qtest/migration-tests.c: use "-incoming defer" for postcopy tests
  migration: enforce multifd and postcopy preempt to be set before incoming
  migration: Update error description whenever migration fails
  docs/migration: Update postcopy bits
  migration: skipped field is really obsolete.
  migration-test: machine_opts is really arch specific
  migration-test: Create arch_opts
  migration-test: Make machine_opts regular with other options
  migration-test: Be consistent for ppc
  migration: Extend query-migrate to provide dirty page limit info
  migration: Implement dirty-limit convergence algo
  migration: Put the detection logic before auto-converge checking
  migration: Refactor auto-converge capability logic
  migration: Introduce dirty-limit capability
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
commit ec28194b85
@@ -451,3 +451,13 @@ both, older and future versions of QEMU.
 The ``blacklist`` config file option has been renamed to ``block-rpcs``
 (to be in sync with the renaming of the corresponding command line
 option).
+
+Migration
+---------
+
+``skipped`` MigrationStats field (since 8.1)
+''''''''''''''''''''''''''''''''''''''''''''
+
+``skipped`` field in Migration stats has been deprecated.  It hasn't
+been used for more than 10 years.
+
@@ -594,8 +594,7 @@ Postcopy
 'Postcopy' migration is a way to deal with migrations that refuse to converge
 (or take too long to converge) its plus side is that there is an upper bound on
 the amount of migration traffic and time it takes, the down side is that during
-the postcopy phase, a failure of *either* side or the network connection causes
-the guest to be lost.
+the postcopy phase, a failure of *either* side causes the guest to be lost.
 
 In postcopy the destination CPUs are started before all the memory has been
 transferred, and accesses to pages that are yet to be transferred cause
@@ -721,6 +720,42 @@ processing.
 is no longer used by migration, while the listen thread carries on servicing
 page data until the end of migration.
 
+Postcopy Recovery
+-----------------
+
+Compared to precopy, postcopy is special in its error handling.  When an
+error (in practice, mostly a network error) happens, QEMU cannot easily
+fail the migration because the VM data resides in both the source and
+destination QEMU instances.  Instead, QEMU on both sides will go into a
+paused state, and a recovery phase is needed to continue the paused
+postcopy migration.
+
+The recovery phase normally contains a few steps:
+
+- When a network issue occurs, both QEMU instances go into the PAUSED state.
+
+- When the network is recovered (or a new network is provided), the admin
+  can set up a new channel for migration using the QMP command
+  'migrate-recover' on the destination node, preparing for a resume.
+
+- On the source host, the admin can continue the interrupted postcopy
+  migration using the QMP command 'migrate' with the resume=true flag set.
+
+- After the connection is re-established, QEMU will continue the postcopy
+  migration on both sides.
+
+During a paused postcopy migration, the VM can logically still continue
+running, and it is not impacted by accesses to pages that were already
+migrated to the destination VM before the interruption.  However, if any
+of the missing pages is accessed on the destination VM, the accessing
+thread will be halted waiting for the page to be migrated, which means it
+can remain halted until the recovery is complete.
+
+The impact of accessing missing pages can vary with different guest
+configurations.  For example, with async page fault enabled, the
+guest can proactively schedule out the threads that are accessing
+missing pages.
+
 Postcopy states
 ---------------
 
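The recovery steps above boil down to two QMP commands. The sketch below builds their JSON payloads; the URIs are placeholders, and the `qmp_*` helper names are our own, but 'migrate-recover' (with a `uri` argument) and 'migrate' (with `resume`) are the commands the documentation refers to:

```python
import json

def qmp_migrate_recover(uri):
    """Command the destination admin issues to listen on a new channel."""
    return {"execute": "migrate-recover", "arguments": {"uri": uri}}

def qmp_migrate_resume(uri):
    """Command the source admin issues to resume the paused postcopy."""
    return {"execute": "migrate", "arguments": {"uri": uri, "resume": True}}

# Serialized as they would travel over a QMP socket:
dst_cmd = json.dumps(qmp_migrate_recover("tcp:0:4444"))
src_cmd = json.dumps(qmp_migrate_resume("tcp:dst-host:4444"))
print(dst_cmd)
print(src_cmd)
```

Note that the recover command runs on the destination first, so the source's resumed 'migrate' has a listening endpoint to connect to.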
@@ -765,36 +800,31 @@ ADVISE->DISCARD->LISTEN->RUNNING->END
     (although it can't do the cleanup it would do as it
     finishes a normal migration).
 
+ - Paused
+
+    Postcopy can run into a paused state (normally on both sides when it
+    happens), where all threads will be temporarily halted, mostly due
+    to network errors.  When reaching the paused state, migration will
+    make sure the QEMU binaries on both sides maintain the data without
+    corrupting the VM.  To continue the migration, the admin needs to
+    fix the migration channel using the QMP command 'migrate-recover' on
+    the destination node, then resume the migration using the QMP command
+    'migrate' again on the source node, with the resume=true flag set.
+
  - End
 
     The listen thread can now quit, and perform the cleanup of migration
     state, the migration is now complete.
 
-Source side page maps
----------------------
+Source side page map
+--------------------
 
-The source side keeps two bitmaps during postcopy; 'the migration bitmap'
-and 'unsent map'.  The 'migration bitmap' is basically the same as in
-the precopy case, and holds a bit to indicate that page is 'dirty' -
-i.e. needs sending.  During the precopy phase this is updated as the CPU
-dirties pages, however during postcopy the CPUs are stopped and nothing
-should dirty anything any more.
-
-The 'unsent map' is used for the transition to postcopy. It is a bitmap that
-has a bit cleared whenever a page is sent to the destination, however during
-the transition to postcopy mode it is combined with the migration bitmap
-to form a set of pages that:
-
-   a) Have been sent but then redirtied (which must be discarded)
-   b) Have not yet been sent - which also must be discarded to cause any
-      transparent huge pages built during precopy to be broken.
-
-Note that the contents of the unsentmap are sacrificed during the calculation
-of the discard set and thus aren't valid once in postcopy.  The dirtymap
-is still valid and is used to ensure that no page is sent more than once.  Any
-request for a page that has already been sent is ignored.  Duplicate requests
-such as this can happen as a page is sent at about the same time the
-destination accesses it.
+The 'migration bitmap' in postcopy is basically the same as in precopy,
+where each bit indicates that a page is 'dirty' - i.e. needs
+sending.  During the precopy phase this is updated as the CPU dirties
+pages, however during postcopy the CPUs are stopped and nothing should
+dirty anything any more.  Instead, dirty bits are cleared when the relevant
+pages are sent during postcopy.
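The bitmap behaviour in the new paragraph can be modelled in a few lines. This is a toy sketch, not QEMU's bitmap code: a dirty bit is cleared when the page is sent, so a duplicate request (the destination faulting on a page that was sent concurrently) is naturally a no-op:

```python
class PostcopyBitmap:
    """Toy model of the postcopy 'migration bitmap': one dirty bit per page.
    A set bit means the page still needs sending; it is cleared when the
    page goes out, so a request for an already-sent page is ignored."""

    def __init__(self, nr_pages):
        self.dirty = set(range(nr_pages))  # all pages initially need sending
        self.sent = []

    def request_page(self, page):
        # Destination faulted on 'page'; send it only if still dirty.
        if page in self.dirty:
            self.dirty.remove(page)
            self.sent.append(page)
            return True
        return False  # bit already clear: duplicate request, no resend

bm = PostcopyBitmap(4)
assert bm.request_page(2) is True   # first request sends the page
assert bm.request_page(2) is False  # duplicate request is a no-op
```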
Postcopy with hugepages
-----------------------
@@ -853,6 +883,16 @@ Retro-fitting postcopy to existing clients is possible:
    guest memory access is made while holding a lock then all other
    threads waiting for that lock will also be blocked.
 
+Postcopy Preemption Mode
+------------------------
+
+Postcopy preempt is a capability introduced in the QEMU 8.0 release.  It
+allows urgent pages (those whose page faults were explicitly requested by
+the destination QEMU) to be sent in a separate preempt channel, rather
+than queued in the background migration channel.  Anyone who cares about
+latencies of page faults during a postcopy migration should enable this
+feature.  By default, it is not enabled.
+
 Firmware
 ========
 
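Capabilities like postcopy-preempt are toggled via the QMP 'migrate-set-capabilities' command, and (per the options.c change in this pull) must be set before the incoming side starts. A sketch of the payload; the helper name is our own:

```python
def qmp_set_capability(name, enabled=True):
    """Build a QMP 'migrate-set-capabilities' command toggling one capability."""
    return {
        "execute": "migrate-set-capabilities",
        "arguments": {
            "capabilities": [{"capability": name, "state": enabled}],
        },
    }

# Both of these must be issued before the incoming migration starts
# listening (now enforced by migrate_caps_check() below):
preempt_cmd = qmp_set_capability("postcopy-preempt")
multifd_cmd = qmp_set_capability("multifd")
```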
@@ -34,4 +34,6 @@ void dirtylimit_set_vcpu(int cpu_index,
 void dirtylimit_set_all(uint64_t quota,
                         bool enable);
 void dirtylimit_vcpu_execute(CPUState *cpu);
+uint64_t dirtylimit_throttle_time_per_round(void);
+uint64_t dirtylimit_ring_full_time(void);
 #endif
@@ -190,6 +190,16 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->cpu_throttle_percentage);
     }
 
+    if (info->has_dirty_limit_throttle_time_per_round) {
+        monitor_printf(mon, "dirty-limit throttle time: %" PRIu64 " us\n",
+                       info->dirty_limit_throttle_time_per_round);
+    }
+
+    if (info->has_dirty_limit_ring_full_time) {
+        monitor_printf(mon, "dirty-limit ring full time: %" PRIu64 " us\n",
+                       info->dirty_limit_ring_full_time);
+    }
+
     if (info->has_postcopy_blocktime) {
         monitor_printf(mon, "postcopy blocktime: %u\n",
                        info->postcopy_blocktime);
@@ -364,6 +374,14 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
             }
         }
     }
+
+    monitor_printf(mon, "%s: %" PRIu64 " ms\n",
+        MigrationParameter_str(MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD),
+        params->x_vcpu_dirty_limit_period);
+
+    monitor_printf(mon, "%s: %" PRIu64 " MB/s\n",
+        MigrationParameter_str(MIGRATION_PARAMETER_VCPU_DIRTY_LIMIT),
+        params->vcpu_dirty_limit);
     }
 
     qapi_free_MigrationParameters(params);
@@ -620,6 +638,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         error_setg(&err, "The block-bitmap-mapping parameter can only be set "
                    "through QMP");
         break;
+    case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD:
+        p->has_x_vcpu_dirty_limit_period = true;
+        visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
+        break;
+    case MIGRATION_PARAMETER_VCPU_DIRTY_LIMIT:
+        p->has_vcpu_dirty_limit = true;
+        visit_type_size(v, param, &p->vcpu_dirty_limit, &err);
+        break;
     default:
         assert(0);
     }
@@ -64,6 +64,7 @@
 #include "yank_functions.h"
 #include "sysemu/qtest.h"
 #include "options.h"
+#include "sysemu/dirtylimit.h"
 
 static NotifierList migration_state_notifiers =
     NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
@@ -166,6 +167,9 @@ void migration_cancel(const Error *error)
     if (error) {
        migrate_set_error(current_migration, error);
     }
+    if (migrate_dirty_limit()) {
+        qmp_cancel_vcpu_dirty_limit(false, -1, NULL);
+    }
     migrate_fd_cancel(current_migration);
 }
 
@@ -971,6 +975,15 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         info->ram->dirty_pages_rate =
             stat64_get(&mig_stats.dirty_pages_rate);
     }
+
+    if (migrate_dirty_limit() && dirtylimit_in_service()) {
+        info->has_dirty_limit_throttle_time_per_round = true;
+        info->dirty_limit_throttle_time_per_round =
+            dirtylimit_throttle_time_per_round();
+
+        info->has_dirty_limit_ring_full_time = true;
+        info->dirty_limit_ring_full_time = dirtylimit_ring_full_time();
+    }
 }
 
 static void populate_disk_info(MigrationInfo *info)
@@ -1676,7 +1689,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         if (!resume_requested) {
             yank_unregister_instance(MIGRATION_YANK_INSTANCE);
         }
-        error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "uri",
+        error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE, "uri",
                    "a valid migration protocol");
         migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
                           MIGRATION_STATUS_FAILED);
@@ -2069,7 +2082,7 @@ migration_wait_main_channel(MigrationState *ms)
  * Switch from normal iteration to postcopy
  * Returns non-0 on error
  */
-static int postcopy_start(MigrationState *ms)
+static int postcopy_start(MigrationState *ms, Error **errp)
 {
     int ret;
     QIOChannelBuffer *bioc;
@@ -2179,7 +2192,7 @@ static int postcopy_start(MigrationState *ms)
      */
     ret = qemu_file_get_error(ms->to_dst_file);
     if (ret) {
-        error_report("postcopy_start: Migration stream errored (pre package)");
+        error_setg(errp, "postcopy_start: Migration stream errored (pre package)");
         goto fail_closefb;
     }
 
@@ -2216,7 +2229,7 @@ static int postcopy_start(MigrationState *ms)
 
     ret = qemu_file_get_error(ms->to_dst_file);
     if (ret) {
-        error_report("postcopy_start: Migration stream errored");
+        error_setg(errp, "postcopy_start: Migration stream errored");
         migrate_set_state(&ms->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
                           MIGRATION_STATUS_FAILED);
     }
@@ -2737,6 +2750,7 @@ typedef enum {
 static MigIterateState migration_iteration_run(MigrationState *s)
 {
     uint64_t must_precopy, can_postcopy;
+    Error *local_err = NULL;
     bool in_postcopy = s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE;
     bool can_switchover = migration_can_switchover(s);
 
@@ -2760,8 +2774,9 @@ static MigIterateState migration_iteration_run(MigrationState *s)
     /* Still a significant amount to transfer */
     if (!in_postcopy && must_precopy <= s->threshold_size && can_switchover &&
         qatomic_read(&s->start_postcopy)) {
-        if (postcopy_start(s)) {
-            error_report("%s: postcopy failed to start", __func__);
+        if (postcopy_start(s, &local_err)) {
+            migrate_set_error(s, local_err);
+            error_report_err(local_err);
         }
         return MIG_ITERATE_SKIP;
     }
@@ -2953,7 +2968,7 @@ static void *migration_thread(void *opaque)
     MigThrError thr_error;
     bool urgent = false;
 
-    thread = MigrationThreadAdd("live_migration", qemu_get_thread_id());
+    thread = migration_threads_add("live_migration", qemu_get_thread_id());
 
     rcu_register_thread();
 
@@ -3031,7 +3046,7 @@ static void *migration_thread(void *opaque)
     migration_iteration_finish(s);
     object_unref(OBJECT(s));
     rcu_unregister_thread();
-    MigrationThreadDel(thread);
+    migration_threads_remove(thread);
     return NULL;
 }
 
@@ -3252,8 +3267,10 @@ void migrate_fd_connect(MigrationState *s, Error *error_in)
      */
     if (migrate_postcopy_ram() || migrate_return_path()) {
         if (open_return_path_on_source(s, !resume)) {
-            error_report("Unable to open return-path for postcopy");
+            error_setg(&local_err, "Unable to open return-path for postcopy");
             migrate_set_state(&s->state, s->state, MIGRATION_STATUS_FAILED);
+            migrate_set_error(s, local_err);
+            error_report_err(local_err);
             migrate_fd_cleanup(s);
             return;
         }
@@ -3277,6 +3294,7 @@ void migrate_fd_connect(MigrationState *s, Error *error_in)
     }
 
     if (multifd_save_setup(&local_err) != 0) {
+        migrate_set_error(s, local_err);
         error_report_err(local_err);
         migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
                           MIGRATION_STATUS_FAILED);
@@ -651,7 +651,7 @@ static void *multifd_send_thread(void *opaque)
     int ret = 0;
     bool use_zero_copy_send = migrate_zero_copy_send();
 
-    thread = MigrationThreadAdd(p->name, qemu_get_thread_id());
+    thread = migration_threads_add(p->name, qemu_get_thread_id());
 
     trace_multifd_send_thread_start(p->id);
     rcu_register_thread();
@@ -767,7 +767,7 @@ out:
     qemu_mutex_unlock(&p->mutex);
 
     rcu_unregister_thread();
-    MigrationThreadDel(thread);
+    migration_threads_remove(thread);
     trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages);
 
     return NULL;
@@ -27,6 +27,7 @@
 #include "qemu-file.h"
 #include "ram.h"
 #include "options.h"
+#include "sysemu/kvm.h"
 
 /* Maximum migrate downtime set to 2000 seconds */
 #define MAX_MIGRATE_DOWNTIME_SECONDS 2000
@@ -80,6 +81,9 @@
 #define DEFINE_PROP_MIG_CAP(name, x) \
     DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
 
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     1000    /* milliseconds */
+#define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT            1       /* MB/s */
+
 Property migration_properties[] = {
     DEFINE_PROP_BOOL("store-global-state", MigrationState,
                      store_global_state, true),
@@ -163,6 +167,12 @@ Property migration_properties[] = {
     DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
     DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
     DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
+    DEFINE_PROP_UINT64("x-vcpu-dirty-limit-period", MigrationState,
+                       parameters.x_vcpu_dirty_limit_period,
+                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD),
+    DEFINE_PROP_UINT64("vcpu-dirty-limit", MigrationState,
+                       parameters.vcpu_dirty_limit,
+                       DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -187,7 +197,7 @@ Property migration_properties[] = {
 #endif
     DEFINE_PROP_MIG_CAP("x-switchover-ack",
                         MIGRATION_CAPABILITY_SWITCHOVER_ACK),
-
+    DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
     DEFINE_PROP_END_OF_LIST(),
 };
 
@@ -233,6 +243,13 @@ bool migrate_dirty_bitmaps(void)
     return s->capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS];
 }
 
+bool migrate_dirty_limit(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->capabilities[MIGRATION_CAPABILITY_DIRTY_LIMIT];
+}
+
 bool migrate_events(void)
 {
     MigrationState *s = migrate_get_current();
@@ -424,6 +441,11 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
     MIGRATION_CAPABILITY_VALIDATE_UUID,
     MIGRATION_CAPABILITY_ZERO_COPY_SEND);
 
+static bool migrate_incoming_started(void)
+{
+    return !!migration_incoming_get_current()->transport_data;
+}
+
 /**
  * @migration_caps_check - check capability compatibility
  *
@@ -547,6 +569,12 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
             error_setg(errp, "Postcopy preempt not compatible with compress");
             return false;
         }
+
+        if (migrate_incoming_started()) {
+            error_setg(errp,
+                       "Postcopy preempt must be set before incoming starts");
+            return false;
+        }
     }
 
     if (new_caps[MIGRATION_CAPABILITY_MULTIFD]) {
@@ -554,6 +582,10 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
             error_setg(errp, "Multifd is not compatible with compress");
             return false;
         }
+        if (migrate_incoming_started()) {
+            error_setg(errp, "Multifd must be set before incoming starts");
+            return false;
+        }
     }
 
     if (new_caps[MIGRATION_CAPABILITY_SWITCHOVER_ACK]) {
@@ -563,6 +595,19 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
             return false;
         }
     }
+
+    if (new_caps[MIGRATION_CAPABILITY_DIRTY_LIMIT]) {
+        if (new_caps[MIGRATION_CAPABILITY_AUTO_CONVERGE]) {
+            error_setg(errp, "dirty-limit conflicts with auto-converge,"
+                       " only one of them can be enabled at a time");
+            return false;
+        }
+
+        if (!kvm_enabled() || !kvm_dirty_ring_enabled()) {
+            error_setg(errp, "dirty-limit requires KVM with accelerator"
+                       " property 'dirty-ring-size' set");
+            return false;
+        }
+    }
 
     return true;
 }
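The capability-compatibility rules added to migrate_caps_check() above can be restated compactly. This is a toy re-statement of just the new checks, not QEMU code; it returns the first applicable error string, or None when the combination is allowed:

```python
def caps_check(new_caps, incoming_started, kvm_dirty_ring_enabled):
    """Sketch of the new compatibility checks: preempt/multifd must be set
    before incoming starts; dirty-limit excludes auto-converge and needs
    the KVM dirty ring."""
    if new_caps.get("postcopy-preempt") and incoming_started:
        return "Postcopy preempt must be set before incoming starts"
    if new_caps.get("multifd") and incoming_started:
        return "Multifd must be set before incoming starts"
    if new_caps.get("dirty-limit"):
        if new_caps.get("auto-converge"):
            return "dirty-limit conflicts with auto-converge"
        if not kvm_dirty_ring_enabled:
            return "dirty-limit requires KVM with 'dirty-ring-size' set"
    return None

# A valid combination passes every check:
assert caps_check({"dirty-limit": True}, False, True) is None
```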
@@ -908,6 +953,11 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
                                     s->parameters.block_bitmap_mapping);
     }
 
+    params->has_x_vcpu_dirty_limit_period = true;
+    params->x_vcpu_dirty_limit_period = s->parameters.x_vcpu_dirty_limit_period;
+    params->has_vcpu_dirty_limit = true;
+    params->vcpu_dirty_limit = s->parameters.vcpu_dirty_limit;
+
     return params;
 }
 
@@ -940,6 +990,8 @@ void migrate_params_init(MigrationParameters *params)
     params->has_announce_max = true;
     params->has_announce_rounds = true;
     params->has_announce_step = true;
+    params->has_x_vcpu_dirty_limit_period = true;
+    params->has_vcpu_dirty_limit = true;
 }
 
 /*
@@ -1100,6 +1152,23 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
     }
 #endif
 
+    if (params->has_x_vcpu_dirty_limit_period &&
+        (params->x_vcpu_dirty_limit_period < 1 ||
+         params->x_vcpu_dirty_limit_period > 1000)) {
+        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+                   "x-vcpu-dirty-limit-period",
+                   "a value between 1 and 1000");
+        return false;
+    }
+
+    if (params->has_vcpu_dirty_limit &&
+        (params->vcpu_dirty_limit < 1)) {
+        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+                   "vcpu_dirty_limit",
+                   "a value greater than or equal to 1 MB/s");
+        return false;
+    }
+
     return true;
 }
 
@@ -1199,6 +1268,14 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
         dest->has_block_bitmap_mapping = true;
         dest->block_bitmap_mapping = params->block_bitmap_mapping;
     }
+
+    if (params->has_x_vcpu_dirty_limit_period) {
+        dest->x_vcpu_dirty_limit_period =
+            params->x_vcpu_dirty_limit_period;
+    }
+    if (params->has_vcpu_dirty_limit) {
+        dest->vcpu_dirty_limit = params->vcpu_dirty_limit;
+    }
 }
 
 static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1317,6 +1394,14 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
             QAPI_CLONE(BitmapMigrationNodeAliasList,
                        params->block_bitmap_mapping);
     }
+
+    if (params->has_x_vcpu_dirty_limit_period) {
+        s->parameters.x_vcpu_dirty_limit_period =
+            params->x_vcpu_dirty_limit_period;
+    }
+    if (params->has_vcpu_dirty_limit) {
+        s->parameters.vcpu_dirty_limit = params->vcpu_dirty_limit;
+    }
 }
 
 void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -29,6 +29,7 @@ bool migrate_block(void);
 bool migrate_colo(void);
 bool migrate_compress(void);
 bool migrate_dirty_bitmaps(void);
+bool migrate_dirty_limit(void);
 bool migrate_events(void);
 bool migrate_ignore_shared(void);
 bool migrate_late_block_activate(void);
@@ -65,8 +65,6 @@ struct QEMUFile {
  */
 int qemu_file_shutdown(QEMUFile *f)
 {
-    int ret = 0;
-
     /*
      * We must set qemufile error before the real shutdown(), otherwise
      * there can be a race window where we thought IO all went though
@@ -96,22 +94,10 @@ int qemu_file_shutdown(QEMUFile *f)
     }
 
     if (qio_channel_shutdown(f->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL) < 0) {
-        ret = -EIO;
+        return -EIO;
     }
 
-    return ret;
-}
-
-bool qemu_file_mode_is_not_valid(const char *mode)
-{
-    if (mode == NULL ||
-        (mode[0] != 'r' && mode[0] != 'w') ||
-        mode[1] != 'b' || mode[2] != 0) {
-        fprintf(stderr, "qemu_fopen: Argument validity check failed\n");
-        return true;
-    }
-
-    return false;
+    return 0;
 }
 
 static QEMUFile *qemu_file_new_impl(QIOChannel *ioc, bool is_writable)
@@ -160,7 +146,7 @@ void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
  * is not 0.
  *
  */
-int qemu_file_get_error_obj(QEMUFile *f, Error **errp)
+static int qemu_file_get_error_obj(QEMUFile *f, Error **errp)
 {
     if (errp) {
         *errp = f->last_error_obj ? error_copy(f->last_error_obj) : NULL;
@@ -228,7 +214,7 @@ void qemu_file_set_error(QEMUFile *f, int ret)
     qemu_file_set_error_obj(f, ret, NULL);
 }
 
-bool qemu_file_is_writable(QEMUFile *f)
+static bool qemu_file_is_writable(QEMUFile *f)
 {
     return f->is_writable;
 }
@@ -694,7 +680,7 @@ int coroutine_mixed_fn qemu_get_byte(QEMUFile *f)
     return result;
 }
 
-uint64_t qemu_file_transferred_fast(QEMUFile *f)
+uint64_t qemu_file_transferred_noflush(QEMUFile *f)
 {
     uint64_t ret = f->total_transferred;
     int i;
@@ -86,16 +86,15 @@ int qemu_fclose(QEMUFile *f);
 uint64_t qemu_file_transferred(QEMUFile *f);
 
 /*
- * qemu_file_transferred_fast:
+ * qemu_file_transferred_noflush:
  *
- * As qemu_file_transferred except for writable
- * files, where no flush is performed and the reported
- * amount will include the size of any queued buffers,
- * on top of the amount actually transferred.
+ * As qemu_file_transferred except for writable files, where no flush
+ * is performed and the reported amount will include the size of any
+ * queued buffers, on top of the amount actually transferred.
  *
  * Returns: the total bytes transferred and queued
  */
-uint64_t qemu_file_transferred_fast(QEMUFile *f);
+uint64_t qemu_file_transferred_noflush(QEMUFile *f);
 
 /*
  * put_buffer without copying the buffer.
@@ -103,8 +102,6 @@ uint64_t qemu_file_transferred_fast(QEMUFile *f);
  */
 void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
                            bool may_free);
-bool qemu_file_mode_is_not_valid(const char *mode);
-bool qemu_file_is_writable(QEMUFile *f);
 
 #include "migration/qemu-file-types.h"
 
@@ -130,7 +127,6 @@ void qemu_file_skip(QEMUFile *f, int size);
 * accounting information tracks the total migration traffic.
 */
 void qemu_file_credit_transfer(QEMUFile *f, size_t size);
-int qemu_file_get_error_obj(QEMUFile *f, Error **errp);
 int qemu_file_get_error_obj_any(QEMUFile *f1, QEMUFile *f2, Error **errp);
 void qemu_file_set_error_obj(QEMUFile *f, int ret, Error *err);
 void qemu_file_set_error(QEMUFile *f, int ret);
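The distinction the renamed comment describes (flush vs noflush counters) is easy to model. A toy sketch, not QEMU's QEMUFile: `transferred()` flushes queued bytes into the running total first, while `transferred_noflush()` just adds the queue length on top of what was already sent:

```python
class ToyQEMUFile:
    """Toy model of the two transferred counters on a writable file."""

    def __init__(self):
        self.total_transferred = 0  # bytes actually sent
        self.queued = 0             # bytes sitting in the output buffers

    def put(self, nbytes):
        self.queued += nbytes

    def flush(self):
        self.total_transferred += self.queued
        self.queued = 0

    def transferred(self):
        self.flush()                # flushing variant
        return self.total_transferred

    def transferred_noflush(self):
        # No flush: report sent bytes plus whatever is still queued.
        return self.total_transferred + self.queued

f = ToyQEMUFile()
f.put(100)
assert f.transferred_noflush() == 100  # counted, but nothing was flushed
assert f.queued == 100
assert f.transferred() == 100          # the flush happens here
assert f.queued == 0
```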
@@ -46,6 +46,7 @@
 #include "qapi/error.h"
 #include "qapi/qapi-types-migration.h"
 #include "qapi/qapi-events-migration.h"
+#include "qapi/qapi-commands-migration.h"
 #include "qapi/qmp/qerror.h"
 #include "trace.h"
 #include "exec/ram_addr.h"
@@ -59,6 +60,8 @@
 #include "multifd.h"
 #include "sysemu/runstate.h"
 #include "options.h"
+#include "sysemu/dirtylimit.h"
+#include "sysemu/kvm.h"
 
 #include "hw/boards.h" /* for machine_dump_guest_core() */
 
@@ -984,6 +987,37 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
     }
 }
 
+/*
+ * Enable dirty-limit to throttle down the guest
+ */
+static void migration_dirty_limit_guest(void)
+{
+    /*
+     * dirty page rate quota for all vCPUs fetched from
+     * migration parameter 'vcpu_dirty_limit'
+     */
+    static int64_t quota_dirtyrate;
+    MigrationState *s = migrate_get_current();
+
+    /*
+     * Skip if the dirty limit is already in service and the migration
+     * parameter vcpu-dirty-limit is untouched.
+     */
+    if (dirtylimit_in_service() &&
+        quota_dirtyrate == s->parameters.vcpu_dirty_limit) {
+        return;
+    }
+
+    quota_dirtyrate = s->parameters.vcpu_dirty_limit;
+
+    /*
+     * Set all vCPUs a quota dirty rate; note that the second
+     * parameter will be ignored if setting all vCPUs for the VM.
+     */
+    qmp_set_vcpu_dirty_limit(false, -1, quota_dirtyrate, NULL);
+    trace_migration_dirty_limit_guest(quota_dirtyrate);
+}
+
 static void migration_trigger_throttle(RAMState *rs)
 {
     uint64_t threshold = migrate_throttle_trigger_threshold();
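The static-variable trick in migration_dirty_limit_guest() above (only re-issue the quota when it actually changed or the limiter is not yet in service) can be sketched as a closure. A toy model, not QEMU code; `apply_quota` stands in for qmp_set_vcpu_dirty_limit():

```python
def make_dirty_limit_setter(apply_quota):
    """Return a setter that skips re-applying an unchanged quota,
    mirroring the 'static int64_t quota_dirtyrate' caching above."""
    state = {"in_service": False, "quota": None}

    def set_limit(vcpu_dirty_limit):
        # Already in service with the same quota: nothing to do.
        if state["in_service"] and state["quota"] == vcpu_dirty_limit:
            return False
        state["quota"] = vcpu_dirty_limit
        state["in_service"] = True
        apply_quota(vcpu_dirty_limit)  # the qmp_set_vcpu_dirty_limit() analogue
        return True

    return set_limit

calls = []
set_limit = make_dirty_limit_setter(calls.append)
set_limit(1)
set_limit(1)   # identical quota: skipped
set_limit(8)   # changed quota: re-applied
assert calls == [1, 8]
```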
@ -995,19 +1029,26 @@ static void migration_trigger_throttle(RAMState *rs)
|
||||
/* During block migration the auto-converge logic incorrectly detects
|
||||
* that ram migration makes no progress. Avoid this by disabling the
|
||||
* throttling logic during the bulk phase of block migration. */
|
||||
if (migrate_auto_converge() && !blk_mig_bulk_active()) {
|
||||
/* The following detection logic can be refined later. For now:
|
||||
Check to see if the ratio between dirtied bytes and the approx.
|
||||
amount of bytes that just got transferred since the last time
|
||||
we were in this routine reaches the threshold. If that happens
|
||||
twice, start or increase throttling. */
|
||||
if (blk_mig_bulk_active()) {
|
||||
return;
|
||||
}
|
||||
|
||||
if ((bytes_dirty_period > bytes_dirty_threshold) &&
|
||||
(++rs->dirty_rate_high_cnt >= 2)) {
|
||||
/*
|
||||
* The following detection logic can be refined later. For now:
|
||||
* Check to see if the ratio between dirtied bytes and the approx.
|
||||
* amount of bytes that just got transferred since the last time
|
||||
* we were in this routine reaches the threshold. If that happens
|
||||
* twice, start or increase throttling.
|
||||
*/
|
||||
if ((bytes_dirty_period > bytes_dirty_threshold) &&
|
||||
(++rs->dirty_rate_high_cnt >= 2)) {
|
||||
rs->dirty_rate_high_cnt = 0;
|
||||
if (migrate_auto_converge()) {
|
||||
trace_migration_throttle();
|
||||
rs->dirty_rate_high_cnt = 0;
|
||||
mig_throttle_guest_down(bytes_dirty_period,
|
||||
bytes_dirty_threshold);
|
||||
} else if (migrate_dirty_limit()) {
|
||||
migration_dirty_limit_guest();
|
||||
}
|
||||
}
|
||||
}
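The two-strike detection in migration_trigger_throttle — throttle only once dirtied bytes exceed the threshold for a second period — can be sketched as follows. The percentage default is illustrative; QEMU derives the real threshold from migrate_throttle_trigger_threshold():

```python
# Sketch of the two-strike trigger: dirty_rate_high_cnt is bumped each sync
# round in which dirtied bytes exceed the threshold; on the second hit the
# counter resets and throttling starts or increases. A miss does not reset
# the counter, matching the C short-circuit logic.
class ThrottleTrigger:
    def __init__(self, threshold_pct=50):
        self.threshold_pct = threshold_pct
        self.high_cnt = 0        # mirrors rs->dirty_rate_high_cnt

    def update(self, bytes_dirty_period, bytes_transferred):
        threshold = bytes_transferred * self.threshold_pct // 100
        if bytes_dirty_period > threshold:
            self.high_cnt += 1
            if self.high_cnt >= 2:
                self.high_cnt = 0
                return True      # start or increase throttling
        return False
```

With the dirty-limit capability enabled, QEMU routes this `True` outcome to migration_dirty_limit_guest() instead of the auto-converge slowdown.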

@@ -4053,27 +4053,26 @@ static void qio_channel_rdma_register_types(void)

type_init(qio_channel_rdma_register_types);

static QEMUFile *qemu_fopen_rdma(RDMAContext *rdma, const char *mode)
static QEMUFile *rdma_new_input(RDMAContext *rdma)
{
    QIOChannelRDMA *rioc;
    QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(object_new(TYPE_QIO_CHANNEL_RDMA));

    if (qemu_file_mode_is_not_valid(mode)) {
        return NULL;
    }
    rioc->file = qemu_file_new_input(QIO_CHANNEL(rioc));
    rioc->rdmain = rdma;
    rioc->rdmaout = rdma->return_path;
    qemu_file_set_hooks(rioc->file, &rdma_read_hooks);

    rioc = QIO_CHANNEL_RDMA(object_new(TYPE_QIO_CHANNEL_RDMA));
    return rioc->file;
}

    if (mode[0] == 'w') {
        rioc->file = qemu_file_new_output(QIO_CHANNEL(rioc));
        rioc->rdmaout = rdma;
        rioc->rdmain = rdma->return_path;
        qemu_file_set_hooks(rioc->file, &rdma_write_hooks);
    } else {
        rioc->file = qemu_file_new_input(QIO_CHANNEL(rioc));
        rioc->rdmain = rdma;
        rioc->rdmaout = rdma->return_path;
        qemu_file_set_hooks(rioc->file, &rdma_read_hooks);
    }
static QEMUFile *rdma_new_output(RDMAContext *rdma)
{
    QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(object_new(TYPE_QIO_CHANNEL_RDMA));

    rioc->file = qemu_file_new_output(QIO_CHANNEL(rioc));
    rioc->rdmaout = rdma;
    rioc->rdmain = rdma->return_path;
    qemu_file_set_hooks(rioc->file, &rdma_write_hooks);

    return rioc->file;
}
@@ -4099,9 +4098,9 @@ static void rdma_accept_incoming_migration(void *opaque)
        return;
    }

    f = qemu_fopen_rdma(rdma, "rb");
    f = rdma_new_input(rdma);
    if (f == NULL) {
        fprintf(stderr, "RDMA ERROR: could not qemu_fopen_rdma\n");
        fprintf(stderr, "RDMA ERROR: could not open RDMA for input\n");
        qemu_rdma_cleanup(rdma);
        return;
    }
@@ -4224,7 +4223,7 @@ void rdma_start_outgoing_migration(void *opaque,

    trace_rdma_start_outgoing_migration_after_rdma_connect();

    s->to_dst_file = qemu_fopen_rdma(rdma, "wb");
    s->to_dst_file = rdma_new_output(rdma);
    migrate_fd_connect(s, NULL);
    return;
return_path_err:

@@ -927,9 +927,9 @@ static int vmstate_load(QEMUFile *f, SaveStateEntry *se)
static void vmstate_save_old_style(QEMUFile *f, SaveStateEntry *se,
                                   JSONWriter *vmdesc)
{
    uint64_t old_offset = qemu_file_transferred_fast(f);
    uint64_t old_offset = qemu_file_transferred_noflush(f);
    se->ops->save_state(f, se->opaque);
    uint64_t size = qemu_file_transferred_fast(f) - old_offset;
    uint64_t size = qemu_file_transferred_noflush(f) - old_offset;

    if (vmdesc) {
        json_writer_int64(vmdesc, "size", size);
@@ -3007,7 +3007,7 @@ bool save_snapshot(const char *name, bool overwrite, const char *vmstate,
        goto the_end;
    }
    ret = qemu_savevm_state(f, errp);
    vm_state_size = qemu_file_transferred(f);
    vm_state_size = qemu_file_transferred_noflush(f);
    ret2 = qemu_fclose(f);
    if (ret < 0) {
        goto the_end;

@@ -10,23 +10,35 @@
 * See the COPYING file in the top-level directory.
 */

#include "qemu/osdep.h"
#include "qemu/queue.h"
#include "qemu/lockable.h"
#include "threadinfo.h"

QemuMutex migration_threads_lock;
static QLIST_HEAD(, MigrationThread) migration_threads;

MigrationThread *MigrationThreadAdd(const char *name, int thread_id)
static void __attribute__((constructor)) migration_threads_init(void)
{
    qemu_mutex_init(&migration_threads_lock);
}

MigrationThread *migration_threads_add(const char *name, int thread_id)
{
    MigrationThread *thread = g_new0(MigrationThread, 1);
    thread->name = name;
    thread->thread_id = thread_id;

    QLIST_INSERT_HEAD(&migration_threads, thread, node);
    WITH_QEMU_LOCK_GUARD(&migration_threads_lock) {
        QLIST_INSERT_HEAD(&migration_threads, thread, node);
    }

    return thread;
}
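The reworked threadinfo pattern — an eagerly initialised lock (the C code uses a constructor attribute for qemu_mutex_init) guarding a registry whose add, remove, and query paths all hold it — can be sketched in Python. The function names mirror the C API; the dict payload is illustrative:

```python
import threading

# Sketch of the threadinfo registry: one module-level lock protects the
# list; new entries go to the head (QLIST_INSERT_HEAD), and every accessor
# takes the lock, like WITH_QEMU_LOCK_GUARD / QEMU_LOCK_GUARD in C.
_lock = threading.Lock()
_threads = []

def migration_threads_add(name, thread_id):
    info = {"name": name, "thread_id": thread_id}
    with _lock:                    # WITH_QEMU_LOCK_GUARD equivalent
        _threads.insert(0, info)   # QLIST_INSERT_HEAD: newest first
    return info

def migration_threads_remove(info):
    with _lock:                    # QEMU_LOCK_GUARD equivalent
        if info in _threads:
            _threads.remove(info)

def query_migration_threads():
    with _lock:                    # snapshot under the lock
        return [dict(t) for t in _threads]
```

Holding the lock in the query path is what makes qmp_query_migrationthreads safe against a concurrent add or remove.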

void MigrationThreadDel(MigrationThread *thread)
void migration_threads_remove(MigrationThread *thread)
{
    QEMU_LOCK_GUARD(&migration_threads_lock);
    if (thread) {
        QLIST_REMOVE(thread, node);
        g_free(thread);
@@ -39,6 +51,7 @@ MigrationThreadInfoList *qmp_query_migrationthreads(Error **errp)
    MigrationThreadInfoList **tail = &head;
    MigrationThread *thread = NULL;

    QEMU_LOCK_GUARD(&migration_threads_lock);
    QLIST_FOREACH(thread, &migration_threads, node) {
        MigrationThreadInfo *info = g_new0(MigrationThreadInfo, 1);
        info->name = g_strdup(thread->name);

@@ -10,8 +10,6 @@
 * See the COPYING file in the top-level directory.
 */

#include "qemu/queue.h"
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qapi/qapi-commands-migration.h"

@@ -23,6 +21,5 @@ struct MigrationThread {
    QLIST_ENTRY(MigrationThread) node;
};

MigrationThread *MigrationThreadAdd(const char *name, int thread_id);

void MigrationThreadDel(MigrationThread *info);
MigrationThread *migration_threads_add(const char *name, int thread_id);
void migration_threads_remove(MigrationThread *info);

@@ -93,6 +93,7 @@ migration_bitmap_sync_start(void) ""
migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx"
migration_throttle(void) ""
migration_dirty_limit_guest(int64_t dirtyrate) "guest dirty page rate limit %" PRIi64 " MB/s"
ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %" PRIx64 " %zx"
ram_load_loop(const char *rbname, uint64_t addr, int flags, void *host) "%s: addr: 0x%" PRIx64 " flags: 0x%x host: %p"
ram_load_postcopy_loop(int channel, uint64_t addr, int flags) "chan=%d addr=0x%" PRIx64 " flags=0x%x"

@@ -361,7 +361,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
        void *curr_elem = first_elem + size * i;

        vmsd_desc_field_start(vmsd, vmdesc_loop, field, i, n_elems);
        old_offset = qemu_file_transferred_fast(f);
        old_offset = qemu_file_transferred_noflush(f);
        if (field->flags & VMS_ARRAY_OF_POINTER) {
            assert(curr_elem);
            curr_elem = *(void **)curr_elem;
@@ -391,7 +391,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
            return ret;
        }

        written_bytes = qemu_file_transferred_fast(f) - old_offset;
        written_bytes = qemu_file_transferred_noflush(f) - old_offset;
        vmsd_desc_field_end(vmsd, vmdesc_loop, field, written_bytes, i);

        /* Compressed arrays only care about the first element */

@@ -23,7 +23,8 @@
#
# @duplicate: number of duplicate (zero) pages (since 1.2)
#
# @skipped: number of skipped zero pages (since 1.5)
# @skipped: number of skipped zero pages. Always zero, only provided for
#     compatibility (since 1.5)
#
# @normal: number of normal pages (since 1.2)
#
@@ -62,11 +63,18 @@
#     between 0 and @dirty-sync-count * @multifd-channels. (since
#     7.1)
#
# Features:
#
# @deprecated: Member @skipped is always zero since 1.5.3
#
# Since: 0.14
#
##
{ 'struct': 'MigrationStats',
  'data': {'transferred': 'int', 'remaining': 'int', 'total': 'int' ,
           'duplicate': 'int', 'skipped': 'int', 'normal': 'int',
           'duplicate': 'int',
           'skipped': { 'type': 'int', 'features': ['deprecated'] },
           'normal': 'int',
           'normal-bytes': 'int', 'dirty-pages-rate': 'int',
           'mbps': 'number', 'dirty-sync-count': 'int',
           'postcopy-requests': 'int', 'page-size': 'int',
@@ -250,6 +258,18 @@
#     blocked. Present and non-empty when migration is blocked.
#     (since 6.0)
#
# @dirty-limit-throttle-time-per-round: Maximum throttle time (in microseconds) of virtual
#     CPUs each dirty ring full round, which shows how
#     MigrationCapability dirty-limit affects the guest
#     during live migration. (since 8.1)
#
# @dirty-limit-ring-full-time: Estimated average dirty ring full time (in microseconds)
#     each dirty ring full round, note that the value equals
#     dirty ring memory size divided by average dirty page rate
#     of virtual CPU, which can be used to observe the average
#     memory load of virtual CPU indirectly. Note that zero
#     means guest doesn't dirty memory (since 8.1)
#
# Since: 0.14
##
{ 'struct': 'MigrationInfo',
@@ -267,7 +287,9 @@
           '*postcopy-blocktime': 'uint32',
           '*postcopy-vcpu-blocktime': ['uint32'],
           '*compression': 'CompressionStats',
           '*socket-address': ['SocketAddress'] } }
           '*socket-address': ['SocketAddress'],
           '*dirty-limit-throttle-time-per-round': 'uint64',
           '*dirty-limit-ring-full-time': 'uint64'} }

##
# @query-migrate:
@@ -497,6 +519,16 @@
#     are present. 'return-path' capability must be enabled to use
#     it. (since 8.1)
#
# @dirty-limit: If enabled, migration will use the dirty-limit algo to
#     throttle down guest instead of auto-converge algo.
#     Throttle algo only works when vCPU's dirtyrate greater
#     than 'vcpu-dirty-limit', read processes in guest os
#     aren't penalized any more, so this algo can improve
#     performance of vCPU during live migration. This is an
#     optional performance feature and should not affect the
#     correctness of the existing auto-converge algo.
#     (since 8.1)
#
# Features:
#
# @unstable: Members @x-colo and @x-ignore-shared are experimental.
@@ -512,7 +544,8 @@
           'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
           { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
           'validate-uuid', 'background-snapshot',
           'zero-copy-send', 'postcopy-preempt', 'switchover-ack'] }
           'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
           'dirty-limit'] }
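For context, a management application would flip the new capability on before migrating, roughly as below. The command, capability, and parameter names come from the QAPI schema above; the URI and numeric values are made up for illustration:

```python
import json

# Hypothetical QMP command sequence: enable the dirty-limit capability,
# tune its parameters (x-vcpu-dirty-limit-period is an unstable member),
# then start migration. Destination URI and values are illustrative only.
commands = [
    {"execute": "migrate-set-capabilities",
     "arguments": {"capabilities": [
         {"capability": "dirty-limit", "state": True}]}},
    {"execute": "migrate-set-parameters",
     "arguments": {"vcpu-dirty-limit": 100,             # MB/s per vCPU
                   "x-vcpu-dirty-limit-period": 500}},  # milliseconds
    {"execute": "migrate",
     "arguments": {"uri": "tcp:dst-host:4444"}},
]

# QMP is newline-delimited JSON on the wire.
wire = "\n".join(json.dumps(cmd) for cmd in commands)
```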

##
# @MigrationCapabilityStatus:
@@ -789,9 +822,17 @@
#     Nodes are mapped to their block device name if there is one, and
#     to their node name otherwise. (Since 5.2)
#
# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
#     live migration. Should be in the range 1 to 1000ms,
#     defaults to 1000ms. (Since 8.1)
#
# @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
#     Defaults to 1. (Since 8.1)
#
# Features:
#
# @unstable: Member @x-checkpoint-delay is experimental.
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
#     are experimental.
#
# Since: 2.4
##
@@ -809,8 +850,10 @@
           'multifd-channels',
           'xbzrle-cache-size', 'max-postcopy-bandwidth',
           'max-cpu-throttle', 'multifd-compression',
           'multifd-zlib-level' ,'multifd-zstd-level',
           'block-bitmap-mapping' ] }
           'multifd-zlib-level', 'multifd-zstd-level',
           'block-bitmap-mapping',
           { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
           'vcpu-dirty-limit'] }

##
# @MigrateSetParameters:
@@ -945,9 +988,17 @@
#     Nodes are mapped to their block device name if there is one, and
#     to their node name otherwise. (Since 5.2)
#
# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
#     live migration. Should be in the range 1 to 1000ms,
#     defaults to 1000ms. (Since 8.1)
#
# @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
#     Defaults to 1. (Since 8.1)
#
# Features:
#
# @unstable: Member @x-checkpoint-delay is experimental.
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
#     are experimental.
#
# TODO: either fuse back into MigrationParameters, or make
#     MigrationParameters members mandatory
@@ -982,7 +1033,10 @@
           '*multifd-compression': 'MultiFDCompression',
           '*multifd-zlib-level': 'uint8',
           '*multifd-zstd-level': 'uint8',
           '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
           '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
           '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
                                           'features': [ 'unstable' ] },
           '*vcpu-dirty-limit': 'uint64'} }

##
# @migrate-set-parameters:
@@ -1137,9 +1191,17 @@
#     Nodes are mapped to their block device name if there is one, and
#     to their node name otherwise. (Since 5.2)
#
# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
#     live migration. Should be in the range 1 to 1000ms,
#     defaults to 1000ms. (Since 8.1)
#
# @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
#     Defaults to 1. (Since 8.1)
#
# Features:
#
# @unstable: Member @x-checkpoint-delay is experimental.
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
#     are experimental.
#
# Since: 2.4
##
@@ -1171,7 +1233,10 @@
           '*multifd-compression': 'MultiFDCompression',
           '*multifd-zlib-level': 'uint8',
           '*multifd-zstd-level': 'uint8',
           '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ] } }
           '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
           '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
                                           'features': [ 'unstable' ] },
           '*vcpu-dirty-limit': 'uint64'} }

##
# @query-migrate-parameters:

@@ -24,6 +24,9 @@
#include "hw/boards.h"
#include "sysemu/kvm.h"
#include "trace.h"
#include "migration/misc.h"
#include "migration/migration.h"
#include "migration/options.h"

/*
 * Dirtylimit stop working if dirty page rate error
@@ -75,14 +78,21 @@ static bool dirtylimit_quit;

static void vcpu_dirty_rate_stat_collect(void)
{
    MigrationState *s = migrate_get_current();
    VcpuStat stat;
    int i = 0;
    int64_t period = DIRTYLIMIT_CALC_TIME_MS;

    if (migrate_dirty_limit() &&
        migration_is_active(s)) {
        period = s->parameters.x_vcpu_dirty_limit_period;
    }

    /* calculate vcpu dirtyrate */
    vcpu_calculate_dirtyrate(DIRTYLIMIT_CALC_TIME_MS,
                             &stat,
                             GLOBAL_DIRTY_LIMIT,
                             false);
    vcpu_calculate_dirtyrate(period,
                             &stat,
                             GLOBAL_DIRTY_LIMIT,
                             false);

    for (i = 0; i < stat.nvcpu; i++) {
        vcpu_dirty_rate_stat->stat.rates[i].id = i;
@@ -426,6 +436,23 @@ static void dirtylimit_cleanup(void)
    dirtylimit_state_finalize();
}

/*
 * dirty page rate limit is not allowed to set if migration
 * is running with dirty-limit capability enabled.
 */
static bool dirtylimit_is_allowed(void)
{
    MigrationState *ms = migrate_get_current();

    if (migration_is_running(ms->state) &&
        (!qemu_thread_is_self(&ms->thread)) &&
        migrate_dirty_limit() &&
        dirtylimit_in_service()) {
        return false;
    }
    return true;
}
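The guard above rejects a user-issued set or cancel only while migration is running with the dirty-limit capability and the limit in service, unless the caller is the migration thread itself (which owns the limit). A boolean mirror of that predicate, with the four conditions as plain parameters for illustration:

```python
# Pure-Python mirror of dirtylimit_is_allowed(): every condition must hold
# for the request to be refused; the migration thread itself is exempt.
def dirtylimit_is_allowed(migration_running, is_migration_thread,
                          dirty_limit_cap, in_service):
    if (migration_running and not is_migration_thread
            and dirty_limit_cap and in_service):
        return False
    return True
```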

void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index,
                                 int64_t cpu_index,
                                 Error **errp)
@@ -439,6 +466,12 @@ void qmp_cancel_vcpu_dirty_limit(bool has_cpu_index,
        return;
    }

    if (!dirtylimit_is_allowed()) {
        error_setg(errp, "can't cancel dirty page rate limit while"
                   " migration is running");
        return;
    }

    if (!dirtylimit_in_service()) {
        return;
    }
@@ -489,6 +522,12 @@ void qmp_set_vcpu_dirty_limit(bool has_cpu_index,
        return;
    }

    if (!dirtylimit_is_allowed()) {
        error_setg(errp, "can't set dirty page rate limit while"
                   " migration is running");
        return;
    }

    if (!dirty_rate) {
        qmp_cancel_vcpu_dirty_limit(has_cpu_index, cpu_index, errp);
        return;
@@ -515,14 +554,54 @@ void hmp_set_vcpu_dirty_limit(Monitor *mon, const QDict *qdict)
    int64_t cpu_index = qdict_get_try_int(qdict, "cpu_index", -1);
    Error *err = NULL;

    qmp_set_vcpu_dirty_limit(!!(cpu_index != -1), cpu_index, dirty_rate, &err);
    if (err) {
        hmp_handle_error(mon, err);
        return;
    if (dirty_rate < 0) {
        error_setg(&err, "invalid dirty page limit %" PRId64, dirty_rate);
        goto out;
    }

    monitor_printf(mon, "[Please use 'info vcpu_dirty_limit' to query "
                   "dirty limit for virtual CPU]\n");
    qmp_set_vcpu_dirty_limit(!!(cpu_index != -1), cpu_index, dirty_rate, &err);

out:
    hmp_handle_error(mon, err);
}

/* Return the max throttle time of each virtual CPU */
uint64_t dirtylimit_throttle_time_per_round(void)
{
    CPUState *cpu;
    int64_t max = 0;

    CPU_FOREACH(cpu) {
        if (cpu->throttle_us_per_full > max) {
            max = cpu->throttle_us_per_full;
        }
    }

    return max;
}

/*
 * Estimate average dirty ring full time of each virtual CPU.
 * Return 0 if guest doesn't dirty memory.
 */
uint64_t dirtylimit_ring_full_time(void)
{
    CPUState *cpu;
    uint64_t curr_rate = 0;
    int nvcpus = 0;

    CPU_FOREACH(cpu) {
        if (cpu->running) {
            nvcpus++;
            curr_rate += vcpu_dirty_rate_get(cpu->cpu_index);
        }
    }

    if (!curr_rate || !nvcpus) {
        return 0;
    }

    return dirtylimit_dirty_ring_full_time(curr_rate / nvcpus);
}
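The estimate above is the dirty ring memory size divided by the average per-vCPU dirty page rate of the running vCPUs, returning 0 when the guest is idle. A simplified numeric sketch; the ring size and unit conversion here are illustrative placeholders, not QEMU's actual values (QEMU converts via dirtylimit_dirty_ring_full_time()):

```python
# Sketch of dirtylimit_ring_full_time(): average the dirty rates of the
# running vCPUs, then divide an (illustrative) ring memory size by the
# resulting bytes-per-microsecond rate. Returns 0 for an idle guest.
def ring_full_time_us(dirty_rates_mbps, ring_size_bytes=4096 * 4096):
    nvcpus = len(dirty_rates_mbps)      # rates of running vCPUs only
    curr_rate = sum(dirty_rates_mbps)
    if not curr_rate or not nvcpus:
        return 0                        # guest doesn't dirty memory
    avg_mbps = curr_rate / nvcpus
    bytes_per_us = avg_mbps * 1024 * 1024 / 1_000_000
    return round(ring_size_bytes / bytes_per_us)
```

A small average rate therefore yields a large ring-full time, which is how the value doubles as an indirect memory-load indicator.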

static struct DirtyLimitInfo *dirtylimit_query_vcpu(int cpu_index)

@@ -702,6 +702,8 @@ static int test_migrate_start(QTestState **from, QTestState **to,
{
    g_autofree gchar *arch_source = NULL;
    g_autofree gchar *arch_target = NULL;
    /* options for source and target */
    g_autofree gchar *arch_opts = NULL;
    g_autofree gchar *cmd_source = NULL;
    g_autofree gchar *cmd_target = NULL;
    const gchar *ignore_stderr;
@@ -709,7 +711,6 @@ static int test_migrate_start(QTestState **from, QTestState **to,
    g_autofree char *shmem_opts = NULL;
    g_autofree char *shmem_path = NULL;
    const char *arch = qtest_get_arch();
    const char *machine_opts = NULL;
    const char *memory_size;

    if (args->use_shmem) {
@@ -727,36 +728,29 @@ static int test_migrate_start(QTestState **from, QTestState **to,
        assert(sizeof(x86_bootsect) == 512);
        init_bootfile(bootpath, x86_bootsect, sizeof(x86_bootsect));
        memory_size = "150M";
        arch_source = g_strdup_printf("-drive file=%s,format=raw", bootpath);
        arch_target = g_strdup(arch_source);
        arch_opts = g_strdup_printf("-drive file=%s,format=raw", bootpath);
        start_address = X86_TEST_MEM_START;
        end_address = X86_TEST_MEM_END;
    } else if (g_str_equal(arch, "s390x")) {
        init_bootfile(bootpath, s390x_elf, sizeof(s390x_elf));
        memory_size = "128M";
        arch_source = g_strdup_printf("-bios %s", bootpath);
        arch_target = g_strdup(arch_source);
        arch_opts = g_strdup_printf("-bios %s", bootpath);
        start_address = S390_TEST_MEM_START;
        end_address = S390_TEST_MEM_END;
    } else if (strcmp(arch, "ppc64") == 0) {
        machine_opts = "vsmt=8";
        memory_size = "256M";
        start_address = PPC_TEST_MEM_START;
        end_address = PPC_TEST_MEM_END;
        arch_source = g_strdup_printf("-nodefaults "
                                      "-prom-env 'use-nvramrc?=true' -prom-env "
        arch_source = g_strdup_printf("-prom-env 'use-nvramrc?=true' -prom-env "
                                      "'nvramrc=hex .\" _\" begin %x %x "
                                      "do i c@ 1 + i c! 1000 +loop .\" B\" 0 "
                                      "until'", end_address, start_address);
        arch_target = g_strdup("");
        arch_opts = g_strdup("-nodefaults -machine vsmt=8");
    } else if (strcmp(arch, "aarch64") == 0) {
        init_bootfile(bootpath, aarch64_kernel, sizeof(aarch64_kernel));
        machine_opts = "virt,gic-version=max";
        memory_size = "150M";
        arch_source = g_strdup_printf("-cpu max "
                                      "-kernel %s",
                                      bootpath);
        arch_target = g_strdup(arch_source);
        arch_opts = g_strdup_printf("-machine virt,gic-version=max -cpu max "
                                    "-kernel %s", bootpath);
        start_address = ARM_TEST_MEM_START;
        end_address = ARM_TEST_MEM_END;

@@ -791,17 +785,17 @@ static int test_migrate_start(QTestState **from, QTestState **to,
        shmem_opts = g_strdup("");
    }

    cmd_source = g_strdup_printf("-accel kvm%s -accel tcg%s%s "
    cmd_source = g_strdup_printf("-accel kvm%s -accel tcg "
                                 "-name source,debug-threads=on "
                                 "-m %s "
                                 "-serial file:%s/src_serial "
                                 "%s %s %s %s",
                                 "%s %s %s %s %s",
                                 args->use_dirty_ring ?
                                 ",dirty-ring-size=4096" : "",
                                 machine_opts ? " -machine " : "",
                                 machine_opts ? machine_opts : "",
                                 memory_size, tmpfs,
                                 arch_source, shmem_opts,
                                 arch_opts ? arch_opts : "",
                                 arch_source ? arch_source : "",
                                 shmem_opts,
                                 args->opts_source ? args->opts_source : "",
                                 ignore_stderr);
    if (!args->only_target) {
@@ -811,18 +805,18 @@ static int test_migrate_start(QTestState **from, QTestState **to,
                             &got_src_stop);
    }

    cmd_target = g_strdup_printf("-accel kvm%s -accel tcg%s%s "
    cmd_target = g_strdup_printf("-accel kvm%s -accel tcg "
                                 "-name target,debug-threads=on "
                                 "-m %s "
                                 "-serial file:%s/dest_serial "
                                 "-incoming %s "
                                 "%s %s %s %s",
                                 "%s %s %s %s %s",
                                 args->use_dirty_ring ?
                                 ",dirty-ring-size=4096" : "",
                                 machine_opts ? " -machine " : "",
                                 machine_opts ? machine_opts : "",
                                 memory_size, tmpfs, uri,
                                 arch_target, shmem_opts,
                                 arch_opts ? arch_opts : "",
                                 arch_target ? arch_target : "",
                                 shmem_opts,
                                 args->opts_target ? args->opts_target : "",
                                 ignore_stderr);
    *to = qtest_init(cmd_target);
@@ -1245,10 +1239,9 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
                                    QTestState **to_ptr,
                                    MigrateCommon *args)
{
    g_autofree char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs);
    QTestState *from, *to;

    if (test_migrate_start(&from, &to, uri, &args->start)) {
    if (test_migrate_start(&from, &to, "defer", &args->start)) {
        return -1;
    }

@@ -1268,10 +1261,13 @@ static int migrate_postcopy_prepare(QTestState **from_ptr,
    migrate_ensure_non_converge(from);

    migrate_prepare_for_dirty_mem(from);
    qtest_qmp_assert_success(to, "{ 'execute': 'migrate-incoming',"
                             " 'arguments': { 'uri': 'tcp:127.0.0.1:0' }}");

    /* Wait for the first serial output from the source */
    wait_for_serial("src_serial");

    g_autofree char *uri = migrate_get_socket_address(to, "socket-address");
    migrate_qmp(from, uri, "{}");

    migrate_wait_for_dirty_mem(from, to);