/*
 * QEMU live migration
 *
 * Copyright IBM, Corp. 2008
 *
 * Authors:
 *  Anthony Liguori   <aliguori@us.ibm.com>
 *
 * This work is licensed under the terms of the GNU GPL, version 2. See
 * the COPYING file in the top-level directory.
 *
 */

#ifndef QEMU_MIGRATION_H
#define QEMU_MIGRATION_H
#include "exec/cpu-common.h"
#include "hw/qdev-core.h"
#include "qapi/qapi-types-migration.h"
#include "qapi/qmp/json-writer.h"
#include "qemu/thread.h"
#include "qemu/coroutine.h"
#include "io/channel.h"
#include "io/channel-buffer.h"
#include "net/announce.h"
#include "qom/object.h"
#include "postcopy-ram.h"
#include "sysemu/runstate.h"
#include "migration/misc.h"

struct PostcopyBlocktimeContext;

#define MIGRATION_RESUME_ACK_VALUE  (1)

/*
 * 1<<6=64 pages -> 256K chunk when page size is 4K.  This gives us
 * the benefit that all the chunks are 64-page aligned, so the
 * bitmaps are always aligned to a long.
 */
#define CLEAR_BITMAP_SHIFT_MIN            6
/*
 * 1<<18=256K pages -> 1G chunk when page size is 4K.  This is the
 * default value to use if none is specified.
 */
#define CLEAR_BITMAP_SHIFT_DEFAULT        18
/*
 * 1<<31=2G pages -> 8T chunk when page size is 4K.  This should be
 * big enough that we won't overflow easily.
 */
#define CLEAR_BITMAP_SHIFT_MAX            31
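
/*
 * Illustrative sketch (not part of the original header): how a clear
 * bitmap shift translates into a chunk size, following the examples in
 * the comments above.  "host_page_size" is an assumed parameter, not a
 * QEMU symbol.
 */
#if 0
static inline uint64_t clear_bitmap_chunk_size(unsigned shift,
                                               uint64_t host_page_size)
{
    /* e.g. shift=18 with 4K pages: (1ULL << 18) * 4096 = 1G per chunk */
    return (1ULL << shift) * host_page_size;
}
#endif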

/* This is an abstraction of a "temp huge page" for postcopy's purpose */
typedef struct {
    /*
     * This points to a temporary huge page as a buffer for UFFDIO_COPY.  It's
     * mmap()ed and must be freed during cleanup.
     */
    void *tmp_huge_page;
    /*
     * This points to the host page we're going to install for this temp page.
     * It tells us where to put the page once we've received all of it.
     */
    void *host_addr;
    /* Number of small pages copied (in size of TARGET_PAGE_SIZE) */
    unsigned int target_pages;
    /* Whether this page contains all zeros */
    bool all_zero;
} PostcopyTmpPage;
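
/*
 * Illustrative sketch (an assumption, not QEMU's actual loader code): once
 * every target page of a huge page has been copied into tmp_huge_page, the
 * whole page is installed at host_addr in one atomic UFFDIO_COPY, which is
 * why the buffer must hold the complete huge page first.
 */
#if 0
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

static int postcopy_tmp_page_install(int userfault_fd, PostcopyTmpPage *tmp,
                                     uint64_t hugepage_size)
{
    struct uffdio_copy copy = {
        .dst = (uint64_t)(uintptr_t)tmp->host_addr,
        .src = (uint64_t)(uintptr_t)tmp->tmp_huge_page,
        .len = hugepage_size,
        .mode = 0,
    };

    /* The kernel copies and maps the whole huge page atomically */
    return ioctl(userfault_fd, UFFDIO_COPY, &copy);
}
#endif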

typedef enum {
    PREEMPT_THREAD_NONE = 0,
    PREEMPT_THREAD_CREATED,
    PREEMPT_THREAD_QUIT,
} PreemptThreadStatus;

/* State for the incoming migration */
struct MigrationIncomingState {
    QEMUFile *from_src_file;
    /* Previously received RAM's RAMBlock pointer */
    RAMBlock *last_recv_block[RAM_CHANNEL_MAX];
    /* A hook to allow cleanup at the end of incoming migration */
    void *transport_data;
    void (*transport_cleanup)(void *data);
    /*
     * Used to sync thread creations.  Note that we can't create threads in
     * parallel with this sem.
     */
    QemuSemaphore thread_sync_sem;
    /*
     * Free at the start of the main state load, set as the main thread finishes
     * loading state.
     */
    QemuEvent main_thread_load_event;

    /* For network announces */
    AnnounceTimer  announce_timer;

    size_t         largest_page_size;
    bool           have_fault_thread;
    QemuThread     fault_thread;
    /* Set this when we want the fault thread to quit */
    bool           fault_thread_quit;

    bool           have_listen_thread;
    QemuThread     listen_thread;

    /* For the kernel to send us notifications */
    int       userfault_fd;
    /* To notify the fault_thread to wake, e.g., when it needs to quit */
    int       userfault_event_fd;
    QEMUFile *to_src_file;
    QemuMutex rp_mutex;    /* We send replies from multiple threads */
    /* RAMBlock of last request sent to source */
    RAMBlock *last_rb;
    /*
     * Number of postcopy channels including the default precopy channel, so
     * vanilla postcopy will only contain one channel which contains both
     * precopy and postcopy streams.
     *
     * This is calculated when the src requests to enable postcopy but before
     * it starts.  Its value can depend on e.g. whether postcopy preemption is
     * enabled.
     */
    unsigned int postcopy_channels;
    /* QEMUFile for postcopy only; it'll be handled by a separate thread */
    QEMUFile *postcopy_qemufile_dst;
    /*
     * When postcopy_qemufile_dst is properly set up, this sem is posted.
     * One can wait on this semaphore to wait until the preempt channel is
     * properly set up.
     */
    QemuSemaphore postcopy_qemufile_dst_done;
    /* Postcopy priority thread is used to receive postcopy requested pages */
    QemuThread postcopy_prio_thread;
    /*
     * Always set by the main vm load thread only, but can be read by the
     * postcopy preempt thread.  "volatile" makes sure all reads will be
     * up-to-date across cores.
     */
    volatile PreemptThreadStatus preempt_thread_status;
    /*
     * Used to sync between the ram load main thread and the fast ram load
     * thread.  It protects postcopy_qemufile_dst, which is the postcopy
     * fast channel.
     *
     * The ram fast load thread will take it mostly for the whole lifecycle
     * because it needs to continuously read data from the channel, and
     * it'll only release this mutex if postcopy is interrupted, so that
     * the ram load main thread will take this mutex over and properly
     * release the broken channel.
     */
    QemuMutex postcopy_prio_thread_mutex;
    /*
     * An array of temp host huge pages to be used, one for each postcopy
     * channel.
     */
    PostcopyTmpPage *postcopy_tmp_pages;
    /* This is shared for all postcopy channels */
    void     *postcopy_tmp_zero_page;
    /* PostCopyFD's for external userfaultfds & handlers of shared memory */
    GArray   *postcopy_remote_fds;

    int state;

    /*
     * The incoming migration coroutine, non-NULL during qemu_loadvm_state().
     * Used to wake the migration incoming coroutine from rdma code.  How
     * safe this is remains an open question.
     */
    Coroutine *loadvm_co;

    /* The coroutine we should enter (back) after failover */
    Coroutine *colo_incoming_co;
    QemuSemaphore colo_incoming_sem;

    /*
     * PostcopyBlocktimeContext to keep information for postcopy
     * live migration, to calculate vCPU block time
     */
    struct PostcopyBlocktimeContext *blocktime_ctx;

    /* notify PAUSED postcopy incoming migrations to try to continue */
    QemuSemaphore postcopy_pause_sem_dst;
    QemuSemaphore postcopy_pause_sem_fault;
    /*
     * This semaphore is used to let the ram fast load thread (only when
     * postcopy preempt is enabled) sleep when a network interruption is
     * detected.  When the recovery is done, the main load thread will
     * kick the fast ram load thread using this semaphore.
     */
    QemuSemaphore postcopy_pause_sem_fast_load;

    /* List of listening socket addresses */
    SocketAddressList *socket_address_list;

    /* A tree of pages that we requested to the source VM */
    GTree *page_requested;
    /*
     * For postcopy only, count the number of requested page faults that
     * still haven't been resolved.
     */
    int page_requested_count;
    /*
     * The mutex helps to maintain the requested pages that we sent to the
     * source, IOW, to guarantee coherency between the page_requested tree
     * and the per-ramblock receivedmap.  Note! This does not guarantee
     * consistency of the real page copy procedures (using
     * UFFDIO_[ZERO]COPY).  E.g., even if one bit in receivedmap is cleared,
     * UFFDIO_COPY could have happened for that page already.  This is
     * intentional, so that the mutex is never held across slow operations
     * like the UFFDIO_* ioctls.  However this should be enough to make
     * sure the page_requested tree always contains valid information.
     * (See the locking sketch after this struct.)
     */
    QemuMutex page_request_mutex;
    /*
     * If postcopy preempt is enabled, there is a chance that the main
     * thread finished loading its data before the preempt channel has
     * finished loading the urgent pages.  If that happens, the two threads
     * will use this condvar to synchronize, so the main thread will always
     * wait until all pages are received.
     */
    QemuCond page_request_cond;

    /*
     * Number of devices that have yet to approve switchover.  When this
     * reaches zero, an ACK that it's OK to do switchover is sent to the
     * source.  No lock is needed as this field is updated serially.
     * (See the countdown sketch after this struct.)
     */
    unsigned int switchover_ack_pending_num;

    /* Do exit on incoming migration failure */
    bool exit_on_error;
};
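
/*
 * Two illustrative sketches (assumptions, not QEMU's actual code) of how
 * the fields above are meant to be used.  The helper
 * ramblock_recv_bitmap_set_range() is assumed to behave like QEMU's
 * receivedmap updater; migrate_send_rp_switchover_ack() is declared later
 * in this header.
 */
#if 0
/*
 * Locking discipline for page_request_mutex: the page_requested tree and
 * the receivedmap are updated together under the lock, while the slow
 * UFFDIO_COPY itself happens outside of it.
 */
static void mark_postcopy_page_received(MigrationIncomingState *mis,
                                        RAMBlock *rb, void *host_addr)
{
    qemu_mutex_lock(&mis->page_request_mutex);
    g_tree_remove(mis->page_requested, host_addr);
    ramblock_recv_bitmap_set_range(rb, host_addr, 1);
    mis->page_requested_count--;
    qemu_mutex_unlock(&mis->page_request_mutex);
}

/*
 * Countdown for switchover_ack_pending_num: each device approval
 * decrements the counter; the ACK is sent when it hits zero.  Updated
 * serially, hence no locking.
 */
static void switchover_ack_device_approved(MigrationIncomingState *mis)
{
    if (--mis->switchover_ack_pending_num == 0) {
        migrate_send_rp_switchover_ack(mis);
    }
}
#endif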

MigrationIncomingState *migration_incoming_get_current(void);
void migration_incoming_state_destroy(void);
void migration_incoming_transport_cleanup(MigrationIncomingState *mis);
/*
 * Functions to work with blocktime context
 */
void fill_destination_postcopy_migration_info(MigrationInfo *info);

#define TYPE_MIGRATION "migration"

typedef struct MigrationClass MigrationClass;
DECLARE_OBJ_CHECKERS(MigrationState, MigrationClass,
                     MIGRATION_OBJ, TYPE_MIGRATION)

struct MigrationClass {
    /*< private >*/
    DeviceClass parent_class;
};

struct MigrationState {
    /*< private >*/
    DeviceState parent_obj;

    /*< public >*/
    QemuThread thread;
    /* Protected by qemu_file_lock */
    QEMUFile *to_dst_file;
    /* Postcopy specific transfer channel */
    QEMUFile *postcopy_qemufile_src;
    /*
     * It is posted when the preempt channel is established.  Note: this is
     * used for both the start and the recovery of a postcopy migration.
     * We'll post to this sem every time a new preempt channel is created
     * in the main thread, and we keep post() and wait() in pairs.
     */
    QemuSemaphore postcopy_qemufile_src_sem;
    QIOChannelBuffer *bioc;
    /*
     * Protects to_dst_file/from_dst_file pointers.  We need to make sure we
     * won't yield or hang during the critical section, since this lock will
     * be used in OOB command handler.
     */
    QemuMutex qemu_file_lock;

    /*
     * Used to allow urgent requests to override rate limiting.
     */
    QemuSemaphore rate_limit_sem;

    /* pages already sent at the beginning of current iteration */
    uint64_t iteration_initial_pages;

    /* pages transferred per second */
    double pages_per_second;

    /* bytes already sent at the beginning of current iteration */
    uint64_t iteration_initial_bytes;
    /* time at the start of current iteration */
    int64_t iteration_start_time;
    /*
     * The final stage happens when the remaining data is smaller than
     * this threshold; it's calculated from the requested downtime and
     * measured bandwidth, or avail-switchover-bandwidth if specified.
     * (See the sketch after this struct.)
     */
    uint64_t threshold_size;

    /* params from 'migrate-set-parameters' */
    MigrationParameters parameters;

    int state;

    /* State related to return path */
    struct {
        /* Protected by qemu_file_lock */
        QEMUFile     *from_dst_file;
        QemuThread    rp_thread;
        /*
         * We can also check non-zero of rp_thread, but there's no "official"
         * way to do this, so this bool makes it slightly more elegant.
         * Checking from_dst_file for this is racy because from_dst_file will
         * be cleared in the rp_thread!
         */
        bool          rp_thread_created;
        /*
         * Used to synchronize between the migration main thread and the
         * return path thread.  The migration thread can wait() on this sem,
         * while other threads (e.g., the return path thread) can kick it
         * using a post().
         */
        QemuSemaphore rp_sem;
        /*
         * We post to this when we get a PONG from dest.  So far it's an
         * easy way to know the main channel has been successfully
         * established on dest QEMU.
         */
        QemuSemaphore rp_pong_acks;
    } rp_state;

    double mbps;
    /* Timestamp when the current migration started (ms) */
    int64_t start_time;
    /* Total time used by latest migration (ms) */
    int64_t total_time;
    /* Timestamp (ms) when the VM was stopped to migrate the final state */
    int64_t downtime_start;
    int64_t downtime;
    int64_t expected_downtime;
    bool capabilities[MIGRATION_CAPABILITY__MAX];
    int64_t setup_time;

    /*
     * State before stopping the vm by vm_stop_force_state().
     * If migration is interrupted for any reason, we need to continue
     * running the guest on source if it was running, or restore its stopped
     * state.
     */
    RunState vm_old_state;

    /* Flag set once the migration has been asked to enter postcopy */
    bool start_postcopy;

    /* Flag set once the migration thread is running (and needs joining) */
    bool migration_thread_running;

    /* Flag set once the migration thread called bdrv_inactivate_all */
    bool block_inactive;

    /* Migration is waiting for guest to unplug device */
    QemuSemaphore wait_unplug_sem;

    /* Migration is paused due to pause-before-switchover */
    QemuSemaphore pause_sem;

    /* The semaphore is used to notify COLO thread that failover is finished */
    QemuSemaphore colo_exit_sem;

    /* The event is used to notify COLO thread to do checkpoint */
    QemuEvent colo_checkpoint_event;
    int64_t colo_checkpoint_time;
    QEMUTimer *colo_delay_timer;

    /*
     * The first error that has occurred.  We use the mutex to be able to
     * return the first error message.
     */
    Error *error;
    /* mutex to protect the error field */
    QemuMutex error_mutex;

    /*
     * Global switch on whether we need to store the global state
     * during migration.
     */
    bool store_global_state;

    /* Whether we send QEMU_VM_CONFIGURATION during migration */
    bool send_configuration;
    /* Whether we send section footer during migration */
    bool send_section_footer;

    /* Needed by postcopy-pause state */
    QemuSemaphore postcopy_pause_sem;
    /*
     * This variable only affects behavior when postcopy preempt mode is
     * enabled.
     *
     * When set:
     *
     * - postcopy preempt src QEMU instance will generate an EOS message at
     *   the end of migration to shut the preempt channel on dest side.
     *
     * - postcopy preempt channel will be created at the setup phase on src
     *   QEMU.
     *
     * When clear:
     *
     * - postcopy preempt src QEMU instance will _not_ generate an EOS
     *   message at the end of migration, the dest qemu will shutdown the
     *   channel itself.
     *
     * - postcopy preempt channel will be created at the switching phase
     *   from precopy -> postcopy (to avoid race condition of misordered
     *   creation of channels).
     *
     * NOTE: See message-id <ZBoShWArKDPpX/D7@work-vm> on qemu-devel
     * mailing list for more information on the possible race.  Everyone
     * should probably just keep this value untouched once set by the
     * machine type (or the default).
     */
    bool preempt_pre_7_2;

    /*
     * Flush every channel after each section sent.
     *
     * This assures that we can't mix pages from one iteration through
     * ram pages with pages for the following iteration.  We really
     * only need to do this flush after we have gone through all the
     * dirty pages.  For historical reasons, we do that after each
     * section.  This is suboptimal (we flush too many times).
     * Default value is false. (since 8.1)
     */
    bool multifd_flush_after_each_section;
    /*
     * This decides the size of guest memory chunk that will be used
     * to track dirty bitmap clearing.  The size of memory chunk will
     * be GUEST_PAGE_SIZE << N.  Say, N=0 means we will clear dirty
     * bitmap for each page to send (1<<0=1); N=10 means we will clear
     * dirty bitmap only once for 1<<10=1K continuous guest pages
     * (which is a 4M chunk, with 4K pages).
     */
    uint8_t clear_bitmap_shift;

    /*
     * This saves the hostname when outgoing migration starts
     */
    char *hostname;

    /* QEMU_VM_VMDESCRIPTION content filled for all non-iterable devices. */
    JSONWriter *vmdesc;

    /*
     * Indicates whether an ACK from the destination that it's OK to do
     * switchover has been received.
     */
    bool switchover_acked;
    /* Is this an RDMA migration */
    bool rdma_migration;
};
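
/*
 * Illustrative sketch (an assumption, not QEMU's actual code) of the
 * switchover decision described at threshold_size above: the threshold is
 * the amount of data that can be transferred within the downtime limit at
 * the available bandwidth, and the final stage starts once the remaining
 * dirty data fits under it.  Parameter names are hypothetical.
 */
#if 0
static bool switchover_allowed(uint64_t pending_bytes,
                               uint64_t bandwidth_bytes_per_ms,
                               uint64_t downtime_limit_ms)
{
    /* e.g. ~1048576 bytes/ms (1 GiB/s) with a 300ms limit -> ~300M threshold */
    uint64_t threshold_size = bandwidth_bytes_per_ms * downtime_limit_ms;

    return pending_bytes <= threshold_size;
}
#endif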

void migrate_set_state(int *state, int old_state, int new_state);

void migration_fd_process_incoming(QEMUFile *f);
void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp);
void migration_incoming_process(void);

bool migration_has_all_channels(void);

void migrate_set_error(MigrationState *s, const Error *error);
bool migrate_has_error(MigrationState *s);

void migrate_fd_connect(MigrationState *s, Error *error_in);

int migration_call_notifiers(MigrationState *s, MigrationEventType type,
                             Error **errp);

int migrate_init(MigrationState *s, Error **errp);
bool migration_is_blocked(Error **errp);
/* True if outgoing migration has entered postcopy phase */
bool migration_in_postcopy(void);
bool migration_postcopy_is_alive(int state);
MigrationState *migrate_get_current(void);
bool migration_has_failed(MigrationState *);
bool migrate_mode_is_cpr(MigrationState *);

uint64_t ram_get_total_transferred_pages(void);

/* Sending on the return path - generic and then for each message type */
void migrate_send_rp_shut(MigrationIncomingState *mis,
                          uint32_t value);
void migrate_send_rp_pong(MigrationIncomingState *mis,
                          uint32_t value);
int migrate_send_rp_req_pages(MigrationIncomingState *mis, RAMBlock *rb,
                              ram_addr_t start, uint64_t haddr);
int migrate_send_rp_message_req_pages(MigrationIncomingState *mis,
                                      RAMBlock *rb, ram_addr_t start);
void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
                                 char *block_name);
void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value);
int migrate_send_rp_switchover_ack(MigrationIncomingState *mis);

void dirty_bitmap_mig_before_vm_start(void);
void dirty_bitmap_mig_cancel_outgoing(void);
void dirty_bitmap_mig_cancel_incoming(void);
bool check_dirty_bitmap_mig_alias_map(const BitmapMigrationNodeAliasList *bbm,
                                      Error **errp);

void migrate_add_address(SocketAddress *address);
bool migrate_uri_parse(const char *uri, MigrationChannel **channel,
                       Error **errp);
int foreach_not_ignored_block(RAMBlockIterFunc func, void *opaque);
|
|
|
|
|
2018-06-05 19:25:45 +03:00
|
|
|
#define qemu_ram_foreach_block \
|
2019-02-15 20:45:46 +03:00
|
|
|
#warning "Use foreach_not_ignored_block in migration code"
|
2018-06-05 19:25:45 +03:00
|
|
|
|
2018-06-13 13:26:41 +03:00
|
|
|
void migration_make_urgent_request(void);
|
|
|
|
void migration_consume_urgent_request(void);
|
2019-12-05 13:29:18 +03:00
|
|
|
bool migration_rate_limit(void);
|
2024-01-20 02:39:22 +03:00
|
|
|
void migration_bh_schedule(QEMUBHFunc *cb, void *opaque);
|
2021-09-29 17:43:10 +03:00
|
|
|
void migration_cancel(const Error *error);
|
2018-06-13 13:26:41 +03:00
|
|
|
|
2023-09-06 18:08:48 +03:00
|
|
|
void migration_populate_vfio_info(MigrationInfo *info);
|
|
|
|
void migration_reset_vfio_bytes_transferred(void);
|
migration: Introduce postcopy channels on dest node
Postcopy handles huge pages in a special way: currently we can only have
one "channel" to transfer the page.
That is because when we install pages using UFFDIO_COPY, we need to have
the whole huge page ready; it also means we need a temp huge page when
trying to receive the whole content of the page.
Currently all maintenance around this tmp page is global: first we
allocate a temp huge page, then we maintain its status mostly within
ram_load_postcopy().
To enable multiple channels for postcopy, the first thing we need to do is to
prepare N temp huge pages as caching, one for each channel.
Meanwhile we need to maintain the tmp huge page status per-channel too.
To give some examples, the local variables maintained in ram_load_postcopy()
that are responsible for keeping the temp huge page status are listed below:
- all_zero: this keeps whether this huge page contains all zeros
- target_pages: this counts how many target pages have been copied
- host_page: this keeps the host ptr for the page to install
Move all these fields to be together with the temp huge pages to form a new
structure called PostcopyTmpPage. Then for each (future) postcopy channel, we
need one structure to keep the state around.
For vanilla postcopy, obviously there's only one channel. It contains both
precopy and postcopy pages.
This patch teaches the dest migration node to track the possible number
of postcopy channels by introducing the "postcopy_channels" variable. Its
value is calculated when setting up postcopy on the dest node (during the
POSTCOPY_LISTEN phase).
Vanilla postcopy will have channels=1, but when the postcopy-preempt
capability is enabled (in the future), we will boost it to 2, because even
during partial sending of a precopy huge page we still want to preempt it
and start sending the requested postcopy page right away (so we start to
keep two temp huge pages; more if we want to enable multifd). In this
patch there's a TODO marked for that; so far the channel count is always
set to 1.
We need to send each "host huge page" on one channel only and cannot split
it, because otherwise data for the same huge page could end up on more
than one channel, which would require more complicated logic to manage.
One temp host huge page for each channel will be enough for us for now.
Postcopy will still always use the index=0 huge page even after this patch.
However it prepares for later patches where we can start to use multiple
channels (which needs src intervention, because only src knows which
channel we should use).
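Roughly, the per-channel state being introduced can be sketched as below
(field names follow the variables listed above; take it as an illustration
of the idea rather than the exact in-tree definition):
    typedef struct {
        /* Backing storage to assemble one whole temp huge page */
        void *tmp_huge_page;
        /* Host pointer of the page to install ("host_page" above) */
        void *host_addr;
        /* How many target pages were copied into the temp page */
        unsigned int target_pages;
        /* Whether this huge page contains all zeros so far */
        bool all_zero;
    } PostcopyTmpPage;
MigrationIncomingState can then hold postcopy_channels of these, one per
channel, with vanilla postcopy always using index 0.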
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220301083925.33483-5-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
dgilbert: Fixed up long line
2022-03-01 11:39:04 +03:00
|
|
|
void postcopy_temp_page_reset(PostcopyTmpPage *tmp_page);
|
2021-04-14 14:20:02 +03:00
|
|
|
|
migration: Allow network to fail even during recovery
Normally the postcopy recover phase should only exist for a super short
period: the duration when QEMU is trying to recover from an interrupted
postcopy migration, during which a handshake is carried out to continue
the procedure, with state changes from PAUSED -> RECOVER ->
POSTCOPY_ACTIVE again.
The RECOVER phase should be super small; it happens right after the
admin specifies a new but working network link for QEMU to reconnect to
dest QEMU.
However, there can still be cases where the channel is broken within this
small RECOVER window.
If that happens, with the current code there's no way the src QEMU can get
kicked out of RECOVER stage, nor any way to retry the recovery on another
channel once one is established.
This patch allows the RECOVER phase to fail itself too - we're mostly
ready, just some small things missing, e.g. properly kicking the main
migration thread out when it sleeps on rp_sem while we're at RECOVER
stage. When this happens, it fails the RECOVER itself and rolls back to
PAUSED stage. Then the user can retry another round of recovery.
To make it even stronger, teach the QMP command migrate-pause to
explicitly kick src/dst QEMU out when needed, so even if for some reason
the migration thread wasn't already kicked out by a failing return-path
thread, the admin can still kick it out.
This will be a super, super corner case, but still try to cover it.
One can try to test this with two proxy channels for migration:
(a) socat unix-listen:/tmp/src.sock,reuseaddr,fork tcp:localhost:10000
(b) socat tcp-listen:10000,reuseaddr,fork unix:/tmp/dst.sock
So the migration channel will be:
(a) (b)
src -> /tmp/src.sock -> tcp:10000 -> /tmp/dst.sock -> dst
Then to make QEMU hang at RECOVER stage, one can do below:
(1) stop the postcopy using QMP command postcopy-pause
(2) kill the 2nd proxy (b)
(3) try to recover the postcopy using /tmp/src.sock on src
(4) src QEMU will go into RECOVER stage but won't be able to continue
from there, because the channel is actually broken at (b)
Before this patch, step (4) will leave src QEMU stuck in RECOVER stage,
without a way to kick the QEMU out or continue the postcopy again. After
this patch, (4) will quickly fail QEMU and bounce back to PAUSED stage.
The admin can also kick QEMU from (4) into PAUSED using migrate-pause
when needed.
After bouncing back to PAUSED stage, one can recover again.
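For instance, once src QEMU is stuck at (4), the admin can issue the below
on the src QMP monitor (a sketch; the "resume" flag of the migrate command
is how a postcopy recovery is normally retried, but take the exact
arguments as an assumption rather than this patch's own test):
{"execute": "migrate-pause"}
  <- src bounces RECOVER -> PAUSED
{"execute": "migrate", "arguments": {"uri": "unix:/tmp/src.sock", "resume": true}}
  <- retry the recovery over the repaired channel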
Reported-by: Xiaohui Li <xiaohli@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2111332
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231017202633.296756-3-peterx@redhat.com>
2023-10-17 23:26:30 +03:00
|
|
|
/*
|
|
|
|
* Migration thread waiting for return path thread. Return non-zero if an
|
|
|
|
* error is detected.
|
|
|
|
*/
|
|
|
|
int migration_rp_wait(MigrationState *s);
|
2023-10-05 01:02:37 +03:00
|
|
|
/*
|
|
|
|
* Kick the migration thread waiting for return path messages. NOTE: the
|
|
|
|
* name can be slightly confusing (it can read as "kick the rp thread"); just
|
|
|
|
* remember that the target is always the migration thread.
|
|
|
|
*/
|
|
|
|
void migration_rp_kick(MigrationState *s);
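/*
 * A minimal sketch (an assumption, not the in-tree implementation) of how
 * this pair can be wired together, assuming MigrationState carries a
 * QemuSemaphore named rp_sem for the migration thread to sleep on:
 *
 *   int migration_rp_wait(MigrationState *s)
 *   {
 *       qemu_sem_wait(&s->rp_sem);                // sleep until kicked
 *       return migration_has_failed(s) ? -1 : 0;  // non-zero on error
 *   }
 *
 *   void migration_rp_kick(MigrationState *s)
 *   {
 *       qemu_sem_post(&s->rp_sem);    // always wakes the migration thread
 *   }
 */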
|
|
|
|
|
2008-10-13 07:12:02 +04:00
|
|
|
#endif
|