qemu/migration/migration.h

/*
* QEMU live migration
*
* Copyright IBM, Corp. 2008
*
* Authors:
* Anthony Liguori <aliguori@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
*/
#ifndef QEMU_MIGRATION_H
#define QEMU_MIGRATION_H
#include "exec/cpu-common.h"
#include "hw/qdev-core.h"
#include "qapi/qapi-types-migration.h"
#include "qemu/thread.h"
#include "qemu/coroutine_int.h"
#include "io/channel.h"
#include "io/channel-buffer.h"
#include "net/announce.h"
#include "qom/object.h"
struct PostcopyBlocktimeContext;
#define MIGRATION_RESUME_ACK_VALUE (1)
/*
* 1<<6=64 pages -> 256K chunk when page size is 4K. This gives us
* the benefit that all the chunks are aligned to 64 pages, so the
* clear bitmaps are always aligned to an unsigned long.
*/
#define CLEAR_BITMAP_SHIFT_MIN 6
/*
* 1<<18=256K pages -> 1G chunk when page size is 4K. This is the
* default value to use if none is specified.
*/
#define CLEAR_BITMAP_SHIFT_DEFAULT 18
/*
* 1<<31=2G pages -> 8T chunk when page size is 4K. This should be
* big enough to make sure we won't overflow easily.
*/
#define CLEAR_BITMAP_SHIFT_MAX 31
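/*
 * Illustrative sketch (a hypothetical helper, not part of the QEMU API):
 * how the chunk size in bytes follows from a clear-bitmap shift and the
 * guest page size. With 4K pages, CLEAR_BITMAP_SHIFT_DEFAULT (18) gives
 * (1ULL << 18) * 4K = 1G per chunk, matching the comments above.
 */
static inline uint64_t clear_bitmap_chunk_bytes_example(uint8_t shift,
                                                        uint64_t page_size)
{
    return (1ULL << shift) * page_size;
}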
/* This is an abstraction of a "temp huge page" for postcopy's purpose */
typedef struct {
/*
* This points to a temporary huge page used as a buffer for UFFDIO_COPY.
* It's mmap()ed and needs to be freed during cleanup.
*/
void *tmp_huge_page;
/*
* This points to the host page we're going to install for this temp page.
* It tells us where to put the page once we've received all of it.
*/
void *host_addr;
/* Number of small pages copied (in size of TARGET_PAGE_SIZE) */
unsigned int target_pages;
/* Whether this page contains all zeros */
bool all_zero;
} PostcopyTmpPage;
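/*
 * Hedged sketch of resetting the tracking state above between huge pages
 * (the real helper is postcopy_temp_page_reset(), declared near the end of
 * this header; this example and its assumptions are illustrative only).
 * The mmap()ed buffer itself is kept and reused for the next huge page.
 */
static inline void postcopy_tmp_page_reset_sketch(PostcopyTmpPage *tmp_page)
{
    tmp_page->target_pages = 0;
    tmp_page->host_addr = NULL;
    /* Assume "not all zero" until the loader decides otherwise (sketch). */
    tmp_page->all_zero = false;
    /* tmp_huge_page stays mapped; only the per-page state is cleared. */
}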
/* State for the incoming migration */
struct MigrationIncomingState {
QEMUFile *from_src_file;
/* Previously received RAM's RAMBlock pointer */
RAMBlock *last_recv_block;
/* A hook to allow cleanup at the end of incoming migration */
void *transport_data;
void (*transport_cleanup)(void *data);
/*
* Used to sync thread creations. Note that we can't create threads in
* parallel with this sem.
*/
QemuSemaphore thread_sync_sem;
/*
* Reset (not set) at the start of the main state load; set once the main
* thread finishes loading state.
*/
QemuEvent main_thread_load_event;
/* For network announces */
AnnounceTimer announce_timer;
size_t largest_page_size;
bool have_fault_thread;
QemuThread fault_thread;
/* Set this when we want the fault thread to quit */
bool fault_thread_quit;
bool have_listen_thread;
QemuThread listen_thread;
/* For the kernel to send us notifications */
int userfault_fd;
/* To notify the fault_thread to wake, e.g., when need to quit */
int userfault_event_fd;
QEMUFile *to_src_file;
QemuMutex rp_mutex; /* We send replies from multiple threads */
/* RAMBlock of last request sent to source */
RAMBlock *last_rb;
/*
* Number of postcopy channels including the default precopy channel, so
* vanilla postcopy will only contain one channel, which contains both
* the precopy and postcopy streams.
*
* This is calculated when the src requests to enable postcopy but before
* it starts. Its value can depend on e.g. whether postcopy preemption is
* enabled.
*/
unsigned int postcopy_channels;
/*
* An array of temp host huge pages to be used, one for each postcopy
* channel.
*/
PostcopyTmpPage *postcopy_tmp_pages;
/* This is shared for all postcopy channels */
void *postcopy_tmp_zero_page;
/* PostCopyFD's for external userfaultfds & handlers of shared memory */
GArray *postcopy_remote_fds;
QEMUBH *bh;
int state;
bool have_colo_incoming_thread;
QemuThread colo_incoming_thread;
/* The coroutine we should enter (back) after failover */
Coroutine *migration_incoming_co;
QemuSemaphore colo_incoming_sem;
/*
* PostcopyBlocktimeContext keeps information for postcopy live
* migration, used to calculate vCPU block time.
*/
struct PostcopyBlocktimeContext *blocktime_ctx;
/* notify PAUSED postcopy incoming migrations to try to continue */
QemuSemaphore postcopy_pause_sem_dst;
QemuSemaphore postcopy_pause_sem_fault;
/* List of listening socket addresses */
SocketAddressList *socket_address_list;
/* A tree of pages that we have requested from the source VM */
GTree *page_requested;
/* For debugging purposes only, but would be nice to keep */
int page_requested_count;
/*
* The mutex helps to maintain the requested pages that we sent to the
* source, IOW, to guarantee coherence between the page_requested tree and
* the per-ramblock receivedmap. Note! This does not guarantee consistency
* of the real page copy procedures (using UFFDIO_[ZERO]COPY). E.g., even
* if one bit in receivedmap is cleared, UFFDIO_COPY could have happened
* for that page already. This is intentional, so that holders of the
* mutex won't be blocked by slow operations like the UFFDIO_* ioctls.
* However this should be enough to make sure the page_requested tree
* always contains valid information.
*/
QemuMutex page_request_mutex;
};
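/*
 * Hedged illustration of the page_request_mutex pattern described above
 * (mark_page_received_example is a hypothetical helper, not QEMU code):
 * the mutex is held only while updating the page_requested tree (and the
 * per-ramblock receivedmap), never across the UFFDIO_* ioctls themselves.
 */
static inline void mark_page_received_example(MigrationIncomingState *mis,
                                              void *host_addr)
{
    /* The UFFDIO_COPY/UFFDIO_ZEROPAGE ioctl itself runs outside this lock. */
    qemu_mutex_lock(&mis->page_request_mutex);
    if (g_tree_remove(mis->page_requested, host_addr)) {
        mis->page_requested_count--;
    }
    /* Setting the receivedmap bit for this page would also happen here. */
    qemu_mutex_unlock(&mis->page_request_mutex);
}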
MigrationIncomingState *migration_incoming_get_current(void);
void migration_incoming_state_destroy(void);
void migration_incoming_transport_cleanup(MigrationIncomingState *mis);
/*
* Functions to work with blocktime context
*/
void fill_destination_postcopy_migration_info(MigrationInfo *info);
#define TYPE_MIGRATION "migration"
typedef struct MigrationClass MigrationClass;
DECLARE_OBJ_CHECKERS(MigrationState, MigrationClass,
MIGRATION_OBJ, TYPE_MIGRATION)
struct MigrationClass {
/*< private >*/
DeviceClass parent_class;
};
struct MigrationState {
/*< private >*/
DeviceState parent_obj;
/*< public >*/
QemuThread thread;
QEMUBH *vm_start_bh;
QEMUBH *cleanup_bh;
/* Protected by qemu_file_lock */
QEMUFile *to_dst_file;
QIOChannelBuffer *bioc;
/*
* Protects to_dst_file/from_dst_file pointers. We need to make sure we
* won't yield or hang during the critical section, since this lock will be
* used in OOB command handler.
*/
QemuMutex qemu_file_lock;
/*
* Used to allow urgent requests to override rate limiting.
*/
QemuSemaphore rate_limit_sem;
/* pages already sent at the beginning of current iteration */
uint64_t iteration_initial_pages;
/* pages transferred per second */
double pages_per_second;
/* bytes already sent at the beginning of current iteration */
uint64_t iteration_initial_bytes;
/* time at the start of current iteration */
int64_t iteration_start_time;
/*
* The final stage happens when the remaining data is smaller than
* this threshold; it's calculated from the requested downtime and
* the measured bandwidth.
*/
int64_t threshold_size;
/* params from 'migrate-set-parameters' */
MigrationParameters parameters;
int state;
/* State related to return path */
struct {
/* Protected by qemu_file_lock */
QEMUFile *from_dst_file;
QemuThread rp_thread;
bool error;
/*
* We could also check whether rp_thread is non-zero, but there's no
* "official" way to do that, so this bool makes it slightly more elegant.
* Checking from_dst_file for this is racy because from_dst_file will
* be cleared in the rp_thread!
*/
bool rp_thread_created;
QemuSemaphore rp_sem;
} rp_state;
double mbps;
/* Timestamp when recent migration starts (ms) */
int64_t start_time;
/* Total time used by latest migration (ms) */
int64_t total_time;
/* Timestamp (ms) when the VM was stopped to migrate the final state */
int64_t downtime_start;
int64_t downtime;
int64_t expected_downtime;
bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
int64_t setup_time;
/*
* Whether the guest was running when we entered the completion stage.
* If migration is interrupted for any reason, we need to continue
* running the guest on the source.
*/
bool vm_was_running;
/* Flag set once the migration has been asked to enter postcopy */
bool start_postcopy;
/* Flag set after postcopy has sent the device state */
bool postcopy_after_devices;
/* Flag set once the migration thread is running (and needs joining) */
bool migration_thread_running;
/* Flag set once the migration thread called bdrv_inactivate_all */
bool block_inactive;
/* Migration is waiting for guest to unplug device */
QemuSemaphore wait_unplug_sem;
/* Migration is paused due to pause-before-switchover */
QemuSemaphore pause_sem;
/* The semaphore is used to notify COLO thread that failover is finished */
QemuSemaphore colo_exit_sem;
/* The event is used to notify COLO thread to do checkpoint */
QemuEvent colo_checkpoint_event;
int64_t colo_checkpoint_time;
QEMUTimer *colo_delay_timer;
/*
* The first error that has occurred. We use the mutex to be able to
* return the first error message.
*/
Error *error;
/* mutex to protect errp */
QemuMutex error_mutex;
/* Do we have to clean up -b/-i from old migrate parameters */
/* This feature is deprecated and will be removed */
bool must_remove_block_options;
/*
* Global switch on whether we need to store the global state
* during migration.
*/
bool store_global_state;
/* Whether we send QEMU_VM_CONFIGURATION during migration */
bool send_configuration;
/* Whether we send section footer during migration */
bool send_section_footer;
/* Needed by postcopy-pause state */
QemuSemaphore postcopy_pause_sem;
QemuSemaphore postcopy_pause_rp_sem;
/*
* Whether we abort the migration if decompression errors are
* detected at the destination. It is left at false for qemu
* older than 3.0, since only newer qemu sends streams that
* do not trigger spurious decompression errors.
*/
bool decompress_error_check;
/*
* This decides the size of the guest memory chunk that will be used
* to track dirty bitmap clearing. The size of a memory chunk will
* be GUEST_PAGE_SIZE << N. Say, N=0 means we will clear the dirty
* bitmap for each page to send (1<<0=1); N=10 means we will clear the
* dirty bitmap only once per 1<<10=1K contiguous guest pages
* (a 4M chunk with 4K pages).
*/
uint8_t clear_bitmap_shift;
/*
* This saves the hostname when the outgoing migration starts
*/
char *hostname;
};
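/*
 * Hedged sketch of how threshold_size relates to the fields above (not the
 * actual QEMU implementation): the switchover threshold is roughly the
 * amount of data that can still be sent within the requested downtime at
 * the measured bandwidth. The bandwidth unit (bytes per millisecond) and
 * the use of parameters.downtime_limit (milliseconds) are assumptions of
 * this example.
 */
static inline void update_threshold_size_sketch(MigrationState *s,
                                                double bandwidth_bytes_per_ms)
{
    s->threshold_size = (int64_t)(bandwidth_bytes_per_ms *
                                  s->parameters.downtime_limit);
}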
void migrate_set_state(int *state, int old_state, int new_state);
void migration_fd_process_incoming(QEMUFile *f, Error **errp);
void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp);
void migration_incoming_process(void);
bool migration_has_all_channels(void);
uint64_t migrate_max_downtime(void);
void migrate_set_error(MigrationState *s, const Error *error);
void migrate_fd_error(MigrationState *s, const Error *error);
void migrate_fd_connect(MigrationState *s, Error *error_in);
bool migration_is_setup_or_active(int state);
bool migration_is_running(int state);
void migrate_init(MigrationState *s);
bool migration_is_blocked(Error **errp);
/* True if outgoing migration has entered postcopy phase */
bool migration_in_postcopy(void);
MigrationState *migrate_get_current(void);
bool migrate_postcopy(void);
bool migrate_release_ram(void);
bool migrate_postcopy_ram(void);
bool migrate_zero_blocks(void);
bool migrate_dirty_bitmaps(void);
bool migrate_ignore_shared(void);
bool migrate_validate_uuid(void);
bool migrate_auto_converge(void);
bool migrate_use_multifd(void);
bool migrate_pause_before_switchover(void);
int migrate_multifd_channels(void);
MultiFDCompression migrate_multifd_compression(void);
int migrate_multifd_zlib_level(void);
int migrate_multifd_zstd_level(void);
int migrate_use_xbzrle(void);
uint64_t migrate_xbzrle_cache_size(void);
bool migrate_colo_enabled(void);
bool migrate_use_block(void);
bool migrate_use_block_incremental(void);
int migrate_max_cpu_throttle(void);
bool migrate_use_return_path(void);
uint64_t ram_get_total_transferred_pages(void);
bool migrate_use_compression(void);
int migrate_compress_level(void);
int migrate_compress_threads(void);
int migrate_compress_wait_thread(void);
int migrate_decompress_threads(void);
bool migrate_use_events(void);
bool migrate_postcopy_blocktime(void);
bool migrate_background_snapshot(void);
/* Sending on the return path - generic and then for each message type */
void migrate_send_rp_shut(MigrationIncomingState *mis,
uint32_t value);
void migrate_send_rp_pong(MigrationIncomingState *mis,
uint32_t value);
int migrate_send_rp_req_pages(MigrationIncomingState *mis, RAMBlock *rb,
ram_addr_t start, uint64_t haddr);
int migrate_send_rp_message_req_pages(MigrationIncomingState *mis,
RAMBlock *rb, ram_addr_t start);
void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
char *block_name);
void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value);
void dirty_bitmap_mig_before_vm_start(void);
void dirty_bitmap_mig_cancel_outgoing(void);
void dirty_bitmap_mig_cancel_incoming(void);
bool check_dirty_bitmap_mig_alias_map(const BitmapMigrationNodeAliasList *bbm,
Error **errp);
void migrate_add_address(SocketAddress *address);
int foreach_not_ignored_block(RAMBlockIterFunc func, void *opaque);
#define qemu_ram_foreach_block \
#warning "Use foreach_not_ignored_block in migration code"
void migration_make_urgent_request(void);
void migration_consume_urgent_request(void);
bool migration_rate_limit(void);
void migration_cancel(const Error *error);
void populate_vfio_info(MigrationInfo *info);
void postcopy_temp_page_reset(PostcopyTmpPage *tmp_page);
bool migrate_multi_channels_is_allowed(void);
void migrate_protocol_allow_multi_channels(bool allow);
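/*
 * Hedged illustration (a hypothetical helper, not the actual QEMU setup
 * path): allocating one PostcopyTmpPage per postcopy channel as described
 * in MigrationIncomingState, and resetting each before use. The mmap() of
 * each tmp_huge_page buffer is omitted here.
 */
static inline void postcopy_tmp_pages_setup_example(MigrationIncomingState *mis,
                                                    unsigned int channels)
{
    mis->postcopy_channels = channels;
    mis->postcopy_tmp_pages = g_new0(PostcopyTmpPage, channels);
    for (unsigned int i = 0; i < channels; i++) {
        postcopy_temp_page_reset(&mis->postcopy_tmp_pages[i]);
    }
}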
#endif