After the next patch, each vmstate field will extract parts of a larger
(32x512-bit) array, so we cannot check the vmstate field against the
type of the array.
While changing this, change the macros to accept the index of the first
element (which will not be 0 for Hi16_ZMM_REGS) instead of the number
of elements (which is always CPU_NB_REGS).
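For illustration, a minimal sketch of what the reworked macro could look like, assuming a SUB_ARRAY-style helper that takes a start index and an element count (names here are illustrative, not the final code):

    #define VMSTATE_XMM_REGS(_field, _state, _start)                     \
        VMSTATE_STRUCT_SUB_ARRAY(_field, _state, _start, CPU_NB_REGS, 0, \
                                 vmstate_xmm_reg, XMMReg)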
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While we cannot check against the type of the full array, we can check
against the type of the fields.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Add QEMUFile interface to allow a socket to be 'shut down' - i.e. any
reads/writes will fail (and any blocking read/write will be woken).
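A minimal sketch of the interface and of how the socket backend might implement it (assuming a QEMUFileSocket wrapper around a plain fd):

    int qemu_file_shutdown(QEMUFile *f);

    static int socket_shutdown(void *opaque, bool rd, bool wr)
    {
        QEMUFileSocket *s = opaque;

        /* shut down reads, writes, or both, depending on the flags */
        if (shutdown(s->fd, rd ? (wr ? SHUT_RDWR : SHUT_RD) : SHUT_WR)) {
            return -errno;
        }
        return 0;
    }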
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Avoid hot pages in the XBZRLE cache being replaced by other pages; this
remarkably decreases cache misses.
Sample results with the test program quoted from xbzrle.txt, run in a VM
with a migration bandwidth of 1GbE and an XBZRLE cache size of 8MB.
The test program:
    #include <stdlib.h>
    #include <stdio.h>

    int main()
    {
        char *buf = (char *) calloc(4096, 4096);

        while (1) {
            int i;
            for (i = 0; i < 4096 * 4; i++) {
                buf[i * 4096 / 4]++;
            }
            printf(".");
        }
    }
before this patch:
virsh qemu-monitor-command test_vm '{"execute": "query-migrate"}'
{"return":{"expected-downtime":1020,"xbzrle-cache":{"bytes":1108284,
"cache-size":8388608,"cache-miss-rate":0.987013,"pages":18297,"overflow":8,
"cache-miss":1228737},"status":"active","setup-time":10,"total-time":52398,
"ram":{"total":12466991104,"remaining":1695744,"mbps":935.559472,
"transferred":5780760580,"dirty-sync-counter":271,"duplicate":2878530,
"dirty-pages-rate":29130,"skipped":0,"normal-bytes":5748592640,
"normal":1403465}},"id":"libvirt-706"}
18k pages sent compressed in 52 seconds.
The cache-miss-rate is 98.7% - almost a total miss.
after this patch:
virsh qemu-monitor-command test_vm '{"execute": "query-migrate"}'
{"return":{"expected-downtime":2054,"xbzrle-cache":{"bytes":5066763,
"cache-size":8388608,"cache-miss-rate":0.485924,"pages":194823,"overflow":0,
"cache-miss":210653},"status":"active","setup-time":11,"total-time":18729,
"ram":{"total":12466991104,"remaining":3895296,"mbps":937.663549,
"transferred":1615042219,"dirty-sync-counter":98,"duplicate":2869840,
"dirty-pages-rate":58781,"skipped":0,"normal-bytes":1588404224,
"normal":387794}},"id":"libvirt-266"}
194k pages sent compressed in 18 seconds.
The cache-miss-rate decreases to 48.59%.
Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
The QEMUFileStdio code will use qemu_file_is_writable() and will be
moved to a separate file.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This extends the use of the VMS_ALLOC flag from arrays to VBUFFER as well,
and defines VMSTATE_VBUFFER_ALLOC_UINT32, which makes use of VMS_ALLOC
and uses the uint32_t type for the size.
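Hypothetical usage (field and state names, and the argument order, are illustrative): the destination side allocates 'buf' on load, sized by the uint32_t 'buf_size' field:

    VMSTATE_VBUFFER_ALLOC_UINT32(buf, MyDeviceState, 1, NULL, 0, buf_size),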
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This is based on Stefan and Joel's patch that creates a QEMUFile that goes
to a memory buffer; from:
http://lists.gnu.org/archive/html/qemu-devel/2013-03/msg05036.html
Using the QEMUFile interface, this patch adds support functions for
operating on in-memory sized buffers that can be written to or read from.
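A sketch of the assumed API surface (names follow the patch description and may differ in detail):

    QEMUSizedBuffer *qsb_create(const uint8_t *buffer, size_t len);
    void qsb_free(QEMUSizedBuffer *qsb);
    size_t qsb_get_length(const QEMUSizedBuffer *qsb);
    QEMUFile *qemu_bufopen(const char *mode, QEMUSizedBuffer *input);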
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Joel Schopp <jschopp@linux.vnet.ibm.com>
For the fixes/tweaks I've done:
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
There are already a few helpers to support array migration. However, they all
require the destination side to preallocate arrays before migration, which
is not always possible due to an unknown array size, as it might be some
sort of dynamic state. One example is the array of MSIX-enabled
devices in the SPAPR PHB - this array may vary from 0 to 65536 entries and
its size depends on the guest's ability to enable MSIX or do PCI hotplug.
This adds a new VMSTATE_VARRAY_STRUCT_ALLOC macro which is pretty similar to
VMSTATE_STRUCT_VARRAY_POINTER_INT32 but can allocate memory for the
migrated array on the destination side.
This defines VMS_ALLOC flag for a field.
This changes vmstate_base_addr() to do the allocation when receiving
migration.
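Hypothetical usage for the SPAPR PHB example above (field names are illustrative): the destination allocates msi_devs from the incoming msi_devs_num element count:

    VMSTATE_VARRAY_STRUCT_ALLOC(msi_devs, sPAPRPHBState, msi_devs_num, 1,
                                vmstate_spapr_pci_msi, spapr_pci_msi_mig),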
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
[agraf: drop g_malloc_n usage]
Signed-off-by: Alexander Graf <agraf@suse.de>
This commit adds a new command, '-dump-vmstate', that takes a filename
as an argument. When executed, QEMU will dump the vmstate information
for the machine type it's invoked with to the file, and quit.
The JSON-format output can then be used to compare the vmstate info for
different QEMU versions, specifically to test whether live migration
would break due to changes in the vmstate data.
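For example (hypothetical invocation):

    qemu-system-x86_64 -machine pc -dump-vmstate /tmp/vmstate.json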
A Python script that compares the output of such JSON dumps is included
in the following commit.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This patch introduces self_announce_delay() to calculate the delay for
the next announce round. This could be used by other devices, e.g.
virtio-net, which want to do the announcement themselves.
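A minimal sketch of such a helper (the constants and the exact delay curve are illustrative, assuming a SELF_ANNOUNCE_ROUNDS limit):

    static inline int64_t self_announce_delay(int round)
    {
        assert(round < SELF_ANNOUNCE_ROUNDS && round > 0);
        /* delay 50ms, 150ms, 250ms, ... */
        return 50 + (SELF_ANNOUNCE_ROUNDS - round - 1) * 100;
    }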
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Export it for other users.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Expose to the end user the count of how many times the dirty bitmap has
been updated.
Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Provide ram_mig_init (like blk_mig_init) for vl.c to initialise stuff
to do with ram migration (currently in arch_init.c).
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Make qemu_peek_buffer repeatedly call fill_buffer until it gets
all the data it requires, or until there is an error.
At the moment, qemu_peek_buffer will try one qemu_fill_buffer if there
isn't enough data waiting, however the kernel is entitled to return
just a few bytes, and still leave qemu_peek_buffer with less bytes
than it needed. I've seen this fail in a dev world, and I think it
could theoretically fail in the peeking of the subsection headers in
the current world.
Comment qemu_peek_byte() to point out that it is not guaranteed to work
for non-continuous peeks.
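A sketch of the retry loop, under assumed QEMUFile internals (buf_index and buf_size track the buffered window):

    size_t index = f->buf_index + offset;    /* where the peek starts */

    while (size > f->buf_size - index) {
        size_t previous = f->buf_size;
        qemu_fill_buffer(f);                 /* may add only a few bytes */
        if (f->buf_size == previous) {
            break;                           /* no progress: error or EOF */
        }
    }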
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: ChenLiang <chenliang0016@icloud.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
As the macro verifies the value is positive, rename it
to make its function clearer.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Validate state using VMS_ARRAY with num = 0 and VMS_MUST_EXIST.
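A sketch of how such a macro can be built from those pieces (assuming the VMS_MUST_EXIST flag introduced separately):

    #define VMSTATE_VALIDATE(_name, _test) {                          \
        .name         = (_name),                                      \
        .field_exists = (_test),                                      \
        .flags        = VMS_ARRAY | VMS_MUST_EXIST,                   \
        .num          = 0, /* 0 elements: no data, only run _test */  \
    }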
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Can be used to verify a required field exists or validate
state in some other way.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
c5f52875 changed the size of the sense array in vmstate_scsi_device by
mistake. This patch restores the old size and adds a subsection for the
remaining part of the buffer, so that migration is not broken.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- Push zero'd pages into the XBZRLE cache: a page that was cached by
  XBZRLE, zero'd and then XBZRLE'd again was being compared against a
  stale cache value.
- Don't use 'qemu_put_buffer_async' to put pages from the XBZRLE cache,
  since the cache might change before the data hits the wire.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Add support for saving VMState of 2D arrays of uint32 values.
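Hypothetical usage (names illustrative):

    /* for a uint32_t regs[N_BANKS][N_REGS] field in the device state */
    VMSTATE_UINT32_2DARRAY(regs, MyDeviceState, N_BANKS, N_REGS),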
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It is better to fail migration in case of a failure to
allocate a new cache item.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When QEMU does live migration with XBZRLE, it mallocs decoded_buf
at the destination end but frees it at the source end. This can crash
QEMU with a double-free error in some scenarios. Split the XBZRLE
structure for clear logic distinguishing the source and destination sides.
Signed-off-by: ChenLiang <chenliang88@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: GongLei <arei.gonglei@huawei.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The VMSTATE_STRUCT_POINTER macros are a bit odd in that they
must be passed an argument "FooType *" rather than just taking
the FooType. They're only used in one place, so it's easy to
tidy this up. This also lets us use the macro to replace the
hand-rolled VMSTATE_PTIMER.
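A hypothetical before/after of a field definition (the macro now takes the plain struct type rather than the pointer type):

    /* before */
    VMSTATE_STRUCT_POINTER(timer, MyState, vmstate_ptimer, ptimer_state *),
    /* after */
    VMSTATE_STRUCT_POINTER(timer, MyState, vmstate_ptimer, ptimer_state),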
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The VMState code will be moved to vmstate.c, and it uses some of the
QEMU_VM_* constants, so move them to a header.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The QEMUFile code will be moved to qemu-file.c. This will require making
the following functions non-static because they are used by the savevm.c
code:
* qemu_peek_byte()
* qemu_peek_buffer()
* qemu_file_skip()
* qemu_file_set_error()
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Add support for defining a vmstate field which is an array
of pointers to structures, and use this to define a
VMSTATE_PTIMER_ARRAY() which allows an array of ptimer_state*
to be used by devices.
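Hypothetical usage (names illustrative):

    /* for a ptimer_state *timer[N_TIMERS] field in the device state */
    VMSTATE_PTIMER_ARRAY(timer, MyTimerState, N_TIMERS),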
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1387159292-10436-2-git-send-email-lig.fnst@cn.fujitsu.com
This adds the version-supporting macros VMSTATE_STRUCT_POINTER_TEST_V
and VMSTATE_STRUCT_POINTER_V in addition to the already existing
VMSTATE_STRUCT_POINTER and VMSTATE_STRUCT_POINTER_TEST macros.
Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Commit 29ae8a4133 ("rdma: introduce
MIG_STATE_NONE and change MIG_STATE_SETUP state transition") changed the
state transitions during migration setup.
Spice used to be notified with MIG_STATE_ACTIVE and it detected this
using migration_is_active(). Spice is now notified with
MIG_STATE_SETUP and migration_is_active() no longer works.
Replace migration_is_active() with migration_in_setup() to fix spice
migration.
Cc: Michael R. Hines <mrhines@us.ibm.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Using the previous patches, we're now able to timestamp the SETUP
state. Once we have this time, let the user know about it in the
schema.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Code that does need to be visible is kept
well contained inside this file, and this is the only
new file added by the entire patch.
This file includes the entire protocol and interfaces
required to perform RDMA migration.
Also, the configure and Makefile modifications to link
this file are included.
Full documentation is in docs/rdma.txt
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This gives RDMA shared access to madvise() on the destination side
when an entire chunk is found to be zero.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This patch adds an efficient encoding for zero blocks by
adding a new flag indicating a block is completely zero.
Additionally, bdrv_write_zeros() is used at the destination
to efficiently write these zeroes. Depending on the implementation,
this avoids the destination target becoming fully provisioned.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The auto-converge migration capability allows the user to specify whether
the live migration sequence should automatically detect and force convergence.
Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This capability allows you to disable dynamic chunk registration
for better throughput on high-performance links.
For example, using an 8GB RAM virtual machine with all 8GB of memory in
active use while the VM itself is completely idle, over a 40 Gbps Infiniband link:
1. x-rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
2. x-rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
These numbers would of course scale up to whatever size virtual machine
you have to migrate using RDMA.
Enabling this feature does *not* have any measurable effect on
migration *downtime*. This is because, without this feature, all of the
memory will already have been registered in advance during
the bulk round and does not need to be re-registered during the successive
iteration rounds.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
These are the prototypes and implementation of new hooks that
RDMA takes advantage of to perform dynamic page registration.
An optional hook is also introduced for a custom function
to be able to override the default save_page function.
Also included are the prototypes and accessor methods used by
arch_init.c, which invoke functions inside savevm.c to call out
to the hooks that may or may not have been overridden
inside QEMUFileOps.
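A sketch of the assumed override hook (the exact types may differ):

    /* Called in place of the default page write; may report the page
     * as sent asynchronously via *bytes_sent. */
    size_t (*save_page)(QEMUFile *f, void *opaque,
                        ram_addr_t block_offset, ram_addr_t offset,
                        size_t size, int *bytes_sent);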
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
RDMA uses this to flush the control channel before sending its
own message to handle page registrations.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
QEMUFileRDMA also has read and write modes. This function is now
shared to reduce code duplication.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This exposes throughput (in megabits/sec) through QMP.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
RDMA writes happen asynchronously, and thus the performance accounting
also needs to be able to occur asynchronously. This allows anybody
to call into savevm.c to update both f->pos as well as into arch_init.c
to update the acct_info structure with up-to-date values when
the RDMA transfer actually completes.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Instead of breaking up RAM state into many small chunks, pass the iovec
to the block layer for better performance.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add support for migrating two dimensional arrays, by defining
a set of new macros VMSTATE_*_2DARRAY paralleling the existing
VMSTATE_*_ARRAY macros. 2D arrays are handled the same for actual
state serialization; the only difference is that the type check
has to change for a 2D array.
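A sketch of the changed type check, following QEMU's vmstate_offset_* convention:

    #define vmstate_offset_2darray(_state, _field, _type, _n1, _n2)   \
        vmstate_offset_value(_state, _field, _type[_n1][_n2])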
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mitsyanko <i.mitsyanko@gmail.com>
Message-id: 1363975375-3166-2-git-send-email-peter.maydell@linaro.org
This macro can be used to migrate a dynamically allocated buffer of known size.
Signed-off-by: Igor Mitsyanko <i.mitsyanko@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1362923278-4080-2-git-send-email-i.mitsyanko@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This allows us to add a buffer to the iovec to send without copying it
into the static buffer; the buffer will be sent later, when qemu_fflush is called.
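Assumed prototype; the caller must keep 'buf' stable until the next qemu_fflush():

    void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size);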
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This will allow us to write an iovec.
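A sketch of the assumed QEMUFileOps hook:

    ssize_t (*writev_buffer)(void *opaque, struct iovec *iov, int iovcnt);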
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
During the bulk stage of RAM migration, if a page is a
zero page, do not send it at all.
The memory at the destination reads as zero anyway.
Even if there is an madvise with QEMU_MADV_DONTNEED
at the target upon receipt of a zero page, I have observed
that the target starts swapping if the memory is overcommitted.
It seems that the pages are dropped asynchronously.
This patch also updates QMP to return the number of
skipped pages in MigrationStats.
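A minimal sketch of the bulk-stage check (the helper and counter names are hypothetical):

    if (buffer_is_zero(p, TARGET_PAGE_SIZE)) {
        if (ram_bulk_stage) {
            /* skip entirely: destination memory reads as zero anyway */
            acct_info.skipped_pages++;
            continue;
        }
        /* outside the bulk stage the zero page must still be sent */
        bytes_sent = save_zero_page(f, block, offset);
    }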
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The VMSTATE_BUFFER_MULTIPLY macro is misnamed - it actually specifies
a variably sized buffer with VMS_VBUFFER, so it should be named
VMSTATE_VBUFFER_MULTIPLY. This patch renames it (the macro had no current
users under either name).
In addition, unlike the other VMSTATE_VBUFFER variants, this macro did not
specify VMS_POINTER. This patch fixes this bug as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Currently the savevm code contains a VMSTATE_STRUCT_VARRAY_POINTER_INT32
helper (a variably sized array with the number of elements in an int32_t),
but not VMSTATE_STRUCT_VARRAY_POINTER_UINT32 (... with the number of
elements in a uint32_t). This patch (trivially) fixes the deficiency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The current savevm code includes VMSTATE helpers for a number of commonly
used data types, but not for the float64 type used by the internal floating
point emulation code. This patch fixes the deficiency.
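Hypothetical usage, e.g. for a floating point register file:

    /* for a float64 fpr[32] field in the CPU state */
    VMSTATE_FLOAT64_ARRAY(fpr, CPUPPCState, 32),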
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>