Use the automatic rcu_read unlocker to fix a missing unlock.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20191007143642.301445-3-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Wrap the check in a function to make it easier to read.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190717005341.14140-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Various parts of the migration code do different things when they're
in postcopy mode; prior to this patch that has meant 'postcopy-active'.
This patch extends 'in_postcopy' to include 'postcopy-paused' and
'postcopy-recover'.
In particular, when you set the max-postcopy-bandwidth parameter, it
only affects the current migration fd if we're 'in_postcopy';
this leads to a race in the postcopy recovery test, where it increases
the speed from 4k/sec to unlimited, but that increase can get ignored
if the change is made between the point at which the reconnection
happens and the point at which the state transitions back to active.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190923174942.12182-1-dgilbert@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This fixes a deadlock that can occur on the migration source after
a failed RDMA migration; as the source tries to clean up, it
clears a pair of pointers and uses synchronize_rcu to wait; this
happens on the main thread. With the CPUs still running,
a CPU thread can be an RCU reader and attempt to grab the main lock
(kvm_handle_io->address_space_write->flatview_write->flatview_write_continue->
prepare_mmio_access->qemu_mutex_lock_iothread_impl).
Replace the synchronize_rcu with a call_rcu to postpone the freeing.
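As a rough illustration only (the struct and function names below are made
up; this is not the actual patch), the deferred-free pattern looks like:

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    typedef struct RDMACleanup {
        struct rcu_head rcu;   /* must be first: call_rcu() queues through it */
        void *rdmain;
        void *rdmaout;
    } RDMACleanup;

    static void rdma_cleanup_free(RDMACleanup *r)
    {
        /* Runs only after all pre-existing RCU readers have finished. */
        g_free(r);
    }

    static void rdma_unpublish(RDMACleanup **slot)
    {
        RDMACleanup *old = *slot;

        *slot = NULL;                 /* readers may still be using 'old' */
        if (old) {
            /* No synchronize_rcu(): the main thread never blocks on readers. */
            call_rcu(old, rdma_cleanup_free, rcu);
        }
    }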
Fixes: 74637e6f08 ("migration: implement bi-directional RDMA QIOChannel")
( https://bugzilla.redhat.com/show_bug.cgi?id=1746787 )
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190913163507.1403-3-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
If we've already finished the migration or something has
already gone wrong, don't moan about the migration stream disconnecting.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190913163507.1403-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This is a cleanup following the previous removal of unsentmap.
The 'sent' parameter is no longer necessary.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-4-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Commit f3f491fcd6 ('Postcopy: Maintain unsentmap') introduced
unsentmap to track pages that have not yet been sent.
This is not necessary since:
* unsentmap is a subset of bmap before postcopy starts
* unsentmap is the union (bitwise OR) of bmap and the old unsentmap after canonicalizing
This patch just removes it.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
All pages, either partially sent or partially dirty, will be discarded in
postcopy_send_discard_bm_ram(), since we update the unsentmap to be
unsentmap = unsentmap | dirty in ram_postcopy_send_discard_bitmap().
It is therefore not necessary to do the discard while canonicalizing the
bitmap. Moving it out separates the page discard into two individual steps:
* canonicalize bitmap
* discard page
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190912122514.22504-2-marcandre.lureau@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Replace the confusing usage:
~BDRV_SECTOR_MASK
with the clearer:
(BDRV_SECTOR_SIZE - 1)
Remove BDRV_SECTOR_MASK and the unused BDRV_BLOCK_OFFSET_MASK, which was
its last user.
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190827185913.27427-3-nsoffer@redhat.com
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190912024957.11780-1-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
In add_to_iovec(), qemu_fflush() will be called if the iovec is full. If
this happens, buf_index is reset. Currently this is not checked, and
buf_index is always adjusted by the buf size.
This is not harmful, but it wastes some space in the file buffer.
This patch makes add_to_iovec() return 1 when it has flushed the file.
The caller can then check the return value to see whether it still
needs to adjust buf_index.
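A sketch of the caller-side pattern this enables (the names mirror
migration/qemu-file.c, but the exact hunk is an assumption):

    /* If add_to_iovec() reports that it flushed the file, buf_index has
     * already been reset, so the caller must not advance it again. */
    if (add_to_iovec(f, f->buf + f->buf_index, l, false)) {
        /* flushed: buf_index was reset inside qemu_fflush() */
    } else {
        f->buf_index += l;
        if (f->buf_index == IO_BUF_SIZE) {
            qemu_fflush(f);
        }
    }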
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190911132839.23336-3-richard.weiyang@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The check of writev_buffer is in qemu_fflush, which means it is not
harmful if it is NULL.
Removing it also makes the code consistent, since all other
add_to_iovec() calls are made without the check.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190911132839.23336-2-richard.weiyang@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We've got the max-postcopy-bandwidth parameter, but it's not applied
correctly after a postcopy recovery, so the recovered migration stream
will still eat the whole network bandwidth. Fix that up.
Reported-by: Xiaohui Li <xiaohli@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190906130103.20961-1-peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This capability implements simple source validation by UUID.
It's useful for live migration between hosts.
Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Message-Id: <20190903162246.18524-2-yury-kotov@yandex-team.ru>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Commit 11808bb removed the non-iovec based write support, but the
comment hung on.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190823103946.7388-1-dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Commit 78dd48df3 removed the last caller of register_savevm_live for an
instantiable device (rather than a single system wide device);
so trim out the parameter.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190822115433.12070-1-dgilbert@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
During migration, there are several places that iterate over
savevm.handlers, and on each iteration we need to check the ops and the
related callbacks before invoking them.
Generally, ops is the first element to check, and it only needs to be
checked once.
This patch cleans up all the related parts in savevm.c so that ops is
checked only once in those iterations.
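A sketch of the resulting shape of those iterations (field names follow
migration/savevm.c; the exact loop body is an assumption):

    QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
        if (!se->ops || !se->ops->save_live_iterate) {
            continue;                 /* ops checked once, up front */
        }
        if (se->ops->is_active && !se->ops->is_active(se->opaque)) {
            continue;
        }
        ret = se->ops->save_live_iterate(f, se->opaque);
        /* ... */
    }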
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819032804.8579-1-richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
When it encounters an error, multifd_send_thread should always notify
whoever is paying attention to it before exiting. Otherwise it may block
migration_thread at multifd_send_sync_main forever.
An example error follows:
-------------------------------------------------------------------------------
(gdb) bt
#0 0x00007f4d669dfa0b in do_futex_wait.constprop.1 () from /lib64/libpthread.so.0
#1 0x00007f4d669dfa9f in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2 0x00007f4d669dfb3b in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3 0x0000562ccf0a5614 in qemu_sem_wait (sem=sem@entry=0x562cd1b698e8) at util/qemu-thread-posix.c:319
#4 0x0000562ccecb4752 in multifd_send_sync_main (rs=<optimized out>) at /qemu/migration/ram.c:1099
#5 0x0000562ccecb95f4 in ram_save_iterate (f=0x562cd0ecc000, opaque=<optimized out>) at /qemu/migration/ram.c:3550
#6 0x0000562ccef43c23 in qemu_savevm_state_iterate (f=0x562cd0ecc000, postcopy=false) at migration/savevm.c:1189
#7 0x0000562ccef3dcf3 in migration_iteration_run (s=0x562cd09fabf0) at migration/migration.c:3131
#8 migration_thread (opaque=opaque@entry=0x562cd09fabf0) at migration/migration.c:3258
#9 0x0000562ccf0a4c26 in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:502
#10 0x00007f4d669d9e25 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f4d6670635d in clone () from /lib64/libc.so.6
(gdb) f 4
#4 0x0000562ccecb4752 in multifd_send_sync_main (rs=<optimized out>) at /qemu/migration/ram.c:1099
1099 qemu_sem_wait(&p->sem_sync);
(gdb) list
1094 }
1095 for (i = 0; i < migrate_multifd_channels(); i++) {
1096 MultiFDSendParams *p = &multifd_send_state->params[i];
1097
1098 trace_multifd_send_sync_main_wait(p->id);
1099 qemu_sem_wait(&p->sem_sync);
1100 }
1101 trace_multifd_send_sync_main(multifd_send_state->packet_num);
1102 }
1103
(gdb) p i
$1 = 0
(gdb) p multifd_send_state->params[0].pending_job
$2 = 2 //It means the job before MULTIFD_FLAG_SYNC has already failed
(gdb) p multifd_send_state->params[0].quit
$3 = true
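A sketch of the idea (not the literal patch): the send thread posts the
semaphores others wait on even on its error path, so
multifd_send_sync_main() cannot sleep on p->sem_sync forever:

    out:
        if (local_err) {
            multifd_send_terminate_threads(local_err);
        }
        /* Wake any waiter, even though this thread is exiting on error. */
        qemu_sem_post(&p->sem_sync);
        qemu_sem_post(&multifd_send_state->channels_ready);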
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1567044996-2362-1-git-send-email-ivanren@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There is a race between TCG and accesses to the dirty log:
   vCPU thread                  reader thread
   -----------------------      -----------------------
   TLB check -> slow path
      notdirty_mem_write
         write to RAM
         set dirty flag
                                clear dirty flag
   TLB check -> fast path
                                read memory
      write to RAM
Fortunately, in order to fix it, no change is required to the
vCPU thread. However, the reader thread must delay the read until
the vCPU thread has finished the write. This can be approximated
conservatively by run_on_cpu, which waits for the end of the current
translation block.
A similar technique is used by KVM, which has to do a synchronous TLB
flush after doing a test-and-clear of the dirty-page flags.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is used to do transactional movement of the bitmap (which is
possible in conjunction with the merge command). Transactional bitmap
movement is needed in scenarios with an external snapshot, when we don't
want to leave a copy of the bitmap in the base image.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190708220502.12977-3-jsnow@redhat.com
[Edited "since" version to 4.2 --js]
Signed-off-by: John Snow <jsnow@redhat.com>
Add a public interface for get. While we're at it,
rename "bdrv_get_dirty_bitmap_locked" to "bdrv_dirty_bitmap_get_locked".
(There are more functions to rename to the bdrv_dirty_bitmap_VERB form,
but they will wait until the conclusion of this series.)
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-11-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Merge remote-tracking branch 'remotes/armbru/tags/pull-include-2019-08-13-v2' into staging
Header cleanup patches for 2019-08-13
# gpg: Signature made Fri 16 Aug 2019 12:39:12 BST
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-include-2019-08-13-v2: (29 commits)
sysemu: Split sysemu/runstate.h off sysemu/sysemu.h
sysemu: Move the VMChangeStateEntry typedef to qemu/typedefs.h
Include sysemu/sysemu.h a lot less
Clean up inclusion of sysemu/sysemu.h
numa: Move remaining NUMA declarations from sysemu.h to numa.h
Include sysemu/hostmem.h less
numa: Don't include hw/boards.h into sysemu/numa.h
Include hw/boards.h a bit less
Include hw/qdev-properties.h less
Include qemu/main-loop.h less
Include qemu/queue.h slightly less
Include hw/hw.h exactly where needed
Include qom/object.h slightly less
Include exec/memory.h slightly less
Include migration/vmstate.h less
migration: Move the VMStateDescription typedef to typedefs.h
Clean up inclusion of exec/cpu-common.h
Include hw/irq.h a lot less
typedefs: Separate incomplete types and function types
ide: Include hw/ide/internal a bit less outside hw/ide/
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
sysemu/sysemu.h is a rather unfocused dumping ground for stuff related
to the system-emulator. Evidence:
* It's included widely: in my "build everything" tree, changing
sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600
objects (not counting tests and objects that don't depend on
qemu/osdep.h, down from 5400 due to the previous two commits).
* It pulls in more than a dozen additional headers.
Split stuff related to run state management into its own header
sysemu/runstate.h.
Touching sysemu/sysemu.h now recompiles some 850 objects. qemu/uuid.h
also drops from 1100 to 850, and qapi/qapi-types-run-state.h from 4400
to 4200. Touching new sysemu/runstate.h recompiles some 500 objects.
Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also
add qemu/main-loop.h.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-30-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[Unbreak OS-X build]
In my "build everything" tree, changing sysemu/sysemu.h triggers a
recompile of some 5400 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h).
hw/qdev-core.h includes sysemu/sysemu.h since recent commit e965ffa70a
"qdev: add qdev_add_vm_change_state_handler()". This is a bad idea:
hw/qdev-core.h is widely included.
Move the declaration of qdev_add_vm_change_state_handler() to
sysemu/sysemu.h, and drop the problematic include from hw/qdev-core.h.
Touching sysemu/sysemu.h now recompiles some 1800 objects.
qemu/uuid.h also drops from 5400 to 1800. A few more headers show
smaller improvement: qemu/notify.h drops from 5600 to 5200,
qemu/timer.h from 5600 to 4500, and qapi/qapi-types-run-state.h from
5500 to 5000.
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20190812052359.30071-28-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
In my "build everything" tree, changing hw/qdev-properties.h triggers
a recompile of some 2700 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h).
Many places including hw/qdev-properties.h (directly or via hw/qdev.h)
actually need only hw/qdev-core.h. Include hw/qdev-core.h there
instead.
hw/qdev.h is actually pointless: all it does is include hw/qdev-core.h
and hw/qdev-properties.h, which in turn includes hw/qdev-core.h.
Replace the remaining uses of hw/qdev.h by hw/qdev-properties.h.
While there, delete a few superfluous inclusions of hw/qdev-core.h.
Touching hw/qdev-properties.h now recompiles some 1200 objects.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190812052359.30071-22-armbru@redhat.com>
In my "build everything" tree, changing qemu/main-loop.h triggers a
recompile of some 5600 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h). It includes block/aio.h,
which in turn includes qemu/event_notifier.h, qemu/notify.h,
qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
qemu/thread.h, qemu/timer.h, and a few more.
Include qemu/main-loop.h only where it's needed. Touching it now
recompiles only some 1700 objects. For block/aio.h and
qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the
others, they shrink only slightly.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Drop unnecessary inclusions from headers. Downgrade a few more to
exec/hwaddr.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-17-armbru@redhat.com>
In my "build everything" tree, changing migration/vmstate.h triggers a
recompile of some 2700 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h).
hw/hw.h supposedly includes it for convenience. Several other headers
include it just to get VMStateDescription. The previous commit made
that unnecessary.
Include migration/vmstate.h only where it's still needed. Touching it
now recompiles only some 1600 objects.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-16-armbru@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
migration/qemu-file.h neglects to include it even though it needs
ram_addr_t. Fix that. Drop a few superfluous inclusions elsewhere.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-14-armbru@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190814020218.1868-6-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This makes it easier to debug things because when you are waiting for all
threads to arrive at that semaphore, you know which one you are waiting for.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190814020218.1868-3-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190814020218.1868-2-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Move migration helpers for strings under include/, so they can be used
outside of migration/
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190808150325.21939-2-marcandre.lureau@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
On the incoming migration side, we quit in either precopy or postcopy. It
is safe to use mis->bh for both instead of allocating a dedicated
QEMUBH for postcopy.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190805053146.32326-1-richardw.yang@linux.intel.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Rename for better understanding of the code.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190808033155.30162-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Multifd sync sends MULTIFD_FLAG_SYNC flag info to the destination; add
these bytes to the ram_counters record.
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Suggested-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <1564464816-21804-4-git-send-email-ivanren@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Limit the speed of multifd migration through the common QEMUFile speed
limitation.
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1564464816-21804-3-git-send-email-ivanren@tencent.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Add qemu_file_update_transfer to just update bytes_xfer for speed
limitation. This will be used for further migration features such as
multifd migration.
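A minimal sketch of what such a helper could look like (the actual
definition in migration/qemu-file.c may differ):

    /* Account bytes against the rate limit without copying them through
     * this QEMUFile's buffer (multifd channels send the data themselves). */
    void qemu_file_update_transfer(QEMUFile *f, int64_t len)
    {
        f->bytes_xfer += len;
    }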
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1564464816-21804-2-git-send-email-ivanren@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This patch fixes a multifd migration bug in the migration speed
calculation; the problem can be reproduced as follows:
1. start a vm and apply heavy memory write stress to prevent the vm from
being successfully migrated to the destination
2. begin a migration with multifd
3. migrate for a long time [actually, this can be measured by transferred bytes]
4. migrate cancel
5. begin a new migration with multifd; the migration will directly run into
the migration_completion phase
The reason is as follows:
Migration updates bandwidth and s->threshold_size in
migration_update_counters() after BUFFER_DELAY time:

    current_bytes = migration_total_bytes(s);
    transferred = current_bytes - s->iteration_initial_bytes;
    time_spent = current_time - s->iteration_start_time;
    bandwidth = (double)transferred / time_spent;
    s->threshold_size = bandwidth * s->parameters.downtime_limit;

In multifd migration, the migration_total_bytes function returns
qemu_ftell(s->to_dst_file) + ram_counters.multifd_bytes.
s->iteration_initial_bytes is initialized to 0 at every new migration,
but ram_counters is a global variable, and data from previous migrations
accumulates there. So if ram_counters.multifd_bytes is big enough, it may
cause 'pending_size >= s->threshold_size' to become false in
migration_iteration_run after the first migration_update_counters call.
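A standalone worked example of the effect, with illustrative numbers only:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Stale counter left over from a cancelled multifd migration. */
        uint64_t multifd_bytes = 10ULL << 30;   /* 10 GiB already accumulated */
        uint64_t ftell_bytes   = 1ULL << 20;    /* 1 MiB actually sent so far */

        uint64_t current_bytes = ftell_bytes + multifd_bytes; /* migration_total_bytes() */
        uint64_t transferred   = current_bytes - 0;  /* iteration_initial_bytes == 0 */

        double time_spent_ms     = 100.0;       /* one BUFFER_DELAY interval */
        double downtime_limit_ms = 300.0;
        double bandwidth         = transferred / time_spent_ms;   /* bytes/ms */
        double threshold_size    = bandwidth * downtime_limit_ms;

        /* threshold_size comes out around 30 GiB, so nearly any pending_size
         * looks small enough and migration jumps straight to completion. */
        printf("threshold_size = %.0f bytes\n", threshold_size);
        return 0;
    }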
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Suggested-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <1564741121-1840-1-git-send-email-ivanren@tencent.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
MigrationState->bytes_xfer is only set to 0 in migrate_init().
Remove this unnecessary field.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190402003106.17614-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Use QEMU_IS_ALIGNED for the check; it is more consistent with the other
alignment calculations.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190806004648.8659-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The purpose of the calculation is to find a HostPage which is partially
dirty.
* fixup_start_addr points to the start of the HostPage to discard
* run_start points to the next HostPage to check
While in the middle of processing, there are two cases for run_start:
* aligned with a HostPage, meaning it is not partially dirty
* not aligned, meaning it is partially dirty
When it is aligned, no work or calculation is necessary. run_start
already points to the start of the next HostPage and is ready to continue.
When it is not aligned, the calculation can be simplified to:
* fixup_start_addr = QEMU_ALIGN_DOWN(run_start, host_ratio)
* run_start = QEMU_ALIGN_UP(run_start, host_ratio)
By doing so, run_start always points to the next HostPage to check, and
fixup_start_addr always points to the HostPage to discard.
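A sketch of the simplified calculation (variable names follow
migration/ram.c; the exact hunk is an assumption):

    if (QEMU_IS_ALIGNED(run_start, host_ratio)) {
        /* Already at the start of the next HostPage: nothing to fix up. */
    } else {
        /* run_start sits inside a HostPage, so that page is partially dirty. */
        fixup_start_addr = QEMU_ALIGN_DOWN(run_start, host_ratio);
        run_start = QEMU_ALIGN_UP(run_start, host_ratio);
    }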
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190806004648.8659-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
In postcopy-ram.c, we provide three functions to discard a certain
RAMBlock range:
* postcopy_discard_send_init()
* postcopy_discard_send_range()
* postcopy_discard_send_finish()
Currently, we allocate/deallocate a PostcopyDiscardState for each RAMBlock
when sending discard information to the destination. This is not necessary,
and the same data area can be reused for each RAMBlock.
This patch makes PostcopyDiscardState a static variable. By doing so we:
1) avoid memory allocation and deallocation to the system
2) avoid potential memory allocation failures
3) hide some details from the users
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190724010721.2146-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
After this cleanup, it is clear to the reader that there are two cases in
ram_load:
* precopy
* postcopy
It is also no longer necessary to check postcopy_running on each iteration
for precopy.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190725002023.2335-3-richardw.yang@linux.intel.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
It is not reasonable to continue when the version_id mismatches.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190722075339.25121-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
LOADVM_QUIT allows a command to quit all layers of nested loadvm loops,
but the current return value check is not quite proper even though it
works now.
The current check "ret & LOADVM_QUIT" is true whenever bit[0] is 1, which
is also the case when ret is -1, the value used to indicate an error while
handling a command.
Since there is only one place that returns LOADVM_QUIT and no other
combination of return values, using "ret == LOADVM_QUIT" is more proper.
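A standalone illustration of why the bitwise check is fragile (LOADVM_QUIT
is assumed to be 1 here, consistent with the bit[0] note above):

    #include <assert.h>

    #define LOADVM_QUIT 1

    int main(void)
    {
        int ret = -1;                  /* error while handling a command */

        assert(ret & LOADVM_QUIT);     /* old check: the error looks like "quit" */
        assert(ret != LOADVM_QUIT);    /* equality check tells the two apart */
        return 0;
    }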
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190718064257.29218-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
RAMBlock->used_length is always what is passed to
migration_bitmap_sync_range(), and it can be retrieved from the RAMBlock
itself.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190718012547.16373-1-richardw.yang@linux.intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Use common helper function to check the state.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190719071129.11880-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There is only one place that sets start_postcopy to true,
qmp_migrate_start_postcopy(), which makes sure start_postcopy can only be
set to true when migrate_postcopy() returns true.
So start_postcopy being true implies the other.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190718083747.5859-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
PostcopyState is already set to ADVISE at the beginning of
loadvm_postcopy_handle_advise().
Remove the redundant set.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190711080816.6405-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
in_postcopy and iterable_only are not SaveStateEntry specific, so it is
more proper to check them outside the iteration.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190709140924.13291-4-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This is a preparation patch for further cleanup.
No functional change, just wrap the two major parts of
qemu_savevm_state_complete_precopy() into functions.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190709140924.13291-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
It is proper to flush the file even in the iterable_only case.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190709140924.13291-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This means it is not necessary to spare an extra variable to hold this
condition; using host_offset directly is fine.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190710050814.31344-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Calculate run_start in the same way as run_end, which saves one
operation.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190710050814.31344-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Since we break out of the loop when there is no more page to discard, we
are sure the following code will find some page to discard.
It is not necessary to check it again.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190627020822.15485-4-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
When 'one' is equal to or bigger than 'end', it means there is no page to
discard. Just break out of the loop in this case instead of processing it.
No functional change, just a small refactoring.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190627020822.15485-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
If 'one' equals 'end', it means we have gone through the whole bitmap.
Use a stricter check to skip an unnecessary condition.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190627020822.15485-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Consolidate filling in the time information into its own function for
better readability.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190716005411.4156-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Currently, there is no information about the error if an outgoing
migration failed because of file channel errors.
Example (QMP session):
-> { "execute": "migrate", "arguments": { "uri": "exec:head -c 1" }}
<- { "return": {} }
...
-> { "execute": "query-migrate" }
<- { "return": { "status": "failed" }} // There is not error's description
And even in the QEMU's output there is nothing.
This patch
1) Adds errp for the most of QEMUFileOps
2) Adds qemu_file_get_error_obj/qemu_file_set_error_obj
3) And finally using of qemu_file_get_error_obj in migration.c
And now, the status for the mentioned fail will be:
-> { "execute": "query-migrate" }
<- { "return": { "status": "failed",
"error-desc": "Unable to write to command: Broken pipe" }}
Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Message-Id: <20190422103420.15686-1-yury-kotov@yandex-team.ru>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
When we migrate_cancel a multifd migration, if the run sequence is like this:

        [source]                        [destination]

multifd_send_sync_main[finish]
                                        multifd_recv_thread wait &p->sem_sync
shutdown to_dst_file
                                        detect error from_src_file
send RAM_SAVE_FLAG_EOS[fail]            [no chance to run multifd_recv_sync_main]
                                        multifd_load_cleanup
                                        join multifd receive thread forever

it will leave the destination qemu hung at the following stack:
pthread_join
qemu_thread_join
multifd_load_cleanup
process_incoming_migration_co
coroutine_trampoline
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1561468699-9819-4-git-send-email-ivanren@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When we 'migrate_cancel' a multifd migration, the live_migration thread may
hang forever at certain points, because multifd_send_thread has already
exited due to a socket error:
1. multifd_send_pages may hang at qemu_sem_wait(&multifd_send_state->channels_ready)
2. multifd_send_sync_main may hang at qemu_sem_wait(&multifd_send_state->sem_sync)
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1561468699-9819-3-git-send-email-ivanren@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Remove spurious not needed bits
When we 'migrate_cancel' a multifd migration, the live_migration thread may
go into an endless loop in the multifd_send_pages function.
Steps to reproduce:
(qemu) migrate_set_capability multifd on
(qemu) migrate -d url
(qemu) [wait a while]
(qemu) migrate_cancel
Then you may see live_migration at 100% cpu usage in the following stack:
pthread_mutex_lock
qemu_mutex_lock_impl
multifd_send_pages
multifd_queue_page
ram_save_multifd_page
ram_save_target_page
ram_save_host_page
ram_find_and_save_block
ram_find_and_save_block
ram_save_iterate
qemu_savevm_state_iterate
migration_iteration_run
migration_thread
qemu_thread_start
start_thread
clone
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1561468699-9819-2-git-send-email-ivanren@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reproduce the problem:
migrate
migrate_cancel
migrate
An error happens for memory migration.
The reason is as follows:
1. qemu start, ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] all set to
1 by a series of cpu_physical_memory_set_dirty_range
2. migration start:ram_init_bitmaps
- memory_global_dirty_log_start: begin dirty logging
- memory_global_dirty_log_sync: sync dirty bitmap to
ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
- migration_bitmap_sync_range: sync ram_list.
dirty_memory[DIRTY_MEMORY_MIGRATION] to RAMBlock.bmap
and ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to zero
3. migration data...
4. migrate_cancel will stop dirty logging
5. migration start:ram_init_bitmaps
- memory_global_dirty_log_start: begin dirty logging
- memory_global_dirty_log_sync: sync dirty bitmap to
ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
- migration_bitmap_sync_range: sync ram_list.
dirty_memory[DIRTY_MEMORY_MIGRATION] to RAMBlock.bmap
and ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to zero
Here RAMBlock.bmap only has the newly logged dirty pages; it doesn't
contain all of the guest's pages.
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <1563115879-2715-1-git-send-email-ivanren@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
cpu_synchronize_all_post_init() is called twice in
loadvm_postcopy_handle_run_bh(), so remove one redundant call.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190715080751.24304-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Commit 6b6712efcc ('ram: Split dirty bitmap by RAMBlock') changed the
parameters of postcopy_send_discard_bm_ram(), while leaving the
documentation untouched.
This patch corrects the documentation and fixes two typos by hand.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190715020549.15018-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
By removing the shared ram check, qemu is able to migrate
to private destination ram when the x-ignore-shared capability
is on. Then we can create multiple destination VMs based
on the same source VM.
This changes the x-ignore-shared migration capability to
work similarly to Lai's original bypass-shared-memory
work (https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg00003.html),
which enables kata containers (https://katacontainers.io)
to implement the VM templating feature.
An example usage in kata containers(https://katacontainers.io):
1. Start the source VM:
qemu-system-x86 -m 2G \
-object memory-backend-file,id=mem0,size=2G,share=on,mem-path=/tmpfs/template-memory \
-numa node,memdev=mem0
2. Stop the template VM, set migration x-ignore-shared capability,
migrate "exec:cat>/tmpfs/state", quit it
3. Start target VM:
qemu-system-x86 -m 2G \
-object memory-backend-file,id=mem0,size=2G,share=off,mem-path=/tmpfs/template-memory \
-numa node,memdev=mem0 \
-incoming defer
4. connect to target VM qmp, set migration x-ignore-shared capability,
migrate_incoming "exec:cat /tmpfs/state"
5. create more target VMs repeating 3 and 4
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Yury Kotov <yury-kotov@yandex-team.ru>
Cc: Jiangshan Lai <laijs@hyper.sh>
Cc: Xu Wang <xu@hyper.sh>
Signed-off-by: Peng Tao <tao.peng@linux.alibaba.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1560494113-1141-1-git-send-email-tao.peng@linux.alibaba.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Currently we are doing log_clear() right after log_sync() which mostly
keeps the old behavior when log_clear() was still part of log_sync().
This patch tries to further optimize the migration log_clear() code
path to split huge log_clear()s into smaller chunks.
We do this by splitting the whole guest memory region into memory
chunks, whose size is decided by MigrationState.clear_bitmap_shift (an
example will be given below). With that, we don't do the dirty bitmap
clear operation on the remote node (e.g., KVM) when we fetch the dirty
bitmap; instead we explicitly clear the dirty bitmap for a memory
chunk the first time we send a page in that chunk.
Here comes an example.
Assuming the guest has 64G memory, then before this patch the KVM
ioctl KVM_CLEAR_DIRTY_LOG will be a single one covering 64G memory.
After the patch, assuming the clear bitmap shift is 18,
the memory chunk size on x86_64 will be 1UL<<18 * 4K = 1GB. Then
instead of sending a big 64G ioctl, we'll send 64 small ioctls, each
covering 1G of the guest memory. Each of the 64
small ioctls will only be sent if any page in that small chunk
is going to be sent right away.
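The chunk-size arithmetic above as a tiny standalone calculation (constants
only, no QEMU API):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned shift     = 18;            /* MigrationState.clear_bitmap_shift */
        uint64_t page_size = 4096;          /* x86_64 target page size */
        uint64_t guest_mem = 64ULL << 30;   /* 64 GiB guest */

        /* One chunk covers (1 << shift) pages. */
        uint64_t chunk = (UINT64_C(1) << shift) * page_size;

        printf("chunk size: %" PRIu64 " MiB\n", chunk >> 20);
        printf("KVM_CLEAR_DIRTY_LOG ioctls for the guest: %" PRIu64 "\n",
               guest_mem / chunk);
        return 0;
    }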
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190603065056.25211-12-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
cpu_physical_memory_sync_dirty_bitmap() has a RAMBlock* as a
parameter, which means it must be called with the RCU read lock
already held. Taking it again inside seems redundant; remove it.
Instead, comment on the functions about the RCU read lock.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-2-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
In case we get a queued page, the order of blocks is interrupted. We may
not rely on the complete_round flag to say we have already searched all
the blocks on the list.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190605010828.6969-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Notifications from the recv threads are not ordered, which means we may be
notified by one MultiFDRecvParams but adjust packet_num for another.
Move the adjustment to after we are sure each recv thread is synced.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190604023540.26532-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When we are not in the last_stage, we need to update the cache if the page
is not the same.
Currently this procedure is scattered across two places and mixed with
the encoding status check.
This patch extracts this general step to make the code a little
easier to read.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190610004159.20966-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
On receiving RAM_SAVE_FLAG_EOS, multifd_recv_sync_main() is called to
synchronize the receive threads. The current synchronization mechanism is to
wait for each channel's sem_sync semaphore. This semaphore is triggered by a
packet with the MULTIFD_FLAG_SYNC flag. However, in the current
implementation, we don't do multifd_send_sync_main() to send such a packet
when blk_mig_bulk_active() is true.
This leads to the receive threads never notifying
multifd_recv_sync_main() via sem_sync, so multifd_recv_sync_main() will
wait there forever.
[Note]: the normal migration test works, but the blk_mig_bulk_active() case
was not tested, since it is not clear how to produce this situation.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190612014337.11255-1-richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
It uses num in multifd_send(). Make it coherent.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Basically, the context can get the MachineState reference via call
chains or the unrecommended qdev_get_machine() in !CONFIG_USER_ONLY mode.
A local variable of the same name is introduced in the declaration
phase where that takes less effort, OR it is replaced on the spot if it's
only used once in the context. No semantic changes.
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190518205428.90532-4-like.xu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Otherwise the FreeBSD compiler complains about an unused variable.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
We need to notify the net filter to do a checkpoint for Xen COLO, like on the KVM side.
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
No header includes qemu-common.h after this commit, as prescribed by
qemu-common.h's file comment.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for
include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and
net/tap-bsd.c fixed up]
'postocpy' should be 'postcopy'.
CC: qemu-trivial@nongnu.org
Signed-off-by: Li Qiang <liq3ea@163.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190525062832.18009-1-liq3ea@163.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
During migration, we sync the bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
Since we set both RAMBlock.bmap and ram_list.dirty_memory to all 1s, this
means that in the first round this sync is meaningless and is duplicated
work.
Leaving RAMBlock->bmap blank on allocation would have a side effect on
migration_dirty_pages, since it is calculated from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
set migration_dirty_pages to 0 in ram_state_init().
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Currently, incoming migration through an fd supports only the command-line
case, e.g.:
fork();
fd = open();
exec("qemu ... -incoming fd:%d", fd);
It's possible to use the add-fd command to pass an fd for migration, but
that is an invalid case: add-fd works with fdsets but not with particular
fds.
To make getfd work with incoming defer, it's enough to use monitor_fd_param
instead of strtol. monitor_fd_param supports both cases:
* fd:123
* fd:fd_name (added by getfd).
The use of monitor_fd_param also improves error messages.
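A sketch of the shape of the change in migration/fd.c (monitor_fd_param()
and cur_mon are existing QEMU symbols, but the exact hunk and the function
signature shown here are assumptions):

    void fd_start_incoming_migration(const char *infd, Error **errp)
    {
        /* Accepts both "123" and an fd name registered earlier with getfd. */
        int fd = monitor_fd_param(cur_mon, infd, errp);

        if (fd == -1) {
            return;
        }
        /* ... wrap 'fd' in a QIOChannel and start the incoming coroutine ... */
    }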
Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Besides init and destroy, MultiFDSendParams.sem_sync is not really used.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This adds a new parameter to blk_new() which requires its callers to
declare from which AioContext this BlockBackend is going to be used (or
the locks of which AioContext need to be taken anyway).
The given context is only stored and kept up to date when changing
AioContexts. Actually applying the stored AioContext to the root node
is saved for another commit.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Shift from looking at every root BDS to *every* BDS. This will migrate
bitmaps that are attached to blockdev created nodes instead of just ones
attached to emulated storage devices.
Note that this will not migrate anonymous or internal-use bitmaps, as
those are defined as having no name.
This will also fix the Coverity issues Peter Maydell has been asking
about for the past several releases, as well as fixing a real bug.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Coverity 😅
Reported-by: aihua liang <aliang@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190514201926.10407-1-jsnow@redhat.com
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1652490
Fixes: Coverity CID 1390625
CC: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <155800428514.543845.17558475870097990036.stgit@bahia.lan>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190510233729.15554-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This fixes a heap-use-after-free that was found by clang's ASAN.
Control flow of the use-after-free:
main_thread:
* Got SIGTERM and completes main loop
* Calls migration_shutdown
- migrate_fd_cancel (so, migration_thread begins to complete)
- object_unref(OBJECT(current_migration));
migration_thread:
* migration_iteration_finish -> schedule cleanup bh
* object_unref(OBJECT(s)); (Now, current_migration is freed)
* exits
main_thread:
* Calls vm_shutdown -> drain bdrvs -> main loop
-> cleanup_bh -> use after free
If you want to reproduce, these couple of sleeps will help:
vl.c:4613:
migration_shutdown();
+ sleep(2);
migration.c:3269:
+ sleep(1);
trace_migration_thread_after_loop();
migration_iteration_finish(s);
Original output:
qemu-system-x86_64: terminating on signal 15 from pid 31980 (<unknown process>)
=================================================================
==31958==ERROR: AddressSanitizer: heap-use-after-free on address 0x61900001d210
at pc 0x555558a535ca bp 0x7fffffffb190 sp 0x7fffffffb188
READ of size 8 at 0x61900001d210 thread T0 (qemu-vm-0)
#0 0x555558a535c9 in migrate_fd_cleanup migration/migration.c:1502:23
#1 0x5555594fde0a in aio_bh_call util/async.c:90:5
#2 0x5555594fe522 in aio_bh_poll util/async.c:118:13
#3 0x555559524783 in aio_poll util/aio-posix.c:725:17
#4 0x555559504fb3 in aio_wait_bh_oneshot util/aio-wait.c:71:5
#5 0x5555573bddf6 in virtio_blk_data_plane_stop
hw/block/dataplane/virtio-blk.c:282:5
#6 0x5555589d5c09 in virtio_bus_stop_ioeventfd hw/virtio/virtio-bus.c:246:9
#7 0x5555589e9917 in virtio_pci_stop_ioeventfd hw/virtio/virtio-pci.c:287:5
#8 0x5555589e22bf in virtio_pci_vmstate_change hw/virtio/virtio-pci.c:1072:9
#9 0x555557628931 in virtio_vmstate_change hw/virtio/virtio.c:2257:9
#10 0x555557c36713 in vm_state_notify vl.c:1605:9
#11 0x55555716ef53 in do_vm_stop cpus.c:1074:9
#12 0x55555716eeff in vm_shutdown cpus.c:1092:12
#13 0x555557c4283e in main vl.c:4617:5
#14 0x7fffdfdb482f in __libc_start_main
(/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
#15 0x555556ecb118 in _start (x86_64-softmmu/qemu-system-x86_64+0x1977118)
0x61900001d210 is located 144 bytes inside of 952-byte region
[0x61900001d180,0x61900001d538)
freed by thread T6 (live_migration) here:
#0 0x555556f76782 in __interceptor_free
/tmp/final/llvm.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:124:3
#1 0x555558d5fa94 in object_finalize qom/object.c:618:9
#2 0x555558d57651 in object_unref qom/object.c:1068:9
#3 0x555558a55588 in migration_thread migration/migration.c:3272:5
#4 0x5555595393f2 in qemu_thread_start util/qemu-thread-posix.c:502:9
#5 0x7fffe057f6b9 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76b9)
previously allocated by thread T0 (qemu-vm-0) here:
#0 0x555556f76b03 in __interceptor_malloc
/tmp/final/llvm.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:146:3
#1 0x7ffff6ee37b8 in g_malloc (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x4f7b8)
#2 0x555558d58031 in object_new qom/object.c:640:12
#3 0x555558a31f21 in migration_object_init migration/migration.c:139:25
#4 0x555557c41398 in main vl.c:4320:5
#5 0x7fffdfdb482f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
Thread T6 (live_migration) created by T0 (qemu-vm-0) here:
#0 0x555556f5f0dd in pthread_create
/tmp/final/llvm.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:210:3
#1 0x555559538cf9 in qemu_thread_create util/qemu-thread-posix.c:539:11
#2 0x555558a53304 in migrate_fd_connect migration/migration.c:3332:5
#3 0x555558a72bd8 in migration_channel_connect migration/channel.c:92:5
#4 0x555558a6ef87 in exec_start_outgoing_migration migration/exec.c:42:5
#5 0x555558a4f3c2 in qmp_migrate migration/migration.c:1922:9
#6 0x555558bb4f6a in qmp_marshal_migrate qapi/qapi-commands-migration.c:607:5
#7 0x555559363738 in do_qmp_dispatch qapi/qmp-dispatch.c:131:5
#8 0x555559362a15 in qmp_dispatch qapi/qmp-dispatch.c:174:11
#9 0x5555571bac15 in monitor_qmp_dispatch monitor.c:4124:11
#10 0x55555719a22d in monitor_qmp_bh_dispatcher monitor.c:4207:9
#11 0x5555594fde0a in aio_bh_call util/async.c:90:5
#12 0x5555594fe522 in aio_bh_poll util/async.c:118:13
#13 0x5555595201e0 in aio_dispatch util/aio-posix.c:460:5
#14 0x555559503553 in aio_ctx_dispatch util/async.c:261:5
#15 0x7ffff6ede196 in g_main_context_dispatch
(/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x4a196)
SUMMARY: AddressSanitizer: heap-use-after-free migration/migration.c:1502:23
in migrate_fd_cleanup
Shadow bytes around the buggy address:
0x0c327fffb9f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c327fffba30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c327fffba40: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba50: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba60: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba70: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fffba90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==31958==ABORTING
Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Message-Id: <20190408113343.2370-1-yury-kotov@yandex-team.ru>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fixed up comment formatting
On the source side, we have qemu_savevm_state_header() to send the related
data, while on the receiving side those steps are scattered in
qemu_loadvm_state().
This patch wraps those related steps into qemu_loadvm_state_header() to
make it easier to read.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190424004700.12766-5-richardw.yang@linux.intel.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
In migration_thread() and qemu_savevm_state(), we save the vm state in the
following sequence:
qemu_savevm_state_header(f);
qemu_savevm_state_setup(f);
It is therefore more proper to load the vm state in the same sequence it
was saved.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190424004700.12766-4-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The current call flow of save_snapshot is:
save_snapshot
    migration_is_blocked
    qemu_savevm_state
        migration_is_blocked
Since qemu_savevm_state is only called from save_snapshot, this means
migration_is_blocked has already been checked.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190424004700.12766-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Since the ram bitmap and the unsent bitmap are split by RAMBlock
in commit 6b6712e, it's better to update the comments about them.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Message-Id: <1555311089-18610-1-git-send-email-wang.yi59@zte.com.cn>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We can eliminate passing 0.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190430034412.12935-2-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
colo_do_failover() does not need the input parameter.
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Message-Id: <20190426090730.2691-2-chen.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>