The coroutine files are currently referenced by the block-obj-y
variable. The coroutine functionality, though, is already used by
more than just the block code; e.g. the migration code uses coroutine
yield. In the future the I/O channel code will also use the
coroutine yield functionality. Since the coroutine code is nicely
self-contained it can be easily built as part of the libqemuutil.a
library, making it widely available.
The headers are also moved into include/qemu, instead of the
include/block directory, since they are now part of the util
codebase, and the implementation was never in the block/ directory
either.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Release the QEMU global mutex before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section in which we try
to take the QEMU global mutex (QGM): it is in address_space_rw(),
where prepare_mmio_access() tries to acquire the QGM.
Both functions (migration_end() and migration_bitmap_extend())
are called from the main thread, which is holding the QGM.
Thus there is a race condition that ends up in a deadlock:
    main thread                          working thread
    Lock QGM                                   |
       |                           Call KVM_EXIT_IO handler
       |                                       |
       |                           Open RCU reader's critical section
    Migration cleanup bh                       |
       |                                       |
    synchronize_rcu() is                       |
    waiting for readers                        |
       |                       prepare_mmio_access() is waiting for QGM
        \                                     /
                          deadlock
The patch changes the bitmap freeing from a direct g_free() after
synchronize_rcu() to freeing inside call_rcu().
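As an illustration of the pattern (the struct and function names here are
assumptions, not necessarily those used in the patch), the bitmap gains an
rcu_head wrapper and the free is deferred via QEMU's call_rcu() macro:

    #include <glib.h>
    #include "qemu/rcu.h"            /* call_rcu(), struct rcu_head */

    struct BitmapRcu {               /* wrapper so the bitmap carries an rcu_head */
        struct rcu_head rcu;
        unsigned long *bmap;
    };

    static void migration_bitmap_free(struct BitmapRcu *bmap)
    {
        g_free(bmap->bmap);
        g_free(bmap);
    }

    static void drop_bitmap(struct BitmapRcu *bitmap)
    {
        /* Instead of synchronize_rcu(); g_free(...), defer the free: */
        call_rcu(bitmap, migration_bitmap_free, rcu);
    }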
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reported-by: Igor Redko <redkoi@virtuozzo.com>
Tested-by: Igor Redko <redkoi@virtuozzo.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
CC: Anna Melekhova <annam@virtuozzo.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Wen Congyang <wency@cn.fujitsu.com>
We were announcing the dest host's IP as our new IP a bit too soon -- if
errors were detected after this announcement was done, the
migration failed and the VM could continue running on the src host --
causing problems later.
Move around the qemu_announce_self() call so it's done just before the
VM is runnable.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The current migration-completed event is generated a bit too early,
which means that an eager libvirt that's ready to go as soon
as it sees the event ends up racing with the actual end of migration.
This corresponds to RH bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1271145
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Migration has a define for MAX_THROTTLE. Update comment to clarify that this is
used for throttling transfer speed. Hopefully this will prevent it from being
confused with a guest cpu throttling entity.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Report throttle percentage in info migrate and query-migrate responses when
cpu throttling is active.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Remove traditional auto-converge static 30ms throttling code and replace it
with a dynamic throttling algorithm.
Additionally, be more aggressive when deciding when to start throttling.
Previously we waited for four unproductive memory passes. Now we begin
throttling after only two unproductive memory passes. Four seemed quite
arbitrary and only waiting for two passes allows us to complete the migration
faster.
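Schematically, the dynamic throttling amounts to something like the sketch
below (illustrative only; the real code feeds the percentage to QEMU's vCPU
throttling interface, and the names here are assumptions):

    static int throttle_pct;                   /* current guest CPU throttle % */

    static void mig_throttle_guest_down(int pct_initial, int pct_increment)
    {
        if (throttle_pct == 0) {
            throttle_pct = pct_initial;        /* start throttling */
        } else if (throttle_pct + pct_increment < 100) {
            throttle_pct += pct_increment;     /* keep ramping while unproductive */
        }
        /* the real code hands throttle_pct to the vCPU throttling code */
    }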
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Add migration parameters to allow the user to adjust the parameters
that control cpu throttling when auto-converge is in effect. The added
parameters are as follows:
x-cpu-throttle-initial : Initial percentage of time guest CPUs are throttled
when migration auto-converge is activated.
x-cpu-throttle-increment: throttle percentage increase applied each time
auto-converge detects that migration is not making progress.
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Split out the finding of the dirty page and all the wrap detection
into a separate function since it was getting a bit hairy.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1443018431-11170-3-git-send-email-dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
[Fix comment -- Amit]
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Pull the search state for one iteration of the dirty page
search into a structure.
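The structure is roughly of this shape (field names are an assumption based
on the description; RAMBlock and ram_addr_t are QEMU-internal types):

    struct PageSearchStatus {
        RAMBlock   *block;          /* block currently being searched */
        ram_addr_t  offset;         /* offset within that block */
        bool        complete_round; /* have we wrapped around the block list? */
    };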
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1443018431-11170-2-git-send-email-dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
This commit only touches allocations with size arguments of the form
sizeof(T). Same Coccinelle semantic patch as in commit b45c03f.
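For illustration (hypothetical type and caller), the two forms compare as
follows:

    #include <glib.h>

    typedef struct Foo { int a, b; } Foo;      /* hypothetical type */

    static Foo *alloc_foos(gsize n)
    {
        /* Before: manual multiply, returns void *
         *     Foo *v = g_malloc(sizeof(Foo) * n);
         * After: overflow-checked, returns Foo *          */
        return g_new(Foo, n);
    }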
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1442231491-23352-1-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
This time convert the external functions:
qemu_get_buffer, qemu_peek_buffer
qemu_put_buffer and qemu_put_buffer_async
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439463094-5394-6-git-send-email-dgilbert@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
This is a start on using size_t more in qemu-file and friends;
it fixes up QEMUFilePutBufferFunc and QEMUFileGetBufferFunc
to take size_t lengths and return ssize_t return values (like read(2))
and fixes up all the different implementations of them.
Note that I've not yet followed this deeply into bdrv_ implementations.
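A sketch of the resulting callback types (the exact parameter lists shown
here are an assumption based on the description):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/types.h>   /* ssize_t */

    /* Buffer lengths become size_t; results are ssize_t, like read(2). */
    typedef ssize_t (QEMUFilePutBufferFunc)(void *opaque, const uint8_t *buf,
                                            int64_t pos, size_t size);
    typedef ssize_t (QEMUFileGetBufferFunc)(void *opaque, uint8_t *buf,
                                            int64_t pos, size_t size);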
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439463094-5394-5-git-send-email-dgilbert@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
The code that gets run at the end of the migration process
is getting large, and I'm about to add more for postcopy.
Split it into a separate function.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439463094-5394-3-git-send-email-dgilbert@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
RAM migration mainly works on RAMBlocks but in a few places
uses data from MemoryRegions to access the same information that's
already held in RAMBlocks; clean it up just to avoid the
MemoryRegion use.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439463094-5394-2-git-send-email-dgilbert@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
The free() and g_free() functions both happily accept
NULL on any platform QEMU builds on. As such putting a
conditional 'if (foo)' check before calls to 'free(foo)'
merely serves to bloat the lines of code.
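In other words (sketch):

    #include <glib.h>

    static void cleanup(char *foo)
    {
        /* Before:
         *     if (foo) {
         *         g_free(foo);
         *     }
         * After -- g_free(NULL), like free(NULL), is a safe no-op:
         */
        g_free(foo);
    }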
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Many source files have doubled words (eg "the the", "to to",
and so on). Most of these can simply be removed, but a couple
were actual mis-spellings (eg "to to" instead of "to do").
There was even one triple word score "to to to" :-)
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
When doing migration via the QMP command xen_save_devices_state, the
current runstate is not stored into the global state section. Also, the
current runstate is not the one we want on the receiver side.
During migration, the Xen toolstack pauses QEMU before saving the devices'
state. The toolstack also expects QEMU to autostart when the migration is
finished.
So this patch stores "running" as the current runstate.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
The error checks I added used 'break' after the error, but I'm
in a switch inside the while loop, so they need to be 'goto out'.
Spotted by Coverity; entries 1311368 and 1311369.
Fixes: afcddefd
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1436555332-19076-1-git-send-email-dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit df4b102452 introduced the global_state
section. But it only filled in the state while doing migration. While
doing a savevm, we stored an empty string as the state. So when we did a
loadvm, it complained that the state was invalid.
Fedora 21, 4.1.1, qemu 2.4.0-rc0
> ../../configure --target-list="x86_64-softmmu"
068 2s ... - output mismatch (see 068.out.bad)
--- /home/bos/jhuston/src/qemu/tests/qemu-iotests/068.out 2015-07-08
17:56:18.588164979 -0400
+++ 068.out.bad 2015-07-09 17:39:58.636651317 -0400
@@ -6,6 +6,8 @@
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm 0
(qemu) quit
+qemu-system-x86_64: Unknown savevm section or instance 'globalstate' 0
+qemu-system-x86_64: Error -22 while loading VM state
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) quit
*** done
Failures: 068
Failed 1 of 1 tests
Actually, there were two problems here:
- we registered global_state too late for loadvm (fixed by another
patch on the list)
- we didn't store a valid state for savevm (fixed by this patch).
Reported-by: John Snow <jsnow@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
'strlen' is called three times in 'save_page_header', which is
inefficient.
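The shape of the fix -- compute the length once and reuse it -- is roughly
(illustrative, not the exact QEMU code):

    #include <stdint.h>
    #include <string.h>

    /* Write a length-prefixed block name into buf, calling strlen() once. */
    static size_t put_block_name(uint8_t *buf, const char *idstr)
    {
        size_t len = strlen(idstr);

        buf[0] = (uint8_t)len;
        memcpy(buf + 1, idstr, len);
        return 1 + len;
    }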
Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We may want the trace event even without migration events enabled.
Reported-by: Wen Congyang <ghostwcy@gmail.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
In a previous change, we changed the state at post-load time if it was not
running, special-casing the "running" change. Now we change any state
at the end of the migration.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
migration_end calls synchronize_rcu() within a critical section.
That causes a deadlock; move the call after rcu_read_unlock().
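The fix follows the usual RCU rule of never blocking in synchronize_rcu()
while inside a read-side critical section; schematically (the function name
here is illustrative):

    #include <glib.h>
    #include "qemu/rcu.h"

    static void cleanup_after_round(unsigned long *old_bitmap)
    {
        rcu_read_lock();
        /* ... walk the RCU-protected data ... */
        rcu_read_unlock();

        /* Only wait for other readers once we are no longer one ourselves. */
        synchronize_rcu();
        g_free(old_bitmap);
    }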
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These callers work on a single BlockDriverState subtree, where using
bdrv_drain() is more accurate.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Previously, if we hotplugged a device (e.g. device_add e1000) while
migration was in progress on the source side, QEMU would add a new RAM
block but migration_bitmap was not extended.
In this case, migration_bitmap would overflow and cause QEMU to abort
unexpectedly.
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The section footers check was incorrectly checking the section_id
in the SaveStateEntry not the LoadStateEntry. These can validly be different
if the two QEMU instances have instantiated their devices in a
different order. The test only cares that we're finishing the same
section we started, and hence it's the LoadStateEntry that we care about.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We reuse the migration events from the source side, sending them at the
appropriate place.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Make check fails with events. This is due to the parser/lexer that it
uses. Just in case there are more broken parsers out there, only send
events when the capability is set.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We have one argument that tells us what event has happened.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We now use the helper everywhere, so there is no need to call this in these
two places. See in the previous commit that there was a place where we missed
marking the trace. Now all tracing is done in migrate_set_state().
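For reference, the helper is roughly of this shape (a sketch assuming QEMU's
atomic_cmpxchg() and the migration trace/event helpers; not necessarily the
exact implementation):

    static void migrate_set_state(MigrationState *s, int old_state, int new_state)
    {
        /* Only the thread that wins the compare-and-swap traces
         * and reports the state transition. */
        if (atomic_cmpxchg(&s->state, old_state, new_state) == old_state) {
            trace_migrate_set_state(new_state);
            migrate_generate_event(new_state);
        }
    }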
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There were three places that were not using the migrate_set_state()
helper, just fix that.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
It needs to be the first one and it is not optional; that is the reason
why it is open-coded. For new machine types, it is required that the machine
type name is the same on both sides.
Right now this is only done for PC machine types.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
To make sections optional, we need to do it at the beginning of the code.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This section will be sent:
a- for all new machine types
b- for old machine types if the section state is different from {running,paused},
which were the only ones giving us trouble.
So, in new QEMUs it is always there. In old QEMUs it is only there
if an error has happened, basically stopping on the target.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This includes a new section that for now just stores the current qemu state.
Right now, there is only one way to control the state of the
target after migration:
- If you run the target qemu with -S, it starts stopped.
- If you run the target qemu without -S, it runs just after migration finishes.
The problem here is what happens if we start the target without -S and
an error occurs during migration that sets the current state to
-EIO. Migration ends (notice that the error happened doing block
I/O, network I/O, i.e. nothing related to migration), and when
migration finishes, we would just "continue" running on the destination,
probably hanging the guest or corrupting data.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
If the number of RAMBlocks was different on the source from the
destination, QEMU would hang waiting for a disconnect on the source
and wouldn't release from that hang until the destination was manually
killed.
Mark the stream as being in error, this causes the destination to die
and the source to carry on.
(It still gets a whole bunch of warnings on the destination, and I've
not managed to complete another migration after the first one, but it's
still progress.)
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Perform some basic (but probably not complete) sanity checking on
requests from the RDMA source.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Use the order of incoming RAMBlocks from the source to record
an index number; that then allows us to sort the destination
local RAMBlock list to match the source.
Now that the RAMBlocks are known to be in the same order, this
simplifies the RDMA Registration step which previously tried to
match RAMBlocks based on offset (which isn't guaranteed to match).
Looking at the existing compress code, I think it was erroneously
relying on an assumption of matching ordering, which this fixes.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
RDMA uses a hash from block offset->RAM Block; this isn't needed
on the destination, and it becomes harder to maintain after the next
patch in the series that sorts the block list.
Split the hash so that it's only generated on the source.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
In the next patch we remove the hash on the destination.
rdma_delete_block does two things with the hash which can be avoided:
a) The caller passes the offset and rdma_delete_block looks it up
in the hash; fixed by getting the caller to pass the block
b) The hash gets recreated after deletion; fixed by making that
conditional on the hash being initialised.
While this function is currently only used during cleanup, Michael
asked that we keep it general for future dynamic block registration
work.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We need the names of RAMBlocks as they're loaded for RDMA,
reuse a slightly modified ram_control_load_hook:
a) Pass a 'data' parameter to use for the name in the block-reg
case
b) Only some hook types now require the presence of a hook function.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The 'offset' field in RDMACompress and 'current_addr' field
in RDMARegister are commented as being offsets within a particular
RAMBlock, however they appear to actually be offsets within the
ram_addr_t space.
The code currently assumes that the offsets on the source/destination
match, this change removes the need for the assumption for these
structures by translating the addresses into the ram_addr_t space of
the destination host.
Note: An alternative would be to change the fields to actually
take the data they're commented for; this would potentially be
simpler but would break stream compatibility for those cases
that currently work.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
In a later patch the block name will be used to match up two views
of the block list. Keep a copy of the block name with the local block
list.
(At some point it could be argued that it would be best just to let
migration see the innards of RAMBlock and avoid the need to use
foreach).
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>