Commit Graph

385 Commits

Author SHA1 Message Date
Chao Fan
1ffb5dfd35 Change the method to calculate dirty-pages-rate
In function cpu_physical_memory_sync_dirty_bitmap, file
include/exec/ram_addr.h:

if (src[idx][offset]) {
    /* Fetch and clear the source dirty bits atomically. */
    unsigned long bits = atomic_xchg(&src[idx][offset], 0);
    unsigned long new_dirty;
    new_dirty = ~dest[k];           /* bits not yet set in dest */
    dest[k] |= bits;                /* merge the fetched bits into dest */
    new_dirty &= bits;              /* keep only bits newly set in dest */
    num_dirty += ctpopl(new_dirty); /* count just those new bits */
}

After this code executes, only the pages that are dirty in
dirty_memory[DIRTY_MEMORY_MIGRATION] but not yet dirty in the bitmap (dest)
are counted.
For example:
When ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] = 0b00001111,
and atomic_rcu_read(&migration_bitmap_rcu)->bmap = 0b00000011,
new_dirty will be 0b00001100, and this function returns 2 instead of the
expected 4.
The dirty pages in dirty_memory[DIRTY_MEMORY_MIGRATION] are all newly
dirtied since the last sync, so they should all be counted.

Signed-off-by: Chao Fan <fanc.fnst@cn.fujitsu.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2017-03-16 08:55:56 +01:00
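A stand-alone C sketch of the counting difference described above, with
GCC/Clang's __builtin_popcountl standing in for QEMU's ctpopl and a plain
read/clear standing in for atomic_xchg:

#include <stdio.h>

int main(void)
{
    unsigned long src  = 0x0f; /* dirty_memory[DIRTY_MEMORY_MIGRATION] = 0b00001111 */
    unsigned long dest = 0x03; /* migration_bitmap_rcu->bmap           = 0b00000011 */

    unsigned long bits = src;  /* what atomic_xchg(&src, 0) would return */
    unsigned long new_dirty = ~dest & bits;

    /* Old method: count only pages newly dirtied relative to dest. */
    printf("old num_dirty = %d\n", __builtin_popcountl(new_dirty)); /* 2 */

    /* New method: count every page dirtied since the last sync. */
    printf("new num_dirty = %d\n", __builtin_popcountl(bits));      /* 4 */

    dest |= bits;              /* dest still receives all the dirty bits */
    return 0;
}
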
Dr. David Alan Gilbert
4c011c37ec postcopy: Send whole huge pages
The RAM save code uses ram_save_host_page to send whole
host pages at a time; change this to use the host page size associated
with the RAMBlock, which may be a huge page.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170224182844.32452-12-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:24 +00:00
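A minimal sketch of the idea (names and the send callback are placeholders,
not the QEMU functions): keep sending TARGET_PAGE_SIZE chunks until the
offset crosses the next boundary of the block's page size, which may be a
huge page.

#include <inttypes.h>
#include <stdio.h>

#define TARGET_PAGE_SIZE 4096

static void send_chunk(uint64_t offset)
{
    printf("send chunk at offset 0x%" PRIx64 "\n", offset);
}

/* Send every target-size chunk of the block page containing 'offset'. */
static uint64_t save_block_page(uint64_t offset, uint64_t block_page_size)
{
    uint64_t end = (offset | (block_page_size - 1)) + 1; /* next boundary */

    do {
        send_chunk(offset);
        offset += TARGET_PAGE_SIZE;
    } while (offset < end);

    return offset;
}

int main(void)
{
    save_block_page(0, 2 * 1024 * 1024);  /* a block backed by 2MB huge pages */
    return 0;
}
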
Dr. David Alan Gilbert
28abd20014 postcopy: Load huge pages in one go
The existing postcopy RAM load loop already ensures that it
glues together whole host-pages from the target page size chunks sent
over the wire.  Modify the definition of host page that it uses
to be the RAM block page size and thus be huge pages where appropriate.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170224182844.32452-10-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:23 +00:00
Dr. David Alan Gilbert
df9ff5e1e3 postcopy: Plumb pagesize down into place helpers
Now that we deal with both normal-size pages and huge pages, we need
to tell the place handlers the size we're dealing with
and make sure the temporary page is large enough.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170224182844.32452-8-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:23 +00:00
Dr. David Alan Gilbert
d3a5038c46 exec: ram_block_discard_range
Create ram_block_discard_range in exec.c to replace
postcopy_ram_discard_range and most of ram_discard_range.

Those two routines are a bit of a weird combination, and
ram_discard_range is about to get more complex for hugepages.
It's OS-dependent code (so it shouldn't be in migration/ram.c), but
it needs quite a bit of the innards of RAMBlock, so it doesn't belong in
os*.c either.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170224182844.32452-5-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:23 +00:00
Dr. David Alan Gilbert
29c5917201 postcopy: Chunk discards for hugepages
At the start of the postcopy phase, partially sent huge pages
must be discarded.  The code for dealing with host page sizes larger
than the target page size can be reused for this case.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170224182844.32452-4-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:23 +00:00
Dr. David Alan Gilbert
ef08fb389f postcopy: Transmit and compare individual page sizes
When using postcopy with hugepages, we require the source
and destination page sizes for any RAMBlock to match; note
that different RAMBlocks in the same VM can have different
page sizes.

Transmit them as part of the RAM information header and
fail if there's a difference.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20170224182844.32452-3-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:23 +00:00
Dr. David Alan Gilbert
e8ca1db29b postcopy: Transmit ram size summary word
Replace the host page size in the 'advise' command with a page-size
summary bitmap; if the VM is just using normal RAM then
this will be exactly the same as before, however if it is using
huge pages the values will differ, and thus:
   a) Migration from/to old QEMUs that don't understand huge pages
      will fail early.
   b) Migrations with different-sized RAMBlocks will also fail early.

This catches it very early; earlier than the detailed per-block
check in the next patch.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20170224182844.32452-2-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-28 11:30:23 +00:00
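A hedged sketch of the summary-word idea: OR the page size of every RAM
block into a single value so that any source/destination difference (for
example 4K-only versus 4K plus 2M huge pages) shows up in one comparison.
The layout below is illustrative, not the exact wire format.

#include <stdint.h>
#include <stdio.h>

static uint64_t page_size_summary(const uint64_t *sizes, size_t n)
{
    uint64_t summary = 0;
    for (size_t i = 0; i < n; i++) {
        summary |= sizes[i];      /* page sizes are powers of two */
    }
    return summary;
}

int main(void)
{
    uint64_t src_sizes[]  = { 4096, 2 * 1024 * 1024 }; /* 4K RAM + 2M hugepage block */
    uint64_t dest_sizes[] = { 4096 };                  /* destination lacks hugepages */

    uint64_t s = page_size_summary(src_sizes, 2);
    uint64_t d = page_size_summary(dest_sizes, 1);

    if (s != d) {
        fprintf(stderr, "page-size summary mismatch: %#llx vs %#llx\n",
                (unsigned long long)s, (unsigned long long)d);
        return 1;                 /* fail the migration early */
    }
    return 0;
}
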
Ashijeet Acharya
0827b9e97d migrate: Introduce zero RAM checks to skip RAM migration
Migration of a "none" machine with no RAM crashes abruptly as
bitmap_new() fails and thus aborts. Instead place zero RAM checks at
appropriate places to skip migration of RAM in this case and complete
migration successfully for devices only.

Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
Message-Id: <1486564125-31366-1-git-send-email-ashijeetacharya@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-13 17:27:13 +00:00
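The shape of the guard, in a stand-alone sketch (function and variable
names here are hypothetical, not the QEMU ones):

#include <stdbool.h>
#include <stdio.h>

static unsigned long ram_total_pages;   /* 0 for a "none" machine with no RAM */

static bool ram_save_setup_sketch(void)
{
    if (ram_total_pages == 0) {
        /* No RAM: skip bitmap allocation and RAM iteration entirely,
         * so only device state gets migrated. */
        printf("no RAM, migrating devices only\n");
        return true;
    }
    /* ... otherwise allocate a dirty bitmap of ram_total_pages bits ... */
    return true;
}

int main(void)
{
    return ram_save_setup_sketch() ? 0 : 1;
}
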
Pavel Butsykin
ced1c6166e migration: discard non-dirty ram pages after the start of postcopy
After the start of postcopy migration there are some non-dirty pages which have
already been migrated. These pages are no longer needed on the source VM, so
we can free them, and doing so doesn't hurt the completion of the migration.

Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Message-Id: <20170203152321.19739-4-pbutsykin@virtuozzo.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-13 17:27:13 +00:00
Pavel Butsykin
53f09a1076 add 'release-ram' migrate capability
This feature frees already-migrated memory on the source during postcopy-ram
migration. In the second step of postcopy-ram migration, when the source VM
is paused, we can free the memory that is no longer needed. In particular,
this allows the memory pressure on the source host to start being relieved
in a load-balancing scenario.

Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Message-Id: <20170203152321.19739-3-pbutsykin@virtuozzo.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
   Manually merged in Pavel's 'migration: madvise error_report fixup!'
2017-02-13 17:27:13 +00:00
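One plausible way to give already-sent pages back to the host is
madvise(MADV_DONTNEED); the sketch below is a stand-alone Linux example
with placeholder addresses and sizes, not the QEMU implementation.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void release_sent_range(void *host_addr, size_t len)
{
    /* Drop the backing pages; the kernel can reuse the memory. */
    if (madvise(host_addr, len, MADV_DONTNEED) < 0) {
        fprintf(stderr, "madvise(MADV_DONTNEED): %s\n", strerror(errno));
    }
}

int main(void)
{
    size_t len = 2 * 1024 * 1024;
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        return 1;
    }
    memset(buf, 0x55, len);        /* pretend these pages were migrated */
    release_sent_range(buf, len);  /* the source no longer needs them */
    munmap(buf, len);
    return 0;
}
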
Pavel Butsykin
9eb1476610 migration: add MigrationState arg for ram_save_/compressed_/page()
Cosmetic patch. Using the ms variable instead of migrate_get_current()
looks nicer, especially when it is reused.

Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Message-Id: <20170203152321.19739-2-pbutsykin@virtuozzo.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-02-13 17:27:13 +00:00
Juan Quintela
55c4446b8b migration: transform remaining DPRINTF into trace_
So we can remove the DPRINTF() macro.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1485207141-1941-2-git-send-email-quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Fixed up 'remained/remaining' as requested by Eric
2017-01-24 18:00:31 +00:00
Thomas Huth
5c90308f07 migration: Fix return code of ram_save_iterate()
qemu_savevm_state_iterate() expects the iterators to return 1
when they are done, and 0 if there is still something left to do.
However, ram_save_iterate() does not obey this rule and returns
the number of saved pages instead. This causes a fatal hang with
ppc64 guests when you run QEMU like this (also works with TCG):

 qemu-img create -f qcow2  /tmp/test.qcow2 1M
 qemu-system-ppc64 -nographic -nodefaults -m 256 \
                   -hda /tmp/test.qcow2 -serial mon:stdio

... then switch to the monitor by pressing CTRL-a c and try to
save a snapshot with "savevm test1" for example.

After the first iteration, ram_save_iterate() always returns 0 here,
so that qemu_savevm_state_iterate() hangs in an endless loop and you
can only "kill -9" the QEMU process.
Fix it by using proper return values in ram_save_iterate().

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2016-11-14 19:35:41 +01:00
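The return-value contract described above, reduced to a stand-alone sketch:
the iterator reports "done or not done" (and errors as negative values),
never the number of pages sent.

#include <stdio.h>

static long pages_left = 3;

static int ram_save_iterate_sketch(void)
{
    if (pages_left > 0) {
        pages_left--;            /* pretend one page was sent */
    }
    /* 1 = finished, 0 = call me again, negative = error. */
    return pages_left == 0;
}

int main(void)
{
    int done;
    do {
        done = ram_save_iterate_sketch();
        printf("iterate -> %d\n", done);
    } while (done == 0);
    return 0;
}
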
zhanghailiang
a91246c95f COLO: Send PVM state to secondary side when do checkpoint
VM checkpointing synchronizes the state of the PVM to the SVM, just
as migration does, so we re-use the save helpers to migrate the
PVM's state to the secondary side.

COLO needs to cache the VM state data on the secondary side before
synchronizing it to the SVM, and it needs the size of the data to determine
how much should be read on the secondary side.
So here, we obtain the size of the data by saving it into an I/O channel
before sending it to the secondary side.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit@amitshah.net>
2016-10-30 15:17:39 +05:30
Vijaya Kumar K
adb65dec4b migration: Remove static allocation of xzblre cache buffer
Allocate the xbzrle zero-page cache buffer dynamically.
Remove the dependency on TARGET_PAGE_SIZE to allow run-time
page size detection on ARM platforms.

Signed-off-by: Vijaya Kumar K <vijayak@cavium.com>
Message-id: 1465808915-4887-2-git-send-email-vijayak@caviumnetworks.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-10-24 16:26:49 +01:00
Dr. David Alan Gilbert
2ebeaec012 Postcopy vs xbzrle: Don't send xbzrle pages once in postcopy [for 2.8]
xbzrle relies on reading pages that have already been sent
to the destination and then applying the modifications; we can't
do that in postcopy because the destination may well have
modified the page already or the page has been discarded.

I already didn't allow reception of xbzrle pages, but I
forgot to add the test to stop them being sent.

Enabling both xbzrle and postcopy can make some sense;
if you think that your migration might finish if you
have xbzrle, then when it doesn't complete you flick
over to postcopy and stop xbzrle'ing.

This corresponds to RH bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1368422

Symptom is:

Unknown combination of migration flags: 0x60 (postcopy mode)
(either 0x60 or 0x40)

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2016-10-13 17:23:53 +02:00
Paolo Bonzini
9c1f8f4493 migration: sync all address spaces
Migrating a VM during reboot sometimes results in differences
between the source and destination in the SMRAM area.

This is because migration_bitmap_sync() only fetches from KVM
the dirty log of address_space_memory.  SMRAM memory slots
are ignored and the modifications to SMRAM are not sent to the
destination.

Reported-by: He Rongguang <herongguang.he@huawei.com>
Reviewed-by: He Rongguang <herongguang.he@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27 11:57:29 +02:00
Richard Henderson
a1febc4950 cutils: Export only buffer_is_zero
Since the two users don't make use of the returned offset,
beyond ensuring that the entire buffer is zero, consider the
can_use_buffer_find_nonzero_offset and buffer_find_nonzero_offset
functions internal.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1472496380-19706-4-git-send-email-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-13 19:09:45 +02:00
Cao jin
e110aa919a migration/ram: fix typo
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Message-Id: <1469776231-23820-1-git-send-email-caoj.fnst@cn.fujitsu.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-08-11 16:59:33 +05:30
Liang Li
0d9f9a5c52 migration: code clean up
Use 'QemuMutex comp_done_lock' and 'QemuCond comp_done_cond' instead
of 'QemuMutex *comp_done_lock' and 'QemuCond *comp_done_cond', to keep
them consistent with 'QemuMutex decomp_done_lock' and
'QemuCond decomp_done_cond'.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-10-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:31 +05:30
Liang Li
33d151f418 migration: refine the decompression code
The current code for multi-thread decompression is not clear,
especially in its use of locks. Refine the code
to make it clear.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-9-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:28 +05:30
Liang Li
a7a9a88f9d migration: refine the compression code
The current code for multi-thread compression is not clear,
especially in its use of locks. Refine the code
to make it clear.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-8-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:26 +05:30
Liang Li
90e56fb46d migration: protect the quit flag by lock
quit_comp_thread and quit_decomp_thread are accessed by several
threads, so it's better to protect them with locks. Use a per-thread
flag to replace the global one, and protect the new flag
with a lock.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-7-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:23 +05:30
Liang Li
fc50438ed0 migration: refine ram_save_compressed_page
Use qemu_put_compression_data to do the compression directly
instead of do_compress_ram_page, avoiding a data copy. It is a
very small improvement; at the same time, add code to check
whether the compression was successful.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-6-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:21 +05:30
Liang Li
b3be28969b qemu-file: Fix qemu_put_compression_data flaw
Currently qemu_put_compression_data only works with a non-writable
QEMUFile and can't work with a writable QEMUFile, but it does
not provide any safeguard to prevent users from using it with a
writable QEMUFile.

Fix this flaw so that it works with a writable QEMUFile.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Suggested-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1462433579-13691-5-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:18 +05:30
Liang Li
e7bb92e21a migration: remove useless code
page_buffer is set twice in a row; remove the first assignment.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1462433579-13691-4-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:14 +05:30
Liang Li
5533b2e9bc migration: Fix a potential issue
At the end of live migration, before vm_start() on the destination
side, we should make sure all the decompression tasks have finished. If
this cannot be guaranteed, the VM may see incorrect memory data,
or the updated memory may be overwritten by a decompression thread.
Add code to fix this potential issue.

Suggested-by: David Alan Gilbert <dgilbert@redhat.com>
Suggested-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-3-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:09 +05:30
Liang Li
73a8912b8a migration: Fix multi-thread compression bug
Recently, a bug related to the multi-thread compression feature for
live migration was reported. The destination side gets blocked
during live migration if there is a heavy workload on the host and a
memory-intensive workload in the guest; this is most likely to happen
when there is only one decompression thread.

Some parts of the decompression code are incorrect:
1. The main thread, which receives data from the source side, enters a busy
loop to wait for a free decompression thread.
2. A lock is needed to protect decomp_param[idx]->start, because
it is checked in the main thread and updated in the decompression
thread.

Fix these two issues by following the code pattern used for compression.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Reported-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Daniel P. Berrange <berrange@redhat.com>

Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1462433579-13691-2-git-send-email-liang.z.li@intel.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-17 18:24:01 +05:30
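A hedged pthread sketch of the pattern the fix moves to: the main thread
sleeps on a condition variable until a worker is free instead of
busy-looping, and the per-worker 'start' flag is only read or written under
that worker's lock. This mirrors the shape of the fix, not the exact QEMU
code.

#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool start;                  /* work has been handed to this worker */
    bool done;                   /* worker is idle and ready for more */
} Worker;

/* Main thread: hand work to the worker, sleeping until it is free. */
static void dispatch(Worker *w)
{
    pthread_mutex_lock(&w->lock);
    while (!w->done) {
        pthread_cond_wait(&w->cond, &w->lock);   /* no busy loop */
    }
    w->done = false;
    w->start = true;                             /* updated under the lock */
    pthread_cond_signal(&w->cond);
    pthread_mutex_unlock(&w->lock);
}

/* Worker: wait for 'start', do the work, then announce 'done'. */
static void *worker_thread(void *opaque)
{
    Worker *w = opaque;
    for (;;) {
        pthread_mutex_lock(&w->lock);
        while (!w->start) {
            pthread_cond_wait(&w->cond, &w->lock);
        }
        w->start = false;
        pthread_mutex_unlock(&w->lock);

        /* ... decompress the handed-over buffer here ... */

        pthread_mutex_lock(&w->lock);
        w->done = true;
        pthread_cond_signal(&w->cond);
        pthread_mutex_unlock(&w->lock);
    }
    return NULL;
}

int main(void)
{
    Worker w = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                 false, true };
    pthread_t tid;

    pthread_create(&tid, NULL, worker_thread, &w);
    dispatch(&w);                /* hand over one "buffer" */
    dispatch(&w);                /* blocks until the first one is done */
    return 0;
}
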
Dr. David Alan Gilbert
d3bf5418e2 Postcopy: Add stats on page requests
On the source, add a count of page requests received from the
destination.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <1465816605-29488-4-git-send-email-dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-16 09:50:07 +05:30
Dr. David Alan Gilbert
d688c62d09 Postcopy: Avoid 0 length discards
The discard code in migration/ram.c would send a request for a
zero-length discard in cases where no discard was needed.
It doesn't appear to have had any bad effect.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <1465816605-29488-2-git-send-email-dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-06-16 09:50:07 +05:30
Daniel P. Berrange
2594f56d4c migration: don't use an array for storing migrate parameters
The MigrateState struct uses an array for storing migration
parameters. This presumes that all future parameters will
be integers too, which is not going to be the case. There
is no functional reason why an array is used; if anything,
it makes the code less clear. The QAPI schema already
defines a struct - MigrationParameters - capable of storing
all the individual parameters, so just use that instead of
an array.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1461751518-12128-25-git-send-email-berrange@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-05-26 11:32:07 +05:30
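The change in shape, sketched with made-up field names rather than the
exact QAPI-generated ones: a named struct replaces an int array indexed by
a parameter enum, so non-integer parameters fit naturally.

#include <stdio.h>

/* Before: something like  int parameters[MIGRATION_PARAMETER__MAX];  */

typedef struct {
    int   compress_level;
    int   compress_threads;
    int   decompress_threads;
    int   cpu_throttle_initial;
    char *tls_creds;             /* a future non-integer parameter */
} MigrationParametersSketch;

int main(void)
{
    MigrationParametersSketch p = { 1, 8, 2, 20, NULL };
    printf("compress_level=%d threads=%d\n", p.compress_level, p.compress_threads);
    return 0;
}
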
Peter Maydell
99694362ee migration fixes:
- ensure src block devices continue fine after a failed migration
 - fail on migration blockers; helps 9p savevm/loadvm
 - move autoconverge commands out of experimental state
 - move the migration-specific qjson in migration/

Merge remote-tracking branch 'remotes/amit-migration/tags/migration-2.7-1' into staging

migration fixes:

- ensure src block devices continue fine after a failed migration
- fail on migration blockers; helps 9p savevm/loadvm
- move autoconverge commands out of experimental state
- move the migration-specific qjson in migration/

# gpg: Signature made Mon 23 May 2016 18:15:09 BST using RSA key ID 657EF670
# gpg: Good signature from "Amit Shah <amit@amitshah.net>"
# gpg:                 aka "Amit Shah <amit@kernel.org>"
# gpg:                 aka "Amit Shah <amitshah@gmx.net>"

* remotes/amit-migration/tags/migration-2.7-1:
  migration: regain control of images when migration fails to complete
  savevm: fail if migration blockers are present
  migration: Promote improved autoconverge commands out of experimental state
  migration/qjson: Drop gratuitous use of QOM
  migration: Move qjson.[ch] to migration/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-05-24 12:21:07 +01:00
Gonglei
fa53a0e53e memory: drop find_ram_block()
On the one hand, we already have qemu_get_ram_block(), whose function
is similar. On the other hand, we can use mr->ram_block directly;
searching for the RAMBlock by ram_addr is a waste.

Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-Id: <1462845901-89716-2-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-23 16:53:44 +02:00
Jason J. Herne
d85a31d1f4 migration: Promote improved autoconverge commands out of experimental state
The new autoconverge throttling commands have been tested for a release now. It
is time to move them out of the experimental state.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Message-Id: <1461262038-8197-1-git-send-email-jjherne@linux.vnet.ibm.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-05-23 16:05:09 +05:30
Paolo Bonzini
33c11879fd qemu-common: push cpu.h inclusion out of qemu-common.h
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19 16:42:29 +02:00
Stefan Weil
cb8d4c8f54 Fix some typos found by codespell
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-05-18 15:04:27 +03:00
Veronia Bahaa
f348b6d1a5 util: move declarations out of qemu-common.h
Move declarations out of qemu-common.h for functions declared in
utils/ files: e.g. include/qemu/path.h for utils/path.c.
Move inline functions out of qemu-common.h and into new files (e.g.
include/qemu/bcd.h)

Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-22 22:20:17 +01:00
Paolo Bonzini
4987783400 migration: fix incorrect memory_global_dirty_log_start outside BQL
This can cause various segmentation faults or aborts in qemu-iotests
test 091.

Fixes: 5b82b703b6
Cc: Dave Gilbert <dgilbert@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-02-16 15:34:43 +01:00
Stefan Hajnoczi
5b82b703b6 memory: RCU ram_list.dirty_memory[] for safe RAM hotplug
Although accesses to ram_list.dirty_memory[] use atomics so multiple
threads can safely dirty the bitmap, the data structure is not fully
thread-safe yet.

This patch handles the RAM hotplug case where ram_list.dirty_memory[] is
grown.  ram_list.dirty_memory[] is changed from a regular bitmap to an
RCU array of pointers to fixed-size bitmap blocks.  Threads can continue
accessing bitmap blocks while the array is being extended.  See the
comments in the code for an in-depth explanation of struct
DirtyMemoryBlocks.

I have tested that live migration with virtio-blk dataplane works.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1453728801-5398-2-git-send-email-stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-02-09 15:45:26 +01:00
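A rough, stand-alone approximation of the structure described above (field
names and block size assumed): dirty memory is reached through an array of
pointers to fixed-size bitmap blocks, and growing it means publishing a
larger array that reuses the existing block pointers. In QEMU the
publish/free steps use atomic_rcu_set and call_rcu; here they are ordinary
assignments.

#include <stdlib.h>

#define BITS_PER_BLOCK  (256 * 1024 * 8)   /* fixed block size in bits (assumed) */
#define LONGS_PER_BLOCK (BITS_PER_BLOCK / (8 * sizeof(unsigned long)))

typedef struct {
    unsigned long num_blocks;
    unsigned long *blocks[];     /* each entry is one fixed-size bitmap block */
} DirtyMemoryBlocksSketch;

/* Grow: build a new array, reuse old block pointers, add zeroed new blocks. */
static DirtyMemoryBlocksSketch *grow(DirtyMemoryBlocksSketch *old,
                                     unsigned long new_num_blocks)
{
    DirtyMemoryBlocksSketch *new_blocks =
        malloc(sizeof(*new_blocks) + new_num_blocks * sizeof(unsigned long *));
    unsigned long i, old_num = old ? old->num_blocks : 0;

    new_blocks->num_blocks = new_num_blocks;
    for (i = 0; i < old_num; i++) {
        new_blocks->blocks[i] = old->blocks[i];   /* existing blocks are reused */
    }
    for (i = old_num; i < new_num_blocks; i++) {
        new_blocks->blocks[i] = calloc(LONGS_PER_BLOCK, sizeof(unsigned long));
    }
    /* In QEMU: atomic_rcu_set() the new array, call_rcu() to free the old. */
    free(old);
    return new_blocks;
}

int main(void)
{
    DirtyMemoryBlocksSketch *d = grow(NULL, 4);   /* initial RAM */
    d = grow(d, 6);                               /* RAM hotplug extends it */
    free(d);
    return 0;
}
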
zhanghailiang
a08f689034 migration/ram: Fix some helper functions' parameter to use PageSearchStatus
Some helper functions take the parameters 'RAMBlock *block' and 'ram_addr_t *offset'.
We can pass 'PageSearchStatus *pss' directly instead. With this change, we
reduce the number of parameters of these helper functions, and it also
becomes easier to add new parameters to them.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1452829066-9764-5-git-send-email-zhang.zhanghailiang@huawei.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-02-05 19:09:50 +05:30
zhanghailiang
4c4bad4861 ram: Split host_from_stream_offset() into two helper functions
Split host_from_stream_offset() into two parts:
one gets the RAM block (whose idstr may be read from the migration
stream), and the other gets the HVA (host address) from the block and the offset.
In addition, move the range check into a new helper, offset_in_ramblock().

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1452829066-9764-2-git-send-email-zhang.zhanghailiang@huawei.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-02-05 19:09:50 +05:30
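A self-contained approximation of the split (types and names simplified):
first resolve the block, then check the offset against that block before
turning it into a host address.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    const char *idstr;
    uint8_t    *host;
    uint64_t    used_length;
} RAMBlockSketch;

static bool offset_in_ramblock(const RAMBlockSketch *block, uint64_t offset)
{
    return block && offset < block->used_length;
}

static void *host_from_ram_block_offset(RAMBlockSketch *block, uint64_t offset)
{
    if (!offset_in_ramblock(block, offset)) {
        return NULL;             /* bad block or offset past the end */
    }
    return block->host + offset;
}

int main(void)
{
    static uint8_t backing[4096];
    RAMBlockSketch blk = { "pc.ram", backing, sizeof(backing) };

    /* An offset beyond the block is rejected instead of producing a bad HVA. */
    return host_from_ram_block_offset(&blk, 8192) == NULL ? 0 : 1;
}
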
Peter Maydell
1393a48526 migration: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-2-git-send-email-peter.maydell@linaro.org
2016-01-29 15:07:22 +00:00
Peter Maydell
d341d9f306 fpu: Replace uint8 typedef with uint8_t
Replace the uint8 softfloat-specific typedef with uint8_t.
This change was made with

find include hw fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\buint8\b/uint8_t/g'

together with manual removal of the typedef definition and
manual fixing of more erroneous uses found via test compilation.

It turns out that the only code using this type is an accidental
use where uint8_t was intended anyway...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Leon Alrae <leon.alrae@imgtec.com>
Acked-by: James Hogan <james.hogan@imgtec.com>
Message-id: 1452603315-27030-7-git-send-email-peter.maydell@linaro.org
2016-01-22 15:09:21 +00:00
Peter Maydell
17c8a21978 Error reporting patches for 2016-01-13

Merge remote-tracking branch 'remotes/armbru/tags/pull-error-2016-01-13' into staging

Error reporting patches for 2016-01-13

# gpg: Signature made Wed 13 Jan 2016 14:21:48 GMT using RSA key ID EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>"

* remotes/armbru/tags/pull-error-2016-01-13: (41 commits)
  checkpatch: Detect newlines in error_report and other error functions
  error: Consistently name Error * objects err, and not errp
  s390/sclp: Simplify control flow in sclp_realize()
  hw/s390x: Rename local variables Error *l_err to just err
  error: Clean up errors with embedded newlines (again)
  vhdx: Fix "log that needs to be replayed" error message
  pci-assign: Clean up "Failed to assign" error messages
  vmdk: Clean up "Invalid extent lines" error message
  vmdk: Clean up control flow in vmdk_parse_extents() a bit
  error: Strip trailing '\n' from error string arguments (again)
  qemu-io qemu-nbd: Use error_report() etc. instead of fprintf()
  migration: Use error_reportf_err() instead of monitor_printf()
  spapr: Use error_reportf_err()
  error: Use error_prepend() where it makes obvious sense
  error: Use error_reportf_err() where it makes obvious sense
  error: Don't decorate original error message when adding to it
  error: New error_prepend(), error_reportf_err()
  test-throttle: Simplify qemu_init_main_loop() error handling
  qemu-nbd: Clean up "Failed to load snapshot" error message
  block: Clean up "Could not create temporary overlay" error message
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-01-14 13:07:38 +00:00
Markus Armbruster
9af9e0fed7 error: Strip trailing '\n' from error string arguments (again)
Commits 6daf194d, be62a2eb and 312fd5f got rid of a bunch, but they
keep coming back.  Tracked down with the Coccinelle semantic patch
from commit 312fd5f.

Cc: Fam Zheng <famz@redhat.com>
Cc: Peter Crosthwaite <crosthwaitepeter@gmail.com>
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Changchun Ouyang <changchun.ouyang@intel.com>
Cc: zhanghailiang <zhang.zhanghailiang@huawei.com>
Cc: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1450452927-8346-17-git-send-email-armbru@redhat.com>
2016-01-13 15:16:18 +01:00
Dr. David Alan Gilbert
c1bc66263c multithread decompression: Avoid one copy
qemu_get_buffer does a copy; we can avoid the memcpy and then
remove the extra buffer.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1450266458-3178-7-git-send-email-dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-01-13 16:03:01 +05:30
Dr. David Alan Gilbert
063e760a5f Use qemu_get_buffer_in_place for xbzrle data
Avoid a data copy (if we're lucky) in the xbzrle code.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1450266458-3178-6-git-send-email-dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-01-13 16:02:37 +05:30
Dr. David Alan Gilbert
4addcd4fdc Migration: Emit event at start of pass
Emit an event each time we sync the dirty bitmap on the source;
this helps libvirt use postcopy by giving it a kick when it
might be a good idea to start the postcopy.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1450266458-3178-5-git-send-email-dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2016-01-13 16:02:13 +05:30
Dr. David Alan Gilbert
3fd3c4b37c Fix xbzrle vs last_sent_block update
My fix (84e7b80a) replaced the last_sent_block update that I'd
removed earlier; however it was too aggressive in the xbzrle case.

save_xbzrle_page might return '0' to mean that the page didn't
need sending since it was the same as the last sent version;
in this case we can't update 'last_sent_block' since we didn't
actually send it.

Symptom: 'Illegal RAM offset 1018000' as we try and send a page
        to the wrong RAMBlock;  potentially that could be a data
        corruption if you were really unlucky.

Fixes: 84e7b80a05

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-id: 1449765106-6528-1-git-send-email-dgilbert@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-12-11 12:51:27 +00:00
Dr. David Alan Gilbert
84e7b80a05 Set last_sent_block
In a82d593b61 I accidentally removed the setting of
last_sent_block,  put it back.

Symptoms:
  Multithreaded compression only uses one thread.
  Migration is a bit less efficient since it won't use 'cont' flags.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fixes: a82d593b61
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-19 11:49:53 +01:00
Dr. David Alan Gilbert
a3b6ff6d0a Postcopy: Fix TP!=HP zero case
Where the target page size is different from the host page
we special case it, but I messed up on the zero case check.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-12 17:52:29 +01:00
Juan Quintela
9458ad6b44 migration: print ram_addr_t as RAM_ADDR_FMT not %zx
Not all the world is 64 bits (yet).

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2015-11-12 17:52:29 +01:00
Dr. David Alan Gilbert
99e314ebca Host page!=target page: Cleanup bitmaps
Prior to the start of postcopy, ensure that everything that will
be transferred later is a whole host-page in size.

This is accomplished by discarding partially transferred host pages
and marking any that are partially dirty as fully dirty.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:28 +01:00
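The "mark partially dirty host pages fully dirty" step, as a stand-alone
sketch over a plain bool array (QEMU works on the real migration bitmap
with its actual host and target page sizes):

#include <stdbool.h>
#include <stdio.h>

#define TARGET_PAGES_PER_HOST_PAGE 4   /* e.g. 16K host page / 4K target page */

/* If any target page inside a host page is dirty, mark the whole host page. */
static void round_up_dirty(bool *dirty, unsigned long nr_target_pages)
{
    for (unsigned long run = 0; run < nr_target_pages;
         run += TARGET_PAGES_PER_HOST_PAGE) {
        bool any = false;
        for (int i = 0; i < TARGET_PAGES_PER_HOST_PAGE; i++) {
            any |= dirty[run + i];
        }
        if (any) {
            for (int i = 0; i < TARGET_PAGES_PER_HOST_PAGE; i++) {
                dirty[run + i] = true;
            }
        }
    }
}

int main(void)
{
    bool dirty[8] = { false, true, false, false,    /* host page 0: partly dirty */
                      false, false, false, false }; /* host page 1: clean */

    round_up_dirty(dirty, 8);
    for (int i = 0; i < 8; i++) {
        printf("%d", dirty[i]);
    }
    printf("\n");                /* prints 11110000 */
    return 0;
}
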
Dr. David Alan Gilbert
663e6c1df8 Don't sync dirty bitmaps in postcopy
Once we're in postcopy the source processors are stopped and memory
shouldn't change any more, so there's no need to look at the dirty
map.

There are two notes to this:
  1) If we do resync and a page had changed then the page would get
     sent again, which the destination wouldn't allow (since it might
     have also modified the page)
  2) Before disabling this I'd seen very rare cases where a page had been
     marked dirty although the memory contents were apparently identical

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:28 +01:00
Dr. David Alan Gilbert
c53b7ddc61 postcopy: Check order of received target pages
Ensure that target pages received within a host page are in order.
This shouldn't trigger, but in the cases where the sender goes
wrong and sends stuff out of order it produces a corruption that's
really nasty to debug.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
a71808772a Postcopy: Use helpers to map pages during migration
In postcopy, the destination guest is running at the same time
as it's receiving pages; as we receive new pages we must put
them into the guest's address space atomically to avoid a running
CPU accessing a partially written page.

Use the helpers in postcopy-ram.c to map these pages.

qemu_get_buffer_in_place is used to avoid a copy out of qemu_file
in the case that postcopy is going to do a copy anyway.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
a82d593b61 Page request: Consume pages off the post-copy queue
When transmitting RAM pages, consume pages that have been queued by
MIG_RPCOMM_REQPAGE commands and send them ahead of normal page scanning.

Note:
  a) After a queued page the linear walk carries on from after the
unqueued page; there is a reasonable chance that the destination
was about to ask for other closeby pages anyway.

  b) We have to be careful of any assumptions that the page walking
code makes, in particular it does some short cuts on its first linear
walk that break as soon as we do a queued page.

  c) We have to be careful to not break up host-page size chunks, since
this makes it harder to place the pages on the destination.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
6c595cdee1 Page request: Process incoming page request
On receiving MIG_RPCOMM_REQ_PAGES look up the address and
queue the page.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
1caddf8a81 postcopy: Incoming initialisation
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
e0b266f01d migration_completion: Take current state
Soon we'll be in either ACTIVE or POSTCOPY_ACTIVE when we
complete migration, and we need to know which we expect to be
in to change state safely.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
f3f491fcd6 Postcopy: Maintain unsentmap
Maintain an 'unsentmap' of pages that have yet to be sent.
This is used in the following patches to discard some set of
the pages already sent as we enter postcopy mode.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
763c906b0e Add qemu_savevm_state_complete_postcopy
Add qemu_savevm_state_complete_postcopy to complement
qemu_savevm_state_complete_precopy together with a new
save_live_complete_postcopy method on devices.

The save_live_complete_precopy method is called on
all devices during a precopy migration, and all non-postcopy
devices during a postcopy migration at the transition.

The save_live_complete_postcopy method is called at
the end of postcopy for all postcopiable devices.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:27 +01:00
Dr. David Alan Gilbert
c31b098f64 Modify save_live_pending for postcopy
Modify save_live_pending to return separate postcopiable and
non-postcopiable counts.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 15:00:26 +01:00
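A sketch of the split interface (names assumed): pending work is reported
as two numbers so the core can tell what must still be sent before
switching to postcopy versus what can be finished afterwards.

#include <inttypes.h>
#include <stdio.h>

static void ram_save_pending_sketch(uint64_t remaining_dirty,
                                    uint64_t *non_postcopiable,
                                    uint64_t *postcopiable)
{
    /* RAM can still be sent after the switch, so it all counts as
     * postcopiable; state that must go before the switch would be
     * added to the other counter. */
    *postcopiable = remaining_dirty;
    *non_postcopiable = 0;
}

int main(void)
{
    uint64_t pre = 0, post = 0;

    ram_save_pending_sketch(12345, &pre, &post);
    printf("must send before postcopy: %" PRIu64 ", may send after: %" PRIu64 "\n",
           pre, post);
    return 0;
}
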
Dr. David Alan Gilbert
a3e06c3d13 Rename save_live_complete to save_live_complete_precopy
In postcopy we're going to need to perform the complete phase
for postcopiable devices at a different point, start out by
renaming all of the 'complete's to make the difference obvious.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 14:51:49 +01:00
Dr. David Alan Gilbert
a776aa15a7 ram_load: Factor out host_from_stream_offset call and check
The main RAM load loop has a call to host_from_stream_offset for
each page type that actually loads data with the same test;
factor it out before the switch.

The host = NULL is to silence a bogus gcc warning about an
uninitialised variable in the RAM_SAVE_COMPRESS_PAGE case; gcc
doesn't seem to realise that host is always initialised by the if at
the top in the cases the switch takes.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 14:51:49 +01:00
Dr. David Alan Gilbert
4f2e425267 ram_debug_dump_bitmap: Dump a migration bitmap as text
Useful for debugging the migration bitmap and other bitmaps
of the same format (including the sentmap in postcopy).

The bitmap is printed to stderr.
Lines that are all the expected value are excluded so the output
can be quite compact for many bitmaps.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 14:51:48 +01:00
Dr. David Alan Gilbert
e3dd74934f qemu_ram_block_by_name
Add a function to find a RAMBlock by name; use it in two
of the places that already open code that loop; we've
got another use later in postcopy.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-10 14:51:48 +01:00
Liang Li
6ad2a215e7 migration: code clean up
Just clean up code, no behavior change.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-04 13:40:13 +01:00
Liang Li
d1a8548c10 migration: rename cancel to cleanup in SaveVMHandles
'cleanup' seems more appropriate than 'cancel'.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-04 13:40:13 +01:00
Liang Li
94f5a43704 migration: defer migration_end & blk_mig_cleanup
Because of kvm patch 3ea3b7fa9af067982f34b, which introduces a lazy
mechanism for collapsing small sptes into large sptes, migration_end()
is now a time-consuming operation because it calls
memory_global_dirty_log_stop(), which triggers the dropping of small
sptes and takes dozens of milliseconds. So calling migration_end()
before all the vmstate data has been transferred to the destination
will prolong VM downtime. This operation should be deferred until after
all the data has been transferred to the destination.

blk_mig_cleanup() can be deferred too.

For a VM with 8G RAM, this patch can reduce the VM downtime about 30 ms.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-04 13:40:13 +01:00
Denis V. Lunev
60be634079 migration: fix deadlock
Release the qemu global mutex before calling synchronize_rcu().
synchronize_rcu() waits for all readers to finish their critical
sections. There is at least one critical section in which we try
to get the QGM (the critical section is in address_space_rw(), and
prepare_mmio_access() is trying to acquire the QGM).

Both functions (migration_end() and migration_bitmap_extend())
are called from the main thread, which is holding the QGM.

Thus there is a race condition that ends up in deadlock:
main thread     working thread
Lock QGM                |
|             Call KVM_EXIT_IO handler
|                       |
|        Open rcu reader's critical section
Migration cleanup bh    |
|                       |
synchronize_rcu() is    |
waiting for readers     |
|            prepare_mmio_access() is waiting for QGM
  \                   /
         deadlock

The patch changes bitmap freeing from direct g_free after synchronize_rcu
to free inside call_rcu.

Signed-off-by: Denis V. Lunev <den@openvz.org>
Reported-by: Igor Redko <redkoi@virtuozzo.com>
Tested-by: Igor Redko <redkoi@virtuozzo.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

CC: Anna Melekhova <annam@virtuozzo.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Wen Congyang <wency@cn.fujitsu.com>
2015-10-15 08:14:13 +02:00
Jason J. Herne
070afca258 migration: Dynamic cpu throttling for auto-converge
Remove traditional auto-converge static 30ms throttling code and replace it
with a dynamic throttling algorithm.

Additionally, be more aggressive when deciding when to start throttling.
Previously we waited until four unproductive memory passes. Now we begin
throttling after only two unproductive memory passes. Four seemed quite
arbitrary and only waiting for two passes allows us to complete the migration
faster.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
2015-09-30 09:42:04 +02:00
Dr. David Alan Gilbert
b9e6092814 ram_find_and_save_block: Split out the finding
Split out the finding of the dirty page and all the wrap detection
into a separate function since it was getting a bit hairy.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1443018431-11170-3-git-send-email-dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>

[Fix comment -- Amit]
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2015-09-29 11:38:29 +05:30
Dr. David Alan Gilbert
b8fb8cb748 Move dirty page search state into separate structure
Pull the search state for one iteration of the dirty page
search into a structure.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1443018431-11170-2-git-send-email-dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2015-09-29 11:37:07 +05:30
Dr. David Alan Gilbert
2f68e39956 migration/ram.c: Use RAMBlock rather than MemoryRegion
RAM migration mainly works on RAMBlocks but in a few places
uses data from MemoryRegions to access the same information that's
already held in RAMBlocks; clean it up just to avoid the
MemoryRegion use.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439463094-5394-2-git-send-email-dgilbert@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
2015-09-29 11:33:02 +05:30
Liang Li
9f5f380b54 migration: reduce the count of strlen call
'strlen' is called three times in 'save_page_header'; it's
inefficient.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-07-15 12:21:28 +02:00
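The fix pattern in isolation: compute strlen() once, keep the result, and
reuse it for both the length byte and the copy (the buffer layout here is
illustrative, not the exact page-header format).

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static size_t put_idstr_sketch(uint8_t *out, const char *idstr)
{
    size_t len = strlen(idstr);  /* computed once */

    out[0] = (uint8_t)len;       /* reused here ... */
    memcpy(out + 1, idstr, len); /* ... and here, instead of calling strlen again */
    return 1 + len;
}

int main(void)
{
    uint8_t buf[64];
    size_t n = put_idstr_sketch(buf, "pc.ram");

    printf("wrote %zu bytes\n", n);
    return 0;
}
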
Paolo Bonzini
d09a6fde15 migration: fix RCU deadlock
migration_end calls synchronize_rcu() within a critical section.
That causes a deadlock; move the call after rcu_read_unlock().

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-07-09 08:47:58 +02:00
Li Zhijian
dd63169766 migration: extend migration_bitmap
Previously, if we hotplug a device (e.g. device_add e1000) while
migration is in progress on the source side, qemu adds a new RAM
block but migration_bitmap is not extended.
In this case, migration_bitmap will overflow and cause qemu to abort
unexpectedly.

Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-07-07 14:54:56 +02:00
Li Zhijian
2ff64038a5 migration: protect migration_bitmap
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-07-07 14:54:56 +02:00
Dr. David Alan Gilbert
632e3a5cd8 Rework ram_control_load_hook to hook during block load
We need the names of RAMBlocks as they're loaded for RDMA, so reuse a
slightly modified ram_control_load_hook:
  a) Pass a 'data' parameter to use for the name in the block-reg
     case
  b) Only some hook types now require the presence of a hook function.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-07-07 14:54:48 +02:00
zhanghailiang
5ee6926582 arch_init: Clean up the duplicate variable 'len' defining in ram_load()
There are two places that define a 'len' variable. It's OK for compiling,
but it makes the code harder to read.

Remove the local one defined inside the 'while' loop.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-06-12 06:42:34 +02:00
Juan Quintela
7205c9ec52 migration: reduce include files
To make the changes easier, I kept almost all include files in the
copy.  Now, in this patch, I remove the unnecessary ones.  This compiles
on Linux x64 with all architectures configured, and cross-compiles for
Windows 32 and 64 bit.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
2015-06-12 06:42:34 +02:00
Juan Quintela
76cc7b587f migration: Add myself to the copyright list of both files
If anyone feels like adding themselves to the list, just send me a patch.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
2015-06-12 06:42:34 +02:00
Juan Quintela
56e93d26b8 migration: move ram stuff to migration/ram
For historic reasons, RAM migration has lived in arch_init.c.  Just
split it into migration/ram.c, the same as happened with block.c.

This is only code movement, with no other changes.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
2015-06-12 06:40:59 +02:00