Make the results of blk_mig_save_dirty_block() and
mig_save_device_dirty() consistent.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This means we no longer need to go through qemu_file to get the
errors. Adjust all callers.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
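A minimal sketch of the convention these two changes establish (the
names here, such as save_block(), are hypothetical stand-ins, not the
actual QEMU functions): the save function reports failure through a
negative errno return value, so the caller checks the result directly
instead of querying the file object afterwards.

    /* Hypothetical sketch: failure travels in the return value. */
    #include <errno.h>
    #include <stdio.h>

    static int save_block(int fd)        /* stand-in for the real saver */
    {
        if (fd < 0) {
            return -EINVAL;              /* error: negative errno */
        }
        return 1;                        /* 1: finished, 0: more to do */
    }

    int main(void)
    {
        int ret = save_block(-1);

        if (ret < 0) {                   /* the caller checks the result */
            fprintf(stderr, "save failed: %d\n", ret);
            return 1;                    /* no need to ask the QEMUFile */
        }
        return 0;
    }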
Move the error check to the beginning of the callers. Once this is
done, qemu_file_set_if_error() is no longer used, so remove it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
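A rough illustration of the early-check pattern, again with stand-in
names rather than the real QEMU ones: callers look at the stream's
recorded error once, at the top, and bail out, so nothing downstream
needs qemu_file_set_if_error().

    /* Stub types and names, for illustration only. */
    typedef struct QEMUFileStub {
        int last_error;              /* first error seen on this stream */
    } QEMUFileStub;

    static int get_error(const QEMUFileStub *f)
    {
        return f->last_error;
    }

    static int save_state(QEMUFileStub *f)
    {
        int ret = get_error(f);      /* check once, at the beginning */

        if (ret) {
            return ret;              /* stream already failed: bail out */
        }
        /* ... do the actual writes ... */
        return 0;
    }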
It was setting last_error directly in one place and through the
helper in the other.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It was used only once, and consisted of a single if. This change
makes error handling saner.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Adjust all the callers. Setting last_error is moved from inside
qemu_fflush() to the callers.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
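A hedged sketch of the new division of labor (stub names, not
QEMU's): the flush just returns a negative errno, and each caller
records it.

    typedef struct QEMUFileStub {
        int last_error;
    } QEMUFileStub;

    static void set_error(QEMUFileStub *f, int ret)
    {
        if (f->last_error == 0) {
            f->last_error = ret;     /* keep only the first error */
        }
    }

    static int flush(QEMUFileStub *f)
    {
        /* ... push the buffer out, return 0 or -errno ... */
        return 0;
    }

    static void caller(QEMUFileStub *f)
    {
        int ret = flush(f);

        if (ret < 0) {
            set_error(f, ret);       /* moved here from inside flush() */
        }
    }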
qemu_fseek() is known to be wrong and will be removed in the next
commit. This code should never have been used (the value has been
MAC_TABLE_ENTRIES since 2009).
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
We only use it once, just remove the callback indirection.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
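The shape of the change, sketched with hypothetical names: a function
pointer that had exactly one implementation and one caller becomes a
direct call.

    /* Before: one implementation, one caller, reached through a
     * function pointer. */
    typedef struct Ops {
        int (*close)(void *opaque);
    } Ops;

    static int socket_close(void *opaque)
    {
        /* ... close the underlying fd ... */
        return 0;
    }

    static int do_close_before(Ops *ops, void *opaque)
    {
        return ops->close(opaque);   /* indirection buys nothing here */
    }

    /* After: the indirection is gone, just call it. */
    static int do_close_after(void *opaque)
    {
        return socket_close(opaque);
    }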
We only used it once, just remove the callback indirection.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It always has that type, so just change it.
We will remove buffered file later in the migration thread series.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Simplify the logic for pushing data from the buffer to the output
pipe/socket. This also matches more closely what will be the
operation of the migration thread.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
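One way the flush loop can look under these assumptions (hypothetical
names; try_write() stands in for a non-blocking write to the
pipe/socket): keep writing from the front of the buffer until it is
empty or the output would block, then compact the remainder.

    #include <string.h>
    #include <sys/types.h>

    typedef struct Buffer {
        unsigned char data[4096];
        size_t used;                   /* bytes waiting to be pushed */
    } Buffer;

    /* Stand-in for the non-blocking write; a negative return means
     * "would block, try again later". */
    static ssize_t try_write(const unsigned char *p, size_t len)
    {
        (void)p;
        return (ssize_t)len;           /* pretend everything went out */
    }

    static void flush_buffer(Buffer *b)
    {
        size_t offset = 0;

        while (offset < b->used) {
            ssize_t n = try_write(b->data + offset, b->used - offset);

            if (n < 0) {
                break;                 /* would block: stop for now */
            }
            offset += (size_t)n;
        }
        /* move the unwritten tail to the front of the buffer */
        memmove(b->data, b->data + offset, b->used - offset);
        b->used -= offset;
    }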
This patch creates a migration bitmap, which is periodically kept in
sync with the qemu bitmap. A separate copy of the dirty bitmap for
migration limits the amount of concurrent access to the qemu bitmap
from the iothread and the migration thread (access which requires
taking the big lock).
We use the qemu bitmap type. We have to "undo" the dirty_pages
counting optimization on the general dirty bitmap and do the counting
optimization with the migration local bitmap.
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
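A rough sketch of the idea, with illustrative names rather than the
actual QEMU code: migration keeps its own bitmap plus its own
dirty-page counter, so the hot counting work happens on the local
copy rather than on the global qemu bitmap.

    #include <limits.h>
    #include <stdint.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long migration_bitmap[1024]; /* 1 bit per page */
    static uint64_t migration_dirty_pages;       /* counted here, not on
                                                    the global bitmap */

    static void migration_bitmap_set(unsigned long page)
    {
        unsigned long *word = &migration_bitmap[page / BITS_PER_LONG];
        unsigned long mask = 1UL << (page % BITS_PER_LONG);

        if (!(*word & mask)) {
            *word |= mask;
            migration_dirty_pages++; /* count newly dirtied pages only */
        }
    }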
Add a helper that we use each time we need to synchronize the
migration bitmap with the other dirty bitmaps.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It just tests if the dirty bit is set, and clears it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
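Continuing the bitmap sketch above, a test-and-clear helper in the
same spirit (illustrative, not the actual QEMU function):

    static int migration_bitmap_test_and_reset(unsigned long page)
    {
        unsigned long *word = &migration_bitmap[page / BITS_PER_LONG];
        unsigned long mask = 1UL << (page % BITS_PER_LONG);
        int was_dirty = (*word & mask) != 0;

        if (was_dirty) {
            *word &= ~mask;          /* clear the bit... */
            migration_dirty_pages--; /* ...and keep the count in step */
        }
        return was_dirty;
    }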
It just marks a region of memory as dirty.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
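And a companion sketch for marking a byte range dirty, built on
migration_bitmap_set() from the sketch above (the 4 KiB page size is
an assumption for illustration):

    #define TARGET_PAGE_BITS 12      /* assumption: 4 KiB pages */

    static void migration_bitmap_set_range(unsigned long start,
                                           unsigned long length)
    {
        unsigned long page = start >> TARGET_PAGE_BITS;
        unsigned long end = (start + length - 1) >> TARGET_PAGE_BITS;

        for (; page <= end; page++) {
            migration_bitmap_set(page);
        }
    }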
Like add2, do operand ordering, constant folding, and dead operand
elimination. The latter happens for about 15% of all mulu2 operations
during an x86_64 BIOS boot.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
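A simplified model of the dead-output case (this is not TCG's real
representation): if liveness analysis says the high half of the
product is never read, mulu2 degrades to a plain single-word
multiply; otherwise, with constant inputs, both halves fold at
translation time.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Mulu2Op {
        uint32_t a, b;       /* both inputs known constants here */
        bool hi_is_dead;     /* liveness result for the high output */
    } Mulu2Op;

    static void optimize_mulu2(const Mulu2Op *op)
    {
        if (op->hi_is_dead) {
            /* dead operand elimination: a plain mul is enough */
            printf("mul  lo, %u, %u\n", (unsigned)op->a, (unsigned)op->b);
        } else {
            /* constant folding: compute both halves now */
            uint64_t r = (uint64_t)op->a * op->b;

            printf("movi lo, %u\n", (unsigned)(uint32_t)r);
            printf("movi hi, %u\n", (unsigned)(r >> 32));
        }
    }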
When an x86_64 guest is not in 64-bit mode, the high part of the
64-bit add is dead. When the host is 32-bit, we can simplify to
32-bit arithmetic.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We can re-use these for implementing double-word folding.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This saves a whole lot of repetitive code sequences.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reduces code duplication and prefers
movcond d, c1, c2, const, s
to
movcond d, c1, c2, s, const
It also prefers
add r, r, c
over
add r, c, r
when both inputs are known constants. This doesn't matter for true add, as
we will fully constant fold that. But it matters for a follow-on patch using
this routine for add2 which may not be fully foldable.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
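The ordering rule, sketched outside of TCG's actual data structures:
when exactly one input of a commutative operation is a constant, swap
it into the second slot, which yields the preferences listed above.

    #include <stdbool.h>

    typedef struct Arg {
        bool is_const;
        long value;
    } Arg;

    /* Returns true if the operands were swapped, so the caller can
     * adjust anything order-sensitive (e.g. a comparison condition). */
    static bool swap_commutative(Arg *a1, Arg *a2)
    {
        if (a1->is_const && !a2->is_const) {
            Arg tmp = *a1;           /* put the constant second: */
            *a1 = *a2;               /* "add r, r, c", not */
            *a2 = tmp;               /* "add r, c, r" */
            return true;
        }
        return false;
    }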
Destroying a memory region is illegal within a transaction, as until
the transaction is committed, the memory core may hold references to
the region. Add an assert to check for violations of this rule.
Signed-off-by: Avi Kivity <avi@redhat.com>
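A minimal sketch of the mechanism (illustrative names, not the real
memory core): transactions bump a nesting counter, and destroy
asserts that the counter is zero.

    #include <assert.h>

    static int transaction_depth;

    static void transaction_begin(void)  { transaction_depth++; }
    static void transaction_commit(void) { transaction_depth--; }

    static void region_destroy(void)
    {
        /* Illegal inside a transaction: the memory core may still
         * hold references to the region until the commit. */
        assert(transaction_depth == 0);
        /* ... actually tear the region down ... */
    }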
Calling memory_region_destroy() within a transaction is illegal, since
the memory API is allowed to continue to dispatch to a region until the
transaction commits. 440fx does that however when managing PAM registers.
This bug is benign, since the regions are all aliases (which the memory
core tends to throw away anyway), and since we don't do concurrent
dispatch yet; but instead of relying on that, tighten ship ahead of the
coming concurrency storm.
Fix by having a predefined set of regions, of which one will be enabled at
any time.
Signed-off-by: Avi Kivity <avi@redhat.com>
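A hedged sketch of the fix's shape (greatly simplified; not the real
440fx/PAM code): all the alias regions exist up front, and a PAM
register write merely flips which one is enabled, so nothing is ever
destroyed inside the transaction.

    #include <stdbool.h>

    enum { PAM_MODES = 4 };          /* assumed number of PAM mappings */

    typedef struct PamRegion {
        bool enabled;                /* stand-in for a memory region */
    } PamRegion;

    static PamRegion pam[PAM_MODES]; /* created once, never destroyed */

    static void pam_update(unsigned mode)
    {
        unsigned i;

        /* runs between transaction begin and commit; only the enabled
         * flag changes, no region is destroyed */
        for (i = 0; i < PAM_MODES; i++) {
            pam[i].enabled = (i == mode);
        }
    }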