It always has that type; just change it.
We will remove the buffered file later in the migration thread series.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Simplify the logic for pushing data from the buffer to the output
pipe/socket. This also more closely matches how the migration thread
will operate.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
This patch creates a migration bitmap, which is periodically kept in
sync with the qemu bitmap. A separate copy of the dirty bitmap for
migration limits the amount of concurrent access to the qemu bitmap
from the iothread and the migration thread (which requires taking the
big lock).
We use the qemu bitmap type. We have to "undo" the dirty_pages
counting optimization on the general dirty bitmap and do the counting
optimization with the migration-local bitmap.
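A minimal sketch of the shape of this, with assumed names (not the
exact code):

    /* Migration keeps its own bitmap, one bit per guest page, and counts
     * dirty pages locally instead of in the generic dirty bitmap code. */
    static unsigned long *migration_bitmap;
    static uint64_t migration_dirty_pages;

    static void migration_bitmap_set_dirty(ram_addr_t addr)
    {
        unsigned long nr = addr >> TARGET_PAGE_BITS;

        if (!test_and_set_bit(nr, migration_bitmap)) {
            migration_dirty_pages++;   /* the counting optimization, now local */
        }
    }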
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Helper that we use each time we need to synchronize the migration
bitmap with the other dirty bitmaps.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It just tests whether the dirty bit is set, and clears it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
It just marks a region of memory as dirty.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Like add2, do operand ordering, constant folding, and dead operand
elimination. The latter happens for about 15% of all mulu2 operations
during an x86_64 BIOS boot.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When an x86_64 guest is not in 64-bit mode, the high part of the
64-bit add is dead. When the host is 32-bit, we can simplify this to
32-bit arithmetic.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
We can re-use these for implementing double-word folding.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This saves a whole lot of repetitive code sequences.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reduces code duplication and prefers
  movcond d, c1, c2, const, s
to
  movcond d, c1, c2, s, const
It also prefers
  add r, r, c
over
  add r, c, r
when both inputs are known constants. This doesn't matter for true add, as
we will fully constant fold that. But it matters for a follow-on patch using
this routine for add2, which may not be fully foldable.
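A rough sketch of what the extracted routine looks like (the constant
test is an assumed helper, not the exact code):

    /* Swap a commutative op's inputs when that yields the preferred
     * ordering; return true if a swap was performed. */
    static bool swap_commutative(TCGArg dest, TCGArg *p1, TCGArg *p2)
    {
        TCGArg a1 = *p1, a2 = *p2;
        int sum = 0;

        sum += arg_is_const(a1);   /* assumed helper: arg is a known constant */
        sum -= arg_is_const(a2);

        /* Prefer the constant in the second operand; on a tie, prefer the
         * form where the first source matches the destination. */
        if (sum > 0 || (sum == 0 && dest == a2)) {
            *p1 = a2;
            *p2 = a1;
            return true;
        }
        return false;
    }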
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Destroying a memory region is illegal within a transaction, as until
the transaction is committed, the memory core may hold references to
the region. Add an assert to check for violations of this rule.
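The check itself is small; roughly (simplified):

    void memory_region_destroy(MemoryRegion *mr)
    {
        /* Destroying a region while a transaction is open would leave the
         * memory core dispatching to a dead region until commit time. */
        assert(memory_region_transaction_depth == 0);
        /* ... existing teardown ... */
    }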
Signed-off-by: Avi Kivity <avi@redhat.com>
Calling memory_region_destroy() within a transaction is illegal, since
the memory API is allowed to continue to dispatch to a region until the
transaction commits. The 440FX does this, however, when managing the
PAM registers.
This bug is benign, since the regions are all aliases (which the memory
core tends to throw away anyway), and since we don't do concurrent
dispatch yet, but instead of relying on that, tighten ship ahead of the
coming concurrency storm.
Fix by having a predefined set of regions, of which exactly one is
enabled at any time.
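A sketch of the approach (structure and field names assumed):

    typedef struct PAMMemoryRegion {
        MemoryRegion alias[4];   /* one alias per possible PAM mapping */
        unsigned current;        /* which alias is currently enabled */
    } PAMMemoryRegion;

    /* Switching PAM modes only toggles enablement; no region is ever
     * created or destroyed inside the transaction. */
    static void pam_update(PAMMemoryRegion *pam, unsigned mode)
    {
        memory_region_set_enabled(&pam->alias[pam->current], false);
        memory_region_set_enabled(&pam->alias[mode], true);
        pam->current = mode;
    }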
Signed-off-by: Avi Kivity <avi@redhat.com>
Our memory API MMIO regions know the concept of device endianness. This
is used to automatically swap endianness between devices and host CPU,
depending on whether buses in between would swizzle the bits.
The ioeventfd value comparison does not adhere to that semantic though.
Probably because nobody has been running ioeventfd on a BE platform and
the only device implementing ioeventfd right now is LE (PCI) based.
So add swizzling to ioeventfd registration / deletion to make the rest
of the code as consistent as possible.
Thanks a lot to Michael Tsirkin for pointing me in the right direction.
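Roughly, the datamatch value now gets the same treatment a normal MMIO
write would (simplified; the endianness check is an assumed helper):

    static uint64_t adjust_ioeventfd_endianness(MemoryRegion *mr,
                                                uint64_t data, unsigned size)
    {
        /* If the device's declared endianness differs from the host's,
         * byte-swap the value used for comparison, just as a store of that
         * size would be swapped on the regular dispatch path. */
        if (memory_region_wrong_endianness(mr)) {   /* assumed helper */
            switch (size) {
            case 2: data = bswap16(data); break;
            case 4: data = bswap32(data); break;
            case 8: data = bswap64(data); break;
            }
        }
        return data;
    }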
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Needed for changing mips_vpe_sleep() argument type to MIPSCPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Needed for moving halted field to CPUState.
The variable name "c" is retained for MIPSCPU to leave "cpu" for CPUState.
Also change return type to bool while at it.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Needed for changing mips_vpe_is_wfi() argument type to MIPSCPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Free the variable name "other_cpu" for later use for MIPSCPU.
Fix off-by-one indentation while at it.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Note that in the general reg=reg,reg case we're restricted
to 16-bit insertions. This makes it easy to allow "any"
constant as input, as post-truncation it will fit into the
constant load insn for which we have room in the bundle.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
It is possible to slightly optimize the TLB access code by replacing
the movi + and instructions with a deposit instruction.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Remove suboptimal register shifting in the qemu_ld/st ops, introduced
at the time of CONFIG_TCG_PASS_AREG0.
As mem_idx is now loaded in register R58/R59 for the slow path, we have
to make sure to do it last, so as not to add additional register
constraints.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Implement movcond_i32/64 on ia64 hosts. It is not possible to have
immediate compare arguments without adding a new bundle, but it is
possible to have 22-bit immediate value arguments.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Use the stack instead of the temp_buf array in CPUState for TCG temps.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Implement movcond_i32 for ARM, as the sequence
  mov dst, v2 (implicitly done by the tcg common code)
  cmp c1, c2
  movCC dst, v1
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The code to emit either an immediate cmp or a register cmp insn is
duplicated in several places; factor it out into its own function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Many listeners don't need to respond to all MemoryListener callbacks;
provide suitable no-op defaults instead.
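One way this plays out on the dispatch side (simplified sketch, not the
exact macro):

    /* Callbacks a listener leaves NULL are simply skipped, so listeners
     * no longer need empty stub functions for every hook. */
    MemoryListener *listener;

    QTAILQ_FOREACH(listener, &memory_listeners, link) {
        if (listener->region_add) {
            listener->region_add(listener, section);
        }
    }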
Signed-off-by: Avi Kivity <avi@redhat.com>
Instead of embedding knowledge of the memory and I/O address spaces in the
memory core, maintain a list of all address spaces. This list will later
be extended dynamically for other bus masters.
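In rough terms (names illustrative):

    /* All address spaces live on one global list; creating one appends to
     * it instead of special-casing the memory and I/O address spaces. */
    static QTAILQ_HEAD(, AddressSpace) address_spaces =
        QTAILQ_HEAD_INITIALIZER(address_spaces);

    void address_space_init(AddressSpace *as, MemoryRegion *root)
    {
        as->root = root;
        QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
    }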
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>