Avoid splitting the state of outgoing migration, more or less arbitrarily,
between two data structures. QEMUFileBuffered is only used during
migration anyway.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
It could only return 0 if we only found dirty xbzrle pages that hadn't
changed (i.e. they were written with the same content). We don't care
about that case; it is the same as nothing being dirty.
So now the function returns how much it has written, nothing else.
Adjust callers.
And we also made ram_save_iterate() return the number of transferred
bytes, not the number of transferred pages.
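A minimal sketch of the new contract (simplified; the real functions take
more arguments and loop conditions than shown here):
    static int ram_save_iterate(QEMUFile *f)
    {
        int total = 0;
        int bytes_sent;
        /* ram_save_block() now returns how many bytes it wrote; 0 simply
         * means nothing (new) was sent, it is not a special case anymore. */
        while ((bytes_sent = ram_save_block(f)) > 0) {
            total += bytes_sent;
        }
        return total;    /* transferred bytes, not pages */
    }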
Signed-off-by: Juan Quintela <quintela@redhat.com>
Instead of testing each page individually, we search for the next
dirty page with a bitmap operation. We have to reorganize the code to
move from a "for" loop to a while(dirty) loop.
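A rough sketch of the shape of the change (bitmap helpers as in bitops.h;
send_page() and migration_bitmap are illustrative names):
    /* Before: test every single page. */
    for (page = 0; page < nr_pages; page++) {
        if (test_bit(page, migration_bitmap)) {
            send_page(page);
        }
    }
    /* After: jump straight to the next dirty page. */
    page = find_next_bit(migration_bitmap, nr_pages, 0);
    while (page < nr_pages) {
        send_page(page);
        page = find_next_bit(migration_bitmap, nr_pages, page + 1);
    }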
Signed-off-by: Juan Quintela <quintela@redhat.com>
This avoids having to do two walks over the dirty bitmap: one reading
the dirty bits and another cleaning them.
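For illustration (helper and variable names are hypothetical, not the
exact patch), the test and the clear can be folded into a single operation:
    /* One pass: test and clear the dirty bit together, instead of one
     * walk that reads the bits and a second walk that resets them. */
    if (test_and_clear_bit(page, migration_bitmap)) {
        send_page(page);
    }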
Signed-off-by: Juan Quintela <quintela@redhat.com>
This is the last block from which we have sent data.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The "magic" divisions by 10 are there because of the value of BUFFER_DELAY.
Introduce a constant to explain them better.
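A sketch of the intent (constant names are illustrative): the buffer
timer fires every BUFFER_DELAY milliseconds, so a per-second limit has
to be divided by 1000 / BUFFER_DELAY = 10 to get a per-tick limit:
    #define BUFFER_DELAY     100                    /* flush interval, in ms */
    #define XFER_LIMIT_RATIO (1000 / BUFFER_DELAY)  /* timer ticks per second */
    /* previously written as a bare "/ 10" */
    s->xfer_limit = s->bandwidth_limit / XFER_LIMIT_RATIO;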
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This only moves the code (also from buffered_file.h to migration.h).
Fix whitespace until checkpatch is happy.
Signed-off-by: Juan Quintela <quintela@redhat.com>
The code right now does (simplified for clarity):
    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }
The problem here is that qemu_savevm_state_iterate() returns 1 when it
knows that the remaining memory to send takes less than the max downtime.
But this means that we could end up spending 2x max_downtime: one
downtime in qemu_savevm_state_iterate, and the other in
qemu_savevm_state_complete.
Changed code to:
    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }
So what we do is: at the current network speed, we calculate the maximum
number of bytes we can send: max_size.
Then we ask every save_live section how much it has pending. If the
total is less than max_size, we move to the completion phase; otherwise
we do another iteration.
This makes things much simpler, because now individual sections don't
have to calculate the bandwidth (it was impossible to do right from
there).
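Roughly, the max_size calculation above looks like this (a sketch;
variable names and units are illustrative):
    /* bandwidth: bytes sent per millisecond over the last measurement period */
    bandwidth = transferred_bytes / time_spent_ms;
    /* largest amount we can still send and complete within the allowed downtime */
    max_size  = bandwidth * max_downtime_ms;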
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It was the only user, and now buffered_put_buffer just does the append.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
We call buffered_put_buffer with the iothread lock held, and
buffered_flush() does synchronous writes. We only want to do the
synchronous writes outside the lock.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This was needed before due to the way that the callbacks worked.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Now that we have a thread, and blocking writes, we don't need it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Move all the writes to the migration_thread, and make the writes
blocking. Notice that we are still using the iothread for everything
that we do.
Signed-off-by: Juan Quintela <quintela@redhat.com>
We want the file assignment to happen before the thread is created to
avoid locking, so we just do it before creating the thread.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
We still protect everything except the wait with the iothread lock.
But we have moved from a timer to a thread; one step at a time.
We also need to detect when we have finished, with a variable "complete".
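Roughly the shape of the resulting loop (a sketch of the description
above; the helper name is illustrative):
    /* The old timer callback body becomes the body of the migration thread.
     * Everything except the wait still runs under the iothread lock, and
     * "complete" tells us when the migration has finished. */
    while (!complete) {
        qemu_mutex_lock_iothread();
        complete = migrate_one_iteration(s);   /* illustrative helper */
        qemu_mutex_unlock_iothread();
        g_usleep(BUFFER_DELAY * 1000);         /* the wait happens unlocked */
    }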
Signed-off-by: Juan Quintela <quintela@redhat.com>
Add the new mutex that protects shared state between ram_save_live
and the iothread. If the iothread mutex has to be taken together
with the ramlist mutex, the iothread mutex shall always be taken _outside_.
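In other words, the required ordering is (a sketch, assuming a
qemu_mutex_lock_ramlist()-style helper for the new mutex):
    /* iothread lock always taken first, ramlist mutex nested inside */
    qemu_mutex_lock_iothread();
    qemu_mutex_lock_ramlist();
    /* ... walk or modify the ram_list ... */
    qemu_mutex_unlock_ramlist();
    qemu_mutex_unlock_iothread();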
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
This will be used to detect if last_block might have become invalid
across different calls to ram_save_live.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Most of the time, only 2 items will be active (from/to for a string operation,
or code/data). But TCG guests likely won't have gigabytes of memory, so
this actually goes down to 1 item.
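For illustration, the kind of one-item cache this describes (field and
variable names are illustrative):
    /* Check the single cached block first; fall back to the full list
     * walk and refresh the cache on a hit. */
    block = ram_list.mru_block;
    if (block && addr - block->offset < block->length) {
        return block;
    }
    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
        if (addr - block->offset < block->length) {
            ram_list.mru_block = block;
            return block;
        }
    }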
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The call in buffered_close is enough, because buffered_close is already
called by migrate_fd_cleanup.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Sending more was possible if the buffer was large.
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Move bindings from opaque to DeviceState.
This gives us better type safety with no performance cost.
Add macros to make future QOM work easier.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* bonzini/header-dirs: (45 commits)
janitor: move remaining public headers to include/
hw: move executable format header files to hw/
fpu: move public header file to include/fpu
softmmu: move remaining include files to include/ subdirectories
softmmu: move include files to include/sysemu/
misc: move include files to include/qemu/
qom: move include files to include/qom/
migration: move include files to include/migration/
monitor: move include files to include/monitor/
exec: move include files to include/exec/
block: move include files to include/block/
qapi: move include files to include/qobject/
janitor: add guards to headers
qapi: make struct Visitor opaque
qapi: remove qapi/qapi-types-core.h
qapi: move inclusions of qemu-common.h from headers to .c files
ui: move files to ui/ and include/ui/
qemu-ga: move qemu-ga files to qga/
net: reorganize headers
net: move net.c to net/
...
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Document that the x86 CPU refactorings are going through the qom-cpu
tree. This does not contradict the established practice that patches
adding KVM features to the x86 CPU go through the KVM maintainers;
it merely takes it out of target-i386 TCG's Odd Fixes status.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
This separates the qdev properties code into two parts:
- qdev-properties.c, that contains most of the qdev properties code;
- qdev-properties-system.c for code specific to qemu-system-*,
containing:
  - Property types: drive, chr, netdev, vlan, that depend on code that
    won't be included in *-user
  - qemu_add_globals(), that depends on qemu-config.o.
This change should help with two things:
- Allowing DeviceState to be used by *-user without pulling in
  dependencies that are specific to qemu-system-*;
- Writing qdev unit tests without pulling too many dependencies.
The copyright/license of qdev-properties.c isn't explicitly stated in
the file, so add a simple copyright/license header pointing to the
commit ID of the original file.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>