Commit Graph

15 Commits

Author SHA1 Message Date
Peter Maydell
bd7f92e59e vmstate: Add support for two dimensional arrays
Add support for migrating two-dimensional arrays by defining a set of
new VMSTATE_*_2DARRAY macros paralleling the existing VMSTATE_*_ARRAY
macros. 2D arrays are handled the same way for the actual state
serialization; the only difference is that the type check has to change
for a 2D array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mitsyanko <i.mitsyanko@gmail.com>
Message-id: 1363975375-3166-2-git-send-email-peter.maydell@linaro.org
2013-04-05 16:17:59 +01:00
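For illustration, a minimal usage sketch of one of the new macros; the
DemoState device and its field are hypothetical, not part of the commit,
and the arguments follow the pattern (field, state type, outer dimension,
inner dimension):

    typedef struct DemoState {
        uint32_t regs[4][8];               /* fixed-size two-dimensional array */
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            /* serialized like a flat array; only the type check differs */
            VMSTATE_UINT32_2DARRAY(regs, DemoState, 4, 8),
            VMSTATE_END_OF_LIST()
        }
    };
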
Igor Mitsyanko
8070568b9a vmstate.h: introduce VMSTATE_BUFFER_POINTER_UNSAFE macro
This macro can be used to migrate a dynamically allocated buffer of known size.

Signed-off-by: Igor Mitsyanko <i.mitsyanko@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1362923278-4080-2-git-send-email-i.mitsyanko@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2013-04-05 16:17:58 +01:00
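A hedged usage sketch, assuming the macro takes (field, state type,
version, byte size) like the other buffer macros; DemoState and
DEMO_BUF_SIZE are hypothetical:

    #define DEMO_BUF_SIZE 4096

    typedef struct DemoState {
        uint8_t *buf;                      /* allocated with g_malloc(DEMO_BUF_SIZE) */
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .fields = (VMStateField[]) {
            /* migrates the DEMO_BUF_SIZE bytes that s->buf points to */
            VMSTATE_BUFFER_POINTER_UNSAFE(buf, DemoState, 1, DEMO_BUF_SIZE),
            VMSTATE_END_OF_LIST()
        }
    };
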
David Gibson
377e2cb96b savevm: Fix bugs in the VMSTATE_VBUFFER_MULTIPLY definition
The VMSTATE_BUFFER_MULTIPLY macro is misnamed - it actually specifies
a variably sized buffer with VMS_VBUFFER, so should be named
VMSTATE_VBUFFER_MULTIPLY.  This patch fixes this (the macro had no current
users under either name).

In addition, unlike the other VMSTATE_VBUFFER variants, this macro did not
specify VMS_POINTER.  This patch fixes this bug as well.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-26 13:30:49 +01:00
David Gibson
8474a9dd67 savevm: Add VMSTATE_STRUCT_VARRAY_POINTER_UINT32
Currently the savevm code contains a VMSTATE_STRUCT_VARRAY_POINTER_INT32
helper (a variably sized array with the number of elements in an int32_t),
but not VMSTATE_STRUCT_VARRAY_POINTER_UINT32 (... with the number of
elements in a uint32_t).  This patch (trivially) fixes the deficiency.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-26 13:30:49 +01:00
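A hedged usage sketch; DemoState and DemoItem are hypothetical, and the
argument order is assumed to follow the existing INT32 variant (field,
state type, count field, element vmsd, element type):

    typedef struct DemoItem {
        uint32_t value;
    } DemoItem;

    static const VMStateDescription vmstate_demo_item = {
        .name = "demo/item",
        .version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(value, DemoItem),
            VMSTATE_END_OF_LIST()
        }
    };

    typedef struct DemoState {
        uint32_t nb_items;                 /* element count held in a uint32_t */
        DemoItem *items;                   /* heap-allocated array of nb_items */
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(nb_items, DemoState),
            VMSTATE_STRUCT_VARRAY_POINTER_UINT32(items, DemoState, nb_items,
                                                 vmstate_demo_item, DemoItem),
            VMSTATE_END_OF_LIST()
        }
    };
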
David Gibson
213945e4d7 savevm: Add VMSTATE_FLOAT64 helpers
The current savevm code includes VMSTATE helpers for a number of commonly
used data types, but not for the float64 type used by the internal floating
point emulation code.  This patch fixes the deficiency.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-26 13:30:49 +01:00
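A hedged usage sketch; the FPU state below is hypothetical, float64 is
the softfloat type, and the ARRAY variant is assumed to parallel the
helpers for the other scalar types:

    typedef struct DemoFPU {
        float64 fregs[32];                 /* softfloat values, not host doubles */
        float64 acc;
    } DemoFPU;

    static const VMStateDescription vmstate_demo_fpu = {
        .name = "demo-fpu",
        .version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_FLOAT64_ARRAY(fregs, DemoFPU, 32),
            VMSTATE_FLOAT64(acc, DemoFPU),
            VMSTATE_END_OF_LIST()
        }
    };
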
David Gibson
d58f559834 savevm: Add VMSTATE_UINTTL_EQUAL helper
This adds an _EQUAL VMSTATE helper for target_ulongs, defined in terms of
VMSTATE_UINT32_EQUAL or VMSTATE_UINT64_EQUAL as appropriate.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-26 13:30:49 +01:00
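A hedged sketch of the shape of such a definition; the real one lives in
the target-dependent headers and may differ in detail:

    #if TARGET_LONG_BITS == 64
    #define VMSTATE_UINTTL_EQUAL(_f, _s)  VMSTATE_UINT64_EQUAL(_f, _s)
    #else
    #define VMSTATE_UINTTL_EQUAL(_f, _s)  VMSTATE_UINT32_EQUAL(_f, _s)
    #endif
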
David Gibson
e344b8a16d savevm: Add VMSTATE_UINT64_EQUAL helpers
The savevm code already includes a number of *_EQUAL helpers which act as
sanity checks, verifying that the configuration of the saved state matches
that of the machine we're loading into.  Variants already exist for 8-bit,
16-bit and 32-bit integers, but not for 64-bit integers.  This patch fills
that hole by adding a UINT64 version.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-26 13:30:48 +01:00
Andreas Färber
c71c3e99b8 stubs: Add a vmstate_dummy struct for CONFIG_USER_ONLY
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:54 +01:00
Andreas Färber
d7650eab42 vmstate: Make vmstate_register() static inline
This avoids adding a duplicate stub for CONFIG_USER_ONLY.

Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:54 +01:00
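A hedged sketch of what such a wrapper looks like; the authoritative
version is in include/migration/vmstate.h and may differ in detail:

    static inline int vmstate_register(DeviceState *dev, int instance_id,
                                       const VMStateDescription *vmsd,
                                       void *opaque)
    {
        /* forwards to the out-of-line helper, so no extra stub is needed
         * for CONFIG_USER_ONLY builds */
        return vmstate_register_with_alias_id(dev, instance_id, vmsd,
                                              opaque, -1, 0);
    }
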
Paolo Bonzini
9b09503752 migration: run setup callbacks out of big lock
Only the migration_bitmap_sync() call needs the iothread lock.

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11 13:32:01 +01:00
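A hedged sketch of the resulting pattern in a setup callback; the
function name is illustrative and this is simplified, not the verbatim
RAM-migration code:

    static int demo_save_setup(QEMUFile *f, void *opaque)
    {
        /* bitmap allocation and other setup run without the big lock */

        qemu_mutex_lock_iothread();        /* only the bitmap sync needs the BQL */
        migration_bitmap_sync();
        qemu_mutex_unlock_iothread();

        /* section header and block list are written lock-free */
        return 0;
    }
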
Paolo Bonzini
32c835ba39 migration: run pending/iterate callbacks out of big lock
This makes it possible to do blocking writes directly to the socket,
with no buffer in the middle.  For RAM, only the migration_bitmap_sync()
call needs the iothread lock.  For block migration, it is needed by
the block layer (including bdrv_drain_all and dirty bitmap access),
but because some code is shared between iterate and complete, all of
mig_save_device_dirty is run with the lock taken.

In the savevm case, the iterate callback runs within the big lock.
This is annoying because it complicates the rules.  Luckily we do not
need to do anything about it: the RAM iterate callback does not need
the iothread lock, and block migration never runs during savevm.

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11 13:32:01 +01:00
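A hedged sketch of the locking pattern described above for the
block-migration iterate path; simplified and illustrative, not the
verbatim code:

    static int demo_block_save_iterate(QEMUFile *f, void *opaque)
    {
        int ret = 0;

        /* blocking writes to the socket happen without the big lock */

        qemu_mutex_lock_iothread();        /* block layer (bdrv_*) needs the BQL */
        /* ... mig_save_device_dirty() and other dirty-bitmap work runs here,
         * because that code is shared with the complete phase ... */
        qemu_mutex_unlock_iothread();

        return ret;
    }
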
Paolo Bonzini
8c8de19d93 migration: reorder SaveVMHandlers members
This groups together the callbacks that later will have similar
locking rules.

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11 13:32:01 +01:00
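A hedged sketch of the grouping idea, not the exact struct layout; see
include/migration/vmstate.h of that era for the authoritative definition:

    typedef struct SaveVMHandlers {
        /* callbacks that run inside the iothread lock */
        void (*set_params)(const MigrationParams *params, void *opaque);
        SaveStateHandler *save_state;
        void (*cancel)(void *opaque);
        int (*save_live_complete)(QEMUFile *f, void *opaque);

        /* callbacks that, in the later patches, run outside the lock */
        int (*save_live_setup)(QEMUFile *f, void *opaque);
        int (*save_live_iterate)(QEMUFile *f, void *opaque);
        uint64_t (*save_live_pending)(QEMUFile *f, void *opaque,
                                      uint64_t max_size);

        LoadStateHandler *load_state;
    } SaveVMHandlers;
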
Paolo Bonzini
fd7f0d6617 hw: move fifo.[ch] to libqemuutil
fifo.c is generic code that can be easily unit tested.  So it
belongs in libqemuutil.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-03-01 13:53:10 +01:00
Juan Quintela
e4ed1541ac savevm: New save live migration method: pending
The code currently does this (simplified for clarity):

    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

The problem here is that qemu_savevm_state_iterate() returns 1 when it
knows that sending the remaining memory will take less than the maximum
downtime.

But this means that we could end up spending 2x max_downtime: one
downtime in qemu_savevm_state_iterate, and the other in
qemu_savevm_state_complete.

Changed code to:

    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

So what we do is: at the current network speed, we calculate the maximum
number of bytes we can send: max_size.

Then we ask every save_live section how much it has pending.  If the
total is less than max_size, we move to the complete phase; otherwise we
do another iteration.

This makes things much simpler, because now individual sections don't
have to calculate the bandwidth (it was impossible to do that correctly
from there).

Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-20 23:09:25 +01:00
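A hedged sketch of how max_size can be derived from the measured link
speed; variable and function names are illustrative, not the verbatim
migration-thread code:

    static uint64_t compute_max_size(uint64_t transferred_bytes,
                                     int64_t time_spent_ms,
                                     int64_t max_downtime_ns)
    {
        /* bytes per millisecond over the last measurement interval */
        double bandwidth = (double)transferred_bytes / time_spent_ms;

        /* bytes we can still send within the allowed downtime */
        return bandwidth * max_downtime_ns / 1000000;
    }
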
Paolo Bonzini
caf71f86a3 migration: move include files to include/migration/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:31:32 +01:00