As virtio-balloon-pci is switched to the new API, we can use QOM
casts.
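For illustration only, a minimal standalone sketch of the checked-downcast
pattern that QOM cast macros provide (heavily simplified; the real macros
are built on QOM run-time type information, and the type and field names
below are made up):

    #include <assert.h>
    #include <string.h>

    /* Simplified stand-ins; the real code uses QOM Object/DeviceState. */
    typedef struct ParentDevice {
        const char *type_name;
    } ParentDevice;

    typedef struct BalloonPCIDevice {
        ParentDevice parent_obj;   /* parent state is the first member */
        int num_pages;
    } BalloonPCIDevice;

    #define TYPE_BALLOON_PCI "balloon-pci"

    /* Checked cast: verify the dynamic type before downcasting, instead
     * of using a bare (BalloonPCIDevice *) C cast. */
    static inline BalloonPCIDevice *BALLOON_PCI(ParentDevice *obj)
    {
        assert(obj && strcmp(obj->type_name, TYPE_BALLOON_PCI) == 0);
        return (BalloonPCIDevice *)obj;
    }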
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-6-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This removes the old init and exit functions as they are no longer needed.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-5-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Here virtio-balloon-ccw is modified for the new API. The device
virtio-balloon-ccw extends virtio-ccw-device as before. It creates and
connects a virtio-balloon during init. The properties are not modified.
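As a rough, standalone sketch of the pattern described above (hypothetical,
simplified types; the real code goes through QOM object initialisation and
the virtio bus):

    #include <stdio.h>

    /* Simplified stand-ins for the ccw transport proxy and the
     * transport-independent virtio-balloon backend. */
    typedef struct VirtIOBalloon {
        unsigned int num_pages;
    } VirtIOBalloon;

    typedef struct BalloonProxy {
        VirtIOBalloon vdev;        /* backend embedded in the proxy */
    } BalloonProxy;

    static void virtio_balloon_init(VirtIOBalloon *b)
    {
        b->num_pages = 0;
    }

    /* During the proxy's init, the embedded balloon device is created
     * and connected to its transport. */
    static void balloon_proxy_init(BalloonProxy *proxy)
    {
        virtio_balloon_init(&proxy->vdev);
        printf("virtio-balloon connected to its transport\n");
    }

    int main(void)
    {
        BalloonProxy proxy;
        balloon_proxy_init(&proxy);
        return 0;
    }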
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-4-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Here virtio-balloon-pci is modified for the new API. The device
virtio-balloon-pci extends virtio-pci. It creates and connects a
virtio-balloon during init. The properties are not changed.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-3-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Create virtio-balloon, which extends virtio-device, so it can be connected
to virtio-bus.
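A simplified, hypothetical illustration of the extend-a-parent-type idea;
the real code registers a TypeInfo with QOM, and the names below are made
up for the sketch:

    #include <stddef.h>
    #include <stdio.h>

    /* Minimal stand-in for a QOM-like type description: a child type
     * names its parent, and instances embed the parent state first. */
    typedef struct TypeDescription {
        const char *name;
        const char *parent;
        size_t instance_size;
    } TypeDescription;

    typedef struct VirtIODeviceState { int status; } VirtIODeviceState;

    typedef struct VirtIOBalloonState {
        VirtIODeviceState parent_obj;   /* "extends" virtio-device */
        unsigned int num_pages;
    } VirtIOBalloonState;

    static const TypeDescription virtio_balloon_type = {
        .name          = "virtio-balloon-device",
        .parent        = "virtio-device",
        .instance_size = sizeof(VirtIOBalloonState),
    };

    int main(void)
    {
        printf("%s extends %s\n", virtio_balloon_type.name,
               virtio_balloon_type.parent);
        return 0;
    }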
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-2-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
# By Kevin Wolf (22) and Peter Lieven (1)
# Via Stefan Hajnoczi
* stefanha/block: (23 commits)
block: Fix direct use of protocols as driver for bdrv_open()
qcow2: Gather clusters in a looping loop
qcow2: Move cluster gathering to a non-looping loop
qcow2: Allow requests with multiple l2metas
qcow2: Use byte granularity in qcow2_alloc_cluster_offset()
qcow2: Prepare handle_alloc/copied() for byte granularity
qcow2: handle_copied(): Implement non-zero host_offset
qcow2: handle_copied(): Get rid of keep_clusters parameter
qcow2: handle_copied(): Get rid of nb_clusters parameter
qcow2: Factor out handle_copied()
qcow2: Clean up handle_alloc()
qcow2: Finalise interface of handle_alloc()
qcow2: handle_alloc(): Get rid of keep_clusters parameter
qcow2: handle_alloc(): Get rid of nb_clusters parameter
qcow2: Factor out handle_alloc()
qcow2: Decouple cluster allocation from cluster reuse code
qcow2: Change handle_dependency to byte granularity
qcow2: Improve check for overlapping allocations
qcow2: Handle dependencies earlier
qcow2: Remove bogus unlock of s->lock
...
# By Lluís Vilanova (7) and others
# Via Stefan Hajnoczi
* stefanha/tracing:
vl: add runstate_set tracepoint
.gitignore: rename trace/generated-tracers.dtrace
.gitignore: add trace/generated-events.[ch]
trace: rebuild generated-events.o when configuration changes
trace: [stderr] Port to generic event information and new control interface
trace: [simple] Port to generic event information and new control interface
trace: [default] Port to generic event information and new control interface
trace: [monitor] Use new event control interface
trace: Provide a detailed event control interface
trace: Provide a generic tracing event descriptor
trace: [tracetool] Explicitly identify public backends
This patch enables us to trace RunState transitions. It will be useful
for investigation when trouble occurs during special events such as
live migration, shutdown, suspend, and so on.
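A minimal standalone sketch of the idea; the real tracepoint is generated
by tracetool from a trace-events entry, so the stub below only stands in
for it:

    #include <stdio.h>

    typedef enum {
        RUN_STATE_RUNNING,
        RUN_STATE_PAUSED,
        RUN_STATE_SHUTDOWN,
    } RunState;

    /* Stand-in for the generated trace_runstate_set() call. */
    static void trace_runstate_set(int new_state)
    {
        fprintf(stderr, "runstate_set new_state %d\n", new_state);
    }

    static RunState current_run_state = RUN_STATE_RUNNING;

    static void runstate_set(RunState new_state)
    {
        trace_runstate_set(new_state);     /* record every transition */
        current_run_state = new_state;
    }

    int main(void)
    {
        runstate_set(RUN_STATE_PAUSED);
        runstate_set(RUN_STATE_SHUTDOWN);
        return current_run_state == RUN_STATE_SHUTDOWN ? 0 : 1;
    }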
Signed-off-by: Kazuya Saito <saito.kazuya@jp.fujitsu.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
For a while the file was called trace/generated-tracers-dtrace.dtrace
but today it's called trace/generated-tracers.dtrace.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Make sure to rebuild generated-events.o when ./configure options change.
This prevents linker errors when a stale generated-events.o gets linked
with code compiled against fresh headers. For example, try building
with ./configure --enable-trace-backend=stderr followed by ./configure
--enable-trace-backend=dtrace.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The backend is forced to dump event numbers using 64 bits, as TraceEventID is
an enum.
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This interface decouples obtaining events from interacting with them.
Events can be obtained through three different methods (sketched below):
* identifier
* name
* simple wildcard pattern
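A hypothetical, simplified sketch of such lookups; the names and signatures
below are made up for illustration and do not match the actual interface:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct TraceEvent {
        size_t id;
        const char *name;
        bool enabled;
    } TraceEvent;

    static TraceEvent events[] = {
        { 0, "runstate_set", false },
        { 1, "bdrv_open_common", false },
    };
    #define NR_EVENTS (sizeof(events) / sizeof(events[0]))

    static TraceEvent *event_by_id(size_t id)
    {
        return id < NR_EVENTS ? &events[id] : NULL;
    }

    static TraceEvent *event_by_name(const char *name)
    {
        for (size_t i = 0; i < NR_EVENTS; i++) {
            if (strcmp(events[i].name, name) == 0) {
                return &events[i];
            }
        }
        return NULL;
    }

    /* Simple pattern: an optional trailing '*' matches any suffix. */
    static bool event_matches(const TraceEvent *ev, const char *pat)
    {
        const char *star = strchr(pat, '*');
        if (!star) {
            return strcmp(ev->name, pat) == 0;
        }
        return strncmp(ev->name, pat, (size_t)(star - pat)) == 0;
    }

    int main(void)
    {
        printf("%s\n", event_by_id(0)->name);
        printf("%d\n", event_by_name("bdrv_open_common") != NULL);
        printf("%d\n", event_matches(&events[0], "runstate_*"));
        return 0;
    }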
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Uses tracetool to generate a backend-independent tracing event description
(struct TraceEvent).
The values for this structure are generated with the non-public "events"
backend ("events-c" frontend).
The generation of the defines to check if an event is statically enabled is also
moved to the "events" backend ("events-h" frontend).
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Public backends are those printed by "--list-backends" and thus considered valid
by the configure script.
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_open_common() implements direct use of protocols by copying the
pre-opened BlockDriverStates to bs using bdrv_swap(). It did, however,
first set some fields in bs, which end up in file after the swap. When
bdrv_open() destroys file, it appears to be open, and because it isn't,
qemu could segfault while trying to close it.
Reorder the operations to return immediately in such cases so that file
is correctly detected as closed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of just checking once, in exactly this order, whether there are
dependencies, non-COW clusters and new allocations, this starts looping
around these. This way we can, for example, gather non-COW clusters after
new allocations as long as the host cluster offsets stay contiguous.
Once handle_dependencies() is extended so that COW areas of in-flight
allocations can be overwritten, this allows us to continue gathering
other clusters (we wouldn't be able to do that without this change
because we would have missed a possible second dependency in one of the
next clusters).
This means that in the typical sequential write case, we can combine the
COW overwrite of one cluster with the allocation of the next cluster as
soon as something like Delayed COW actually gets implemented. It is only
by avoiding splitting requests this way that Delayed COW actually starts
improving performance noticeably.
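To illustrate the control flow only, a heavily simplified standalone sketch
of the loop shape; the helpers are toy stubs, not the real qcow2 functions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy stubs: each consumes part of the request and reports progress. */
    static bool wait_dependencies(uint64_t *bytes) { (void)bytes; return true; }
    static uint64_t gather_copied(uint64_t *bytes)
    {
        uint64_t n = *bytes / 2;       /* pretend half is already allocated */
        *bytes -= n;
        return n;
    }
    static uint64_t gather_allocated(uint64_t *bytes)
    {
        uint64_t n = *bytes;           /* pretend the rest can be allocated */
        *bytes -= n;
        return n;
    }

    static void gather_request(uint64_t bytes)
    {
        while (bytes > 0) {
            if (!wait_dependencies(&bytes)) {
                break;             /* overlap with an in-flight allocation */
            }
            if (gather_copied(&bytes)) {
                continue;          /* found non-COW clusters, keep looping */
            }
            if (!gather_allocated(&bytes)) {
                break;             /* no contiguous allocation possible */
            }
            /* loop again: more non-COW clusters may follow the allocation
             * as long as host offsets stay contiguous */
        }
    }

    int main(void)
    {
        gather_request(1 << 20);
        return 0;
    }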
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch mainly separates the indentation change from the semantic
changes. All that really changes here is that everything moves into a
while loop, all 'goto done' statements become 'break', and at the end of
the loop a new 'break' is inserted.
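For illustration, a generic before/after of that mechanical transformation
(toy functions, not the qcow2 code):

    static int step_one(void) { return 0; }
    static int step_two(void) { return 0; }

    /* Before: straight-line code with 'goto done'. */
    static int before(void)
    {
        int ret;

        ret = step_one();
        if (ret < 0) {
            goto done;
        }
        ret = step_two();
    done:
        return ret;
    }

    /* After: the same code inside a loop; 'goto done' becomes 'break',
     * and a new 'break' ends the single pass through the loop. */
    static int after(void)
    {
        int ret;

        while (1) {
            ret = step_one();
            if (ret < 0) {
                break;
            }
            ret = step_two();
            break;
        }
        return ret;
    }

    int main(void)
    {
        return before() | after();
    }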
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of expecting a single l2meta, have a list of them. This allows us
to still issue a single I/O request for the guest data, even though
multiple l2metas may be needed to describe both a COW overwrite and a new
cluster allocation (the typical sequential write case).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This gets rid of nb_clusters and keep_clusters and the associated
complicated calculations. Just advance the number of bytes that have
been processed and everything is fine.
This patch advances the variables even after the last operation, even
though they aren't used any more afterwards, to make things look more
uniform. A later patch will turn the whole thing into a loop, and then
it actually starts making sense.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This makes handle_alloc() and handle_copied() return byte-granularity
host offsets instead of always returning the cluster start. This is
required so that qcow2_alloc_cluster_offset() can stop aligning
everything to cluster boundaries.
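A tiny worked example of the byte-granularity return value (standalone,
illustrative numbers only; the masking below mirrors the usual power-of-two
offset-into-cluster calculation):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cluster_size       = 64 * 1024;               /* 64 KiB */
        uint64_t guest_offset       = 3 * cluster_size + 4096; /* unaligned */
        uint64_t host_cluster_start = 10 * cluster_size;

        /* Cluster granularity: always the cluster start. */
        uint64_t old_result = host_cluster_start;

        /* Byte granularity: keep the offset into the cluster. */
        uint64_t into_cluster = guest_offset & (cluster_size - 1);
        uint64_t new_result   = host_cluster_start + into_cluster;

        printf("cluster granularity: %" PRIu64 "\n", old_result); /* 655360 */
        printf("byte granularity:    %" PRIu64 "\n", new_result); /* 659456 */
        return 0;
    }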
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Look only for clusters that start at a given physical offset.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now *bytes is used to return the length of the area that can be written
to without performing an allocation or COW.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
handle_copied() uses its bytes parameter now to determine how many
clusters it should try to find.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Things can be simplified a bit now. No semantic changes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The interface works completely on a byte granularity now and duplicated
parameters are removed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
handle_alloc() is now called with the offset at which the actual new
allocation starts instead of the offset at which the whole write request
starts, part of which may already be processed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We already communicate the same information in *bytes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This moves some code that prepares the allocation of new clusters to
where the actual allocation happens. This is the minimum required to be
able to move it to a separate function in the next patch.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is a more precise description of what really constitutes a
dependency. The behaviour doesn't change at this point because the COW
area of the old request is still aligned to cluster boundaries and
therefore an overlap is detected whenever the requests touch any part of
the same cluster.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The old code detected an overlapping allocation even when the
allocations didn't actually overlap, but were only adjacent.
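The distinction expressed as a self-contained helper on half-open byte
ranges (purely illustrative, not the qcow2 code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* [a, a + la) and [b, b + lb) overlap iff each starts before the other
     * ends; adjacent ranges (a + la == b) do NOT overlap. */
    static bool ranges_overlap(uint64_t a, uint64_t la, uint64_t b, uint64_t lb)
    {
        return a < b + lb && b < a + la;
    }

    int main(void)
    {
        printf("%d\n", ranges_overlap(0, 10, 10, 10));  /* 0: adjacent */
        printf("%d\n", ranges_overlap(0, 10,  5, 10));  /* 1: overlapping */
        return 0;
    }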
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Handling overlapping allocations isn't just a detail of cluster
allocation. It is rather one of three ways to get the host cluster
offset for a write request:
1. If a request overlaps an in-flight allocation, the cluster offset
can be taken from there (this is what handle_dependencies will evolve
into) or the request must just wait until the allocation has
completed. Accessing the L2 table is not valid in this case, as it has
outdated information.
2. Outside overlapping areas, check the clusters that can be written to
as they are, with no COW involved.
3. If a COW is required, allocate new clusters.
Changing the code to reflect this doesn't change the behaviour because
overlaps cannot exist for clusters that are kept in step 2. It does
however make it easier for later patches to work on clusters that belong
to an allocation that is still in flight.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The unlock wakes up the next coroutine, but the currently running
coroutine will lock it again before it yields, so this doesn't make a
lot of sense.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This should be based on the virtual disk size, not on the size of the
image.
Interesting observation: With some VM state stored in the image file,
percentages higher than 100% are possible, even though snapshots
themselves are ignored. This is a qcow2 bug to be fixed another day: The
VM state should be discarded in the active L2 tables after completing
the snapshot creation.
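Illustrative arithmetic for that observation, with made-up numbers: once
VM state occupies clusters in addition to the guest-visible data, the
amount of processed data can exceed the virtual disk size, so the
percentage can go above 100%.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t virtual_size = 10ULL << 30;   /* 10 GiB virtual disk */
        uint64_t guest_data   = 10ULL << 30;   /* all guest clusters */
        uint64_t vm_state     = 1ULL << 30;    /* 1 GiB of stored VM state */

        double progress = 100.0 * (guest_data + vm_state) / virtual_size;
        printf("progress: %.0f%%\n", progress);    /* 110% */
        return 0;
    }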
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
commit 4d454574 "qemu-option: move standard option definitions
out of qemu-config.c" broke support for command-line option
groups that were registered during bdrv_init(). In particular,
support for -iscsi options has been broken since that commit.
Fix this by moving bdrv_init_with_whitelist() before command-line
argument parsing.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that the core takes care of fe_open tracking we no longer need this hack.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-12-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When migrating a host with a spice agent running, the mouse becomes
non-operational after the migration because the agent state is
inconsistent between the guest and the client.
After migration, the spicevmc backend on the destination has never been
notified of the (non-zero) guest_connected state. Virtio-serial holds this
state information and migrates it; this patch properly propagates it
to virtio-console and, through that, to interested chardev backends.
rhbz #725965
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-11-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Resending the be_open event is only useful when a frontend is registering,
not when it is unregistering.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-9-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The decrement of avail_connections is done in qdev-properties-system;
move the increment there too for proper balancing of the calls.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-8-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Most frontends can't really determine whether the guest actually has the
frontend side open. So let's automatically generate fe_open / fe_close as
soon as a frontend becomes ready (as signalled by calling
qemu_chr_add_handlers) or stops being ready (as signalled by setting all
handlers to NULL).
And allow frontends which can actually determine whether the guest is
listening to opt out of this.
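A toy model of the idea (standalone; the types and names are simplified
stand-ins, not the real chardev code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef void (*ReadHandler)(void *opaque, const char *buf, int len);

    typedef struct CharDriver {
        ReadHandler chr_read;
        bool fe_open;
    } CharDriver;

    static void chr_fe_set_open(CharDriver *chr, bool open)
    {
        if (chr->fe_open != open) {
            chr->fe_open = open;
            printf("frontend %s\n", open ? "opened" : "closed");
        }
    }

    /* The core derives fe_open/fe_close from handler registration:
     * non-NULL handlers mean the frontend is ready, NULL handlers mean
     * it went away. */
    static void chr_add_handlers(CharDriver *chr, ReadHandler chr_read)
    {
        chr->chr_read = chr_read;
        chr_fe_set_open(chr, chr_read != NULL);
    }

    static void dummy_read(void *opaque, const char *buf, int len)
    {
        (void)opaque; (void)buf; (void)len;
    }

    int main(void)
    {
        CharDriver chr = { NULL, false };
        chr_add_handlers(&chr, dummy_read);  /* -> fe_open */
        chr_add_handlers(&chr, NULL);        /* -> fe_close */
        return 0;
    }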
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-5-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add tracking of the fe_open state to struct CharDriverState.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-4-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>