Instead of having duplicate feature names in the ext2_feature array for
the AMD feature bit aliases, we keep the feature names only in the
feature_name[] array, and copy the corresponding bits to
cpuid_ext2_features when the CPU vendor is AMD.
This will:
- Make sure we don't set the feature bit aliases on Intel CPUs;
- Make it easier to convert feature bits to CPU properties, as we now
have a single bit in the x86_def_t struct for each CPU feature.
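Roughly, the copy could look like this at CPU setup time (a sketch; the
field and macro names are assumptions based on the description above,
and the alias mask stands for the set of shared bits):

    /* Sketch: only AMD defines these CPUID[8000_0001].EDX bits as aliases
     * of CPUID[1].EDX, so copy them only when the vendor is AMD. */
    if (env->cpuid_vendor1 == CPUID_VENDOR_AMD_1 &&
        env->cpuid_vendor2 == CPUID_VENDOR_AMD_2 &&
        env->cpuid_vendor3 == CPUID_VENDOR_AMD_3) {
        env->cpuid_ext2_features &= ~CPUID_EXT2_AMD_ALIASES;
        env->cpuid_ext2_features |= env->cpuid_features & CPUID_EXT2_AMD_ALIASES;
    }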
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Don Slutz <Don@CloudSwitch.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Both constants have the same value, but CPUID_EXT2_AMD_ALIASES is
defined without using magic numbers.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Don Slutz <Don@CloudSwitch.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Instead of using a hardcoded hex constant, define CPUID_EXT2_AMD_ALIASES
as the set of CPUID[8000_0001].EDX bits that on AMD are the same as the
bits of CPUID[1].EDX.
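For illustration, such a definition could look like this (the bit list
below is reconstructed from the description and may not match the patch
exactly):

    /* CPUID[8000_0001].EDX bits that on AMD are aliases of the
     * corresponding CPUID[1].EDX bits. */
    #define CPUID_EXT2_AMD_ALIASES \
        (CPUID_EXT2_FPU | CPUID_EXT2_VME | CPUID_EXT2_DE | CPUID_EXT2_PSE | \
         CPUID_EXT2_TSC | CPUID_EXT2_MSR | CPUID_EXT2_PAE | CPUID_EXT2_MCE | \
         CPUID_EXT2_CX8 | CPUID_EXT2_APIC | CPUID_EXT2_MTRR | CPUID_EXT2_PGE | \
         CPUID_EXT2_MCA | CPUID_EXT2_CMOV | CPUID_EXT2_PAT | CPUID_EXT2_PSE36 | \
         CPUID_EXT2_MMX | CPUID_EXT2_FXSR)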
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-By: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Don Slutz <Don@CloudSwitch.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Bit 10 of CPUID[8000_0001].EDX is not defined as an alias of
CPUID[1].EDX[10], so do not duplicate it on
kvm_arch_get_supported_cpuid().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-By: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Don Slutz <Don@CloudSwitch.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Add a test for each of report/ignore/stop. The tests use blkdebug
to generate an error in the middle of a script. The error is
recoverable (once = "on") so that we can test resuming a job after
stopping for an error.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
iotests.py provides a convenience function that uses Python keyword
arguments to represent QMP command arguments. However, almost all
QMP commands use dashes for argument names (the sole exception is
block_set_io_throttle), and dashes are not allowed in a keyword
argument name. Hence provide automatic conversion of underscores
to dashes.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently it is impossible to write a blkdebug script that ping-pongs
between two states, because the second set-state rule will use the
state that is set in the first. If you have
[set-state]
event = "..."
state = "1"
new_state = "2"

[set-state]
event = "..."
state = "2"
new_state = "1"
for example, the state will remain locked at 1. This can be fixed
by first processing all rules and only then setting the new state.
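A sketch of the two-pass processing (structure and helper names are
simplified assumptions, not the exact patch):

    /* Pass 1: evaluate every rule against the current state and only
     * record the requested transition in s->new_state. */
    s->new_state = s->state;
    QLIST_FOREACH_SAFE(rule, &s->rules[event], next, next_rule) {
        process_rule(bs, rule, injected);   /* may update s->new_state */
    }
    /* Pass 2: commit the transition, so the second set-state rule still
     * sees the state that was current when the event fired. */
    s->state = s->new_state;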
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch adds support for error management to streaming.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The following behaviors are possible:
'report': The behavior is the same as in 1.1. An I/O error, whether
during a read or a write, completes the job immediately with an error
code.
'ignore': An I/O error, whether during a read or a write, is ignored.
For streaming, the job completes with an error and the backing file is
left in place. For mirroring, the sector is marked dirty again and
re-examined later.
'stop': The job is paused and the job iostatus is set to failed or
nospace, while the VM keeps running. This can only be specified if the
block device has rerror=stop and werror=stop or enospc.
'enospc': Behaves as 'stop' for ENOSPC errors and as 'report' for all
others.
In all cases, even for 'report', the I/O error is reported as a QMP
event BLOCK_JOB_ERROR, with the same arguments as BLOCK_IO_ERROR.
It is possible that while stopping the VM a BLOCK_IO_ERROR event will be
reported and will clobber the event from BLOCK_JOB_ERROR, or vice versa.
This is not really avoidable since stopping the VM completes all pending
I/O requests. In fact, it is already possible now that a series of
BLOCK_IO_ERROR events are reported with rerror=stop, because vm_stop
calls bdrv_drain_all and this can generate further errors.
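The mapping from the configured policy to the action actually taken can
be sketched like this (enum names are illustrative assumptions):

    switch (on_err) {
    case BLOCKDEV_ON_ERROR_ENOSPC:
        action = (error == ENOSPC) ? BDRV_ACTION_STOP : BDRV_ACTION_REPORT;
        break;
    case BLOCKDEV_ON_ERROR_STOP:
        action = BDRV_ACTION_STOP;    /* pause job, set iostatus, keep VM running */
        break;
    case BLOCKDEV_ON_ERROR_REPORT:
        action = BDRV_ACTION_REPORT;  /* complete the job with an error code */
        break;
    case BLOCKDEV_ON_ERROR_IGNORE:
        action = BDRV_ACTION_IGNORE;  /* skip or retry as described above */
        break;
    }
    /* A BLOCK_JOB_ERROR event is emitted in every case, including 'report'. */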
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Move the common part of IDE/SCSI/virtio error handling to the block
layer. The new function bdrv_error_action subsumes all three of
bdrv_emit_qmp_error_event, vm_stop, and bdrv_iostatus_set_err.
The same scheme will be used for errors in block jobs.
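A sketch of what the new helper could look like (the exact signature is
an assumption based on the description above):

    void bdrv_error_action(BlockDriverState *bs, BlockErrorAction action,
                           bool is_read, int error)
    {
        assert(error >= 0);
        bdrv_emit_qmp_error_event(bs, QEVENT_BLOCK_IO_ERROR, action, is_read);
        if (action == BDRV_ACTION_STOP) {
            /* Stop the VM and record the error in the device's iostatus. */
            vm_stop(RUN_STATE_IO_ERROR);
            bdrv_iostatus_set_err(bs, error);
        }
    }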
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Do this while we are touching this part of the code, before introducing
more uses of "int is_read".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This will let block-stream reuse the enum. Uses of the enum values are
renamed accordingly.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We want to remove knowledge of BLOCK_ERR_STOP_ENOSPC from drivers;
drivers should only be told whether to stop/report/ignore the error.
On the other hand, we want to keep using the nicer BlockErrorAction
name in the drivers. So rename the enums, while leaving aside the
names of the enum values for now.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
These check that a paused streaming job does not advance its offset.
Sometimes the new test fails; the map is different between the source
and the destination of the streaming because qemu-io does not always
pack adjacent clusters that have the same allocated/unallocated state.
However, this also happens with the existing test_stream testcase, and
is better fixed in qemu-io.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add QMP commands matching the functionality.
Paused jobs cannot be canceled without first resuming them. This
ensures that I/O errors are never missed by management. However, an
optional force argument can be specified to allow that.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Job pausing reuses the existing support for cancellable sleeps. A pause
happens at the next sleeping point and lasts until the coroutine is
re-entered explicitly. Cancellation was already doing a forced resume,
so implement it explicitly in terms of resume.
Paused jobs cannot be canceled without first resuming them. This ensures
that I/O errors are never missed by management.
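A minimal sketch of pause/resume on top of the cancellable sleep
(assuming paused, busy and co fields on BlockJob; details may differ
from the actual patch):

    void block_job_pause(BlockJob *job)
    {
        job->paused = true;    /* takes effect at the next sleeping point */
    }

    void block_job_resume(BlockJob *job)
    {
        job->paused = false;
        if (job->co && !job->busy) {
            qemu_coroutine_enter(job->co, NULL);    /* re-enter the parked job */
        }
    }

    /* In block_job_sleep_ns(), after waking up, stay parked while paused: */
    while (job->paused && !block_job_is_cancelled(job)) {
        qemu_coroutine_yield();
    }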
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Because pausing a job is asynchronous, we need to know whether it has
completed. This is described by the "busy" field of BlockJob; copy it
to BlockJobInfo.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Extract it out of the implementation of info block-jobs.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Do this in a separate commit before we move the functions to
blockjob.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The DeviceNotActive text is not a particularly good match; add a
separate text while keeping the same error class.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Derived from the streaming test cases (030), this adds the
following 9 tests:
1. For the following image chain, commit [mid] into [backing], and use
qemu-io to verify that [backing] has its original data as well as the
data from [mid]:
[backing] <-- [mid] <-- [test]
2. Verifies that 'block-commit' with the 'speed' parameter sets the
speed parameter, as reported by 'query-block-jobs'
3. Verifies that a bogus 'device' parameter to 'block-commit'
results in error
4-9: Appropriate error values returned for the following argument errors:
* top == base
* top is nonexistent
* base is nonexistent
* top == active layer (this is currently not supported)
* top and base arguments are reversed
* top argument is omitted
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The command for live block commit is added, which has the following
arguments:
device: the block device to perform the commit on (mandatory)
base: the base image to commit into; optional (if not specified,
it is the underlying original image)
top: the top image of the commit - all data from inside top down
to base will be committed into base (mandatory for now; see
note, below)
speed: maximum speed, in bytes/sec
Note: Eventually this command will support merging down the active layer,
but that code is not yet complete. If the active layer is passed
in as top, then an error will be returned. Once merging down the
active layer is supported, the 'top' argument may become optional,
and default to the active layer.
This is done as a block job, so upon completion a BLOCK_JOB_COMPLETED
event will be emitted.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This is a simple helper function that returns the base image of a given
image chain.
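A sketch of such a helper, assuming the backing_hd pointer that chains
BlockDriverStates together:

    /* Walk the backing-file chain down from bs and return the last image,
     * i.e. the base of the chain (bs itself if it has no backing file). */
    BlockDriverState *bdrv_find_base(BlockDriverState *bs)
    {
        while (bs && bs->backing_hd) {
            bs = bs->backing_hd;
        }
        return bs;
    }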
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds the live commit coroutine. This iteration focuses on
committing only below the active layer, not the active layer itself.
The behaviour is similar to block streaming; the sectors are walked
through, and anything that exists above 'base' is committed back down
into base. At the end, intermediate images are deleted, and the
chain stitched together. Images are restored to their original open
flags upon completion.
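The core of the loop might be sketched like this (function names and
the chunk size are assumptions; the real coroutine also handles
throttling, cancellation and the error policy):

    for (sector_num = 0; sector_num < end; sector_num += n) {
        /* Is anything between base (exclusive) and top (inclusive)
         * allocated in this range? */
        ret = bdrv_co_is_allocated_above(top, base, sector_num,
                                         COMMIT_BUF_SECTORS, &n);
        if (ret) {
            /* Copy the data down into the base image; qiov wraps a
             * preallocated bounce buffer. */
            bdrv_co_readv(top, sector_num, n, &qiov);
            bdrv_co_writev(base, sector_num, n, &qiov);
        }
        job->offset = sector_num * BDRV_SECTOR_SIZE;
    }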
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add bdrv_find_overlay(), and bdrv_drop_intermediate().
bdrv_find_overlay(): given 'bs' and the active (topmost) BDS of an image chain,
find the image that is the immediate top of 'bs'
bdrv_drop_intermediate():
Given 3 BDS (active, top, base), drop images above
base up to and including top, and set base to be the
backing file of top's overlay node.
E.g., this converts:
bottom <- base <- intermediate <- top <- active
to
bottom <- base <- active
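A sketch of bdrv_find_overlay(), walking down from the active layer
until it finds the node whose backing file is bs (assuming the
backing_hd chaining):

    BlockDriverState *bdrv_find_overlay(BlockDriverState *active,
                                        BlockDriverState *bs)
    {
        BlockDriverState *overlay = NULL;
        BlockDriverState *intermediate = active;

        while (intermediate && !overlay) {
            if (intermediate->backing_hd == bs) {
                overlay = intermediate;    /* the immediate top of bs */
            }
            intermediate = intermediate->backing_hd;
        }
        return overlay;
    }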
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch adds gluster as a new block backend in QEMU. This gives
QEMU the ability to boot VM images from gluster volumes. It is already
possible to boot from VM images on gluster volumes using a FUSE mount,
but this patchset provides the ability to boot VM images from gluster
volumes by bypassing the FUSE layer in gluster. This is made possible
by using libgfapi routines to perform I/O on gluster volumes directly.
A VM image on a gluster volume is specified like this:
file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
'gluster' is the protocol.
'transport' specifies the transport type used to connect to the gluster
management daemon (glusterd). Valid transport types are tcp, unix and
rdma. If a transport type isn't specified, tcp is assumed.
'server' specifies the server where the volume file specification for
the given volume resides. This can be a hostname, an IPv4 address or an
IPv6 address; IPv6 addresses must be enclosed in square brackets [ ].
If the transport type is 'unix', the 'server' field should not be
specified. Instead, the 'socket' field needs to be populated with the
path to the unix domain socket.
'port' is the port number on which glusterd is listening. This is
optional; if not specified, QEMU will send 0, which makes gluster use
the default port. If the transport type is unix, 'port' should not be
specified.
'volname' is the name of the gluster volume which contains the VM image.
'image' is the path to the actual VM image that resides on gluster volume.
Examples:
file=gluster://1.2.3.4/testvol/a.img
file=gluster+tcp://1.2.3.4/testvol/a.img
file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
file=gluster+rdma://1.2.3.4:24007/testvol/a.img
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
GlusterFS support in QEMU depends on libgfapi, libgfrpc and
libgfxdr provided by GlusterFS.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The AIO dispatch loop will call QLIST_REMOVE and g_free even if there
are other pending calls to qemu_aio_wait outside the current one.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add a new URI parsing library to QEMU. The code has been borrowed from
libxml2 and libvirt.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fix qemu_aio_wait() to ensure that registered aio handlers don't get
deleted when they are still active. This is ensured by maintaining the
right count of walking_handlers.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently, after a live snapshot of a drive, the image that has
been 'demoted' to be below the new active layer remains r/w.
This patch reopens it read-only.
Note that we do not check for error on the reopen(), because we
will not abort the snapshots if the reopen fails.
This patch depends on the bdrv_reopen() series.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When cancelling block migration, all in-flight requests of the block
migration must be completed before the data can be freed. This was
visible as failing assertions and segfaults.
Reported-by: Peter Lieven <pl@dlhnet.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Using the virtqueue_avail_bytes() function had an unnecessarily
crippling effect on the number of bytes needed by the guest as reported
to the chardev layer in the can_read() callback.
Using the new virtqueue_get_avail_bytes() function will let us advertise
the exact number of bytes we can send to the guest.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The current virtqueue_avail_bytes() is oddly named, and checks if a
particular number of bytes are available in a vq. A better API is to
fetch the number of bytes available in the vq, and let the caller do
what's interesting with the numbers.
Introduce virtqueue_get_avail_bytes(), which returns the number of
bytes available for both in and out buffers. virtqueue_avail_bytes() is
made a wrapper over this new function.
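With that, the old predicate becomes a thin wrapper, roughly like this
(a sketch; the argument types and the exact comparison are assumptions):

    void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
                                   unsigned int *out_bytes);

    int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
                              unsigned int out_bytes)
    {
        unsigned int in_total, out_total;

        virtqueue_get_avail_bytes(vq, &in_total, &out_total);
        /* Keep the old semantics: report whether the requested number of
         * in and out bytes is available. */
        return in_bytes <= in_total && out_bytes <= out_total;
    }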
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The virtqueue_avail_bytes() function counts bytes in an int. Use an
unsigned int instead.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The offset of the accessed buffer is calculated using iov_length, so it
can exceed the accessed length. If that happens, the len - offset math
wraps around and the size becomes wrong. Since the real value is 0,
this is harmless, but still unnecessary.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* 'arm-devs.for-upstream' of git://git.linaro.org/people/pmaydell/qemu-arm:
Versatile Express: Add modelling of NOR flash
Versatile Express: Fix NOR flash 0 address and remove flash alias
hw/armv7m_nvic: Correctly register GIC region when setting up NVIC
pl190: fix read of VECTADDR
The blank lines inside the single dump make it difficult for the
eye to pick out the block. Worse, with interior newlines but no blank
line following, the PSW line appears to belong to the next dump block.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This is already handled generically in cpu_exec.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
In three places in the interrupt code we did not honor the mask.
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
For all targets that currently call tcg_gen_debug_insn_start,
add CPU_LOG_TB_OP_OPT to the condition that gates it.
This is useful for comparing optimization dumps, when the
pre-optimization dump is merely noise.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Support for helper functions with 5 arguments was missing
in the code generator and in the interpreter.
There is no need to pass the constant TCG_AREG0 from the
code generator to the interpreter. Remove that code for
the INDEX_op_qemu_st* opcodes.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>