Commit 0f842f8a24 replaced GETPC_EXT(), which was derived from GETPC(), with
GETRA_EXT() without fixing cputlb.c. A later patch replaced GETRA_EXT() with
GETRA() in exec/softmmu_template.h, which is included in cputlb.c.
The TCG interpreter failed because the values returned by GETRA() were no
longer explicitly set to 0. The redefinition of GETRA() introduced here
fixes this.
In addition, GETPC_ADJ, which is also used in exec/softmmu_template.h, is
set to 0. Both changes reduce the compiled code size of cputlb.c by more
than 100 bytes, so normal TCG without the interpreter also benefits from the
smaller and slightly faster code.
Cc: qemu-stable@nongnu.org
Reported-by: Giovanni Mascellani <gio@debian.org>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We expose a generic helper "tcg_gen_extr_i64_tl" for 64-bit targets, but for
32-bit targets the same macro is a misnomer and refers to an invalid function
name.
Fix up the definition to point to the correct internal helper names instead.
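One plausible form of the fix in tcg/tcg-op.h, assuming the valid internal
helper for 32-bit targets is tcg_gen_extr_i64_i32 (a sketch, not the exact
hunk):

    #if TARGET_LONG_BITS == 32
    #define tcg_gen_extr_i64_tl tcg_gen_extr_i64_i32
    #endif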
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since all backends have been converted, remove the compatibility code.
Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The first non-register argument isn't placed at offset 0.
Cc: qemu-stable@nongnu.org
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The exit code 63 (check not supported by image format) was not even
documented in the comment above the check command in the source code;
add it, as it does indeed seem useful.
Also, document all of check's exit codes in the manpage.
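For reference, the full set of exit codes documented for the check command
(the exact comment wording in qemu-img.c is an assumption):

    /*
     *  0 - check completed, the image is (now) consistent
     *  1 - check not completed because of internal errors
     *  2 - check completed, image is corrupted
     *  3 - check completed, image has leaked clusters, but is not corrupted
     * 63 - checks are not supported by the image format
     */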
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reported-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The function init_blk_migration should be called before set_dirty_tracking,
for the reasons below.
If we want to track dirty blocks via dirty_maps on a BlockDriverState
when doing live block migration, its corresponding 'BlkMigDevState' must be
added to block_mig_state.bmds_list first for subsequent processing.
Otherwise set_dirty_tracking will do nothing on an empty list instead of
allocating dirty_bitmaps for each device, and bdrv_get_dirty_count will then
access bmds->dirty_maps directly and trigger a segfault.
If set_dirty_tracking fails, qemu_savevm_state_cancel will handle
the cleanup of init_blk_migration automatically.
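A simplified sketch of the intended ordering in block_save_setup() (the
locking and the later save steps are omitted):

    static int block_save_setup(QEMUFile *f, void *opaque)
    {
        int ret;

        init_blk_migration(f);     /* populates block_mig_state.bmds_list */

        /* needs the BlkMigDevState entries created above */
        ret = set_dirty_tracking();
        if (ret) {
            return ret;            /* qemu_savevm_state_cancel() cleans up */
        }

        return 0;
    }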
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: chai wen <chaiw.fnst@cn.fujitsu.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The block_set_io_throttle QMP and HMP commands modify I/O throttling
limits for block devices.
Acquire the BlockDriverState's AioContext to protect against race
conditions with an IOThread that is running I/O for this device.
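A sketch of the locking pattern, as it might appear in
qmp_block_set_io_throttle() (device lookup and validation of 'cfg' are
omitted):

    AioContext *aio_context = bdrv_get_aio_context(bs);

    aio_context_acquire(aio_context);
    bdrv_set_io_limits(bs, &cfg);   /* apply the new throttling configuration */
    aio_context_release(aio_context);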
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Add a test case that checks the timer is really removed/added by the
detach/attach functions.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Block I/O throttling uses timers and currently always adds them to the
main loop. Throttling will break if bdrv_set_aio_context() is used to
move a BlockDriverState to a different AioContext.
This patch adds throttle_detach/attach_aio_context() interfaces to the
throttling code and uses them to move the timers to the new AioContext.
Note that bdrv_set_aio_context() already drains all requests so we're
sure no throttled requests are pending.
The test cases need to be updated since the throttle_init() interface
has changed.
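A hedged sketch of the new interfaces (the exact prototypes in
include/qemu/throttle.h may differ; throttle_init() now also takes the
AioContext in which to create its timers):

    void throttle_detach_aio_context(ThrottleState *ts);
    void throttle_attach_aio_context(ThrottleState *ts,
                                     AioContext *new_context);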
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
The common logic for processing a SCSI request in a VirtQueueElement is
extracted into a function so it can be shared with dataplane.
This makes VirtIOBlockReq.scsi unused, so drop it.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Dataplane now uses the block layer. Protect bdrv_set_enable_write_cache with
aio_context_acquire and aio_context_release, so we can enable config-wce
and allow the guest to modify the write cache online.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
block_int.h is for the block layer and block drivers; other code shouldn't
include it. But, like bdrv_set_aio_context, bdrv_get_aio_context
should also be accessible from outside the block layer.
Move it.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
virtio-blk data-plane now uses the QEMU block layer for I/O. We do not
need raw_get_aio_fd() anymore. It was a layering violation anyway, so
let's get rid of it.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Stop using the raw-posix file descriptor for synchronous
qemu_fdatasync(). Use bdrv_aio_flush() instead and drop the
VirtIOBlockDataPlane->fd field.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Stop using a custom Linux AIO request queue from ioq.h and instead use
the QEMU block layer for I/O.
This patch adjusts the VirtIOBlockRequest struct with fields needed for
bdrv_aio_readv()/bdrv_aio_writev(). ioq.h used struct iovec and struct
iocb, which we don't need directly anymore.
Modify dataplane start/stop to set the AioContext on the
BlockDriverState. We also no longer need to get the raw-posix file
descriptor. This means image formats are now supported with dataplane!
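A hedged sketch of a request submission through the block layer; only
bdrv_aio_readv()/bdrv_aio_writev() come from this patch, the request fields
and callback name are assumptions:

    static void submit_rdwr(BlockDriverState *bs, VirtIOBlockRequest *req,
                            int64_t sector_num, bool is_write)
    {
        int nb_sectors = req->qiov.size / BDRV_SECTOR_SIZE;

        if (is_write) {
            bdrv_aio_writev(bs, sector_num, &req->qiov, nb_sectors,
                            complete_rdwr, req);
        } else {
            bdrv_aio_readv(bs, sector_num, &req->qiov, nb_sectors,
                           complete_rdwr, req);
        }
    }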
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Implement .bdrv_detach/attach_aio_context() interfaces to propagate
detach/attach to BDRVVmdkState->extents[].file. The block layer takes
care of ->file and ->backing_hd but doesn't know about our extents
BlockDriverStates, which are also part of the graph.
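A sketch of the two hooks; field names follow block/vmdk.c but details may
differ:

    static void vmdk_detach_aio_context(BlockDriverState *bs)
    {
        BDRVVmdkState *s = bs->opaque;
        int i;

        for (i = 0; i < s->num_extents; i++) {
            bdrv_detach_aio_context(s->extents[i].file);
        }
    }

    static void vmdk_attach_aio_context(BlockDriverState *bs,
                                        AioContext *new_context)
    {
        BDRVVmdkState *s = bs->opaque;
        int i;

        for (i = 0; i < s->num_extents; i++) {
            bdrv_attach_aio_context(s->extents[i].file, new_context);
        }
    }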
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Use
bdrv_get_aio_context() to register fd handlers in the right AioContext
for this BlockDriverState.
The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
are not needed since no fd handlers, timers, or BHs stay registered when
requests have been drained.
For now this doesn't make much difference but will allow ssh to work in
IOThread instances in the future.
Acked-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_aio_set_fd_handler() to aio_set_fd_handler() and qemu_aio_wait() to
aio_poll().
The .bdrv_detach/attach_aio_context() interfaces also need to be
implemented to move the socket fd handler from the old to the new
AioContext.
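A sketch of the socket fd hand-off; the state fields and the read callback
name are assumptions:

    static void sd_detach_aio_context(BlockDriverState *bs)
    {
        BDRVSheepdogState *s = bs->opaque;

        aio_set_fd_handler(s->aio_context, s->fd, NULL, NULL, NULL);
    }

    static void sd_attach_aio_context(BlockDriverState *bs,
                                      AioContext *new_context)
    {
        BDRVSheepdogState *s = bs->opaque;

        s->aio_context = new_context;
        aio_set_fd_handler(new_context, s->fd, co_read_response, NULL, s);
    }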
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Acked-by: Liu Yuan <namei.unix@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll().
The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
are not needed since no fd handlers, timers, or BHs stay registered when
requests have been drained.
Cc: Josh Durgin <josh.durgin@inktank.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Drop the assumption that we're using the main AioContext for raw-win32.
Convert the aio-win32 code to support detach/attach and replace
qemu_aio_wait() with aio_poll().
The .bdrv_detach/attach_aio_context() interfaces move the aio-win32
event notifier from the old to the new AioContext.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Each QEMUWin32AIOState event notifier is associated with an AioContext.
Since BlockDriverState instances can use different AioContexts we cannot
continue to use a global QEMUWin32AIOState.
Let each BDRVRawState have its own QEMUWin32AIOState and free it when
BDRVRawState is closed.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Hot unplugging -drive aio=native,file=test.img,format=raw images leaves
the Linux AIO event notifier and struct qemu_laio_state allocated.
Luckily nothing will use the event notifier after the BlockDriverState
has been closed so the handler function is never called.
It's still worth fixing this resource leak.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext for Linux AIO.
Convert the Linux AIO event notifier to use aio_set_event_notifier().
The .bdrv_detach/attach_aio_context() interfaces also need to be
implemented to move the event notifier handler from the old to the new
AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Implement .bdrv_detach/attach_aio_context() interfaces to propagate
detach/attach to BDRVQuorumState->bs[] children. The block layer takes
care of ->file and ->backing_hd but doesn't know about our ->bs[]
BlockDriverStates, which are also part of the graph.
Reviewed-by: Benoît Canet <benoit.canet@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll() so we're
using the BlockDriverState's AioContext.
Implement .bdrv_detach/attach_aio_context() interfaces to move the
QED_F_NEED_CHECK timer from the old AioContext to the new one.
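A sketch of the timer move; helper and field names follow block/qed.c but may
differ in detail:

    static void bdrv_qed_detach_aio_context(BlockDriverState *bs)
    {
        BDRVQEDState *s = bs->opaque;

        qed_cancel_need_check_timer(s);
        timer_free(s->need_check_timer);
        s->need_check_timer = NULL;
    }

    static void bdrv_qed_attach_aio_context(BlockDriverState *bs,
                                            AioContext *new_context)
    {
        BDRVQEDState *s = bs->opaque;

        s->need_check_timer = aio_timer_new(new_context, QEMU_CLOCK_VIRTUAL,
                                            SCALE_NS, qed_need_check_timer_cb,
                                            s);
    }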
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. The following
functions need to be converted:
* qemu_bh_new() -> aio_bh_new()
* qemu_aio_set_fd_handler() -> aio_set_fd_handler()
* qemu_aio_wait() -> aio_poll()
The .bdrv_detach/attach_aio_context() interfaces also need to be
implemented to move the fd handler from the old to the new AioContext.
Cc: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Drop the assumption that we're using the main AioContext. Convert
qemu_aio_set_fd_handler() calls to aio_set_fd_handler().
The .bdrv_detach/attach_aio_context() interfaces also need to be
implemented to move the socket fd handler from the old to the new
AioContext.
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_aio_set_fd_handler() to aio_set_fd_handler() and timer_new_ms() to
aio_timer_new().
The .bdrv_detach/attach_aio_context() interfaces also need to be
implemented to move the fd handler and timer from the old to the new AioContext.
Cc: Peter Lieven <pl@kamp.de>
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Drop the assumption that we're using the main AioContext. Use
aio_bh_new() instead of qemu_bh_new().
The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
are not needed since no fd handlers, timers, or BHs stay registered when
requests have been drained.
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The curl block driver uses fd handlers, timers, and BHs. The fd
handlers and timers are managed on behalf of libcurl, which controls
them using callback functions that the block driver implements.
The simplest way to implement .bdrv_detach/attach_aio_context() is to
clean up libcurl in the old event loop and initialize it again in the
new event loop. We do not need to keep track of anything since there
are no pending requests when the AioContext is changed.
Also make sure to use aio_set_fd_handler() instead of
qemu_aio_set_fd_handler() and aio_bh_new() instead of qemu_bh_new() so
the current AioContext is passed in.
Cc: Alexander Graf <agraf@suse.de>
Cc: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_bh_new() to aio_bh_new() and qemu_aio_wait() to aio_poll() so we
use the BlockDriverState's AioContext.
Implement .bdrv_detach/attach_aio_context() interfaces to propagate
detach/attach to BDRVBlkverifyState->test_file. The block layer takes
care of ->file and ->backing_hd but doesn't know about our ->test_file
BlockDriverState, which is also part of the graph.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_bh_new() to aio_bh_new() so we use the BlockDriverState's
AioContext.
The .bdrv_detach_aio_context() and .bdrv_attach_aio_context() interfaces
are not needed since no fd handlers, timers, or BHs stay registered when
requests have been drained.
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Up until now all BlockDriverState instances have used the QEMU main loop
for fd handlers, timers, and BHs. This is not scalable on SMP guests
and hosts so we need to move to a model with multiple event loops on
different host CPUs.
bdrv_set_aio_context() assigns the AioContext event loop to use for a
particular BlockDriverState. It first detaches the entire
BlockDriverState graph from the current AioContext and then attaches to
the new AioContext.
This function will be used by virtio-blk data-plane to assign a
BlockDriverState to its IOThread AioContext. Make
bdrv_set_aio_context() public since data-plane should not include
block_int.h.
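A minimal sketch of the hand-off (locking of the new context is simplified;
the detach/attach helpers walk ->file and ->backing_hd):

    void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
    {
        bdrv_drain_all();          /* ensure there are no in-flight requests */

        bdrv_detach_aio_context(bs);

        /* We run in the old AioContext, so take the new one in case it is
         * driven by a different thread. */
        aio_context_acquire(new_context);
        bdrv_attach_aio_context(bs, new_context);
        aio_context_release(new_context);
    }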
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Modify bdrv_drain_all() to take into account that BlockDriverState
instances may be running in different AioContexts.
This patch changes the implementation of bdrv_drain_all() while
preserving the semantics. Previously kicking throttled requests and
checking for pending requests were done across all BlockDriverState
instances in sequence. Now we process each BlockDriverState in turn,
making sure to acquire and release its AioContext.
This prevents race conditions between the thread executing
bdrv_drain_all() and the thread running the AioContext.
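A sketch of the per-BlockDriverState loop; the helper names for kicking
throttled requests and checking pending requests are assumptions:

    void bdrv_drain_all(void)
    {
        bool busy = true;          /* run at least once so pending BHs execute */
        BlockDriverState *bs;

        while (busy) {
            busy = false;

            QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
                AioContext *aio_context = bdrv_get_aio_context(bs);
                bool bs_busy;

                aio_context_acquire(aio_context);
                bdrv_start_throttled_reqs(bs);
                bs_busy = bdrv_requests_pending(bs);
                bs_busy |= aio_poll(aio_context, bs_busy);
                aio_context_release(aio_context);

                busy |= bs_busy;
            }
        }
    }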
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_close_all(), bdrv_commit_all(), bdrv_flush_all(),
bdrv_invalidate_cache_all(), and bdrv_clear_incoming_migration_all() are
called by main loop code and touch all BlockDriverState instances.
Some BlockDriverState instances may be running in another AioContext.
Make sure to acquire the AioContext before closing the BlockDriverState.
This will protect against race conditions once virtio-blk data-plane is
using the BlockDriverState from another AioContext event loop.
Note that this patch does not convert bdrv_drain_all() yet since that
conversion is non-trivial.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Drop the assumption that we're using the main AioContext. Convert
qemu_aio_wait() to aio_poll() and qemu_bh_new() to aio_bh_new() so the
BlockDriverState AioContext is used.
Note there is still one qemu_aio_wait() left in bdrv_create() but we do
not have a BlockDriverState there and only main loop code invokes this
function.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
qemu_bh_schedule() is supposed to be thread-safe at least the first time
it is called. Unfortunately this is not quite true:
    bh->scheduled = 1;
    aio_notify(bh->ctx);
Since another thread may run the BH callback once it has been scheduled,
there is a race condition if the callback frees the BH before
aio_notify(bh->ctx) has a chance to run.
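A sketch of the fix: load bh->ctx into a local before publishing
bh->scheduled, so a callback that frees the BH cannot race with the
notification (field names follow async.c but are partly assumed):

    void qemu_bh_schedule(QEMUBH *bh)
    {
        AioContext *ctx;

        if (bh->scheduled) {
            return;
        }
        ctx = bh->ctx;
        bh->idle = 0;
        /* Make sure ctx is loaded and any writes needed by the callback are
         * done before scheduled is set and the callback can run. */
        smp_mb();
        bh->scheduled = 1;
        aio_notify(ctx);
    }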
Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Stefan Priebe <s.priebe@profihost.ag>
Since Linux kernel 3.5, KVM has documented eax for leaf 0x40000000
to be KVM_CPUID_FEATURES:
57c22e5f35
But QEMU still sets it to 0. It would be better to make QEMU
and KVM consistent. This patch fixes that.
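A hedged sketch of the change in kvm_arch_init_vcpu() (surrounding variable
names are assumptions):

    memcpy(signature, "KVMKVMKVM\0\0\0", 12);
    c = &cpuid_data.entries[cpuid_i++];
    c->function = KVM_CPUID_SIGNATURE | kvm_base;
    c->eax = KVM_CPUID_FEATURES | kvm_base;   /* previously hard-coded to 0 */
    c->ebx = signature[0];
    c->ecx = signature[1];
    c->edx = signature[2];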
Signed-off-by: Jidong Xiao <jidong.xiao@gmail.com>
[Include kvm_base in the value. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When using the autoseat feature of systemd/logind we'll only need
a single udev rule for the pci bridge, which simplifies the guest
setup a bit.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
The token handle should be closed on all code paths, so move
CloseHandle(token) to the "out" label.
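A short sketch of the pattern (the surrounding qga code is simplified):

    HANDLE token = NULL;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token)) {
        goto out;
    }
    /* ... adjust privileges; any failure also jumps to out ... */
out:
    if (token != NULL) {
        CloseHandle(token);
    }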
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>