* riku/linux-user-for-upstream: (21 commits)
linux-user: Handle compressed ISA encodings when processing MIPS exceptions
linux-user: Unlock mmap_lock when resuming guest from page_unprotect
linux-user: Reset copied CPUs in cpu_copy() always
linux-user: Fix epoll on ARM hosts
linux-user: fix segmentation fault passing with h2g(x) != x
linux-user: Fix pipe syscall return for SPARC
linux-user: Fix target_stat and target_stat64 for OpenRISC
linux-user: Avoid conditional cpu_reset()
configure: Make NPTL non-optional
linux-user: Enable NPTL for x86-64
linux-user: Add i386 TLS setter
linux-user: Clean up handling of clone() argument order
linux-user: Add missing 'break' in i386 get_thread_area syscall
linux-user: Enable NPTL for m68k
linux-user: Enable NPTL for SPARC targets
linux-user: Enable NPTL for OpenRISC
linux-user: Move includes of target-specific headers to end of qemu.h
configure: Enable threading for unicore32-linux-user
configure: Enable threading on all ppc and mips linux-user targets
configure: Don't say target_nptl="no" if there is no linux-user target
...
Conflicts:
linux-user/main.c
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
It is not used anymore.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Message-id: 1374501278-31549-15-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
# By Michael R. Hines (8) and others
# Via Juan Quintela
* quintela/migration.next:
migration: add autoconvergence documentation
Fix real mode guest segments dpl value in savevm
Fix real mode guest migration
rdma: account for the time spent in MIG_STATE_SETUP through QMP
rdma: introduce MIG_STATE_NONE and change MIG_STATE_SETUP state transition
rdma: allow state transitions between other states besides ACTIVE
rdma: send pc.ram
rdma: core logic
rdma: introduce ram_handle_compressed()
rdma: bugfix: ram_control_save_page()
rdma: update documentation to reflect new unpin support
Message-id: 1374590725-14144-1-git-send-email-quintela@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When forwarding a segmentation fault into the guest process, we were passing
the host's address directly into the guest process's signal descriptor.
That obviously confused the guest process, since it didn't know what to make
of the (usually 32-bit truncated) address. Passing in h2g(address) makes the
guest process a lot happier.
To make the code more obvious, introduce an h2g_nocheck() macro that does the
same as h2g(), but allows us to convert addresses that may lie outside of the
guest-mapped range into the guest's view of the address space.
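As a rough illustration, the relationship between the two macros could look
like this; a minimal sketch, using illustrative stand-ins for the linux-user
guest_base and abi_ulong definitions:

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative stand-ins: the guest's address space starts at
     * guest_base within the host, and guest addresses are 32-bit. */
    typedef uint32_t abi_ulong;
    static uintptr_t guest_base;

    /* h2g() insists the host address maps into the guest range;
     * h2g_nocheck() performs only the arithmetic translation. */
    #define h2g_valid(x)   ((uintptr_t)(x) - guest_base <= UINT32_MAX)
    #define h2g_nocheck(x) ((abi_ulong)((uintptr_t)(x) - guest_base))
    #define h2g(x)         (assert(h2g_valid(x)), h2g_nocheck(x))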
This fixes java running in arm-linux-user for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Using the previous patches, we're now able to timestamp the SETUP
state. Once we have this time, let the user know about it in the
schema.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Code that does need to be visible is kept well contained inside this
file, which is the only new file added by the entire patch.
This file includes the entire protocol and interfaces
required to perform RDMA migration.
Also, the configure and Makefile modifications to link
this file are included.
Full documentation is in docs/rdma.txt
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This gives RDMA shared access to madvise() on the destination side
when an entire chunk is found to be zero.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Make inline target_memory_rw_debug() always available and change its
argument to CPUState. Let it check if CPUClass::memory_rw_debug provides
a specialized callback and fall back to cpu_memory_rw_debug() otherwise.
The only overriding implementation is for 32-bit sparc.
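The wrapper might look roughly like this; a minimal sketch of the described
fallback behaviour, not the exact code:

    static inline int target_memory_rw_debug(CPUState *cpu, target_ulong addr,
                                             uint8_t *buf, int len,
                                             bool is_write)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        /* Prefer a per-CPU specialization (e.g. 32-bit sparc), otherwise
         * fall back to the generic debug accessor. */
        if (cc->memory_rw_debug) {
            return cc->memory_rw_debug(cpu, addr, buf, len, is_write);
        }
        return cpu_memory_rw_debug(cpu, addr, buf, len, is_write);
    }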
This prepares for changing GDBState::g_cpu to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Change breakpoint_invalidate() argument to CPUState alongside.
Since all targets now assign a softmmu-only field, we can drop helpers
cpu_class_set_{do_unassigned_access,vmsd}() and device_class_set_vmsd().
Prepares for changing cpu_memory_rw_debug() argument to CPUState.
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Callback implementations were specific to arm and m68k, so they can easily
cast to ARMCPU and M68kCPU respectively.
Prepares for changing GDBState::c_cpu to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
CPUArchState is no longer directly used since converting CPU loops to
CPUState.
Prepares for changing GDBState::c_cpu to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Prepares for changing cpu_single_step() argument to CPUState.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Where no extra implementation is needed, fall back to CPUClass::set_pc().
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
This moves setting the Program Counter from gdbstub into target code.
Use vaddr type as upper-bound replacement for target_ulong.
Signed-off-by: Andreas Färber <afaerber@suse.de>
vaddr is to target_ulong what uintmax_t is to unsigned int.
Its purpose is to allow turning per-target functions with target_ulong
arguments into CPUClass hooks.
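As a sketch of the idea (the exact definition here is an assumption), a type
wide enough for any target's target_ulong lets per-target prototypes be
written once in common code:

    /* Wide enough for any target's guest virtual addresses. */
    typedef uint64_t vaddr;

    /* A per-target operation can then become a common CPUClass hook: */
    void (*set_pc)(CPUState *cpu, vaddr value);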
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Let scsi_bus_legacy_add_drive() and scsi_bus_legacy_handle_cmdline()
return an Error**. Prepare qdev initfns for QOM realize error model.
Signed-off-by: Andreas Färber <afaerber@suse.de>
And remove variables if possible.
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
[AF: Converted remaining access and renamed to parent_obj]
Signed-off-by: Andreas Färber <afaerber@suse.de>
A common operation in instruction decoding is to take a field
from an instruction that represents a signed integer in some
arbitrary number of bits, and sign extend it into a C signed
integer type for manipulation. Provide new functions sextract32()
and sextract64() which perform this operation; they are like
the existing extract32() and extract64() except that the field
is sign-extended into the returned result.
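A minimal sketch of the intended semantics, mirroring extract32()'s
(value, start, length) argument order; the implementation details are
illustrative:

    #include <assert.h>
    #include <stdint.h>

    /* Extract 'length' bits starting at bit 'start' and sign-extend the
     * result: shift the field to the top of the word, then use an
     * arithmetic right shift to replicate its sign bit downwards. */
    static inline int32_t sextract32(uint32_t value, int start, int length)
    {
        return ((int32_t)(value << (32 - length - start))) >> (32 - length);
    }

    int main(void)
    {
        /* A 12-bit signed immediate in bits [11:0]: 0xFFF decodes to -1. */
        assert(sextract32(0x00000FFF, 0, 12) == -1);
        /* A 4-bit field in bits [7:4]: the value 7 stays positive. */
        assert(sextract32(0x00000070, 4, 4) == 7);
        return 0;
    }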
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1372419632-5521-2-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Now all linux-user targets support building with NPTL, we can make it
mandatory. This is a good idea because:
* NPTL is no longer new and experimental; it is completely standard
* in practice, linux-user without NPTL is nearly useless for
binaries built against non-ancient glibc
* it allows us to delete the rather untested code for handling
the non-NPTL configuration
Note that this patch leaves the CONFIG_USE_NPTL ifdefs in the
bsd-user codebase alone. This makes no change for bsd-user, since
our configure test for NPTL had a "#include <linux/futex.h>"
which means bsd-user would never have been compiled with
CONFIG_USE_NPTL defined, and it still is not.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Merge remote-tracking branch 'pmaydell/tags/pull-arm-devs-20130722' into staging
arm-devs queue
# gpg: Signature made Mon 22 Jul 2013 06:38:52 AM CDT using RSA key ID 14360CDE
# gpg: Can't check signature: public key not found
# By Peter Maydell (8) and Soren Brinkmann (2)
# Via Peter Maydell
* pmaydell/tags/pull-arm-devs-20130722:
hw/arm: Use 'load_ramdisk()' for loading ramdisks w/ U-Boot header
hw/loader: Support ramdisk with u-boot header
vexpress: Add virtio-mmio transports
vexpress: Make VEDBoardInfo extend arm_boot_info
arm/boot: Allow boards to modify the FDT blob
virtio: Implement MMIO based virtio transport
virtio: Support transports which can specify the vring alignment
virtio: Add support for guest setting of queue size
arm/boot: Use qemu_devtree_setprop_sized_cells()
device_tree: Add qemu_devtree_setprop_sized_cells() utility functions
Message-id: 1374493427-3254-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Introduce 'load_ramdisk()', which can load "normal" ramdisks as well as
ramdisks with a u-boot header.
To enable this and share code, 'load_uimage()' is refactored to
accommodate the additional use case.
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1373323202-17083-2-git-send-email-soren.brinkmann@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a callback hook in arm_boot_info to allow board models to
modify the device tree blob if they need to. (The major expected
use case is to add virtio-mmio nodes for virtio-mmio transports
that exist in QEMU but not in the hardware.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1373977512-28932-7-git-send-email-peter.maydell@linaro.org
Support virtio transports which can specify the vring alignment
(i.e. where the guest communicates this to the host) by providing
a new virtio_queue_set_align() function. (The default alignment
remains as before.)
Transports which wish to make use of this must set the
has_variable_vring_alignment field in their VirtioBusClass
struct to true; they can then change the alignment via
virtio_queue_set_align().
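For a transport opting in, the shape of the change might be roughly as
follows; a sketch based on the description above, with the surrounding names
being assumptions:

    /* In the transport's bus class_init: advertise that the guest may
     * choose the vring alignment. */
    VirtioBusClass *k = VIRTIO_BUS_CLASS(klass);
    k->has_variable_vring_alignment = true;

    /* Later, when the guest programs its chosen alignment for queue n: */
    virtio_queue_set_align(vdev, n, guest_align);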
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1373977512-28932-5-git-send-email-peter.maydell@linaro.org
The MMIO virtio transport spec allows the guest to tell the host how
large the queue size is. Add a virtio_queue_set_num() function which
implements this in the QEMU common virtio support code.
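In an MMIO transport this would be driven from the register-write path,
roughly like the sketch below (register and field names are illustrative):

    /* Guest writes its chosen ring size for the currently selected queue. */
    case VIRTIO_MMIO_QUEUE_NUM:              /* illustrative register name */
        virtio_queue_set_num(vdev, vdev->queue_sel, value);
        break;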
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1373977512-28932-4-git-send-email-peter.maydell@linaro.org
We already have a qemu_devtree_setprop_cells() which sets a dtb
property to an array of cells whose values are specified by varargs.
However for the fairly common case of setting a property to a list
of addresses or of address,size pairs the number of cells used by
each element in the list depends on the parent's #address-cells
and #size-cells properties. To make this easier we provide an analogous
qemu_devtree_setprop_sized_cells() macro which allows the number
of cells used by each element to be specified. This is implemented
using an underlying qemu_devtree_setprop_sized_cells_from_array()
function which takes the values and sizes as an array; this may
also be directly useful for cases where the cell contents are
constructed programmatically.
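A typical call might look like the following sketch, e.g. describing a memory
node whose address and size cell counts come from the parent's
#address-cells/#size-cells (variable names are illustrative):

    rc = qemu_devtree_setprop_sized_cells(fdt, "/memory", "reg",
                                          acells, mem_base,
                                          scells, mem_size);
    if (rc < 0) {
        fprintf(stderr, "couldn't set /memory/reg\n");
    }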
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: 1373977512-28932-2-git-send-email-peter.maydell@linaro.org
This patch adds an efficient encoding for zero blocks by
adding a new flag indicating that a block is completely zero.
Additionally, bdrv_write_zeroes() is used at the destination
to write these zeroes efficiently. Depending on the implementation,
this avoids fully provisioning the destination target.
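Conceptually the sender-side check is simple; a hedged sketch (the flag value
and helper below are hypothetical, not the ones used by the patch):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define BLK_MIG_FLAG_ZERO_BLOCK 0x08   /* hypothetical flag value */

    /* Return true if the block contains only zero bytes. */
    static bool blk_is_zero(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (buf[i]) {
                return false;
            }
        }
        return true;
    }

    /* Source: if blk_is_zero() holds, send only the flag and skip the
     * payload. Destination: a flagged block is materialized with
     * bdrv_write_zeroes() instead of writing a buffer of zeroes. */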
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
BHs will be used outside the big lock, so introduce a lock to protect
the writers, i.e. the BH adders and deleters, from one another. The lock
only affects the writers; a BH's callback does not take this extra lock.
Note that for the same AioContext, aio_bh_poll() cannot run in
parallel yet.
Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Load the virtio.c state into vring.c when we start dataplane mode and
vice versa when stopping dataplane mode. This patch makes it possible
to start and stop dataplane any time while the guest is running.
This will eventually allow us to go back to QEMU main loop for
bdrv_drain_all() and live migration. In the meantime, this patch makes
the dataplane lifecycle more robust but should make no visible
difference. It may be useful in the virtio-net dataplane effort.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Merge remote-tracking branch 'mst/tags/for_anthony' into staging
pci,net,pc enhancements
This includes some fixes and enhancements that accumulated in my tree:
pci fixes by dkoch, virtio-net enhancements by akong and mst,
and a fix for xen pc by mst.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Wed 17 Jul 2013 04:44:45 AM CDT using RSA key ID D28D5469
# gpg: Can't check signature: public key not found
# By Don Koch (2) and others
# Via Michael S. Tsirkin
* mst/tags/for_anthony:
pc: don't access fw cfg if NULL
virtio-net: add feature bit for any header s/g
net: add support of mac-programming over macvtap in QEMU side
pci: fix BRDIGE typo
pci-bridge: update mappings for migration/restore
Message-id: 1374054430-21966-1-git-send-email-mst@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
# By Chegu Vinod
# Via Juan Quintela
* quintela/migration.next:
Force auto-convergence of live migration
Add 'auto-converge' migration capability
Introduce async_run_on_cpu()
Message-id: 1373664508-5404-1-git-send-email-quintela@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Old QEMU versions required that the 1st s/g entry be the header.
Since QEMU 1.5, the patchset titled "virtio-net: iovec handling cleanup"
has removed this limitation, but a feature bit is needed so guests know
it's safe to lay out the header differently.
This patch applies on top and adds such a feature bit to QEMU.
It is set by default for virtio-net.
Keeping the virtio net header inline with the data is beneficial
for latency and small-packet bandwidth; guest driver
code utilizing this feature has been acked but missed 3.11
by a narrow margin, so it's pending for 3.12.
This feature bit is cleared by default when compatibility with old
machine types is requested.
Other performance-sensitive devices (blk and scsi)
don't yet support arbitrary s/g layouts, so
we only set this bit for virtio-net for now.
There are plans to allow arbitrary layouts there, but
no code has been posted yet.
Cc: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Currently a macvtap-based macvlan device works in promiscuous
mode; we want to implement MAC programming over macvtap through
libvirt for better performance.
Design:
QEMU notifies libvirt when the rx-filter config is changed in the guest;
libvirt then queries the rx-filter information with a monitor command
and syncs the change to the macvtap device. The relevant rx-filter config
of the NIC contains the main MAC, rx-mode items and the VLAN table.
This patch adds a QMP event to notify management of rx-filter change,
and adds a monitor command for management to query rx-filter
information.
Test:
Repeatedly add/remove VLANs and change the MAC address of VLAN
interfaces in the guest with a loop script.
Result:
The events flood the QMP client (management), and management spends
too many resources processing them.
The event-throttle API (with the rate set to 1 ms) can keep the events
from flooding the QMP client, but it could cause an unexpected delay
(~1 ms), while guests normally expect rx-filter updates to take effect
immediately.
So we use a per-NIC flag to avoid event flooding: the event is emitted
once and not again until the query command has been executed. The flag
implementation does not introduce an unexpected delay.
There may still be an uncontrollable delay if we let libvirt apply the
real change, since guests normally expect rx-filter updates immediately,
but that is a separate issue which we can investigate once the
libvirt-side work is done.
Michael S. Tsirkin: tweaked to enable events on start
Michael S. Tsirkin: fixed not to crash when no id
Michael S. Tsirkin: fold in patch:
"additional fixes for mac-programming feature"
Amos Kong: always notify QMP client if mactable is changed
Amos Kong: return NULL list if no net client supports rx-filter query
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
If flushing the block devices fails, return an error. The VM is stopped
anyway.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_flush() can fail, and bdrv_flush_all() should return an error as
well if this happens for a block device. It now returns the first error
encountered, but still tries to flush the remaining devices even
in error cases.
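The loop might look roughly like this; a sketch of the described behaviour
rather than the exact patch:

    int bdrv_flush_all(void)
    {
        BlockDriverState *bs;
        int result = 0;

        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            int ret = bdrv_flush(bs);
            /* Remember the first error, but keep flushing the rest. */
            if (ret < 0 && result == 0) {
                result = ret;
            }
        }
        return result;
    }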
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
One of the major reasons for doing something new for -blockdev and
blockdev-add was that the old block layer code parses filenames instead
of just taking them literally. So we should really leave the filename
untouched when it is passed in using the new interfaces (like -drive
file.filename=...).
This allows opening relative file names that contain a colon.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The auto-converge migration capability allows the user to specify whether
the live migration sequence should automatically detect and force convergence.
Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Introduce an asynchronous version of run_on_cpu(), i.e. the caller
doesn't have to block until the callback routine finishes executing
on the target vCPU.
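Usage might look roughly like the sketch below (the signature follows the
description above; treat it as illustrative):

    /* Runs later in the context of the target vCPU. */
    static void flush_work(void *data)
    {
        /* ... act on 'data' while running on that vCPU ... */
    }

    /* The caller does not block; flush_work() is queued for 'cpu'. */
    async_run_on_cpu(cpu, flush_work, opaque);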
Signed-off-by: Chegu Vinod <chegu_vinod@hp.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The DBDMA engine really just reads bytes from a producing device (IDE
in our case) and shoves these bytes into memory. It doesn't care whether
any alignment takes place or not.
Our code today however assumes that block accesses always happen on
sector (512 byte) boundaries. This is a fair assumption for most cases.
However, Mac OS X really likes to do unaligned, incomplete accesses
that it finishes with the next DMA request.
So we need to read / write the unaligned bits independently of the actual
asynchronous request, because that one can only handle 512-byte-aligned
data. We also need to cache these unaligned sectors until the next DMA
request, at which point the data might be successfully flushed from the
pipe.
Signed-off-by: Alexander Graf <agraf@suse.de>
Soon we will introduce intermediate processing pauses which will
allow the bottom half to restart a DMA request that couldn't be
fulfilled yet.
For that to work, move the processing variable into the io struct
which is what DMA providers work with.
While touching it, also change it into a bool.
Signed-off-by: Alexander Graf <agraf@suse.de>
The DBDMA controller has a bottom half to asynchronously process DMA
request queues.
This bh was stored as a gross static variable. Move it into the device
struct instead.
While at it, move all users of it to the new generic kick function.
Signed-off-by: Alexander Graf <agraf@suse.de>
The DBDMA engine really is running all the time, waiting for input. However
we don't want to waste cycles constantly polling.
So introduce a kick function that data providers can call to notify the
DBDMA controller of new input.
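The kick itself can be as small as scheduling the controller's bottom half; a
sketch, assuming the DBDMA state carries the bh introduced earlier:

    void DBDMA_kick(DBDMAState *dbdma)
    {
        /* Wake the DMA processing bottom half; cheap if already pending. */
        qemu_bh_schedule(dbdma->bh);
    }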
Signed-off-by: Alexander Graf <agraf@suse.de>
We usually keep struct and constant definitions in header files. Move
them there to stay consistent and to make access to fields easier.
Signed-off-by: Alexander Graf <agraf@suse.de>