The SSE-200 has 4 banks of SRAM, each with its own internal
Memory Protection Controller. The interrupt status for these
extra MPCs appears in the same security controller SECMPCINTSTATUS
register as the MPC for the IoTKit's single SRAM bank. Enhance the
iotkit-secctl device to allow 4 MPCs. (If the particular IoTKit/SSE
variant in use does not have all 4 MPCs then the unused inputs will
simply result in the SECMPCINTSTATUS bits being zero as required.)
The hardcoded constant "1"s in armsse.c indicate the actual number
of SRAM MPCs the IoTKit has, and will be replaced in the following
commit.
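Roughly, the aggregation works like the following sketch (the struct and
handler names here are illustrative, not the actual iotkit-secctl code):
each SRAM MPC drives one GPIO input on the security controller, and the
handler mirrors the line level into the corresponding SECMPCINTSTATUS
bit, so inputs that are never wired up leave their bits at zero.

    #include <stdint.h>

    typedef struct {
        uint32_t secmpcintstatus;  /* bit N = interrupt status of SRAM MPC N */
    } SecCtlSketch;

    /* GPIO input handler: n is the MPC index, level the new line state */
    static void secctl_mpc_status(void *opaque, int n, int level)
    {
        SecCtlSketch *s = opaque;

        if (level) {
            s->secmpcintstatus |= 1u << n;
        } else {
            s->secmpcintstatus &= ~(1u << n);
        }
    }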
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-9-peter.maydell@linaro.org
Rename the files that used to be iotkit.[ch] to
armsse.[ch] to reflect the fact that they now cover
multiple Arm Subsystems for Embedded.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-8-peter.maydell@linaro.org
Rename various internal uses of 'iotkit' in hw/arm/iotkit.c to
'armsse', for consistency. The remaining occurrences are:
* related to the devices TYPE_IOTKIT_SYSCTL, TYPE_IOTKIT_SYSINFO,
etc., which this refactor is not touching
* references that apply specifically to the IoTKit (like
the lack of a private CPU region)
* the vmstate, which keeps its old "iotkit" name for
migration compatibility reasons
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-7-peter.maydell@linaro.org
The Arm SSE-200 Subsystem for Embedded is a revised and
extended version of the older IoTKit SoC. Prepare for
adding a model of it by refactoring the IoTKit code into
an abstract base class which contains the functionality,
driven by a class data block specific to each subclass.
(This is the same approach used by the existing bcm283x
SoC family implementation.)
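As a rough illustration of that pattern (all names below are made up for
the sketch, not the real armsse.c definitions), each concrete subclass
registers a small per-variant descriptor as its QOM class data, and the
shared code reads it through the class pointer instead of hard-coding
IoTKit behaviour:

    /* Per-variant description, one entry per subclass */
    typedef struct ARMSSEInfoSketch {
        const char *name;
        int num_cpus;
        int sram_banks;
    } ARMSSEInfoSketch;

    static const ARMSSEInfoSketch armsse_variants[] = {
        { .name = "iotkit",  .num_cpus = 1, .sram_banks = 1 },
        { .name = "sse-200", .num_cpus = 2, .sram_banks = 4 },
    };

    /* The class struct carries a pointer to its variant's descriptor;
     * class_init simply stores the class_data it was registered with. */
    typedef struct ARMSSEClassSketch {
        const ARMSSEInfoSketch *info;
    } ARMSSEClassSketch;

    static void armsse_class_init_sketch(ARMSSEClassSketch *klass, void *data)
    {
        klass->info = data;   /* data = one armsse_variants[] entry */
    }

The common realize code can then consult klass->info (number of CPUs,
SRAM banks, and so on) rather than fixed constants.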
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-6-peter.maydell@linaro.org
The Arm IoTKit was effectively the forerunner of a series of
subsystems for embedded SoCs, named the SSE-050, SSE-100 and SSE-200:
https://developer.arm.com/products/system-design/subsystems
These are generally quite similar, though later iterations have
extra devices that earlier ones do not.
We want to add a model of the SSE-200, which means refactoring the
IoTKit code into an abstract base class and subclasses (using the
same design that the bcm283x SoC and Aspeed SoC family
implementations do). As a first step, rename the IoTKit struct and
QOM macros to ARMSSE, which is what we're going to name the base
class. We temporarily retain TYPE_IOTKIT to avoid changing the
code that instantiates a TYPE_IOTKIT device here and then changing
it back again when it is re-introduced as a subclass.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-5-peter.maydell@linaro.org
Expose "start-powered-off" as a property of the ARMv7M container,
which we just pass through to the CPU object in the same way that we
do for "init-svtor" and "idau". (We want this for the SSE-200, which
powers up only the first CPU at reset and leaves the second powered
down.)
As with the other CPU properties here, we can't just use alias
properties, because the CPU QOM object is not created until armv7m
realize time.
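A sketch of the forwarding (the helper name is invented, and a
start_powered_off bool field on the container state is assumed; only the
property names come from this commit message): the container stores the
value in its own state when the property is set, and applies it to the
CPU object during realize, once that object exists.

    static void armv7m_forward_cpu_props(ARMv7MState *s, Error **errp)
    {
        Error *err = NULL;

        object_property_set_bool(OBJECT(s->cpu), s->start_powered_off,
                                 "start-powered-off", &err);
        if (err) {
            error_propagate(errp, err);
            return;
        }
        /* "init-svtor" and "idau" are forwarded in the same fashion */
    }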
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-4-peter.maydell@linaro.org
Rather than just creating the CPUs with object_new, make them child
objects of the armv7m container. This will allow the cluster code to
find the CPUs if an armv7m object is made a child of a cluster object.
object_new_with_props() will do the parenting for us.
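A sketch of the difference (variable and child-property names are
illustrative):

    Object *cpuobj;
    Error *err = NULL;

    /* before: the CPU object has no QOM parent */
    /* cpuobj = object_new(s->cpu_type); */

    /* after: the CPU becomes the child property "cpu" of the armv7m
     * container, so a wrapping cluster object can enumerate it; the
     * trailing NULL ends the (empty) list of property assignments */
    cpuobj = object_new_with_props(s->cpu_type, OBJECT(s), "cpu", &err, NULL);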
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-3-peter.maydell@linaro.org
Currently the ARMv7M NVIC object's realize method assumes that the
CPU the NVIC is attached to is CPU 0, because it thinks there can
only ever be one CPU in the system. To allow a dual-Cortex-M33
setup we need to remove this assumption; instead the armv7m
wrapper object tells the NVIC its CPU, in the same way that it
already tells the CPU what the NVIC is.
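One possible shape of that handover (purely illustrative -- the actual
patch may wire it up differently, for example with a direct pointer
rather than a QOM link property):

    /* in the armv7m wrapper's realize, before realizing the NVIC:
     * hand it the CPU instead of letting it assume qemu_get_cpu(0) */
    object_property_set_link(OBJECT(&s->nvic), OBJECT(s->cpu), "cpu",
                             &error_abort);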
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-2-peter.maydell@linaro.org
Set the object owner in memory_region_init_ram() instead of passing
NULL.
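For reference, the owner is the second parameter of
memory_region_init_ram(); the change is simply passing the device
instead of NULL (the region name and variables below are placeholders):

    /* before: anonymous owner */
    memory_region_init_ram(&s->ram, NULL, "somedev.ram", ram_size,
                           &error_fatal);

    /* after: the device owns its RAM region, so the region appears under
     * the device in the QOM composition tree and follows its lifecycle */
    memory_region_init_ram(&s->ram, OBJECT(dev), "somedev.ram", ram_size,
                           &error_fatal);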
Signed-off-by: kumar sourav <sourav.jb1988@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190125155630.17430-1-sourav.jb1988@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The new device_id property specifies which value to use for the vendor
specific designator in the Device Identification VPD page.
In particular, this is necessary for libvirt to maintain guest ABI
compatibility when no serial number is given and a VM is switched from
-drive (where the BlockBackend name is used) to -blockdev (where the
vendor specific designator is left out by default).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Depending on its configuration, scsi-disk includes in the Device
Identification VPD page (among other designators) a vendor specific
designator that consists of either the serial number, if given, or the
BlockBackend name (which is a host detail that should never have been
leaked to the guest, but which we now have to maintain for
compatibility).
With anonymous BlockBackends, i.e. scsi-disk devices constructed with
drive=<node-name>, and no serial number explicitly specified, this ends
up as an empty string. If this happens to more than one disk, we have
accidentally signalled to the OS that this is a multipath setup, which
is obviously not what was intended.
Instead of using an empty string for the vendor specific designator,
simply leave out that designator, which makes Linux detect such setups
as separate disks again.
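The shape of the fix, as a sketch (not the actual scsi-disk.c hunk;
field names are approximate): compute the string that would populate the
vendor specific designator, and skip the designator entirely when it is
empty.

    const char *str = s->serial ? s->serial : blk_name(s->qdev.conf.blk);

    if (str && str[0]) {
        /* emit the vendor specific designator containing 'str' */
    } else {
        /* omit the designator, so that several disks with empty IDs are
         * not misdetected by the guest as paths to one multipath device */
    }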
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The cmd() method of the QEMUQtestProtocol class sends a qtest command
to QEMU but doesn't wait for the return message ("OK", "FAIL", "ERR").
Because of this, it can return control to the caller before the
command has actually finished.
In cases like clock_step or clock_set this means that cmd() can return
before all the timers triggered by the clock change have been fired.
This can be fixed by making cmd() wait for the output of the qtest
command.
This fixes iotests 093 and 136, which have been flaky since commit
8258292e18 when the machine is under heavy load.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_co_invalidate_cache() clears the BDRV_O_INACTIVE flag before
actually activating a node so that the correct permissions etc. are
taken. In case of errors, the flag must be restored so that the next
call to bdrv_co_invalidate_cache() retries activation.
Restoring the flag was missing in the error path for a failed
parent->role->activate() call. The consequence is that this attempt to
activate all images correctly fails because we still set errp; however,
on the next attempt BDRV_O_INACTIVE is already clear, so we return
success without actually retrying the failed action.
An example where this is observable in practice is migration to a QEMU
instance that has a raw format block node attached to a guest device
with share-rw=off (the default) while another process holds
BLK_PERM_WRITE for the same image. In this case, all activation steps
before parent->role->activate() succeed because raw can tolerate other
writers to the image. Only the parent callback (in particular
blk_root_activate()) tries to implement the share-rw=on property and
requests exclusive write permissions. This fails when the migration
completes and correctly displays an error. However, a manual 'cont' will
incorrectly resume the VM without calling blk_root_activate() again.
This case is described in more detail in the following bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1531888
Fix this by correctly restoring the BDRV_O_INACTIVE flag in the error
path.
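In other words, the error path has to put the flag back before
returning, along these lines (simplified sketch; parent_activate() is a
placeholder standing in for the real loop over parent->role->activate()
callbacks):

    int ret;
    Error *local_err = NULL;

    bs->open_flags &= ~BDRV_O_INACTIVE;     /* optimistically activate */

    ret = parent_activate(bs, &local_err);  /* placeholder */
    if (ret < 0) {
        /* restore the flag so that the next bdrv_co_invalidate_cache()
         * call actually retries the activation instead of claiming
         * success */
        bs->open_flags |= BDRV_O_INACTIVE;
        error_propagate(errp, local_err);
        return;
    }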
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
It's not enough to order the kwargs for consistent QMP log output;
we must also sort any sub-dictionaries in lists that appear as values.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Without this filter, this test sometimes fails.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch forbids attaching a disk to a SCSI device if it is using a
different AioContext. Test case included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes a crash when attaching two disks with the same blockdev to
a SCSI device that is using iothreads. Test case included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This fixes a crash when attaching a disk to a SCSI device using
iothreads, then detaching it and reattaching it again. Test case
included.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We define 54 macros for the powers of two >= 1024. We use six, in six
macro definitions. Four of them could just as well use the common MiB
macro, so do that. The remaining two can't, because they get passed
to stringify. Replace the macro by the literal number there.
Slightly harder to read in one instance (1048576 vs. S_1MiB), so add a
comment there. The other instance is a wash: 65536 vs S_64KiB. 65536
has been good enough for more than seven years there.
This effectively reverts commits 540b849261 and 1240ac558d.
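For the four definitions that switch over, the replacement is just the
common units macro, for example (the macro name on the left is a
placeholder):

    #include "qemu/units.h"              /* provides KiB, MiB, GiB, ... */

    #define FOO_DEFAULT_SIZE (4 * MiB)   /* instead of the old S_4MiB */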
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The last user of blk_attach_dev_legacy() was the code in xen_disk, which
has recently been reworked. Now there are no users of this legacy function
anymore. Thus we can finally remove all code related to the "legacy_dev"
flag, too, and turn the related "void *" in block-backend.c into proper
"DeviceState *" to fix some of the remaining TODOs there.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If QEMU was configured with a driver in --block-drv-ro-whitelist, trying
to use that driver read-write resulted in an error message even if
auto-read-only=on was set.
Consider auto-read-only=on for the whitelist checking and use it to
automatically degrade to read-only for block drivers on the read-only
whitelist.
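The intended behaviour, roughly (a sketch of the decision, not the exact
bdrv_open() hunk):

    if (!bdrv_is_whitelisted(drv, bs->read_only)) {
        if (!bs->read_only &&
            (bs->open_flags & BDRV_O_AUTO_RDONLY) &&
            bdrv_is_whitelisted(drv, true)) {
            /* read-write requested, driver only on the read-only
             * whitelist, auto-read-only=on: degrade instead of failing */
            bs->read_only = true;
            bs->open_flags &= ~BDRV_O_RDWR;
        } else {
            error_setg(errp, "Driver '%s' is not whitelisted",
                       drv->format_name);
            return -ENOTSUP;
        }
    }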
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Currently qemu_uuid_bswap() takes a pointer to the QemuUUID to
be byte-swapped. This means it can't be used when the UUID
to be swapped is in a packed member of a struct. It's also
out of line with the general bswap*() functions we provide
in bswap.h, which take the value to be swapped and return it.
Make qemu_uuid_bswap() take a QemuUUID and return the swapped version.
This fixes some clang warnings about taking the address of
a packed struct member in block/vdi.c.
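The new prototype returns the swapped value instead of modifying it
through a pointer, so callers become a plain assignment (the 'header'
struct below is a placeholder):

    QemuUUID qemu_uuid_bswap(QemuUUID uuid);

    /* before: not OK when 'header' is a packed struct */
    /* qemu_uuid_bswap(&header.uuid); */

    /* after: safe even for packed struct members */
    header.uuid = qemu_uuid_bswap(header.uuid);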
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Taking the address of a field in a packed struct is a bad idea, because
it might not be actually aligned enough for that pointer type (and
thus cause a crash on dereference on some host architectures). Newer
versions of clang warn about this.
Instead of passing UUID related functions the address of a possibly
unaligned QemuUUID struct, use local variables and then copy to/from
the struct field as appropriate.
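The pattern being applied, roughly (struct and field names are
placeholders): copy the value out of the packed field into a properly
aligned local, run the UUID helpers on that, and copy the result back.

    QemuUUID uuid;

    memcpy(&uuid, &packed_header.uuid, sizeof(uuid));   /* copy out */
    /* ... call the UUID functions on &uuid, which is safely aligned ... */
    memcpy(&packed_header.uuid, &uuid, sizeof(uuid));   /* copy back */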
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Taking the address of a field in a packed struct is a bad idea, because
it might not be actually aligned enough for that pointer type (and
thus cause a crash on dereference on some host architectures). Newer
versions of clang warn about this. Avoid the bug by generating the
UUID into a local variable which is definitely safely aligned and
then copying it into place.
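Concretely, the fix has this shape (identifiers are placeholders):

    QemuUUID uuid;

    qemu_uuid_generate(&uuid);                          /* aligned local */
    memcpy(&packed_header.uuid, &uuid, sizeof(uuid));   /* copy into place */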
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Clarify that the number of extents provided in BlockdevCreateOptionsVmdk
must match the number of extents that will actually be used. Providing
more extents will result in an error now.
This requires adapting the test case to provide the right number of
extents.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
This makes VMDK support blockdev-create. The implementation reuses the
image creation code in vmdk_co_create_opts, which now accepts a callback
pointer to "retrieve" BlockBackend pointers from the caller. This way we
separate the logic of file/extent acquisition from that of initialization.
The QAPI command parameters are mostly the same as the old create_opts
except the dropped legacy @compat6 switch, which is redundant with
@hwversion.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The extracted vmdk_init_extent takes a BlockBackend object and
initializes the format metadata. It is the common part between "qemu-img
create" and "blockdev-create".
Add a "BlockBackend *pbb" parameter to vmdk_create_extent, to return the
opened BB to the caller in the next patch.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This test waits for a MIGRATION event with status=completed on the
source VM before querying the migration status on both source and
destination. However, just because the source says migration has
completed does not mean the destination thinks the same. Therefore, in
some cases, the destination VM may still report "active" instead of
"completed" when asked for its migration status.
Fix this by enabling migration events on both VMs and waiting until both
source and destination emit a status=completed MIGRATION event.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In the block layer, synchronous APIs are often implemented by creating a
coroutine that calls the asynchronous coroutine-based implementation and
then waiting for completion with BDRV_POLL_WHILE().
For this to work with iothreads (more specifically, when the synchronous
API is called in a thread that is not the home thread of the block
device, so that the coroutine will run in a different thread), we must
make sure to call aio_wait_kick() at the end of the operation. Many
places are missing this, so that BDRV_POLL_WHILE() keeps hanging even if
the condition has long become false.
Note that bdrv_dec_in_flight() involves an aio_wait_kick() call. This
corresponds to the BDRV_POLL_WHILE() in the drain functions, but it is
generally not enough for most other operations because they haven't set
the return value in the coroutine entry stub yet. To avoid race
conditions there, we need to kick after setting the return value.
The race window is small enough that the problem doesn't usually surface
in the common path. However, it does surface and causes easily
reproducible hangs if the operation can return early before even calling
bdrv_inc/dec_in_flight, which many of them do (trivial error or no-op
success paths).
The bug in bdrv_truncate(), bdrv_check() and bdrv_invalidate_cache() is
slightly different: These functions even neglected to schedule the
coroutine in the home thread of the node. This avoids the hang, but is
obviously wrong, too. Fix those to schedule the coroutine in the right
AioContext in addition to adding aio_wait_kick() calls.
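The pattern in question, sketched with invented names (bdrv_co_foo()
stands for any coroutine-based implementation; this is not a specific
hunk from the series): the entry stub must store the return value first
and only then kick, and the caller must schedule the coroutine in the
node's AioContext before polling.

    typedef struct FooCo {
        BlockDriverState *bs;
        int ret;
        bool done;
    } FooCo;

    static void coroutine_fn bdrv_foo_co_entry(void *opaque)
    {
        FooCo *fooco = opaque;

        fooco->ret = bdrv_co_foo(fooco->bs);  /* hypothetical async impl */
        fooco->done = true;
        aio_wait_kick();   /* wake BDRV_POLL_WHILE() in the caller's
                              thread, which may not be the home thread */
    }

    int bdrv_foo(BlockDriverState *bs)
    {
        FooCo fooco = { .bs = bs, .done = false };
        Coroutine *co = qemu_coroutine_create(bdrv_foo_co_entry, &fooco);

        bdrv_coroutine_enter(bs, co);        /* run in the node's AioContext */
        BDRV_POLL_WHILE(bs, !fooco.done);    /* blocks until kicked */
        return fooco.ret;
    }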
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Patch created mechanically by rerunning:
$ spatch --sp-file scripts/coccinelle/qobject.cocci \
--macro-file scripts/cocci-macro-file.h \
--dir block --in-place
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Recently, some bugs in the dmg block driver have been fixed. To prevent
dmg reading from breaking again someday in the future, add a simple test
which ensures that the conversion from dmg to raw does not hang or hit
any I/O error.
Signed-off-by: yuchenlin <npes87184@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Refcount table entries have a field to store the offset of the
refcount block. The rest of the bits of the entry are currently
reserved.
The offset is always taken from the entry using REFT_OFFSET_MASK to
ensure that we only use the bits that belong to that field.
While that mask is used every time we read from the refcount table, it
is never used when we write to it. Due to the other constraints of the
qcow2 format, QEMU can never produce refcount block offsets that don't
fit in that field, so any such offset seen when allocating a refcount
block would indicate a bug in QEMU.
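The natural way to enforce that invariant on the write side is an
assertion when the new refcount block offset is stored into the
(in-memory) refcount table, along these lines (a sketch; the exact
placement in qcow2-refcount.c may differ):

    assert((new_block & REFT_OFFSET_MASK) == new_block);
    s->refcount_table[refcount_table_index] = new_block;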
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The mirror_start_job() function used for the commit-active job blocks
the source, target and all intermediate nodes for the duration of the
job.
target <- intermediate <- source
Since 4ef85a9c23 this function creates a dummy mirror_top_bs that
goes on top of the source node, and it is this dummy node that gets
blocked instead. The source node is never blocked or added to the
job's list of nodes.
target <- intermediate <- source <- mirror_top
At the moment I don't think it is possible to exploit this problem
because any additional job on 'source' would either be forbidden for
other reasons or it would need to involve an additional node that is
blocked, causing an error.
This can be seen in the error messages, however, because they never
refer to the source node being blocked:
$ qemu-img create -f qcow2 hd0.qcow2 1M
$ qemu-img create -f qcow2 -b hd0.qcow2 hd1.qcow2
$ qemu-io -c 'write 0 1M' hd0.qcow2
$ $QEMU -drive if=none,file=hd1.qcow2,node-name=hd1
{ "execute": "qmp_capabilities" }
{ "execute": "block-commit", "arguments": {"device": "hd1", "speed": 256}}
{ "execute": "block-stream", "arguments": {"device": "hd1"}}
{ "error": {"class": "GenericError",
"desc": "Node 'hd0' is busy: block device is in use by block job: commit"}}
After this patch the error message refers to 'hd1', as it should.
The expected output of iotest 141 also needs to be updated for the
same reason.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
At the moment I don't see how to make this function fail after the
dirty bitmap has been created, but if that were possible then we would
hit the assert(QLIST_EMPTY(&bs->dirty_bitmaps)) in bdrv_close().
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
SDL1.2 was deprecated in the 2.12.0 release with:
commit e52c6ba341
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Mon Jan 15 14:25:33 2018 +0000
ui: deprecate use of SDL 1.2 in favour of 2.0 series
The SDL 2.0 release was made in August 2013:
https://www.libsdl.org/release/
That will soon be four and a half years ago, which is enough time to consider
the 2.0 series widely supported.
Thus we deprecate the SDL 1.2 support, which will allow us to delete it
in the last release of 2018. By this time, SDL 2.0 will be more than 5
years old.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20180115142533.24585-1-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
It can thus be removed in the 3.1.0 release.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180822131554.3398-4-berrange@redhat.com>
[ kraxel: rebase ]
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Move the complexity of milkymist_tmu2_create() into the
source file. Doing so, we avoid including the X11/OpenGL
headers in all LM32 devices, and we also avoid the duplicate
declaration of glx_fbconfig_attr[] (it is already declared
in hw/display/milkymist-tmu2.c).
Since TYPE_MILKYMIST_TMU2 is now accessible, use it.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-5-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
The TMU device requires both X11 and OpenGL.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-4-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Commit 5f9b1e3506 removed the dependency between OpenGL and X11.
However, the milkymist-tmu2 device does require X11.
When using SDL, the configure script sets need_x11=yes, so the X11
flags are propagated to the makefiles.
When building without SDL, X11 is not pulled in and propagated, leading
to a link failure:
LINK lm32-softmmu/qemu-system-lm32
hw/lm32/milkymist.o: In function `milkymist_tmu2_create':
hw/lm32/milkymist-hw.h:114: undefined reference to `XOpenDisplay'
hw/lm32/milkymist-hw.h:140: undefined reference to `XFree'
hw/lm32/milkymist-hw.h:141: undefined reference to `XCloseDisplay'
hw/lm32/milkymist-hw.h:130: undefined reference to `XCloseDisplay'
../hw/display/milkymist-tmu2.o: In function `tmu2_glx_init':
hw/display/milkymist-tmu2.c:112: undefined reference to `XOpenDisplay'
hw/display/milkymist-tmu2.c:123: undefined reference to `XFree'
collect2: error: ld returned 1 exit status
gmake[1]: *** [Makefile:199: qemu-system-lm32] Error 1
Enforce the X11 dependency when the LM32 target is built.
This will allow us to build QEMU without SDL.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-3-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
The Milkymist-specific hardware is only used by the LM32 target, so it
is pointless to compile those objects into other targets.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-2-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Merge remote-tracking branch 'remotes/xanclic/tags/pull-block-2019-01-31' into staging
Block patches:
- New debugging QMP command to explore block graphs
- Converted DPRINTF()s to trace events
- Fixed qemu-io's use of getopt() for systems with optreset
- Minor NVMe emulation fixes
- An iotest fix
# gpg: Signature made Thu 31 Jan 2019 00:51:46 GMT
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/xanclic/tags/pull-block-2019-01-31:
iotests: Allow 147 to be run concurrently
iotests: Bind qemu-nbd to localhost in 147
iotests.py: Add qemu_nbd_pipe()
nvme: use pci_dev directly in nvme_realize
nvme: ensure the num_queues is not zero
nvme: use TYPE_NVME instead of constant string
qemu-io: Add generic function for reinitializing optind.
block/sheepdog: Convert from DPRINTF() macro to trace events
block/file-posix: Convert from DPRINTF() macro to trace events
block/curl: Convert from DPRINTF() macro to trace events
block/ssh: Convert from DPRINTF() macro to trace events
scripts: add render_block_graph function for QEMUMachine
qapi: add x-debug-query-block-graph
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging
- add device category (edu, i8042, sd memory card)
- code clean-up
- LGPL information clean-up
- fix typo (acpi)
# gpg: Signature made Wed 30 Jan 2019 13:21:50 GMT
# gpg: using RSA key F30C38BD3F2FBE3C
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg: aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C
* remotes/vivier2/tags/trivial-branch-pull-request:
virtio-blk: remove duplicate definition of VirtIOBlock *s pointer
hw/block: clean up stale xen_disk trace entries
target/m68k: Fix LGPL information in the file headers
target/s390x: Fix LGPL version in the file header comments
tcg: Fix LGPL version number
target/tricore: Fix LGPL version number
target/openrisc: Fix LGPL version number
COPYING.LIB: Synchronize the LGPL 2.1 with the version from gnu.org
Don't talk about the LGPL if the file is licensed under the GPL
hw: sd: set category of the sd memory card
hw: input: set category of the i8042 device
typo: apci->acpi
hw: edu: set category of the edu device
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging
Pull request
User-visible changes:
* The new qemu-trace-stap script makes it convenient to collect traces without
writing SystemTap scripts. See "man qemu-trace-stap" for details.
# gpg: Signature made Wed 30 Jan 2019 03:17:57 GMT
# gpg: using RSA key 9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/tracing-pull-request:
trace: rerun tracetool after ./configure changes
trace: improve runstate tracing
trace: add ability to do simple printf logging via systemtap
trace: forbid use of %m in trace event format strings
trace: enforce that every trace-events file has a final newline
display: ensure qxl log_buf is a nul terminated string
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
To do this, we need to allow creating the NBD server on various ports
instead of a single one (which may not even work if you run just one
instance, because something else entirely might be using that port).
So we just pick a random port in [32768, 32768 + 1024) and try to create
a server there. If that fails, we just retry until something sticks.
For the IPv6 test, we need a different range, though (just above that
one). This is because "localhost" resolves to both 127.0.0.1 and ::1.
This means that if you bind to it, it will bind to both, if possible, or
just one if the other is already in use. Therefore, if the IPv6 test
has already taken [::1]:some_port and we then try to take
localhost:some_port, that will work -- it is just that the second server
will be bound to 127.0.0.1:some_port alone and not to [::1]:some_port in
addition.
So we have two different servers on the same port, one for IPv4 and one
for IPv6.
But when we then try to connect to the server through
localhost:some_port, we will always end up at the IPv6 one (as long as
it is up), and this may not be the one we want.
Thus, we must make sure not to create an IPv6-only NBD server on the
same port as a normal "dual-stack" NBD server -- which is done by using
distinct port ranges, as explained above.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-4-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
By default, qemu-nbd binds to 0.0.0.0. However, we then proceed to
connect to "localhost". Usually, this works out fine; but if this test
is run concurrently, some other test function may have bound a different
server to ::1 (on the same port -- you can bind different servers to the
same port, as long as one is on IPv4 and the other on IPv6).
So running qemu-nbd works; it can bind to 0.0.0.0:NBD_PORT. But
potentially a concurrent test has successfully taken [::1]:NBD_PORT. In
this case, trying to connect to "localhost" will lead us to the IPv6
instance, where we do not want to end up.
Fix this by just binding to "localhost". This will make qemu-nbd error
out immediately and not give us cryptic errors later.
(Also, it will allow us to just try a different port as of a future
patch.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In some cases, we may want to deal with qemu-nbd errors (e.g. by
launching it in a different configuration until it no longer throws
any). In that case, we do not want its output ending up in the test
output.
It may still be useful for handling the error, though, so add a new
function that works basically like qemu_nbd(), except that it returns the
qemu-nbd output instead of making it end up in the log. In contrast to
qemu_img_pipe(), it does still return the exit code as well, though,
because that is even more important for error handling.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>