Commit Graph

66626 Commits

Peter Maydell
0a78d7ebf8 hw/misc/iotkit-secctl: Support 4 internal MPCs
The SSE-200 has 4 banks of SRAM, each with its own internal
Memory Protection Controller. The interrupt status for these
extra MPCs appears in the same security controller SECMPCINTSTATUS
register as the MPC for the IoTKit's single SRAM bank. Enhance the
iotkit-secctl device to allow 4 MPCs. (If the particular IoTKit/SSE
variant in use does not have all 4 MPCs then the unused inputs will
simply result in the SECMPCINTSTATUS bits being zero as required.)
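
A minimal sketch of what such a per-MPC interrupt status input can look
like (the constant, field and handler names below are assumptions for
illustration, not taken from the patch):

    #define IOTS_NUM_MPC 4    /* hypothetical name for the new maximum */

    static void iotkit_secctl_mpc_status(void *opaque, int n, int level)
    {
        IoTKitSecCtl *s = opaque;

        /* Inputs that are never wired up never fire, so their
         * SECMPCINTSTATUS bits simply stay zero. */
        s->mpcintstatus = deposit32(s->mpcintstatus, n, 1, !!level);
    }

    /* registered with something like:
     * qdev_init_gpio_in_named(DEVICE(s), iotkit_secctl_mpc_status,
     *                         "mpc_status", IOTS_NUM_MPC);
     */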

The hardcoded constant "1"s in armsse.c indicate the actual number
of SRAM MPCs the IoTKit has, and will be replaced in the following
commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-9-peter.maydell@linaro.org
2019-02-01 14:55:42 +00:00
Peter Maydell
6eee5d241a hw/arm/iotkit: Rename files to hw/arm/armsse.[ch]
Rename the files that used to be iotkit.[ch] to
armsse.[ch] to reflect the fact that they now cover
multiple Arm subsystems for embedded.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-8-peter.maydell@linaro.org
2019-02-01 14:55:42 +00:00
Peter Maydell
13628891b3 hw/arm/iotkit: Rename 'iotkit' local variables and functions
Rename various internal uses of 'iotkit' in hw/arm/iotkit.c to
'armsse', for consistency. The remaining occurrences are:
 * related to the devices TYPE_IOTKIT_SYSCTL, TYPE_IOTKIT_SYSINFO,
   etc., which this refactor is not touching
 * references that apply specifically to the IoTKit (like
   the lack of a private CPU region)
 * the vmstate, which keeps its old "iotkit" name for
   migration compatibility reasons

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-7-peter.maydell@linaro.org
2019-02-01 14:55:41 +00:00
Peter Maydell
4c3690b591 hw/arm/iotkit: Refactor into abstract base class and subclass
The Arm SSE-200 Subsystem for Embedded is a revised and
extended version of the older IoTKit SoC. Prepare for
adding a model of it by refactoring the IoTKit code into
an abstract base class which contains the functionality,
driven by a class data block specific to each subclass.
(This is the same approach used by the existing bcm283x
SoC family implementation.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-6-peter.maydell@linaro.org
2019-02-01 14:55:41 +00:00
Peter Maydell
93dbd10347 hw/arm/iotkit: Rename IoTKit to ARMSSE
The Arm IoTKit was effectively the forerunner of a series of
subsystems for embedded SoCs, named the SSE-050, SSE-100 and SSE-200:
https://developer.arm.com/products/system-design/subsystems
These are generally quite similar, though later iterations have
extra devices that earlier ones do not.

We want to add a model of the SSE-200, which means refactoring the
IoTKit code into an abstract base class and subclasses (using the
same design that the bcm283x SoC and Aspeed SoC family
implementations do). As a first step, rename the IoTKit struct and
QOM macros to ARMSSE, which is what we're going to name the base
class. We temporarily retain TYPE_IOTKIT to avoid changing the
code that instantiates a TYPE_IOTKIT device here and then changing
it back again when it is re-introduced as a subclass.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-5-peter.maydell@linaro.org
2019-02-01 14:55:41 +00:00
Peter Maydell
66647809f5 armv7m: Pass through start-powered-off CPU property
Expose "start-powered-off" as a property of the ARMv7M container,
which we just pass through to the CPU object in the same way that we
do for "init-svtor" and "idau". (We want this for the SSE-200, which
powers up only the first CPU at reset and leaves the second powered
down.)

As with the other CPU properties here, we can't just use alias
properties, because the CPU QOM object is not created until armv7m
realize time.
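
A sketch of what the pass-through amounts to in the container's realize
function (2019-era QOM argument order; the field name s->start_powered_off
is an assumption, not quoted from the patch):

    if (object_property_find(OBJECT(s->cpu), "start-powered-off", NULL)) {
        object_property_set_bool(OBJECT(s->cpu), s->start_powered_off,
                                 "start-powered-off", &err);
        if (err != NULL) {
            error_propagate(errp, err);
            return;
        }
    }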

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-4-peter.maydell@linaro.org
2019-02-01 14:55:41 +00:00
Peter Maydell
e4c81e3a45 armv7m: Make cpu object a child of the armv7m container
Rather than just creating the CPUs with object_new, make them child
objects of the armv7m container. This will allow the cluster code to
find the CPUs if an armv7m object is made a child of a cluster object.
object_new_with_props() will do the parenting for us.
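
Roughly, the change is of this shape (the child name "cpu" and the error
handling are illustrative assumptions):

    /* Before: anonymous object, invisible in the QOM composition tree */
    s->cpu = ARM_CPU(object_new(s->cpu_type));

    /* After: created as a named child of the armv7m container, so code
     * walking the tree (e.g. the cluster code) can find it */
    s->cpu = ARM_CPU(object_new_with_props(s->cpu_type, OBJECT(s), "cpu",
                                           &err, NULL));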

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-3-peter.maydell@linaro.org
2019-02-01 14:55:41 +00:00
Peter Maydell
3693f217d3 armv7m: Don't assume the NVIC's CPU is CPU 0
Currently the ARMv7M NVIC object's realize method assumes that the
CPU the NVIC is attached to is CPU 0, because it thinks there can
only ever be one CPU in the system. To allow a dual-Cortex-M33
setup we need to remove this assumption; instead the armv7m
wrapper object tells the NVIC its CPU, in the same way that it
already tells the CPU what the NVIC is.
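
The wiring could look roughly like this sketch (a QOM link property named
"cpu" is an assumption here; the patch may pass the pointer differently):

    /* In the armv7m container's realize(), once the CPU object exists */
    object_property_set_link(OBJECT(&s->nvic), OBJECT(s->cpu), "cpu",
                             &error_abort);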

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190121185118.18550-2-peter.maydell@linaro.org
2019-02-01 14:55:41 +00:00
kumar sourav
287a7f6e39 hw/arm/nrf51_soc: set object owner in memory_region_init_ram
Set the object owner in memory_region_init_ram() instead
of passing NULL.
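
The change follows this pattern (the region name and fields are
placeholders, not necessarily those used in nrf51_soc):

    /* Before: owner is NULL, so the RAM region is not tied to the device */
    memory_region_init_ram(&s->sram, NULL, "nrf51.sram", s->sram_size, &err);

    /* After: the SoC object owns its RAM region */
    memory_region_init_ram(&s->sram, OBJECT(s), "nrf51.sram", s->sram_size,
                           &err);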

Signed-off-by: kumar sourav <sourav.jb1988@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190125155630.17430-1-sourav.jb1988@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01 14:55:41 +00:00
Peter Maydell
a1bc3e7dc8 ui: fix build with SDL disabled, drop SDL1 support.

Merge remote-tracking branch 'remotes/kraxel/tags/ui-20190201-pull-request' into staging

ui: fix build with SDL disabled, drop SDL1 support.

# gpg: Signature made Fri 01 Feb 2019 12:30:47 GMT
# gpg:                using RSA key 4CB6D8EED3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>" [full]
# gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>" [full]
# gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>" [full]
# Primary key fingerprint: A032 8CFF B93A 17A7 9901  FE7D 4CB6 D8EE D3E8 7138

* remotes/kraxel/tags/ui-20190201-pull-request:
  ui: remove support for SDL1.2 in favour of SDL2
  hw/display/milkymist-tmu2: Move inlined code from header to source
  hw/display/milkymist-tmu2: Explicit the dependency to both X11 / OpenGL
  configure: LM32 Milkymist Texture Mapping Unit (tmu2) also depends of X11
  hw/display: Move Milkymist specific hardware out of common-obj list

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-02-01 13:15:10 +00:00
Kevin Wolf
7471a649fc scsi-disk: Add device_id property
The new device_id property specifies which value to use for the vendor
specific designator in the Device Identification VPD page.

In particular, this is necessary for libvirt to maintain guest ABI
compatibility when no serial number is given and a VM is switched from
-drive (where the BlockBackend name is used) to -blockdev (where the
vendor specific designator is left out by default).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
2019-02-01 13:48:11 +01:00
Kevin Wolf
a8f58afcdb scsi-disk: Don't use empty string as device id
Depending on the configuration, scsi-disk includes a vendor specific
designator (amongst others) in the Device Identification VPD page. It
consists either of the serial number, if given, or of the BlockBackend
name (a host detail that better shouldn't have been leaked to the guest,
but which we now have to maintain for compatibility).

With anonymous BlockBackends, i.e. scsi-disk devices constructed with
drive=<node-name>, and no serial number explicitly specified, this ends
up as an empty string. If this happens to more than one disk, we have
accidentally signalled to the OS that this is a multipath setup, which
is obviously not what was intended.

Instead of using an empty string for the vendor specific designator,
simply leave out that designator, which makes Linux detect such setups
as separate disks again.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-02-01 13:47:09 +01:00
Alberto Garcia
a5df73baaf qtest.py: Wait for the result of qtest commands
The cmd() method of the QEMUQtestProtocol class sends a qtest command
to QEMU but doesn't wait for the return message ("OK", "FAIL", "ERR").
Because of this, it can return control to the caller before the
command has actually finished.

In cases like clock_step or clock_set this means that cmd() can return
before all the timers triggered by the clock change have been fired.
This can be fixed by making cmd() wait for the output of the qtest
command.

This fixes iotests 093 and 136, which are flaky since commit
8258292e18 when the machine is under heavy workload.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Kevin Wolf
78fc3b3a26 block: Fix invalidate_cache error path for parent activation
bdrv_co_invalidate_cache() clears the BDRV_O_INACTIVE flag before
actually activating a node so that the correct permissions etc. are
taken. In case of errors, the flag must be restored so that the next
call to bdrv_co_invalidate_cache() retries activation.

Restoring the flag was missing in the error path for a failed
parent->role->activate() call. The consequence is that this attempt to
activate all images correctly fails because we still set errp; however,
on the next attempt BDRV_O_INACTIVE is already clear, so we return
success without actually retrying the failed action.

An example where this is observable in practice is migration to a QEMU
instance that has a raw format block node attached to a guest device
with share-rw=off (the default) while another process holds
BLK_PERM_WRITE for the same image. In this case, all activation steps
before parent->role->activate() succeed because raw can tolerate other
writers to the image. Only the parent callback (in particular
blk_root_activate()) tries to implement the share-rw=on property and
requests exclusive write permissions. This fails when the migration
completes and correctly displays an error. However, a manual 'cont' will
incorrectly resume the VM without calling blk_root_activate() again.

This case is described in more detail in the following bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1531888

Fix this by correctly restoring the BDRV_O_INACTIVE flag in the error
path.
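
The fix is roughly of this shape (sketched from the description above,
not quoted verbatim from the patch):

    parent->role->activate(parent, &local_err);
    if (local_err) {
        /* Restore the flag so a later bdrv_co_invalidate_cache() retries */
        bs->open_flags |= BDRV_O_INACTIVE;
        error_propagate(errp, local_err);
        return;
    }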

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2019-02-01 13:46:45 +01:00
John Snow
039be85c41 iotests/236: fix transaction kwarg order
It's not enough to order the kwargs for consistent QMP log output;
we must also sort any sub-dictionaries in lists that appear as values.

Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Max Reitz
fff2388d5d iotests: Filter second BLOCK_JOB_ERROR from 229
Without this filter, this test sometimes fails.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Alberto Garcia
eb97813ff5 virtio-scsi: Forbid devices with different iothreads sharing a blockdev
This patch forbids attaching a disk to a SCSI device if it is using a
different AioContext. Test case included.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Alberto Garcia
3ff35ba391 scsi-disk: Acquire the AioContext in scsi_*_realize()
This fixes a crash when attaching two disks with the same blockdev to
a SCSI device that is using iothreads. Test case included.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Alberto Garcia
a6f230c8d1 virtio-scsi: Move BlockBackend back to the main AioContext on unplug
This fixes a crash when attaching a disk to a SCSI device using
iothreads, then detaching it and reattaching it again. Test case
included.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Markus Armbruster
14632122b8 block: Eliminate the S_1KiB, S_2KiB, ... macros
We define 54 macros for the powers of two >= 1024.  We use six, in six
macro definitions.  Four of them could just as well use the common MiB
macro, so do that.  The remaining two can't, because they get passed
to stringify.  Replace the macro by the literal number there.
Slightly harder to read in one instance (1048576 vs. S_1MiB), so add a
comment there.  The other instance is a wash: 65536 vs S_64KiB.  65536
has been good enough for more than seven years there.

This effectively reverts commits 540b849261 and 1240ac558d.
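
For the four definitions that can use the common macro, the change is of
this form (the macro name below is hypothetical; "qemu/units.h" provides
KiB, MiB, GiB, ...):

    #include "qemu/units.h"

    /* before */
    #define DEFAULT_CACHE_SIZE S_32MiB
    /* after */
    #define DEFAULT_CACHE_SIZE (32 * MiB)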

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Thomas Huth
d09ea2d227 block: Remove blk_attach_dev_legacy() / legacy_dev code
The last user of blk_attach_dev_legacy() was the code in xen_disk which
has recently been reworked. Now there is no user for this legacy function
anymore. Thus we can finally remove all code related to the "legacy_dev"
flag, too, and turn the related "void *" in block-backend.c into proper
"DeviceState *" to fix some of the remaining TODOs there.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Kevin Wolf
8be25de643 block: Apply auto-read-only for ro-whitelist drivers
If QEMU was configured with a driver in --block-drv-ro-whitelist, trying
to use that driver read-write resulted in an error message even if
auto-read-only=on was set.

Consider auto-read-only=on for the whitelist checking and use it to
automatically degrade to read-only for block drivers on the read-only
whitelist.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2019-02-01 13:46:45 +01:00
Peter Maydell
1324f06384 uuid: Make qemu_uuid_bswap() take and return a QemuUUID
Currently qemu_uuid_bswap() takes a pointer to the QemuUUID to
be byte-swapped. This means it can't be used when the UUID
to be swapped is in a packed member of a struct. It's also
out of line with the general bswap*() functions we provide
in bswap.h, which take the value to be swapped and return it.

Make qemu_uuid_bswap() take a QemuUUID and return the swapped version.

This fixes some clang warnings about taking the address of
a packed struct member in block/vdi.c.
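
The interface change described above:

    /* Old: swaps in place, so the argument must be addressable and aligned */
    void qemu_uuid_bswap(QemuUUID *uuid);

    /* New: takes and returns the value, fine for packed struct members */
    QemuUUID qemu_uuid_bswap(QemuUUID uuid);

    /* Typical call site after the change */
    header.uuid = qemu_uuid_bswap(header.uuid);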

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:45 +01:00
Peter Maydell
ac928b8ee8 block/vdi: Don't take address of fields in packed structs
Taking the address of a field in a packed struct is a bad idea, because
it might not be actually aligned enough for that pointer type (and
thus cause a crash on dereference on some host architectures). Newer
versions of clang warn about this.

Instead of passing UUID related functions the address of a possibly
unaligned QemuUUID struct, use local variables and then copy to/from
the struct field as appropriate.
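
The pattern is essentially (the field name is illustrative, not necessarily
the one used in block/vdi.c):

    QemuUUID uuid;                          /* safely aligned local */

    memcpy(&uuid, &header->uuid_link, sizeof(uuid));
    uuid = qemu_uuid_bswap(uuid);           /* operate on the local copy */
    memcpy(&header->uuid_link, &uuid, sizeof(uuid));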

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Peter Maydell
0dbaaa7981 block/vpc: Don't take address of fields in packed structs
Taking the address of a field in a packed struct is a bad idea, because
it might not be actually aligned enough for that pointer type (and
thus cause a crash on dereference on some host architectures). Newer
versions of clang warn about this. Avoid the bug by generating the
UUID into a local variable which is definitely safely aligned and
then copying it into place.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Kevin Wolf
4a960ece17 vmdk: Reject excess extents in blockdev-create
Clarify that the number of extents provided in BlockdevCreateOptionsVmdk
must match the number of extents that will actually be used. Providing
more extents will result in an error now.

This requires adapting the test case to provide the right number of
extents.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
2019-02-01 13:46:44 +01:00
Kevin Wolf
1c4e7b640b iotests: Add VMDK tests for blockdev-create
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Fam Zheng
bab4feb2fa iotests: Filter cid numbers in VMDK extent info
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Fam Zheng
3015372dd0 vmdk: Implement .bdrv_co_create callback
This makes VMDK support blockdev-create. The implementation reuses the
image creation code in vmdk_co_create_opts, which now accepts a callback
pointer to "retrieve" BlockBackend pointers from the caller. This way we
separate the logic of file/extent acquisition from initialization.

The QAPI command parameters are mostly the same as the old create_opts
except the dropped legacy @compat6 switch, which is redundant with
@hwversion.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Fam Zheng
5be28490ca vmdk: Refactor vmdk_create_extent
The extracted vmdk_init_extent takes a BlockBackend object and
initializes the format metadata. It is the common part between "qemu-img
create" and "blockdev-create".

Add a "BlockBackend *pbb" parameter to vmdk_create_extent, to return the
opened BB to the caller in the next patch.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Max Reitz
9a378495c3 iotests: Make 234 stable
This test waits for a MIGRATION event with status=completed on the
source VM before querying the migration status on both source and
destination.  However, just because the source says migration has
completed does not mean the destination thinks the same.  Therefore, in
some cases, the destination VM may still report "active" instead of
"completed" when asked for its migration status.

Fix this by enabling migration events on both VMs and waiting until both
source and destination emit a status=completed MIGRATION event.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Kevin Wolf
4720cbeea1 block: Fix hangs in synchronous APIs with iothreads
In the block layer, synchronous APIs are often implemented by creating a
coroutine that calls the asynchronous coroutine-based implementation and
then waiting for completion with BDRV_POLL_WHILE().

For this to work with iothreads (more specifically, when the synchronous
API is called in a thread that is not the home thread of the block
device, so that the coroutine will run in a different thread), we must
make sure to call aio_wait_kick() at the end of the operation. Many
places are missing this, so that BDRV_POLL_WHILE() keeps hanging even if
the condition has long become false.

Note that bdrv_dec_in_flight() involves an aio_wait_kick() call. This
corresponds to the BDRV_POLL_WHILE() in the drain functions, but it is
generally not enough for most other operations because they haven't set
the return value in the coroutine entry stub yet. To avoid race
conditions there, we need to kick after setting the return value.

The race window is small enough that the problem doesn't usually surface
in the common path. However, it does surface and causes easily
reproducible hangs if the operation can return early before even calling
bdrv_inc/dec_in_flight, which many of them do (trivial error or no-op
success paths).

The bug in bdrv_truncate(), bdrv_check() and bdrv_invalidate_cache() is
slightly different: These functions even neglected to schedule the
coroutine in the home thread of the node. This avoids the hang, but is
obviously wrong, too. Fix those to schedule the coroutine in the right
AioContext in addition to adding aio_wait_kick() calls.
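
A sketch of the synchronous-wrapper pattern with the kick in the right
place (the function and struct names are made up for illustration):

    typedef struct FooCo {
        BlockDriverState *bs;
        bool done;
        int ret;
    } FooCo;

    static void coroutine_fn bdrv_foo_co_entry(void *opaque)
    {
        FooCo *fooco = opaque;

        fooco->ret = bdrv_co_foo(fooco->bs);
        fooco->done = true;
        /* Wake up BDRV_POLL_WHILE() even if it polls in another thread */
        aio_wait_kick();
    }

    int bdrv_foo(BlockDriverState *bs)
    {
        FooCo fooco = { .bs = bs, .done = false };
        Coroutine *co = qemu_coroutine_create(bdrv_foo_co_entry, &fooco);

        /* Schedule the coroutine in the node's home AioContext */
        bdrv_coroutine_enter(bs, co);
        BDRV_POLL_WHILE(bs, !fooco.done);
        return fooco.ret;
    }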

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2019-02-01 13:46:44 +01:00
Markus Armbruster
4e20c1becb block: Replace qdict_put() by qdict_put_obj() where appropriate
Patch created mechanically by rerunning:

  $  spatch --sp-file scripts/coccinelle/qobject.cocci \
	    --macro-file scripts/cocci-macro-file.h \
	    --dir block --in-place
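
For reference, the relationship between the two calls (an illustrative
snippet, not part of the patch):

    QDict *dict = qdict_new();

    /* qdict_put() is a macro that wraps its last argument in QOBJECT() */
    qdict_put(dict, "name", qstring_from_str("value"));

    /* When the value already is a QObject *, use qdict_put_obj() directly */
    QObject *obj = QOBJECT(qnum_from_int(42));
    qdict_put_obj(dict, "answer", obj);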

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
yuchenlin
76f1cf0a5e qemu-iotests: add test case for dmg
Recently, some bugs in the dmg driver have been fixed. To prevent dmg
reading from breaking again someday in the future, add a simple test which
ensures that converting from dmg to raw does not hang or hit any I/O error.

Signed-off-by: yuchenlin <npes87184@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Alberto Garcia
cdc674c736 qcow2: Assert that refcount block offsets fit in the refcount table
Refcount table entries have a field to store the offset of the
refcount block. The rest of the bits of the entry are currently
reserved.

The offset is always taken from the entry using REFT_OFFSET_MASK to
ensure that we only use the bits that belong to that field.

While that mask is used every time we read from the refcount table, it
is never used when we write to it. Due to the other constraints of the
qcow2 format, QEMU can never produce refcount block offsets that don't
fit in that field, so any such offset when allocating a refcount block
would indicate a bug in QEMU.
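
The assertion therefore has roughly this form (the variable name is an
assumption):

    /* when storing the new refcount block's offset in the refcount table */
    assert((new_block & ~REFT_OFFSET_MASK) == 0);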

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Alberto Garcia
67b24427fe mirror: Block the source BlockDriverState in mirror_start_job()
The mirror_start_job() function used for the commit-active job blocks
the source, target and all intermediate nodes for the duration of the
job.

   target <- intermediate <- source

Since 4ef85a9c23 this function creates a dummy mirror_top_bs that
goes on top of the source node, and it is this dummy node that gets
blocked instead. The source node is never blocked or added to the
job's list of nodes.

   target <- intermediate <- source <- mirror_top

At the moment I don't think it is possible to exploit this problem
because any additional job on 'source' would either be forbidden for
other reasons or it would need to involve an additional node that is
blocked, causing an error.

This can be seen in the error messages, however, because they never
refer to the source node being blocked:

  $ qemu-img create -f qcow2 hd0.qcow2 1M
  $ qemu-img create -f qcow2 -b hd0.qcow2 hd1.qcow2
  $ qemu-io -c 'write 0 1M' hd0.qcow2
  $ $QEMU -drive if=none,file=hd1.qcow2,node-name=hd1
  { "execute": "qmp_capabilities" }
  { "execute": "block-commit", "arguments": {"device": "hd1", "speed": 256}}
  { "execute": "block-stream", "arguments": {"device": "hd1"}}
  { "error": {"class": "GenericError",
    "desc": "Node 'hd0' is busy: block device is in use by block job: commit"}}

After this patch the error message refers to 'hd1', as it should.

The expected output of iotest 141 also needs to be updated for the
same reason.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Alberto Garcia
e917e2cb2a mirror: Release the dirty bitmap if mirror_start_job() fails
At the moment I don't see how to make this function fail after the
dirty bitmap has been created, but if that was possible then we would
hit the assert(QLIST_EMPTY(&bs->dirty_bitmaps)) in bdrv_close().

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-02-01 13:46:44 +01:00
Daniel P. Berrangé
0015ca5cba ui: remove support for SDL1.2 in favour of SDL2
SDL1.2 was deprecated in the 2.12.0 release with:

  commit e52c6ba341
  Author: Daniel P. Berrange <berrange@redhat.com>
  Date:   Mon Jan 15 14:25:33 2018 +0000

    ui: deprecate use of SDL 1.2 in favour of 2.0 series

    The SDL 2.0 release was made in Aug, 2013:

      https://www.libsdl.org/release/

    That will soon be 4 + 1/2 years ago, which is enough time to consider
    the 2.0 series widely supported.

    Thus we deprecate the SDL 1.2 support, which will allow us to delete it
    in the last release of 2018. By this time, SDL 2.0 will be more than 5
    years old.

    Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
    Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
    Message-id: 20180115142533.24585-1-berrange@redhat.com
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

It can thus be removed in the 3.1.0 release.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180822131554.3398-4-berrange@redhat.com>

[ kraxel: rebase ]

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2019-02-01 11:59:12 +01:00
Philippe Mathieu-Daudé
70cc0c1fb0 hw/display/milkymist-tmu2: Move inlined code from header to source
Move the complexity of milkymist_tmu2_create() into the
source file. Doing so, we avoid including the X11/OpenGL
headers in all LM32 devices, and we also avoid the duplicate
declaration of glx_fbconfig_attr[] (it is already declared
in hw/display/milkymist-tmu2.c).
Since TYPE_MILKYMIST_TMU2 is now accessible, use it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-5-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2019-02-01 11:58:50 +01:00
Philippe Mathieu-Daudé
57d434407a hw/display/milkymist-tmu2: Explicit the dependency to both X11 / OpenGL
The TMU device requires both X11 and OpenGL.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-4-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2019-02-01 11:58:50 +01:00
Philippe Mathieu-Daudé
99e1a93bbf configure: LM32 Milkymist Texture Mapping Unit (tmu2) also depends of X11
Commit 5f9b1e3506 removed the dependency between OpenGL and X11.
However, the milkymist-tmu2 device does require X11.
When using SDL, the configure script sets need_x11=yes, so the X11
flags are propagated to the makefiles.
When building without SDL, X11 is not pulled in, leading
to a link failure:

    LINK    lm32-softmmu/qemu-system-lm32
  hw/lm32/milkymist.o: In function `milkymist_tmu2_create':
  hw/lm32/milkymist-hw.h:114: undefined reference to `XOpenDisplay'
  hw/lm32/milkymist-hw.h:140: undefined reference to `XFree'
  hw/lm32/milkymist-hw.h:141: undefined reference to `XCloseDisplay'
  hw/lm32/milkymist-hw.h:130: undefined reference to `XCloseDisplay'
  ../hw/display/milkymist-tmu2.o: In function `tmu2_glx_init':
  hw/display/milkymist-tmu2.c:112: undefined reference to `XOpenDisplay'
  hw/display/milkymist-tmu2.c:123: undefined reference to `XFree'
  collect2: error: ld returned 1 exit status
  gmake[1]: *** [Makefile:199: qemu-system-lm32] Error 1

Enforce the X11 dependency when the LM32 target is built.
This will allow us to build QEMU without SDL.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-3-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2019-02-01 11:58:50 +01:00
Philippe Mathieu-Daudé
3a831fc0df hw/display: Move Milkymist specific hardware out of common-obj list
The Milkymist-specific hardware is only used by the LM32 target;
it is pointless to compile those objects into other targets.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190130120005.23123-2-philmd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2019-02-01 11:58:50 +01:00
Peter Maydell
cfe6c54769 Block patches:

Merge remote-tracking branch 'remotes/xanclic/tags/pull-block-2019-01-31' into staging

Block patches:
- New debugging QMP command to explore block graphs
- Converted DPRINTF()s to trace events
- Fixed qemu-io's use of getopt() for systems with optreset
- Minor NVMe emulation fixes
- An iotest fix

# gpg: Signature made Thu 31 Jan 2019 00:51:46 GMT
# gpg:                using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40

* remotes/xanclic/tags/pull-block-2019-01-31:
  iotests: Allow 147 to be run concurrently
  iotests: Bind qemu-nbd to localhost in 147
  iotests.py: Add qemu_nbd_pipe()
  nvme: use pci_dev directly in nvme_realize
  nvme: ensure the num_queues is not zero
  nvme: use TYPE_NVME instead of constant string
  qemu-io: Add generic function for reinitializing optind.
  block/sheepdog: Convert from DPRINTF() macro to trace events
  block/file-posix: Convert from DPRINTF() macro to trace events
  block/curl: Convert from DPRINTF() macro to trace events
  block/ssh: Convert from DPRINTF() macro to trace events
  scripts: add render_block_graph function for QEMUMachine
  qapi: add x-debug-query-block-graph

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-31 19:26:09 +00:00
Peter Maydell
e8977901b7 - add device category (edu, i8042, sd memory card)

Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging

- add device category (edu, i8042, sd memory card)
- code clean-up
- LGPL information clean-up
- fix typo (acpi)

# gpg: Signature made Wed 30 Jan 2019 13:21:50 GMT
# gpg:                using RSA key F30C38BD3F2FBE3C
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C

* remotes/vivier2/tags/trivial-branch-pull-request:
  virtio-blk: remove duplicate definition of VirtIOBlock *s pointer
  hw/block: clean up stale xen_disk trace entries
  target/m68k: Fix LGPL information in the file headers
  target/s390x: Fix LGPL version in the file header comments
  tcg: Fix LGPL version number
  target/tricore: Fix LGPL version number
  target/openrisc: Fix LGPL version number
  COPYING.LIB: Synchronize the LGPL 2.1 with the version from gnu.org
  Don't talk about the LGPL if the file is licensed under the GPL
  hw: sd: set category of the sd memory card
  hw: input: set category of the i8042 device
  typo: apci->acpi
  hw: edu: set category of the edu device

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-31 15:40:39 +00:00
Peter Maydell
aefcd28366 usb: xhci: fix iso transfers.

Merge remote-tracking branch 'remotes/kraxel/tags/usb-20190130-pull-request' into staging

usb: xhci: fix iso transfers.
usb: mtp: break up writes, bugfixes.
usb: fix lgpl info in headers.
usb: hid: unique serials.

# gpg: Signature made Wed 30 Jan 2019 07:33:21 GMT
# gpg:                using RSA key 4CB6D8EED3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>" [full]
# gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>" [full]
# gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>" [full]
# Primary key fingerprint: A032 8CFF B93A 17A7 9901  FE7D 4CB6 D8EE D3E8 7138

* remotes/kraxel/tags/usb-20190130-pull-request:
  usb-mtp: replace the homebrew write with qemu_write_full
  usb-mtp: breakup MTP write into smaller chunks
  usb-mtp: Reallocate buffer in multiples of MTP_WRITE_BUF_SZ
  usb: implement XHCI underrun/overrun events
  usb: XHCI shall not halt isochronous endpoints
  hw/usb: Fix LGPL information in the file headers
  usb: dev-mtp: close fd in usb_mtp_object_readdir()
  usb: assign unique serial numbers to hid devices

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-31 12:53:21 +00:00
Peter Maydell
460da1005d Pull request

Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging

Pull request

User-visible changes:
 * The new qemu-trace-stap script makes it convenient to collect traces without
   writing SystemTap scripts.  See "man qemu-trace-stap" for details.

# gpg: Signature made Wed 30 Jan 2019 03:17:57 GMT
# gpg:                using RSA key 9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35  775A 9CA4 ABB3 81AB 73C8

* remotes/stefanha/tags/tracing-pull-request:
  trace: rerun tracetool after ./configure changes
  trace: improve runstate tracing
  trace: add ability to do simple printf logging via systemtap
  trace: forbid use of %m in trace event format strings
  trace: enforce that every trace-events file has a final newline
  display: ensure qxl log_buf is a nul terminated string

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-31 12:03:40 +00:00
Peter Maydell
006dce5f8f Machine queue, 2019-01-28

Merge remote-tracking branch 'remotes/ehabkost/tags/machine-next-pull-request' into staging

Machine queue, 2019-01-28

* Fix small leak on NUMA code
* Improve memory backend error messages

# gpg: Signature made Mon 28 Jan 2019 19:42:40 GMT
# gpg:                using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/machine-next-pull-request:
  hostmem: add more information in error messages
  numa: Fixed the memory leak of numa error message

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-31 11:20:26 +00:00
Max Reitz
908b30164b iotests: Allow 147 to be run concurrently
To do this, we need to allow creating the NBD server on various ports
instead of a single one (which may not even work if you run just one
instance, because something else entirely might be using that port).

So we just pick a random port in [32768, 32768 + 1024) and try to create
a server there.  If that fails, we just retry until something sticks.

For the IPv6 test, we need a different range, though (just above that
one).  This is because "localhost" resolves to both 127.0.0.1 and ::1.
This means that if you bind to it, it will bind to both, if possible, or
just one if the other is already in use.  Therefore, if the IPv6 test
has already taken [::1]:some_port and we then try to take
localhost:some_port, that will work -- only the second server will be
bound to 127.0.0.1:some_port alone and not [::1]:some_port in addition.
So we have two different servers on the same port, one for IPv4 and one
for IPv6.

But when we then try to connect to the server through
localhost:some_port, we will always end up at the IPv6 one (as long as
it is up), and this may not be the one we want.

Thus, we must make sure not to create an IPv6-only NBD server on the
same port as a normal "dual-stack" NBD server -- which is done by using
distinct port ranges, as explained above.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-4-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-01-31 00:44:55 +01:00
Max Reitz
dfadac9a37 iotests: Bind qemu-nbd to localhost in 147
By default, qemu-nbd binds to 0.0.0.0.  However, we then proceed to
connect to "localhost".  Usually, this works out fine; but if this test
is run concurrently, some other test function may have bound a different
server to ::1 (on the same port -- you can bind different servers to the
same port, as long as one is on IPv4 and the other on IPv6).

So running qemu-nbd works, it can bind to 0.0.0.0:NBD_PORT.  But
potentially a concurrent test has successfully taken [::1]:NBD_PORT.  In
this case, trying to connect to "localhost" will lead us to the IPv6
instance, where we do not want to end up.

Fix this by just binding to "localhost".  This will make qemu-nbd error
out immediately and not give us cryptic errors later.

(Also, it will allow us to just try a different port as of a future
patch.)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-01-31 00:44:49 +01:00
Max Reitz
e1e6eccd49 iotests.py: Add qemu_nbd_pipe()
In some cases, we may want to deal with qemu-nbd errors (e.g. by
launching it in a different configuration until it no longer throws
any).  In that case, we do not want its output ending up in the test
output.

It may still be useful for handling the error, though, so add a new
function that works basically like qemu_nbd(), only that it returns the
qemu-nbd output instead of making it end up in the log.  In contrast to
qemu_img_pipe(), it does still return the exit code as well, though,
because that is even more important for error handling.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20181221234750.23577-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-01-31 00:44:29 +01:00