Now that we can report errors in the realize function, let's replace
the fprintf() and hw_error() calls with error_setg().
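A hedged sketch of what such a conversion typically looks like (the
variable names are placeholders, not the actual hunk):

    /* Sketch only: propagate the failure through the caller-provided errp
     * instead of aborting with hw_error(); 'bios_name' is a placeholder. */
    if (bios_size == -1) {
        error_setg(errp, "could not load bootloader '%s'", bios_name);
        return;
    }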
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's move the QOM definitions of the IPL device into ipl.h, replace
"s390-ipl" with a proper type define, turn it into a TYPE_DEVICE
and remove the unneeded class definition.
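The definitions moved to ipl.h would look roughly like this (illustrative
sketch following the usual QOM conventions, not necessarily the exact hunk):

    #define TYPE_S390_IPL "s390-ipl"
    #define S390_IPL(obj) \
        OBJECT_CHECK(S390IPLState, (obj), TYPE_S390_IPL)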
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
For TYPE_DEVICE, the dc->reset() function is not called on system resets
yet. Until that is changed, we have to register a reset handler manually.
Let's provide qdev_reset_all_fn(), which can be used directly, just like
the reset handler that is already available for qbus.
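Illustrative use, mirroring the existing qbus_reset_all_fn() pattern (the
surrounding realize code is assumed):

    /* Register a handler so the device still gets reset on system reset */
    qemu_register_reset(qdev_reset_all_fn, DEVICE(dev));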
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Because of a flaw in the El Torito specification, the boot image size
needs to be verified: the boot catalog entry's size field is only 16 bits
wide and specifies the size in 512-byte units, so the boot image size
cannot exceed roughly 32 MiB.
We actually search for the file to get its real size.
This is done by scanning the ISO directory tree for the ISO block number
and reading the file size from the directory entry.
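For reference, the limit follows directly from the field width (a sketch,
not code from the patch):

    /* 16-bit count of 512-byte sectors: 0xffff * 512 = 33553920 bytes,
     * i.e. just under 32 MiB. */
    #define MAX_BOOT_IMAGE_SIZE ((uint32_t)UINT16_MAX * 512)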
Signed-off-by: Maxim Samoylov <max7255@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
A boot entry is considered compatible if the boot image is a Linux kernel
with a matching s390 Linux magic string.
Empty boot images with sector_count == 0 are considered broken.
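A hedged sketch of the kind of check this implies (the magic string, offset
and variable names are placeholders; the real check lives in the s390-ccw
bios sources):

    if (sector_count == 0) {
        return false;               /* empty entry, considered broken */
    }
    if (memcmp(image + linux_magic_offset, linux_magic, magic_len) != 0) {
        return false;               /* not an s390 Linux kernel */
    }
    return true;                    /* compatible boot entry */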
Signed-off-by: Maxim Samoylov <max7255@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch enables booting from media formatted according to the
ISO-9660 and El Torito bootable CD specifications.
We try to boot from the device as ISO-9660 media when SCSI IPL has failed.
The first boot catalog entry with the bootable flag set is used.
Only ISO-9660 media with the default 2048-byte sector size is supported.
Signed-off-by: Maxim Samoylov <max7255@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's always adjust the sector number to be read using the current
virtio block size value.
This prepares for the implementation of IPL from ISO-9660 media.
Signed-off-by: Maxim Samoylov <max7255@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
On hugetlbfs, CMMA will not be useful, as every ESSA instruction will trap.
So don't offer CMMA to guests with a hugepage backing.
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
By replacing memory_region_init_ram with memory_region_allocate_system_memory
we gain goodies like mem-path backends. This will allow us to use hugetlbfs
once the kernel supports it.
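Once that is the case, guest RAM could be backed by hugetlbfs simply by
starting QEMU with an option like -mem-path /dev/hugepages (illustrative
invocation).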
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We were missing some files, and some files should get an additional
entry to add the people actually looking after the code.
Reported-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
On s390x, each pci device has its own iommu, which is only properly
setup in qemu once the mpcifc instruction used to register the
translation table has been intercepted. Therefore, for a pci device that
is not configured or has not been initialized, proper translation is
neither required nor possible. Moreover, we may not have a host bridge
device ready yet.
This was exposed by a recent vfio change that triggers iommu translation
during the initialization of the vfio pci device. Let's do an early exit
in that case.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We keep the device's sense data in a byte array (following the
architecture), but the ecws are an array of 32-bit values. If we
just blindly copy the values, the sense data will change from
de-facto BE data to de-facto cpu-endian data, which means we end
up doing an incorrect conversion on LE hosts.
Let's just explicitly convert to cpu-endianness while assembling
the irb.
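A hedged sketch of the idea (field names such as sense_data are
assumptions; the exact hunk may differ):

    /* The sense bytes are already BE; undo the later cpu_to_be32 so the
     * guest ends up seeing the original byte sequence. */
    memcpy(irb.ecw, sch->sense_data, sizeof(sch->sense_data));
    for (i = 0; i < ARRAY_SIZE(irb.ecw); i++) {
        irb.ecw[i] = be32_to_cpu(irb.ecw[i]);
    }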
Reported-by: Andy Lutomirski <luto@kernel.org>
Tested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
For append file open modes, use FILE_APPEND_DATA as the desired access
for writing at the end of the file.
Version 2:
For the "a+", "ab+", and "a+b" modes, use FILE_APPEND_DATA|GENERIC_READ.
ORing in GENERIC_READ makes reads start at the beginning of the file; all
writes will still append to the end of the file.
Added white space to maintain the alignment of guest_file_open_modes[].
Signed-off-by: Kirk Allan <kallan@suse.com>
Cc: qemu-stable@nongnu.org
* use FILE_GENERIC_APPEND macro, which provides same semantics as
FILE_APPEND_DATA, but retains other flags from GENERIC_WRITE
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Merge remote-tracking branch 'mreitz/tags/pull-block-for-kevin-2015-11-11' into queue-block
Block patches from 2015-10-26 until 2015-11-11.
# gpg: Signature made Wed Nov 11 17:00:50 2015 CET using RSA key ID E838ACAD
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
* mreitz/tags/pull-block-for-kevin-2015-11-11:
iotests: Check for quorum support in test 139
qcow2: Fix qcow2_get_cluster_offset() for zero clusters
iotests: Add tests for the x-blockdev-del command
block: Add 'x-blockdev-del' QMP command
block: Add blk_get_refcnt()
mirror: block all operations on the target image during the job
qemu-iotests: fix -valgrind option for check
qemu-iotests: fix cleanup of background processes
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The quorum driver is always built in, but it is disabled during
run-time if there's no SHA256 support available (see commit e94867e).
This patch skips the quorum test in iotest 139 in that case.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 1447172891-20410-1-git-send-email-berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
When searching for contiguous zero clusters, we only need to check the
cluster type. Before this patch, an increasing offset (L2E_OFFSET_MASK)
was expected, so that the function never returned more than a single
zero cluster in practice. This patch fixes it to actually return as many
contiguous zero clusters as it can.
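A hedged sketch of the counting loop after the fix (names are close to,
but not necessarily identical with, the actual qcow2 code):

    /* For zero clusters only the type matters; there is no offset to chain. */
    for (i = 0; i < nb_clusters; i++) {
        uint64_t entry = be64_to_cpu(l2_table[i]);
        if (qcow2_get_cluster_type(entry) != QCOW2_CLUSTER_ZERO) {
            break;
        }
    }
    return i;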
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1446657384-5907-1-git-send-email-kwolf@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This command is still experimental, hence the name.
This is the companion to 'blockdev-add'. It allows deleting a
BlockBackend with its associated BlockDriverState tree, or a
BlockDriverState that is not attached to any backend.
In either case, the command fails if the reference count is greater
than 1 or the BlockDriverState has any parents.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 6cfc148c77aca1da942b094d811bfa3fcf7ac7bb.1446475331.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This function returns the reference count of a given BlockBackend.
For convenience, it returns 0 if the BlockBackend pointer is NULL.
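A function this small presumably reduces to something like (sketch):

    int blk_get_refcnt(BlockBackend *blk)
    {
        return blk ? blk->refcnt : 0;
    }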
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: dfdd8a17dbe3288842840636d2cfe5bb895abcb0.1446475331.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
There's nothing preventing the target image from being used by other
operations during the 'drive-mirror' job, so we should block them all
until the job is done.
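A hedged sketch of the mechanism, using the existing op-blocker helpers
(the exact placement within the mirror job code is assumed):

    /* While the job runs, block every operation on the target ... */
    bdrv_op_block_all(s->target, s->common.blocker);

    /* ... and lift the blocker again when the job completes. */
    bdrv_op_unblock_all(s->target, s->common.blocker);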
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 82b88fd04cde918a08a6f9a4ab85626d7fd7b502.1446475331.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit 934659c switched the iotests to run qemu-io from a bash subshell,
in order to catch segfaults. This method is incompatible with the
current valgrind_qemu_io() bash function.
Move the valgrind usage into the exec subshell in _qemu_io_wrapper(),
while making sure the original return value is passed back to the
caller.
Update test output for tests 039, 061, and 137 as it looks for the
specific subshell command when the process is terminated.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Message-id: 0066fd85d26ca641a1c25135ff2479b7985701cf.1446232490.git.jcody@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Commit 934659c switched the iotests to run qemu and qemu-nbd from a bash
subshell, in order to catch segfaults. Unfortunately, this means the
process PID cannot be captured via '$!'. We stopped killing qemu and
qemu-nbd processes, leaving a lot of orphaned, running qemu processes
after executing iotests.
Since the process is using exec in the subshell, the PID is the
same as the subshell PID.
Track these PIDs for cleanup using pidfiles in the $TEST_DIR. Only
track the qemu PID, however, if requested - not all usage requires
killing the process.
Reported-by: John Snow <jsnow@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Message-id: 9e4f958b3895b7259b98d845bb46f000ba362869.1446232490.git.jcody@redhat.com
[mreitz@redhat.com: Replaced '! -z "..."' by '-n "..."']
Signed-off-by: Max Reitz <mreitz@redhat.com>
This is simpler now that the driver has been converted to coroutines.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Make sure there's no trailing garbage, e.g.
"64k-whatever-i-want-here"
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
cvtnum() returns int64_t: we should not be storing this
result in an int.
In a few cases, we need an extra sprinkling of error handling
where we expect to pass this number on towards a function that
expects something smaller than int64_t.
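A hedged sketch of the pattern (the error message and variable names are
illustrative):

    int64_t len = cvtnum(arg);
    if (len < 0 || len > INT_MAX) {
        printf("invalid length '%s'\n", arg);
        return 0;
    }
    count = (int)len;   /* now known to fit */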
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This test checks that it is not possible to create a snapshot if the
requested overlay node is a BDS which does not support backing images.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch removes the inner quotation marks in all cases like this:
cmd=" ... "${variable}" ... "
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There are two ways to check for I/O limits in a BlockDriverState:
- bs->throttle_state: if this pointer is not NULL, it means that this
BDS is member of a throttling group, its ThrottleTimers structure
has been initialized and its I/O limits are ready to be applied.
- bs->io_limits_enabled: if true it means that the throttle_state
pointer is valid _and_ the limits are currently enabled.
The latter is used in several places to check whether a BDS has I/O
limits configured, but what it really checks is whether requests
are being throttled or not. For example, io_limits_enabled can be
temporarily set to false in cases like bdrv_read_unthrottled() without
otherwise touching the throttling configuration of that BDS.
This patch replaces bs->io_limits_enabled with bs->throttle_state in
all cases where what we really want to check is the existence of I/O
limits, not whether they are currently enabled or not.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
throttle_group_unregister_bs() removes a BlockDriverState from its
throttling group and destroys the timers. This means that there must
be no pending throttled requests at that point (because it would be
impossible to complete them), so the caller has to drain them first.
At the moment throttle_group_unregister_bs() is only called from
bdrv_io_limits_disable(), which already takes care of draining the
requests, so there's nothing to worry about, but this patch makes
this invariant explicit in the documentation and adds the relevant
assertions.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The mirror job doesn't update its total length until
it has already started running, so we should treat a
zero job length as meaning 0%.
Otherwise, we may get divide-by-zero faults.
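In other words, the reporting side should special-case a zero length,
roughly (sketch, assuming the BlockJobInfo fields):

    int progress = info->len ? (int)(info->offset * 100 / info->len) : 0;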
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If we create a 12-byte buffer directly on the stack, there's
no guarantee that the 64-bit value we want to swap will be aligned, which
could cause errors due to undefined behavior.
Spotted with clang -fsanitize=undefined and observed in iotests 15, 26,
44, 115 and 121.
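One safe pattern, sketched with QEMU's byte-wise store helper ('value' and
the offset are illustrative, not the actual hunk):

    uint8_t buf[12];
    /* stq_be_p() works at any alignment, unlike casting &buf[4] to uint64_t*. */
    stq_be_p(buf + 4, value);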
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The 'block-commit' command needs the overlay image of 'top' to
be opened in read-write mode in order to update the backing file
string. If 'top' is not the active layer or its backing file then its
overlay needs to be reopened during the block job.
This is a test case for that scenario.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
'block-commit' needs write access to two different nodes of the chain:
- 'base', because that's where the data is written to.
- the overlay of 'top', because it needs to update the backing file
string to point to 'base' after the operation.
Both images have to be opened in read-write mode, and commit_start()
takes care of reopening them if necessary.
With the current implementation, however, when overlay_bs is reopened
in read-write mode it has the side effect of making 'base' read-only
again, eventually making 'block-commit' fail.
This needs to be fixed in bdrv_reopen(), but until we get to that it
can be worked around simply by swapping the order of base and
overlay_bs in the reopen queue.
In order to reproduce this bug, overlay_bs needs to be initially in
read-only mode. That is: the 'top' parameter of 'block-commit' cannot
be the active layer nor its immediate backing chain.
Cc: qemu-stable@nongnu.org
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
One of the limitations of the 'blockdev-snapshot-sync' command is that
it does not allow passing BlockdevOptions to the newly created
snapshots, so they are always opened using the default values.
Extending the command to allow passing options is not a practical
solution because there is overlap between those options and some of
the existing parameters of the command.
This patch introduces a new 'blockdev-snapshot' command with a simpler
interface: it just takes two references to existing block devices that
will be used as the source and target for the snapshot.
Since the main difference between the two commands is that one of them
creates and opens the target image, while the other uses an already
opened one, the bulk of the implementation is shared.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Passing an empty string allows opening an image but not its backing
file. This was already described in the API documentation, only the
implementation was missing.
This is useful for creating snapshots using images opened with
blockdev-add, since they are not supposed to have a backing image
before the operation.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We will introduce the 'blockdev-snapshot' command that will require
its own struct for the parameters, so we need to rename this one in
order to avoid name clashes.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The 'snapshot-node-name' parameter of blockdev-snapshot-sync allows
setting the node name of the image that is going to be created.
Before creating the image, external_snapshot_prepare() checks that the
name is not already being used. The check is however incomplete since
it only considers existing node names, but node names must not clash
with device IDs either because they share the same namespace.
If the user attempts to create a snapshot using the name of an
existing device for the 'snapshot-node-name' parameter the operation
will eventually fail, but only after the new image has been created.
This patch replaces bdrv_find_node() with bdrv_lookup_bs() to extend
the check to existing device IDs, and thus detect possible name
clashes before the new image is created.
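A hedged sketch of the check (error wording illustrative):

    /* bdrv_lookup_bs() searches both device IDs and node names */
    if (bdrv_lookup_bs(snapshot_node_name, snapshot_node_name, NULL)) {
        error_setg(errp, "node-name '%s' is already in use", snapshot_node_name);
        return;
    }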
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Expose the new read-only-mode option of 'blockdev-change-medium' for the
'change' HMP command.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add an option to qmp_blockdev_change_medium() which allows changing the
read-only status of the block device whose medium is changed.
Some drives do not have an inherently fixed read-only status; for
instance, floppy disks can be set read-only or writable independently of
the drive. It may therefore be useful for some users to be able to change
the read-only status of a block device when changing the medium.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Use separate code paths for the two overloaded functions of the 'change'
HMP command, and invoke the 'blockdev-change-medium' QMP command if used
on a block device (by calling qmp_blockdev_change_medium()).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Introduce a new QMP command 'blockdev-change-medium' which is intended
to replace the 'change' command for block devices. The existing function
qmp_change_blockdev() is accordingly renamed to
qmp_blockdev_change_medium().
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
blk_dev_change_media_cb() is called for all potential tray movements;
however, it is possible to request closing the tray without anything
actually happening (e.g. on a floppy disk drive without a medium).
Thus, the actual tray status should be inquired before sending a
tray-moved event (and an event should be sent whenever the status
changed).
Checking @load is now superfluous; it was necessary because it was
possible to change a medium without having explicitly opened the tray
and closed it again (or it might have been possible, at least). This is
no longer possible, though.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Implement 'change' on block devices by calling blockdev-open-tray,
blockdev-remove-medium, blockdev-insert-medium (a variation of that
which does not need a node-name) and blockdev-close-tray.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Implement 'eject' by calling blockdev-open-tray and
blockdev-remove-medium.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>