Interrupt conditions that occur while masked are not signaled
when they are later unmasked.
The fix is to raise/lower the IRQ pins whenever IMASK is changed.
To avoid problems like this in the future, consolidate the
IRQ pin update logic in one function.
Also fix the probable typo "IEVENT_TXF | IEVENT_TXF",
and update the IRQ pins on reset.
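As a rough illustration of the consolidated update (an assumed sketch; the
struct fields and all event masks other than IEVENT/IMASK/IEVENT_TXF are
made up here and do not match hw/net/fsl_etsec exactly):

    /* Illustrative sketch only: recompute and drive the IRQ pins from the
     * current IEVENT/IMASK state, so that writing either register has the
     * same effect. */
    static void etsec_update_irq(ExampleEtsecState *s)
    {
        uint32_t pending = s->regs[IEVENT] & s->regs[IMASK];

        qemu_set_irq(s->tx_irq,  !!(pending & (IEVENT_TXF | IEVENT_TXB)));
        qemu_set_irq(s->rx_irq,  !!(pending & (IEVENT_RXF | IEVENT_RXB)));
        qemu_set_irq(s->err_irq, !!(pending & IEVENT_ERR_MASK));
    }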
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Recent cleanup in commit a028dd423e dropped the ICPStateClass::reset
handler. It is now up to child ICP classes to call the DeviceClass::reset
handler of the parent class, thanks to device_class_set_parent_reset().
This is a better object programming pattern, but unfortunately it causes
QEMU to crash during CPU hotplug:
(qemu) device_add host-spapr-cpu-core,id=core1,core-id=1
Segmentation fault (core dumped)
When the hotplug path tries to reset the ICP device, we end up calling:
    static void icp_kvm_reset(DeviceState *dev)
    {
        ICPStateClass *icpc = ICP_GET_CLASS(dev);

        icpc->parent_reset(dev);
but icpc->parent_reset is NULL... This happens because icp_kvm_class_init()
calls:
    device_class_set_parent_reset(dc, icp_kvm_reset,
                                  &icpc->parent_reset);
but dc->reset, i.e., DeviceClass::reset for the TYPE_ICP type, is
itself NULL.
This patch hence sets DeviceClass::reset for the TYPE_ICP type to
point to icp_reset(). It then registers a reset handler that calls
DeviceClass::reset. If the ICP subtype has configured its own reset
handler with device_class_set_parent_reset(), this ensures it will
be called first and it can then call ICPStateClass::parent_reset
safely. This fixes the reset path for the TYPE_KVM_ICP type, which
is the only subtype that defines its own reset function.
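An illustrative sketch of how the pieces fit together (not the exact patch
hunks):

    /* Sketch only: give TYPE_ICP a real DeviceClass::reset so that
     * device_class_set_parent_reset() in a subtype class_init has
     * something to save as parent_reset. */
    static void icp_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);

        dc->reset = icp_reset;   /* now non-NULL for all ICP subtypes */
    }

    static void icp_kvm_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);
        ICPStateClass *icpc = ICP_CLASS(klass);

        /* Saves the parent icp_reset() into icpc->parent_reset, so the
         * icp_kvm_reset() quoted above no longer dereferences NULL. */
        device_class_set_parent_reset(dc, icp_kvm_reset,
                                      &icpc->parent_reset);
    }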
Reported-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Fixes: a028dd423e
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This function was introduced between v2.11 and v2.12 to replace obsolete
ways of specifying the NUMA nodes for DIMMs. It's used to find the correct
node for an LMB, by locating which DIMM object it lies within.
Unfortunately, one of the checks is inverted, so we check whether the
address is less than two different things, rather than actually checking
a range. This introduced a regression: after a reboot, QEMU advertises
incorrect node information for memory to the guest.
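The range check has roughly this shape (an illustrative sketch; the names
do not match the spapr code exactly):

    /* Illustrative only: find the NUMA node of the DIMM whose
     * [addr, addr + size) range contains @addr, or return -1.  The bug
     * was using "addr < dimms[i].addr" instead of "addr >= dimms[i].addr",
     * which also matches addresses below the DIMM. */
    static int node_for_lmb(uint64_t addr, const ExampleDimm *dimms, int n)
    {
        for (int i = 0; i < n; i++) {
            if (addr >= dimms[i].addr &&
                addr < dimms[i].addr + dimms[i].size) {
                return dimms[i].node;
            }
        }
        return -1;
    }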
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
When the guest changes the address of the frame buffer we need to
refresh the screen to correctly display the new content. This fixes
display update problems when changing between screens on AmigaOS.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Ville Skyttä <ville.skytta@iki.fi>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180612065150.21110-1-ville.skytta@iki.fi
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We currently don't enforce that the sparse segments we detect during convert
are aligned. This leads to unnecessary and costly read-modify-write cycles,
either internally in QEMU or in the background on the storage device, as
nearly all modern filesystems and hardware use a 4k alignment internally.
This patch modifies is_allocated_sectors so that its *pnum result will
always end at an alignment boundary, which means that all requests end at
an alignment boundary. The start of all requests will also be aligned as
long as the results of get_block_status do not lead to an unaligned offset.
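A simplified sketch of the clamping idea (not the exact is_allocated_sectors
hunk; it assumes the alignment is a power of two, expressed in sectors):

    /* Simplified sketch: shrink a run of n sectors starting at sector_num
     * so that it ends on an alignment boundary, unless that would make the
     * run empty.  The real is_allocated_sectors code handles more cases. */
    static int64_t clamp_run_to_alignment(int64_t sector_num, int64_t n,
                                          int64_t alignment)
    {
        int64_t end = sector_num + n;
        int64_t aligned_end = QEMU_ALIGN_DOWN(end, alignment);

        if (aligned_end > sector_num) {
            end = aligned_end;      /* the request now ends on a boundary */
        }
        return end - sector_num;    /* new *pnum */
    }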
Converting an example image [1] to a raw device with a 4k sector size
currently triggers about 4600 additional 4k read requests in order to
perform a total of about 15000 write requests. With this patch the 4600
extra read requests are eliminated while the total number of write requests
stays constant.
[1] https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The current BDC VPD page (page 0xb1) is too short. This can be
seen by running sg_vpd from sg3_utils:
$ sg_vpd --page=bdc /dev/sda
Block device characteristics VPD page (SBC):
Block device characteristics VPD page length too short=8
By the SCSI spec, the expected size of the SBC page is 0x40.
There is no telling how the guest will behave with a shorter
message - it can ignore it, or worse, make (wrong)
assumptions.
This patch fixes the emulation by setting the size to 0x40.
This is the output of the previous sg_vpd command after
applying it:
$ sg_vpd --page=bdc /dev/sda -v
inquiry cdb: 12 01 b1 00 fc 00
Block device characteristics VPD page (SBC):
[PQual=0 Peripheral device type: disk]
Medium rotation rate is not reported
Product type: Not specified
WABEREQ=0
WACEREQ=0
Nominal form factor not reported
FUAB=0
VBULS=0
To improve readability, this patch also adds the VBULS value
explicitly and adds comments on the existing fields we're
setting.
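For reference, the resulting page has roughly this layout (a hedged sketch
with offsets relative to the start of the VPD page, not the exact scsi-disk.c
buffer handling):

    uint8_t page[0x40] = { 0 };      /* whole page is now 0x40 bytes       */

    page[1] = 0xb1;                  /* Block Device Characteristics page  */
    page[3] = 0x40 - 4;              /* page length = size - 4-byte header */
    /* bytes 4..5: medium rotation rate (0 = not reported)                 */
    /* byte 7 low nibble: nominal form factor (0 = not reported)           */
    /* byte 8: FUAB / VBULS flags, now set explicitly (both 0 here)        */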
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Test that we're rejecting what we ought to for the file,
host_device and host_cdrom drivers. Test that we're
seeing the deprecation message for block and character devices
on the file driver.
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Adjust each caller of raw_open_common to specify whether it expects
block and character devices or not. Tighten the expected file types upon
open in the common code and refuse types that are not expected.
This has two effects:
(1) Character and block devices are now considered deprecated for the
'file' driver, which expects only S_IFREG, and
(2) no file-posix driver (file, host_cdrom, or host_device) can open
directories any more.
I don't think there's a legitimate reason to open directories as if
they were files. This prevents QEMU from opening and attempting to probe
a directory inode, which can break in exciting ways. One of those ways
is lseek on ext4/xfs, which will return 0x7fffffffffffffff as the file
size instead of EISDIR. This can coax QEMU into responding with a
confusing "file too big" instead of "Hey, that's not a file".
See: https://bugs.launchpad.net/qemu/+bug/1739304/
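A minimal sketch of the kind of check this adds (illustrative names, not the
exact raw_open_common code, assuming the usual qemu/osdep.h environment):

    /* Illustrative sketch: reject unexpected inode types at open time.
     * "device_expected" would be true for host_device/host_cdrom callers
     * and false for the plain 'file' driver. */
    static int check_file_type(int fd, bool device_expected)
    {
        struct stat st;

        if (fstat(fd, &st) < 0) {
            return -errno;
        }
        if (S_ISDIR(st.st_mode)) {
            return -EISDIR;              /* never open/probe a directory */
        }
        if (!device_expected &&
            (S_ISBLK(st.st_mode) || S_ISCHR(st.st_mode))) {
            /* deprecated for the 'file' driver: warn but still allow */
            warn_report("host block/char device passed to the 'file' driver");
        }
        return 0;
    }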
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Explicitly enabling zero detection or compression suppresses copy
offloading during convert. Document it.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
197 is one example where _make_test_img is used twice without stopping
the NBD server in between. An error like this will occur:
@@ -26,9 +26,13 @@
=== Partial final cluster ===
+qemu-img: TEST_DIR/t.IMGFMT: Failed to get "resize" lock
+Is another process using the image?
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1024
+Failed to find an available port: Address already in use
read 1024/1024 bytes at offset 0
Patch _make_test_img to stop the old qemu-nbd before starting a new one,
which fixes this problem in 197, and similarly in 215.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This step was left behind by mistake. As the echoed text suggests, the
intention was to test two devices with the same image but with different
options; the behavior should be the same as with two separate QEMU
processes. Complete it.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The NSEvent class method scrollingDeltaY is only available
on Mac OS 10.7 and newer. Since QEMU supports Mac OS
10.5 and up, we need to use a method that is available
on those versions of Mac OS X. The deltaY method does
almost the same thing as scrollingDeltaY and is
available on Mac OS 10.5 and up, so we can replace
scrollingDeltaY with deltaY.
We only act on deltaY's value when it is non-zero,
because zero means either that the scrolling increment was
sufficiently fine that it was only reported in scrollingDeltaY,
or that the scrolling was horizontal.
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Message-id: 20180709150235.7573-1-programmingkidx@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweak commit message and comment a little]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Current and upcoming Mesa releases rely on a shader disk cache. It uses
a thread job queue with low priority, set with
sched_setscheduler(SCHED_IDLE). However, that syscall is rejected by
the "resourcecontrol" QEMU seccomp filter.
Since it should be safe to allow lowering thread priority, let's allow
setting threads to the SCHED_IDLE scheduling policy.
Related to:
https://bugzilla.redhat.com/show_bug.cgi?id=1594456
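A hedged sketch of the rule shape using the libseccomp API directly (QEMU
builds its filter from its own syscall table, so this is illustrative rather
than the actual qemu-seccomp.c change):

    /* Illustrative only: keep rejecting sched_setscheduler() for the
     * "resourcecontrol" group, except when the requested policy (arg 1)
     * is SCHED_IDLE, which merely lowers a thread's priority. */
    static int add_sched_rule(scmp_filter_ctx ctx)
    {
        return seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM),
                                SCMP_SYS(sched_setscheduler), 1,
                                SCMP_A1(SCMP_CMP_NE, SCHED_IDLE));
    }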
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Acked-by: Eduardo Otubo <otubo@redhat.com>
PCI devices needing a ROM allocate an optional MemoryRegion with
pci_add_option_rom(). pci_del_option_rom() does the cleanup when the
device is destroyed. The only action taken by this routine is to call
vmstate_unregister_ram() which clears the id string of the optional
ROM RAMBlock and now also flags the RAMBlock as non-migratable. The
latter was recently added by commit b895de5027 ("migration: discard
non-migratable RAMBlocks").
VFIO devices do their own loading of the PCI option ROM in
vfio_pci_size_rom(). The memory region is switched to an I/O region
and the PCI attribute 'has_rom' is set but the RAMBlock of the ROM
region is not allocated. When the associated PCI device is deleted,
pci_del_option_rom() calls vmstate_unregister_ram() which tries to
flag a NULL RAMBlock, leading to a SEGV.
It seems that 'has_rom' was set to have memory_region_destroy()
called, but since commit 469b046ead ("memory: remove
memory_region_destroy") this is not necessary anymore as the
MemoryRegion is freed automagically.
Remove the PCIDevice 'has_rom' attribute setting in vfio.
Fixes: b895de5027 ("migration: discard non-migratable RAMBlocks")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
qmp_error_response() will free the given error. Fix double-free in
later qmp_request_free().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180705164201.9853-1-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Fixes: 1cc3747152
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The macro CMMA_BLOCK_SIZE was defined but not used; a hardcoded
value was used in the code instead.
This patch fixes the value of CMMA_BLOCK_SIZE and uses it in the
appropriate place in the code, and it fixes another case of a hardcoded
value in the KVM backend, replacing it with the more appropriate
constant KVM_S390_CMMA_SIZE_MAX.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Message-Id: <1530787170-3101-1-git-send-email-imbrenda@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Truncation is the last operation to be converted from open-coded request
handling to reusing the helpers. This time the permission check in prepare
has to adapt to the new caller: it checks a different permission bit, and
doesn't trigger the before-write notifier.
Also, truncation should always trigger a bs->total_sectors update and in
turn call the parent resize_cb. Update the condition in the finish helper,
too.
The duplicated bs->read_only check before calling
bdrv_co_write_req_prepare() is intentional, so that we can be more
informative with the error message, as bdrv_co_write_req_prepare() doesn't
take an Error parameter.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If we are growing the image and potentially using preallocation for the
new area, we need to make sure that no write requests are made to the
"preallocated" area, which is [@old_size, @offset), not
[@offset, @offset * 2 - @old_size). For example, when growing from
@old_size = 1 MB to @offset = 3 MB, the area to protect is [1 MB, 3 MB),
not [3 MB, 5 MB).
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This brings the request handling logic in line with write and discard,
fixing write_gen, resize_cb, dirty bitmaps and image size refreshing.
The last of these issues broke iotest case 222, which is now fixed.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reuse the new bdrv_co_write_req_prepare/finish helpers. The variation
here is that discard requests don't affect bs->wr_highest_offset, and they
cannot extend the image.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It was accidentally added before MIG_CMD_PACKAGED, so it might break
command compatibility when we run a postcopy migration between old and new
QEMUs. Fix that up quickly before the QEMU 3.0 release.
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710094424.30754-1-peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We dump error messages when a network failure happens. We should avoid
having those messages dumped when running the tests:
$ ./tests/migration-test -p /x86_64/migration/postcopy/recovery
/x86_64/migration/postcopy/recovery: qemu-system-x86_64: check_section_footer: Read section footer failed: -5
qemu-system-x86_64: Detected IO failure for postcopy. Migration paused.
qemu-system-x86_64: Detected IO failure for postcopy. Migration paused.
OK
After the patch:
$ ./tests/migration-test -p /x86_64/migration/postcopy/recovery
/x86_64/migration/postcopy/recovery: OK
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-11-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Test the postcopy recovery procedure by emulating a network failure
using the migrate-pause command.
Tested-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-10-peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
It's generalized from wait_for_migration_complete() to allow us to wait
for any migration status besides failure.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-9-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Introduce helpers to query migration states and use them.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-8-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
For example, we can pass in '"resume": true' to resume a migration.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-7-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Separate the old postcopy UNIX socket test into three steps and provide a
helper for each step. With these helpers, we can do more complicated
tests like postcopy recovery, while keeping the code shared.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-6-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fix up merge with 2e295789 / Skip tests for ppc tcg
Two problems exist when a write request that enlarges the image (i.e. a
write beyond EOF) finishes:
1) the parent is not notified about the size change;
2) the dirty bitmap is not resized although we try to set the dirty bits.
Fix them the same way bdrv_co_truncate does.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
As a mechanical refactoring patch, this is the first step towards
unified and more correct write code paths. This is helpful because
multiple BlockDriverState fields need to be updated after modifying
image data, and that is hard to maintain in multiple places such as the
copy offload, discard and truncate paths.
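Schematically, every data-modifying path ends up bracketing the driver call
with a shared prepare/finish pair (the names and signatures below are
illustrative, not the exact QEMU prototypes; the real helpers are the
bdrv_co_write_req_prepare/finish pair used by the rest of this series):

    static int coroutine_fn do_modifying_request(BdrvChild *child,
                                                 int64_t offset,
                                                 int64_t bytes)
    {
        BdrvTrackedRequest req;
        int ret;

        ret = write_req_prepare(child, offset, bytes, &req);
        if (ret < 0) {                  /* perm checks, request tracking */
            return ret;
        }

        ret = do_the_actual_io(child->bs, offset, bytes);

        /* One place updates write_gen, wr_highest_offset, dirty bitmaps
         * and parent resize callbacks instead of every caller doing it. */
        write_req_finish(child, offset, bytes, &req, ret);
        return ret;
    }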
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This matches the types used for byte counts in the rest of the block
layer. In the case of bdrv_co_truncate, new_bytes can be as large as the
image size, which probably doesn't fit in a 32-bit int.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Other I/O functions already take a BdrvChild pointer in their API, so
make discard do the same. This makes it possible to initiate the same
permission checks before doing I/O, and much easier to share the
helper functions for this, which will be added and used by the write,
truncate and copy range paths.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add a few trace points that can help reveal what is happening in a copy
offloading I/O path.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Within one module, trace points usually have a common prefix named
after the module. paio_submit and paio_submit_co are the only two
trace points so far in the two file protocol drivers. As we are adding
more, having a common prefix here is better so that trace points can be
enabled with a glob. Rename them.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This reverts commit a7aff6dd10.
Hold off removing this for one more QEMU release (the current libvirt
release still uses it).
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This reverts commit eae3bd1eb7.
Reverted to avoid conflicts for geometry options revert.
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This reverts commit b008326744.
Hold off removing this for one more QEMU release (the current libvirt
release still uses it).
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This reverts commit 6266e900b8.
Some deprecated -drive options were still in use by libvirt, only
fixed with libvirt commit b340c6c614 ("qemu: format serial and geometry
on frontend disk device"), which is not yet in any released version
of libvirt.
So let's hold off removing the deprecated options for one more QEMU
release.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
These two states will be missing when doing "query-migrate" on the
destination VM. Add them so that we can get the query results
as expected.
as expected.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-5-peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The calculation of the size of the received bitmap is incorrect for
postcopy recovery. We want the size to cover all the valid bits in the
bitmap, so we should use DIV_ROUND_UP() instead of a plain division.
For example, a RAMBlock with size=4K (which contains only one single 4K
page) will have nbits=1; then nbits/8=0, so the real bitmap won't be
sent to the source at all.
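A tiny illustration of the difference (DIV_ROUND_UP() is the existing QEMU
macro; the variables are only for the example):

    /* Example values for a 4K RAMBlock with 4K pages. */
    size_t nbits = 1;                        /* one valid bit in the bitmap */
    size_t wrong = nbits / 8;                /* = 0: nothing would be sent  */
    size_t right = DIV_ROUND_UP(nbits, 8);   /* = 1: covers the valid bit   */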
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-4-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We were checking against -EIO, assuming that it would cover all I/O
failures. But actually it does not. One example is that in
qemu_loadvm_section_start_full() we have tons of places that will
return -EINVAL even if the error is caused by I/O failures on the
network.
Let's loosen the recovery check logic here to cover all error cases by
removing the explicit check against -EIO. After all, we won't lose
anything here if any other kind of failure happens.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20180710091902.28780-3-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>