tg->any_timer_armed[] must be cleared when detaching pending timers from
the AioContext. Failure to do so leads to hung I/O because it looks
like there are still timers pending when in fact they have been removed.
Other ThrottleGroupMembers might have requests pending too, so it's
necessary to schedule the next TGM so that it can set a timer.
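As a minimal sketch of the detach path (the types and helpers here are
simplified stand-ins for the ones in block/throttle-groups.c, not
QEMU's actual code):

    #include <stdbool.h>

    /* Simplified stand-ins for ThrottleGroup/ThrottleGroupMember. */
    typedef struct ThrottleGroupMember ThrottleGroupMember;
    typedef struct {
        bool any_timer_armed[2];      /* one flag per direction (rd/wr) */
    } ThrottleGroup;

    bool timer_pending_for(ThrottleGroupMember *tgm, int dir);
    void schedule_next_request(ThrottleGroupMember *tgm, int dir);

    /* Before a member's pending timers are detached from the
     * AioContext, clear the group-wide flag and kick the next member
     * so it can arm a timer of its own; otherwise the group keeps
     * believing a timer is armed and requests hang forever. */
    void detach_timers_sketch(ThrottleGroup *tg, ThrottleGroupMember *tgm)
    {
        for (int dir = 0; dir < 2; dir++) {
            if (timer_pending_for(tgm, dir)) {
                tg->any_timer_armed[dir] = false;
                schedule_next_request(tgm, dir);
            }
        }
        /* ...then actually detach the timers from the AioContext... */
    }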
This patch fixes hung I/O when QEMU is launched with drives that are in
the same throttling group:
(guest)$ dd if=/dev/zero of=/dev/vdb oflag=direct bs=512 &
(guest)$ dd if=/dev/zero of=/dev/vdc oflag=direct bs=512 &
(qemu) stop
(qemu) cont
...I/O is stuck...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171116112150.27607-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Merge remote-tracking branch 'remotes/stefanberger/tags/pull-tpm-2017-11-15-1' into staging
Merge tpm 2017/11/15 v1
# gpg: Signature made Wed 15 Nov 2017 11:51:47 GMT
# gpg: using RSA key 0x75AD65802A0B4211
# gpg: Good signature from "Stefan Berger <stefanb@linux.vnet.ibm.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B818 B9CA DF90 89C2 D5CE C66B 75AD 6580 2A0B 4211
* remotes/stefanberger/tags/pull-tpm-2017-11-15-1:
tpm_tis: Return 0 for every register in case of failure mode
tpm_tis: Return TPM_VERSION_UNSPEC in case of BE failure
tpm-emulator: protect concurrent ctrl_chr access
specs: Extend TPM spec with TPM emulator description
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With SDL 2.0.6, calling SDL_ShowWindow during SDL_WINDOWEVENT_HIDDEN
blocks all subsequent display updates.
Instead of trying to override the change, just update the scon->hidden
flag.
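A self-contained sketch of the resulting event handling (the console
struct is reduced to the one field this needs):

    #include <SDL.h>
    #include <stdbool.h>

    /* Minimal stand-in for QEMU's per-console state. */
    struct sdl2_console {
        bool hidden;
    };

    /* Record the new visibility instead of calling SDL_ShowWindow(),
     * which blocks further display updates with SDL >= 2.0.6. */
    static void handle_windowevent(struct sdl2_console *scon,
                                   const SDL_WindowEvent *ev)
    {
        switch (ev->event) {
        case SDL_WINDOWEVENT_SHOWN:
            scon->hidden = false;
            break;
        case SDL_WINDOWEVENT_HIDDEN:
            scon->hidden = true;
            break;
        }
    }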
Signed-off-by: Jindrich Makovicka <makovick@gmail.com>
Message-Id: <20171112193032.9724-2-makovick@gmail.com>
This is a partial revert of d3f3a0f453,
which in turn is a workaround for an SDL bug. The bug is fixed in SDL 2.0.6,
see https://bugzilla.libsdl.org/show_bug.cgi?id=3410
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Prevents displaying a doubled mouse pointer when moving the pointer
to the screen edges in fullscreen mode.
Signed-off-by: Jindrich Makovicka <makovick@gmail.com>
Message-Id: <20171112193032.9724-8-makovick@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Also use a proper enum parameter for SDL_ShowCursor
Signed-off-by: Jindrich Makovicka <makovick@gmail.com>
Message-Id: <20171112193032.9724-4-makovick@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
The minus sign after << causes the shell to strip only
leading tabs, not spaces.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20171110090354.29608-1-kraxel@redhat.com>
Fixes: 40bf8e9aed
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes a crash caused by picking the wrong memory region in
address_space_lookup_region seen with client code accessing a device
model that uses alias memory regions. The expensive part of
address_space_lookup_region is phys_page_find anyway; performance-wise
it is okay to repeat the subsequent subpage lookup.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20171114225941.072707456B5@zero.eik.bme.hu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than returning ~0, return 0 for every register in case of failure
mode. Returning 0 is a better indication that there's no device there; it
avoids SeaBIOS detecting a device and getting stuck on it trying to read
and write its registers.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
In case the backend has a failure, such as the tpm_emulator's CMD_INIT
failing, the TIS goes into failure mode and does not respond to reads
or writes to MMIO registers. In this case we need to prevent the ACPI
table from being added, and the straightforward way is to indicate that
there's no known TPM version in use.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The control chardev is being used from the data thread to set the
locality of the next request. Although the chr has a write mutex, we
may potentially read the reply to another thread's request.
Add a mutex to protect from concurrent control commands.
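The shape of the fix, as a self-contained sketch (plain pthreads and a
raw file descriptor stand in for QEMU's chardev and mutex primitives):

    #include <pthread.h>
    #include <unistd.h>

    typedef struct {
        pthread_mutex_t lock;   /* serializes request/reply exchanges */
        int fd;                 /* control channel */
    } CtrlChan;

    /* Holding the lock across both the write and the read guarantees
     * that the reply we consume belongs to the request we just sent,
     * even when several threads issue control commands concurrently. */
    static ssize_t ctrl_cmd(CtrlChan *c, const void *req, size_t req_len,
                            void *reply, size_t reply_len)
    {
        ssize_t ret;

        pthread_mutex_lock(&c->lock);
        ret = write(c->fd, req, req_len);
        if (ret == (ssize_t)req_len) {
            ret = read(c->fd, reply, reply_len);
        }
        pthread_mutex_unlock(&c->lock);
        return ret;
    }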
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Following the recent extension of QEMU with a TPM emulator device,
update the specs describing how to interact with the device.
The results of commands run inside a Linux VM are expected to be
similar to those when the TPM passthrough device is used, so we
just reuse that.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Use of GETPC must be restricted to those functions that are
directly called from TCG generated code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Fixes: 2399d4e7ce
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We use raw memory primitives along the !parallel_cpus paths in order to
simplify the endianness handling. Because of that, we did not benefit
from the generic changes to cpu_ldst_useronly_template.h.
The simplest fix is to manipulate helper_retaddr here.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When we handle a signal from a fault within a user-only memory helper,
we cannot cpu_restore_state with the PC found within the signal frame.
Use a TLS variable, helper_retaddr, to record the unwind start point
to find the faulting guest insn.
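A sketch of the bookkeeping (helper_retaddr is the variable named
above; GETPC() is a stand-in for QEMU's return-address macro):

    #include <stdint.h>

    __thread uintptr_t helper_retaddr;  /* 0 = unwind from signal frame */

    #define GETPC() ((uintptr_t)__builtin_return_address(0))

    uint32_t helper_load_sketch(const uint32_t *host_addr)
    {
        uint32_t ret;

        helper_retaddr = GETPC();  /* record unwind start first */
        ret = *host_addr;          /* a fault here arrives as a signal */
        helper_retaddr = 0;
        return ret;
    }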
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
stat64_min_slow() and stat64_max_slow() compare the wrong way. This
makes iotest 136 fail with clang and -m32.
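The fix amounts to flipping the comparison direction; a minimal sketch:

    #include <stdint.h>

    /* A min statistic may only shrink and a max statistic only grow;
     * the slow paths had these comparisons the wrong way around. */
    static void stat64_min_sketch(uint64_t *cur_min, uint64_t value)
    {
        if (value < *cur_min) {
            *cur_min = value;
        }
    }

    static void stat64_max_sketch(uint64_t *cur_max, uint64_t value)
    {
        if (value > *cur_max) {
            *cur_max = value;
        }
    }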
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20171114232223.25207-1-mreitz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2017-11-14' into staging
Block patches for 2.11.0-rc1
# gpg: Signature made Tue 14 Nov 2017 17:22:17 GMT
# gpg: using RSA key 0xF407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/maxreitz/tags/pull-block-2017-11-14:
qemu-iotests: update unsupported image formats in 194
block/parallels: add migration blocker
block/parallels: Do not update header or truncate image when INMIGRATE
block/vhdx.c: Don't blindly update the header
iotests: 077: Filter out 'resume' lines
block/snapshot: dirty all dirty bitmaps on snapshot-switch
qcow2: Check that corrupted images can be repaired in iotest 060
iotests: Use new-style NBD connections
iotests: Make 136 less flaky
iotests: Make 083 less flaky
iotests: Make 055 less flaky
iotests: Add missing 'blkdebug::' in 040
iotests: Make 030 less flaky
qcow2: Assert that the crypto header does not overlap other metadata
qcow2: Add iotest for an empty refcount table
qcow2: Add iotest for an image with header.refcount_table_offset == 0
qcow2: Don't open images with header.refcount_table_clusters == 0
qcow2: Prevent allocating compressed clusters at offset 0
qcow2: Prevent allocating L2 tables at offset 0
qcow2: Prevent allocating refcount blocks at offset 0
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Test 194 checks for 'luks' to exclude it as an unsupported format.
However, most formats are unsupported due to migration blockers.
Rather than specifying a blacklist of unsupported formats, whitelist
supported formats (specifically qcow2, qed, raw, and dmg).
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Message-id: 23ca18c7f843c86a28b1529ca9ac6db4b35ca0e4.1510059970.git.jcody@redhat.com
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Migration does not work for parallels, and has been broken for a while
(see patch 'block/parallels: Do not update header or truncate image when
INMIGRATE'). The bdrv_invalidate_cache() method needs to be added for
migration to be supported. Until this is done, prohibit migration.
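Registering the blocker presumably follows QEMU's usual
error_setg()/migrate_add_blocker() pattern; the exact message and its
placement in parallels_open() are assumptions here:

    error_setg(&s->migration_blocker,
               "The Parallels format used by node '%s' does not support "
               "live migration", bdrv_get_device_or_node_name(bs));
    migrate_add_blocker(s->migration_blocker, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        error_free(s->migration_blocker);
        goto fail;
    }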
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 5e04a7c8a3089913fa58d484af42dab7993984ad.1510059970.git.jcody@redhat.com
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
While the QEMU run state is INMIGRATE, the BDRV_O_INACTIVE BDS flag is
set. Writing to or modifying the image file in that state will cause
an assert, since the image is marked inactive. Make sure we obey this
flag.
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Message-id: 3996c930fa8cde8570b7a63032720d76a28fd78b.1510059970.git.jcody@redhat.com
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The VHDX specification requires that before user data modification of
the vhdx image, the VHDX header file and data GUIDs need to be updated.
In vhdx_open(), if the image is set to RDWR, we go ahead and update the
header.
However, just because the image is set to RDWR does not mean we can go
ahead and write at this point - specifically, if the QEMU run state is
INMIGRATE, the underlying file BS may be set to inactive via the BDS
open flag of BDRV_O_INACTIVE. Attempting to write under this condition
will cause an assert in bdrv_co_pwritev().
We can alternatively latch the first time the image is written. And lo
and behold, we do just that, via vhdx_user_visible_write() in
vhdx_co_writev(). This means the call to vhdx_update_headers() in
vhdx_open() is likely just vestigial, and can be removed.
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Message-id: 659e4cdba6ef4c651737852777c8c93d27b38040.1510059970.git.jcody@redhat.com
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
In the "Overlapping multiple requests" cases, the 3rd reqs (the break
point B) doesn't wait for the 2nd, and once resumed the I/O will just
continue. This is because the 2nd is already waiting for the 1st, and
in wait_serialising_requests() there is:
/* If the request is already (indirectly) waiting for us, or
* will wait for us as soon as it wakes up, then just go on
* (instead of producing a deadlock in the former case). */
if (!req->waiting_for) {
/* actually break */
...
}
Consequently, the following "sleep 100; resume A" command races with the
completion of that request, and sometimes results in an unexpected
order of output:
> @@ -56,9 +56,9 @@
> wrote XXX/XXX bytes at offset XXX
> XXX bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> blkdebug: Resuming request 'B'
> +blkdebug: Resuming request 'A'
> wrote XXX/XXX bytes at offset XXX
> XXX bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> -blkdebug: Resuming request 'A'
> wrote XXX/XXX bytes at offset XXX
> XXX bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> wrote XXX/XXX bytes at offset XXX
Filter out the "Resuming request" lines to make the output
deterministic.
Reported-by: Patchew <no-reply@patchew.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 20171113150026.4743-1-famz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Snapshot-switch actually changes the active state of the disk, so it
should be reflected in the dirty bitmaps. Otherwise the next incremental
backup using these bitmaps will be invalid.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20171023092945.54532-1-vsementsov@virtuozzo.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We just fixed a few bugs that caused QEMU to crash when trying to
write to corrupted qcow2 images, and iotest 060 was expanded to test
all those scenarios.
In almost all cases the corrupted images can be repaired using
qemu-img, so this patch verifies that.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 0b1b95340ecdfbc6927e36adf2fd42ae6198747a.1510143008.git.berto@igalia.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Old-style NBD is deprecated upstream (it is documented, but no
longer implemented in the reference implementation), and it is
severely limited (it cannot support structured replies, which
means it cannot support efficient handling of zeroes), when
compared to new-style NBD. We are better off having our iotests
favor new-style everywhere (although some explicit tests,
particularly 83, still cover old-style for back-compat reasons);
this is as simple as supplying the empty string as the default
export name, as it does not change the URI needed to connect a
client to the server. This also gives us more coverage of the
just-added structured reply code, when not overriding $QEMU_NBD
to intentionally point to an older server.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 20171109221216.10248-1-eblake@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
136 executes some AIO requests without a final aio_flush; then it
advances the virtual clock and thus expects the last access time of the
device to be less than the current time when queried (i.e. idle_time_ns
to be greater than 0). However, without the aio_flush, some requests
may be settled after the clock_step invocation. In that case,
idle_time_ns would be 0 and the test fails.
Fix this by adding an aio_flush whenever any AIO request other than
another aio_flush has been executed.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171109203025.27493-6-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
083 has (at least) two issues:
1. By launching the nbd-fault-injector in the background, it may not be
scheduled until the first grep on its output file is executed.
However, until then, that file may not have been created yet -- so it
either does not exist yet (thus making grep emit an error), or it
does exist but contains stale data (thus making the rest of the test
case connect to a wrong address).
Fix this by explicitly overwriting the output file before executing
nbd-fault-injector.
2. The nbd-fault-injector prints things other than "Listening on...".
It also prints a "Closing connection" message from time to time. We
currently invoke sed on the whole file in the hope that it contains only
the "Listening on..." line. That hope is sometimes
shattered by the brutal reality of race conditions, so make the sed
script more robust.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171109203025.27493-5-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
First of all, test 055 does a valiant job of invoking pause_drive()
sometimes, but that is worth nothing without blkdebug. So the first
thing to do is to sprinkle a couple of "blkdebug::" in there -- with the
exception of the transaction tests, because the blkdebug break points
make the transaction QMP command hang (which is bad). In that case, we
can get away with throttling the block job so that it is effectively
paused.
Then, 055 usually does not pause the drive before starting a block job
that should be cancelled. This means that the backup job might be
completed already before block-job-cancel is invoked, thus making the
test either fail (currently) or become moot if cancel_and_wait() ignored this
condition. Fix this by pausing the drive before starting the job.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171109203025.27493-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
040 tries to invoke pause_drive() on a drive that does not use blkdebug.
Good idea, but let's use blkdebug to make it actually work.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171109203025.27493-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch fixes two race conditions in 030:
1. The first is in TestENOSPC.test_enospc(). After resuming the job,
querying it to confirm it is no longer paused may fail because in the
meantime it might have completed already. The same was fixed in
TestEIO.test_ignore() already (in commit
2c3b44da07).
2. The second is in TestSetSpeed.test_set_speed_invalid(): Here, a
stream job is started on a drive without any break points, with a
block-job-set-speed invoked subsequently. However, without any break
points, the job might have completed in the meantime (on tmpfs at
least); or it might complete before cancel_and_wait() which expects
the job to still exist. This can be fixed like everywhere else by
pausing the drive (installing break points) before starting the job
and letting cancel_and_wait() resume it.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171109203025.27493-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
The crypto header is initialized only when QEMU is creating a new
image, so there's no chance of this happening on a corrupted image.
If QEMU is really trying to allocate the header overlapping other
existing metadata sections then this is a serious bug in QEMU itself
so let's add an assertion.
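As a sketch, assuming the header clusters were just allocated at
offset ret with length headerlen, the assertion can lean on qcow2's
overlap checker:

    /* If this fires, QEMU itself picked an offset colliding with
     * existing metadata -- a QEMU bug, not image corruption. */
    assert(qcow2_pre_write_overlap_check(bs, 0, ret, headerlen) == 0);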
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: ae3d77f312fc0c5e0ac2bbd71676c0112eebe2e5.1509718618.git.berto@igalia.com
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch adds a simple iotest in which we try to write to an image
with an empty refcount table (i.e. with all entries set to 0).
This scenario was already handled by the existing consistency checks,
but we add an explicit test case for completeness.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 7e48b0e2ae1a0a18e0ee303b3045f130feec0474.1509718618.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch adds a simple iotest in which we try to write to an image
with the refcount table offset set to 0.
This scenario was already handled by the existing consistency checks,
but we add an explicit test case for completeness.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: feeceada92486bb8790b90f303fc9fe82a27391a.1509718618.git.berto@igalia.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
qcow2_do_open() is checking that header.refcount_table_clusters is not
too large, but it doesn't check that it's greater than zero. Apart
from the fact that an image like that is obviously corrupted, trying
to use it crashes QEMU since we end up with a null s->refcount_table
after qcow2_refcount_init().
These images can however be repaired, so allow opening them if the
BDRV_O_CHECK flag is set.
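A self-contained sketch of the validation (the header struct and flag
are stand-ins for the qcow2 ones):

    #include <errno.h>
    #include <stdint.h>

    typedef struct {
        uint32_t refcount_table_clusters;  /* stand-in qcow2 header */
    } QCowHeader;

    #define BDRV_O_CHECK 0x1000            /* stand-in open flag */

    /* A zero-sized refcount table means the image is corrupted; only
     * tolerate it when opening the image for check/repair. */
    static int validate_refcount_table(const QCowHeader *h, int flags)
    {
        if (h->refcount_table_clusters == 0 && !(flags & BDRV_O_CHECK)) {
            return -EINVAL;
        }
        return 0;
    }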
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: f9750f50c80359babba11062e88f5075a47e8e16.1509718618.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
If the refcount data is corrupted then we can end up trying to
allocate a new compressed cluster at offset 0 in the image, triggering
an assertion in qcow2_alloc_bytes() that would crash QEMU:
qcow2_alloc_bytes: Assertion `offset' failed.
This patch adds an explicit check for this scenario and a new test
case.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: fb53467cf48e95ff3330def1cf1003a5b862b7d9.1509718618.git.berto@igalia.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
If the refcount data is corrupted then we can end up trying to
allocate a new L2 table at offset 0 in the image, triggering an
assertion in the qcow2 cache that would crash QEMU:
qcow2_cache_entry_mark_dirty: Assertion `c->entries[i].offset != 0' failed
This patch adds an explicit check for this scenario and a new test
case.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 92dac37191ae7844a2da22c122204eb493cc3133.1509718618.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Each entry in the qcow2 cache contains an offset field indicating the
location of the data in the qcow2 image. If the offset is 0 then it
means that the entry contains no data and is available to be used when
needed.
Because of that it is not possible to store in the cache the first
cluster of the qcow2 image (offset = 0). This is not a problem because
that cluster always contains the qcow2 header and we're not using this
cache for that.
However, if the qcow2 image is corrupted it can happen that we try to
allocate a new refcount block at offset 0, triggering this assertion
and crashing QEMU:
qcow2_cache_entry_mark_dirty: Assertion `c->entries[i].offset != 0' failed
This patch adds an explicit check for this scenario and a new test
case.
This problem was originally reported here:
https://bugs.launchpad.net/qemu/+bug/1728615
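The shape of the added check, as a sketch (in QEMU it would also be
reported through qcow2_signal_corruption()):

    #include <errno.h>
    #include <stdint.h>

    /* Cluster 0 always holds the qcow2 header, so an allocator that
     * returns offset 0 for a refcount block (or any metadata) proves
     * the refcount data is corrupted; fail instead of tripping the
     * cache assertion later. */
    static int64_t guard_metadata_offset(int64_t offset)
    {
        if (offset == 0) {
            return -EIO;
        }
        return offset;
    }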
Reported-by: R.Nageswara Sastry <nasastry@in.ibm.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 92a2fadd10d58b423f269c1d1a309af161cdc73f.1509718618.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
Pull request
The following disk I/O throttling fixes solve recent bugs.
# gpg: Signature made Tue 14 Nov 2017 10:37:12 GMT
# gpg: using RSA key 0x9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
qemu-iotests: Test I/O limits with removable media
block: Leave valid throttle timers when removing a BDS from a backend
block: Check for inserted BlockDriverState in blk_io_limits_disable()
throttle-groups: drain before detaching ThrottleState
block: all I/O should be completed before removing throttle timers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update our pre-release seabios snapshot to the final release.
git shortlog
============
Gerd Hoffmann (1):
sercon: Disable ScreenAndDebug in case both serial console and serial debug are active
Kevin O'Connor (2):
timer: Avoid integer overflows in usec and nsec calculations
docs: Note v1.11.0 release
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Tue 14 Nov 2017 02:05:34 GMT
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
net/socket: fix coverity issue
Add new PCI ID for i82559a
Fix eepro100 simple transmission mode
colo: Consolidate the duplicate code chunk into a routine
colo-compare: Fix comments
colo-compare: compare the packet in a specified Connection
colo-compare: Insert packet into the suitable position of packet queue directly
net: fix check for number of parameters to -netdev socket
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch ensures that icount_decr.u32.high is clear before calling
cpu_exec_nocache when an exception is pending, because the exception is
caused by the first instruction in the block and it cannot be executed
without resetting the flag.
There are two parts to the fix. First, clear icount_decr.u32.high in
cpu_handle_interrupt (just before processing the "dependent" request,
stored in cpu->interrupt_request or cpu->exit_request) rather than in
cpu_loop_exec_tb; this ensures that cpu_handle_exception is always
reached with zero icount_decr.u32.high unless another interrupt has
happened in the meantime.
Second, try to cause the exception at the beginning of
cpu_handle_exception, and exit immediately if the TB cannot
execute. With this change, interrupts are processed and
cpu_exec_nocache can make progress.
Signed-off-by: Maria Klimushenkova <maria.klimushenkova@ispras.ru>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20171114081818.27640.33165.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds a condition before overwriting the exception_index field.
It is needed when exception_index is already set to some meaningful value.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Message-Id: <20171114081812.27640.26372.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 5c0919d0 [1] introduced a virtqueue_size parameter
for the common virtio-scsi path without updating the vhost-user-scsi
code, so vhost-user-scsi devices right now report size 0 for each vq.
This patch introduces the virtqueue_size param to vhost-user-scsi,
which can now be set by the user. Most importantly, it now has a
default value of 128 (same as QEMU's virtio-scsi).
[1] 5c0919d0 ("virtio-scsi: Add virtqueue_size parameter
allowing virtqueue size to be set.")
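The change presumably boils down to a qdev property with a default of
128; the exact field path inside VHostUserSCSI is an assumption:

    static Property vhost_user_scsi_properties[] = {
        /* ...existing properties... */
        DEFINE_PROP_UINT32("virtqueue_size", VHostUserSCSI,
                           parent_obj.parent_obj.conf.virtqueue_size, 128),
        DEFINE_PROP_END_OF_LIST(),
    };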
Change-Id: I70e87eab702ebf1196c028dbf17d54fdc0c89a14
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Message-Id: <1510676916-76409-1-git-send-email-dariuszx.stojaczyk@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>