Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170323' into staging
ppc patch queue for 2017-03-23
Just a single bugfix in this batch. It's not strictly in ppc code,
though it's for the pseries machine's benefit. Eduardo suggested it
go through my tree however.
# gpg: Signature made Thu 23 Mar 2017 10:09:17 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170323:
numa,spapr: align default numa node memory size to 256MB
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/gonglei/tags/cryptodev-next-20170323' into staging
cryptodev fixes
# gpg: Signature made Thu 23 Mar 2017 09:22:44 GMT
# gpg: using RSA key 0x2ED7FDE9063C864D
# gpg: Good signature from "Gonglei <arei.gonglei@huawei.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3EF1 8E53 3459 E6D1 963A 3C05 2ED7 FDE9 063C 864D
* remotes/gonglei/tags/cryptodev-next-20170323:
cryptodev: fix asserting single queue
cryptodev: setiv only when really need
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Returning NULL from get_max_cpu_model results in a SIGSEGV runtime error.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170130131517.8092-1-sw@weilnetz.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We already check for queues == 1 in cryptodev_builtin_init and raise an
error when that is not true. But before that error is reported, the
assertion in cryptodev_builtin_cleanup kicks in (because the object is
being finalized and freed).
Let's remove assert(queues == 1) from cryptodev_builtin_cleanup, as it
does only harm and no good.
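Roughly, the fixed cleanup path looks like this (a sketch only; the
queue bookkeeping fields are illustrative, not the exact
cryptodev-builtin structures):

    static void cryptodev_builtin_cleanup(CryptoDevBackend *backend,
                                          Error **errp)
    {
        size_t i;

        /* No assert(queues == 1) here: init already rejects queues != 1,
         * and cleanup also runs while finalizing an object whose init
         * has just failed. */
        for (i = 0; i < backend->conf.peers.queues; i++) {
            CryptoDevBackendClient *cc = backend->conf.peers.ccs[i];

            if (cc) {
                cryptodev_backend_free_client(cc);
                backend->conf.peers.ccs[i] = NULL;
            }
        }
    }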
Reported-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
An ECB mode cipher doesn't need an IV. If we set an IV for it, the QEMU
crypto API reports "Expected IV size 0 not **", so we should set the IV
only when the cipher algorithm really needs it.
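A sketch of the resulting guard (the session fields are assumptions;
qcrypto_cipher_setiv() is the existing crypto API call):

    /* ECB has an expected IV size of 0, so setting an IV would fail. */
    if (sess->iv_len > 0) {
        if (qcrypto_cipher_setiv(sess->cipher, iv, sess->iv_len,
                                 &local_err) < 0) {
            error_propagate(errp, local_err);
            return -1;
        }
    }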
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
An off-by-one in commit 15c2f669e meant that we were failing to
check for unparsed input in all QemuOpts visitors. Recent testsuite
additions show that fixing the obvious bug with bogus fields will
also fix the case of an incomplete list visit; update the tests to
match the new behavior.
Simple testcase:
./x86_64-softmmu/qemu-system-x86_64 -nodefaults -nographic -qmp stdio -numa node,size=1g
failed to diagnose that 'size' is not a valid argument to -numa, and
now once again reports:
qemu-system-x86_64: -numa node,size=1g: Invalid parameter 'size'
See also https://bugzilla.redhat.com/show_bug.cgi?id=1434666
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170322144525.18964-4-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
A regression in commit 15c2f669e caused us to silently ignore
excess input to the QemuOpts visitor. Later, commit ea4641
accidentally abused that situation, by removing "qom-type" and
"id" from the corresponding QDict but leaving them defined in
the QemuOpts, when using the pair of containers to create a
user-defined object. Note that since we are traversing two separate
items (a QDict and a QemuOpts), we are already able to flag bogus
arguments, as in:
$ ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -nographic -qmp stdio -object memory-backend-ram,id=mem1,size=4k,bogus=huh
qemu-system-x86_64: -object memory-backend-ram,id=mem1,size=4k,bogus=huh: Property '.bogus' not found
So the only real concern is that when we re-enable strict checking
in the QemuOpts visitor, we do not want to start flagging the two
leftover keys as unvisited. Rearrange the code to clean out the
QemuOpts listing in advance, rather than removing items from the
QDict. Since "qom-type" is usually an automatic implicit default,
we don't have to restore it (this does mean that once instantiated,
QemuOpts is not necessarily an accurate representation of the
original command line - but this is not the first place to do that);
however "id" has to be put back (requiring us to cast away a const).
[As a side note, hmp_object_add() turns a QDict into a QemuOpts,
then calls user_creatable_add_opts() which converts QemuOpts into
a new QDict. There are probably a lot of wasteful conversions like
this, but cleaning them up is a much bigger task than the immediate
regression fix.]
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170322144525.18964-3-eblake@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This lets us hook into drained_begin and drained_end requests from the
backend level, which is particularly useful for making sure that all
jobs associated with a particular node (whether the source or the target)
receive a drain request.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20170316212351.13797-4-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Allow block backends to forward drain requests to their devices/users.
The initial intended purpose for this patch is to allow BBs to forward
requests along to BlockJobs, which will want to pause if their associated
BB has entered a drained region.
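For illustration, a user of a BlockBackend can hook these requests
roughly like this (sketch only; the callback wiring is simplified, and
the drained_begin/drained_end ops are the interface this series adds):

    static void block_job_drained_begin(void *opaque)
    {
        block_job_pause(opaque);
    }

    static void block_job_drained_end(void *opaque)
    {
        block_job_resume(opaque);
    }

    static const BlockDevOps block_job_dev_ops = {
        .drained_begin = block_job_drained_begin,
        .drained_end   = block_job_drained_end,
    };

    /* ... and at job creation time:
     *     blk_set_dev_ops(blk, &block_job_dev_ops, job);
     */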
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20170316212351.13797-3-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
The purpose of this shim is to allow us to pause pre-started jobs.
The purpose of *that* is to allow us to buffer a pause request that
will be able to take effect before the job ever does any work, allowing
us to create jobs during a quiescent state (under which they will be
automatically paused), then resuming the jobs after the critical section
in any order, either:
(1) -block_job_start
-block_job_resume (via e.g. drained_end)
(2) -block_job_resume (via e.g. drained_end)
-block_job_start
The problem that requires a startup wrapper is that a job must start in
the busy=true state only on its first entry; all subsequent entries
require busy to be false, and the toggling of this state is otherwise
handled at existing pause and yield points.
The wrapper simply allows us to mandate that a job can "start," set busy
to true, then immediately pause only if necessary. We could avoid
requiring a wrapper, but all jobs would need to do it, so it's been
factored out here.
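A condensed sketch of such a wrapper (not the verbatim QEMU
implementation; the pause bookkeeping is simplified):

    void block_job_start(BlockJob *job)
    {
        assert(job->co && !job->busy);

        job->busy = true;          /* first entry must see busy == true */
        if (job->pause_count) {
            /* A pause request (e.g. from drained_begin) was buffered
             * before start: go idle again without doing any work; a
             * later block_job_resume() will enter the job. */
            job->busy = false;
            job->paused = true;
            return;
        }
        job->paused = false;
        qemu_coroutine_enter(job->co);
    }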
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20170316212351.13797-2-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Streaming or any other block job hangs when performed on a block device
that has a non-default iothread. This happens because the AioContext
is acquired twice by block_job_defer_to_main_loop_bh and then released
only once by BDRV_POLL_WHILE. (Insert rants on recursive mutexes, which
unfortunately are a temporary but necessary evil for iothreads at the
moment).
Luckily, the reason for the double acquisition is simple; the function
acquires the AioContext for both the job iothread and the BDS iothread,
in case the BDS iothread was changed while the job was running. It
is therefore enough to skip the second acquisition when the two
AioContexts are one and the same.
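The gist of the fix, sketched (structure and field names assumed; this
is not the exact patch):

    AioContext *bds_ctx = blk_get_aio_context(data->job->blk);

    aio_context_acquire(data->aio_context);     /* the job's context */
    if (bds_ctx != data->aio_context) {
        aio_context_acquire(bds_ctx);           /* only if different */
    }

    data->fn(data->job, data->opaque);

    if (bds_ctx != data->aio_context) {
        aio_context_release(bds_ctx);
    }
    aio_context_release(data->aio_context);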
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1490118490-5597-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
A system with multiple VMGENID devices is undefined in the VMGENID spec by
omission.
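A sketch of how the corresponding guard can be written at realize time
(an illustration using the generic QOM lookup, not necessarily the
verbatim patch):

    /* The device being realized is already in the QOM tree, so an
     * ambiguous (NULL) lookup means a second instance exists. */
    if (!object_resolve_path_type("", VMGENID_DEVICE, NULL)) {
        error_setg(errp, "at most one %s device is permitted",
                   VMGENID_DEVICE);
        return;
    }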
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Ben Warren <ben@skyportsystems.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
The WRITE_POINTER linker/loader command that underlies VMGENID depends on
commit baf2d5bfba ("fw-cfg: support writeable blobs", 2017-01-12), which
in turn depends on fw_cfg DMA.
DMA for fw_cfg is enabled in 2.5+ machine types only (see commit
e6915b5f3a, "fw_cfg: unbreak migration compatibility for 2.4 and earlier
machines", 2016-02-18).
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Ben Warren <ben@skyportsystems.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Ben Warren <ben@skyportsystems.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Commit ad07cd6 ("virtio-scsi: always use dataplane path if ioeventfd is
active", 2016-10-30) and 9ffe337 ("virtio-blk: always use dataplane
path if ioeventfd is active", 2016-10-30) broke the virtio 1.0
indirect access registers.
The indirect access registers bypass the ioeventfd, so that virtio-blk
and virtio-scsi now repeatedly try to initialize dataplane instead of
triggering the guest->host EventNotifier. Detect the situation by
checking vq->handle_aio_output; if it is not NULL, trigger the
EventNotifier, which is how the device expects to get notifications
and in fact the only thread-safe manner to deliver them.
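Sketched, the detection in the notify path looks roughly like this
(simplified, not the verbatim patch; vq->handle_aio_output is the field
named above):

    void virtio_queue_notify(VirtIODevice *vdev, int n)
    {
        VirtQueue *vq = virtio_get_queue(vdev, n);

        if (vq->handle_aio_output) {
            /* ioeventfd is active: kick the host notifier, the only
             * thread-safe way to reach the dataplane handler. */
            event_notifier_set(virtio_queue_get_host_notifier(vq));
        } else if (vq->handle_output) {
            vq->handle_output(vdev, vq);
        }
    }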
Fixes: ad07cd6
Fixes: 9ffe337
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Commit 15c2f669e broke the ability of the QemuOpts visitor to
flag extra input parameters, but the regression went unnoticed
because of missing testsuite coverage. Add a test to cover this;
take the approach already used in 9cb8ef3 of adding a test that
passes (to avoid breaking bisection) but marks with BUG the
behavior that we don't like, so that the actual impact of the
fix in a later patch is easier to see.
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-Id: <20170322144525.18964-2-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
For one thing, we shouldn't continue if an error happened; for the
other two steps, a failure can cause an abort() in error_setg() because
we blindly reuse the same errp.
Add error handling checks to fix both issues.
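The checks follow the usual local Error pattern; a generic sketch
(do_first_step/do_second_step are placeholders, not the real function
names):

    Error *local_err = NULL;

    do_first_step(dev, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;                 /* don't continue after a failure */
    }

    /* pass a fresh local error again: calling error_setg() on an errp
     * that already holds an error trips an assertion */
    do_second_step(dev, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }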
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Since commit 224245b ("spapr: Add LMB DR connectors"), NUMA node
memory size must be aligned to 256MB (SPAPR_MEMORY_BLOCK_SIZE).
But when the "-numa" option is provided without the "mem" parameter,
the memory is divided equally between nodes with only 8MB alignment,
which can be invalid for pseries.
In that case we can have:
$ ./ppc64-softmmu/qemu-system-ppc64 -m 4G -numa node -numa node -numa node
qemu-system-ppc64: Node 0 memory size 0x55000000 is not aligned to 256 MiB
With this patch, we have:
(qemu) info numa
3 nodes
node 0 cpus: 0
node 0 size: 1280 MB
node 1 cpus:
node 1 size: 1280 MB
node 2 cpus:
node 2 size: 1536 MB
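The arithmetic behind these numbers, as a worked sketch (QEMU_ALIGN_DOWN
is the real macro; the rest is just the example values):

    uint64_t ram_size = 4ULL << 30;    /* -m 4G                   */
    uint64_t align    = 256ULL << 20;  /* SPAPR_MEMORY_BLOCK_SIZE */
    int nb_nodes      = 3;

    uint64_t per_node = QEMU_ALIGN_DOWN(ram_size / nb_nodes, align);
    /* per_node == 1280 MB, used for nodes 0 and 1 */
    uint64_t last     = ram_size - (nb_nodes - 1) * per_node;
    /* last == 1536 MB, the remainder goes to the last node */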
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The new test demonstrates known bugs: integers between INT64_MAX+1 and
UINT64_MAX are rejected, and integers between INT64_MIN and -1 are
accepted modulo 2^64.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490118290-6133-1-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We plan to drop support in a future QEMU release for host OSes
and host architectures for which we have no test machine where
we can build and run tests. For the 2.9 release, make configure
print a warning if it is run on such a host, so that the user
has some warning of the plans and can volunteer to help us
maintain the port if they need it to continue to function.
This commit flags up as deprecated the CPU architectures:
* ia64
* sparc
* anything which we don't have a TCG port for
(and which was presumably using TCI)
and the OSes:
* GNU/kFreeBSD
* DragonFly BSD
* NetBSD
* OpenBSD
* Solaris
* AIX
* Haiku
It also makes entirely unrecognized host OS strings be
rejected rather than treated as if they were Linux (which
likely never worked).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1490106717-9542-1-git-send-email-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/gkurz/tags/for-upstream' into staging
This pull request fixes a potential QEMU hang in 9pfs and two issues
reported by Coverity.
# gpg: Signature made Tue 21 Mar 2017 09:57:58 GMT
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
9pfs: proxy: assert if unmarshal fails
9pfs: don't try to flush self and avoid QEMU hang on reset
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
... and drop OPENGL_CFLAGS from Makefiles.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 1490079888-29029-1-git-send-email-kraxel@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The parallels block driver is completely broken since
commit 75cdcd1553
Author: Markus Armbruster <armbru@redhat.com>
Date: Tue Feb 21 21:14:08 2017 +0100
option: Fix checking of sizes for overflow and trailing crap
Right now even simple
qemu-io -c "read 512 64k" 1.hds
ends up with
Unexpected error in parse_option_size() at util/qemu-option.c:188:
Parameter 'prealloc-size' expects a non-negative number below 2^64
Aborted (core dumped)
The cure is simple: we should use 'M' as the suffix in the default
option value instead of 'MiB'.
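Sketched against a QemuOptDesc entry (option name and number are
illustrative; only the suffix spelling matters):

    {
        .name          = "prealloc-size",
        .type          = QEMU_OPT_SIZE,
        .help          = "preallocation size on image expansion",
        .def_value_str = "32M",  /* was spelled with "MiB"; since commit
                                  * 75cdcd1553 the trailing "iB" is
                                  * rejected instead of ignored */
    },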
Signed-off-by: Edgar Kaziahmedov <edos@virtuozzo.mipt.ru>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-id: 1490002022-22653-1-git-send-email-den@openvz.org
CC: Markus Armbruster <armbru@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This reverts commit 1454d33f05.
The string input visitor regression fixed in the previous commit made
visit_type_uint16List() fail on empty input. query_memdev() calls it
via object_property_get_uint16List(). Because it doesn't expect it to
fail, it passes &error_abort, and duly crashes.
Commit 1454d33 "fixes" this crash by making
host_memory_backend_get_host_nodes() return a list containing just
MAX_NODES instead of the empty list. Papers over the regression, and
leads to bogus "info memdev" output, as shown below; revert.
I suspect that if we had bisected the crash back then, we would have
found and fixed the actual bug instead of papering over it.
To reproduce, run HMP command "info memdev" with
$ qemu-system-x86_64 --nodefaults -S -display none -monitor stdio -object memory-backend-ram,id=mem1,size=4k
With this commit, "info memdev" prints
memory backend: mem1
size: 4096
merge: true
dump: true
prealloc: false
policy: default
host nodes:
exactly like before commit 74f24cb.
Between commit 1454d33 and this commit, it prints
memory backend: mem1
size: 4096
merge: true
dump: true
prealloc: false
policy: default
host nodes: 128
The last line is bogus.
Between commit 74f24cb and 1454d33, it crashes like this:
Unexpected error in parse_str() at /work/armbru/tmp/qemu/qapi/string-input-visitor.c:126:
Parameter 'null' expects an int64 value or range
Aborted (core dumped)
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490026424-11330-3-git-send-email-armbru@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Visiting a list when input is the empty string should result in an
empty list, not an error. Noticed when commit 3d089ce belatedly added
tests, but simply accepted as weird then. It's actually a regression:
broken in commit 74f24cb, v2.7.0. Fix it, and throw in another test
case for empty string.
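The intended semantics, sketched (not the actual parser code; the
visitor's range list field is an assumption):

    /* An empty string visits to an empty list, not to an
     * "expects an int64 value or range" error. */
    if (!str || !str[0]) {
        siv->ranges = NULL;
        return 0;
    }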
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490026424-11330-2-git-send-email-armbru@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We have a number of negative tests, but we don't have systematic
positive coverage. Fix that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490015515-25851-6-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
test-qapi.py used to print the internal representation of doc comments
(commit 3313b61). This went away when we dropped the doc comments in
positive tests (commit 87c16dc). Bring it back, because I'm going to
add real positive doc comment tests.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490015515-25851-5-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Messed up in commit bc52d03.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490015515-25851-3-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
When qapi2texi.py changes, we regenerate everything QAPI. Screwed up
in commit 56e8bdd.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490015515-25851-2-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490014548-15083-6-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490014548-15083-5-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490014548-15083-4-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490014548-15083-3-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
We have a negative test case for a list index with leading zero. Add
positive ones.
Tweak the test case for list index greater or equal the number of
elements: test "equal" instead of "greater" to guard against
off-by-one mistakes.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1490014548-15083-2-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Replies from the virtfs proxy are made up of a fixed-size header (8 bytes)
and a payload of variable size (maximum 64kb). When receiving a reply,
the proxy backend first reads the whole header and then unmarshals it.
If the header is okay, it then does the same operation with the payload.
Since the proxy backend uses a pre-allocated buffer which has enough
room for a header and the maximum payload size, unmarshalling should
never fail with fixed-size arguments. Any error here is likely to result
from a more serious corruption in QEMU, and we'd better dump core right
away.
This patch adds error checks where they are missing and converts the
associated error paths into assertions.
This should also address Coverity's complaints CID 1348519 and CID 1348520,
about not always checking the return value of proxy_unmarshal().
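For the header case the conversion looks roughly like this (sketch;
the format string and field names follow the 8-byte header described
above, not necessarily the verbatim patch):

    ret = proxy_unmarshal(reply, 0, "dd", &header.type, &header.size);
    /* fixed-size arguments into a big-enough preallocated buffer:
     * a failure here means serious corruption, so dump core */
    assert(ret == PROXY_HDR_SZ);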
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
According to the 9P spec [*], when a client wants to cancel a pending I/O
request identified by a given tag (uint16), it must send a Tflush message
and wait for the server to respond with a Rflush message before reusing this
tag for another I/O. The server may still send a completion message for the
I/O if it wasn't actually cancelled but the Rflush message must arrive after
that.
QEMU hence waits for the flushed PDU to complete before sending the Rflush
message back to the client.
If a client sends 'Tflush tag oldtag' and tag == oldtag, QEMU will then
allocate a PDU identified by tag, find it in the PDU list and wait for
this same PDU to complete... i.e. wait for a completion that will never
happen. This causes a tag and ring slot leak in the guest, and a PDU
leak in QEMU, all of them limited by the maximal number of PDUs (128).
But, worse, this causes QEMU to hang on device reset since v9fs_reset()
wants to drain all pending I/O.
This insane behavior is likely to denote a bug in the client, and it
would deserve an Rerror message to be sent back. Unfortunately, the
protocol allows it and requires all flush requests to succeed (only an
Rflush response is expected).
The only option is to detect when we have to handle a self-referencing
flush request and report success to the client right away.
[*] http://man.cat-v.org/plan_9/5/flush
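A sketch of that detection in the flush handler (simplified; the
7-byte reply size is just the 9P header, size[4] + type[1] + tag[2]):

    if (pdu->tag == oldtag) {
        /* Self-referencing flush: nothing to wait for, and the
         * protocol only lets us answer with Rflush, so report
         * success right away. */
        pdu_complete(pdu, 7);
        return;
    }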
Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Greg Kurz <groug@kaod.org>
sdl is probed before audio, so we can simply look at $sdl to see
whether we have support or not. Throw an error in case sdl audio
is requested without sdl being available.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1490000743-3615-1-git-send-email-kraxel@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Cygwin target is really compiling for native Win32 with -mno-cygwin.
Except, GCC 4.7.0 has finally removed the long deprecated -mno-cygwin
option, and that happened about five years ago.
Let it rest in peace.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-id: 20170317160811.28370-1-pbonzini@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Our implementation of writes to the APSR for M-profile via the MSR
instruction was badly broken.
First and worst, we had the sense wrong on the test of bit 2 of the
SYSm field -- this is supposed to request an APSR write if bit 2 is 0
but we were doing it if bit 2 was 1. This bug was introduced in
commit 58117c9bb4, so hasn't been in a QEMU release.
Secondly, the choice of exactly which parts of APSR should be written
is defined by bits in the 'mask' field. We were not passing these
through from instruction decode, making it impossible to check them
in the helper.
Pass the mask bits through from the instruction decode to the helper
function and process them appropriately; fix the wrong sense of the
SYSm bit 2 check.
Invalid mask values and invalid combinations of mask and register
number are UNPREDICTABLE; we choose to treat them as if the mask
values were valid.
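A condensed sketch of the corrected helper logic (the mask/SYSm packing
and the APSR field masks are simplified from the v7M MSR helper, not
the full function):

    uint32_t mask = extract32(maskreg, 8, 4);  /* from the decoder  */
    uint32_t sysm = extract32(maskreg, 0, 8);  /* SYSm field        */

    if (sysm < 8 && !(sysm & 4)) {             /* bit 2 CLEAR: APSR */
        uint32_t apsrmask = 0;

        if (mask & 8) {
            apsrmask |= 0xf8000000;            /* NZCVQ flags */
        }
        if (mask & 4) {
            apsrmask |= 0x000f0000;            /* GE[3:0]     */
        }
        xpsr_write(env, val, apsrmask);
    }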
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1487616072-9226-5-git-send-email-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
The MRS instruction requires that bits [19..16] are all 1s, and for
A/R profile also that bits [7..0] are all 0s. At this point in the
decode tree we have checked all of the rest of the instruction but
were allowing these to be any value. If these bits are not set then
the result is architecturally UNPREDICTABLE, but choosing to UNDEF is
more helpful to the user and avoids unexpected odd behaviour if the
encodings are used for some purpose in future architecture versions.
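The added constraint, sketched (the bit positions are the ones named
above; the surrounding decode structure is simplified):

    /* bits [19:16] must be all ones and, for A/R profile, bits [7:0]
     * must be all zeroes; otherwise UNPREDICTABLE, so choose to UNDEF */
    if ((insn & 0x000f0000) != 0x000f0000 || (insn & 0x000000ff) != 0) {
        goto illegal_op;
    }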
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1487616072-9226-4-git-send-email-peter.maydell@linaro.org