It was broken by commit 8ecc89f6e7 which
moved the SDL linker flags from macro libs_softmmu to macro SDL_LIBS.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-id: 20171116163732.31584-1-sw@weilnetz.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 2726627197 started to use tb_unlock() and tlb_set_dirty() in
non-TCG code. Add the functions as stubs, so that builds with TCG
disabled continue to compile.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: tweaked commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When migrating a VM with 'migrate_set_capability postcopy-ram on'
a postcopy_state is set during the process, ending up with the
state POSTCOPY_INCOMING_END when the migration is over. This
postcopy_state is taken into account inside ram_load to check
how it will load the memory pages. The same ram_load is also called
by the loadvm command.
Inside ram_load, the logic to see if we're at postcopy_running state
is:
postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING
postcopy_state_get() returns this enum type:
typedef enum {
    POSTCOPY_INCOMING_NONE = 0,
    POSTCOPY_INCOMING_ADVISE,
    POSTCOPY_INCOMING_DISCARD,
    POSTCOPY_INCOMING_LISTENING,
    POSTCOPY_INCOMING_RUNNING,
    POSTCOPY_INCOMING_END
} PostcopyState;
In the case where ram_load is executed and postcopy_state is
POSTCOPY_INCOMING_END, postcopy_running will be set to 'true' and
ram_load will behave like a postcopy is in progress. This scenario isn't
achievable in a migration but it is reproducible when executing
savevm/loadvm after migrating with 'postcopy-ram on', causing loadvm
to fail with Error -22:
Source:
(qemu) migrate_set_capability postcopy-ram on
(qemu) migrate tcp:127.0.0.1:4444
Dest:
(qemu) migrate_set_capability postcopy-ram on
(qemu)
ubuntu1704-intel login:
Ubuntu 17.04 ubuntu1704-intel ttyS0
ubuntu1704-intel login: (qemu)
(qemu) savevm test1
(qemu) loadvm test1
Unknown combination of migration flags: 0x4 (postcopy mode)
error while loading state for instance 0x0 of device 'ram'
Error -22 while loading VM state
(qemu)
This patch fixes this problem by changing the existing logic for
postcopy_advised and postcopy_running in ram_load, making them
'false' if we're at POSTCOPY_INCOMING_END state.
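For illustration, a minimal standalone sketch of the corrected check;
apart from the enum above, every name here is a stand-in invented for
this sketch, not the actual ram_load code:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        POSTCOPY_INCOMING_NONE = 0,
        POSTCOPY_INCOMING_ADVISE,
        POSTCOPY_INCOMING_DISCARD,
        POSTCOPY_INCOMING_LISTENING,
        POSTCOPY_INCOMING_RUNNING,
        POSTCOPY_INCOMING_END
    } PostcopyState;

    /* Stand-in for postcopy_state_get(): returns the incoming state. */
    static PostcopyState postcopy_state_get_stub(PostcopyState s)
    {
        return s;
    }

    int main(void)
    {
        for (int s = POSTCOPY_INCOMING_NONE; s <= POSTCOPY_INCOMING_END; s++) {
            PostcopyState ps = postcopy_state_get_stub(s);
            /* Old check: wrongly treats POSTCOPY_INCOMING_END as "running". */
            bool old_running = ps >= POSTCOPY_INCOMING_LISTENING;
            /* Fixed check: exclude POSTCOPY_INCOMING_END, as described above. */
            bool new_running = ps >= POSTCOPY_INCOMING_LISTENING &&
                               ps < POSTCOPY_INCOMING_END;
            printf("state=%d old=%d new=%d\n", s, old_running, new_running);
        }
        return 0;
    }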
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reported-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Migration of a system under stress (for example, with
"stress-ng --numa 2") triggers on the destination
some kernel watchdog messages like:
NMI watchdog: BUG: soft lockup - CPU#0 stuck for 3489660870s!
NMI watchdog: BUG: soft lockup - CPU#1 stuck for 3489660884s!
This problem appears with the changes introduced by commit 42043e4
("spapr: clock should count only if vm is running"), but I think that
commit merely exposes the problem rather than causing it.
The kernel computes the soft lockup duration using the Virtual
Timebase register (VTB), not the Timebase Register (TBR, the one
commit 42043e4 stops).
It appears the VTB is not migrated, so this patch adds it to the
list of SPRs to migrate, which fixes the problem.
I've tested a migration from qemu-2.8.0 with pseries-2.8.0 to a
patched master (qemu-2.11.0-rc1). The received VTB is 0 (as it is not
initialized by qemu-2.8.0), but the value seems to be ignored by KVM
and a non-zero VTB is used by the kernel. I have no explanation for
that, but as the original problem appears only with an SMP system
under stress, I suspect some problem in KVM (perhaps because the VTB
is shared by all threads of a core).
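For illustration only, a toy model of "adding the SPR to the set that
gets migrated"; the table and helper are stand-ins, not QEMU's SPR
registration code (only the SPR number 849/0x351 for VTB comes from
the Power ISA):

    #include <stdint.h>
    #include <stdio.h>

    #define SPR_VTB 0x351   /* Power ISA SPR number for the Virtual Timebase */

    /* Simplified stand-in: registered SPRs are what the generic SPR
     * migration code carries across to the destination. */
    typedef struct {
        uint64_t value;
        int registered;
    } SPRSlot;

    static SPRSlot spr_table[1024];

    static void register_spr(int num, uint64_t reset_val)
    {
        spr_table[num].registered = 1;
        spr_table[num].value = reset_val;
    }

    int main(void)
    {
        register_spr(SPR_VTB, 0);   /* the gist of the fix: make VTB migratable */
        printf("SPR_VTB registered: %d\n", spr_table[SPR_VTB].registered);
        return 0;
    }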
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The spapr-vty device implements the PAPR defined virtual console,
which is also implemented by IBM's proprietary PowerVM hypervisor.
PowerVM's implementation has a bug where it inserts an extra \0 after
every \r going to the guest. Because of that Linux's guest side
driver has a workaround which strips \0 characters that appear
immediately after a \r.
That means that when running under qemu, sending a binary stream from
host to guest via spapr-vty which happens to include a \r\0 sequence
will get corrupted by that workaround.
To deal with that, this patch duplicates PowerVM's bug, inserting an
extra \0 after each \r. Ugly, but the best option available.
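A standalone sketch of the transformation described; the function and
buffer handling are illustrative, not the actual spapr-vty code:

    #include <stdint.h>
    #include <stdio.h>

    /* Expand a buffer headed to the guest, inserting a '\0' after every
     * '\r' to mimic PowerVM, so the Linux guest driver's workaround stays
     * harmless. Returns the new length; 'out' needs room for 2 * len. */
    static size_t insert_nul_after_cr(const uint8_t *in, size_t len,
                                      uint8_t *out)
    {
        size_t n = 0;
        for (size_t i = 0; i < len; i++) {
            out[n++] = in[i];
            if (in[i] == '\r') {
                out[n++] = '\0';
            }
        }
        return n;
    }

    int main(void)
    {
        const uint8_t msg[] = "abc\r\ndef";
        uint8_t buf[2 * sizeof(msg)];
        size_t n = insert_nul_after_cr(msg, sizeof(msg) - 1, buf);
        printf("in=%zu out=%zu\n", sizeof(msg) - 1, n); /* one extra byte */
        return 0;
    }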
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
LUNs >= 256 have to be encoded with the so-called "flat space
addressing method" for virtio-scsi, where an additional bit has to
be set. SLOF already took care of this with the following commit:
https://git.qemu.org/?p=SLOF.git;a=commitdiff;h=f72a37713fea47da
(see https://bugzilla.redhat.com/show_bug.cgi?id=1431584 for details)
But QEMU does not use this encoding yet for device tree paths
that have to be handed over to SLOF to deal with the "bootindex"
property, so SLOF currently fails to boot from virtio-scsi devices
with LUNs >= 256 in the right boot order. Fix it by using the bit
to indicate the "flat space addressing method" for LUNs >= 256.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When doing a live migration of a Xen guest with libxl, the images for
block devices are locked by the original QEMU process, which prevents
the QEMU process at the destination from taking the lock, so the
migration fails.
From QEMU's point of view, once the RAM of a domain has been migrated,
there are two QMP commands, "stop" then "xen-save-devices-state", at
which point a new QEMU is spawned at the destination.
Release the locks in "xen-save-devices-state" so the destination can
take them, if it's a live migration.
This patch adds a "live" parameter to "xen-save-devices-state", which
defaults to true so that older versions of libxenlight keep working
with newer versions of QEMU.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Add an option to echo the response to a QMP / HMP command only on
mismatch. This is useful for ignoring all normal responses while
still catching things like segfaults.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The previous patch fixed a race condition in which coroutines could
be entered twice, or entered after coroutine deletion.
We can detect common scenarios when this happens, and print an error
message and abort before we corrupt memory / data, or segfault.
This patch will abort if an attempt to enter a coroutine is made while
it is currently pending execution, either in a specific AioContext bh,
or pending execution via a timer. It will also abort if a coroutine
is scheduled, before a prior scheduled run has occurred.
We cannot rely on the existing co->caller check for recursive re-entry
to catch this, as the coroutine may run and exit with
COROUTINE_TERMINATE before the scheduled coroutine executes.
(This is the scenario that was occurring and fixed in the previous
patch).
This patch also re-orders the Coroutine struct elements in an attempt to
optimize caching.
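A conceptual, self-contained sketch of the guard described above; the
structure field and function names are simplified stand-ins, not
QEMU's coroutine API:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for the Coroutine bookkeeping discussed above. */
    typedef struct {
        bool scheduled;   /* set while a run is pending in a BH or timer */
    } Coroutine;

    static void coroutine_schedule(Coroutine *co, const char *who)
    {
        if (co->scheduled) {
            /* Catch the double-schedule before memory is corrupted. */
            fprintf(stderr, "fatal: %s re-scheduled a pending coroutine\n", who);
            abort();
        }
        co->scheduled = true;
        /* ... queue the bottom half / timer that will actually enter it ... */
    }

    static void coroutine_run(Coroutine *co)
    {
        co->scheduled = false;
        /* ... enter the coroutine ... */
    }

    int main(void)
    {
        Coroutine co = { .scheduled = false };
        coroutine_schedule(&co, "first caller");
        coroutine_run(&co);
        coroutine_schedule(&co, "second caller"); /* fine: prior run completed */
        printf("ok\n");
        return 0;
    }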
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
When block_job_sleep_ns() is called, the coroutine is scheduled for
future execution. If we allow the job to be re-entered prior to the
scheduled time, we introduce a race condition in which a coroutine can
be entered recursively, or even entered after the coroutine is deleted.
The job->busy flag is used by blockjobs when a coroutine is busy
executing. The function 'block_job_enter()' obeys the busy flag,
and will not enter a coroutine if set. If we sleep a job, we need to
leave the busy flag set, so that subsequent calls to block_job_enter()
are prevented.
This changes the prior behavior of block_job_cancel() being able to
immediately wake up and cancel a job; in practice, this should not be an
issue, as the coroutine sleep times are generally very small, and the
cancel will occur the next time the coroutine wakes up.
This fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1508708
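A simplified model of the invariant: the busy flag stays set across
the sleep so that block_job_enter() refuses to re-enter the coroutine
early. Names are stand-ins for the real blockjob code:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool busy;   /* coroutine is executing or has a wakeup pending */
    } BlockJob;

    /* Stand-in for block_job_enter(): only wake the job if not busy. */
    static void block_job_enter_stub(BlockJob *job)
    {
        if (job->busy) {
            printf("enter skipped: job busy\n");
            return;
        }
        printf("job entered\n");
    }

    /* Stand-in for block_job_sleep_ns() after the fix: busy stays true
     * for the whole sleep, so a racing enter cannot fire the coroutine
     * before its timer does. */
    static void block_job_sleep_stub(BlockJob *job)
    {
        job->busy = true;            /* was dropped to false before the fix */
        /* ... arm timer, yield the coroutine ... */
        block_job_enter_stub(job);   /* a racing cancel arrives here: ignored */
        /* ... timer fires, coroutine resumes and finishes its work ... */
        job->busy = false;
    }

    int main(void)
    {
        BlockJob job = { .busy = false };
        block_job_sleep_stub(&job);
        block_job_enter_stub(&job);  /* after wakeup, enter works normally */
        return 0;
    }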
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches for 2.11.0-rc2
# gpg: Signature made Tue 21 Nov 2017 15:09:12 GMT
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
iotests: Fix 176 on 32-bit host
block: Close a BlockDriverState completely even when bs->drv is NULL
block: Error out on load_vm with active dirty bitmaps
block: Add errp to bdrv_all_goto_snapshot()
block: Add errp to bdrv_snapshot_goto()
block: Don't request I/O permission with BDRV_O_NO_IO
block: Don't use BLK_PERM_CONSISTENT_READ for format probing
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Developers sometimes mistakenly run 'make test' instead of 'make check'.
'make test' triggers the ancient, unmaintained tcg unit tests in
tests/tcg/Makefile which have long since ceased compiling.
Even if someone fixes the TCG tests, it makes little sense to put
them in a 'make test' target, rather they should be 'make check-tcg',
possibly wired up as a dependency of 'make check'.
In the meantime, this patch disarms the 'make test' trap by simply
deleting it so users get an immediate error. This should be enough
for them to remember to type 'make check' instead (or 'make help'
to learn). It also deletes 'make speed' which is another route
into the tcg tests.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Kashyap Chamarthy <kchamart@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Message-id: 20171121142538.22072-1-berrange@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The contents of a qcow2 bitmap are rounded up to a size that
matches the number of bits available for the granularity, but
that granularity differs for 32-bit hosts (our default 64k
cluster allows for 2M bitmap coverage per 'long') and 64-bit
hosts (4M bitmap per 'long'). If the image is a multiple of
2M but not 4M, then the number of bytes occupied by the array
of longs in memory differs between architectures, thus
resulting in different SHA256 hashes.
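The arithmetic behind that difference, as a tiny self-contained check
(default 64 KiB cluster, one bit per cluster, serialized in units of
one host 'long'):

    #include <stdio.h>

    int main(void)
    {
        unsigned long cluster = 64 * 1024;       /* default qcow2 cluster */
        unsigned long bits32 = 32, bits64 = 64;  /* bits per 'long' */

        /* Coverage of one 'long' worth of bitmap: bits * bytes-per-bit. */
        printf("32-bit host: %lu MiB per long\n", bits32 * cluster >> 20); /* 2 */
        printf("64-bit host: %lu MiB per long\n", bits64 * cluster >> 20); /* 4 */
        /* An image sized to a multiple of 2 MiB but not 4 MiB therefore
         * needs a different number of longs on the two host types, so the
         * in-memory array (and its SHA256) differs. */
        return 0;
    }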
Furthermore (but untested by me), if our computation of the
SHA256 hash is at all endian-dependent because of how we store
data in memory, that's another variable we'd have to account
for (ideally, the bitmap format stored in qcow2 is specified as
fixed-endian on disk, because the same qcow2 file must be
usable across any architecture; but that says nothing about
how we represent things in memory). But we already have test
165 to validate that bitmaps are stored correctly on disk,
while this test is merely testing that the bitmap exists.
So for this test, the easiest solution is to filter out the
actual hash value. Broken in commit 4096974e.
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 20171117190422.23626-1-eblake@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
bdrv_close() skips much of its logic when bs->drv is NULL. This is
fine when we're closing a BlockDriverState that has just been created
(because e.g. the initialization process failed), but it's not enough
in other cases.
For example, when a valid qcow2 image is found to be corrupted then
QEMU marks it as such in the file header and then sets bs->drv to
NULL in order to make the BlockDriverState unusable. When that BDS is
later closed then many of its data structures are not freed (leaking
their memory) and none of its children are detached. This results in
bdrv_close_all() failing to close all BDSs and making this assertion
fail when QEMU is being shut down:
bdrv_close_all: Assertion `QTAILQ_EMPTY(&all_bdrv_states)' failed.
This patch makes bdrv_close() do the full uninitialization process
in all cases. This fixes the problem with corrupted images and still
works fine with freshly created BDSs.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 20171106145345.12038-1-berto@igalia.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Loading a snapshot invalidates the bitmap. Just marking all blocks dirty
is not a useful response in practice, instead the user needs to be aware
that we switch to a completely different state. If they are okay with
losing the dirty bitmap, they can just explicitly delete it.
This effectively reverts commit 04dec3c3ae.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
'qemu-img info' makes sense even when BLK_PERM_CONSISTENT_READ cannot be
granted because of a block job in a running qemu process. It already
sets BDRV_O_NO_IO to indicate that it doesn't access the guest visible
data at all.
Check the BDRV_O_NO_IO flag in blk_new_open(), so that I/O related
permissions are not unnecessarily requested and 'qemu-img info' can work
even if BLK_PERM_CONSISTENT_READ cannot be granted.
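A hedged sketch of the permission selection described above; the flag
and permission values are made up for this example, and the function
is a stand-in for the relevant part of blk_new_open():

    #include <stdio.h>

    /* Simplified stand-ins for the relevant block-layer flags/permissions. */
    #define BDRV_O_NO_IO              0x0001
    #define BDRV_O_RDWR               0x0002
    #define BLK_PERM_CONSISTENT_READ  0x01
    #define BLK_PERM_WRITE            0x02

    /* Pick the permissions to request on open: with BDRV_O_NO_IO set
     * (e.g. 'qemu-img info'), no I/O-related permissions are needed. */
    static unsigned blk_open_perm(int flags)
    {
        unsigned perm = 0;
        if (!(flags & BDRV_O_NO_IO)) {
            perm |= BLK_PERM_CONSISTENT_READ;
            if (flags & BDRV_O_RDWR) {
                perm |= BLK_PERM_WRITE;
            }
        }
        return perm;
    }

    int main(void)
    {
        printf("info (NO_IO): perm=0x%x\n", blk_open_perm(BDRV_O_NO_IO));
        printf("normal open:  perm=0x%x\n", blk_open_perm(BDRV_O_RDWR));
        return 0;
    }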
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
For format probing, we don't really care whether all of the image
content is consistent. The only thing we're looking at is the image
header, and specifically the magic numbers that are expected to never
change, no matter how inconsistent the guest visible disk content is.
Therefore, don't request BLK_PERM_CONSISTENT_READ. This allows to use
format probing, e.g. in the context of 'qemu-img info', even while the
guest visible data in the image is inconsistent during a running block
job.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
qemu.org enabled HTTPS in 2017 and it should be used instead of HTTP.
There are also URLs to json.org, openvpn.net, and other domains that
support HTTPS.
This patch updates the qemu.org domains everywhere and also third-party
domains that I have checked.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171121120435.28728-3-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The owner of qemu.org has delegated authority to modify DNS records to
the QEMU Project. This has allowed us to use the domain name without
worries about IP address changes or technical issues disrupting service.
The issues described in commit 8593898109
("Use qemu-project.org domain name") have therefore been mitigated.
This patch switches back to consistently using qemu.org instead of
qemu-project.org in documentation, version.rc, and the Windows installer
script.
The git submodules and SeaBIOS still use qemu-project.org for the time
being. This will be fixed in the QEMU 2.12 release cycle.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20171121120435.28728-2-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The u-boot sources we ship currently cause problems with unpacking on
a case-insensitive filesystem due to path conflicts. This has been
fixed in upstream u-boot via commit 610eec7f, but since it is not
yet included in an official release we implement this approach as a
temporary workaround.
Once we move to a u-boot containing commit 610eec7f we should revert
this patch.
Cc: qemu-stable@nongnu.org
Cc: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20171107205201.10207-1-mdroth@linux.vnet.ibm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
To do a write to memory that is marked as notdirty, we need
to invalidate any TBs we have cached for that memory, and
update the cpu physical memory dirty flags for VGA and migration.
The slowpath code in notdirty_mem_write() does all this correctly,
but the new atomic handling code in atomic_mmu_lookup() doesn't
do any of this; it just clears the dirty bit in the TLB.
The effect of this bug is that if the first write to a notdirty
page for which we have cached TBs is by a guest atomic access,
we fail to invalidate the TBs and subsequently will execute
incorrect code. This can be seen by trying to run 'javac' on AArch64.
Use the new notdirty_call_before() and notdirty_call_after()
functions to correctly handle the update to notdirty memory
in the atomic codepath.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1511201308-23580-3-git-send-email-peter.maydell@linaro.org
The function notdirty_mem_write() has a sequence of actions
it has to do before and after the actual business of writing
data to host RAM to ensure that dirty flags are correctly
updated and we flush any TCG translations for the region.
We need to do this also in other places that write directly
to host RAM, most notably the TCG atomic helper functions.
Pull out the before and after pieces into their own functions.
We use an API where the prepare function stashes the various
bits of information about the write into a struct for the
complete function to use, because in the calls for the atomic
helpers the place where the complete function will be called
doesn't have the information to hand.
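A rough sketch of the prepare/complete shape described above; the
struct and helpers are simplified stand-ins (the real QEMU functions
take more context, such as the CPU and the physical address):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the state the prepare step stashes for the complete step. */
    typedef struct {
        uintptr_t ram_addr;
        unsigned size;
        int needs_tb_invalidate;
    } NotDirtyWrite;

    static void notdirty_write_prepare(NotDirtyWrite *ndi,
                                       uintptr_t ram_addr, unsigned size)
    {
        ndi->ram_addr = ram_addr;
        ndi->size = size;
        /* In QEMU: check whether cached TBs cover this range and
         * invalidate them before the data is modified. */
        ndi->needs_tb_invalidate = 1;
    }

    static void notdirty_write_complete(NotDirtyWrite *ndi)
    {
        /* In QEMU: set the VGA/migration dirty bits for [ram_addr, +size)
         * now that the write has actually happened. */
        printf("marked %u bytes dirty at %#lx (tb_invalidate=%d)\n",
               ndi->size, (unsigned long)ndi->ram_addr,
               ndi->needs_tb_invalidate);
    }

    int main(void)
    {
        NotDirtyWrite ndi;
        notdirty_write_prepare(&ndi, 0x1000, 8);
        /* ... the atomic helper performs the actual host-RAM access here ... */
        notdirty_write_complete(&ndi);
        return 0;
    }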
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1511201308-23580-2-git-send-email-peter.maydell@linaro.org
The data obtained by GetIfEntry is 32 bits wide and may overflow, so
use GetIfEntry2 instead of GetIfEntry.
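A sketch of one way to probe for GetIfEntry2 at runtime so the code
still works on older Windows versions (see the notes below); this is
the generic LoadLibrary/GetProcAddress pattern, not the literal patch:

    /* Windows-only sketch of runtime probing; not the literal qemu-ga code. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HMODULE mod = LoadLibraryA("iphlpapi.dll");
        FARPROC fn = NULL;

        if (mod) {
            /* GetIfEntry2 (64-bit counters) only exists on Vista and newer,
             * so probe for it instead of linking against it directly. */
            fn = GetProcAddress(mod, "GetIfEntry2");
        }
        printf(fn ? "using GetIfEntry2 (MIB_IF_ROW2, 64-bit counters)\n"
                  : "falling back to GetIfEntry (MIB_IFROW, 32-bit counters)\n");
        if (mod) {
            FreeLibrary(mod);
        }
        return 0;
    }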
Signed-off-by: ZhiPeng Lu <lu.zhipeng@zte.com.cn>
* avoid CamelCase variable names
* update field names for MIB_IFROW -> MIB_IF_ROW2
* dynamically probe for GetIfEntry2 to deal with older OSs
* check return value from get_interface_index
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.11-20171120' into staging
ppc patch queue 2017-11-20
Here's the current queue of ppc patches. These 2 patches are both
more complex than I'd ideally like this late in the 2.11 cycle.
However, they do fix important bugs, so I think it's worth it on
balance.
# gpg: Signature made Mon 20 Nov 2017 03:27:19 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.11-20171120:
spapr: reset DRCs after devices
target/ppc: Update setting of cpu features to account for compat modes
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Contains the following commit:
- pc-bios/s390-ccw: Fix problem with invalid virtio-scsi LUN when rebooting
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging
# gpg: Signature made Mon 20 Nov 2017 03:28:54 GMT
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
hw/net/vmxnet3: Fix code to work on big endian hosts, too
net: Transmit zero UDP checksum as 0xFFFF
MAINTAINERS: Add missing entry for eepro100 emulation
hw/net/eepro100: Fix endianness problem on big endian hosts
Revert "Add new PCI ID for i82559a"
colo-compare: fix the dangerous assignment
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In commit 7c4ee5bcc8 we changed the order in which we construct
the AUXV, but forgot to adjust the calculation of the length. The
result is that we set info->auxv_len to a bogus and negative value,
and then later on the code in open_self_auxv() gets confused and
ends up presenting the guest with an empty file.
Since we now have to calculate the auxv length up-front as part
of figuring out how much we're going to put on the stack, set
info->auxv_len then; this allows us to assert that we put the
same number of entries into auxv as we pre-calculated, rather
than merely having a comment saying we need to do that.
Fixes: https://bugs.launchpad.net/qemu/+bug/1728116
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
The new deprecation warning for the xlnx-ep108 machine also pops up
during "make check" which is kind of confusing. Silence it if testing
mode is enabled.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Acked-by: Wei Huang <wei@redhat.com>
Message-id: 1510846183-756-1-git-send-email-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ASPEED hardware contains a lock register for the SCU that disables
any writes to the SCU when it is locked. The machine comes up with the
lock enabled, but on all known hardware u-boot will unlock it and leave
it unlocked when loading the kernel.
This means the kernel expects the SCU to be unlocked. When booting from
an emulated ROM the normal u-boot unlock path is executed. Things don't
go well when booting using the -kernel command line, as u-boot does not
run first.
Change the behaviour so that, when a kernel is passed to the machine,
the reset value of the SCU is unlocked.
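For illustration, a toy model of the decision described above; the
helper is a stand-in, and only the 0x1688A8A8 protection-key magic
comes from the ASPEED documentation:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* AST2400/AST2500 SCU protection key: writing this magic unlocks the SCU. */
    #define ASPEED_SCU_PROT_KEY 0x1688A8A8u

    /* Simplified model of the machine-init decision described above. */
    static uint32_t scu_reset_key(bool direct_kernel_boot)
    {
        /* With -kernel, u-boot never runs, so come out of reset unlocked. */
        return direct_kernel_boot ? ASPEED_SCU_PROT_KEY : 0;
    }

    int main(void)
    {
        printf("-kernel boot: SCU key at reset = 0x%08x (unlocked)\n",
               scu_reset_key(true));
        printf("ROM boot:     SCU key at reset = 0x%08x (locked)\n",
               scu_reset_key(false));
        return 0;
    }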
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20171114122018.12204-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In do_ats_write(), rather than using extended_addresses_enabled() to
decide whether the value we get back from get_phys_addr() is a 64-bit
format PAR or a 32-bit one, use arm_s1_regime_using_lpae_format().
This is not really the correct answer, because the PAR format
depends on the AT instruction being used, not just on the
translation regime. However getting this correct requires a
significant refactoring, so that get_phys_addr() returns raw
information about the fault which the caller can then assemble
into a suitable FSR/PAR/syndrome for its purposes, rather than
get_phys_addr() returning a pre-formatted FSR.
However this change at least improves the situation by making
the PAR work correctly for address translation operations done
at AArch64 EL2 on the EL2 translation regime. In particular,
this is necessary for Xen to be able to run in our emulation,
so this seems like a safer interim fix given that we are in freeze.
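A very rough stand-in for the predicate change, just to show why the
regime-based check picks the right PAR format at AArch64 EL2; this is
not architecturally complete and the helper is invented for the
sketch:

    #include <stdbool.h>
    #include <stdio.h>

    /* Very rough stand-in: an AArch64 stage-1 translation regime always
     * uses the long-descriptor (LPAE) format, an AArch32 one only when
     * TTBCR.EAE is set. */
    static bool regime_using_lpae_format_stub(bool regime_is_aa64,
                                              bool ttbcr_eae)
    {
        return regime_is_aa64 || ttbcr_eae;
    }

    int main(void)
    {
        /* AT operation performed at AArch64 EL2: the regime-based check
         * picks the 64-bit PAR format regardless of the EL1 settings. */
        bool use_64bit_par = regime_using_lpae_format_stub(true, false);
        printf("PAR format: %s\n", use_64bit_par ? "64-bit" : "32-bit");
        return 0;
    }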
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1509719814-6191-1-git-send-email-peter.maydell@linaro.org
Fix an incorrect mask expression in the handling of v7M MPU_RBAR
reads that meant that we would always report the ADDR field as zero.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1509732813-22957-1-git-send-email-peter.maydell@linaro.org
The CPU ID registers ID_AA64PFR0_EL1, ID_PFR1_EL1 and ID_PFR1
have a field for reporting presence of GICv3 system registers.
We need to report this field correctly in order for Xen to
work as a guest inside QEMU emulation. We mustn't incorrectly
claim the sysregs exist when they don't, though, or Linux will
crash.
Unfortunately the way we've designed the GICv3 emulation in QEMU
puts the system registers as part of the GICv3 device, which
may be created after the CPU proper has been realized. This
means that we don't know at the point when we define the ID
registers what the correct value is. Handle this by switching
them to calling a function at runtime to read the value, where
we can fill in the GIC field appropriately.
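A simplified sketch of the runtime-read idea: the ID register's value
is computed when it is read, so a GICv3 device created after CPU
realize is still reported correctly. Types and helpers are stand-ins;
only the GIC field position (ID_AA64PFR0_EL1 bits [27:24]) is
architectural:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified model: whether the (possibly later-created) GICv3 device
     * exposes its system-register interface to this CPU. */
    static bool gicv3_sysregs_present;

    /* ID_AA64PFR0_EL1.GIC is bits [27:24]; 1 = sysregs implemented. */
    static uint64_t id_aa64pfr0_read_stub(uint64_t static_value)
    {
        uint64_t v = static_value & ~(0xfull << 24);
        if (gicv3_sysregs_present) {
            v |= 1ull << 24;
        }
        return v;
    }

    int main(void)
    {
        uint64_t base = 0x11ull;        /* EL0/EL1 AArch64, no GIC field */

        printf("before GIC creation: 0x%016llx\n",
               (unsigned long long)id_aa64pfr0_read_stub(base));
        gicv3_sysregs_present = true;   /* GICv3 device realized later */
        printf("after GIC creation:  0x%016llx\n",
               (unsigned long long)id_aa64pfr0_read_stub(base));
        return 0;
    }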
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1510066898-3725-1-git-send-email-peter.maydell@linaro.org
When rebooting a guest that has a virtio-scsi disk, the s390-ccw
bios sometimes bails out with an error message like this:
! SCSI cannot report LUNs: STATUS=02 RSPN=70 KEY=05 CODE=25 QLFR=00, sure !
Enabling the scsi_req* tracing in QEMU shows that the ccw bios is
trying to execute the REPORT LUNS SCSI command with a LUN != 0, and
this causes the SCSI command to fail.
Looks like we neither clear the BSS of the s390-ccw bios during reboot,
nor do we explicitly set the default_scsi_device.lun value to 0, so
this variable can contain random values from the OS after the reboot.
By setting this variable explicitly to 0, the problem is fixed and
the reboots always succeed.
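A toy illustration of the problem and the fix; the struct is a
stand-in for the bios' boot-device state, not the actual s390-ccw
code:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for the bios' boot-device description. After a
     * reboot the BSS is not re-zeroed, so any field not set explicitly
     * keeps whatever value the previous OS left in memory. */
    struct scsi_device {
        uint16_t channel;
        uint16_t target;
        uint32_t lun;
    };

    static struct scsi_device default_scsi_device;

    static void boot_setup(void)
    {
        /* Explicitly reset every field instead of relying on zeroed BSS,
         * so REPORT LUNS is always issued against LUN 0. */
        default_scsi_device.channel = 0;
        default_scsi_device.target = 0;
        default_scsi_device.lun = 0;
    }

    int main(void)
    {
        default_scsi_device.lun = 0xdeadbeef; /* garbage from previous boot */
        boot_setup();
        printf("boot LUN = %u\n", (unsigned)default_scsi_device.lun);
        return 0;
    }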
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1514352
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1510942228-22822-1-git-send-email-thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Currently, multi-threaded TCG with > 1 VCPU gets stuck during IPL, when
the bios tries to switch to the loaded kernel via DIAG 308.
As run_on_cpu() is used, we run into a deadlock after handling the reset.
We need the iolock (just like KVM).
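A conceptual sketch of the fix, using a pthread mutex as a stand-in
for QEMU's iothread lock; the handler name and its contents are
illustrative:

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for QEMU's big QEMU lock (the "iolock"). */
    static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Sketch of the DIAG 308 subcode handling: take the iolock around the
     * reset work, mirroring what the KVM path already does (the commit
     * above notes the TCG path deadlocks without it). */
    static void handle_diag_308_stub(void)
    {
        pthread_mutex_lock(&iothread_lock);
        /* ... perform the subsystem reset and switch to the new kernel ... */
        pthread_mutex_unlock(&iothread_lock);
    }

    int main(void)
    {
        handle_diag_308_stub();
        printf("diag 308 handled under the iolock\n");
        return 0;
    }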
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171116170526.12643-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Looks like the last fix + cleanup introduced another bug (for now,
Linux guests don't seem to care): we store the crs into the ars.
Fixes: 947a38bd6f ("s390x/kvm: fix and cleanup storing CPU status")
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171116170526.12643-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Since commit ab06ec4357 we test the vmxnet3 device in the
pxe-tester, too (when running "make check SPEED=slow"). This now
revealed that the code is not working there if the host is a big
endian machine (for example ppc64 or s390x) - "make check SPEED=slow"
is now failing on such hosts.
The vmxnet3 code lacks endianness conversions in a couple of places.
Interestingly, the bitfields in the structs in vmxnet3.h already tried to
take care of the *bit* endianness of the C compilers - but the code failed
to change the *byte* endianness when reading or writing the corresponding
structs. So the bitfields are now wrapped into unions, which allow the byte
endianness to be changed at runtime via the non-bitfield member of the union.
With these changes, "make check SPEED=slow" now properly works on big endian
hosts, too.
Reported-by: David Gibson <dgibson@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: David Gibson <dgibson@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
The checksum algorithm used by IPv4, TCP and UDP allows a zero value
to be represented by either 0x0000 or 0xFFFF. But per RFC 768, a zero
UDP checksum must be transmitted as 0xFFFF because 0x0000 is a special
value meaning no checksum.
Substitute 0xFFFF whenever a checksum is computed as zero when
modifying a UDP datagram header. Doing this on IPv4 and TCP checksums
is unnecessary but legal. Add a wrapper for net_checksum_finish() that
makes the substitution.
(We can't just change net_checksum_finish(), as that function is also
used by receivers to verify checksums, and in that case the expected
value is always 0x0000.)
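A self-contained sketch of the substitution; the helpers are local
stand-ins shaped like the checksum code described above:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for net_checksum_finish(): fold a 32-bit sum into the
     * final 16-bit one's-complement checksum. */
    static uint16_t checksum_finish(uint32_t sum)
    {
        while (sum >> 16) {
            sum = (sum & 0xffff) + (sum >> 16);
        }
        return ~sum & 0xffff;
    }

    /* Wrapper used when *writing* a UDP header: per RFC 768 a computed
     * value of zero must be transmitted as 0xFFFF, because 0x0000 means
     * "no checksum". Receivers verifying checksums keep the plain finish. */
    static uint16_t checksum_finish_nozero(uint32_t sum)
    {
        uint16_t csum = checksum_finish(sum);
        return csum ? csum : 0xffff;
    }

    int main(void)
    {
        /* A payload whose one's-complement sum is 0xffff finishes to zero. */
        printf("plain:  0x%04x\n", checksum_finish(0xffff));        /* 0x0000 */
        printf("nozero: 0x%04x\n", checksum_finish_nozero(0xffff)); /* 0xffff */
        return 0;
    }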
Signed-off-by: Ed Swierk <eswierk@skyportsystems.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>