Our IDE emulation can't handle logical block sizes other than 512. Check
for it.
The original assumption was that other values would silently be ignored
(which is bad enough), but it's not quite true: the physical block size
is exposed in IDENTIFY DEVICE as a multiple of the logical block size.
Setting a logical block size therefore also corrupts the physical block
size (4096/4096 doesn't silently downgrade to 4096/512, but to 512/512).
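For illustration, the added check has roughly this shape (the exact field
path and error wording are assumptions, not the literal hunk):

    if (dev->conf.logical_block_size != 512) {
        error_report("logical_block_size must be 512 for IDE");
        return -1;   /* refuse to realize the device */
    }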
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Initialise our maximum page size capability to 64kB and increase
the page_size variable from 16 to 32 bits.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The transaction QMP command performs operations atomically on a group of
drives. This command needs to acquire AioContext in order to work
safely when virtio-blk dataplane IOThreads are accessing drives.
The transactional nature of the command means that actions are split
into prepare, commit, abort, and clean functions. Acquire the
AioContext in prepare and don't release it until one of the other
functions is called. This prevents the IOThread from running the
AioContext before the transaction has completed.
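Schematically the prepare/clean pairing works like this (a sketch with a
hypothetical state struct, not the literal patch):

    typedef struct ExampleState {
        BlockDriverState *bs;
        AioContext *aio_context;
    } ExampleState;

    static void example_prepare(ExampleState *state, Error **errp)
    {
        /* take the context so the dataplane IOThread can't run it meanwhile */
        state->aio_context = bdrv_get_aio_context(state->bs);
        aio_context_acquire(state->aio_context);
    }

    static void example_clean(ExampleState *state)
    {
        /* released only once commit or abort has run */
        aio_context_release(state->aio_context);
    }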
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1416566940-4430-4-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
SATA 3.0 "10.3.1 FIS Type values" defines the constants used to
differentiate between FIS types.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1415874281-7371-3-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Debug code using #ifdef is susceptible to bitrot because the compiler
never checks the debug code.
This is easy to avoid: change the DPRINTF() macro to use if (DEBUG_AHCI)
and always give DEBUG_AHCI a 0 or 1 value.
This also allows us to drop an #ifdef DEBUG_AHCI in ahci_start_dma()
since the compiler can now see the local variable is used.
The motivation for this change is a recent DEBUG_AHCI build failure due
to an outdated DPRINTF() format string. From now on the compiler will
catch these errors.
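The resulting macro has roughly this shape (a sketch of the pattern rather
than the exact hunk):

    #define DEBUG_AHCI 0

    #define DPRINTF(port, fmt, ...)                                   \
        do {                                                          \
            /* always compiled, but only executed when enabled */     \
            if (DEBUG_AHCI) {                                         \
                fprintf(stderr, "ahci: %s: [%d] " fmt,                \
                        __func__, port, ## __VA_ARGS__);              \
            }                                                         \
        } while (0)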
Cc: John Snow <jsnow@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1415874281-7371-2-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add dataplane support to the change-backing-file QMP commands. By
acquiring the AioContext we avoid race conditions with the dataplane
thread which may also be accessing the BlockDriverState.
Note that this command operates on both bs and a node in its chain
(image_bs). The bdrv_chain_contains(bs, image_bs) check guarantees that
bs and image_bs are in the same AioContext.
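In outline (the error message is illustrative, the helpers are the real
ones named above):

    AioContext *aio_context = bdrv_get_aio_context(bs);
    aio_context_acquire(aio_context);

    /* image_bs outside bs's chain could live in a different AioContext */
    if (bdrv_chain_contains(bs, image_bs)) {
        /* safe to change the backing file here */
    } else {
        error_setg(errp, "image is not in the backing chain of the device");
    }

    aio_context_release(aio_context);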
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
By acquiring the AioContext we avoid race conditions with the dataplane
thread which may also be accessing the BlockDriverState.
Fix up eject, change, and block_passwd in a single patch because
qmp_eject() and qmp_change_blockdev() both call eject_device(). Also
fix block_passwd while we're tackling a command that takes a block
encryption password.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add dataplane support to the blockdev-snapshot-delete-internal-sync QMP
command. By acquiring the AioContext we avoid race conditions with the
dataplane thread which may also be accessing the BlockDriverState.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1417166789-1960-1-git-send-email-arei.gonglei@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Issues:
* Doesn't check pitches correctly in case they are negative.
* Doesn't check width at all.
Turn the macro into functions while at it, and factor out the check
for a single region, which we can then simply call twice for src and dst.
This is CVE-2014-8106.
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
VirtIO devices now remember which endianness they're operating in, in order
to support targets which may have guests of either endianness, such as
powerpc. This endianness state is transferred in a subsection of the
virtio device's information.
With virtio-rng this can lead to an abort after a loadvm hitting the
assert() in virtio_is_big_endian(). This can be reproduced by doing a
migrate and load from file on a bi-endian target with a virtio-rng device.
The actual guest state isn't particularly important to triggering this.
The cause is that virtio_rng_load_device() calls virtio_rng_process() which
accesses the ring and thus needs the endianness. However,
virtio_rng_process() is called via virtio_load() before it loads the
subsections. Essentially the ->load callback in VirtioDeviceClass should
only be used for actually reading the device state from the stream, not for
post-load re-initialization.
This patch fixes the bug by moving the virtio_rng_process() after the call
to virtio_load(). Better yet would be to convert virtio to use vmsd and
have the virtio_rng_process() as a post_load callback, but that's a bigger
project for another day.
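Schematically the resulting load path is (a sketch of the ordering, with
local names abbreviated):

    ret = virtio_load(vdev, f, version_id);  /* loads subsections, incl. endianness */
    if (ret) {
        return ret;
    }
    /* only now is it safe to touch the virtqueue */
    virtio_rng_process(vrng);
    return 0;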
This is a bugfix, and should be considered for the 2.2 branch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Message-id: 1417067290-20715-1-git-send-email-david@gibson.dropbear.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
virtio_net_handle_ctrl() and other functions that process control vq
requests call iov_discard_front(), which shortens the iov. As a result,
the unmapping done in virtqueue_push() no longer covers the whole mapping
and leaks it.
Fix this by keeping the original iov untouched and using a temporary
variable in those functions.
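The pattern is roughly (sketched; the copy keeps elem.out_sg intact so
virtqueue_push() can unmap the original):

    unsigned out_num = elem.out_num;
    struct iovec *iov, *iov2;

    /* work on a copy so the original sg list stays untouched */
    iov2 = iov = g_memdup(elem.out_sg, sizeof(struct iovec) * out_num);
    iov_discard_front(&iov, &out_num, sizeof(ctrl));

    /* ... process the request using iov/out_num ... */

    virtqueue_push(vq, &elem, sizeof(status));
    g_free(iov2);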
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1417082643-23907-1-git-send-email-jasowang@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The commits:
- 6a1fa9f5 (monitor: add del completion for peripheral device)
- 66e56b13 (qdev: add qdev_build_hotpluggable_device_list helper)
cause a QEMU crash when trying to use HMP device_del auto-completion.
It can be easily reproduced by:
<qemu-bin> -enable-kvm ~/images/fedora.qcow2 -monitor stdio -device virtio-net-pci,id=vnet
(qemu) device_del
/home/mapfelba/git/upstream/qemu/hw/core/qdev.c:941:qdev_build_hotpluggable_device_list: Object 0x7f6ce04e4fe0 is not an instance of type device
Aborted (core dumped)
The root cause is qdev_build_hotpluggable_device_list going recursively over
all peripherals and their children, assuming all of them are devices. This
doesn't work, since PCI devices have at least one child that is a memory
region (bus master).
Solve this by observing that all devices appear as direct children of
the /machine/peripheral container, so there is no need to recurse
over all the children.
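In other words, roughly (real QOM helpers, surrounding details sketched):

    Object *peripheral = container_get(qdev_get_machine(), "/peripheral");

    /* direct children only; no recursion into bus-master memory regions */
    object_child_foreach(peripheral, qdev_build_hotpluggable_device_list,
                         &list);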
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reported-by: Gal Hammer <ghammer@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 1417002601-20799-1-git-send-email-marcel.a@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we dynamically modify the boot order, its length
changes, but we don't update s->files->f[i].size with the
new length. This causes SeaBIOS to read a wrong value from
the qemu cfg file for the boot order.
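The missing piece is essentially this one-liner (sketched; the size is
stored big-endian, as fw_cfg advertises it):

    /* when replacing the blob, also refresh the advertised file size */
    s->files->f[i].size = cpu_to_be32(len);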
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
c/s 9b23cfb76b or c/s b154537ad0 moved the testing of xen_enabled() from
pc_init1() to pc_machine_initfn(), but xen_enabled() does not return the
correct value in pc_machine_initfn().
Change vmport from a bool to an enum and add the value "auto" to preserve
the old behaviour. Move the check of xen_enabled() back to pc_init1().
Acked-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc, pci, misc bugfixes
A bunch of bugfixes for 2.2.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Mon 24 Nov 2014 18:59:47 GMT using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream:
pc: acpi: mark all possible CPUs as enabled in SRAT
pcie: fix improper use of negative value
pcie: fix typo in pcie_cap_deverr_init()
target-i386: move generic memory hotplug methods to DSDTs
acpi-build: mark RAM dirty on table update
hw/pci: fix crash on shpc error flow
pc: count in 1Gb hugepage alignment when sizing hotplug-memory container
pc: explicitly check maxmem limit when adding DIMM
pc: pc-dimm: use backend alignment during address auto allocation
pc: align DIMM's address/size by backend's alignment value
memory: expose alignment used for allocating RAM as MemoryRegion API
pc: limit DIMM address and size to page aligned values
pc: make pc_dimm_plug() more readble
pc: kvm: check if KVM has free memory slots to avoid abort()
qemu-char: fix tcp_get_fds
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If QEMU is started with -numa ..., Windows only notices that
a CPU has been hot-added but will not online such CPUs.
This is caused by the fact that possible CPUs are flagged as
not enabled in the SRAT, and Windows, honoring that information,
doesn't use the corresponding CPUs.
The ACPI 5.0 spec says, regarding this flag:
"
Table 5-47 Local APIC Flags
...
Enabled: if zero, this processor is unusable, and the operating system
support will not attempt to use it.
"
Fix QEMU to adhere to the spec and mark possible CPUs as enabled
in the SRAT.
With that, Windows onlines hot-added CPUs as expected.
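In the SRAT-building code this boils down to setting the flag
unconditionally for every possible CPU, roughly:

    /* mark every possible CPU as enabled, not just the present ones */
    core->flags = cpu_to_le32(1);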
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reported-by: https://bugs.launchpad.net/qemu/+bug/1393440
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This makes it simpler to keep the SSDT byte-for-byte identical for a
given machine type, which is a goal we want to have for 2.2 and newer
types.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The ACPI build modifies internal FW CFG RAM on first access,
but we forgot to mark it dirty.
If this RAM has already been migrated, it won't be
migrated again, and the guest ends up with corrupted tables.
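Conceptually the fix is a one-liner after the in-place update (the names
here stand for the updated region, its backing pointer and length):

    memcpy(ptr, data, size);
    /* tell migration that this part of guest RAM changed */
    memory_region_set_dirty(mr, 0, size);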
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
If the PCI bridge enters the error flow as part
of the init process, it only deletes the shpc mmio
subregion but does not remove it from the properties list,
resulting in a segmentation fault when the bridge runs
its exit function.
Example: add a pci bridge without specifying the chassis number:
<qemu-bin> ... -device pci-bridge,id=p1
Result:
(qemu) qemu-system-x86_64: -device pci-bridge,id=p1: Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
qemu-system-x86_64: -device pci-bridge,id=p1: Device
initialization failed.
Segmentation fault (core dumped)
if (child->class->unparent) {
#0 0x00005555558d629b in object_finalize_child_property (obj=0x555556d2e830, name=0x555556d30630 "shpc-mmio[0]", opaque=0x555556a42fc8) at qom/object.c:1078
#1 0x00005555558d4b1f in object_property_del_all (obj=0x555556d2e830) at qom/object.c:367
#2 0x00005555558d4ca1 in object_finalize (data=0x555556d2e830) at qom/object.c:412
#3 0x00005555558d55a1 in object_unref (obj=0x555556d2e830) at qom/object.c:720
#4 0x000055555572c907 in qdev_device_add (opts=0x5555563544f0) at qdev-monitor.c:566
#5 0x0000555555744f16 in device_init_func (opts=0x5555563544f0, opaque=0x0) at vl.c:2213
#6 0x00005555559cf5f0 in qemu_opts_foreach (list=0x555555e0f8e0 <qemu_device_opts>, func=0x555555744efa <device_init_func>, opaque=0x0, abort_on_failure=1) at util/qemu-option.c:1057
#7 0x000055555574a11b in main (argc=16, argv=0x7fffffffdde8, envp=0x7fffffffde70) at vl.c:423
Unparent the shpc mmio region as part of shpc cleanup.
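I.e. the cleanup path gains roughly:

    /* drop the child property created for the mmio subregion, otherwise
     * object_finalize_child_property() trips over it on device exit */
    object_unparent(OBJECT(&shpc->mmio));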
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
If DIMMs with different sizes/alignments are interleaved
in creation order, it can lead to hotplug-memory
container fragmentation and a subsequent inability to use
all RAM up to maxmem.
For example:
-m 4G,slots=3,maxmem=7G
-object memory-backend-file,id=mem-1,size=256M,mem-path=/pagesize-2MB
-device pc-dimm,id=mem1,memdev=mem-1
-object memory-backend-file,id=mem-2,size=1G,mem-path=/pagesize-1GB
-device pc-dimm,id=mem2,memdev=mem-2
-object memory-backend-file,id=mem-3,size=256M,mem-path=/pagesize-2MB
-device pc-dimm,id=mem3,memdev=mem-3
fragments the hotplug-memory container and doesn't allow
the 1GB hugepage backend to consume the remaining 1GB.
To ease management, factor in a maximum 1GB alignment for
each memory slot when sizing the hotplug-memory region, so
that regardless of fragmentation it is still possible to
add a maximally aligned DIMM.
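When sizing the container this amounts to reserving up to 1GB of slack per
slot, roughly:

    /* worst case: every slot has to be realigned to a 1GB boundary */
    hotplug_mem_size = machine->maxram_size - machine->ram_size +
                       machine->ram_slots * (1ULL << 30);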
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Currently the maxmem limit is not checked explicitly; we rely on
the hotplug region container not being able to fit more RAM
than maxmem. Check it explicitly, so that it becomes
possible to change the hotplug container size later
to deal with fragmentation.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This fixes another failure with ExtINT, demonstrated by QNX. The failure
mode is as follows:
- IPI sent to cpu 0 (bit set in APIC irr)
- IPI accepted by cpu 0 (bit cleared in irr, set in isr)
- IPI sent to cpu 0 (bit set in both irr and isr)
- PIC interrupt sent to cpu 0
The PIC interrupt causes CPU_INTERRUPT_HARD to be set, but
apic_irq_pending observes that the highest pending APIC interrupt priority
(the IPI) is the same as the processor priority (since the IPI is still
being handled), so apic_get_interrupt returns a spurious interrupt rather
than the pending PIC interrupt. The result is an endless sequence of
spurious interrupts, since nothing will clear CPU_INTERRUPT_HARD.
Instead, ExtINT interrupts should have ignored the processor priority.
Calling apic_check_pic early in apic_get_interrupt ensures that
apic_deliver_pic_intr is called instead of delivering the spurious
interrupt. apic_deliver_pic_intr then clears CPU_INTERRUPT_HARD if needed.
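The ordering change is essentially (sketched around the helpers named
above):

    intno = apic_irq_pending(s);

    /* if the 8259 has something pending, let the caller handle that first,
     * since ExtINT must not be blocked by the processor priority */
    if (intno == 0 || apic_check_pic(s)) {
        return -1;   /* no APIC vector; the PIC interrupt is taken instead */
    }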
Reported-by: Richard Bilson <rbilson@qnx.com>
Tested-by: Richard Bilson <rbilson@qnx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch fixes an obscure failure of the QNX kernel on QEMU x86 SMP.
In QNX, all hardware interrupts come via the PIC, and are delivered by
the cpu 0 LAPIC in ExtINT mode, while IPIs are delivered by the LAPIC
in fixed mode.
This bug happens as follows:
- cpu 0 masks a particular PIC interrupt
- IPI sent to cpu 0 (CPU_INTERRUPT_HARD is set)
- before the IPI is accepted, the masked interrupt line is asserted by the
device
Since the interrupt is masked, apic_deliver_pic_intr will clear
CPU_INTERRUPT_HARD. The IPI will still be set in the APIC irr, but since
CPU_INTERRUPT_HARD is not set the cpu will not notice. Depending on the
scenario this can cause a system hang, i.e. if cpu 0 is expected to unmask
the interrupt.
In order to fix this, do a full check of the APIC before an ExtINT
is acknowledged. This can result in clearing CPU_INTERRUPT_HARD, but
can also result in delivering the lost IPI.
Reported-by: Richard Bilson <rbilson@qnx.com>
Tested-by: Richard Bilson <rbilson@qnx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After the next patch, if a masked PIC interrupt causes CPU_INTERRUPT_POLL
to be set, the CPU will spuriously get out of halted state. While this
is technically valid, we should avoid that.
Make CPU_INTERRUPT_POLL run apic_update_irq in the right thread and then
look at CPU_INTERRUPT_HARD. If CPU_INTERRUPT_HARD does not get set,
do not report the CPU as having work.
Also move the handling of software-disabled APIC from apic_update_irq
to apic_irq_pending, and always trigger CPU_INTERRUPT_POLL. This will
be important once we add a case that resets CPU_INTERRUPT_HARD
from apic_update_irq. We want to run it even if we go through
CPU_INTERRUPT_POLL, and even if the local APIC is software disabled.
Reported-by: Richard Bilson <rbilson@qnx.com>
Tested-by: Richard Bilson <rbilson@qnx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Performance-wise it's better to align the GVA to the backend's
page size.
Also do not allow creating a DIMM device with a suboptimal
size (i.e. not aligned to the backend's page size), to avoid
memory loss.
Do the above only for 2.2 and newer machine types to avoid
breaking working configs with the 2.1 machine type.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
When running in KVM mode, kvm_set_phys_mem() will silently
fail if the registered MemoryRegion address/size is not page
aligned, causing memory hotplug failure in the guest.
Mapping a non-aligned MemoryRegion in TCG mode 'works', but a
sane guest OS still expects a page-aligned memory module
and fails to initialize it if it's not aligned.
So do not allow non-aligned (i.e. invalid) address/size
values for a DIMM, to avoid either the KVM failure or the
guest issues it causes.
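The added validation has this shape (sketched; the exact error wording
differs):

    if ((addr & (TARGET_PAGE_SIZE - 1)) ||
        (memory_region_size(mr) & (TARGET_PAGE_SIZE - 1))) {
        error_setg(errp, "DIMM address and size must be %d byte aligned",
                   TARGET_PAGE_SIZE);
        return;
    }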
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Split the addr initialization from its declaration so that
later, when new local variables are added, the property getter
doesn't drift away from its error check.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
When more memory devices are used than available
KVM memory slots, QEMU crashes with:
kvm_alloc_slot: no free slot available
Aborted (core dumped)
Fix this by checking that KVM has a free slot before
attempting to map memory into the guest address space.
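The guard added before mapping is along these lines (the helper name is
illustrative):

    if (kvm_enabled() && !kvm_has_free_slot(machine)) {
        error_setg(errp, "hypervisor has no free memory slots left");
        return;
    }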
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Coverity spot:
Assigning: iov = struct iovec [3]({{buf, 12UL},
{(void *)dot1q_buf, 4UL},
{buf + 12, size - 12}})
(address of temporary variable of type struct iovec [3]).
out_of_scope: Temporary variable of type struct iovec [3] goes out of scope.
Pointer to local outside scope (RETURN_LOCAL)
use_invalid:
Using iov, which points to an out-of-scope temporary variable of type struct iovec [3].
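The bug pattern Coverity is flagging looks like this (simplified):

    struct iovec *iov = NULL;

    if (dot1q_buf) {
        struct iovec vlan_iov[3] = {
            { .iov_base = buf,               .iov_len = 12 },
            { .iov_base = (void *)dot1q_buf, .iov_len = 4 },
            { .iov_base = buf + 12,          .iov_len = size - 12 },
        };
        iov = vlan_iov;   /* vlan_iov's lifetime ends at the closing brace */
    }
    /* iov now dangles; the fix hoists the array into this outer scope */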
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
s->xmit_pos may be assigned a negative value (-1),
but in this branch s->xmit_pos is used as an index into the
array s->buffer. Let's add a check for s->xmit_pos.
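The added guard is essentially:

    if (s->xmit_pos < 0) {
        /* no frame buffered yet; don't use -1 to index s->buffer */
        return;
    }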
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
ePAPR 1.1 defines the stdout-path property, making the os-specific
linux,stdout-path property redundant. Change the DT setup for ARM virt
to use the generic property - supported by Linux since 3.15.
The old QEMU behaviour was not present in any released version of
QEMU, and was only added to QEMU after the kernel changed, so
this should not break any existing setups.
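The DT change amounts to writing the generic property name (nodename
stands for the UART's device tree path):

    qemu_fdt_setprop_string(fdt, "/chosen", "stdout-path", nodename);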
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
[PMM: add note to commit about the old behaviour never having been
in a released version of QEMU]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The other callers to blk_set_enable_write_cache() in this file
already check for s->blk == NULL.
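The change is a guard of this form (wce stands for the requested
write-cache setting):

    if (s->blk) {
        blk_set_enable_write_cache(s->blk, wce);
    }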
Signed-off-by: Don Slutz <dslutz@verizon.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1416259239-13281-1-git-send-email-dslutz@verizon.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
usb_ep_get and usb_handle_packet can deal with a NULL device, but we have
to avoid dereferencing NULL pointers when building the id.
Thanks to Gonglei for an initial stab at fixing this.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Operands don't affect result (CONSTANT_EXPRESSION_RESULT)
((n->bar.aqa >> AQA_ASQS_SHIFT) & AQA_ASQS_MASK) > 4095
is always false regardless of the values of its operands.
This occurs as the logical second operand of '||'.
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
lseek() will return -1 on error, and the g_malloc0(size) and read(,,size)
parameters cannot be negative. We should add a check for the return
value of lseek().
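The check has this shape (sketched):

    size = lseek(fd, 0, SEEK_END);
    if (size < 0) {
        /* don't feed a negative size to g_malloc0()/read() */
        error_report("can't get size of file: %s", strerror(errno));
        return -1;
    }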
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We may pass the freed pointer 'filename' as an argument to error_report().
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch fixes two issues with persistent grants and the disk PV backend
(Qdisk):
- Keep track of memory regions where persistent grants have been mapped
since we need to unmap them as a whole. It is not possible to unmap a
single grant if it has been batch-mapped. A new check has also been added
to make sure persistent grants are only used if the whole mapped region
can be persistently mapped in the batch_maps case.
- Unmap persistent grants before switching to the closed state, so the
frontend can also free them.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reported-by: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
If the user starts QEMU with "-machine pc,accel=xen", then
the compat property in xenfv won't work, causing the error
"Unsupported bus. Bus doesn't have property 'acpi-pcihp-bsel' set"
when a PCI device is added with -device on the QEMU CLI.
From: Igor Mammedov <imammedo@redhat.com>
In the case of Xen, instead of using the compat property, just use the
fact that Xen doesn't use QEMU's fw_cfg/ACPI tables, and switch piix4_pm
into legacy PCI hotplug mode when Xen is enabled.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Li Liang <liang.z.li@intel.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
In order to make handle_cmd more readable at the macro level,
the details of how to decompose particular types of FIS packets
are left to helper functions.
In our case, the only type of FIS packet we currently expect to
see is a Register H2D FIS packet, but the gory details of its
decomposition are of no particular interest in handle_cmd.
This patch keeps the receipt of FIS packets and their decomposition
separated into two different functions.
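After the split, handle_cmd only dispatches on the FIS type, roughly
(helper name sketched, FIS type constant from earlier in this series):

    switch (cmd_fis[0]) {
    case SATA_FIS_TYPE_REGISTER_H2D:
        handle_reg_h2d_fis(s, port, slot, cmd_fis);
        break;
    default:
        DPRINTF(port, "unknown FIS type 0x%02x\n", cmd_fis[0]);
        break;
    }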
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1415058979-16604-6-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of checking for a known byte, inspect the
fields of this byte explicitly to produce more meaningful
error messages and improve the readability of this section.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1415058979-16604-5-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Error checking in ahci's handle_cmd is re-ordered so that we
initialize as few things as possible before we've done our
sanity checking. This simplifies returning from this call
in case of an error.
A check to make sure the DMA memory map succeeds with the
correct size is also added, and the debug print of the
command fis is cleaned up with its size corrected.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1415058979-16604-4-git-send-email-jsnow@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>