Commit Graph

47546 Commits

Author SHA1 Message Date
Igor Mammedov
4da7faaeb0 pc: Enforce adding CPUs contiguously and removing them in opposite order
This still allows us to use cpu_index as the migration instance_id,
since when CPUs are added contiguously (from the first to the last)
and removed in the opposite order, cpu_index stays stable and is
reproducible on the destination side.

While there is work in progress to support migration when there
are holes in the cpu_index range resulting from out-of-order plug
or unplug, this patch is intended as an interim solution until
cpu_index usage is cleaned up.

As a result of this patch it is possible to plug/unplug CPUs,
but only in an order that doesn't break migration.
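
For illustration, a tiny standalone sketch of the ordering rule
(hypothetical helpers, not the actual hotplug handler):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical illustration only: CPUs may be plugged only at the
     * next free cpu_index and unplugged only from the highest one, so
     * the index range stays contiguous and is reproducible on the
     * migration destination. */
    static bool cpu_plug_allowed(int requested_index, int present_cpus)
    {
        return requested_index == present_cpus;      /* append only */
    }

    static bool cpu_unplug_allowed(int requested_index, int present_cpus)
    {
        return requested_index == present_cpus - 1;  /* remove last only */
    }

    int main(void)
    {
        int present = 2;

        printf("plug idx 2:   %s\n", cpu_plug_allowed(2, present) ? "ok" : "denied");
        printf("plug idx 5:   %s\n", cpu_plug_allowed(5, present) ? "ok" : "denied");
        printf("unplug idx 0: %s\n", cpu_unplug_allowed(0, present) ? "ok" : "denied");
        return 0;
    }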

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:19 -03:00
Igor Mammedov
73360e2785 pc: Forbid BSP removal
The boot CPU is assumed to always be present in QEMU code, so
until that assumption is gone, deny removal requests.
In other words, QEMU won't support BSP hot-unplug.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:19 -03:00
Igor Mammedov
a44a49dbf2 pc: Register created initial and hotplugged CPUs in one place, pc_cpu_plug()
Consolidate possible_cpus array management in pc_cpu_plug() for
the initial smp_cpus, CPUs coldplugged with -device, and CPUs
hotplugged with device_add.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
ba157b696c pc: Delay setting number of boot CPUs to machine_done time
Currently the present-CPUs counter in CMOS only contains
smp_cpus (i.e. the initial CPUs specified with -smp X) and
doesn't account for CPUs created with -device.
If the VM is started with additional CPUs added with
-device, it will hang in the BIOS waiting for the condition
   smp_cpus == counted_cpus
forever, as counted_cpus will include the -device CPUs as well
and be greater than smp_cpus.

Make the present-CPUs counter in CMOS count all CPUs
(initial and those coldplugged with -device) by delaying
the update to machine_done time, when it is possible to count
the CPUs added with -device.
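
A standalone sketch of the idea, assuming the CMOS byte QEMU/SeaBIOS
use for the CPU count (offset 0x5f, holding count minus one); the
helper names are made up:

    #include <stdint.h>
    #include <stdio.h>

    /* Standalone sketch, not the QEMU code: the CMOS byte the firmware
     * reads for the CPU count (offset 0x5f, holding count minus one) is
     * only written once machine setup is done, so CPUs created with
     * -device are already included. */
    static uint8_t cmos[128];

    static void cmos_set_boot_cpus(unsigned boot_cpus)
    {
        cmos[0x5f] = (uint8_t)(boot_cpus - 1);
    }

    int main(void)
    {
        unsigned smp_cpus = 2, device_cpus = 2;

        /* called from a machine-done notifier, after -device CPUs exist */
        cmos_set_boot_cpus(smp_cpus + device_cpus);
        printf("CMOS 0x5f = %u (CPUs - 1)\n", (unsigned)cmos[0x5f]);
        return 0;
    }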

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
e8f7b83e88 pc: Set APIC ID based on socket/core/thread ids if it's not been set yet
A CPU added with device_add won't have its APIC ID set,
so set it according to the socket/core/thread IDs provided
with the device_add command.
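
The packing follows the usual x86 scheme (thread, core and package
fields, each ceil(log2(n)) bits wide); a standalone sketch rather than
the actual topology helper:

    #include <stdint.h>
    #include <stdio.h>

    /* Standalone sketch of the standard x86 APIC ID packing
     * (pkg/core/thread fields, each ceil(log2(n)) bits wide);
     * QEMU's topology helpers follow the same scheme. */
    static unsigned bits_for(unsigned count)
    {
        unsigned bits = 0;
        while ((1u << bits) < count) {
            bits++;
        }
        return bits;
    }

    static uint32_t apicid_from_topo(unsigned nr_cores, unsigned nr_threads,
                                     unsigned socket, unsigned core,
                                     unsigned thread)
    {
        unsigned smt_bits = bits_for(nr_threads);
        unsigned core_bits = bits_for(nr_cores);

        return (socket << (core_bits + smt_bits)) |
               (core << smt_bits) |
               thread;
    }

    int main(void)
    {
        /* e.g. -smp cores=4,threads=2: socket 1, core 2, thread 1 */
        printf("apic-id = %u\n", apicid_from_topo(4, 2, 1, 2, 1));
        return 0;
    }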

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
67e55caa6d target-i386: Fix apic object leak when CPU is deleted
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
6816b1b381 target-i386: cpu: Do not ignore error and fix apic parent
object_property_add_child() silently fails with an error that it can't
create a duplicate property 'apic', as we already have an 'apic' property
registered for the 'apic' feature. As a result the generic device_realize
puts the apic into the unattached container.

As it's a programming error, abort if a name collision happens in the
future, and fix the property name for apic_state to 'lapic'; this way
the apic is a child of the CPU instance.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Paolo Bonzini
c2f193b538 target-i386: Add support for UMIP and RDPID CPUID bits
These are both stored in CPUID[EAX=7,EBX=0].ECX.  KVM is going to
be able to emulate both (albeit with a performance loss in the case
of RDPID, which therefore will be in KVM_GET_EMULATED_CPUID rather
than KVM_GET_SUPPORTED_CPUID).

It's also possible to implement both in TCG, but this is for 2.8.
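
A quick standalone check of the two bits, which live at ECX bit 2
(UMIP) and bit 22 (RDPID) of that leaf:

    #include <cpuid.h>
    #include <stdio.h>

    /* Sketch: query the host for the two new feature bits.  Both live
     * in CPUID[EAX=7,ECX=0].ECX: UMIP is bit 2, RDPID is bit 22. */
    int main(void)
    {
        unsigned eax, ebx, ecx, edx;

        __cpuid_count(7, 0, eax, ebx, ecx, edx);
        printf("UMIP:  %s\n", (ecx & (1u << 2))  ? "yes" : "no");
        printf("RDPID: %s\n", (ecx & (1u << 22)) ? "yes" : "no");
        return 0;
    }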

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
d89c2b8b98 target-i386: Add socket/core/thread properties to X86CPU
These properties will be used as the address at which to plug
a CPU with the -device/device_add commands.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
2da00e3176 target-i386: Replace custom apic-id setter/getter with static property
The custom apic-id setter/getter doesn't do any property-specific
checks anymore, so clean it up and use the more compact static
property DEFINE_PROP_UINT32 instead.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:18 -03:00
Igor Mammedov
4ec60c76d5 pc: cpu: Consolidate apic-id validity checks in pc_cpu_pre_plug()
The machine code knows about all possible APIC IDs, so use that
instead of a hack which does duplicate checks with O(n^2)
complexity, iterating over the global CPUs list.
As a result the duplicate check is done only once, with O(log n) complexity.
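
One way to get the O(log n) behaviour is a binary search over the
sorted list of possible APIC IDs; a standalone sketch, not the actual
pc_cpu_pre_plug() code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* possible_cpus is known up front and can be kept sorted by APIC
     * ID, so validity/duplicate checks become a single bsearch(). */
    typedef struct {
        uint32_t apic_id;
        bool present;
    } PossibleCpu;

    static int cmp_apic_id(const void *key, const void *elem)
    {
        uint32_t id = *(const uint32_t *)key;
        const PossibleCpu *cpu = elem;
        return (id > cpu->apic_id) - (id < cpu->apic_id);
    }

    static const char *check_apic_id(PossibleCpu *cpus, size_t n, uint32_t id)
    {
        PossibleCpu *slot = bsearch(&id, cpus, n, sizeof(*cpus), cmp_apic_id);

        if (!slot) {
            return "invalid APIC ID";
        }
        return slot->present ? "duplicate APIC ID" : "ok";
    }

    int main(void)
    {
        PossibleCpu cpus[] = { {0, true}, {1, false}, {2, false}, {3, false} };

        printf("%s\n", check_apic_id(cpus, 4, 0));  /* duplicate */
        printf("%s\n", check_apic_id(cpus, 4, 2));  /* ok */
        printf("%s\n", check_apic_id(cpus, 4, 9));  /* invalid */
        return 0;
    }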

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 12:02:17 -03:00
Igor Mammedov
7baef5cfea pc: Extract CPU lookup into a separate function
It will be reused in the next patch at pre_plug time.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:44 -03:00
Dr. David Alan Gilbert
11f6fee576 target-i386: Set physical address bits based on host
Add the host-phys-bits boolean property; if true, take phys-bits
from the host's physical bits value, overriding either the default
or the user-specified value.

We can also use the value we read from the host to check the user's
explicitly set value and warn them if it doesn't match.

Note:
   a) We only read the host's value in KVM mode (because on non-x86
      we get an abort if we try)
   b) We don't warn about trying to use host-phys-bits in TCG mode;
      we just fall back to the TCG default.  This allows the machine
      type to set the host-phys-bits flag if it wants and then to
      work in both TCG and KVM.
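
The host value comes from CPUID leaf 0x80000008, which reports the
physical address width in EAX[7:0]; a standalone sketch of that
lookup (not the QEMU helper itself):

    #include <cpuid.h>
    #include <stdio.h>

    /* Sketch: read the host's physical address width.  CPUID leaf
     * 0x80000008 returns it in EAX bits 7:0 on both Intel and AMD. */
    static unsigned host_phys_bits(void)
    {
        unsigned eax, ebx, ecx, edx;

        if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            return eax & 0xff;
        }
        return 36;  /* commonly assumed fallback for CPUs without the leaf */
    }

    int main(void)
    {
        printf("host physical address bits: %u\n", host_phys_bits());
        return 0;
    }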

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:44 -03:00
Igor Mammedov
9f3aab5853 pc: Add x86_topo_ids_from_apicid()
It's the reverse of apicid_from_topo_ids() and will be used in follow-up
patches to fill in data structures for query-hotpluggable-cpus and
for user-friendly error reporting.
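
The reverse mapping just peels the same bit fields off again; a
standalone sketch with the usual ceil(log2(n)) field widths, not the
actual helper:

    #include <stdint.h>
    #include <stdio.h>

    /* Standalone sketch: split an APIC ID back into pkg/core/thread
     * IDs using the same ceil(log2(n))-bit field widths used when
     * packing. */
    static unsigned bits_for(unsigned count)
    {
        unsigned bits = 0;
        while ((1u << bits) < count) {
            bits++;
        }
        return bits;
    }

    static void topo_from_apicid(uint32_t apicid, unsigned nr_cores,
                                 unsigned nr_threads, unsigned *pkg,
                                 unsigned *core, unsigned *thread)
    {
        unsigned smt_bits = bits_for(nr_threads);
        unsigned core_bits = bits_for(nr_cores);

        *thread = apicid & ((1u << smt_bits) - 1);
        *core = (apicid >> smt_bits) & ((1u << core_bits) - 1);
        *pkg = apicid >> (smt_bits + core_bits);
    }

    int main(void)
    {
        unsigned pkg, core, thread;

        topo_from_apicid(13, 4, 2, &pkg, &core, &thread);
        printf("socket %u, core %u, thread %u\n", pkg, core, thread);
        return 0;
    }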

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:44 -03:00
Igor Mammedov
d9c84f1969 target-i386: Use uint32_t for X86CPU.apic_id
Redo 9886e834 (target-i386: Require APIC ID to be explicitly set before
CPU realize) in another way that doesn't use int64_t to detect
whether the apic-id property has been set.

Use the fact that 0xFFFFFFFF is the broadcast value,
which a CPU can't have as its own ID, and set the default
uint32_t apic_id to it instead of using int64_t.

Later, the uint32_t apic_id will be used to drop the custom
property setter/getter in favor of a static property.
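
A minimal sketch of the sentinel idea (the macro name is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the sentinel approach: 0xFFFFFFFF is the APIC broadcast
     * destination, so no CPU can own it and it can safely mean "not set". */
    #define UNASSIGNED_APIC_ID 0xFFFFFFFFu

    int main(void)
    {
        uint32_t apic_id = UNASSIGNED_APIC_ID;  /* property default */

        if (apic_id == UNASSIGNED_APIC_ID) {
            printf("apic-id was never set, refuse to realize the CPU\n");
        }
        return 0;
    }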

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:44 -03:00
Dr. David Alan Gilbert
fcc35e7cca target-i386: Fill high bits of mtrr mask
Fill the bits from the number of physical address bits up to bit 51
in the MTRR_PHYSMASKn variable-range MTRR masks so that they're
consistent in the migration stream irrespective of the physical
address space of the source VM in a migration.
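
Roughly, the source fills the reserved bits above its physical address
width up to bit 51 before the mask enters the stream; a standalone
sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: make MTRR_PHYSMASKn look the same in the migration stream
     * regardless of the source CPU's physical address width by setting
     * the bits from phys_bits up to bit 51 (the architectural maximum). */
    static uint64_t mtrr_mask_for_stream(uint64_t mask, unsigned phys_bits)
    {
        if (phys_bits < 52) {
            mask |= ~((1ULL << phys_bits) - 1) & ((1ULL << 52) - 1);
        }
        return mask;
    }

    int main(void)
    {
        uint64_t mask = 0x0000000f80000800ull;  /* example PHYSMASK value */

        printf("36-bit source: %#llx\n",
               (unsigned long long)mtrr_mask_for_stream(mask, 36));
        printf("46-bit source: %#llx\n",
               (unsigned long long)mtrr_mask_for_stream(mask, 46));
        return 0;
    }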

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:44 -03:00
Dr. David Alan Gilbert
112dad69d7 target-i386: Mask mtrr mask based on CPU physical address limits
The CPU GPs if we try to set a bit in a variable MTRR mask above
the limit of physical address bits on the host.  We hit this
when loading a migration from a host with a larger physical
address limit than our destination (e.g. a Xeon->i7 of the same
generation), but we previously got away with it
until 48e1a45 started checking that MSR writes actually worked.

It seems in our case the GP probably comes from KVM emulating
that GP.
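
The corresponding fix is to clamp the mask to the CPU's physical
address width before the MSR is written; a minimal standalone sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: clamp an incoming MTRR_PHYSMASKn to the CPU's physical
     * address width so the MSR write can't fault on the destination. */
    static uint64_t mtrr_mask_clamp(uint64_t mask, unsigned phys_bits)
    {
        return mask & ((1ULL << phys_bits) - 1);
    }

    int main(void)
    {
        uint64_t mask = 0x0000ffff80000800ull;  /* stream from a 48-bit source */

        printf("clamped to 39 bits: %#llx\n",
               (unsigned long long)mtrr_mask_clamp(mask, 39));
        return 0;
    }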

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:43 -03:00
Dr. David Alan Gilbert
af45907a13 target-i386: Allow physical address bits to be set
Currently QEMU sets the x86 number of physical address bits to the
magic number 40.  This is only correct on some small AMD systems;
Intel systems tend to have 36, 39, 46 bits, and large AMD systems
tend to have 48.

Having a value different from your actual hardware is detectable
by the guest and in principle can cause problems.
The current limit of 40 prevents TB-sized VMs from being created
by those lucky enough to have that much memory.

This patch lets you set the physical bits via a CPU property, but
defaults to the same 40 bits, which matches TCG's setup.

I've removed the ancient warning about the 42 bit limit in exec.c;
I can't find that limit in there and no one else seems to know where
it is.

We use a magic value of 0 as the property default so that we can
later distinguish between the default and a user-set value.
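
Combined with the host-phys-bits property from the later patch above,
the effective width resolution looks roughly like this sketch (not the
actual realize code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch of the precedence: host-phys-bits (KVM only) beats an
     * explicit phys-bits, which beats the legacy default of 40. */
    static unsigned resolve_phys_bits(bool host_phys_bits, bool kvm_enabled,
                                      unsigned host_bits, unsigned user_bits)
    {
        if (host_phys_bits && kvm_enabled) {
            if (user_bits && user_bits != host_bits) {
                fprintf(stderr, "warning: phys-bits %u != host %u\n",
                        user_bits, host_bits);
            }
            return host_bits;
        }
        if (user_bits) {          /* 0 means "not set by the user" */
            return user_bits;
        }
        return 40;                /* historic QEMU/TCG default */
    }

    int main(void)
    {
        printf("%u\n", resolve_phys_bits(true, true, 46, 0));   /* 46 */
        printf("%u\n", resolve_phys_bits(false, true, 46, 39)); /* 39 */
        printf("%u\n", resolve_phys_bits(false, false, 0, 0));  /* 40 */
        return 0;
    }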

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:43 -03:00
Dr. David Alan Gilbert
709787ee99 target-i386: Provide TCG_PHYS_ADDR_BITS
Provide a constant for the number of address bits supported under TCG.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2016-07-20 11:58:43 -03:00
Cornelia Huck
2a79eb1a61 s390x/css: provide a dev_path for css devices
We need to implement the get_dev_path method for the css bus, or
else we might end up with two different devices having the same
qdev_path.

This was noticed when adding two scsi_hd controllers: The SCSIBus
code will produce a non-unique dev_path for vmstate usage if the
parent bus does not provide the get_dev_path method.

We simply use the device's bus id, as this is unique and we won't
have any deeper hierarchy from a channel subsystem perspective
anyway.

Note that we need to disable this for older machine versions,
as this changes the migration format.

Reported-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Tested-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2016-07-20 15:47:25 +02:00
Sascha Silbe
727a0424dd s390x/css: sch_handle_start_func() handles resume, too
It's not obvious from the code flow that sch_handle_start_func() gets
called for rsch. Add some comments explaining this.

Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2016-07-20 15:47:25 +02:00
Sascha Silbe
485dd69088 s390x/css: copy CCW format bit from ORB to SCSW
The CCW Format (F) flag of the Subchannel-Status Word (SCSW) indicates
the format of the CCWs "associated with an I/O operation", i.e. the
value of CCW-Format Control (F) bit of the Operation-Request Block
(ORB).

Copy the CCW format bit from the ORB to the SCSW so we correctly
indicate the format of the CCWs to the guest.

Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2016-07-20 15:47:25 +02:00
Peter Maydell
3b2e6798ff QAPI patches for 2016-07-19

Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2016-07-19' into staging

QAPI patches for 2016-07-19

# gpg: Signature made Tue 19 Jul 2016 19:35:27 BST
# gpg:                using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-qapi-2016-07-19:
  net: Use correct type for bool flag
  qapi: Change Netdev into a flat union
  block: Simplify drive-mirror
  block: Simplify block_set_io_throttle
  qapi: Implement boxed types for commands/events
  qapi: Plumb in 'boxed' to qapi generator lower levels
  qapi-event: Simplify visit of non-implicit data
  qapi: Drop useless gen_err_check()
  qapi: Add type.is_empty() helper
  qapi: Hide tag_name data member of variants
  qapi: Special case c_name() for empty type
  qapi: Require all branches of flat union enum to be covered
  net: use Netdev instead of NetClientOptions in client init
  qapi: change QmpInputVisitor to QSLIST
  qapi: change QmpOutputVisitor to QSLIST

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-07-20 14:34:08 +01:00
Eric Blake
decaeed773 raw_bsd: Convert to byte-based interface
Since the raw format driver is just passing things through, we can
do byte-based read and write if the underlying protocol does
likewise.

There's one tricky part - if we probed the image format, we document
that we restrict operations on the initial sector.  It's easiest to
keep this guarantee by enforcing read-modify-write on sub-sector
operations (yes, this partially reverts commit ad82be2f).

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 1468624988-423-20-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
70c4fb2648 nbd: Convert to byte-based interface
The NBD protocol doesn't have any notion of sectors, so it is
a fairly easy conversion to use byte-based read and write.

Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468624988-423-19-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
02aefe43cb block: Kill .bdrv_co_discard()
Now that all drivers have a byte-based .bdrv_co_pdiscard(), we
no longer need to worry about the sector-based version.  We can
also relax our minimum alignment to 1 for drivers that support it.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-18-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
dde4753763 sheepdog: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-17-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
5f61ad079a raw_bsd: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-16-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
82e8a7888b qcow2: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-15-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
447e57c3b0 nbd: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

While at it, call directly into nbd-client.c instead of having
a pointless trivial wrapper in nbd.c.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-14-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
97c7e85cfe iscsi: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

Unlike write_zeroes, where we can be handed unaligned requests
and must fail gracefully with -ENOTSUP for a fallback, we are
guaranteed that discard requests are always aligned because the
block layer already ignored unaligned head/tail.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-13-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:24:25 +01:00
Eric Blake
1014170b82 gluster: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-12-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
aba76e2f03 blkreplay: Switch .bdrv_co_discard() to byte-based
Another step towards killing off sector-based block APIs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-11-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
47a5486d59 block: Add .bdrv_co_pdiscard() driver callback
There are enough drivers with a sector-based callback that it will
be easier to switch them one at a time.  This patch adds a byte-based
callback, and then after all drivers are switched, we'll drop the
sector-based callback.
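
A standalone sketch of the transition shim this enables, with
hypothetical stand-ins for the BlockDriver hooks:

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    #define SECTOR_BITS 9   /* block layer sectors are 512 bytes */

    /* Sketch of the transition shim: prefer a byte-based driver
     * callback, otherwise fall back to the sector-based one (the
     * callbacks are hypothetical stand-ins for the BlockDriver hooks). */
    typedef int (*pdiscard_fn)(int64_t offset, int64_t bytes);
    typedef int (*discard_fn)(int64_t sector_num, int nb_sectors);

    static int do_discard(pdiscard_fn pdiscard, discard_fn discard,
                          int64_t offset, int64_t bytes)
    {
        if (pdiscard) {
            return pdiscard(offset, bytes);
        }
        /* legacy path: the request is already sector-aligned here */
        return discard(offset >> SECTOR_BITS, (int)(bytes >> SECTOR_BITS));
    }

    static int old_discard(int64_t sector_num, int nb_sectors)
    {
        printf("sector-based: sector=%lld count=%d\n",
               (long long)sector_num, nb_sectors);
        return 0;
    }

    int main(void)
    {
        do_discard(NULL, old_discard, 1 << 20, 64 * 512);
        return 0;
    }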

[checkpatch doesn't like the space after coroutine_fn in
block_int.h, but it's consistent with the rest of the file]

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-10-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
4da444a0bb block: Convert .bdrv_aio_discard() to byte-based
Another step towards byte-based interfaces everywhere.  Replace
the sector-based driver callback .bdrv_aio_discard() with a new
byte-based .bdrv_aio_pdiscard().  Only raw-posix and RBD drivers
are affected, so it was not worth splitting into multiple patches.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-9-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
7bbca9e290 rbd: Switch rbd_start_aio() to byte-based
The internal function converts to byte-based before calling into
RBD code; hoist the conversion to the callers so that callers
can then be switched to byte-based themselves.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-8-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
36e3b2e733 raw-posix: Switch paio_submit() to byte-based
The only remaining uses of paio_submit() were flush (with no
offset or count) and discard (which we are switching to byte-based);
furthermore, the similarly named paio_submit_co() is already
byte-based.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-7-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
1c6c4bb7f0 block: Convert BB interface to byte-based discards
Change sector-based blk_discard(), blk_co_discard(), and
blk_aio_discard() to instead be byte-based blk_pdiscard(),
blk_co_pdiscard(), and blk_aio_pdiscard().  NBD gets a lot
simpler now that ignoring the unaligned portion of a
byte-based discard request is handled under the hood by
the block layer.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-6-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
60ebac16bc block: Convert bdrv_aio_discard() to byte-based
Another step towards byte-based interfaces everywhere.  Replace
the sector-based bdrv_aio_discard() with a new byte-based
bdrv_aio_pdiscard(), which silently ignores any unaligned head
or tail.  Driver callbacks will be converted in followup patches.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 1468624988-423-5-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
b15404e027 block: Switch BlockRequest to byte-based
BlockRequest is the internal struct used by bdrv_aio_*.  At the
moment, all such calls were sector-based, but we will eventually
convert to byte-based; start by changing the internal variables
to be byte-based.  No change to behavior, although the read and
write code can now go byte-based through more of the stack.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 1468624988-423-4-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
0c51a893b6 block: Convert bdrv_discard() to byte-based
Another step towards byte-based interfaces everywhere.  Replace
the sector-based bdrv_discard() with a new byte-based
bdrv_pdiscard(), which silently ignores any unaligned head
or tail.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-3-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:55 +01:00
Eric Blake
9f1963b3f7 block: Convert bdrv_co_discard() to byte-based
Another step towards byte-based interfaces everywhere.  Replace
the sector-based bdrv_co_discard() with a new byte-based
bdrv_co_pdiscard(), which silently ignores any unaligned head
or tail.  Driver callbacks will be converted in followup patches.

By calculating the alignment outside of the loop, and clamping
the max discard to an aligned value, we can simplify the actions
done within the loop.
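
In outline the loop then looks something like this simplified
standalone sketch (not the actual block/io.c code):

    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Simplified sketch of byte-based discard fragmentation: drop the
     * unaligned head and tail, clamp each chunk to an aligned maximum,
     * and loop over the rest. */
    static void pdiscard_sketch(int64_t offset, int64_t count,
                                int64_t align, int64_t max_pdiscard)
    {
        /* round the request inward to the discard alignment */
        int64_t head = (align - offset % align) % align;
        offset += head;
        count -= MIN(count, head);
        count -= count % align;

        /* clamp the per-iteration maximum to an aligned value once */
        max_pdiscard = max_pdiscard - max_pdiscard % align;

        while (count > 0) {
            int64_t num = MIN(count, max_pdiscard);
            printf("driver pdiscard: offset=%lld bytes=%lld\n",
                   (long long)offset, (long long)num);
            offset += num;
            count -= num;
        }
    }

    int main(void)
    {
        /* e.g. 4K discard granularity, 1M max per request */
        pdiscard_sketch(1536, 3 * 1024 * 1024, 4096, 1024 * 1024);
        return 0;
    }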

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468624988-423-2-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Eric Blake
6bd01f14db iscsi: Rely on block layer to break up large requests
Now that the block layer honors max_request, we don't need to
bother with an EINVAL on overlarge requests, but can instead
assert that requests are well-behaved.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468607524-19021-7-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Eric Blake
1e2a77a851 nbd: Drop unused offset parameter
Now that NBD relies on the block layer to fragment things, we no
longer need to track an offset argument for which fragment of
a request we are actually servicing.

While at it, use true and false instead of 0 and 1 for a bool
parameter.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468607524-19021-6-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Eric Blake
fb1a6de14a nbd: Rely on block layer to break up large requests
Now that the block layer will honor max_transfer, we can simplify
our code to rely on that guarantee.

The readv code can call directly into nbd-client, just as the
writev code has done since commit 52a4650.

Interestingly enough, while qemu-io 'w 0 40m' splits into a 32M
and 8M transaction, 'w -z 0 40m' splits into two 16M and an 8M,
because the block layer caps the bounce buffer for writing zeroes
at 16M.  When we later introduce support for NBD_CMD_WRITE_ZEROES,
we can get a full 32M zero write (or larger, if the client and
server negotiate that write zeroes can use a larger size than
ordinary writes).

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1468607524-19021-5-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Eric Blake
04ed95f484 block: Fragment writes to max transfer length
Drivers should be able to rely on the block layer honoring the
max transfer length, rather than needing to return -EINVAL
(iscsi) or manually fragment things (nbd).  We already fragment
write zeroes at the block layer; this patch adds the fragmentation
for normal writes, after requests have been aligned (fragmenting
before alignment would lead to multiple unaligned requests, rather
than just the head and tail).

When fragmenting a large request where FUA was requested, but
where we know that FUA is implemented by flushing all requests
rather than the given request, then we can still get by with
only one flush.  Note, however, that we need a followup patch
to the raw format driver to avoid a regression in the number of
flushes actually issued.
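
A condensed standalone sketch of the fragmentation and the
single-flush FUA handling described above (the driver hooks are
hypothetical):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Hypothetical driver hooks for the sketch */
    static int drv_pwritev(int64_t offset, int64_t bytes, bool fua)
    {
        printf("write offset=%lld bytes=%lld fua=%d\n",
               (long long)offset, (long long)bytes, fua);
        return 0;
    }

    static int drv_flush(void)
    {
        printf("flush\n");
        return 0;
    }

    /* Sketch: split an aligned write into max_transfer-sized pieces.
     * If the driver can't do per-request FUA, emulate it with one
     * flush at the end instead of one per fragment. */
    static int write_fragmented(int64_t offset, int64_t bytes,
                                int64_t max_transfer, bool want_fua,
                                bool driver_has_fua)
    {
        bool fua_per_req = want_fua && driver_has_fua;
        int ret = 0;

        while (bytes > 0 && ret == 0) {
            int64_t num = MIN(bytes, max_transfer);
            ret = drv_pwritev(offset, num, fua_per_req);
            offset += num;
            bytes -= num;
        }
        if (ret == 0 && want_fua && !driver_has_fua) {
            ret = drv_flush();
        }
        return ret;  /* always 0 on success, never a byte count */
    }

    int main(void)
    {
        /* 40M FUA write, 32M max transfer, driver without native FUA */
        write_fragmented(0, 40 << 20, 32 << 20, true, false);
        return 0;
    }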

The return value was previously nebulous on success (sometimes
zero, sometimes the length written); since we never have a short
write, and since fragmenting may store yet another positive
value in 'ret', change the function to always return 0 on success,
matching what we do in bdrv_aligned_preadv().

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 1468607524-19021-4-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Eric Blake
8a39b4d6e2 raw_bsd: Don't advertise flags not supported by protocol layer
The raw format layer supports all flags via passthrough - but
it only makes sense to pass through flags that the lower layer
actually supports.

The next patch gives stronger reasoning for why this is correct.
At the moment, the raw format layer ignores the max_transfer
limit of its protocol layer, and an attempt to do the qemu-io
'w -f 0 40m' to an NBD server that lacks FUA will pass the entire
40m request to the NBD driver, which then fragments the request
itself into a 32m write, 8m write, and flush.  But once the block
layer starts honoring limits and fragmenting packets, the raw
driver will hand the NBD driver two separate requests; if both
requests have BDRV_REQ_FUA set, then this would result in a 32m
write, flush, 8m write, and second flush.  By having the raw
layer no longer advertise FUA support when the protocol layer
lacks it, we are back to a single flush at the block layer for
the overall 40m request.
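
The fix amounts to intersecting the advertised flags with what the
protocol layer supports; a trivial standalone model of that rule (the
flags here are illustrative, not the real BDRV_REQ_* values):

    #include <stdio.h>

    /* Trivial model: a pass-through format driver only advertises the
     * request flags (here an illustrative bitmask) that its protocol
     * layer also supports, e.g. FUA. */
    enum { REQ_FUA = 1 << 0, REQ_MAY_UNMAP = 1 << 1 };

    static unsigned raw_supported_write_flags(unsigned protocol_flags)
    {
        return REQ_FUA & protocol_flags;
    }

    int main(void)
    {
        printf("over NBD without FUA:     %#x\n", raw_supported_write_flags(0));
        printf("over a FUA-capable layer: %#x\n",
               raw_supported_write_flags(REQ_FUA | REQ_MAY_UNMAP));
        return 0;
    }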

Note that 'w -f -z 0 40m' does not currently exhibit the same
problem, because there, the fragmentation does not occur until
at the NBD layer (the raw layer has .bdrv_co_pwrite_zeroes, and
the NBD layer doesn't advertise max_pwrite_zeroes to constrain
things at the raw layer) - but the problem is latent and we
would again have too many flushes without this patch once the
NBD layer implements support for the new NBD_CMD_WRITE_ZEROES
command, if it sets max_pwrite_zeroes to the same 32m limit as
recommended by the NBD protocol.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1468607524-19021-3-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Eric Blake
1a62d0accd block: Fragment reads to max transfer length
Drivers should be able to rely on the block layer honoring the
max transfer length, rather than needing to return -EINVAL
(iscsi) or manually fragment things (nbd).  This patch adds
the fragmentation in the block layer, after requests have been
aligned (fragmenting before alignment would lead to multiple
unaligned requests, rather than just the head and tail).

The return value was previously nebulous on success as to whether
it was zero or the length read, and fragmenting may introduce
yet other non-zero values if we use the last length read.  But
as at least some callers are sloppy and expect only zero on
success, it is easiest to just guarantee 0.

[Fix uninitialized ret local variable in bdrv_aligned_preadv().
--Stefan]

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-id: 1468607524-19021-2-git-send-email-eblake@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-07-20 14:11:54 +01:00
Peter Maydell
338404d061 target-arm queue:
* fix two minor Coverity complaints

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20160719' into staging

target-arm queue:
 * fix two minor Coverity complaints

# gpg: Signature made Tue 19 Jul 2016 18:02:34 BST
# gpg:                using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20160719:
  arm_gicv3: Add assert()s to tell Coverity that offsets are aligned
  target-arm: Fix unreachable code in gicv3_class_name()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-07-20 12:48:18 +01:00
Gerd Hoffmann
87ae924b73 usbredir: add streams property
Enabled by default; can be used to turn off (usb3) streams support.
xhci has such a property too (same name, same default).

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 1468408474-17648-1-git-send-email-kraxel@redhat.com
2016-07-20 13:31:20 +02:00