Commit Graph

Peter Wu
f6e6652d7c block/dmg: validate chunk size to avoid overflow
Previously the chunk size was not checked, allowing for a large memory
allocation. This patch checks whether the chunk size is within the
resource fork length, and whether the resource fork is below the
trailer of the dmg file.
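
A minimal sketch of the kind of check this implies (variable names are illustrative, not the actual dmg.c code):

    /* Illustrative only: a chunk must fit inside the resource fork, and
     * the resource fork itself must end before the UDIF trailer. */
    if (chunk_size > rsrc_fork_length ||
        rsrc_fork_offset + rsrc_fork_length > trailer_offset) {
        return -EINVAL;
    }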

Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1420566495-13284-6-git-send-email-peter@lekensteyn.nl
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Wu
7aee37b93a block/dmg: process a buffer instead of reading ints
As the decoded plist XML is not a pointer in the file,
dmg_read_mish_block must be able to process a buffer instead of a file
pointer. Since the full buffer must be processed, let's change the
return value again to just a success flag.
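
A rough sketch of what such a buffer-based prototype could look like (hypothetical; the exact parameter list is not spelled out here):

    /* Hypothetical prototype: parse a mish block from an in-memory buffer
     * of 'count' bytes and report plain success or failure. */
    static int dmg_read_mish_block(BDRVDMGState *s, DmgHeaderState *ds,
                                   uint8_t *buffer, uint32_t count);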

Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1420566495-13284-5-git-send-email-peter@lekensteyn.nl
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Wu
b0e8dc5d54 block/dmg: extract processing of resource forks
Besides the offset, also read the resource length. This length is now
used in the extracted function to verify the end of the resource fork
against "count" from the resource fork.

Instead of relying on the value of offset to conclude whether the
resource fork is available or not (info_begin==0), check the
rsrc_fork_length instead. This would allow a dmg file to begin with a
resource fork. This seemingly unnecessary restriction was found while
trying to craft a DMG file by hand.

Other changes:

 - Do not require resource data offset to be 0x100 (but check that it
   is within bounds though).
 - Further improve boundary checking (resource data must be within
   the resource fork).
 - Use correct value for resource data length (spotted by John Snow)
 - Consider the resource data offset when determining info_end.
   This fixes an EINVAL on the tuxpaint dmg example.

The resource fork format is documented at
https://developer.apple.com/legacy/library/documentation/mac/pdf/MoreMacintoshToolbox.pdf#page=151

Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1420566495-13284-4-git-send-email-peter@lekensteyn.nl
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Wu
65a1c7c96a block/dmg: extract mish block decoding functionality
Extract the mish block decoder such that this can be used for other
formats in the future. A new DmgHeaderState struct is introduced to
share state while decoding.

The code is kept unchanged as much as possible; for example, a "fail" label
is added where a simple return would probably do. In dmg_open, the
variable "tmp" is renamed to "rsrc_data_offset" for clarity and comments
have been added explaining various data.

Note that this patch has one subtle difference from the previous
version, which should not affect functionality. In the previous code,
the end of a resource was inferred from the mish block (the offsets
would be increased by the fields). In this patch, the resource length
is used instead to avoid the need to rely on the previous offsets.

Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1420566495-13284-3-git-send-email-peter@lekensteyn.nl
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Wu
fa8354bd22 block/dmg: properly detect the UDIF trailer
DMG files have a variable length with a UDIF trailer at the end of a
file. This UDIF trailer is essential as it describes the contents of
the image. At the moment however, the start of this trailer is almost
always incorrect as bdrv_getlength() returns a multiple of the block
size (rounded up). This results in a failure to recognize DMG files,
resulting in Invalid argument (EINVAL) errors.

As there is no API to retrieve the real file size, look for the magic
header in the last two sectors to find the start of this 512-byte UDIF
trailer (the "koly" block).
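
A simplified sketch of such a search (error handling omitted; a sketch of the approach, not the exact dmg.c code):

    /* Simplified sketch: bdrv_getlength() may round the size up to the
     * block size, so probe the last two 512-byte sectors for the "koly"
     * magic that starts the UDIF trailer. */
    int64_t length = bdrv_getlength(bs->file);
    int64_t offset = -1;
    uint8_t sector[512];

    for (int64_t probe = length - 512;
         probe >= 0 && probe >= length - 1024;
         probe -= 512) {
        bdrv_pread(bs->file, probe, sector, sizeof(sector));
        if (memcmp(sector, "koly", 4) == 0) {
            offset = probe;          /* start of the 512-byte UDIF trailer */
            break;
        }
    }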

The resource fork offset ("info_begin") is adjusted, as the initial
value of offset no longer means "end of file" but "beginning of the
UDIF trailer".

[Replaced error_set(errp, ERROR_CLASS_GENERIC_ERROR, ...) with
error_setg(errp, ...) as discussed with Peter.
--Stefan]

Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1420566495-13284-2-git-send-email-peter@lekensteyn.nl
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Francesco Romani
e2462113b2 block: add event when disk usage exceeds threshold
Managing applications, like oVirt (http://www.ovirt.org), make extensive
use of thin-provisioned disk images.
To let the guest run smoothly and not be unnecessarily paused, oVirt sets
a disk usage threshold (a so-called 'high water mark') based on the occupation
of the device, and automatically extends the image once the threshold
is reached or exceeded.

In order to detect the crossing of the threshold, oVirt has no choice but
to aggressively poll the QEMU monitor using the query-blockstats command.
This leads to unnecessary system load, and it gets even worse at scale:
deployments with hundreds of VMs are no longer rare.

To fix this, this patch adds:
* A new monitor command `block-set-write-threshold', to set a mark for
  a given block device.
* A new event `BLOCK_WRITE_THRESHOLD', to report when a block device's
  usage exceeds the threshold.
* A new `write_threshold' field into the `BlockDeviceInfo' structure,
  to report the configured threshold.

This will allow the managing application to use smarter and more
efficient monitoring, greatly reducing the need for polling.
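
For illustration, a management application would use this roughly as follows (a sketch; the QMP argument and event field names shown here are assumptions, not spelled out in this commit message):

    -> { "execute": "block-set-write-threshold",
         "arguments": { "node-name": "drive0",
                        "write-threshold": 10737418240 } }
    <- { "return": {} }

    <- { "event": "BLOCK_WRITE_THRESHOLD",
         "data": { "node-name": "drive0",
                   "amount-exceeded": 65536,
                   "write-threshold": 10737418240 } }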

[Updated qemu-iotests 067 output to add the new 'write_threshold'
property. --Stefan]
[Changed g_assert_false() to g_assert(!...) to fix the build on older glib
versions. --Kevin]

Signed-off-by: Francesco Romani <fromani@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1421068273-692-1-git-send-email-fromani@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Max Reitz
6440d44cea iotests: Specify format for qemu-nbd
This patch is necessary to suppress the "probed raw" warning when
running raw over nbd tests.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Fam Zheng
79e7a01954 qemu-iotests: Fix supported_oses check
There is a bug in the recently added sys.platform test, and we no longer
run the Python tests, because "linux2" is the value sys.platform actually
reports here. So do a prefix match instead. According to the Python
documentation [1], the recommended way to use sys.platform, "unless you
want to test for a specific system version", is the following idiom:

if sys.platform.startswith('freebsd'):
    # FreeBSD-specific code here...
elif sys.platform.startswith('linux'):
    # Linux-specific code here...

[1]: https://docs.python.org/2.7/library/sys.html#sys.platform

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Lieven
c99495ac1b virtio-blk: add a knob to disable request merging
This adds a knob to disable request merging for debugging or benchmarking, if desired.

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Lieven
95f7142abc virtio-blk: introduce multiread
This patch finally introduces multiread support to virtio-blk. While
multiwrite support has been there for a long time, read support was missing.

The complete merge logic is moved into virtio-blk.c which has
been the only user of request merging ever since. This is required
to be able to merge chunks of requests and immediately invoke callbacks
for those requests. Secondly, this is required to switch to
direct invocation of coroutines which is planned at a later stage.

The following benchmarks show the performance of running fio with
4 worker threads on a local ram disk. The numbers are the average of
10 test runs after 1 warm-up run.

              |        4k        |       64k        |        4k
MB/s          | rd seq | rd rand | rd seq | rd rand | wr seq | wr rand
--------------+--------+---------+--------+---------+--------+--------
master        | 1221   | 1187    | 4178   | 4114    | 1745   | 1213
multiread     | 1829   | 1189    | 4639   | 4110    | 1894   | 1216

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Lieven
454057b7d9 block-backend: expose bs->bl.max_transfer_length
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Lieven
d901f3c457 hw/virtio-blk: add a constant for max number of merged requests
As it was not obvious (at least to me) where the 32 comes from,
add a constant for it.
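
A sketch of what this amounts to (the constant name here is a guess at a reasonable spelling, not necessarily the one the patch uses):

    /* Name the magic 32 instead of scattering the literal around. */
    #define VIRTIO_BLK_MAX_MERGE_REQS 32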

Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Lieven
f4564d53c6 block: add accounting for merged requests
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Fam Zheng
35f5a49374 qed: Really remove unused field QEDAIOCB.finished
Commit 533ffb17a, which removed qed_aiocb_info.cancel, said to remove
this field but didn't do it.

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Don Slutz
61979a6adf qemu-img: Add QEMU_PKGVERSION to QEMU_IMG_VERSION
This is the same way vl.c handles this.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Peter Lieven
98764152ad block: change default for discard and write zeroes to INT_MAX
Do not trim requests if the driver does not supply a limit
through BlockLimits. For write zeroes we still keep a limit
for the unsupported path to avoid allocating a big bounce buffer.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Suggested-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:21 +01:00
Denis V. Lunev
1cdc3239f1 block: use fallocate(FALLOC_FL_PUNCH_HOLE) & fallocate(0) to write zeroes
This sequence writes zeroes efficiently when FALLOC_FL_ZERO_RANGE is not
supported. Unfortunately, FALLOC_FL_ZERO_RANGE is only supported on really
modern systems and only for a couple of filesystems. FALLOC_FL_PUNCH_HOLE
is much more mature.

The sequence of two operations, FALLOC_FL_PUNCH_HOLE followed by fallocate(0),
is necessary for the following reasons:
- FALLOC_FL_PUNCH_HOLE creates a hole in the file, so the file becomes
  sparse. In order to retain the original functionality we must allocate
  disk space afterwards. This is done using a fallocate(0) call.
- fallocate(0) without a preceding FALLOC_FL_PUNCH_HOLE will do nothing
  if called on already allocated areas of the file, i.e. the content
  will not be zeroed.

This should increase the performance a bit for not-so-modern kernels.
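
A minimal sketch of the two-step sequence described above (assuming the do_fallocate() helper introduced earlier in this series):

    /* Sketch: punch a hole so the range reads as zeroes, then allocate
     * it again so the file does not stay sparse. */
    ret = do_fallocate(s->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                       offset, len);
    if (ret == 0) {
        ret = do_fallocate(s->fd, 0, offset, len);
    }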

CC: Max Reitz <mreitz@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Peter Lieven <pl@kamp.de>
CC: Fam Zheng <famz@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Denis V. Lunev
d50d822219 block/raw-posix: call plain fallocate in handle_aiocb_write_zeroes
There is a possibility that we are extending our image and thus writing
zeroes beyond the end of the file. In this case we do not need to punch a
hole first to make sure that there is no data in the file at this offset
(the precondition for fallocate(0) to work). We can simply call
fallocate(0).

This improves the performance of writing zeroes even on really old
platforms which do not have even FALLOC_FL_PUNCH_HOLE.

Before this patch, do_fallocate was used only when either
CONFIG_FALLOCATE_PUNCH_HOLE or CONFIG_FALLOCATE_ZERO_RANGE was defined.
Now the story is different. CONFIG_FALLOCATE is defined when the Linux
fallocate is available; posix_fallocate is a completely different story
(CONFIG_POSIX_FALLOCATE). CONFIG_FALLOCATE is a mandatory prerequisite
for both CONFIG_FALLOCATE_PUNCH_HOLE and CONFIG_FALLOCATE_ZERO_RANGE,
thus we are on the safe side.

CC: Max Reitz <mreitz@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Peter Lieven <pl@kamp.de>
CC: Fam Zheng <famz@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Denis V. Lunev
b953f07500 block: use fallocate(FALLOC_FL_ZERO_RANGE) in handle_aiocb_write_zeroes
This efficiently writes zeroes on Linux if the kernel is capable enough.
FALLOC_FL_ZERO_RANGE correctly handles all cases, both with and without
file expansion.
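
As a sketch, the fast path boils down to a single call (again via the do_fallocate() helper from earlier in the series):

    /* Sketch: let the kernel zero the range directly when it supports it. */
    ret = do_fallocate(s->fd, FALLOC_FL_ZERO_RANGE, offset, len);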

CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Peter Lieven <pl@kamp.de>
CC: Fam Zheng <famz@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Denis V. Lunev
37cc9f7f68 block/raw-posix: refactor handle_aiocb_write_zeroes a bit
Move the code dealing with a block device to a separate function. This will
allow implementing additional processing for ordinary files.

Please note that the XFS code has been moved before the check for
s->has_write_zeroes, as xfs_write_zeroes does not touch this flag inside.
This makes the code a bit more consistent.

CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Peter Lieven <pl@kamp.de>
CC: Fam Zheng <famz@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Denis V. Lunev
0b99171230 block/raw-posix: create do_fallocate helper
The pattern
    do {
        if (fallocate(s->fd, mode, offset, len) == 0) {
            return 0;
        }
    } while (errno == EINTR);
    ret = translate_err(-errno);
will be commonly useful in the next patches. Create a helper for it.
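
A sketch of the helper this asks for, wrapping the quoted pattern (signature assumed):

    static int do_fallocate(int fd, int mode, off_t offset, off_t len)
    {
        do {
            if (fallocate(fd, mode, offset, len) == 0) {
                return 0;
            }
        } while (errno == EINTR);
        return translate_err(-errno);
    }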

CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Peter Lieven <pl@kamp.de>
CC: Fam Zheng <famz@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Denis V. Lunev
1486df0e31 block/raw-posix: create translate_err helper to merge errno values
Actually, the code
    if (ret == -ENODEV || ret == -ENOSYS || ret == -EOPNOTSUPP ||
        ret == -ENOTTY) {
        ret = -ENOTSUP;
    }
is present twice and will be added a couple more times. Create a helper
for this.
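
A sketch of the helper, directly wrapping the quoted condition:

    static int translate_err(int err)
    {
        if (err == -ENODEV || err == -ENOSYS || err == -EOPNOTSUPP ||
            err == -ENOTTY) {
            err = -ENOTSUP;
        }
        return err;
    }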

CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Peter Lieven <pl@kamp.de>
CC: Fam Zheng <famz@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Dr. David Alan Gilbert
a71754e5b0 atapi migration: Throw recoverable error to avoid recovery
(This is with the previous atapi_dma flag recovery in place.)
If migration happens between the ATAPI command being written and the
bmdma being started, the DMA is dropped.  Eventually the guest times
out and recovers, but that can take many seconds.
(This is rare; on a ping-pong test reading the CD continuously I hit
this in about 1 in 30-50 migrations.)

I don't think we've got enough state to be able to recover safely
at this point, so I throw a 'medium error, no seek complete' sense
error, which I'm assuming guests will treat as an apparently dirty
CD and try to recover from.

OK, it's a hack; the real solution is probably to push a lot of
ATAPI state into the migration stream, but this is a fix that
works with no stream changes. Tested only on Linux (both RHEL5
(pre-libata) and RHEL7).

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Dr. David Alan Gilbert
819fa27631 Restore atapi_dma flag across migration
If a migration happens just after the guest has kicked
off an ATAPI command and kicked off DMA, we lose the atapi_dma
flag, and the destination tries to complete the command as PIO
rather than DMA.  This upsets Linux; modern libata-based kernels
stumble but recover OK, while older kernels end up passing bad data
to userspace.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2015-02-06 17:24:20 +01:00
Peter Maydell
cebbae86b4 -----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
 
 iQEcBAABAgAGBQJU1MtgAAoJEJykq7OBq3PId6IH/2p7BZSEal1CqmxgmcAyRxrB
 IZ3RkDKyCF3ELBozvJ9RLHEakARVBNBSc4YSiQTFIcE6QYe8rRWXthbo6k6MiCnC
 5w3Yh1EdocKLNOU0jCl0yN0cqJyWp6ax//66K4iFn7Q1+LCRVs74JO7z9U7tEXuW
 cz3fRzb2OsP2tjUDTsnaIQNs7zewn1w9DgSnhtt9KS6rF9V9qDHeX4pjIcdEM45w
 S+YMUaLtTmyTJ55ldq7YCMjBU+3KxFQi8LuEPjCwBMLyLaF35Uy2N99NIHGa0696
 P8WAL67SV4YR9KpKIjL3w82Fjx22cpe1cUuxVTkEzCTFKHgq2yzHTdy0I02nhkc=
 =9OUs
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/stefanha/tags/net-pull-request' into staging

# gpg: Signature made Fri 06 Feb 2015 14:10:40 GMT using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>"

* remotes/stefanha/tags/net-pull-request:
  monitor: more accurate completion for host_net_remove()
  net: del hub port when peer is deleted
  net: remove the wrong comment in net_init_hubport()
  monitor: print hub port name during info network
  rtl8139: simplify timer logic
  MAINTAINERS: add Jason Wang as net subsystem maintainer

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-06 14:35:52 +00:00
Jason Wang
2c4681f512 monitor: more accurate completion for host_net_remove()
Current completion for host_net_remove will show hub ports and clients
that were not peered with hub ports. Fix this.

Cc: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-id: 1422860798-17495-4-git-send-email-jasowang@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-06 14:06:45 +00:00
Jason Wang
64a55d6066 net: del hub port when peer is deleted
We should delete the hub port when its peer is deleted, since it will not
be reused and would otherwise only be freed during exit.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-id: 1422860798-17495-3-git-send-email-jasowang@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-06 14:06:44 +00:00
Jason Wang
07636d5399 net: remove the wrong comment in net_init_hubport()
A NIC is not the only thing that can be the peer.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-id: 1422860798-17495-2-git-send-email-jasowang@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-06 14:06:44 +00:00
Jason Wang
a6efd6ae7b monitor: print hub port name during info network
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-id: 1422860798-17495-1-git-send-email-jasowang@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-06 14:06:44 +00:00
Paolo Bonzini
237c255c6c rtl8139: simplify timer logic
Pavel Dovgalyuk reports that TimerExpire and the timer are not restored
correctly on the receiving end of migration.

It is not clear to me whether this is really the case, but we can take
the occasion to get rid of the complicated code that computes PCSTimeout
on the fly upon changes to IntrStatus/IntrMask.  Just always keep a
timer running; it will fire at most every ~130 seconds if the interrupt
is masked with TimerInt != 0.

This makes rtl8139_set_next_tctr_time idempotent (when the virtual clock
is stopped between two calls, as is the case during migration).

Tested with Frediano's qtest.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1421765099-26190-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-06 14:04:36 +00:00
Peter Maydell
b93acb92ca -----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
 
 iQEcBAABAgAGBQJU1MViAAoJEJykq7OBq3PIOmcH/3YHS4xuNrPeyFDHS+fDcWQ0
 xrNoEIbjDIGWpJ5yRPKk/1yooW4E6PJiHHr3qFyKFUdYx+0uwqr8VK2nxWDPGijv
 BFY/tRW2TfmiEV66hR1OnSO9vtRtC/vOkxqFP2COlilY8rLpxFdYV0xCUYqczvOR
 ytSi+SgzToqPDu8laBzc7vfRX4KcKQx+a4+PqyTfJePkFXo4zM9hzMXKobMmcoLS
 Gtx9v280jhNKjwMPQBfGasSrDvf0t0Xpzg5rURpxkIxIS+H+xgrVIevENfd0JWtG
 7CN7GTb5Y0QjUppKNlQvzEYlmJko9DqpMsLdFaDyJJ4g1FGF26zoADryIykefVs=
 =Tdi7
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging

# gpg: Signature made Fri 06 Feb 2015 13:45:06 GMT using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>"

* remotes/stefanha/tags/tracing-pull-request:
  trace: Print PID and time in stderr traces

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-06 13:46:12 +00:00
Dr. David Alan Gilbert
dd9fe29c80 trace: Print PID and time in stderr traces
When debugging migration it's useful to know the PID of
each trace message so you can figure out if it came from the source
or the destination.

Printing the time makes it easy to do latency measurements or timings
between trace points.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1421746875-9962-1-git-send-email-dgilbert@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2015-02-06 10:27:22 +00:00
Peter Maydell
b3cd91e0ea migration/next for 20150205
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCAAGBQJU05eEAAoJEPSH7xhYctcjHqkQAMxI24tjZyboLq6JydQOE9A4
 ifhV9RUQsx/LGfnb3yR7NmdinFYEe2Py1imoybaMi4FcTfGTkAuuFXYW7Zj4QsyJ
 1u/1nFjlM/83GP6NPU06AaRnff/0W7PfLXCcTNTDScPFwf+sEB12krKG85QnxWci
 hvMZWpKsvnwGKCLc69igoJSneRoAsGiXsTlKeWYhGWonzOoaNmZiuoBV8Red8UMB
 didlNNU5+0YoLug1KLC4UcId3khrACJi0RDqaxgwZrcgPxn+4yIWaAUuISnchGhg
 AOR7rZcIDXU760Ru3zpn/LfyN8VgHLUYS6zbRnxOo840CUJLiivpj6G4zBfMXxVY
 IGJ7Rc81pXmODmD6I5/8ckTrPC6wTf69jXCoxMn5UfzEZc0JP1/r98tBGzLRu7mX
 o1I+dHLipKgmoUyl2BFSk7BC3B8K9DnKq4loD2Cxn3lGtkHZxWvVIMvfA0LLZsf+
 xBgRka800I34WsdW41E6socLRBCaDb4O7zuvSDBs3+IRqwqnIww96KUf8KfqB9j+
 ujducdHSTJB7vTlionqjGCqQtHUd5Ivcqrs4QACXJeeCw4oG/umbB+uvOYal27y6
 kuZ5ZSWgTUBeskOtgIxYxpglf8Rcw79ZzmzwhB5d1dXlq/oTcBQ3lErIQVTfW0w9
 xXzeEdtqVTSStBkH/gee
 =CgvN
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/juanquintela/tags/migration/20150205' into staging

migration/next for 20150205

# gpg: Signature made Thu 05 Feb 2015 16:17:08 GMT using RSA key ID 5872D723
# gpg: Can't check signature: public key not found

* remotes/juanquintela/tags/migration/20150205:
  fix mc146818rtc wrong subsection name to avoid vmstate_subsection_load() fail
  Tracify migration/rdma.c
  Add migration stream analyzation script
  migration: Append JSON description of migration stream
  qemu-file: Add fast ftell code path
  QJSON: Add JSON writer
  Print errors in some of the early migration failure cases.
  Migration: Add lots of trace events
  savevm: Convert fprintf to error_report
  vmstate-static-checker: update whitelist

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-05 17:11:50 +00:00
Peter Maydell
651621b780 coverity: Improve and extend model
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJU05hhAAoJEDhwtADrkYZT92IP/imvbG800C9LZIbpQy2MBoD6
 h9RFw/MRXo3enNN2QyJUA5nDoUISVT2Gs7Pz+bZ4u9Y+5H09kZjfb1TXiA3iLaFl
 AJwU2gQCNNo5sOcXBI+9OxjBgE4N+w5dHatKIXb/DTOZCjUaEpUFSknyQppOy7tP
 gBAlo8cKPxI7hDyqjhX7KLUTwKWwhDK4jKHN/7WlBqECV7mRwtL9cGOtYfSws8r0
 R3Rg51mKgKDMqELzjIhBlCH4Z/XAlBV3qYgP10HtVaowflmKkyzdHMkVbqoRbMln
 Ji59o8UuSPQQg72Mv+WKZ+Q3OGcWQn08zKn1w2uZcl52oEzo1v+IJanMCXkyoue1
 vTDaYWSUvsK8Mc+C3I5go1/Erj5WAz2eKxMTQvhYx/Aw9WuoprFD9S7cXTF8pGWT
 tn+ZfX8RnM13I2th4y4uNE0lq0wXmhJ93AsnNcSeaux5UhB9LfZb5gM5mJo+fIeP
 jWdB90RqqnnF7JayKkW95Rw6eTWu0TmRrkgao/HBHvauncbzrdO2rmvLHWceOavs
 duXsk3hM4mBxqvTdvHkQegbLtLAai7Cd8XZsFwdIkKOGuPNfsYmsTDPnXsT4OQyP
 5/Y9lKnozRSHMst9qmczXw0GTlvYFO6Ch3N78VvRzhTxUwIV88Ul7zmSnb/RQ1XJ
 fDBsfgq0/Jpe43cqowO/
 =ZNaN
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/armbru/tags/pull-cov-model-2015-02-05' into staging

coverity: Improve and extend model

# gpg: Signature made Thu 05 Feb 2015 16:20:49 GMT using RSA key ID EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>"

* remotes/armbru/tags/pull-cov-model-2015-02-05:
  MAINTAINERS: Add myself as Coverity model maintainer
  coverity: Model g_free() isn't necessarily free()
  coverity: Model GLib string allocation partially
  coverity: Improve model for GLib memory allocation

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-05 16:40:00 +00:00
Zhang Haoyu
bb42631190 fix mc146818rtc wrong subsection name to avoid vmstate_subsection_load() fail
Fix the wrong subsection name in mc146818rtc to avoid vmstate_subsection_load()
failing during incoming migration or loadvm.

Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com.cn>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Markus Armbruster
8c413e7902 MAINTAINERS: Add myself as Coverity model maintainer
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2015-02-05 17:16:14 +01:00
Dr. David Alan Gilbert
733252deb8 Tracify migration/rdma.c
Turn all the D/DD/DDDPRINTFs into trace events.
Turn most of the fprintf(stderr, ...) calls into error_report().

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Alexander Graf
b17425701d Add migration stream analyzation script
This patch adds a Python tool to the scripts directory that can read
a dumped migration stream if it contains the JSON description of the
device states. It constructs a human-readable JSON stream out of it.

It's very simple to use:

  $ qemu-system-x86_64
    (qemu) migrate "exec:cat > mig"
  $ ./scripts/analyze_migration.py -f mig

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Alexander Graf
8118f0950f migration: Append JSON description of migration stream
One of the annoyances of the current migration format is the fact that
it's not self-describing. In fact, it's not properly described at all.
Some code randomly scattered throughout QEMU elaborates roughly how to
read and write a stream of bytes.

We discussed an idea during KVM Forum 2013 to add a JSON description of
the migration protocol itself to the migration stream. This patch
adds a section after the VM_END migration end marker that contains
description data on what the device sections of the stream are composed of.

This approach is backwards compatible with any QEMU version reading the
stream, because QEMU just stops reading after the VM_END marker and ignores
any data following it.

With an additional external program this allows us to decipher the
contents of any migration stream and hopefully make migration bugs easier
to track down.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Alexander Graf
9722140011 qemu-file: Add fast ftell code path
For ftell we flush the output buffer to ensure that we don't have anything
lingering in our internal buffers. This is a very safe thing to do.

However, with the dynamic size measurement that the dynamic vmstate
description will bring, this would turn out quite slow.

Instead, we can fast path this specific measurement and just take the
internal buffers into account when telling the kernel our position.

I'm sure I overlooked some corner cases where this doesn't work, so
instead of tuning the safe, existing version, this patch adds a fast
variant of ftell that gets used by the dynamic vmstate description code
which isn't critical when it fails.
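
A rough sketch of the idea (the field names are illustrative; the real QEMUFile internals differ):

    /* Illustrative only: report the position the stream would reach once
     * the still-buffered bytes are written, without flushing them. */
    static int64_t qemu_ftell_fast(QEMUFile *f)
    {
        return f->pos + f->buf_index;
    }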

Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Alexander Graf
190c882ce2 QJSON: Add JSON writer
To support programmatic JSON assembly while keeping the code that generates it
readable, this patch introduces a simple JSON writer. It emits JSON serially
into a buffer in memory.

The nice thing about this writer is its simplicity and low memory overhead.
Unlike the QMP JSON writer, this one does not need to spawn QObjects for every
element it wants to represent.

This is a prerequisite for the migration stream format description generator.
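
A hypothetical usage sketch (the function names are illustrative of the style, not a documented API):

    /* Illustrative only: serially emit {"name": "ram", "version": 4}
     * into the writer's in-memory buffer. */
    QJSON *vmdesc = qjson_new();
    json_start_object(vmdesc, NULL);
    json_prop_str(vmdesc, "name", "ram");
    json_prop_int(vmdesc, "version", 4);
    json_end_object(vmdesc);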

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Dr. David Alan Gilbert
0457d07342 Print errors in some of the early migration failure cases.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Dr. David Alan Gilbert
a5df2a0222 Migration: Add lots of trace events
Mostly on the load side, so that when we get a complaint about
a migration failure we can figure out what it didn't like.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Dr. David Alan Gilbert
6a64b644ac savevm: Convert fprintf to error_report
Convert a bunch of fprintfs to error_reports

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Amit Shah
027f15696d vmstate-static-checker: update whitelist
Commit 22382bb96c renamed the
'hw_cursor_x' and 'hw_cursor_y' fields in cirrus_vga.  Update the static
checker's whitelist to allow matching against the old and new names.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-02-05 17:16:14 +01:00
Markus Armbruster
7ad4c72001 coverity: Model g_free() isn't necessarily free()
Memory allocated with GLib needs to be freed with GLib.  Freeing it
with free() instead of g_free() is a common error.  Harmless when
g_free() is a trivial wrapper around free(), which is commonly the
case.  But model the difference anyway.

In a local scan, this flags four ALLOC_FREE_MISMATCH.  Requires
--enable ALLOC_FREE_MISMATCH, because the checker is still preview.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-05 17:16:11 +01:00
Markus Armbruster
e4b77daa57 coverity: Model GLib string allocation partially
Without a model, Coverity can't know that the result of g_strdup()
needs to be fed to g_free().

One way to get such a model is to scan GLib, build a derived model
file with cov-collect-models, and use that when scanning QEMU.
Unfortunately, the Coverity Scan service we use doesn't support that.

Thus, we're stuck with the other way: write a user model.  Doing that
for all of GLib is hardly practical.  I'm doing it for the "String
Utility Functions" we actually use that return dynamically allocated
strings.

In a local scan, this flags 20 additional RESOURCE_LEAKs.  The ones I
checked look genuine.

It also loses a NULL_RETURNS about ppce500_init() using
qemu_find_file() without error checking.  I don't understand why.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-05 17:16:07 +01:00
Markus Armbruster
9d7a4c6690 coverity: Improve model for GLib memory allocation
In current versions of GLib, g_new() may expand into g_malloc_n().
When it does, Coverity can't see the memory allocation, because we
don't model g_malloc_n().  Similarly for g_new0(), g_renew(),
g_try_new(), g_try_new0(), g_try_renew().

Model g_malloc_n(), g_malloc0_n(), g_realloc_n().  Model
g_try_malloc_n(), g_try_malloc0_n(), g_try_realloc_n() by adding
indeterminate out of memory conditions on top.

To avoid undue duplication, replace the existing models for g_malloc()
& friends by trivial wrappers around g_malloc_n() & friends.

In a local scan, this flags four additional RESOURCE_LEAKs and one
NULL_RETURNS.

The NULL_RETURNS is a false positive: Coverity can now see that
g_try_malloc(l1_sz * sizeof(uint64_t)) in
qcow2_check_metadata_overlap() may return NULL, but is too stupid to
recognize that a loop executing l1_sz times won't be entered then.

Three out of the four RESOURCE_LEAKs appear genuine.  The false
positive is in ppce500_prep_device_tree(): the pointer dies, but a
pointer to a struct member escapes, and we get the pointer back for
freeing with container_of().  Too funky for Coverity.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
2015-02-05 17:05:12 +01:00
Peter Maydell
cd07b19307 target-arm queue:
  * refactor/clean up armv7m_init()
  * some initial cleanup in the direction of supporting 64-bit EL3
  * fix broken synchronization of registers between QEMU and KVM
    for 32-bit ARM hosts (which among other things broke memory
    access via gdbstub)
  * fix flush-to-zero handling in FMULX, FRECPS, FRSQRTS and FRECPE
  * don't crash QEMU for UNPREDICTABLE BFI insns in A32 encoding
  * explain why virt board's device-to-transport mapping code is
    the way it is
  * implement mmu_idx values which match the architectural
    distinctions, and introduce the concept of a translation
    regime to get_phys_addr() rather than incorrectly looking
    at the current CPU state
  * update to upstream VIXL 1.7 (gives us correct code addresses
    when disassembling pc-relative references)
  * sync system register state between KVM and QEMU for 64-bit ARM
  * support virtio on big-endian guests by implementing the
    "which endian is the guest now?" CPU method
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCAAGBQJU03foAAoJEDwlJe0UNgze6xsP/jJHiEE4EieGfzkd0rKBAWlP
 0iW0oI8VRdYgbwmgRfZJwdbcR/7qYATS6ffS1QINfb6zQRyNHF1J3Qv9sOnU/NC5
 k7hQedqLoG68RUe3QwA0LxrF3r6NVYIddDKMPkjWgbByDbcAtUdElB2UTpd6yLFF
 hrRfkQWUbWqUoe1yqSPUaffo8s88MXFHqArhCHOhN5LQ/KAr70iggAEity6irJIX
 z+dhXaIoi7V6R1rSX+uAt6YAfja3/7GYzG3zK+xy/wdLv3Ka7ametCkwZZP+cvcp
 Zfbo1cpkbKSoPcxmaPoT/FDVH5AGKyO00QKQI/1Nsjb4CcR49dKczqIvlFfK82XL
 M0lNmfDFIf5K4D6KYsXkCbSCETEPuTeDQFI14z/gFNevAUMmRp02HGK+6/Z/mn0W
 n17nWiLiKhpvKo7xoPrIhCaYuaFP7OzL4g0ZktGlKYEGBrNATzpAH2v8pAYn4S41
 aF9Yzo5PF4lVlNpCZQSmilX6VmXLAuC4WSEB8nUkRjjk+wsBxzYO7SqXB+gxvagW
 leahFyHExRMTbOFXsrRAoCGcdOCpNjAam3QYKQaIAhgYy89XmqSl8wKnV6PESOxt
 QSBB/frbmn1Uj4aPMM2xSG/5vVpM/TtaBWDIi6+nlokCE4PO37kSjOprNUT/INeJ
 QATeeqh5iPio7BQwgjH8
 =gmUH
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20150205' into staging

target-arm queue:
 * refactor/clean up armv7m_init()
 * some initial cleanup in the direction of supporting 64-bit EL3
 * fix broken synchronization of registers between QEMU and KVM
   for 32-bit ARM hosts (which among other things broke memory
   access via gdbstub)
 * fix flush-to-zero handling in FMULX, FRECPS, FRSQRTS and FRECPE
 * don't crash QEMU for UNPREDICTABLE BFI insns in A32 encoding
 * explain why virt board's device-to-transport mapping code is
   the way it is
 * implement mmu_idx values which match the architectural
   distinctions, and introduce the concept of a translation
   regime to get_phys_addr() rather than incorrectly looking
   at the current CPU state
 * update to upstream VIXL 1.7 (gives us correct code addresses
   when disassembling pc-relative references)
 * sync system register state between KVM and QEMU for 64-bit ARM
 * support virtio on big-endian guests by implementing the
   "which endian is the guest now?" CPU method

# gpg: Signature made Thu 05 Feb 2015 14:02:16 GMT using RSA key ID 14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"

* remotes/pmaydell/tags/pull-target-arm-20150205: (28 commits)
  target-arm: fix for exponent comparison in recpe_f64
  target-arm: Guest cpu endianness determination for virtio KVM ARM/ARM64
  target-arm: KVM64: Get and Sync up guest register state like kvm32.
  disas/arm-a64.cc: Tell libvixl correct code addresses
  disas/libvixl: Update to upstream VIXL 1.7
  target-arm: Fix brace style in reindented code
  target-arm: Reindent ancient page-table-walk code
  target-arm: Use mmu_idx in get_phys_addr()
  target-arm: Pass mmu_idx to get_phys_addr()
  target-arm: Split AArch64 cases out of ats_write()
  target-arm: Don't define any MMU_MODE*_SUFFIXes
  target-arm: Use correct mmu_idx for unprivileged loads and stores
  target-arm: Define correct mmu_idx values and pass them in TB flags
  target-arm/translate-a64: Fix wrong mmu_idx usage for LDT/STT
  target-arm: Make arm_current_el() return sensible values for M profile
  cpu_ldst.h: Allow NB_MMU_MODES to be 7
  hw/arm/virt: explain device-to-transport mapping in create_virtio_devices()
  target-arm: check that LSB <= MSB in BFI instruction
  target-arm: Squash input denormals in FRECPS and FRSQRTS
  Fix FMULX not squashing denormalized inputs when FZ is set.
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-05 14:22:51 +00:00
Ildar Isaev
fc1792e9aa target-arm: fix for exponent comparison in recpe_f64
The f64 exponent in HELPER(recpe_f64) should be compared to 2045 rather than
1023 (see FPRecipEstimate in the ARMv8 spec). This fixes incorrect underflow
handling when flushing denormals to zero in the FRECPE instructions operating
on 64-bit values.

Signed-off-by: Ildar Isaev <ild@inbox.ru>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-02-05 13:37:25 +00:00