The SLOF boot process is always quite slow ... but we can speed it up
a little bit by specifying "-nodefaults" and by using the "nvramrc"
variable instead of "boot-command" (since "nvramrc" is evaluated earlier
in the SLOF boot process than "boot-command").
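For illustration, the combination looks roughly like this (a sketch only; the
Forth snippet and machine options are placeholders, not the actual test
command line):

    $ qemu-system-ppc64 -M pseries -nodefaults -serial stdio \
        -prom-env 'use-nvramrc?=true' \
        -prom-env 'nvramrc=." hello from nvramrc" cr'

Setting "use-nvramrc?" makes SLOF evaluate the "nvramrc" script early during
boot, well before it would get around to "boot-command".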
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1537204330-16076-1-git-send-email-thuth@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There are a couple of error paths in qemu_loadvm_state
which happen early on, but after we have initialised the
load state; the load state needs to be cleaned up in those cases,
otherwise we can hit asserts if it gets reinitialised later.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180914170430.54271-3-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Clear have_listen_thread when we exit the thread.
The fallout from not doing so was that various things thought there was
an ongoing postcopy after the postcopy had finished.
The failing case was postcopy->savevm->loadvm.
This corresponds to RH bug https://bugzilla.redhat.com/show_bug.cgi?id=1608765
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180914170430.54271-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The TCG backend uses LOWREGMASK to get the low 3 bits of register numbers.
This was defined as a no-op for 32-bit x86, with the assumption that we
only have eight registers anyway. This assumption is no longer true once
we have xmm regs. Since LOWREGMASK was a no-op, xmm register indices were
wrong in opcodes and overflowed into other opcode fields, wreaking havoc.
To trigger these problems, you can try running the "movi d8, #0x0" AArch64
instruction on 32-bit x86. "vpxor %xmm0, %xmm0, %xmm0" should be generated,
but instead TCG generated "vpxor %xmm0, %xmm0, %xmm2".
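For reference, the macro in question boils down to something like this (a
sketch of the before/after shape, not a verbatim quote of
tcg/i386/tcg-target.inc.c):

    /* before: only the 64-bit build masked the register number */
    #if TCG_TARGET_REG_BITS == 64
    # define LOWREGMASK(x)  ((x) & 7)
    #else
    # define LOWREGMASK(x)  (x)   /* ok for 8 GPRs, wrong once xmm regs exist */
    #endif

    /* after: always keep only the low 3 bits */
    #define LOWREGMASK(x)   ((x) & 7)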
Fixes: 770c2fc7bb ("Add vector operations")
Signed-off-by: Roman Kapl <rka@sysgo.com>
Message-Id: <20180824131734.18557-1-rka@sysgo.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Precomputing the hash values allows us to perform more frequent
accesses to the hash table, thereby reaching higher throughputs.
We keep the old behaviour by default, since (1) we might confuse
users if they measured a speedup without changing anything in
the QHT implementation, and (2) benchmarking the hash function
"on line" is also valuable.
Before:
$ taskset -c 0 tests/qht-bench -n 1
Throughput: 38.18 MT/s
After:
$ taskset -c 0 tests/qht-bench -n 1
Throughput: 38.16 MT/s
After (with precomputing):
$ taskset -c 0 tests/qht-bench -n 1 -p
Throughput: 50.87 MT/s
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These functions do not modify their @ht or @bucket arguments.
Constify those arguments.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
seqlock_read_begin takes a const param since c04649eeea
("seqlock: constify seqlock_read_begin", 2018-08-23), so
we can constify the entire lookup.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Accessing the HT from an iterator almost always results in a
deadlock. Given that only one qht-internal function
uses this argument, drop it from the interface.
Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
First perform the tests that exercise code paths that are
easier to hit at small table sizes, and then resize the table
to speed up subsequent tests. If this resize is not too large,
we can make the test faster without losing code coverage.
- With gcov enabled:
Before: 20.568s, 90.28% qht.c coverage
After: 5.168s, 93.06% qht.c coverage
The coverage increase is entirely due to calling qht_resize,
which we weren't calling before. Note that the code paths
that remain to be tested are either error handling or
can only occur when several threads are accessing the
hash table concurrently (e.g. seqlock retry, trylock fail).
- Without gcov:
Before: 1.987s
After: 0.528s
The speedup is almost the same as with gcov, although the
"before" run is a lot faster.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This improves coverage by one (!) LoC in qht.c, bringing the
coverage rate up from 90.00% to 90.28%.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This improves qht.c code coverage from 89.44% to 90.00%.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This currently has no users, but the use case is so common that I
think we must support it.
Note that without this addition we cannot safely remove a set of
elements; a 2-step approach (i.e. qht_iter first, keeping track of
the to-be-deleted elements, and then a bunch of qht_remove calls)
would be racy, since between the iteration and the removals other
threads might insert additional elements.
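A usage sketch of the new call (the callback shape is my reading of this
series, so treat the signature as an approximation; is_stale() and
criteria are hypothetical names):

    /* Return true from the callback to ask QHT to remove the entry. */
    static bool remove_if_stale(void *p, uint32_t hash, void *userp)
    {
        return is_stale(p, userp);       /* hypothetical predicate */
    }

    /* One pass does iteration and removal under the bucket locks: */
    qht_iter_remove(&ht, remove_if_stale, &criteria);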
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The bitmap_to_extents function is broken: it switches the dirty variable
after every iteration, although it can process only part of a dirty (or
zero) area during one iteration when that area is too large for one
extent.
Fortunately, the bug doesn't produce wrong extent flags: it just inserts
a zero-length extent between sequential extents representing a large dirty
(or zero) area. However, zero-length extents are forbidden by the NBD
protocol. So, a careful client should consider such a reply a server
fault, while a less careful one will likely ignore the zero-length extents.
The bug can only be triggered by a client that requests block status
for nearly 4G at once (a request of 4G and larger is impossible per
the protocol, and requests smaller than 4G less the bitmap granularity
cause the loop to quit iterating rather than revisit the tail of the
large area); it also cannot trigger if the client used the
NBD_CMD_FLAG_REQ_ONE flag. Since qemu 3.0 as client (using the
x-dirty-bitmap extension) always passes the flag, it is immune; and
we are not aware of other open-source clients that know how to request
qemu:dirty-bitmap:FOO contexts. Clients that want to avoid the bug
could cap block status requests to a smaller length, such as 2G or 3G.
Fix this by handling the dirty variable more carefully.
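In other words, only flip "dirty" once the current run has been fully
covered, along these lines (a sketch with hypothetical helpers, not the
actual nbd/server.c hunk):

    while (length > 0) {
        uint64_t run = MIN(next_run_length(offset, dirty), length); /* hypothetical */
        uint64_t chunk = MIN(run, max_extent_len);  /* one extent has a 32-bit length */

        add_extent(chunk, dirty);                   /* hypothetical */
        offset += chunk;
        length -= chunk;
        if (chunk == run) {
            dirty = !dirty;   /* run fully consumed, so the state may switch */
        }
    }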
The bug was introduced in 3d068aff16
("nbd/server: implement dirty bitmap export"), along with the whole
function, and is present in the v3.0.0 release.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180914165116.23182-1-vsementsov@virtuozzo.com>
CC: qemu-stable@nongnu.org
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: improved commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
Remove the -sandbox option if the host is not capable of TSYNC, since the
sandbox would fail at setup time otherwise. This will help libvirt, for
example, to figure out whether -sandbox will work.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Acked-by: Eduardo Otubo <otubo@redhat.com>
This patch moves the migration-test related settings from the
migration-test.c file to a new header file.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Message-Id: <1536174934-26022-4-git-send-email-wei@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Recently a new configure option, CROSS_CC_GUEST, was added to
$(TARGET)-softmmu/config-target.mak to support TCG-related tests. This
patch tries to leverage this option to support cross compilation when the
migration boot block file is being re-generated:
* The x86 related files are moved to a new sub-dir (named ./i386).
* A new top-layer Makefile is created in tests/migration/ directory.
This Makefile searches and parses CROSS_CC_GUEST to generate CROSS_PREFIX.
The CROSS_PREFIX, if available, is then passed to migration/$ARCH/Makefile.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Message-Id: <1536174934-26022-3-git-send-email-wei@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The x86 boot block header is currently generated with a shell script.
To better support other CPUs (e.g. aarch64), we convert the script
into a Makefile. This allows us to 1) support cross-compilation easily,
and 2) avoid creating a script file for every architecture.
Note that, in the new design, the cross compiler prefix can be specified
by setting CROSS_PREFIX on the "make" command line. To allow the gcc
pre-processor to include the C-style header file correctly, the
x86-a-b-bootblock.s file extension is also changed from .s to .S.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Message-Id: <1536174934-26022-2-git-send-email-wei@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This avoids touching the compression locks if xbzrle and compression
are both enabled.
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20180906070101.27280-4-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate at which compression requests found no free thread
compressed-size: number of bytes after compression
compression-rate: ratio of compressed size to original size
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180906070101.27280-3-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that all threads are free until the
migration feeds new requests to them. Reducing the number of calls to it
can therefore improve throughput and use CPU resources more effectively.
We do not need to flush all threads at the end of each iteration: the
data can be kept locally until the memory block changes or memory
migration starts over, in which case we will meet a dirtied page which
may still exist in a compression thread's ring.
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20180906070101.27280-2-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This patch adds a small hint for the failure case of the load snapshot
process. It may be useful to remind users that the VM configuration may
have changed between the save and the load.
(qemu) loadvm vm-20180903083641
Unknown savevm section or instance 'cpu_common' 4.
Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
Error -22 while loading VM state
(qemu) device_add host-spapr-cpu-core,core-id=4
(qemu) loadvm vm-20180903083641
(qemu) c
(qemu) info status
VM status: running
It also makes QEMU exit if a snapshot requested on the command line
(-loadvm) cannot be loaded before reaching the main loop.
$ qemu-system-ppc64 ... -loadvm vm-20180903083641
qemu-system-ppc64: Unknown savevm section or instance 'cpu_common' 4.
Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
qemu-system-ppc64: Error -22 while loading VM state
$
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.ibm.com>
Message-Id: <20180903162613.15877-1-joserz@linux.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
ram_find_and_save_block() can return a negative value if an error
happens; however, this is completely ignored in the current code.
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20180903092644.25812-5-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
As Peter pointed out:
| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
| per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
| it's per-host-page granularity
|
| An example is that when we migrate a 2M huge page in the guest, we
| will only increase the RAMState.iterations by 1 (since
| ram_find_and_save_block() will be called once), but we might increase
| xbzrle_counters.cache_miss for 2M/4K=512 times (we'll call
| save_xbzrle_page() that many times) if all the pages got cache miss.
| Then IMHO the cache miss rate will be 512/1=51200% (while it should
| actually be just 100% cache miss).
He also suggested that, as xbzrle_counters.cache_miss_rate is the only
user of rs->iterations, we can adapt it to count target guest pages.
After that, 'iterations' is renamed to 'target_page_count' to better
reflect its meaning.
Suggested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
Message-Id: <20180903092644.25812-3-xiaoguangrong@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Clang correctly errors out, complaining that rdma_return_path
is used uninitialised in the earlier error paths.
Initialise it to NULL so that the error path ignores it.
Fixes: 55cc1b5937
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20180830173657.22939-1-dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There is a rare case in which the size of the last compressed cluster
is larger than the cluster size, which causes the file not to be
aligned at the sector boundary.
There are three reasons to fix this. First, if the vmdk is not aligned
at the sector boundary, there may be many undefined behaviours; for
example, vbox shows "VMDK: Compressed image is corrupted
'syno-vm-disk1.vmdk' (VERR_ZIP_CORRUPTED)" when we try to import an
ova with such an unaligned vmdk. Second, all the other cluster_sectors
are aligned to the sector boundary, so the last one should be, too.
Third, it eases reading with sector-based I/Os.
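One way to get there is to round the final write up to the sector size,
roughly like this (a sketch only; buffer handling and variable names are
simplified and not taken from the actual block/vmdk.c hunk):

    /* pad the last compressed cluster so the image ends on a sector
     * boundary; the tail of the buffer is assumed to be zeroed */
    write_len = QEMU_ALIGN_UP(n_bytes, BDRV_SECTOR_SIZE);
    ret = bdrv_pwrite(extent->file, write_offset, write_buf, write_len);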
Signed-off-by: yuchenlin <yuchenlin@synology.com>
Message-Id: <20180913082952.3675-1-yuchenlin@synology.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
-cpu max works with any accelerator, so we don't need
to use it only conditionally if not using KVM. Just use
it all the time.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180820155554.23476-1-peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Commit 70232b5253 ("aio-posix: Don't count ctx->notifier as progress when
polling", 2018-08-15), by not reporting progress, causes aio_poll to execute the
system call when polling succeeds because of ctx->notifier. This introduces
latency before the call to aio_bh_poll() and negates the advantages of
polling, unfortunately.
The fix builds on the previous patch, separating the effect of polling on
the timeout from the progress reported to aio_poll(). ctx->notifier
does zero the timeout, causing the caller to skip the system call,
but it does not report progress, so that the bug fix of commit 70232b5253
still stands.
Fixes: 70232b5253
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20180912171040.1732-4-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
This is a preparation for the next patch, and also a very small
optimization. Compute the timeout only once, before invoking
try_poll_mode, and adjust it in run_poll_handlers. The adjustment
is the polling time when polling fails, or zero (non-blocking) if
polling succeeds.
Fixes: 70232b5253
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20180912171040.1732-3-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
It is valid for an aio_set_fd_handler call to happen concurrently with
aio_poll. In that case, poll_disable_cnt can change under aio_poll's
feet, and the assertion on poll_disable_cnt can fail in
run_poll_handlers.
Therefore, this patch simply checks the counter on every polling
iteration. There is no particular need for ordering, since the
polling loop is terminated anyway by aio_notify at the end of
aio_set_fd_handler.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20180912171040.1732-2-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
setrlimit guest calls that affect memory resources
(RLIMIT_{AS,DATA,STACK}) may interfere with QEMU's internal memory
management. They may result in a QEMU lockup because the mprotect call in
page_unprotect would fail with ENOMEM, causing an infinite loop
of SIGSEGV. E.g. this happens when running the libstdc++ testsuite for
the xtensa target on an x86_64 host.
Don't call host setrlimit for memory-related resources.
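In the linux-user syscall path that amounts to something like this (a
sketch of the intent; the surrounding TARGET_NR_setrlimit plumbing that
copies the guest rlimit is elided):

    int resource = target_to_host_resource(arg1);
    ...
    if (resource == RLIMIT_AS || resource == RLIMIT_DATA ||
        resource == RLIMIT_STACK) {
        /* Pretend success, but leave the host limits alone so QEMU's own
         * mmap()/mprotect() calls (e.g. in page_unprotect) keep working. */
        ret = 0;
    } else {
        ret = get_errno(setrlimit(resource, &rlim));
    }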
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20180917181314.22551-1-jcmvbkbc@gmail.com>
[lv: rebase on master]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Bring linux-user write(2) handling into line with Linux for the case
of a 0-byte write with a NULL buffer. Based on a patch originally
written by Zhuowei Zhang.
Addresses https://bugs.launchpad.net/qemu/+bug/1716292.
From Zhuowei Zhang's patch (https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg08073.html):
Linux returns success for the special case of calling write with a
zero-length NULL buffer: compiling and running
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        ssize_t ret = write(STDOUT_FILENO, NULL, 0);
        fprintf(stderr, "write returned %ld\n", ret);
        return 0;
    }
gives "write returned 0" when run directly, but "write returned
-1" in QEMU.
This commit checks for this situation and returns success if
found.
Subsequent discussion raised the following questions (and my answers):
- Q. Should TARGET_NR_read pass through to safe_read in this
situation too?
A. I'm wary of changing unrelated code to the specific problem I'm
addressing. TARGET_NR_read is already consistent with Linux for
this case.
- Q. Do pread64/pwrite64 need to be changed similarly?
A. Experiment suggests not: both linux and linux-user yield -1 for
NULL 0-length reads/writes.
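The resulting check itself is tiny; in the TARGET_NR_write handler it
amounts to something like this (a sketch, not the exact hunk):

    case TARGET_NR_write:
        if (arg2 == 0 && arg3 == 0) {
            /* Linux allows write(fd, NULL, 0) and returns 0, so don't let
             * the lock_user() check below turn this into -EFAULT. */
            return get_errno(safe_write(arg1, 0, 0));
        }
        if (!(p = lock_user(VERIFY_READ, arg2, arg3, 1))) {
            return -TARGET_EFAULT;
        }
        ...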
Signed-off-by: Tony Garnock-Jones <tonygarnockjones@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180908182205.GB409@mornington.dcs.gla.ac.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
If the hostpage size is greater than the TARGET_PAGESIZE, the
target pages of size TARGET_PAGESIZE are marked valid only up to the
length requested during the elfload. glibc attempts to consume the
unused space in the last page of the data segment (__libc_memalign()
in elf/dl-minimal.c). If PT_LOAD p_align is greater than or equal to
the hostpage size, GLRO(dl_pagesize) is actually the host pagesize, as
set in the auxiliary vectors. So there is no explicit mmap request for
the remaining target pages on the last hostpage. glibc assumes that
particular space is available, and subsequent attempts to use those
addresses lead to a crash, as target_mmap has not marked them valid
for those target pages.
The issue is seen when trying to chroot into a 16.04 x86_64 Ubuntu on
a PPC64 host, where fork fails to access the thread_id because it is
allocated on a page not marked valid. Recent glibc doesn't check the
thread-id in fork, but the issue can manifest somewhere else
nonetheless.
The fix here is to map all the target pages of the hostpage during
elfload if p_align is greater than or equal to the hostpage size, for
the data segment, so that glibc can consume that space properly.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <153553435604.51992.5640085189104207249.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Original implementation for setsockopt by Chen Gang [1]; all bugs mine,
including removing the assignment to optname (which hopefully makes the
logic easier to follow) and moving some variables to make the code
more self-contained.
[1] http://patchwork.ozlabs.org/patch/565659/
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Co-Authored-By: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180824085601.6259-1-carenas@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This will make it easier to move syscall functions out of syscall.c.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180823222215.13781-1-laurent@vivier.eu>
Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2018-09-25' into staging
- Deprecate the usage of a network backend via "name" instead of "id"
- Deprecate the "enforce-config-section" machine parameter
- Re-enable the wdt_ib700, endianness and vmxnet3 qtests
- Some trivial fixes and doc update patches that crossed my way
# gpg: Signature made Tue 25 Sep 2018 16:58:42 BST
# gpg: using RSA key 2ED9D774FE702DB5
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>"
# gpg: aka "Thomas Huth <thuth@redhat.com>"
# gpg: aka "Thomas Huth <huth@tuxfamily.org>"
# gpg: aka "Thomas Huth <th.huth@posteo.de>"
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/huth-gitlab/tags/pull-request-2018-09-25:
Revert "check: Move VMXNET3 test to common"
Revert "check: Move endianess test to common"
Revert "check: Move wdt_ib700 test to common"
tests/migration: Speed up the test on ppc64
hw/qdev-core: Fix description of instance_init
qdev: fix a typo in comment
docs: Fix some typos (most found by codespell)
trivial: Make bios files and source files non-executable
memfd: fix possible usage of the uninitialized file descriptor
hw/core/machine: Officially deprecate the enforce-config-section parameter
net/slirp: Deprecate the [hub_id name] parameter tuple
net: Deprecate the "name" parameter of -net
Makefile: Add missing dependency for qemu-deprecated.texi
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/xanclic/tags/pull-block-2018-09-25' into staging
Block layer patches:
- Drain fixes
- node-name parameters for block-commit
- Refactor block jobs to use transactional callbacks for exiting
# gpg: Signature made Tue 25 Sep 2018 16:12:44 BST
# gpg: using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* remotes/xanclic/tags/pull-block-2018-09-25: (42 commits)
test-bdrv-drain: Test draining job source child and parent
block: Use a single global AioWait
test-bdrv-drain: Fix outdated comments
test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort
job: Avoid deadlocks in job_completed_txn_abort()
test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()
block: Remove aio_poll() in bdrv_drain_poll variants
blockjob: Lie better in child_job_drained_poll()
block-backend: Decrease in_flight only after callback
block-backend: Fix potential double blk_delete()
block-backend: Add .drained_poll callback
block: Add missing locking in bdrv_co_drain_bh_cb()
test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback
job: Use AIO_WAIT_WHILE() in job_finish_sync()
test-blockjob: Acquire AioContext around job_cancel_sync()
test-bdrv-drain: Drain with block jobs in an I/O thread
aio-wait: Increase num_waiters even in home thread
blockjob: Wake up BDS when job becomes idle
job: Fix missing locking due to mismerge
job: Fix nested aio_poll() hanging in job_txn_apply
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 7a066770f5.
The patch did not work as expected: The vmxnet3 test is currently
not run at all anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
This reverts commit 669cc71000.
The patch did not work as expected: The endianness test is currently
not run at all anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
This reverts commit ee1f6c812b.
The patch did not work as expected: The wdt_ib700 test is currently
not run at all anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
The SLOF boot process is always quite slow ... but we can speed it up
a little bit by specifying "-nodefaults" and by using the "nvramrc"
variable instead of "boot-command" (since "nvramrc" is evaluated earlier
in the SLOF boot process than "boot-command").
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The part of the DeviceClass documentation that talks about instance_init
is partly wrong: instance_init() functions must not abort or exit, since
they are also called during mere introspection of the device.
So if a device calls exit() during its instance_init() function, QEMU
terminates unexpectedly when somebody just wants to have a look at the
device's interfaces with "device_add xyz,help" or with the
"device-list-properties" QOM command. This should never happen.
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
These files can not be executed on the host, so they should not be
marked as executable.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The qemu_memfd_alloc_check() routine allocates the fd variable on the stack.
This variable is initialized inside the qemu_memfd_alloc() function.
There are several cases in which *fd will be left uninitialized, which can
lead to an unexpected close() in the qemu_memfd_free() call.
Set the file descriptor to -1 before calling the qemu_memfd_alloc routine.
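A sketch of the calling pattern this implies (the signatures are
paraphrased from util/memfd.c as I understand them, so treat this as an
approximation):

    int fd = -1;   /* defined value even if qemu_memfd_alloc() bails out early */
    void *ptr = qemu_memfd_alloc("memfd-check", 4096, 0, &fd, NULL);

    /* ... */
    qemu_memfd_free(ptr, 4096, fd);   /* no longer risks closing a stray fd */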
Signed-off-by: Dima Stepanov <dimastep@yandex-team.ru>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Commit 16f7244842 added this parameter
to the documentation, including a note that it is deprecated. But it
has never been added to the "Deprecated features" appendix, which is
our official way to deprecate legacy parameters. So let's do this now.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>