qemu-ga's socket activation support was not obeying the LISTEN_PID
environment variable, which prevents a process from using a
socket-activation file descriptor that was meant for its parent.
Mess can ensue, for example, if a process forks a child before consuming
the socket-activation file descriptor, and therefore before setting
O_CLOEXEC on it.
Luckily, qemu-nbd also got socket activation code, and its copy does
support LISTEN_PID. Some extra fixups are needed so that the
code can be used for both, and that is what this patch does. The
main change is to replace get_listen_fds's "consume" argument with
the FIRST_SOCKET_ACTIVATION_FD macro from the qemu-nbd code.
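For reference, a minimal sketch of the LISTEN_PID/LISTEN_FDS handshake
described above (illustrative only; the helper name and structure are
hypothetical, not the exact code touched by this patch):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define FIRST_SOCKET_ACTIVATION_FD 3    /* defined by the protocol */

    static unsigned int get_socket_activation_fds(void)
    {
        const char *s;
        unsigned long nfds, i;

        s = getenv("LISTEN_PID");
        if (!s || strtoul(s, NULL, 10) != (unsigned long)getpid()) {
            /* the fds were meant for another process, e.g. our parent */
            return 0;
        }
        s = getenv("LISTEN_FDS");
        if (!s) {
            return 0;
        }
        nfds = strtoul(s, NULL, 10);
        for (i = 0; i < nfds; i++) {
            /* consume the fds so that children do not inherit them */
            fcntl(FIRST_SOCKET_ACTIVATION_FD + i, F_SETFD, FD_CLOEXEC);
        }
        return (unsigned int)nfds;
    }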
Cc: "Richard W.M. Jones" <rjones@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
keyval_parse() parses KEY=VALUE,... into a QDict. Works like
qemu_opts_parse(), except:
* Returns a QDict instead of a QemuOpts (d'oh).
* Supports nesting, unlike QemuOpts: a KEY is split into key
fragments at '.' (dotted key convention; the block layer does
something similar on top of QemuOpts). The key fragments are QDict
keys, and the last one's value is updated to VALUE.
* Each key fragment may be up to 127 bytes long. qemu_opts_parse()
limits the entire key to 127 bytes.
* Overlong key fragments are rejected. qemu_opts_parse() silently
truncates them.
* Empty key fragments are rejected. qemu_opts_parse() happily
accepts empty keys.
* It does not store its result anywhere. qemu_opts_parse() stores the
QemuOpts in the QemuOptsList.
* It does not treat parameter "id" specially. qemu_opts_parse()
ignores all but the first "id", and fails when its value isn't
id_wellformed() or is a duplicate (a QemuOpts with the same ID is
already stored). It also screws up when a value contains ",id=".
* Implied value is not supported. qemu_opts_parse() desugars "foo" to
"foo=on", and "nofoo" to "foo=off".
* An implied key's value can't be empty, and can't contain ','.
I intend to grow this into a saner replacement for QemuOpts. It'll
take time, though.
Note: keyval_parse() provides no way to do lists, and its key syntax
is incompatible with the __RFQDN_ prefix convention for downstream
extensions, because it blindly splits at '.', even in __RFQDN_. Both
issues will be addressed later in the series.
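To illustrate the nesting behaviour, a sketch (assuming the
keyval_parse() signature added by this patch; error handling is
abbreviated):

    Error *err = NULL;
    QDict *dict = keyval_parse("ide.drive.file=disk.img,ide.drive.cache=none",
                               NULL, &err);

    if (err) {
        error_report_err(err);
    } else {
        /* dict is {"ide": {"drive": {"file": "disk.img",
         *                            "cache": "none"}}} */
        QDECREF(dict);
    }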
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1488317230-26248-4-git-send-email-armbru@redhat.com>
AioContext is fairly self-contained; its only dependency is QEMUTimer,
which in turn doesn't need anything else. So move them out of
block-obj-y to avoid introducing a dependency from io/ to block-obj-y.
main-loop and its dependency iohandler also need to be moved, because
later in this series io/ will call iohandler_get_aio_context.
[Changed copyright "the QEMU team" to "other QEMU contributors" as
suggested by Daniel Berrange and agreed by Paolo.
--Stefan]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170213135235.12274-2-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
It is not possible to implement functions in host-utils.c for
architectures with native quadword support, because the guard is
implemented in the Makefile. This patch moves the guard out of the
Makefile and into the implementation file.
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A QemuLockCnt comprises a counter and a mutex, with primitives
to increment and decrement the counter, and to take and release the
mutex. It can be used to do lock-free visits to a data structure
whenever mutexes would be too heavy-weight and the critical section
is too long for RCU.
This could be implemented simply by protecting the counter with the
mutex, but QemuLockCnt is harder to misuse and more efficient.
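The intended usage pattern looks roughly like this (a sketch only; the
visited list and the freeing logic are illustrative):

    static QemuLockCnt lockcnt;     /* set up with qemu_lockcnt_init() */

    static void visitor(void)
    {
        qemu_lockcnt_inc(&lockcnt);
        /* ... walk the shared list; nodes cannot be freed while the
         * counter is nonzero ... */
        qemu_lockcnt_dec(&lockcnt);
    }

    static void remover(void)
    {
        qemu_lockcnt_lock(&lockcnt);
        if (qemu_lockcnt_count(&lockcnt) == 0) {
            /* no visits in flight: safe to unlink and free nodes */
        }
        qemu_lockcnt_unlock(&lockcnt);
    }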
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170112180800.21085-3-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
It is simpler and a bit faster, and QEMU does not need the contention
callbacks (and thus the fairness) anymore.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477565348-5458-21-git-send-email-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
A number of different places across the code base use CONFIG_UUID. Some
of them are soft dependencies, some are not built if libuuid is not
available, some come with a dummy fallback, and some throw runtime
errors. It is hard to maintain, and hard for users to reason about.
Since UUID is a simple standard with only a small number of operations,
it is cleaner to have a central support in libqemuutil. This patch adds
qemu_uuid_* functions that all uuid users in the code base can
rely on. Except for qemu_uuid_generate, which is new code, all other
functions are just copies of existing fallbacks from other files.
Note that qemu_uuid_parse is moved without updating the function
signature to use QemuUUID, to keep this patch simple.
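Typical usage of the new helpers, as a sketch (names as added to
include/qemu/uuid.h by this patch; UUID_FMT_LEN is assumed to be the
36-character textual length):

    #include "qemu/uuid.h"

    void uuid_example(void)
    {
        QemuUUID uuid;
        char str[UUID_FMT_LEN + 1];

        qemu_uuid_generate(&uuid);
        qemu_uuid_unparse(&uuid, str);
        /* str now holds the canonical textual form, e.g.
         * "de305d54-75b4-431b-adb2-eb6b9e546014" */
    }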
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-Id: <1474432046-325-2-git-send-email-famz@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1472496380-19706-2-git-send-email-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
g_list_insert_sorted_merged() is rather large to be an inline
function; move it to its own file. range_merge() and
ranges_can_merge() can likewise move, as they are only used
internally. Also, it becomes obvious that the condition within
range_merge() is already satisfied by its caller, and that the
return value is not used.
The diffstat is misleading, because of the copyright boilerplate.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1464712890-14262-2-git-send-email-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This is a fast, scalable chained hash table with optional auto-resizing, allowing
reads that are concurrent with reads, and reads/writes that are concurrent
with writes to separate buckets.
A hash table with these features will be necessary for the scalability
of the ongoing MTTCG work; before those changes arrive we can already
benefit from the single-threaded speedup that qht also provides.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-11-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Sometimes it is useful to have a quick histogram to represent a certain
distribution -- for example, when investigating a performance regression
in a hash table due to inadequate hashing.
The appended code allows us to easily represent a distribution using Unicode
characters. Further, the data structure keeping track of the distribution
is so simple that obtaining its values for off-line processing is trivial.
Example, taking the last 10 commits to QEMU:
Characters in commit title    Count
-----------------------------------
                        39        1
                        48        1
                        53        1
                        54        2
                        57        1
                        61        1
                        67        1
                        78        1
                        80        1
qdist_init(&dist);
qdist_inc(&dist, 39);
[...]
qdist_inc(&dist, 80);
char *str = qdist_pr(&dist, 9, QDIST_PR_LABELS);
// -> [39.0,43.6)▂▂ █▂ ▂ ▄[75.4,80.0]
g_free(str);
str = qdist_pr(&dist, 4, QDIST_PR_LABELS);
// -> [39.0,49.2)▁█▁▁[69.8,80.0]
g_free(str);
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-9-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The log code will become a common facility with tracepoint support in
the next step.
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1452174932-28657-9-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The standard glib-provided g_base64_decode doesn't provide any
kind of sensible error checking on its input. Add a QEMU custom
wrapper, qbase64_decode, which can be used with untrustworthy
input that may contain invalid base64 characters, embedded
NUL characters, or may not be NUL-terminated at all.
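Typical usage with untrusted input, as a sketch (untrusted_text is a
placeholder; an input length of -1 means NUL-terminated):

    Error *err = NULL;
    size_t len;
    uint8_t *data = qbase64_decode(untrusted_text, -1, &len, &err);

    if (!data) {
        error_report_err(err);    /* bad characters, embedded NUL, ... */
    } else {
        /* ... use data[0..len) ... */
        g_free(data);
    }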
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This module computes the average of a set of values within a time
window, also keeping track of the minimum and maximum values.
In order to produce more accurate results, it works internally by
creating two time windows of the same period, offset by half of
that period. Values are accounted in both windows, and the data is
always returned from the oldest one.
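Usage looks roughly like this (a sketch; the 100 ms period and the
accounted values are just examples):

    TimedAverage ta;

    timed_average_init(&ta, QEMU_CLOCK_VIRTUAL, 100 * SCALE_MS);  /* 100 ms, in ns */
    timed_average_account(&ta, 150);
    timed_average_account(&ta, 350);

    uint64_t min = timed_average_min(&ta);    /* 150 */
    uint64_t avg = timed_average_avg(&ta);    /* 250 */
    uint64_t max = timed_average_max(&ta);    /* 350 */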
[Add missing util/replay.o to test-timed-average dependencies to fix the
build.
--Stefan]
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 201b09c21bbc9c329779d2b2365ee2b9c80dceeb.1446044837.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
vhost, pc, virtio features, fixes, cleanups
New features:
VT-d support for devices behind a bridge
vhost-user migration support
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 22 Oct 2015 12:39:19 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream: (37 commits)
hw/isa/lpc_ich9: inject the SMI on the VCPU that is writing to APM_CNT
i386: keep cpu_model field in MachineState uptodate
vhost: set the correct queue index in case of migration with multiqueue
piix: fix resource leak reported by Coverity
seccomp: add memfd_create to whitelist
vhost-user-test: check ownership during migration
vhost-user-test: add live-migration test
vhost-user-test: learn to tweak various qemu arguments
vhost-user-test: wrap server in TestServer struct
vhost-user-test: remove useless static check
vhost-user-test: move wait_for_fds() out
vhost: add migration block if memfd failed
vhost-user: use an enum helper for features mask
vhost user: add rarp sending after live migration for legacy guest
vhost user: add support of live migration
net: add trace_vhost_user_event
vhost-user: document migration log
vhost: use a function for each call
vhost-user: add a migration blocker
vhost-user: send log shm fd along with log_base
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement a memfd_create() fallback for when it is not available in the
system libc. memfd_create() is still not included in glibc today,
although it has been available since Linux 3.17 in Oct 2014.
memfd has numerous advantages over traditional shm/mmap for fd-based
IPC memory sharing, which we are going to make use of for vhost-user
logging memory in the following patches.
The next patches are going to introduce helpers that follow memfd usage
best practices and provide a compatibility fallback. Until then memfd.c
is of little use, and it compiles to nothing when memfd_create() is
provided by the system.
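The fallback essentially boils down to invoking the raw system call
when glibc lacks the wrapper, roughly like this (a sketch, not the
exact code in util/memfd.c):

    #include <errno.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int fallback_memfd_create(const char *name, unsigned int flags)
    {
    #ifdef SYS_memfd_create
        return syscall(SYS_memfd_create, name, flags);
    #else
        errno = ENOSYS;
        return -1;
    #endif
    }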
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
Make it easier to add new unrelated units with shorter lines.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
Anonymous and file-backed RAM allocation are now almost exactly the same.
Reduce code duplication by moving RAM mmap code out of oslib-posix.c and
exec.c.
Reported-by: Marc-André Lureau <mlureau@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
The Buffer code in the VNC server is useful for the IO channel
code, so pull it out into a shared module, QIOBuffer.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The coroutine files are currently referenced by the block-obj-y
variable. The coroutine functionality, though, is already used by
more than just the block code; e.g. the migration code uses coroutine
yield. In the future the I/O channel code will also use the
coroutine yield functionality. Since the coroutine code is nicely
self-contained it can be easily built as part of the libqemuutil.a
library, making it widely available.
The headers are also moved into include/qemu, instead of the
include/block directory, since they are now part of the util
codebase, and the implementation was never in the block/ directory
either.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
To prepare for a generic internal cipher API, move the
built-in AES implementation into the crypto/ directory
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1435770638-25715-3-git-send-email-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This includes a (mangled) copy of the liburcu code. The main changes
are: 1) removing dependencies on many other header files in liburcu; 2)
removing for simplicity the tentative busy waiting in synchronize_rcu,
which has limited performance effects; 3) replacing futexes in
synchronize_rcu with QemuEvents for Win32 portability. The API is
the same as liburcu, so it should be possible in the future to require
liburcu on POSIX systems for example and use our copy only on Windows.
Among the various versions available I chose urcu-mb, which is the
least invasive implementation even though it does not have the
fastest rcu_read_{lock,unlock} implementation. The urcu flavor can
be changed later, after benchmarking.
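The usage pattern the API is meant to support looks like this (a
sketch; Foo and global_foo are illustrative, the calls mirror the
liburcu-style API described above):

    typedef struct Foo { int value; } Foo;
    static Foo *global_foo;

    static int reader(void)
    {
        int val = 0;

        rcu_read_lock();
        Foo *p = atomic_rcu_read(&global_foo);
        if (p) {
            val = p->value;
        }
        rcu_read_unlock();
        return val;
    }

    static void update(Foo *new_foo)
    {
        Foo *old = global_foo;

        atomic_rcu_set(&global_foo, new_foo);
        synchronize_rcu();    /* no reader can still be using 'old' */
        g_free(old);
    }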
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
IDs have long spread beyond QemuOpts: not everything with an ID
necessarily goes through QemuOpts. Commit 9aebf3b is about such a
case: block layer names are meant to be well-formed IDs, but some of
them don't go through QemuOpts, and thus weren't checked. The commit
fixed that the straightforward way: rename the internal QemuOpts
helper id_wellformed() to qemu_opts_id_wellformed() and give it
external linkage.
Instead of using it directly in block.c, the commit adds wrapper
bdrv_is_valid_name(), probably to hide the connection to QemuOpts.
Go one logical step further: emancipate IDs from QemuOpts. Rename the
function back to id_wellformed(), and put it in another file. While
there, clean up its return value to bool. Peel off the bdrv_is_valid_name()
wrapper.
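For illustration, assuming id_wellformed() keeps its existing rule (a
leading letter followed by letters, digits, '-', '.' or '_'):

    assert(id_wellformed("drive-virtio0"));
    assert(!id_wellformed("0drive"));    /* must not start with a digit */
    assert(!id_wellformed(""));          /* must not be empty */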
[Replaced stray return 0 with return false to match bool returns used
elsewhere in id_wellformed().
--Stefan]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h. Removing this
disparity is easier now that the two ppc backends are merged.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
QemuMutex does not guarantee fairness and cannot be acquired
recursively:
Fairness means each locker gets a turn and the scheduler cannot cause
starvation.
Recursive locking is useful for composition: it allows a sequence of
locking operations to be invoked atomically by acquiring the lock around
them.
This patch adds RFifoLock, a recursive lock that guarantees FIFO order.
Its first user is added in the next patch.
RFifoLock has one additional feature: it can be initialized with an
optional contention callback. The callback is invoked whenever a thread
must wait for the lock. For example, it can be used to poke the current
owner so that they release the lock soon.
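A sketch of how the lock and its contention callback are meant to be
used (the callback body is illustrative):

    static void nudge_owner(void *opaque)
    {
        /* e.g. kick an event loop so the current owner drops the lock */
    }

    static void example(void)
    {
        RFifoLock lock;

        rfifolock_init(&lock, nudge_owner, NULL);
        rfifolock_lock(&lock);
        rfifolock_lock(&lock);      /* recursive acquisition is allowed */
        rfifolock_unlock(&lock);
        rfifolock_unlock(&lock);
        rfifolock_destroy(&lock);
    }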
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that the monitor and readline are decoupled, readline.h no longer
belongs in include/monitor/. Put the header into include/qemu/.
Move the source file into util/ so it can be linked as part of
libqemuutil.a.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Implement the continuous leaky bucket algorithm devised on IRC as a separate
module.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In two places qemu uses openpty() which is very system-dependent,
and in both places the pty is switched to raw mode as well.
Make a wrapper function which does both steps, and move all the
system-dependent complexity into a separate file, together
with static/local implementations of openpty() and cfmakeraw()
from qemu-char.c.
It is in a separate file, not part of oslib-posix.c, because
openpty() often resides in -lutil which is not linked to
every program qemu builds.
This change removes the #includes of <pty.h>, <termios.h>
and other rather specific system headers from qemu-common.h,
which really isn't the place for such specific headers.
This version has been verified to build correctly on Linux,
OpenBSD, FreeBSD and OpenIndiana. On the latter it lets qemu
be built with the GTK GUI, which was not possible there before
due to missing openpty() and cfmakeraw().
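A usage sketch of the wrapper (the exact signature is assumed here: it
returns the master fd and fills in the slave fd and, optionally, the
slave pty's name):

    int master, slave;
    char pty_name[128];

    master = qemu_openpty_raw(&slave, pty_name);
    if (master < 0) {
        /* openpty() failed */
    }
    /* both ends are already in raw mode; pty_name names the slave
     * device */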
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Andreas Färber <andreas.faerber@web.de>
This adds the Castagnoli CRC32C algorithm, using the 0x11EDC6F41
polynomial.
This is extracted from the linux kernel cryptographic crc32.c module.
The algorithm is based on:
Castagnoli93: Guy Castagnoli and Stefan Braeuer and Martin Herrman
"Optimization of Cyclic Redundancy-Check Codes with 24
and 32 Parity Bits", IEEE Transactions on Communication,
Volume 41, Number 6, June 1993
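Usage is a single call (a sketch; the all-ones seed shown is a common
convention, and some formats additionally invert the result):

    uint8_t buf[512];
    uint32_t crc;

    /* ... fill buf ... */
    crc = crc32c(0xffffffff, buf, sizeof(buf));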
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Factor out the hexdumper functionality from iov for all to use. Useful for
creating verbose debug printfery that dumps packet data.
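A hedged usage sketch (the parameter order shown is an assumption
about the new helper, not quoted from the header):

    uint8_t pkt[64];

    /* ... receive a packet into pkt ... */
    qemu_hexdump((const char *)pkt, stderr, "rx", sizeof(pkt));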
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: faaac219c55ea586d3f748befaf5a2788fd271b8.1361853677.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HBitmap provides an array of bits. The bits are stored as usual in an
array of unsigned longs, but HBitmap is also optimized to provide fast
iteration over set bits; going from one bit to the next is O(logB n)
worst case, with B = sizeof(long) * CHAR_BIT: the result is low enough
that the number of levels is in fact fixed.
In order to do this, it stacks multiple bitmaps with progressively coarser
granularity; in all levels except the last, bit N is set iff the N-th
unsigned long is nonzero in the immediately next level. When iteration
completes on the last level it can examine the 2nd-last level to quickly
skip entire words, and even do so recursively to skip blocks of 64 words or
powers thereof (32 on 32-bit machines).
Given an index in the bitmap, it can be split into groups of bits like
this (for the 64-bit case):
bits 0-57 => word in the last bitmap     | bits 58-63 => bit in the word
bits 0-51 => word in the 2nd-last bitmap | bits 52-57 => bit in the word
bits 0-45 => word in the 3rd-last bitmap | bits 46-51 => bit in the word
So it is easy to move up simply by shifting the index right by
log2(BITS_PER_LONG) bits. To move down, you shift the index left
similarly, and add the word index within the group. Iteration uses
ffs (find first set bit) to find the next word to examine; this
operation can be done in constant time in most current architectures.
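In code, the level-index arithmetic described above is just shifts and
masks; illustrative only, for the 64-bit case:

    #define BITS_PER_LEVEL 6    /* log2(BITS_PER_LONG) on 64-bit hosts */

    /* e.g. bit 1000 of the last level ... */
    unsigned long word = 1000 >> BITS_PER_LEVEL;                /* word 15 */
    unsigned long bit  = 1000 & ((1UL << BITS_PER_LEVEL) - 1);  /* bit 40  */
    /* ... and word 15 is tracked by bit 15 of the 2nd-last level,
     * i.e. word 0, bit 15 one level up. */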
When setting or clearing a range of m bits on all levels, the work to
perform is O(m + m/W + m/W^2 + ...), which is O(m) like on a regular
bitmap.
When iterating on a bitmap, each bit (on any level) is only visited
once. Hence, the total cost of visiting a bitmap with m bits in it is
the number of bits that are set in all bitmaps. Unless the bitmap is
extremely sparse, this is also O(m + m/W + m/W^2 + ...), so the amortized
cost of advancing from one bit to the next is usually constant.
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>