It is simpler and a bit faster, and QEMU does not need the contention
callbacks (and thus the fairness) anymore.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1477565348-5458-21-git-send-email-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
A number of different places across the code base use CONFIG_UUID. Some
of them treat it as a soft dependency, some are not built if libuuid is
not available, some come with a dummy fallback, and some throw a runtime
error. This is hard to maintain and hard for users to reason about.
Since UUID is a simple standard with only a small number of operations,
it is cleaner to have central support in libqemuutil. This patch adds
qemu_uuid_* functions that all UUID users in the code base can rely on.
Except for qemu_uuid_generate, which is new code, all other functions
are simply copied from existing fallbacks in other files.
Note that qemu_uuid_parse is moved without updating the function
signature to use QemuUUID, to keep this patch simple.
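As an illustration of the kind of helper this centralizes, here is a
minimal sketch of generating and printing a version-4 (random) UUID.
Everything below (the names, the use of rand()) is illustrative only
and is not the libqemuutil code:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
typedef struct {
    uint8_t data[16];
} QemuUUID;
/* Illustrative only: fill a UUID with random bytes and set the version
 * (4) and variant bits as RFC 4122 requires. A real implementation
 * would use a properly seeded random source, not rand(). */
static void uuid_generate_v4(QemuUUID *u)
{
    for (int i = 0; i < 16; i++) {
        u->data[i] = rand() & 0xff;
    }
    u->data[6] = (u->data[6] & 0x0f) | 0x40;  /* version 4 */
    u->data[8] = (u->data[8] & 0x3f) | 0x80;  /* RFC 4122 variant */
}
static void uuid_unparse(const QemuUUID *u, char *out /* >= 37 bytes */)
{
    const uint8_t *d = u->data;
    snprintf(out, 37,
             "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
             "%02x%02x%02x%02x%02x%02x",
             d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7],
             d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]);
}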
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-Id: <1474432046-325-2-git-send-email-famz@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1472496380-19706-2-git-send-email-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
g_list_insert_sorted_merged() is rather large to be an inline
function; move it to its own file. range_merge() and
ranges_can_merge() can likewise move, as they are only used
internally. Also, it becomes obvious that the condition within
range_merge() is already satisfied by its caller, and that the
return value is not used.
The diffstat is misleading, because of the copyright boilerplate.
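For reference, a minimal sketch of the merge logic being discussed; the
struct layout and field names (lo/hi, inclusive bounds) are assumptions
of the sketch, not the actual util/range.c definitions:
#include <stdbool.h>
#include <stdint.h>
/* Hypothetical range with inclusive bounds. */
typedef struct Range {
    uint64_t lo;
    uint64_t hi;
} Range;
/* Two ranges can be merged if they overlap or are adjacent
 * (overflow at the very top of the range is ignored for brevity). */
static bool ranges_can_merge(const Range *a, const Range *b)
{
    return !(a->hi + 1 < b->lo || b->hi + 1 < a->lo);
}
/* Extend *a so that it covers both ranges; the caller has already
 * checked ranges_can_merge(). */
static void range_merge(Range *a, const Range *b)
{
    if (b->lo < a->lo) {
        a->lo = b->lo;
    }
    if (b->hi > a->hi) {
        a->hi = b->hi;
    }
}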
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1464712890-14262-2-git-send-email-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This is a fast, scalable chained hash table with optional auto-resizing, allowing
reads that are concurrent with reads, and reads/writes that are concurrent
with writes to separate buckets.
A hash table with these features will be necessary for the scalability
of the ongoing MTTCG work; before those changes arrive we can already
benefit from the single-threaded speedup that qht also provides.
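To make the concurrency model concrete, here is a much-simplified sketch
of a chained hash table with per-bucket locks. The real qht uses lockless
reads and can resize; this sketch only illustrates the bucket-level
concurrency described above, and all names are made up:
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#define SKETCH_BUCKETS 1024   /* fixed size; the real qht can auto-resize */
struct sketch_node {
    uint32_t hash;
    void *data;
    struct sketch_node *next;
};
struct sketch_bucket {
    pthread_rwlock_t lock;
    struct sketch_node *head;
};
struct sketch_ht {
    struct sketch_bucket b[SKETCH_BUCKETS];
};
static void sketch_ht_init(struct sketch_ht *ht)
{
    for (size_t i = 0; i < SKETCH_BUCKETS; i++) {
        pthread_rwlock_init(&ht->b[i].lock, NULL);
        ht->b[i].head = NULL;
    }
}
/* Readers of the same bucket proceed concurrently; a writer only blocks
 * readers/writers of its own bucket. */
static void *sketch_ht_lookup(struct sketch_ht *ht, uint32_t hash,
                              bool (*cmp)(const void *, const void *),
                              const void *key)
{
    struct sketch_bucket *b = &ht->b[hash % SKETCH_BUCKETS];
    void *ret = NULL;
    pthread_rwlock_rdlock(&b->lock);
    for (struct sketch_node *n = b->head; n; n = n->next) {
        if (n->hash == hash && cmp(n->data, key)) {
            ret = n->data;
            break;
        }
    }
    pthread_rwlock_unlock(&b->lock);
    return ret;
}
static void sketch_ht_insert(struct sketch_ht *ht, uint32_t hash, void *data)
{
    struct sketch_bucket *b = &ht->b[hash % SKETCH_BUCKETS];
    struct sketch_node *n = malloc(sizeof(*n));
    if (!n) {
        abort();               /* keep the sketch simple */
    }
    n->hash = hash;
    n->data = data;
    pthread_rwlock_wrlock(&b->lock);
    n->next = b->head;
    b->head = n;
    pthread_rwlock_unlock(&b->lock);
}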
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-11-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Sometimes it is useful to have a quick histogram to represent a certain
distribution -- for example, when investigating a performance regression
in a hash table due to inadequate hashing.
The appended code allows us to easily represent a distribution using
Unicode characters. Further, the data structure keeping track of the
distribution
is so simple that obtaining its values for off-line processing is trivial.
Example, taking the last 10 commits to QEMU:
Characters in commit title Count
-----------------------------------
39 1
48 1
53 1
54 2
57 1
61 1
67 1
78 1
80 1
qdist_init(&dist);
qdist_inc(&dist, 39);
[...]
qdist_inc(&dist, 80);
char *str = qdist_pr(&dist, 9, QDIST_PR_LABELS);
// -> [39.0,43.6)▂▂ █▂ ▂ ▄[75.4,80.0]
g_free(str);
str = qdist_pr(&dist, 4, QDIST_PR_LABELS);
// -> [39.0,49.2)▁█▁▁[69.8,80.0]
g_free(str);
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-9-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The log code will become a common facility with tracepoints support in
the next step.
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1452174932-28657-9-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The standard glib-provided g_base64_decode doesn't do any kind of
sensible error checking on its input. Add a QEMU custom wrapper,
qbase64_decode, which can be used with untrustworthy input that may
contain invalid base64 characters, embedded NUL characters, or not be
NUL-terminated at all.
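A hedged usage sketch; the exact prototype and error-reporting behaviour
of qbase64_decode are assumed from this description (see
include/qemu/base64.h for the authoritative interface):
#include "qemu/osdep.h"
#include "qemu/base64.h"
#include "qapi/error.h"
/* Assumed shape: qbase64_decode(input, in_len, &out_len, &err) returns
 * a newly allocated buffer, or NULL with 'err' set on invalid input. */
static uint8_t *decode_untrusted(const char *b64, size_t *out_len)
{
    Error *err = NULL;
    uint8_t *data = qbase64_decode(b64, strlen(b64), out_len, &err);
    if (!data) {
        error_report_err(err);   /* invalid character, embedded NUL, ... */
        return NULL;
    }
    return data;                 /* caller frees with g_free() */
}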
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This module computes the average of a set of values within a time
window, also keeping track of the minimum and maximum values.
In order to produce more accurate results it works internally by
creating two time windows of the same period, offset by half of
that period. Values are accounted in both windows and the data is
always returned from the oldest one.
[Add missing util/replay.o to test-timed-average dependencies to fix the
build.
--Stefan]
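A rough sketch of the two-window scheme described above; the type and
function names are made up, and the real util/timed-average.c keeps the
two expirations staggered by half a period more carefully than this:
#include <stdint.h>
typedef struct {
    int64_t expiration;   /* end of this window, in ns */
    uint64_t sum;
    uint64_t count;
    uint64_t min, max;
} Window;
typedef struct {
    Window w[2];          /* same period, offset by period / 2 */
    int64_t period;       /* in ns */
} TimedAvg;
/* Account the value in both windows, resetting any window whose period
 * has elapsed. */
static void timed_avg_account(TimedAvg *ta, int64_t now, uint64_t value)
{
    for (int i = 0; i < 2; i++) {
        Window *w = &ta->w[i];
        if (now >= w->expiration) {
            w->sum = w->count = 0;
            w->min = UINT64_MAX;
            w->max = 0;
            w->expiration = now + ta->period;
        }
        w->sum += value;
        w->count++;
        if (value < w->min) w->min = value;
        if (value > w->max) w->max = value;
    }
}
/* Read from the window that expires first: it is the oldest one. */
static uint64_t timed_avg_read(const TimedAvg *ta)
{
    const Window *oldest =
        ta->w[0].expiration <= ta->w[1].expiration ? &ta->w[0] : &ta->w[1];
    return oldest->count ? oldest->sum / oldest->count : 0;
}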
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 201b09c21bbc9c329779d2b2365ee2b9c80dceeb.1446044837.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
vhost, pc, virtio features, fixes, cleanups
New features:
VT-d support for devices behind a bridge
vhost-user migration support
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 22 Oct 2015 12:39:19 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream: (37 commits)
hw/isa/lpc_ich9: inject the SMI on the VCPU that is writing to APM_CNT
i386: keep cpu_model field in MachineState uptodate
vhost: set the correct queue index in case of migration with multiqueue
piix: fix resource leak reported by Coverity
seccomp: add memfd_create to whitelist
vhost-user-test: check ownership during migration
vhost-user-test: add live-migration test
vhost-user-test: learn to tweak various qemu arguments
vhost-user-test: wrap server in TestServer struct
vhost-user-test: remove useless static check
vhost-user-test: move wait_for_fds() out
vhost: add migration block if memfd failed
vhost-user: use an enum helper for features mask
vhost user: add rarp sending after live migration for legacy guest
vhost user: add support of live migration
net: add trace_vhost_user_event
vhost-user: document migration log
vhost: use a function for each call
vhost-user: add a migration blocker
vhost-user: send log shm fd along with log_base
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement a memfd_create() fallback for systems where it is not
available in libc. memfd_create() is still not included in glibc today,
although it's been available since Linux 3.17 in Oct 2014.
memfd has numerous advantages over traditional shm/mmap for fd-based IPC
memory sharing, which we are going to make use of for the vhost-user
logging memory in the following patches.
The next patches are going to introduce helpers that follow memfd usage
best practices and provide a compatibility fallback; until then memfd.c
is unused, and it ends up empty if memfd_create() is provided by the
system.
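The usual shape of such a fallback is a thin wrapper around the raw
syscall for the case where libc lacks the function. A hedged sketch of
that general technique (not a verbatim copy of util/memfd.c):
#include <errno.h>
#include <sys/syscall.h>
#include <unistd.h>
/* Call the raw syscall if the kernel knows about it but libc does not
 * yet expose memfd_create(); otherwise fail with ENOSYS. */
static int my_memfd_create(const char *name, unsigned int flags)
{
#ifdef __NR_memfd_create
    return syscall(__NR_memfd_create, name, flags);
#else
    errno = ENOSYS;
    return -1;
#endif
}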
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
Make it easier to add new unrelated units with shorter lines.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
Anonymous and file-backed RAM allocation are now almost exactly the same.
Reduce code duplication by moving RAM mmap code out of oslib-posix.c and
exec.c.
Reported-by: Marc-André Lureau <mlureau@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Thibaut Collet <thibaut.collet@6wind.com>
The Buffer code in the VNC server is useful for the IO channel
code, so pull it out into a shared module, QIOBuffer.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The coroutine files are currently referenced by the block-obj-y
variable. The coroutine functionality, though, is already used by
more than just the block code; e.g. the migration code uses coroutine
yield. In the future the I/O channel code will also use the
coroutine yield functionality. Since the coroutine code is nicely
self-contained it can be easily built as part of the libqemuutil.a
library, making it widely available.
The headers are also moved into include/qemu, instead of the
include/block directory, since they are now part of the util
codebase, and the implementation was never in the block/ directory
either.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
To prepare for a generic internal cipher API, move the
built-in AES implementation into the crypto/ directory.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1435770638-25715-3-git-send-email-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This includes a (mangled) copy of the liburcu code. The main changes
are: 1) removing dependencies on many other header files in liburcu; 2)
removing for simplicity the tentative busy waiting in synchronize_rcu,
which has limited performance effects; 3) replacing futexes in
synchronize_rcu with QemuEvents for Win32 portability. The API is
the same as liburcu, so it should be possible in the future to require
liburcu on POSIX systems for example and use our copy only on Windows.
Among the various versions available I chose urcu-mb, which is the
least invasive implementation even though it does not have the
fastest rcu_read_{lock,unlock} implementation. The urcu flavor can
be changed later, after benchmarking.
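For context, the classic read-side/update-side pattern this enables
looks roughly like the sketch below. The RCU-aware accessor names
(atomic_rcu_read/atomic_rcu_set) and the single-updater assumption are
assumptions of the sketch, not statements about this patch:
#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qemu/atomic.h"
struct config {
    int value;
};
static struct config *current_cfg;   /* assumed published earlier */
static int reader(void)
{
    int val;
    rcu_read_lock();                      /* very cheap with urcu-mb */
    val = atomic_rcu_read(&current_cfg)->value;
    rcu_read_unlock();
    return val;
}
/* Assumes a single updater (or external locking between updaters). */
static void updater(struct config *newcfg)
{
    struct config *old = current_cfg;
    atomic_rcu_set(&current_cfg, newcfg); /* publish the new version */
    synchronize_rcu();                    /* wait for pre-existing readers */
    g_free(old);                          /* now safe to reclaim */
}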
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
IDs have long spread beyond QemuOpts: not everything with an ID
necessarily goes through QemuOpts. Commit 9aebf3b is about such a
case: block layer names are meant to be well-formed IDs, but some of
them don't go through QemuOpts, and thus weren't checked. The commit
fixed that the straightforward way: rename the internal QemuOpts
helper id_wellformed() to qemu_opts_id_wellformed() and give it
external linkage.
Instead of using it directly in block.c, the commit adds wrapper
bdrv_is_valid_name(), probably to hide the connection to QemuOpts.
Go one logical step further: emancipate IDs from QemuOpts. Rename the
function back to id_wellformed(), and put it in another file. While
there, clean up its return value to bool. Peel off the bdrv_is_valid_name()
wrapper.
[Replaced stray return 0 with return false to match bool returns used
elsewhere in id_wellformed().
--Stefan]
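A hedged sketch of such a well-formedness check: an ID starts with a
letter and continues with letters, digits, '-', '.' or '_'. The exact
accepted character set is an assumption of the sketch; see util/id.c for
the authoritative rule:
#include <ctype.h>
#include <stdbool.h>
#include <string.h>
static bool id_wellformed_sketch(const char *id)
{
    /* First character must be a letter (also rejects the empty string). */
    if (!isalpha((unsigned char)id[0])) {
        return false;
    }
    for (size_t i = 1; id[i]; i++) {
        if (!isalnum((unsigned char)id[i]) && !strchr("-._", id[i])) {
            return false;
        }
    }
    return true;
}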
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
As a "utility", it only supported ppc, and in a way that other
tcg backends provided directly in tcg-target.h. Removing this
disparity is easier now that the two ppc backends are merged.
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
QemuMutex does not guarantee fairness and cannot be acquired
recursively:
Fairness means each locker gets a turn and the scheduler cannot cause
starvation.
Recursive locking is useful for composition: it allows a sequence of
locking operations to be invoked atomically by acquiring the lock around
them.
This patch adds RFifoLock, a recursive lock that guarantees FIFO order.
Its first user is added in the next patch.
RFifoLock has one additional feature: it can be initialized with an
optional contention callback. The callback is invoked whenever a thread
must wait for the lock. For example, it can be used to poke the current
owner so that they release the lock soon.
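To make the FIFO and recursion semantics concrete, here is a hedged
pthread sketch of such a lock built on tickets; the names and fields are
illustrative and this is not the actual RFifoLock implementation:
#include <pthread.h>
typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    unsigned int head, tail;     /* next ticket to run / next to hand out */
    pthread_t owner;
    unsigned int nesting;        /* recursion depth of the current owner */
    void (*contention_cb)(void *opaque);
    void *cb_opaque;
} FifoLockSketch;
static void fifolock_init(FifoLockSketch *l, void (*cb)(void *), void *opaque)
{
    pthread_mutex_init(&l->mutex, NULL);
    pthread_cond_init(&l->cond, NULL);
    l->head = l->tail = l->nesting = 0;
    l->contention_cb = cb;
    l->cb_opaque = opaque;
}
static void fifolock_lock(FifoLockSketch *l)
{
    pthread_mutex_lock(&l->mutex);
    /* Recursive acquisition: the owner just bumps the nesting count. */
    if (l->nesting > 0 && pthread_equal(l->owner, pthread_self())) {
        l->nesting++;
        pthread_mutex_unlock(&l->mutex);
        return;
    }
    unsigned int ticket = l->tail++;      /* take the next ticket */
    if (ticket != l->head && l->contention_cb) {
        l->contention_cb(l->cb_opaque);   /* e.g. poke the current owner */
    }
    while (ticket != l->head) {
        pthread_cond_wait(&l->cond, &l->mutex);   /* wait for our turn */
    }
    l->owner = pthread_self();
    l->nesting = 1;
    pthread_mutex_unlock(&l->mutex);
}
static void fifolock_unlock(FifoLockSketch *l)
{
    pthread_mutex_lock(&l->mutex);
    if (--l->nesting == 0) {
        l->head++;                        /* hand the lock to the next ticket */
        pthread_cond_broadcast(&l->cond);
    }
    pthread_mutex_unlock(&l->mutex);
}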
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that the monitor and readline are decoupled, readline.h no longer
belongs in include/monitor/. Put the header into include/qemu/.
Move the source file into util/ so it can be linked as part of
libqemuutil.a.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Implement the continuous leaky bucket algorithm devised on IRC as a separate
module.
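A hedged sketch of a continuous leaky bucket, to illustrate the idea;
the names, fields and units are made up and do not match the actual
throttling code:
#include <stdint.h>
typedef struct {
    double level;        /* units currently in the bucket */
    double leak_rate;    /* units leaked per second */
    double capacity;     /* burst allowance, in units */
    int64_t last_ns;     /* timestamp of the last update */
} LeakyBucket;
/* Leak the bucket according to the time elapsed since the last update. */
static void bucket_leak(LeakyBucket *b, int64_t now_ns)
{
    double elapsed = (now_ns - b->last_ns) / 1e9;
    b->level -= b->leak_rate * elapsed;
    if (b->level < 0) {
        b->level = 0;
    }
    b->last_ns = now_ns;
}
/* Account for 'cost' units of work and return how long (in ns) the
 * caller should wait before submitting it to stay under leak_rate. */
static int64_t bucket_account(LeakyBucket *b, int64_t now_ns, double cost)
{
    bucket_leak(b, now_ns);
    b->level += cost;
    if (b->level <= b->capacity) {
        return 0;                        /* within the burst allowance */
    }
    return (int64_t)(((b->level - b->capacity) / b->leak_rate) * 1e9);
}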
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In two places qemu uses openpty() which is very system-dependent,
and in both places the pty is switched to raw mode as well.
Make a wrapper function which does both steps, and move all the
system-dependent complexity into a separate file, together
with static/local implementations of openpty() and cfmakeraw()
from qemu-char.c.
It is in a separate file, not part of oslib-posix.c, because
openpty() often resides in -lutil which is not linked to
every program qemu builds.
This change removes the #include of <pty.h>, <termios.h>
and other rather specific system headers from qemu-common.h,
which really isn't the place for such specific headers.
This version has been verified to build correctly on Linux,
OpenBSD, FreeBSD and OpenIndiana. On the latter it lets qemu
be built with the GTK GUI, which was not possible there before
due to the missing openpty() and cfmakeraw().
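A hedged sketch of what such a wrapper boils down to; the helper name
and the choice to switch the slave side to raw mode are assumptions of
the sketch (see util/qemu-openpty.c for the real one):
#include <pty.h>      /* openpty(); often needs -lutil, or <util.h>/<libutil.h> on BSDs */
#include <termios.h>
/* Open a pty pair and put the slave side into raw mode in one step. */
static int openpty_raw(int *amaster, int *aslave)
{
    struct termios tty;
    if (openpty(amaster, aslave, NULL, NULL, NULL) < 0) {
        return -1;
    }
    if (tcgetattr(*aslave, &tty) == 0) {
        cfmakeraw(&tty);                      /* no echo, no line editing */
        tcsetattr(*aslave, TCSAFLUSH, &tty);
    }
    return 0;
}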
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Andreas Färber <andreas.faerber@web.de>
This adds the Castagnoli CRC32C algorithm, using the 0x11EDC6F41
polynomial.
This is extracted from the Linux kernel's cryptographic crc32.c module.
The algorithm is based on:
Castagnoli93: Guy Castagnoli and Stefan Braeuer and Martin Herrman
"Optimization of Cyclic Redundancy-Check Codes with 24
and 32 Parity Bits", IEEE Transactions on Communication,
Volume 41, Number 6, June 1993
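For reference, the same algorithm in its table-free, bitwise form, using
the reflected form of the polynomial (0x82F63B78). The merged code is
the table-driven kernel version, so this is only a sketch of the
underlying computation, not the code being added:
#include <stddef.h>
#include <stdint.h>
static uint32_t crc32c_sketch(uint32_t crc, const uint8_t *data, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *data++;
        for (int k = 0; k < 8; k++) {
            /* Shift out one bit; XOR in the polynomial if it was set. */
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return ~crc;
}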
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Factor out the hexdumper functionality from iov for all to use. Useful for
creating verbose debug printfery that dumps packet data.
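A hedged sketch of a minimal hex dumper of this kind; the real helper's
name and signature are whatever the patch puts in the shared header:
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
/* Print 16 bytes per line, prefixed with the offset. */
static void hexdump_sketch(FILE *fp, const uint8_t *buf, size_t len)
{
    for (size_t off = 0; off < len; off += 16) {
        fprintf(fp, "%08zx:", off);
        for (size_t i = off; i < off + 16 && i < len; i++) {
            fprintf(fp, " %02x", buf[i]);
        }
        fputc('\n', fp);
    }
}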
Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-id: faaac219c55ea586d3f748befaf5a2788fd271b8.1361853677.git.peter.crosthwaite@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HBitmap provides an array of bits. The bits are stored as usual in an
array of unsigned longs, but HBitmap is also optimized to provide fast
iteration over set bits; going from one bit to the next is O(logB n)
worst case, with B = sizeof(long) * CHAR_BIT: the result is low enough
that the number of levels is in fact fixed.
In order to do this, it stacks multiple bitmaps with progressively coarser
granularity; in all levels except the last, bit N is set iff the N-th
unsigned long is nonzero in the immediately next level. When iteration
completes on the last level it can examine the 2nd-last level to quickly
skip entire words, and even do so recursively to skip blocks of 64 words or
powers thereof (32 on 32-bit machines).
Given an index in the bitmap, it can be split into groups of bits like
this (for the 64-bit case):
bits 0-57 => word in the last bitmap | bits 58-63 => bit in the word
bits 0-51 => word in the 2nd-last bitmap | bits 52-57 => bit in the word
bits 0-45 => word in the 3rd-last bitmap | bits 46-51 => bit in the word
So it is easy to move up simply by shifting the index right by
log2(BITS_PER_LONG) bits. To move down, you shift the index left
similarly, and add the word index within the group. Iteration uses
ffs (find first set bit) to find the next word to examine; this
operation can be done in constant time in most current architectures.
Setting or clearing a range of m bits on all levels, the work to perform
is O(m + m/W + m/W^2 + ...), which is O(m) like on a regular bitmap.
When iterating on a bitmap, each bit (on any level) is only visited
once. Hence, the total cost of visiting a bitmap with m bits in it is
the number of bits that are set in all bitmaps. Unless the bitmap is
extremely sparse, this is also O(m + m/W + m/W^2 + ...), so the amortized
cost of advancing from one bit to the next is usually constant.
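A hedged sketch of the index arithmetic described above, assuming 64-bit
longs; the names are illustrative, not the ones in util/hbitmap.c:
#include <stdint.h>
#define BITS_PER_LEVEL 6                   /* log2(BITS_PER_LONG) on 64-bit */
/* Within one level, a bit position splits into a word and a bit. */
static inline uint64_t hb_word(uint64_t pos) { return pos >> BITS_PER_LEVEL; }
static inline unsigned hb_bit(uint64_t pos)  { return pos & 63; }
/* Moving up a level: this level's word index is the coarser level's
 * bit position. Moving down reverses the shift and re-adds the bit. */
static inline uint64_t hb_up(uint64_t pos)
{
    return pos >> BITS_PER_LEVEL;
}
static inline uint64_t hb_down(uint64_t word, unsigned bit)
{
    return (word << BITS_PER_LEVEL) + bit;
}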
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>