All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier(). The aio_disable_external() API that
temporarily disables fd handlers that were registered with is_external=true
is therefore dead code.
Remove aio_disable_external(), aio_enable_external(), and the
is_external arguments to aio_set_fd_handler() and
aio_set_event_notifier().
The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().
Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:
@@
expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
@@
- aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
+ aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
@@
expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
@@
- aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
+ aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
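For illustration, a converted call site looks roughly like this (a minimal sketch; the handler and wrapper names are hypothetical, only the aio_set_fd_handler() signature comes from the semantic patch above):
```
#include "block/aio.h"

/* Hypothetical read handler; IOHandler callbacks take only the opaque pointer. */
static void my_read_handler(void *opaque)
{
    /* ... consume data from the fd associated with opaque ... */
}

void my_register_fd(AioContext *ctx, int fd, void *opaque)
{
    /* Before: aio_set_fd_handler(ctx, fd, false, my_read_handler,
     *                            NULL, NULL, NULL, opaque);
     * After: the is_external argument is simply dropped. */
    aio_set_fd_handler(ctx, fd, my_read_handler, NULL, NULL, NULL, opaque);
}
```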
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230516190238.8401-21-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.
Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.
Switch to BlockDevOps->drained_begin/end() callbacks.
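A minimal sketch of what the new hooks look like (the export type and helper names here are assumptions, not the exact vhost-user code):
```
#include "sysemu/block-backend.h"   /* BlockDevOps, blk_set_dev_ops() */

typedef struct MyExport MyExport;                   /* hypothetical export state */
static void my_suspend_processing(MyExport *exp);   /* stop watching the vhost-user socket */
static void my_resume_processing(MyExport *exp);    /* start watching it again */

static void my_drained_begin(void *opaque)
{
    my_suspend_processing(opaque);   /* no new requests while drained */
}

static void my_drained_end(void *opaque)
{
    my_resume_processing(opaque);    /* resume request processing */
}

static const BlockDevOps my_dev_ops = {
    .drained_begin = my_drained_begin,
    .drained_end   = my_drained_end,
};

/* Registered once at export creation time:
 *     blk_set_dev_ops(blk, &my_dev_ops, exp);
 */
```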
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230516190238.8401-8-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.
When blk_co_preadv/pwritev()/etc returns it wakes the
bdrv_drained_begin() thread but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.
One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.
(This example is theoretical, I came across this while reading the
code and have not tried to reproduce it.)
It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.
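A sketch of the resulting callback, assuming a query helper with roughly this shape (the helper name is an assumption):
```
/* .drained_poll() returns true while work is still pending, so
 * bdrv_drained_begin() keeps polling until all requests have completed. */
static bool my_drained_poll(void *opaque)
{
    VuServer *server = opaque;

    /* hypothetical helper: true while the atomic in-flight counter is > 0 */
    return vhost_user_server_has_in_flight(server);
}
```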
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230516190238.8401-7-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.
Normally a refcount destroys the object upon reaching zero. The VuServer
counter is used to wake up the vhost-user coroutine when there are no
more requests.
Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230516190238.8401-6-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Since commit abe34282 ("win32: avoid mixing SOCKET and file descriptor
space"), we set HANDLE_FLAG_PROTECT_FROM_CLOSE on the socket FD, to
prevent closing the HANDLE with CloseHandle. This raises an exception
which under gdb is fatal, and qemu exits.
Let's catch the expected error instead.
Note: this appears to work, but the mingw64 macro is not well documented
or tested, and it's not obvious how it is meant to be used.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20230515132440.1025315-1-marcandre.lureau@redhat.com>
Merge tag 'pull-vfio-20230524' of https://github.com/legoater/qemu into staging
vfio queue:
* Fix for a memory corruption due to an extra free
* Fix for a compile breakage
* tag 'pull-vfio-20230524' of https://github.com/legoater/qemu:
util/vfio-helpers: Use g_file_read_link()
vfio/pci: Fix a use-after-free issue
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When _FORTIFY_SOURCE=2, glibc version is 2.35, and GCC version is
12.1.0, the compiler complains as follows:
In file included from /usr/include/features.h:490,
from /usr/include/bits/libc-header-start.h:33,
from /usr/include/stdint.h:26,
from /usr/lib/gcc/aarch64-unknown-linux-gnu/12.1.0/include/stdint.h:9,
from /home/alarm/q/var/qemu/include/qemu/osdep.h:94,
from ../util/vfio-helpers.c:13:
In function 'readlink',
inlined from 'sysfs_find_group_file' at ../util/vfio-helpers.c:116:9,
inlined from 'qemu_vfio_init_pci' at ../util/vfio-helpers.c:326:18,
inlined from 'qemu_vfio_open_pci' at ../util/vfio-helpers.c:517:9:
/usr/include/bits/unistd.h:119:10: error: argument 2 is null but the corresponding size argument 3 value is 4095 [-Werror=nonnull]
119 | return __glibc_fortify (readlink, __len, sizeof (char),
| ^~~~~~~~~~~~~~~
This error implies the allocated buffer can be NULL. Use
g_file_read_link(), which allocates the buffer automatically, to avoid
the error.
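A standalone example of the replacement call, for illustration (plain GLib, outside QEMU):
```
#include <glib.h>
#include <stdio.h>

int main(void)
{
    g_autoptr(GError) err = NULL;
    /* g_file_read_link() allocates the result itself, so there is no
     * fixed-size buffer and no readlink() fortify warning. */
    g_autofree char *target = g_file_read_link("/proc/self/exe", &err);

    if (!target) {
        fprintf(stderr, "readlink failed: %s\n", err->message);
        return 1;
    }
    printf("resolved to %s\n", target);
    return 0;
}
```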
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Move the code from tcg/. The only use of these bits so far
is with respect to the atomicity of tcg operations.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use cpuinfo_init() during init_accel(), and the variable cpuinfo
during test_buffer_is_zero_next_accel(). Adjust the logic that
cycles through the set of accelerators for testing.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add a bit to indicate when VMOVDQU is also atomic if aligned.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add cpuinfo.h for i386 and x86_64, and the initialization
for that in util/. Populate that with a slightly altered
copy of the tcg host probing code. Other uses of cpuid.h
will be adjusted one patch at a time.
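For reference, the kind of host probing being consolidated looks roughly like this when written directly against GCC's cpuid.h (QEMU's own cpuinfo bit names and layout are not reproduced here):
```
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("SSE2: %s\n", (edx & bit_SSE2) ? "yes" : "no");
        printf("AVX:  %s\n", (ecx & bit_AVX) ? "yes" : "no");
    }
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("AVX2: %s\n", (ebx & bit_AVX2) ? "yes" : "no");
    }
    return 0;
}
```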
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
QEMU's event loop supports nesting, which means that event handler
functions may themselves call aio_poll(). The condition that triggered a
handler must be reset before the nested aio_poll() call, otherwise the
same handler will be called and immediately re-enter aio_poll. This
leads to an infinite loop and stack exhaustion.
Poll handlers are especially prone to this issue, because they typically
reset their condition by finishing the processing of pending work.
Unfortunately it is during the processing of pending work that nested
aio_poll() calls typically occur and the condition has not yet been
reset.
Disable a poll handler during ->io_poll_ready() so that a nested
aio_poll() call cannot invoke ->io_poll_ready() again. As a result, the
disabled poll handler and its associated fd handler do not run during
the nested aio_poll(). Calling aio_set_fd_handler() from inside nested
aio_poll() could cause it to run again. If the fd handler is pending
inside nested aio_poll(), then it will also run again.
In theory fd handlers can be affected by the same issue, but they are
more likely to reset the condition before calling nested aio_poll().
This is a special case and it's somewhat complex, but I don't see a way
around it as long as nested aio_poll() is supported.
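The general rule for handler authors can be illustrated with a hedged sketch (hypothetical types and helpers, not QEMU code):
```
#include <stdint.h>
#include <unistd.h>

typedef struct {
    int eventfd;
} MyState;

static void process_pending_work(MyState *s);   /* may nest aio_poll() */

/* Clear the condition that made us runnable *before* anything that can
 * re-enter the event loop; otherwise the nested poll sees the eventfd
 * still readable and calls this handler again, recursing forever. */
static void my_event_read(void *opaque)
{
    MyState *s = opaque;
    uint64_t cnt;

    if (read(s->eventfd, &cnt, sizeof(cnt)) != sizeof(cnt)) {
        return;                  /* spurious wakeup: nothing to do */
    }

    process_pending_work(s);     /* safe: the condition is already reset */
}
```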
Cc: qemu-stable@nongnu.org
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2186181
Fixes: c382706925 ("block: Mark bdrv_co_io_(un)plug() and callers GRAPH_RDLOCK")
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230502184134.534703-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
To simplify the code, rename coroutine-win32.c to match the option
passed to configure.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU adds the path to glib.h to all compilation commands. This is simpler
due to the pervasive use of static_library, and was grandfathered in from
the previous Make-based build system. Until Meson 0.63 the only way to
do this was to detect glib in configure and use add_project_arguments,
but now it is possible to use add_project_dependencies instead.
gmodule is detected in a separate variable, with export enabled for
modules and disabled for plugins.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a new -run-with option with an async-teardown=on|off parameter. It is
visible in the output of query-command-line-options QMP command, so it
can be discovered and used by libvirt.
The -async-teardown option is now redundant; deprecate it.
Reported-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Fixes: c891c24b1a ("os-posix: asynchronous teardown for shutdown on Linux")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20230505120051.36605-2-imbrenda@linux.ibm.com>
[thuth: Add curly braces to fix error with GCC 8.5, fix bug in deprecated.rst]
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no need for the AioContext lock in aio_wait_bh_oneshot().
It's easy to remove the lock from existing callers and then switch from
AIO_WAIT_WHILE() to AIO_WAIT_WHILE_UNLOCKED() in aio_wait_bh_oneshot().
Document that the AioContext lock should not be held across
aio_wait_bh_oneshot(). Holding a lock across aio_poll() can cause
deadlock so we don't want callers to do that.
This is a step towards getting rid of the AioContext lock.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230404153307.458883-1-stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Use a store-release when enqueuing a new call_rcu and a load-acquire
when dequeuing, and read the tail only after checking that node->next is
consistent. This is the standard message-passing pattern and is clearer
than mb_read/mb_set.
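For readers unfamiliar with the idiom, here is a self-contained sketch of release/acquire message passing in plain C11 atomics (the generic pattern, not the call_rcu code itself):
```
#include <stdatomic.h>
#include <stdlib.h>

struct node {
    int payload;                 /* plain fields, written before publication */
    struct node *next;
};

static _Atomic(struct node *) head;

void producer(void)
{
    struct node *n = malloc(sizeof(*n));
    n->payload = 42;
    n->next = NULL;
    /* Release: every write above is visible to whoever acquires 'head'. */
    atomic_store_explicit(&head, n, memory_order_release);
}

void consumer(void)
{
    /* Acquire: pairs with the release store in producer(). */
    struct node *n = atomic_load_explicit(&head, memory_order_acquire);
    if (n) {
        /* n->payload and n->next are guaranteed to be visible here. */
    }
}
```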
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Some time after systemd documented LISTEN_PID and LISTEN_FDS for
socket activation, it added LISTEN_FDNAMES, now documented at:
https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
In particular, look at the implementation of sd_listen_fds_with_names():
https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-daemon/sd-daemon.c
If we ever pass LISTEN_PID=xxx and LISTEN_FDS=n to a child process,
but leave LISTEN_FDNAMES=... unchanged as inherited from our parent
process, then our child process using sd_listen_fds_with_names() might
see a mismatch in the number of names (unexpected -EINVAL failure), or
even if the number of names matches the values of those names may be
unexpected (with even less predictable results).
Usually, this is not an issue - the point of LISTEN_PID is to tell
systemd socket activation to ignore all other LISTEN_* if they were
not directed to this particular pid. But if we end up consuming a
socket directed to this qemu process, and later decide to spawn a
child process that also needs systemd socket activation, we must
ensure we are not leaking any stale systemd variables through to that
child. The easiest way to do this is to wipe ALL LISTEN_* variables
at the time we consume a socket, even if we do not yet care about a
LISTEN_FDNAMES passed in from the parent process.
See also https://lists.freedesktop.org/archives/systemd-devel/2023-March/048920.html
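A minimal sketch of the wiping step (the helper name is made up):
```
#include <stdlib.h>

/* Scrub all systemd socket-activation variables once the activated socket
 * has been consumed, so they cannot leak into any child process. */
static void clear_systemd_activation_env(void)
{
    unsetenv("LISTEN_PID");
    unsetenv("LISTEN_FDS");
    unsetenv("LISTEN_FDNAMES");
}
```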
Thanks: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20230324153349.1123774-1-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A BH callback can free the BH, causing a use-after-free in aio_bh_call.
Fix that by keeping a local copy of the re-entrancy guard pointer.
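Schematically, the fixed call path looks like this (hypothetical types; the real code lives in util/async.c):
```
#include <stdbool.h>

typedef struct {
    bool engaged_in_io;
} MyReentrancyGuard;

typedef struct MyBH {
    void (*cb)(void *opaque);
    void *opaque;
    MyReentrancyGuard *reentrancy_guard;
} MyBH;

void my_bh_call(MyBH *bh)
{
    /* Local copy: the callback may delete and free 'bh'. */
    MyReentrancyGuard *guard = bh->reentrancy_guard;

    if (guard) {
        guard->engaged_in_io = true;
    }

    bh->cb(bh->opaque);              /* must not touch 'bh' after this */

    if (guard) {
        guard->engaged_in_io = false;
    }
}
```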
Buglink: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=58513
Fixes: 9c86c97f12 ("async: Add an optional reentrancy guard to the BH API")
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Message-Id: <20230501141956.3444868-1-alxndr@bu.edu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* Fix compilation issues under Debian 10
* Update kernel headers to 6.3rc5
* Suppress GCC13 false positive in aio_bh_poll()
* Add new x86 feature bits
* Coverity fixes
* More steps towards removing qatomic_mb_set/read
* Fix reduced-phys-bits value for AMD SEV
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
cpus-common: stop using mb_set/mb_read
async: Suppress GCC13 false positive in aio_bh_poll()
tests: vhost-user-test: release mutex on protocol violation
Update linux headers to v6.3rc5
update-linux-headers.sh: Add missing kernel headers.
Fix libvhost-user.c compilation.
target/i386: Add support for PREFETCHIT0/1 in CPUID enumeration
target/i386: Add support for AVX-NE-CONVERT in CPUID enumeration
target/i386: Add support for AVX-VNNI-INT8 in CPUID enumeration
target/i386: Add support for AVX-IFMA in CPUID enumeration
target/i386: Add support for AMX-FP16 in CPUID enumeration
target/i386: Add support for CMPCCXADD in CPUID enumeration
i386/cpu: Update how the EBX register of CPUID 0x8000001F is set
i386/sev: Update checks and information related to reduced-phys-bits
qemu-options.hx: Update the reduced-phys-bits documentation
qapi, i386/sev: Change the reduced-phys-bits value from 5 to 1
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
GCC 13 reports an error:
../util/async.c: In function ‘aio_bh_poll’:
include/qemu/queue.h:303:22: error: storing the address of local variable ‘slice’ in ‘*ctx.bh_slice_list.sqh_last’ [-Werror=dangling-pointer=]
303 | (head)->sqh_last = &(elm)->field.sqe_next; \
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~
../util/async.c:169:5: note: in expansion of macro ‘QSIMPLEQ_INSERT_TAIL’
169 | QSIMPLEQ_INSERT_TAIL(&ctx->bh_slice_list, &slice, next);
| ^~~~~~~~~~~~~~~~~~~~
../util/async.c:161:17: note: ‘slice’ declared here
161 | BHListSlice slice;
| ^~~~~
../util/async.c:161:17: note: ‘ctx’ declared here
But the local variable 'slice' is removed from the global context list
in the following loop of the same routine. Add a pragma to silence GCC.
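The workaround has the usual shape of a targeted diagnostic suppression; a standalone illustration (not the exact QEMU change):
```
struct list { int *first; };
static struct list global_list;

void enqueue_local(void)
{
    int slice;

    /* Suppress the (false positive) diagnostic only around the statement
     * that stores the address of the local, the pattern GCC 13 flags in
     * aio_bh_poll(). */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdangling-pointer"
    global_list.first = &slice;
#pragma GCC diagnostic pop

    /* ... the entry is removed again before 'slice' goes out of scope ... */
    global_list.first = NULL;
}
```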
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Tested-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20230420202939.1982044-1-clg@kaod.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Devices can pass their MemoryReentrancyGuard (from their DeviceState)
when creating new BHs. The async API will then toggle the guard
before/after calling the BH callback. This prevents bh->mmio reentrancy
issues.
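Device-side usage then looks roughly like this (a fragment with assumed names; the guarded helper and the DeviceState field name are assumptions based on the BH API change referenced elsewhere in this log):
```
/* Fragment: create a BH whose callback is bracketed by the device's guard.
 * qemu_bh_new_guarded() and mem_reentrancy_guard are assumed names. */
s->bh = qemu_bh_new_guarded(my_bh_cb, s, &DEVICE(s)->mem_reentrancy_guard);
```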
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Message-Id: <20230427211013.2994127-3-alxndr@bu.edu>
[thuth: Fix "line over 90 characters" checkpatch.pl error]
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
thread_pool_submit_aio() is always called on a pool taken from
qemu_get_current_aio_context(), and that is the only intended
use: each pool runs only in the same thread that is submitting
work to it, it can't run anywhere else.
Therefore simplify the thread_pool_submit* API and remove the
ThreadPool function parameter.
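The before/after shape of a submission, sketched (argument names are illustrative and the old signature is recalled from memory):
```
/* Before the change (sketch):
 *     ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 *     thread_pool_submit_aio(pool, worker_fn, arg, completion_cb, opaque);
 *
 * After: the pool is implicitly the current thread's, so the parameter and
 * the explicit lookup disappear. */
thread_pool_submit_aio(worker_fn, arg, completion_cb, opaque);
```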
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230203131731.851116-5-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Use qemu_get_current_aio_context() where possible, since we always
submit work to the current thread anyway.
We want to also be sure that the thread submitting the work is
the same as the one processing the pool, to avoid adding
synchronization to the pool list.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230203131731.851116-4-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This new helper fetches the file system type for an fd. Only Linux is
implemented so far. Currently only tmpfs and hugetlbfs are defined,
but the list can grow as needed.
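The underlying mechanism can be sketched standalone like this (Linux-only; the helper's actual name and enum are not reproduced):
```
#include <stdio.h>
#include <sys/vfs.h>
#include <linux/magic.h>

/* fstatfs() reports the filesystem magic for an open fd, which is then
 * compared against the known constants. */
static const char *fd_fs_name(int fd)
{
    struct statfs fs;

    if (fstatfs(fd, &fs) != 0) {
        return "unknown";
    }
    switch (fs.f_type) {
    case TMPFS_MAGIC:
        return "tmpfs";
    case HUGETLBFS_MAGIC:
        return "hugetlbfs";
    default:
        return "other";
    }
}

int main(void)
{
    printf("stdin lives on: %s\n", fd_fs_name(0));
    return 0;
}
```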
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Fix use-after-free errors in the code path that called error_handle(). A
call to error_handle() will now either free the passed Error 'err' or
assign it to '*errp' if '*errp' is currently NULL. This ensures that 'err'
either has been freed or is assigned to '*errp' when this function returns.
Adjust the two callers of this function to not assign the 'err' to '*errp'
themselves, since this is now handled by error_handle().
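The resulting ownership rule, sketched (simplified; the real error_handle() also deals with the special &error_abort/&error_fatal/&error_warn destinations):
```
#include "qapi/error.h"

/* Simplified shape of the rule: the function now always consumes 'err',
 * either by handing it to the caller or by freeing it. */
static void handle_error_sketch(Error **errp, Error *err)
{
    if (errp && !*errp) {
        *errp = err;        /* caller takes ownership */
    } else {
        error_free(err);    /* nobody wants it: free it here */
    }
}
```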
Fixes: commit 3ffef1a55c ("error: add global &error_warn destination")
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20230406154347.4100700-1-stefanb@linux.ibm.com
qemu-user can hang in a multi-threaded fork. One common
reason is that when creating a TB, between fork and exec
we manipulate a GTree whose memory allocator (GSlice) is
not fork-safe.
Although POSIX does not mandate it, the system's allocator
(e.g. tcmalloc, libc malloc) is probably fork-safe.
Fix some of these hangs by using QTree, which uses the system's
allocator regardless of the Glib version that we used at
configuration time.
Tested with the test program in the original bug report, i.e.:
```
#include <cstdlib>
#include <sys/wait.h>
#include <unistd.h>
#include <thread>

void garble() {
    int pid = fork();
    if (pid == 0) {
        exit(0);
    } else {
        int wstatus;
        waitpid(pid, &wstatus, 0);
    }
}

void supragarble(unsigned depth) {
    if (depth == 0)
        return;
    std::thread a(supragarble, depth-1);
    std::thread b(supragarble, depth-1);
    garble();
    a.join();
    b.join();
}

int main() {
    supragarble(10);
}
```
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/285
Reported-by: Valentin David <me@valentindavid.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Emilio Cota <cota@braap.org>
Message-Id: <20230205163758.416992-3-cota@braap.org>
[rth: Add QEMU_DISABLE_CFI for all callback using functions.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If another thread calls aio_set_fd_handler() while the IOThread event
loop is upgrading from ppoll(2) to epoll(7) then we might miss new
AioHandlers. The epollfd will not monitor the new AioHandler's fd,
resulting in hangs.
Take the AioHandler list lock while upgrading to epoll. This prevents
AioHandlers from changing while epoll is being set up. If we cannot lock
because we're in a nested event loop, then don't upgrade to epoll (it
will happen next time we're not in a nested call).
The downside to taking the lock is that the aio_set_fd_handler() thread
has to wait until the epoll upgrade is finished, which involves many
epoll_ctl(2) system calls. However, this scenario is rare and I couldn't
think of another solution that is still simple.
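The approach can be illustrated generically (plain pthreads, not the actual fdmon-epoll code, which uses the AioContext's own handler list lock):
```
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t handler_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Only upgrade while holding the handler list lock.  If the lock cannot be
 * taken because we are inside a nested event loop, skip the upgrade; it will
 * be retried on a later, non-nested iteration. */
static bool try_upgrade_to_epoll(void (*do_upgrade)(void))
{
    if (pthread_mutex_trylock(&handler_list_lock) != 0) {
        return false;
    }
    do_upgrade();            /* register every known fd with the epollfd */
    pthread_mutex_unlock(&handler_list_lock);
    return true;
}
```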
Reported-by: Qing Wang <qinwang@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2090998
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Fam Zheng <fam@euphon.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230323144859.1338495-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
vhost_user_server_stop() uses AIO_WAIT_WHILE(). AIO_WAIT_WHILE()
requires that AioContext is only acquired once.
Since blk_exp_request_shutdown() already acquires the AioContext it
shouldn't be acquired again in vhost_user_server_stop().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230323145853.1345527-1-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Close the given file descriptor, but return the underlying SOCKET.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230320133643.1618437-2-marcandre.lureau@redhat.com>
Bring the files in line with the QEMU coding style, with spaces
for indentation.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/378
Signed-off-by: Yeqi Fu <fufuyqqqqqq@gmail.com>
Message-Id: <20230315032649.57568-1-fufuyqqqqqq@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Manually implement a socketpair() function, using UNIX sockets and
simple peer credential checking.
QEMU doesn't make much use of socketpair(), besides vhost-user which is not
available for win32 at this point. However, I intend to use it for
writing some new portable tests.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230306122751.2355515-5-marcandre.lureau@redhat.com>
Use a close() wrapper instead, so that we don't need to worry about
closesocket() vs close() anymore, let's hope.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-17-marcandre.lureau@redhat.com>
Until now, a win32 SOCKET handle is often cast to an int file
descriptor, as this is what other OSes use for sockets. When necessary,
QEMU eventually queries whether it's a socket with the help of
fd_is_socket(). However, there is no guarantee that the fd and SOCKET
spaces do not conflict. Such a conflict would have surprising
consequences; we shouldn't mix them.
Also, it is often forgotten that SOCKET must be closed with
closesocket(), and not close().
Instead, let's make the win32 socket wrapper functions return and take a
file descriptor, and let util/ wrappers do the fd/SOCKET conversion as
necessary. A bit of adaptation is necessary in io/ as well.
Unfortunately, we can't drop closesocket() usage: despite
_open_osfhandle() documentation claiming transfer of ownership, testing
shows bad behaviour if you forget to call closesocket().
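The conversion helpers have roughly this shape (illustrative only; the names in util/ differ):
```
#ifdef _WIN32
#include <winsock2.h>
#include <io.h>
#include <fcntl.h>

/* Wrap a SOCKET into a CRT file descriptor so the rest of QEMU can deal in
 * plain fds, and recover the SOCKET when a Winsock call is needed. */
static int socket_to_fd(SOCKET s)
{
    return _open_osfhandle((intptr_t)s, _O_BINARY);
}

static SOCKET fd_to_socket(int fd)
{
    return (SOCKET)_get_osfhandle(fd);
}

/* As noted above, closesocket() must still be called on the SOCKET when
 * closing, even though the fd nominally owns the handle. */
#endif
```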
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-15-marcandre.lureau@redhat.com>
Open-code the socket registration where it's needed, to avoid an
artificial or unclear generic interface.
Furthermore, the following patches are going to make socket handling use
FD-only inside QEMU, but we need to handle win32 SOCKET from libslirp.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-12-marcandre.lureau@redhat.com>
Let's check if the argument is actually a SOCKET, else report an error
and return.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-10-marcandre.lureau@redhat.com>
A more explicit version of qemu_socket_select() with no events.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-8-marcandre.lureau@redhat.com>
This is a wrapper for WSAEventSelect, with Error handling. By default,
it will produce a warning, so callers don't have to be modified
now, and yet we can spot potential misuse.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-7-marcandre.lureau@redhat.com>
This can help with debugging issues or with development when error
handling is introduced.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-Id: <20230221124802.4103554-6-marcandre.lureau@redhat.com>
Fortunately, qemu_fork() is no longer used since commit
a95570e3e4 ("io/command: use glib GSpawn, instead of open-coding
fork/exec"). (GSpawn uses posix_spawn() whenever possible instead)
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20230221124802.4103554-2-marcandre.lureau@redhat.com>