/*
 * QEMU System Emulator
 *
 * Copyright (c) 2003-2008 Fabrice Bellard
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/cutils.h"
#include "qemu/timer.h"
#include "sysemu/cpu-timers.h"
#include "sysemu/replay.h"
#include "qemu/main-loop.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "qemu/error-report.h"
#include "qemu/queue.h"
#include "qemu/compiler.h"
#include "qom/object.h"

#ifndef _WIN32
#include <sys/wait.h>
#endif

#ifndef _WIN32

/* If we have signalfd, we mask out the signals we want to handle and then
 * use signalfd to listen for them. We rely on whatever the current signal
 * handler is to dispatch the signals when we receive them.
 */
/*
 * Disable CFI checks.
 * We are going to call a signal handler directly. Such a handler may or may
 * not have been defined in our binary, so there's no guarantee that the
 * pointer used to set the handler is a cfi-valid pointer. Since the handlers
 * are stored in kernel memory, changing the handler to an attacker-defined
 * function requires being able to call a sigaction() syscall,
 * which is not as easy as overwriting a pointer in memory.
 */
QEMU_DISABLE_CFI
static void sigfd_handler(void *opaque)
{
    int fd = (intptr_t)opaque;
    struct qemu_signalfd_siginfo info;
    struct sigaction action;
    ssize_t len;

    while (1) {
        do {
            len = read(fd, &info, sizeof(info));
        } while (len == -1 && errno == EINTR);

        if (len == -1 && errno == EAGAIN) {
            break;
        }

        if (len != sizeof(info)) {
            error_report("read from sigfd returned %zd: %s", len,
                         g_strerror(errno));
            return;
        }

        sigaction(info.ssi_signo, NULL, &action);
        if ((action.sa_flags & SA_SIGINFO) && action.sa_sigaction) {
            sigaction_invoke(&action, &info);
        } else if (action.sa_handler) {
            action.sa_handler(info.ssi_signo);
        }
    }
}

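/*
 * Block the signals we want handled synchronously in this thread, create
 * a signalfd for them, and drain it from the main loop via
 * sigfd_handler().  SIG_IPI stays blocked but is excluded from the
 * signalfd set so that the cpu thread can still receive it.
 */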
static int qemu_signal_init(Error **errp)
{
    int sigfd;
    sigset_t set;

    /*
     * SIG_IPI must be blocked in the main thread and must not be caught
     * by sigwait() in the signal thread. Otherwise, the cpu thread will
     * not catch it reliably.
     */
    sigemptyset(&set);
    sigaddset(&set, SIG_IPI);
    sigaddset(&set, SIGIO);
    sigaddset(&set, SIGALRM);
    sigaddset(&set, SIGBUS);
    /* SIGINT cannot be handled via signalfd, so that ^C can be used
     * to interrupt QEMU when it is being run under gdb. SIGHUP and
     * SIGTERM are also handled asynchronously, even though it is not
     * strictly necessary, because they use the same handler as SIGINT.
     */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    sigdelset(&set, SIG_IPI);
    sigfd = qemu_signalfd(&set);
    if (sigfd == -1) {
        error_setg_errno(errp, errno, "failed to create signalfd");
        return -errno;
    }

    g_unix_set_fd_nonblocking(sigfd, true, NULL);

    qemu_set_fd_handler(sigfd, sigfd_handler, NULL, (void *)(intptr_t)sigfd);

    return 0;
}

#else /* _WIN32 */

static int qemu_signal_init(Error **errp)
{
    return 0;
}
#endif

static AioContext *qemu_aio_context;
static QEMUBH *qemu_notify_bh;

static void notify_event_cb(void *opaque)
{
    /* No need to do anything; this bottom half is only used to
     * kick the kernel out of ppoll/poll/WaitForMultipleObjects.
     */
}

AioContext *qemu_get_aio_context(void)
{
    return qemu_aio_context;
}

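/*
 * Force the main loop out of its poll/ppoll/WaitForMultipleObjects wait
 * by scheduling the no-op notify bottom half on the main AioContext.
 */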
void qemu_notify_event(void)
{
    if (!qemu_aio_context) {
        return;
    }
    qemu_bh_schedule(qemu_notify_bh);
}

static GArray *gpollfds;

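/*
 * Set up the process-wide main loop: clocks, the POSIX signalfd
 * machinery, the main AioContext and its notify bottom half, and the
 * GLib sources that feed the AioContext and the iohandler context into
 * the default GMainContext.
 */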
int qemu_init_main_loop(Error **errp)
{
    int ret;
    GSource *src;

    init_clocks(qemu_timer_notify_cb);

    ret = qemu_signal_init(errp);
    if (ret) {
        return ret;
    }

    qemu_aio_context = aio_context_new(errp);
    if (!qemu_aio_context) {
        return -EMFILE;
    }
    qemu_set_current_aio_context(qemu_aio_context);
    qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
    gpollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
    src = aio_get_g_source(qemu_aio_context);
    g_source_set_name(src, "aio-context");
    g_source_attach(src, NULL);
    g_source_unref(src);
    src = iohandler_get_g_source();
    g_source_set_name(src, "io-handler");
    g_source_attach(src, NULL);
    g_source_unref(src);
    return 0;
}

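/*
 * Propagate the user-visible event-loop-base properties (aio-max-batch
 * and the thread pool bounds) to the main AioContext.
 */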
static void main_loop_update_params(EventLoopBase *base, Error **errp)
{
    ERRP_GUARD();

    if (!qemu_aio_context) {
        error_setg(errp, "qemu aio context not ready");
        return;
    }

    aio_context_set_aio_params(qemu_aio_context, base->aio_max_batch, errp);
    if (*errp) {
        return;
    }

    aio_context_set_thread_pool_params(qemu_aio_context, base->thread_pool_min,
                                       base->thread_pool_max, errp);
}

MainLoop *mloop;
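
/*
 * QOM glue for the TYPE_MAIN_LOOP object, an event-loop-base subclass
 * whose properties are applied to qemu_aio_context by
 * main_loop_update_params().
 */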
static void main_loop_init(EventLoopBase *base, Error **errp)
{
    MainLoop *m = MAIN_LOOP(base);

    if (mloop) {
        error_setg(errp, "only one main-loop instance allowed");
        return;
    }

    main_loop_update_params(base, errp);

    mloop = m;
    return;
}

static bool main_loop_can_be_deleted(EventLoopBase *base)
{
    return false;
}

static void main_loop_class_init(ObjectClass *oc, void *class_data)
{
    EventLoopBaseClass *bc = EVENT_LOOP_BASE_CLASS(oc);

    bc->init = main_loop_init;
    bc->update_params = main_loop_update_params;
    bc->can_be_deleted = main_loop_can_be_deleted;
}

static const TypeInfo main_loop_info = {
    .name = TYPE_MAIN_LOOP,
    .parent = TYPE_EVENT_LOOP_BASE,
    .class_init = main_loop_class_init,
    .instance_size = sizeof(MainLoop),
};

static void main_loop_register_types(void)
{
    type_register_static(&main_loop_info);
}

type_init(main_loop_register_types)

static int max_priority;

#ifndef _WIN32
static int glib_pollfds_idx;
static int glib_n_poll_fds;

void qemu_fd_register(int fd)
{
}

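/*
 * Merge the default GMainContext's poll fds into gpollfds and fold its
 * desired timeout into *cur_timeout.  g_main_context_query() is retried
 * until the array passed in is large enough to hold every fd.
 */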
static void glib_pollfds_fill(int64_t *cur_timeout)
{
    GMainContext *context = g_main_context_default();
    int timeout = 0;
    int64_t timeout_ns;
    int n;

    g_main_context_prepare(context, &max_priority);

    glib_pollfds_idx = gpollfds->len;
    n = glib_n_poll_fds;
    do {
        GPollFD *pfds;
        glib_n_poll_fds = n;
        g_array_set_size(gpollfds, glib_pollfds_idx + glib_n_poll_fds);
        pfds = &g_array_index(gpollfds, GPollFD, glib_pollfds_idx);
        n = g_main_context_query(context, max_priority, &timeout, pfds,
                                 glib_n_poll_fds);
    } while (n != glib_n_poll_fds);

    if (timeout < 0) {
        timeout_ns = -1;
    } else {
        timeout_ns = (int64_t)timeout * (int64_t)SCALE_MS;
    }

    *cur_timeout = qemu_soonest_timeout(timeout_ns, *cur_timeout);
}

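/*
 * Hand poll results back to GLib: check the fds we polled on behalf of
 * the default GMainContext and dispatch any sources that became ready.
 */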
static void glib_pollfds_poll(void)
{
    GMainContext *context = g_main_context_default();
    GPollFD *pfds = &g_array_index(gpollfds, GPollFD, glib_pollfds_idx);

    if (g_main_context_check(context, max_priority, pfds, glib_n_poll_fds)) {
        g_main_context_dispatch(context);
    }
}

#define MAX_MAIN_LOOP_SPIN (1000)

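/*
 * POSIX host wait: merge the GLib fds and timeout into gpollfds, drop
 * the iothread and replay locks around the blocking qemu_poll_ns(), then
 * dispatch ready GLib sources.  The default GMainContext is owned via
 * g_main_context_acquire() for the whole prepare/check/dispatch cycle,
 * as GLib requires.
 */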
static int os_host_main_loop_wait(int64_t timeout)
{
    GMainContext *context = g_main_context_default();
    int ret;

    g_main_context_acquire(context);

    glib_pollfds_fill(&timeout);

    qemu_mutex_unlock_iothread();
    replay_mutex_unlock();

    ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);

    replay_mutex_lock();
    qemu_mutex_lock_iothread();

    glib_pollfds_poll();

    g_main_context_release(context);

    return ret;
}
#else
/***********************************************************/
/* Polling handling */

typedef struct PollingEntry {
    PollingFunc *func;
    void *opaque;
    struct PollingEntry *next;
} PollingEntry;

static PollingEntry *first_polling_entry;

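/*
 * Polling callbacks are invoked on every iteration of the Win32 main
 * loop; if any callback reports work (non-zero), os_host_main_loop_wait()
 * returns immediately instead of blocking.
 */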
int qemu_add_polling_cb(PollingFunc *func, void *opaque)
{
    PollingEntry **ppe, *pe;
    pe = g_new0(PollingEntry, 1);
    pe->func = func;
    pe->opaque = opaque;
    for (ppe = &first_polling_entry; *ppe != NULL; ppe = &(*ppe)->next);
    *ppe = pe;
    return 0;
}

void qemu_del_polling_cb(PollingFunc *func, void *opaque)
{
    PollingEntry **ppe, *pe;
    for (ppe = &first_polling_entry; *ppe != NULL; ppe = &(*ppe)->next) {
        pe = *ppe;
        if (pe->func == func && pe->opaque == opaque) {
            *ppe = pe->next;
            g_free(pe);
            break;
        }
    }
}

/***********************************************************/
/* Wait objects support */
typedef struct WaitObjects {
    int num;
    int revents[MAXIMUM_WAIT_OBJECTS + 1];
    HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
    WaitObjectFunc *func[MAXIMUM_WAIT_OBJECTS + 1];
    void *opaque[MAXIMUM_WAIT_OBJECTS + 1];
} WaitObjects;

static WaitObjects wait_objects = {0};

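/*
 * Wait objects wrap Win32 HANDLEs into the main loop: each handle is
 * polled as a G_IO_IN GPollFD, and func(opaque) is called once the
 * handle becomes signaled.
 */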
int qemu_add_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque)
{
    WaitObjects *w = &wait_objects;
    if (w->num >= MAXIMUM_WAIT_OBJECTS) {
        return -1;
    }
    w->events[w->num] = handle;
    w->func[w->num] = func;
    w->opaque[w->num] = opaque;
    w->revents[w->num] = 0;
    w->num++;
    return 0;
}

void qemu_del_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque)
{
    int i, found;
    WaitObjects *w = &wait_objects;

    found = 0;
    for (i = 0; i < w->num; i++) {
        if (w->events[i] == handle) {
            found = 1;
        }
        if (found) {
            w->events[i] = w->events[i + 1];
            w->func[i] = w->func[i + 1];
            w->opaque[i] = w->opaque[i + 1];
            w->revents[i] = w->revents[i + 1];
        }
    }
    if (found) {
        w->num--;
    }
}

|
|
|
|
{
|
2010-05-24 19:27:14 +04:00
|
|
|
WSAEventSelect(fd, event_notifier_get_handle(&qemu_aio_context->notifier),
|
|
|
|
FD_READ | FD_ACCEPT | FD_CLOSE |
|
2012-03-20 13:49:19 +04:00
|
|
|
FD_CONNECT | FD_WRITE | FD_OOB);
|
|
|
|
}
|
|
|
|
|
2013-02-20 14:28:25 +04:00
|
|
|
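/*
 * Convert the G_IO_IN/G_IO_OUT/G_IO_PRI interest of each GPollFD into
 * select()-style fd_sets; returns the highest fd added, or -1 if none.
 */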
static int pollfds_fill(GArray *pollfds, fd_set *rfds, fd_set *wfds,
                        fd_set *xfds)
{
    int nfds = -1;
    int i;

    for (i = 0; i < pollfds->len; i++) {
        GPollFD *pfd = &g_array_index(pollfds, GPollFD, i);
        int fd = pfd->fd;
        int events = pfd->events;
        if (events & G_IO_IN) {
            FD_SET(fd, rfds);
            nfds = MAX(nfds, fd);
        }
        if (events & G_IO_OUT) {
            FD_SET(fd, wfds);
            nfds = MAX(nfds, fd);
        }
        if (events & G_IO_PRI) {
            FD_SET(fd, xfds);
            nfds = MAX(nfds, fd);
        }
    }
    return nfds;
}

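/*
 * Fold the select() results back into each GPollFD's revents, masked by
 * the events that entry actually asked for.
 */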
static void pollfds_poll(GArray *pollfds, int nfds, fd_set *rfds,
                         fd_set *wfds, fd_set *xfds)
{
    int i;

    for (i = 0; i < pollfds->len; i++) {
        GPollFD *pfd = &g_array_index(pollfds, GPollFD, i);
        int fd = pfd->fd;
        int revents = 0;

        if (FD_ISSET(fd, rfds)) {
            revents |= G_IO_IN;
        }
        if (FD_ISSET(fd, wfds)) {
            revents |= G_IO_OUT;
        }
        if (FD_ISSET(fd, xfds)) {
            revents |= G_IO_PRI;
        }
        pfd->revents = revents & pfd->events;
    }
}

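/*
 * Win32 host wait: run the registered polling callbacks, take a
 * zero-timeout select() pass over the socket fds, then block in
 * qemu_poll_ns() on the GLib fds plus the wait-object handles, with the
 * iothread and replay locks dropped around the wait.
 */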
static int os_host_main_loop_wait(int64_t timeout)
{
    GMainContext *context = g_main_context_default();
    GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
    int select_ret = 0;
    int g_poll_ret, ret, i, n_poll_fds;
    PollingEntry *pe;
    WaitObjects *w = &wait_objects;
    gint poll_timeout;
    int64_t poll_timeout_ns;
    static struct timeval tv0;
    fd_set rfds, wfds, xfds;
    int nfds;

    g_main_context_acquire(context);

    /* XXX: need to suppress polling by better using win32 events */
    ret = 0;
    for (pe = first_polling_entry; pe != NULL; pe = pe->next) {
        ret |= pe->func(pe->opaque);
    }
    if (ret != 0) {
        g_main_context_release(context);
        return ret;
    }

    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    FD_ZERO(&xfds);
    nfds = pollfds_fill(gpollfds, &rfds, &wfds, &xfds);
    if (nfds >= 0) {
        select_ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv0);
        if (select_ret != 0) {
            timeout = 0;
        }
        if (select_ret > 0) {
            pollfds_poll(gpollfds, nfds, &rfds, &wfds, &xfds);
        }
    }

    g_main_context_prepare(context, &max_priority);
    n_poll_fds = g_main_context_query(context, max_priority, &poll_timeout,
                                      poll_fds, ARRAY_SIZE(poll_fds));
    g_assert(n_poll_fds + w->num <= ARRAY_SIZE(poll_fds));

    for (i = 0; i < w->num; i++) {
        poll_fds[n_poll_fds + i].fd = (DWORD_PTR)w->events[i];
        poll_fds[n_poll_fds + i].events = G_IO_IN;
    }

    if (poll_timeout < 0) {
        poll_timeout_ns = -1;
    } else {
        poll_timeout_ns = (int64_t)poll_timeout * (int64_t)SCALE_MS;
    }

    poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);

    qemu_mutex_unlock_iothread();

    replay_mutex_unlock();

    g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);

    replay_mutex_lock();

    qemu_mutex_lock_iothread();
    if (g_poll_ret > 0) {
        for (i = 0; i < w->num; i++) {
            w->revents[i] = poll_fds[n_poll_fds + i].revents;
        }
        for (i = 0; i < w->num; i++) {
            if (w->revents[i] && w->func[i]) {
                w->func[i](w->opaque[i]);
            }
        }
    }

    if (g_main_context_check(context, max_priority, poll_fds, n_poll_fds)) {
        g_main_context_dispatch(context);
    }

    g_main_context_release(context);

    return select_ret || g_poll_ret;
}
#endif

static NotifierList main_loop_poll_notifiers =
    NOTIFIER_LIST_INITIALIZER(main_loop_poll_notifiers);

void main_loop_poll_add_notifier(Notifier *notify)
{
    notifier_list_add(&main_loop_poll_notifiers, notify);
}

void main_loop_poll_remove_notifier(Notifier *notify)
{
    notifier_remove(notify);
}

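/*
 * Run one iteration of the main loop: let the poll notifiers add their
 * fds and shrink the timeout (MAIN_LOOP_POLL_FILL), wait in
 * os_host_main_loop_wait(), report the outcome back to the notifiers
 * (MAIN_LOOP_POLL_OK/ERR), and finally run expired timers.
 */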
void main_loop_wait(int nonblocking)
{
    MainLoopPoll mlpoll = {
        .state = MAIN_LOOP_POLL_FILL,
        .timeout = UINT32_MAX,
        .pollfds = gpollfds,
    };
    int ret;
    int64_t timeout_ns;

    if (nonblocking) {
        mlpoll.timeout = 0;
    }

    /* poll any events */
    g_array_set_size(gpollfds, 0); /* reset for new iteration */
    /* XXX: separate device handlers from system ones */
    notifier_list_notify(&main_loop_poll_notifiers, &mlpoll);

    if (mlpoll.timeout == UINT32_MAX) {
        timeout_ns = -1;
    } else {
        timeout_ns = (uint64_t)mlpoll.timeout * (int64_t)(SCALE_MS);
    }

    timeout_ns = qemu_soonest_timeout(timeout_ns,
                                      timerlistgroup_deadline_ns(
                                          &main_loop_tlg));

    ret = os_host_main_loop_wait(timeout_ns);
    mlpoll.state = ret < 0 ? MAIN_LOOP_POLL_ERR : MAIN_LOOP_POLL_OK;
    notifier_list_notify(&main_loop_poll_notifiers, &mlpoll);

    if (icount_enabled()) {
        /*
         * CPU thread can infinitely wait for event after
         * missing the warp
         */
        icount_start_warp_timer();
    }
    qemu_clock_run_all_timers();
}

/* Functions to operate on the main QEMU AioContext. */

QEMUBH *qemu_bh_new_full(QEMUBHFunc *cb, void *opaque, const char *name)
{
    return aio_bh_new_full(qemu_aio_context, cb, opaque, name);
}

/*
 * Functions to operate on the I/O handler AioContext.
 * This context runs on top of the main loop. We can't reuse qemu_aio_context
 * because iohandlers mustn't be polled by aio_poll(qemu_aio_context).
 */
static AioContext *iohandler_ctx;

static void iohandler_init(void)
{
    if (!iohandler_ctx) {
        iohandler_ctx = aio_context_new(&error_abort);
    }
}

AioContext *iohandler_get_aio_context(void)
{
    iohandler_init();
    return iohandler_ctx;
}

GSource *iohandler_get_g_source(void)
{
    iohandler_init();
    return aio_get_g_source(iohandler_ctx);
}

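/*
 * Wire an fd's read/write handlers into the iohandler context (and thus
 * into the main loop, via the GSource attached in qemu_init_main_loop()).
 */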
void qemu_set_fd_handler(int fd,
                         IOHandler *fd_read,
                         IOHandler *fd_write,
                         void *opaque)
{
    iohandler_init();
    aio_set_fd_handler(iohandler_ctx, fd, false,
                       fd_read, fd_write, NULL, NULL, opaque);
}

void event_notifier_set_handler(EventNotifier *e,
                                EventNotifierHandler *handler)
{
    iohandler_init();
    aio_set_event_notifier(iohandler_ctx, e, false,
                           handler, NULL, NULL);
}