Derived from kvm-tool patch
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/74309
Ingo Molnar pointed out that sending the timer signal to the whole
process, just blocking it everywhere, is suboptimal with an increasing
number of threads. QEMU has been using this pattern so far as well.
Linux provides a (non-portable) way to restrict the signal to a single
thread: We can use SIGEV_THREAD_ID unless we are forced to emulate
signalfd via an additional thread. That case could theoretically be
optimized as well, but it doesn't look worth bothering.
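A minimal sketch of that SIGEV_THREAD_ID setup, assuming Linux with
_GNU_SOURCE and a glibc recent enough to spell the field
sigev_notify_thread_id (older libcs use _sigev_un._tid instead):

#define _GNU_SOURCE
#include <signal.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

int main(void)
{
    struct sigevent ev = { 0 };
    timer_t timer;

    ev.sigev_notify = SIGEV_THREAD_ID;       /* deliver to one thread only */
    ev.sigev_signo = SIGALRM;
    /* kernel thread id of the calling thread, not a pthread_t */
    ev.sigev_notify_thread_id = syscall(SYS_gettid);

    if (timer_create(CLOCK_MONOTONIC, &ev, &timer) < 0) {
        perror("timer_create");              /* fall back to SIGEV_SIGNAL here */
        return 1;
    }
    timer_delete(timer);
    return 0;
}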
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Qemu uses signalfd to figure out whether a signal occurred without the need
to actually receive the signal. Instead, it can read from the fd to receive
its news.
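For reference, a minimal Linux-only sketch of that pattern, without the
compat path; the signal must be blocked before the fd will report it:

#include <sys/signalfd.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sigset_t mask;
    struct signalfd_siginfo info;

    sigemptyset(&mask);
    sigaddset(&mask, SIGALRM);
    sigprocmask(SIG_BLOCK, &mask, NULL);     /* block normal delivery first */

    int fd = signalfd(-1, &mask, SFD_CLOEXEC);
    alarm(1);                                /* queue a SIGALRM for ourselves */

    if (read(fd, &info, sizeof(info)) == (ssize_t)sizeof(info)) {
        printf("read signal %u from the fd\n", info.ssi_signo);
    }
    close(fd);
    return 0;
}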
Now, we obviously don't always have signalfd around. Especially not on
non-Linux systems. So what we do there is create a new thread,
block that thread on all signals and simply call sigwait to wait for a
signal we're interested in to occur.
This all sounds great, but what we're really doing is:
sigset_t all;
sigfillset(&all);
sigprocmask(SIG_BLOCK, &all, NULL);
which - on Darwin - blocks all signals on the current _process_, not only
on the current thread. To block signals on the thread, we can use
pthread_sigmask().
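In other words, the blocking step becomes something along these lines
(function name is illustrative):

#include <pthread.h>
#include <signal.h>

/* block every signal, but only for the calling thread */
static void block_all_signals_in_this_thread(void)
{
    sigset_t all;

    sigfillset(&all);
    pthread_sigmask(SIG_BLOCK, &all, NULL);
}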
This patch does that, assuming that my above analysis is correct, and thus
renders Qemu usable on Darwin again.
Reported-by: Andreas Färber <andreas.faerber@web.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
CC: Jan Kiszka <jan.kiszka@siemens.com>
CC: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Port qemu-kvm's signalfd compat code.
commit 5a7fdd0abd7cd24dac205317a4195446ab8748b5
Author: Anthony Liguori <aliguori@us.ibm.com>
Date: Wed May 7 11:55:47 2008 -0500
Use signalfd() in io-thread
This patch reworks the IO thread to use signalfd() instead of sigtimedwait().
This will eliminate the need to use SIGIO everywhere.
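Roughly, the io-thread can then fold signal delivery into its ordinary fd
polling; a sketch with placeholder fds and dispatch, not the actual qemu code:

#include <poll.h>
#include <sys/signalfd.h>
#include <unistd.h>

/* wait for either a blocked signal or device I/O in one place */
static void io_wait_once(int sigfd, int device_fd)
{
    struct pollfd fds[2] = {
        { .fd = sigfd,     .events = POLLIN },
        { .fd = device_fd, .events = POLLIN },
    };

    if (poll(fds, 2, -1) <= 0) {
        return;
    }
    if (fds[0].revents & POLLIN) {
        struct signalfd_siginfo info;
        if (read(sigfd, &info, sizeof(info)) == (ssize_t)sizeof(info)) {
            /* dispatch info.ssi_signo to the registered handler */
        }
    }
    if (fds[1].revents & POLLIN) {
        /* handle device I/O */
    }
}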
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Replace signalfd with signal handler/pipe. With signalfd, there is no way to
interrupt the CPU execution loop when a file descriptor becomes readable. This
results in a large performance regression in sparc emulation during bootup.
This patch switches us to signal handler/pipe which was originally
suggested by Ian Jackson. The signal handler lets us interrupt the
CPU emulation loop while the write to a pipe lets us avoid the
select/signal race condition.
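This is the classic self-pipe trick; a sketch with illustrative names rather
than qemu's actual code:

#include <signal.h>
#include <unistd.h>
#include <fcntl.h>

static int sigpipe[2];        /* [0] polled by select(), [1] written by handler */

static void sig_handler(int signo)
{
    unsigned char b = (unsigned char)signo;

    /* write() is async-signal-safe; a signal arriving just before select()
     * still leaves a readable byte behind, so there is no lost-wakeup race */
    (void)write(sigpipe[1], &b, 1);
}

static int install_sigpipe_handler(int signo)
{
    struct sigaction sa = { .sa_handler = sig_handler };

    if (pipe(sigpipe) < 0) {
        return -1;
    }
    fcntl(sigpipe[1], F_SETFL, O_NONBLOCK);
    sigemptyset(&sa.sa_mask);
    return sigaction(signo, &sa, NULL);
}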
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5451 c046a42c-6fe2-441c-8c8c-71466251a162
Be more friendly when signalfd() fails, and also add configure checks to detect
that syscall(SYS_signalfd) actually works. malc pointed out that some installs
do not have /usr/include/linux headers that are in sync with the glibc headers,
so while SYS_signalfd is defined, it's #defined to __NR_signalfd, which is not
defined in the /usr/include/linux headers.
While this is a distro bug, it doesn't hurt to do a more thorough job in
detection.
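The sort of probe meant here is a small compile-and-run test along these
lines (a sketch, not the actual configure snippet); with out-of-sync headers
it already fails to compile, which the check also catches:

#include <sys/syscall.h>
#include <signal.h>
#include <unistd.h>

int main(void)
{
#if defined(SYS_signalfd)
    sigset_t mask;
    int fd;

    sigemptyset(&mask);
    /* the raw syscall takes the kernel's 8-byte sigset, not glibc's */
    fd = syscall(SYS_signalfd, -1, &mask, 8);
    if (fd < 0) {
        return 1;
    }
    close(fd);
    return 0;
#else
    return 1;
#endif
}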
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5334 c046a42c-6fe2-441c-8c8c-71466251a162
Spotted by malc.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5333 c046a42c-6fe2-441c-8c8c-71466251a162