Instead of lnc_mchash() using its own implementation, we can simply call
net_crc32_le() directly and apply the bit shift inline.
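A minimal sketch of the resulting helper (the exact code in hw/net/pcnet.c may differ slightly):

static inline uint32_t lnc_mchash(const uint8_t *ether_addr)
{
    /* Hash is the top 6 bits of the little-endian CRC over the MAC address. */
    return net_crc32_le(ether_addr, ETH_ALEN) >> 26;
}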
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
This provides a standard ethernet CRC32 little-endian implementation.
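For reference, a minimal bit-by-bit sketch of such a reflected (little-endian) Ethernet CRC32; the actual net_crc32_le() in net/net.c may differ in style:

#include <stdint.h>

#define POLYNOMIAL_LE 0xedb88320

uint32_t net_crc32_le(const uint8_t *p, int len)
{
    uint32_t crc = 0xffffffff;
    int i, j;

    for (i = 0; i < len; i++) {
        uint8_t b = *p++;
        for (j = 0; j < 8; j++) {
            if ((crc ^ b) & 1) {
                crc = (crc >> 1) ^ POLYNOMIAL_LE;
            } else {
                crc >>= 1;
            }
            b >>= 1;
        }
    }
    return crc;
}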
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Separate out the standard ethernet CRC32 calculation into a new net_crc32()
function, renaming the constant POLYNOMIAL to POLYNOMIAL_BE to make it clear
that this is a big-endian CRC32 calculation.
As part of the constant rename, remove the duplicate definition of POLYNOMIAL
from eepro100.c and use the new POLYNOMIAL_BE constant instead.
Once this is complete, remove the existing CRC32 implementation from
compute_mcast_idx() and call the new net_crc32() function in its place.
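A sketch of what the extracted helper and the simplified caller might look like, reusing the bit-by-bit loop that compute_mcast_idx() already contains (the real code in net/net.c may differ in detail):

#include <stdint.h>

#define POLYNOMIAL_BE 0x04c11db6   /* constant carried over from the existing code */

uint32_t net_crc32(const uint8_t *p, int len)
{
    uint32_t crc = 0xffffffff;
    int carry, i, j;
    uint8_t b;

    for (i = 0; i < len; i++) {
        b = *p++;
        for (j = 0; j < 8; j++) {
            carry = ((crc & 0x80000000L) ? 1 : 0) ^ (b & 0x01);
            crc <<= 1;
            b >>= 1;
            if (carry) {
                crc = (crc ^ POLYNOMIAL_BE) | carry;
            }
        }
    }
    return crc;
}

unsigned compute_mcast_idx(const uint8_t *ep)
{
    /* Same result as before: top 6 bits of the big-endian CRC. */
    return net_crc32(ep, 6) >> 26;
}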
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
The device is supposed to maintain two distinct contexts for transmit
offloads: one has parameters for both segmentation and checksum
offload, the other only for checksum offload. The guest driver can
send two context descriptors, one for each context (the TSE flag
specifies which). Then the guest can refer to one or the other context
in subsequent transmit data descriptors, depending on what offloads it
wants applied to each packet.
Currently the e1000 device stores just one context, and misinterprets
the TSE flags in the context and data descriptors. This is often okay:
Linux happens to send a fresh context descriptor before every data
descriptor, so forgetting the other context doesn't matter. Windows
does rely on separate contexts for TSO vs. non-TSO packets, but for
mostly-TCP traffic the two contexts have identical TCP-specific
offload parameters so confusing them doesn't matter.
One case where this confusion matters is when a Windows guest sets up
a TSO context for TCP and a non-TSO context for UDP, and then
transmits both TCP and UDP traffic in parallel. The e1000 device
sometimes ends up using TCP-specific parameters while doing checksum
offload on a UDP datagram: it writes the checksum to offset 16 (the
correct location for a TCP checksum), stomping on two bytes of UDP
data, and leaving the wrong value in the actual UDP checksum field at
offset 6. (Even worse, the host network stack may then recompute the
UDP checksum, "correcting" it to match the corrupt data before sending
it out a physical interface.)
Correct this by tracking the TSO context independently of the non-TSO
context, and selecting the appropriate context based on the TSE flag
in each transmit data descriptor.
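Conceptually the fix looks something like the following sketch; the type and field names here are illustrative stand-ins, not the actual e1000.c code:

#include <stdbool.h>

typedef struct e1000x_txd_props { int placeholder; } e1000x_txd_props; /* stand-in */

typedef struct TxContexts {
    e1000x_txd_props tso_props; /* from a context descriptor with TSE set */
    e1000x_txd_props props;     /* from a context descriptor with TSE clear */
} TxContexts;

/* A context descriptor updates only the context selected by its TSE flag,
 * so the other context is preserved... */
static void store_context(TxContexts *tx, const e1000x_txd_props *p, bool tse)
{
    if (tse) {
        tx->tso_props = *p;
    } else {
        tx->props = *p;
    }
}

/* ...and each transmit data descriptor picks the context to apply
 * based on its own TSE flag. */
static const e1000x_txd_props *pick_context(const TxContexts *tx, bool tse)
{
    return tse ? &tx->tso_props : &tx->props;
}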
Signed-off-by: Ed Swierk <eswierk@skyportsystems.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
sum_needed and cptse flags are received from the guest within each
transmit data descriptor. They are not part of the offload context;
instead, they determine how to apply a previously received context to
the packet being transmitted:
- If cptse is set, perform both segmentation and checksum offload
using the parameters in the TSO context; otherwise just do checksum
offload. (Currently the e1000 device incorrectly stores only one
context, which will be fixed in a subsequent patch.)
- Depending on the bits set in sum_needed, possibly perform L4
checksum offload and/or IP checksum offload, using the parameters in
the appropriate context.
Move these flags out of struct e1000x_txd_props, which is otherwise
dedicated to storing values from a context descriptor, and into the
per-packet TX struct.
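Roughly, the split looks like this (illustrative field names, not the exact QEMU structs):

#include <stdbool.h>

struct tx_pkt_state {
    /* Taken anew from every transmit data descriptor: */
    unsigned char sum_needed;  /* which checksums to insert for this packet */
    bool cptse;                /* segment this packet using the TSO context */
    /* ... other per-packet state ... */
};
/* struct e1000x_txd_props then holds only values latched from a context
 * descriptor (checksum offsets, MSS, header length, ...). */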
Signed-off-by: Ed Swierk <eswierk@skyportsystems.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
When compiled with anything other than the 'log' trace backend, we have:
error: implicit declaration of function 'qemu_log_mask'
error: 'LOG_UNIMP' undeclared (first use in this function)
This patch adds the missing include.
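Presumably the include in question is the logging header, since qemu_log_mask() and LOG_UNIMP are declared there:

#include "qemu/log.h"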
Fixes: 7299e1a411
("hw/i386/vmport: replace fprintf() by trace events or LOG_UNIMP")
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-id: 20171221211103.30311-1-laurent@vivier.eu
[PMM: fixed commit message description of when problem occurs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of creating a QIOChannelSocket directly for the chardev
server socket, use a QIONetListener. This provides the ability
to listen on multiple sockets at the same time, so enables
full support for IPv4/IPv6 dual stack.
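The pattern looks roughly like this sketch (error handling omitted; my_accept_cb is a hypothetical callback, and the exact signatures live in include/io/net-listener.h):

#include "io/net-listener.h"

static void listen_dual_stack(SocketAddress *saddr, void *opaque, Error **errp)
{
    QIONetListener *listener = qio_net_listener_new();

    /* One listening socket per resolved address gives IPv4/IPv6 dual stack. */
    qio_net_listener_open_sync(listener, saddr, errp);

    /* Accepted clients are handed to the callback as QIOChannelSocket objects. */
    qio_net_listener_set_client_func(listener, my_accept_cb, opaque, NULL);
}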
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20171218135417.28301-2-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of creating a QIOChannelSocket directly for the NBD
server socket, use a QIONetListener. This provides the ability
to listen on multiple sockets at the same time, so enables
full support for IPv4/IPv6 dual stack. This also means we can
honour multiple FDs received during socket activation.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20171218101643.20360-3-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of creating a QIOChannelSocket directly for the NBD
server socket, use a QIONetListener. This provides the ability
to listen on multiple sockets at the same time, so enables
full support for IPv4/IPv6 dual stack.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20171218101643.20360-2-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check the expected behaviour of qemu_chr_be_event() on a mux chardev.
For some reason, sending the event on the base chardev broadcasts to
all frontends, while sending it on the mux chardev itself should
trigger the event only on the currently focused chardev frontend.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20171103152824.21948-3-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Kirill noticed that on recent versions of QEMU he was not able to
trigger SysRq to invoke the debug capabilities of the Linux kernel. He
tracked it down to qemu_chr_be_event() ignoring CHR_EVENT_BREAK due to
s->be being NULL. The bug was introduced in 2.8, commit a4afa548fc
("char: move front end handlers in CharBackend"). Since that commit,
qemu_chr_be_event() has failed to deliver CHR_EVENT_BREAK because
qemu_chr_fe_init() does not set s->be in the mux case.
Let's fix this by teaching mux to send an event to the frontend with
the focus.
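A hypothetical sketch of the idea (names only loosely follow chardev/char-mux.c):

static void mux_chr_be_event(Chardev *chr, int event)
{
    MuxChardev *d = MUX_CHARDEV(chr);

    if (d->focus != -1) {
        /* Deliver the event (e.g. CHR_EVENT_BREAK) only to the frontend
         * that currently has the focus. */
        mux_chr_send_event(d, d->focus, event);
    }
}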
Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Fixes: a4afa548fc ("char: move front end handlers in CharBackend")
Message-Id: <20171103152824.21948-2-marcandre.lureau@redhat.com>
Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Due to issues in glibc's malloc/free mechanism for small chunks of
memory, if QEMU frequently allocates and frees small chunks, glibc does
not serve those allocations from its free lists but keeps requesting
memory from the OS, which makes the heap grow bigger and bigger.
This patch introduces a call to malloc_trim(), which frees heap memory
back to the OS when there is no RCU callback to process during the RCU
thread loop.
malloc_trim() can be enabled/disabled with --enable-malloc-trim/
--disable-malloc-trim on the QEMU configure command line. It is
enabled by default when building against glibc.
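The idea is roughly the following sketch (not the exact util/rcu.c hunk; "n_pending" is a hypothetical stand-in for the number of queued RCU callbacks):

#include <malloc.h>

static void maybe_trim_heap(int n_pending)
{
#if defined(CONFIG_MALLOC_TRIM)
    if (n_pending == 0) {
        /* Nothing left to do: give freed heap pages back to the kernel,
         * keeping a little slack so the next allocations stay cheap. */
        malloc_trim(4 * 1024 * 1024);
    }
#endif
}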
Below are test results from the smaps file.
(1)without patch
55f0783e1000-55f07992a000 rw-p 00000000 00:00 0 [heap]
Size: 21796 kB
Rss: 14260 kB
Pss: 14260 kB
(2)with patch
55cc5fadf000-55cc61008000 rw-p 00000000 00:00 0 [heap]
Size: 21668 kB
Rss: 6940 kB
Pss: 6940 kB
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1513775806-19779-1-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This assumes that the comment gives some justification;
"volatile sig_atomic_t" is also self-explanatory and usually
correct.
Discussed in:
'[Qemu-devel] [PATCH] dump-guest-memory.py: fix "You can't do that without a process to debug"'
Suggested-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20171215181810.4122-1-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now both classes (i8259, i8259-kvm) support this, so move it up into
the common class code.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171210063819.14892-6-peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's leverage the i8259 common code for kvm-i8259 too.
I think it's still possible for stats to be lost when the i8259 is in
the kernel and irqfd is used at the same time, e.g., by vfio or vhost
devices. However, that should be rare IMHO, since those devices should
mostly be using MSIs if they really want performance (that's why people
use vhost and device assignment), and no legacy INTx should be in use.
As long as the INTx users are emulated in QEMU, the stats will be
correct.
For "info pic", it should always be accurate, since we fetch the KVM
registers before dumping.
More importantly, it's just very simple to do this now - it's only 10+
LOC to gain this feature.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171210063819.14892-5-peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It was only for userspace i8259. Move it to general code so that
kvm-i8259 can also use it in the future.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171210063819.14892-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It's not really scary to just enable it forever. After all it's the
i8259, and it's not even the kernel one.
This lets us remove quite a few lines to make the code cleaner, and
"info irq" will always work for it.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171210063819.14892-3-peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
One thing to mention is that in pic_set_irq() I need to uncomment a few
lines in the macros to make sure IRQ value calculation is correct.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171210063819.14892-2-peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It's only printing a warning since QEMU v1.3.0, so nobody should use
this anymore today. Let's get rid of this now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1513619065-31722-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sense keys have nice #defines in scsi/constants.h, use them.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extract the common parts of scsi_sense_buf_to_errno, scsi_convert_sense
and scsi_target_send_command's REQUEST SENSE handling into two new
functions scsi_parse_sense_buf and scsi_build_sense_buf.
Fix a bug in scsi_target_send_command along the way; the length was
written in buf[10] rather than buf[7].
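For reference, a hedged sketch of what building a fixed-format sense buffer looks like, with the additional sense length in byte 7 (the real scsi_build_sense_buf() in scsi/utils.c may differ in detail):

#include <stdint.h>
#include <string.h>

typedef struct SCSISense { uint8_t key, asc, ascq; } SCSISense; /* mirrors QEMU's type */

static void build_fixed_sense(uint8_t *buf, size_t len, SCSISense sense)
{
    memset(buf, 0, len);
    buf[0] = 0x70;      /* current error, fixed format */
    buf[2] = sense.key;
    buf[7] = 10;        /* additional sense length: covers bytes 8..17 */
    buf[12] = sense.asc;
    buf[13] = sense.ascq;
}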
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fixes: b07fbce634 ("scsi-bus: correct responses for INQUIRY and REQUEST SENSE")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now that the memory system correctly handles writes to ROM for
guest CPUs that may generate exceptions for decode errors, we
can remove the workaround from the boston board.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1513187549-2435-3-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We set up the io_mem_rom special memory region using the
unassigned_mem_ops structure; this is then used when a guest tries to
write to ROM. This is incorrect, because the behaviour of unassigned
memory may be different from that of ROM for writes. In particular,
on some architectures writing to unassigned memory generates a guest
exception, whereas writing to ROM is generally ignored. Use a
special readonly_mem_ops for this purpose instead, so writes to
ROM are ignored for all guest CPUs.
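A sketch of the shape of the change (not the exact memory.c code):

#include "exec/memory.h"

static void readonly_mem_write(void *opaque, hwaddr addr,
                               uint64_t data, unsigned size)
{
    /* Writes to ROM are simply ignored, on every guest CPU. */
}

static const MemoryRegionOps readonly_mem_ops = {
    /* Reads of the ROM region go through the normal RAM path,
     * so only the write callback matters here. */
    .write = readonly_mem_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};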
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1513187549-2435-2-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We currently report an "iSCSI Failure" in iscsi_co_generic_cb if the task
hasn't completed with SCSI_STATUS_GOOD. However, we expect a failure in
some cases and handle it gracefully. This is the case for misaligned UNMAPs
and WRITESAME10/16 calls without UNMAP. In this case a failure in the
logs can be quite misleading.
While we are at it, improve the logging to reveal which operation failed
and at what LBA.
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1512733868-9009-3-git-send-email-pl@kamp.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We forgot to set the allocmap to invalid if an UNMAP call fails.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1512733868-9009-2-git-send-email-pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename the CPU address spaces so that their names won't be the same
when there is more than one.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171123092333.16085-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Normally we create an address space for the CPU and pass that address
space into the function. Let's just do it inside the function to unify
address space creation. It'll simplify my next patch, which renames
those address spaces.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171123092333.16085-3-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The moxiesim machine already defines a memory region for firmware,
but does not yet provide the possibility to load an image via "-bios".
This will be needed for the boot-serial tester, so let's add support
for "-bios" here now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1512031988-32490-6-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU only ships with a few firmware images, i.e. we can currently run
the boot-serial test only on a very limited set of machines. But writing
some characters to the default UART of a machine can often be done with
a few lines of assembly, so add the possibility for the boot-serial
tester to use its own mini-kernels or mini-firmware images. Such images
are written to a file which we can then load with the "-kernel" or
"-bios" parameter when launching QEMU.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1512031988-32490-3-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the guest continuously writes characters to the UART, we never leave
the inner while loop and thus never check whether we've reached the
timeout value. So if we fail to find the expected string in the UART
output, the test just hangs and never finishes. Use a counter to
regularly break out of the while loop to check the timeout.
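An illustrative sketch of the loop-break idea, not the code in tests/boot-serial-test.c; found() and read_byte() are hypothetical callbacks standing in for the test's string matching and UART draining:

#include <glib.h>

static void wait_for_serial_output(gboolean (*found)(void),
                                   void (*read_byte)(void),
                                   gint64 timeout_us)
{
    gint64 start = g_get_monotonic_time();
    int n = 0;

    while (!found()) {
        read_byte();
        /* Leave the tight read loop every so often so the timeout is
         * checked even if the guest writes characters continuously. */
        if (++n % 1024 == 0 &&
            g_get_monotonic_time() - start > timeout_us) {
            g_error("timeout waiting for expected UART output");
        }
    }
}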
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1512031988-32490-2-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In commit e3af7c788b we replaced direct calls to cpu_ld*_code() with
calls to the x86_ld*_code() wrappers, which incorporate an advance of
s->pc. Unfortunately we didn't notice that in one place the old code
was deliberately not incrementing s->pc:
@@ -4501,7 +4528,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
static const int pp_prefix[4] = {
0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ
};
- int vex3, vex2 = cpu_ldub_code(env, s->pc);
+ int vex3, vex2 = x86_ldub_code(env, s);
if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
/* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b,
This meant we were mishandling this set of instructions.
Remove the manual advance of s->pc for the "is VEX" case
(which is now done by x86_ldub_code()) and instead rewind
PC in the case where we decide that this isn't really VEX.
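In other words, the shape of the fix is roughly the following fragment (not the exact translate.c hunk):

    vex2 = x86_ldub_code(env, s);              /* advances s->pc */
    if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
        /* Not really a VEX prefix: rewind the byte we just consumed
         * so it gets decoded normally. */
        s->pc--;
        break;
    }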
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Reported-by: Alexandro Sanchez Bach <alexandro@phi.nz>
Message-Id: <1513163959-17545-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When listening on unix/tcp sockets there was optional code that would update
the original SocketAddress struct with the info about the actual address that
was listened on. Since the conversion of everything to QIOChannelSocket, no
remaining caller made use of this feature. It has been replaced with the ability
to query the listen address after the fact using the function
qio_channel_socket_get_local_address. This is a better model when the input
address can result in listening on multiple distinct sockets.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171212111219.32601-1-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Baum devices bigger than 84 do not actually exist, but the user's own
Braille device might be wider than 84 columns. Some guest drivers
would be upset by such sizes, so clamp the device size.
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Message-Id: <20171211001950.27843-1-samuel.thibault@ens-lyon.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These gcc warnings are fixed:
target/i386/translate.c:4461:12: warning:
variable 'prefixes' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
target/i386/translate.c:4466:9: warning:
variable 'rex_w' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
target/i386/translate.c:4466:16: warning:
variable 'rex_r' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
Tested with x86_64-w64-mingw32-gcc from Debian stretch.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Message-Id: <20171113064845.29142-1-sw@weilnetz.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The conditional memory barrier not only looks strange but actually is
wrong.
On s390x, I can reproduce interrupts via cpu_interrupt() not leading to
a proper kick out of emulation every now and then. cpu_interrupt() is
especially used for inter CPU communication via SIGP (esp. external
calls and emergency interrupts).
With this patch, I was not able to reproduce the problem (esp. no
stalls or hangs in the guest).
My setup is s390x MTTCG with 16 VCPUs on 8 CPU host, running make -j16.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171129191319.11483-1-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
pause_all_cpus() is sometimes called from a VCPU thread (e.g. s390x
during special reset). It cannot deal with multiple VCPUs per thread
(single threaded TCG) yet.
Booting an s390x guest with -smp 2 and single threaded TCG from disk
currently fails. The DIAG 308 will issue a pause_all_cpus() and wait
forever for the CPUs to actually stop. But it is waiting for itself.
So let's stop all VCPUs belonging to the current thread. Factor out
stopping of a VCPU.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171129191215.11323-1-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The value of HV_X64_MSR_SVERSION is initialized once at vcpu init, and
is reset to zero on vcpu reset, which is wrong.
It is supposed to be a constant, so drop the field from X86CPU, set the
msr with the constant value, and don't bother getting it.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20171122181418.14180-4-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Initially, the SINTx MSRs should be in the "masked" state. To ensure
that happens on *every* reset, move setting their values to
kvm_arch_vcpu_reset.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20171122181418.14180-3-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Hyper-V has a notion of partition-wide MSRs. Those MSRs are read and
written as usual on each VCPU, however the hypervisor maintains a single
global value for all VCPUs. Thus writing such an MSR from any single
VCPU affects the global value that is read by all other VCPUs.
This leads to an issue during VCPU hotplug: the zero-initialized values
of those MSRs get synced into KVM and override the global values that
have already been set by the guest.
This change makes the partition-wide MSRs be synchronized only from the
first VCPU.
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20171122181418.14180-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Intel IceLake CPU adds new CPU features: AVX512_VBMI2, GFNI, VAES,
VPCLMULQDQ, AVX512_VNNI and AVX512_BITALG. These new CPU features need
to be exposed to the guest VM.
The bit definitions:
CPUID.(EAX=7,ECX=0):ECX[bit 06] AVX512_VBMI2
CPUID.(EAX=7,ECX=0):ECX[bit 08] GFNI
CPUID.(EAX=7,ECX=0):ECX[bit 09] VAES
CPUID.(EAX=7,ECX=0):ECX[bit 10] VPCLMULQDQ
CPUID.(EAX=7,ECX=0):ECX[bit 11] AVX512_VNNI
CPUID.(EAX=7,ECX=0):ECX[bit 12] AVX512_BITALG
The release document is available at the link below:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1511335676-20797-1-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Just so that I notice those patches more easily.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20171205072220.885-1-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The scsi-block device doesn't use the DEFINE_BLOCK_PROPERTIES() macro,
so it didn't gain the share-rw option when it was added to all other
storage devices. This option is meaningful here, and needs to be used
when attaching shared storage to a guest.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20171205071928.30242-1-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This lets distros standardize on how QEMU should install systemd
services for qemu-ga and qemu-pr-helper.
The qemu-ga unit file comes from Fedora, but I checked that
Debian is using the same path for the virtio-serial port.
I would like to include this in 2.11, so that the qemu-pr-helper
socket can be standardized across distros. Note however that
the files are not installed. We can add a configure option
in 2.12 perhaps, but it's too late now; documenting the files
in the release notes should do.
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20171124164422.3960-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1) Return a generic sense if TEST UNIT READY does not provide one;
2) Fix two mistakes in copying from the spec.
Cc: qemu-stable@nongnu.org
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If we create a thread with QEMU_THREAD_DETACHED mode, QEMU may get a segfault with low probability.
The backtrace is:
#0 0x00007f46c60291d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007f46c602a8c8 in __GI_abort () at abort.c:90
#2 0x00000000008543c9 in PAT_abort ()
#3 0x000000000085140d in patchIllInsHandler ()
#4 <signal handler called>
#5 pthread_detach (th=139933037614848) at pthread_detach.c:50
#6 0x0000000000829759 in qemu_thread_create (thread=thread@entry=0x7ffdaa8205e0, name=name@entry=0x94d94a "io-task-worker", start_routine=start_routine@entry=0x7eb9a0 <qio_task_thread_worker>,
arg=arg@entry=0x3f5cf70, mode=mode@entry=1) at util/qemu_thread_posix.c:512
#7 0x00000000007ebc96 in qio_task_run_in_thread (task=0x31db2c0, worker=worker@entry=0x7e7e40 <qio_channel_socket_connect_worker>, opaque=0xcd23380, destroy=0x7f1180 <qapi_free_SocketAddress>)
at io/task.c:141
#8 0x00000000007e7f33 in qio_channel_socket_connect_async (ioc=ioc@entry=0x626c0b0, addr=<optimized out>, callback=callback@entry=0x55e080 <qemu_chr_socket_connected>, opaque=opaque@entry=0x42862c0,
destroy=destroy@entry=0x0) at io/channel_socket.c:194
#9 0x000000000055bdd1 in socket_reconnect_timeout (opaque=0x42862c0) at qemu_char.c:4744
#10 0x00007f46c72483b3 in g_timeout_dispatch () from /usr/lib64/libglib-2.0.so.0
#11 0x00007f46c724799a in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#12 0x000000000076c646 in glib_pollfds_poll () at main_loop.c:228
#13 0x000000000076c6eb in os_host_main_loop_wait (timeout=348000000) at main_loop.c:273
#14 0x000000000076c815 in main_loop_wait (nonblocking=nonblocking@entry=0) at main_loop.c:521
#15 0x000000000056a511 in main_loop () at vl.c:2076
#16 0x0000000000420705 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4940
The cause of this problem is a glibc bug; for more information, see
https://sourceware.org/bugzilla/show_bug.cgi?id=19951.
The solution for this bug is to use pthread_attr_setdetachstate.
There is a similar issue with pthread_setname_np, which is moved
from the creating thread to the created thread.
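A simplified sketch of the approach (the real util/qemu-thread-posix.c code differs in detail):

#include <pthread.h>

static int create_thread_maybe_detached(pthread_t *tid, int detached,
                                        void *(*start_routine)(void *),
                                        void *arg)
{
    pthread_attr_t attr;
    int err;

    pthread_attr_init(&attr);
    if (detached) {
        /* Create the thread already detached, instead of calling
         * pthread_detach() on an id that may already be invalid. */
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    }
    err = pthread_create(tid, &attr, start_routine, arg);
    pthread_attr_destroy(&attr);
    return err;
}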
Signed-off-by: linzhecheng <linzhecheng@huawei.com>
Message-Id: <20171128044656.10592-1-linzhecheng@huawei.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
[Simplify the code by removing qemu_thread_set_name, and free the arguments
before invoking the start routine. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Recent glibc added memfd_create in sys/mman.h. This conflicts with
the definition in util/memfd.c:
/builddir/build/BUILD/qemu-2.11.0-rc1/util/memfd.c:40:12: error: static declaration of memfd_create follows non-static declaration
Fix the configure test, and remove the sys/memfd.h inclusion since the
file actually does not exist---it is a typo in the memfd_create(2) man
page.
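A sketch of what the fixed configure probe might look like (only <sys/mman.h> is needed, since <sys/memfd.h> does not exist):

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
    return memfd_create("foo", 0);
}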
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>