Plain MAP_FIXED has the undesirable behaviour of splatting existing
maps, so we don't actually achieve what we want when looking for gaps.
We should be using MAP_FIXED_NOREPLACE. As this isn't always available
we need to potentially check the returned address to see if the kernel
gave us what we asked for.
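A minimal sketch of the approach, assuming <sys/mman.h> and local
want/size values (illustrative only, not the exact elfload.c code):

    #ifndef MAP_FIXED_NOREPLACE
    #define MAP_FIXED_NOREPLACE 0
    #endif

    void *probe = mmap(want, size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                       -1, 0);
    if (probe != MAP_FAILED && probe != want) {
        /* an older kernel ignored the flag and picked its own address */
        munmap(probe, size);
        probe = MAP_FAILED;
    }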
Fixes: ad592e37df ("linux-user: provide fallback pgd_find_hole for bare chroots")
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200724064509.331-9-alex.bennee@linaro.org>
Given we assert that the address we get back matches what we asked
for, we should also make that explicit in the mmap flags. Otherwise we
see failures in the GitLab environment for some currently unknown but
allowable reason. We use MAP_FIXED_NOREPLACE if we can, so we don't
just clobber an existing mapping. Also include the strerror string for
a bit more info on failure.
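The failure path then becomes more informative, roughly (a sketch;
test_addr/test_size are illustrative names and flags is assumed to
carry MAP_FIXED_NOREPLACE when available):

    void *addr = mmap(test_addr, test_size, PROT_NONE, flags, -1, 0);
    if (addr == MAP_FAILED || addr != test_addr) {
        fprintf(stderr, "%s: %p mmap failed: %s\n", __func__,
                test_addr, strerror(errno));
        abort();
    }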
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200701135652.1366-34-alex.bennee@linaro.org>
We rely on the pointer wrapping around when accessing the high address
of the COMMPAGE so it lands somewhere reasonable. However on 32 bit
hosts we cannot afford to simply map the entire 4 GiB address range.
The old mmap trial-and-error code handled this by just checking we
could map both guest_base and the computed COMMPAGE address.
We can't just manipulate loadaddr to get what we want so we introduce
an offset which pgb_find_hole can apply when looking for a gap for
guest_base that ensures there is space left to map the COMMPAGE
afterwards.
This is arguably a little inefficient for the one 32 bit
value (kuser_helper_version) we need to keep there given all the
actual code entries are picked up during the translation phase.
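To make the constraint concrete (a sketch with assumed names, not the
literal patch):

    /* the guest sees the kuser helpers just below 4 GiB ... */
    uintptr_t commpage_guest = 0xffff0f00u & -(uintptr_t)qemu_host_page_size;
    /* ... so on a 32 bit host the computed host address wraps */
    uintptr_t commpage_host = guest_base + commpage_guest; /* mod 2^32 */

The offset handed to pgb_find_hole simply guarantees that the hole it
picks for guest_base leaves that wrapped page mappable.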
Fixes: ee94743034
Bug: https://bugs.launchpad.net/qemu/+bug/1880225
Cc: Bug 1880225 <1880225@bugs.launchpad.net>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20200605154929.26910-13-alex.bennee@linaro.org>
When running QEMU from within a chroot environment we may not have access
to /proc/self/maps. As there is no other "official" way to introspect
our memory map we need to fall back to the original technique of
repeatedly trying to mmap an address range until we find one that
works.
Fortunately it's not quite as ugly as the original code given we
already re-factored the complications of dealing with the
ARM_COMMPAGE. We do make an attempt to skip over brk() which is about
the only concrete piece of information we have about the address map
at this moment.
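A sketch of the probing loop (illustrative, not the exact fallback
code; mmap_min_addr, top, size and align are assumed locals and the
16MB brk slack is an assumption):

    /* don't sit directly on top of brk, leave it some room to grow */
    uintptr_t brk_end = ROUND_UP((uintptr_t)sbrk(0) + 16 * 1024 * 1024, align);

    for (uintptr_t base = mmap_min_addr; base < top; base += align) {
        if (base < brk_end) {
            base = brk_end;
        }
        void *p = mmap((void *)base, size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == (void *)base) {
            return base;            /* the kernel honoured our hint */
        }
        if (p != MAP_FAILED) {
            munmap(p, size);        /* it chose somewhere else, keep looking */
        }
    }
    return -1;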
Fixes: ee9474303
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200605154929.26910-12-alex.bennee@linaro.org>
Newer clangs rightly spot that you can never exceed the full address
space of 64 bit hosts with:
linux-user/elfload.c:2076:41: error: result of comparison 'unsigned
long' > 18446744073709551615 is always false
[-Werror,-Wtautological-type-limit-compare]
    if ((guest_hiaddr - guest_base) > ~(uintptr_t)0) {
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~
1 error generated.
So let's limit the check to 32 bit hosts only.
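i.e. guard the existing check so it only compiles when the host's
address space can actually be too small (a sketch; the error path is
illustrative):

    #if HOST_LONG_BITS < TARGET_ABI_BITS
        if ((guest_hiaddr - guest_base) > ~(uintptr_t)0) {
            error_report("%s: requires more virtual address space "
                         "than the host can provide", image_name);
            exit(EXIT_FAILURE);
        }
    #endif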
Fixes: ee94743034
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200525131823.715-8-thuth@redhat.com>
[thuth: Use HOST_LONG_BITS < TARGET_ABI_BITS instead of HOST_LONG_BITS == 32]
Signed-off-by: Thomas Huth <thuth@redhat.com>
First we ensure all guest space initialisation logic comes through
probe_guest_base once we understand the nature of the binary we are
loading. The convoluted init_guest_space routine is removed and
replaced with a number of pgb_* helpers which are called depending on
what requirements we have when loading the binary.
We first try to do what is requested by the host. Failing that we try
and satisfy the guest requested base address. If all those options
fail we fall back to finding a space in the memory map using our
recently written read_self_maps() helper.
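The resulting dispatch looks roughly like this (the pgb_* names follow
the commit's convention; the exact set and signatures are assumptions):

    void probe_guest_base(const char *image_name,
                          abi_ulong guest_loaddr, abi_ulong guest_hiaddr)
    {
        long align = MAX(SHMLBA, qemu_host_page_size);

        if (have_guest_base) {
            pgb_have_guest_base(image_name, guest_loaddr, guest_hiaddr, align);
        } else if (guest_loaddr) {
            pgb_static(image_name, guest_loaddr, guest_hiaddr, align);
        } else {
            pgb_dynamic(image_name, align);
        }
    }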
There are some additional complications we try and take into account
when looking for holes in the address space. We try not to go directly
after the system brk() space so there is space for a little growth. We
also don't want to have to use negative offsets which would result in
slightly less efficient code on x86 when it's unable to use the
segment offset register.
Fewer mind-bending gotos and hopefully clearer logic throughout.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200513175134.19619-5-alex.bennee@linaro.org>
Searching for memory space can cause problems, so let's extend the
CPU_LOG_PAGE output so you can watch init_guest_space fail to
allocate memory. A more involved fix is actually required to make this
function play nicely with the large guard pages the sanitiser likes to
use.
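For example (an illustrative call, not the exact lines added), output
like the following becomes visible when running with "-d page":

    qemu_log_mask(CPU_LOG_PAGE,
                  "%s: failed to reserve 0x%lx bytes at %p\n",
                  __func__, (unsigned long)size, (void *)start);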
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200403191150.863-5-alex.bennee@linaro.org>
The v8.4-RCPC extension implements some new instructions:
* LDAPUR, LDAPURB, LDAPURH, LDAPRSB, LDAPRSH, LDAPRSW
* STLUR, STLURB, STLURH
These are all in a new subgroup of encodings that sits below the
top-level "Loads and Stores" group in the Arm ARM.
The STLUR* instructions have standard store-release semantics; the
LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose
to implement them as the slightly stronger Load-Acquire.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224172846.13053-4-peter.maydell@linaro.org
The v8.3-RCPC extension implements three new load instructions
which provide slightly weaker consistency guarantees than the
existing load-acquire operations. For QEMU we choose to simply
implement them with a full LDAQ barrier.
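Conceptually the translator does something like this for the weaker
loads (a sketch, not the literal translate-a64.c code):

    /* do the load ... */
    tcg_gen_qemu_ld_i64(dest, clean_addr, memidx, memop);
    /* ... then the same barrier a regular Load-Acquire gets */
    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);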
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224172846.13053-3-peter.maydell@linaro.org
Use isar feature tests instead of feature bit tests.
Although none of QEMU's current cpus have VFPv3 without D32,
replace the large comment explaining why with one line that
sets ARM_HWCAP_ARM_VFPv3D16 under the correct conditions.
Mirror the test sequence used in the linux kernel.
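Roughly (a sketch; the isar test names are assumptions, the HWCAP
names are the existing elfload.c ones):

    if (cpu_isar_feature(aa32_vfp3, cpu)) {
        hwcaps |= ARM_HWCAP_ARM_VFPv3;
        if (!cpu_isar_feature(aa32_simd_r32, cpu)) {
            /* VFPv3 with only 16 double-precision registers */
            hwcaps |= ARM_HWCAP_ARM_VFPv3D16;
        }
    }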
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200224222232.13807-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.
(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)
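For example, the renamed 32-bit divide test ends up along these lines
(a sketch; the field check is approximate):

    static inline bool isar_feature_aa32_thumb_div(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
    }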
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-2-peter.maydell@linaro.org
With bad luck, we can wind up with no space at all for brk,
which will generally cause the guest malloc to fail.
This bad luck is easier to come by with ET_DYN (PIE) binaries,
where either the stack or the interpreter (ld.so) gets placed
immediately after the main executable.
But there's nothing preventing this same thing from happening
with ET_EXEC (normal) binaries, during probe_guest_base().
In both cases, reserve some extra space via mmap and release
it back to the system after loading the interpreter and
allocating the stack.
The choice of 16MB is somewhat arbitrary. It's enough for libc
to get going, but without being so large that 32-bit guests or
32-bit hosts are in danger of running out of virtual address space.
It is expected that libc will be able to fall back to mmap arenas
after the limited brk space is exhausted.
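In outline (a sketch; brk_base and the exact release point are
illustrative, only the 16MB reservation comes from the commit):

    /* before loading the interpreter and setting up the stack */
    void *reserve = mmap(brk_base, 16 * 1024 * 1024, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

    /* ... load_elf_interp() and setup_arg_pages() run here ... */

    /* hand the window back so brk can later grow into it */
    munmap(reserve, 16 * 1024 * 1024);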
Launchpad: https://bugs.launchpad.net/qemu/+bug/1749393
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200117230245.5040-1-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In init_guest_space, we need to mmap the guest space. If the address
returned by the first mmap is not aligned to align, which is set to
MAX(SHMLBA, qemu_host_page_size), we unmap it and do a second, larger
mmap. The new size is named real_size, which used to be aligned_size +
qemu_host_page_size, where aligned_size is the guest space size; the
extra qemu_host_page_size was added so that aligning real_start by
hand with ROUND_UP(real_start, align) stays inside the mapping. But
when SHMLBA > qemu_host_page_size, the added slack is smaller than the
alignment we need, which can overrun the mapping (seen on a MIPS
machine). So changing real_size from aligned_size + qemu_host_page_size
to aligned_size + align solves it.
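The corrected retry then reads roughly (a sketch using the names from
the message):

    real_size = aligned_size + align;   /* was: aligned_size + qemu_host_page_size */
    real_start = (unsigned long)mmap(NULL, real_size, PROT_NONE,
                                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                                     -1, 0);
    aligned_start = ROUND_UP(real_start, align);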
Signed-off-by: Xinyu Li <precinct@mail.ustc.edu.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20191213022919.5934-1-precinct@mail.ustc.edu.cn>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
ARMv8.2 introduced support for Data Cache Clean instructions
to PoP (point-of-persistence) - DC CVAP - and to PoDP
(point-of-deep-persistence) - DC CVADP. Both specify conceptual points
in a memory system where all writes that have reached them are
considered persistent.
The support provided treats both as the same, so there is no
distinction between the two. If no such point is available (there is
no backing store for the given memory), both result in a Data Cache
Clean up to the point of coherency. Otherwise a sync for the specified
range is performed.
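In pseudo-code the user-mode behaviour amounts to something like this
(an illustrative sketch only, not the actual helper; file_backed,
host_addr and length are assumed names):

    if (file_backed) {
        /* flush the dirty range out to the backing store */
        msync(host_addr, length, MS_SYNC);
    } else {
        /* no backing store: behaves like a clean to the point of
         * coherency, which is a no-op for TCG's coherent host memory */
    }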
Signed-off-by: Beata Michalska <beata.michalska@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20191121000843.24844-5-beata.michalska@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is needed to support debugging PIE ELF binaries running under QEMU
user mode. Currently, `code_offset` and `data_offset` remain unset for
all ELF binaries, so GDB is unable to correctly locate the position of
the binary's text and data.
The fields `code_offset`, and `data_offset` were originally added way
back in 2006 to support debugging of bFMT executables (978efd6aac),
and support was just never added for ELF. Since non-PIE binaries are
loaded at exactly the address specified in the binary, GDB does not need
to relocate any symbols, so the buggy behavior is not normally observed.
http://sourceware.org/gdb/onlinedocs/gdb/General-Query-Packets.html#index-qOffsets-packet
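The fix itself is small; roughly (a sketch, the exact assignments are
assumed):

    /* in load_elf_image(), once the PIE's load bias is known */
    info->code_offset = load_bias;
    info->data_offset = load_bias;

GDB then picks these values up from the qOffsets reply and relocates
its symbols accordingly.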
Buglink: https://bugs.launchpad.net/qemu/+bug/1528239
Signed-off-by: Josh Kunz <jkz@google.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190816233422.16715-1-jkz@google.com>
[lv: added link to documentation]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Add the HWCAP2_* bits from kernel version v5.3-rc3.
Enable the bits corresponding to ARMv8.5-CondM and ARMv8.5-FRINT.
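i.e. something along the lines of (a sketch; the feature-test names
are assumptions layered on the GET_FEATURE_ID macro already used in
elfload.c):

    GET_FEATURE_ID(aa64_condm_5, ARM_HWCAP2_A64_FLAGM2);
    GET_FEATURE_ID(aa64_frint, ARM_HWCAP2_A64_FRINT);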
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20190809171156.3476-1-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-20-armbru@redhat.com>
QEMU_PPC_FEATURE2_VEC_CRYPTO enables the use of the VSX instructions
in libcrypto that are now accelerated by the TCG vector instructions.
QEMU_PPC_FEATURE2_DARN allows use of the darn builtin, which QEMU now
implements with the qemu_guest_getrandom() function.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20190609143521.19374-1-laurent@vivier.eu>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Let's add all HWCAPs that we can support under TCG right now, when the
respective CPU facilities are enabled.
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Laurent Vivier <laurent@vivier.eu>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
For those hosts with SHMLBA > getpagesize, we don't automatically
select a guest address that is compatible with the host. We can
achieve this by boosting the alignment of guest_base and by adding
an extra alignment argument to mmap_find_vma.
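i.e. (a sketch; the MAX expression and surrounding names are
assumptions, the alignment argument is the one the commit adds):

    abi_ulong align = MAX(SHMLBA, qemu_host_page_size);
    start = mmap_find_vma(start, host_len, align);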
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190519201953.20161-13-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Sanitize interp_info structure in load_elf_binary() and, for MIPS only,
init its field fp_abi to MIPS_ABI_FP_UNKNOWN. This fixes appearances of
"Unexpected FPU mode" message in some MIPS use cases. Currently, this
bug is a complete stopper for some MIPS binaries.
In load_elf_binary(), struct image_info interp_info is used without
being properly initialized. One result is that when the ELF's program
header doesn't contain an entry for the ABI flags, then the value of
the struct image_info's fp_abi field is set to whatever happened to
be in stack memory at the time.
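The sanitization itself is a couple of lines, roughly (a sketch; the
MIPS-only guard is an assumption):

    struct image_info interp_info;

    memset(&interp_info, 0, sizeof(interp_info));
    #ifdef TARGET_MIPS
    interp_info.fp_abi = MIPS_ABI_FP_UNKNOWN;
    #endif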
Backporting to 4.0 and, if possible, to 3.1 is recommended.
Fixes: https://bugs.launchpad.net/qemu/+bug/1825002
Signed-off-by: Daniel Santos <daniel.santos@pobox.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1558282527-22183-6-git-send-email-aleksandar.markovic@rt-rk.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Use a better interface for random numbers than rand * 16.
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some PT_LOAD segments may be completely zeroed out, with a p_filesz of
zero; in that case the loader should just allocate a page that's at
least p_memsz bytes large (plus any alignment padding).
Calling zero_bss does this job for us; all we have to do is make sure we
don't try to mmap a zero-length page.
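In the loader that boils down to roughly (a sketch; the variable names
follow load_elf_image() but are approximate):

    if (eppnt->p_filesz != 0) {
        error = target_mmap(vaddr_ps, eppnt->p_filesz + vaddr_po,
                            elf_prot, MAP_PRIVATE | MAP_FIXED,
                            image_fd, eppnt->p_offset - vaddr_po);
    }
    /* zero_bss() covers everything from there up to p_memsz */
    if (vaddr_ef < vaddr_em) {
        zero_bss(vaddr_ef, vaddr_em, elf_prot);
    }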
Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20190503122007.lkjsvztgt4ycovac@debian>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Fix this warning when building with GCC9 on Fedora 30:
In function ‘strncpy’,
inlined from ‘fill_psinfo’ at /home/alistair/qemu/linux-user/elfload.c:3208:12,
inlined from ‘fill_note_info’ at /home/alistair/qemu/linux-user/elfload.c:3390:5,
inlined from ‘elf_core_dump’ at /home/alistair/qemu/linux-user/elfload.c:3539:9:
/usr/include/bits/string_fortified.h:106:10: error: ‘__builtin_strncpy’ specified bound 16 equals destination size [-Werror=stringop-truncation]
106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <c4d2b1de9efadcf1c900b91361af9302823a72a9.1556666645.git.alistair.francis@wdc.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The 32-bit kernel has strings for v4, v5, v6, v7, v7m.
The 64-bit kernel, in compat mode, has strings for v8.
Fixes: https://bugs.launchpad.net/bugs/1813034
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20190212074840.13542-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Userspace programs should (in theory) query the ELF HWCAP before
probing these registers. Now that we have implemented them all, make
it public.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190205190224.2198-6-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Most list head structs need not be given a name. In most cases the
name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV
or reverse iteration, but this does not apply to lists of other kinds,
and even for QTAILQ in practice this is only rarely needed. In addition,
we will soon reimplement those macros completely so that they do not
need a name for the head struct. So clean up everything, not giving a
name except in the rare case where it is necessary.
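For example, an anonymous head (illustrative):

    static QTAILQ_HEAD(, MemoryListener) listeners
        = QTAILQ_HEAD_INITIALIZER(listeners);

versus the named form, which is only needed when the head type itself
must be referenced, e.g. for QTAILQ_LAST or reverse iteration.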
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Set fp_abi and interp_fp_abi values to current fp_abi value read from
MIPS.abiflags.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Read MIPS.abiflags section from ELF file into Mips_elf_abiflags_v0 struct.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case. Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the host page size is greater than the TARGET_PAGE_SIZE, the
target pages of size TARGET_PAGE_SIZE are marked valid only up to the
length requested during the elfload. glibc attempts to consume the
unused space in the last page of the data segment (__libc_memalign()
in elf/dl-minimal.c). If the PT_LOAD p_align is greater than or equal
to the host page size, then GLRO(dl_pagesize) is actually the host
page size, as set in the auxiliary vectors. So there is no explicit
mmap request for the remaining target pages on the last host page.
glibc assumes that space is available, and subsequent attempts to use
those addresses crash because target_mmap has not marked those target
pages valid.
The issue is seen when trying to chroot to a 16.04-x86_64 Ubuntu on a
PPC64 host, where fork fails to access the thread id as it is
allocated on a page not marked valid. Recent glibc doesn't check the
thread id in fork, but the issue can nonetheless manifest elsewhere.
The fix here is to map all the target pages of the host page during
the elfload for the data segment, if p_align is greater than or equal
to the host page size, so that glibc can consume that space properly.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <153553435604.51992.5640085189104207249.stgit@lep8c.aus.stglabs.ibm.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This adds the HWCAP2 bit to detect if a linux user process is
running on an ISA 3.0 compliant cpu like POWER9. This can be
verified using a simple test program that prints the value in
the auxiliary vector for AT_HWCAP2 as shown below.
Before:
$ qemu-ppc64le -cpu power8 test
0x8c000000
$ qemu-ppc64le -cpu power9 test
0x8c000000
After:
$ qemu-ppc64le -cpu power8 test
0x8c000000
$ qemu-ppc64le -cpu power9 test
0x8c800000
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Starting from nanoMIPS introduction, machine variant can be
EM_MIPS or EM_NANOMIPS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
When we try to use some targets on ppc64, it can happen that the
target doesn't support using the host page size to align ELF load
sections, and fails with:
ELF load command alignment not page-aligned
Since commit a70daba377 ("linux-user: Tell guest about big host
page sizes") the host page size is used to align ELF sections, but
this doesn't work if the alignment required by the load section is
smaller than the host one. For these cases, we continue to use the
TARGET_PAGE_SIZE instead of the host one.
I have tested this change on ppc64, and it fixes qemu linux-user for:
s390x, m68k, i386, arm, aarch64, hppa
and I have tested it doesn't break the following targets:
x86_64, mips64el, sh4
mips and mipsel abort, but I think for another reason.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[lv: fixed "info->alignment = 0"]
Message-Id: <20180716195349.29959-1-laurent@vivier.eu>
We've already added the helpers with an SVE patch; all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable ARM_FEATURE_SVE for the generic "max" cpu.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The insns in the ARMv8.1-Atomics are added to the existing
load/store exclusive and load/store reg opcode spaces.
Rearrange the top-level decoders for these to accommodate.
The Atomics insns themselves still generate Unallocated.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-8-richard.henderson@linaro.org
[PMM: Drop the ARM_FEATURE_V8_1 feature flag]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>