In preparation for having some more common semihosting code let's
excise the current config magic from vl.c into its own file. We shall
later add more conditionals to the build configurations so we can
avoid building this if we don't need it.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Realign comments to fix warnings issued by the checkpatch.pl tool
"WARNING: Block comments use a leading /* on a separate line"
within "target/mips/cpu.h" file.
Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <20190413202818.13622-3-jbi.octave@gmail.com>
Add or remove spaces to fix errors issued by the checkpatch.pl tool
"ERROR: spaces required around that..."
"ERROR: space required after that..."
"ERROR: space required before the open parenthesis"
"ERROR: space required after that..."
"ERROR: space prohibited between function name and open parenthesis"
"ERROR: code indent should never use tabs"
"ERROR: line over 90 characters"
within "target/mips/cpu.h" file.
Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <20190413202818.13622-2-jbi.octave@gmail.com>
This commit addresses QEMU Bug #1825311:
mips_cpu_handle_mmu_fault renders all accessed pages executable
It allows finer-grained control over whether the accessed page should
be executable by moving the decision to the underlying map_address
function, which has more information for this.
As a result, pages that have the XI bit set in the TLB and are accessed
for read/write, don't suddenly end up being executable.
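A rough sketch of the idea, with illustrative names rather than the actual
target/mips code: the TLB lookup itself computes the protection bits, so
execute permission is only granted when the entry's XI bit is clear.

    /* Illustrative only -- not the real map_address code. */
    enum { PAGE_READ = 1, PAGE_WRITE = 2, PAGE_EXEC = 4 };

    static int tlb_entry_prot(int dirty, int read_inhibit, int exec_inhibit)
    {
        int prot = 0;

        if (!read_inhibit) {
            prot |= PAGE_READ;
        }
        if (dirty) {
            prot |= PAGE_WRITE;
        }
        if (!exec_inhibit) {
            prot |= PAGE_EXEC;   /* executable only when XI is clear */
        }
        return prot;
    }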
Fixes: https://bugs.launchpad.net/qemu/+bug/1825311
Fixes: 2fb58b7374 ('target-mips: add RI and XI fields to TLB entry')
Signed-off-by: Jakub Jermář <jakub.jermar@kernkonzept.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190517123533.868479-1-jakub.jermar@kernkonzept.com>
The old version of the helper for the INSERT.<B|H|W|D> MSA instructions
has been replaced with four helpers that don't use a switch statement and
that change the endianness of the given index when executed on a big-endian
host.
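A standalone sketch of the index adjustment (HOST_WORDS_BIGENDIAN is the
usual QEMU host-endianness macro; the helper shown here is illustrative,
not the actual MSA code):

    #include <stdint.h>

    #if defined(HOST_WORDS_BIGENDIAN)
    #define ADJUST_IDX(n_elems, i)  ((n_elems) - 1 - (i))
    #else
    #define ADJUST_IDX(n_elems, i)  (i)
    #endif

    static inline void msa_insert_b(uint8_t wr[16], unsigned idx, uint8_t val)
    {
        wr[ADJUST_IDX(16, idx)] = val;   /* byte element insert */
    }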
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554212605-16457-6-git-send-email-mateja.marjanovic@rt-rk.com>
The old version of the helper for the COPY_U.<B|H|W> MSA instructions
has been replaced with four helpers that don't use a switch statement and
that change the endianness of the given index when executed on a big-endian
host.
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554212605-16457-5-git-send-email-mateja.marjanovic@rt-rk.com>
The old version of the helper for the COPY_S.<B|H|W|D> MSA instructions
has been replaced with four helpers that don't use a switch statement and
that change the endianness of the given index when executed on a big-endian
host.
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554212605-16457-4-git-send-email-mateja.marjanovic@rt-rk.com>
Fix the case when the host is a big endian machine, and change
the approach toward ST.<B|H|W|D> instruction helpers.
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554212605-16457-3-git-send-email-mateja.marjanovic@rt-rk.com>
Fix the case when the host is a big endian machine, and change
the approach toward LD.<B|H|W|D> instruction helpers.
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554212605-16457-2-git-send-email-mateja.marjanovic@rt-rk.com>
When dividing by zero, the MSA instructions MOD_<U|S>.<B|H|W|D> didn't
return the same value when executed on reference hardware (FPGA MIPS 64 r6,
little endian) as when executed on QEMU. This is not a real bug, because
the result of a division by zero is UNPREDICTABLE [1] (pages 255, 256).
[1] MIPS Architecture for Programmers
Volume IV-j: The MIPS64 SIMD
Architecture Module, Revision 1.12
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554207110-9113-3-git-send-email-mateja.marjanovic@rt-rk.com>
When dividing by zero, the MSA instructions DIV_<U|S>.<B|H|W|D> didn't
return the same value when executed on reference hardware (FPGA MIPS 64 r6,
little endian) as when executed on QEMU. This is not a real bug, because
the result of a division by zero is UNPREDICTABLE [1] (pages 141, 142).
[1] MIPS Architecture for Programmers
Volume IV-j: The MIPS64 SIMD
Architecture Module, Revision 1.12
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Message-Id: <1554207110-9113-2-git-send-email-mateja.marjanovic@rt-rk.com>
There is an analogous change for ARM here:
https://patchwork.kernel.org/patch/10649857
Signed-off-by: Jonathan Behrens <jonathan@fintelia.io>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
According to the spec, "All bits besides SSIP, USIP, and UEIP in the sip
register are read-only." Further, if an interrupt is not delegated to mode x,
then "the corresponding bits in xip [...] should appear to be hardwired to
zero." This patch implements both of those requirements.
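A simplified sketch of the masking (the constants follow the RISC-V
privileged spec; the helper names are made up and details such as the
N extension are glossed over):

    #include <stdint.h>

    #define MIP_USIP  (1ULL << 0)
    #define MIP_SSIP  (1ULL << 1)
    #define MIP_UEIP  (1ULL << 8)

    /* Bits not delegated via mideleg appear hardwired to zero. */
    static uint64_t sip_read(uint64_t mip, uint64_t mideleg)
    {
        return mip & mideleg;
    }

    /* Only SSIP/USIP/UEIP are writable, and only when delegated. */
    static uint64_t sip_write(uint64_t mip, uint64_t mideleg, uint64_t val)
    {
        uint64_t mask = (MIP_SSIP | MIP_USIP | MIP_UEIP) & mideleg;

        return (mip & ~mask) | (val & mask);
    }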
Signed-off-by: Jonathan Behrens <jonathan@fintelia.io>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
C.ADDI16SP, C.LWSP, C.JR, C.ADDIW, C.LDSP all have reserved
operands that were not diagnosed.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
No functional change, just making the code easier to read.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The requirement of holding the iothread_mutex is burdensome when
swapping the background and foreground registers in the Hypervisor
extension. To avoid the requirement, let's set the interrupt
asynchronously.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
At the same time, deprecate the ISA string CPUs.
It is doubtful anyone specifies these CPUs, but we are keeping them for the
Spike machine (which is about to be deprecated), so we may as well just
mark them as deprecated.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
These extra spaces make the "-d op" dump look weird.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The tcg_gen_fooi_tl functions have some immediate constant
folding built in, which match up with some of the riscv asm
builtin macros, like mv and not.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This eliminates all functions in insn_trans/trans_rvc.inc.c,
so the entire file can be removed.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This eliminates about half of the complicated decode
bits within insn_trans/trans_rvc.inc.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Special handling for IMM==0 is the only difference between
RVC shifti and RVI shifti. This can be handled with !function.
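A hedged sketch of what such an !function can do (the helper name is made
up, and the 0 -> 64 mapping assumes the convention that a zero shift amount
in the compressed encoding denotes a shift by 64):

    static int ex_rvc_shift_amount(int imm)
    {
        /* A compressed shift amount of 0 encodes a shift by 64. */
        return imm == 0 ? 64 : imm;
    }

With the immediate rewritten up front, the compressed pattern can reuse the
ordinary RVI shift translator unchanged.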
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
In some cases this allows us to directly use the insn32
translator function. In some cases we still need a shim.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The generated functions are only used within translate.c
and do not need to be global, or declared.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This patch introduces wrappers around the tcg_gen_exit_tb() and
tcg_gen_lookup_and_goto_ptr() functions that handle single stepping,
i.e. call gen_exception_debug() when single stepping is enabled.
These functions are then used instead of the originals, bringing single-step
handling to places where it was previously ignored, such as jalr and system
branch instructions (ecall, mret, sret, etc.).
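A minimal sketch of the shape of such a wrapper, assuming the DisasContext
and gen_exception_debug() helper from target/riscv/translate.c (not a
verbatim copy of the patch):

    static void exit_tb(DisasContext *ctx)
    {
        if (ctx->base.singlestep_enabled) {
            gen_exception_debug();           /* stop after one instruction */
        } else {
            tcg_gen_exit_tb(NULL, 0);
        }
    }

    static void lookup_and_goto_ptr(DisasContext *ctx)
    {
        if (ctx->base.singlestep_enabled) {
            gen_exception_debug();
        } else {
            tcg_gen_lookup_and_goto_ptr();
        }
    }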
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The 'sfence.vma' instruction is privileged, and should only ever be allowed
when executing in supervisor mode or higher.
Signed-off-by: Jonathan Behrens <fintelia@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The hw/arm/arm.h header now only includes declarations relating
to boot.c code, so it is only needed by Arm board or SoC code.
Remove some unnecessary inclusions of it from target/arm files
and from hw/intc/armv7m_nvic.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190516163857.6430-3-peter.maydell@linaro.org
Commit 89e68b575 "target/arm: Use vector operations for saturation"
causes this abort() when booting QEMU ARM with a Cortex-A15:
0 0x00007ffff4c2382f in raise () at /usr/lib/libc.so.6
1 0x00007ffff4c0e672 in abort () at /usr/lib/libc.so.6
2 0x00005555559c1839 in disas_neon_data_insn (insn=<optimized out>, s=<optimized out>) at ./target/arm/translate.c:6673
3 0x00005555559c1839 in disas_neon_data_insn (s=<optimized out>, insn=<optimized out>) at ./target/arm/translate.c:6386
4 0x00005555559cd8a4 in disas_arm_insn (insn=4081107068, s=0x7fffe59a9510) at ./target/arm/translate.c:9289
5 0x00005555559cd8a4 in arm_tr_translate_insn (dcbase=0x7fffe59a9510, cpu=<optimized out>) at ./target/arm/translate.c:13612
6 0x00005555558d1d39 in translator_loop (ops=0x5555561cc580 <arm_translator_ops>, db=0x7fffe59a9510, cpu=0x55555686a2f0, tb=<optimized out>, max_insns=<optimized out>) at ./accel/tcg/translator.c:96
7 0x00005555559d10d4 in gen_intermediate_code (cpu=cpu@entry=0x55555686a2f0, tb=tb@entry=0x7fffd7840080 <code_gen_buffer+126091347>, max_insns=max_insns@entry=512) at ./target/arm/translate.c:13901
8 0x00005555558d06b9 in tb_gen_code (cpu=cpu@entry=0x55555686a2f0, pc=3067096216, cs_base=0, flags=192, cflags=-16252928, cflags@entry=524288) at ./accel/tcg/translate-all.c:1736
9 0x00005555558ce467 in tb_find (cf_mask=524288, tb_exit=1, last_tb=0x7fffd783e640 <code_gen_buffer+126084627>, cpu=0x1) at ./accel/tcg/cpu-exec.c:407
10 0x00005555558ce467 in cpu_exec (cpu=cpu@entry=0x55555686a2f0) at ./accel/tcg/cpu-exec.c:728
11 0x000055555588b0cf in tcg_cpu_exec (cpu=0x55555686a2f0) at ./cpus.c:1431
12 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=0x55555686a2f0) at ./cpus.c:1735
13 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x55555686a2f0) at ./cpus.c:1709
14 0x0000555555d2629a in qemu_thread_start (args=<optimized out>) at ./util/qemu-thread-posix.c:502
15 0x00007ffff4db8a92 in start_thread () at /usr/lib/libpthread.
This patch ensures that we don't hit the abort() in the second switch
case in disas_neon_data_insn() as we will return from the first case.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: ad91b397f360b2fc7f4087e476f7df5b04d42ddb.1558021877.git.alistair.francis@wdc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The mask implied by the extract is redundant with the one
implied by the deposit. Also, fix spelling of BFXIL.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190514011129.11330-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is, after all, how we implement extract2 in tcg/aarch64.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190514011129.11330-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/rth/tags/pull-rng-20190522' into staging
Introduce qemu_guest_getrandom.
Use qemu_guest_getrandom in aspeed, nrf51, bcm2835, exynos4210 rng devices.
Use qemu_guest_getrandom in target/ppc darn instruction.
Support ARMv8.5-RNG extension.
Support x86 RDRAND extension.
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Laurent Vivier <laurent@vivier.eu>
# gpg: Signature made Wed 22 May 2019 19:36:43 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-rng-20190522: (25 commits)
target/i386: Implement CPUID_EXT_RDRAND
target/ppc: Use qemu_guest_getrandom for DARN
target/ppc: Use gen_io_start/end around DARN
target/arm: Implement ARMv8.5-RNG
target/arm: Put all PAC keys into a structure
hw/misc/exynos4210_rng: Use qemu_guest_getrandom
hw/misc/bcm2835_rng: Use qemu_guest_getrandom_nofail
hw/misc/nrf51_rng: Use qemu_guest_getrandom_nofail
aspeed/scu: Use qemu_guest_getrandom_nofail
linux-user: Remove srand call
linux-user/aarch64: Use qemu_guest_getrandom for PAUTH keys
linux-user: Use qemu_guest_getrandom_nofail for AT_RANDOM
linux-user: Call qcrypto_init if not using -seed
linux-user: Initialize pseudo-random seeds for all guest cpus
cpus: Initialize pseudo-random seeds for all guest cpus
util: Add qemu_guest_getrandom and associated routines
ui/vnc: Use gcrypto_random_bytes for start_auth_vnc
ui/vnc: Split out authentication_failed
crypto: Change the qcrypto_random_bytes buffer type to void*
crypto: Use getrandom for qcrypto_random_bytes
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We now have an interface for guest visible random numbers.
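For reference, a small sketch of consuming the new interface (the wrapper
function here is hypothetical; the header and the two entry points are the
ones added by this series):

    #include "qemu/osdep.h"
    #include "qemu/guest-random.h"

    static uint64_t get_guest_random_u64(void)
    {
        uint64_t val;

        /* Aborts on failure; qemu_guest_getrandom() instead reports
         * errors through an Error **errp parameter. */
        qemu_guest_getrandom_nofail(&val, sizeof(val));
        return val;
    }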
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We now have an interface for guest visible random numbers.
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Generating a random number counts as I/O, as it cannot be
replayed and produce the same results.
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use the newly introduced infrastructure for guest random numbers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This allows us to use a single syscall to initialize them all.
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Microarchitectural Data Sampling is a hardware vulnerability which allows
unprivileged speculative access to data which is available in various CPU
internal buffers.
Some Intel processors use the ARCH_CAP_MDS_NO bit in the
IA32_ARCH_CAPABILITIES MSR to report that they are not vulnerable; make it
available to guests.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20190516185320.28340-1-pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
md-clear is a new CPUID bit which is set when microcode provides the
mechanism to flush various exploitable CPU buffers by invoking the
VERW instruction.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20190515141011.5315-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20190521-3' into staging
s390x update:
- have the bios tolerate bootmap signature entries
- next chunk of vector instruction support in tcg
- a headers update against Linux 5.2-rc1
- add more facilities and gen15 machines to the cpu model
# gpg: Signature made Tue 21 May 2019 16:09:35 BST
# gpg: using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg: issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [unknown]
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cohuck@kernel.org>" [unknown]
# gpg: aka "Cornelia Huck <cohuck@redhat.com>" [unknown]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20190521-3: (55 commits)
s390x/cpumodel: wire up 8561 and 8562 as gen15 machines
s390x/cpumodel: add gen15 definitions
s390x/cpumodel: add Deflate-conversion facility
s390x/cpumodel: enhanced sort facility
s390x/cpumodel: vector enhancements
s390x/cpumodel: msa9 facility
s390x/cpumodel: Miscellaneous-Instruction-Extensions Facility 3
s390x/cpumodel: ignore csske for expansion
linux headers: update against Linux 5.2-rc1
update-linux-headers: handle new header file
s390x/tcg: Implement VECTOR TEST UNDER MASK
s390x/tcg: Implement VECTOR SUM ACROSS WORD
s390x/tcg: Implement VECTOR SUM ACROSS QUADWORD
s390x/tcg: Implement VECTOR SUM ACROSS DOUBLEWORD
s390x/tcg: Implement VECTOR SUBTRACT WITH BORROW COMPUTE BORROW INDICATION
s390x/tcg: Implement VECTOR SUBTRACT WITH BORROW INDICATION
s390x/tcg: Implement VECTOR SUBTRACT COMPUTE BORROW INDICATION
s390x/tcg: Implement VECTOR SUBTRACT
s390x/tcg: Implement VECTOR SHIFT RIGHT LOGICAL *
s390x/tcg: Implement VECTOR SHIFT RIGHT ARITHMETIC
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8561 and 8562 will be gen15 machines. There is no name yet, so let us use
gen15a and gen15b as base names. Later on we can provide aliases with
the proper names.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20190429090250.7648-10-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Add several new features (msa9, sort, deflate, additional vector
instructions, new general purpose instructions) to generation 15.
Also disable csske and bpb in the default and base models >= 15.
This will allow migrating gen15 machines to future machines that
do not have these features.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20190429090250.7648-9-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Add vector enhancements to the cpu model.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190429090250.7648-6-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Provide the MSA9 facility (stfle.155). This also contains pckmo
subfunctions for key wrapping. Keep them in a separate group so that they
can be disabled as a block if necessary; this is needed, for example, when
key wrapping is disabled via the HMC.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20190429090250.7648-5-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Provide the "Miscellaneous-Instruction-Extensions Facility 3" via
stfle.61.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190429090250.7648-4-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
csske will be removed in a future machine. Ignore it when expanding the
cpu model; otherwise QEMU falls back to z9.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190429090250.7648-3-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Record the software fp control register, as set by the
osf_setsysinfo syscall. Add those masked exceptions
to fpcr_exc_enable. Do not raise a signal for masked
fp exceptions.
Fixes: https://bugs.launchpad.net/bugs/1701835
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Drop the "RI" and "FIR" prefixes; use only the normal linux names.
Add the FPCR to the dump.
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
"megasas: fix mapped frame size" from Peter Lieven.
In addition, -realtime is marked as deprecated.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQEcBAABAgAGBQJc3rY3AAoJEL/70l94x66D91kH/21LLnL+sKmyueSM/Sek4id2
r06tHdGMdl5Od3I5uMD9gnr4AriiCZc9ybQDQ1N879wKMmQPZwcnf2GJ5DZ0wa3L
jHoQO07Bg0KZGWALjXiN5PWB0DlJtXsTm0C4q4tnt6V/ueasjxouBk9/fRLRc09n
QTS379X9QvPElFTv3WPfGz6kmkLq8VMmdRnSlXneB9xTyXXJbFj3zlvDCElNSgWh
fZ7gnfYWB1LOC19HJxp1mJSkAUD5AgImYEK1Hmnr+BMs2sg6gypYNtp3LtE5FzmZ
HSdXYFyPkQV9UyTiV1XBs3bXJbGYj5OApfXCtwo/I2JtP+LhHBA2eq1Gs3QgP98=
=zSSj
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
Mostly bugfixes and cleanups, the most important being
"megasas: fix mapped frame size" from Peter Lieven.
In addition, -realtime is marked as deprecated.
# gpg: Signature made Fri 17 May 2019 14:25:11 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (21 commits)
hw/net/ne2000: Extract the PCI device from the chipset common code
hw/char: Move multi-serial devices into separate file
ioapic: allow buggy guests mishandling level-triggered interrupts to make progress
build: don't build hardware objects with linux-user
build: chardev is only needed for softmmu targets
configure: qemu-ga is only needed with softmmu targets
build: replace GENERATED_FILES by generated-files-y
trace: only include trace-event-subdirs when they are needed
sun4m: obey -vga none
mips-fulong2e: obey -vga none
hw/i386/acpi: Assert a pointer is not null BEFORE using it
hw/i386/acpi: Add object_resolve_type_unambiguous to improve modularity
hw/acpi/piix4: Move TYPE_PIIX4_PM to a public header
memory: correct the comment to DIRTY_MEMORY_MIGRATION
vl: fix -sandbox parsing crash when seccomp support is disabled
hvf: Add missing break statement
megasas: fix mapped frame size
vl: Add missing descriptions to the VGA adapters list
Declare -realtime as deprecated
roms: assert if max rom size is less than the used size
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When possible use generated-files-$(FLAG) to disable
some targets (like KEYCODEMAP_FILES).
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20190401141222.30034-3-lvivier@redhat.com>
Let's return the cc value directly via cpu_env. Unfortunately there
isn't a simple way to calculate the value lazily - one would have to
calculate and store e.g. the population count of the mask and the
result so it can be evaluated in a cc helper.
But as VTM only sets the cc, we can assume the value will be needed soon
either way.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR SUM ACROSS DOUBLEWORD.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR SUM ACROSS DOUBLEWORD, however without a loop and
using 128-bit calculations.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Perform the calculations without a helper. Only 16 bit or 32 bit values
have to be added.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Fairly easy as only 128-bit handling is required. Simply perform the
subtraction and then subtract the borrow.
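A standalone sketch of that arithmetic on a 128-bit value held as two 64-bit
halves (the exact bit position and polarity of the s390x borrow operand are
glossed over; the borrow is treated as a plain 0/1 here):

    #include <stdint.h>

    typedef struct { uint64_t hi, lo; } U128;

    static U128 sub128(U128 a, U128 b)
    {
        U128 r;

        r.lo = a.lo - b.lo;
        r.hi = a.hi - b.hi - (a.lo < b.lo);   /* propagate the borrow */
        return r;
    }

    static U128 sub128_with_borrow(U128 a, U128 b, uint64_t borrow_in)
    {
        U128 borrow = { .hi = 0, .lo = borrow_in & 1 };

        return sub128(sub128(a, b), borrow);
    }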
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Let's keep it simple for now and handle 8/16 bit elements via helpers.
Especially for 8/16, we could come up with some bit tricks.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can use tcg_gen_sub2_i64() to do 128-bit subtraction and otherwise
existing gvec helpers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR SHIFT RIGHT ARITHMETIC.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR SHIFT LEFT ARITHMETIC. Add s390_vec_sar() similar to
s390_vec_shr().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Inline expansion courtesy of Richard H.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can reuse the existing 128-bit shift utility function.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can use all the fancy new vector helpers implemented by Richard.
One important thing to take care of is to always properly mask off
unused bits from the shift count.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Use the new vector expansion for GVecGen3i.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Take care of properly taking the modulo of the count. We might later
want to come back and create a variant of VERLL where the base register
is 0, resulting in an immediate.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR COUNT TRAILING ZEROES.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Again, part of the vector enhancements facility 1. The operation
corresponds to a bitwise equality check.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Part of vector enhancements facility 1, but easy to implement.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Yet another set of variants. Implement it similar to VECTOR MULTIPLY AND
ADD *. At least for one variant we have a gvec helper we can reuse.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Quite some variants to handle. At least handle some 32-bit element
variants via gvec expansion (we could also handle 16/32-bit variants
for ODD and EVEN easily via gvec expansion, but let's keep it simple
for now).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Luckily, we already have gvec helpers for all four cases.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
We can reuse an existing gvec helper for negating the values.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
A Galois field multiplication in GF(2) is like binary multiplication,
except that instead of ordinary binary additions, XORs are performed,
so no carries are involved.
Implement all variants via helpers. s390_vec_sar() and s390_vec_shr()
will be reused later on.
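A standalone illustration of carry-less (GF(2)) multiplication for 8-bit
operands; partial products are combined with XOR, so nothing ever carries
between bit positions:

    #include <stdint.h>

    static uint16_t clmul8(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;

        for (int i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                result ^= (uint16_t)a << i;   /* XOR instead of add */
            }
        }
        return result;
    }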
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Easy, we can reuse an existing gvec helper.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Implement it similar to VECTOR COUNT LEADING ZEROS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
For 8/16, use the 32 bit variant and properly subtract the added
leading zero bits.
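A standalone sketch of the correction, using __builtin_clz as a stand-in
for the 32-bit count-leading-zeros operation:

    #include <stdint.h>

    static int clz8(uint8_t x)
    {
        /* The 32-bit clz sees 24 extra leading zero bits. */
        return x ? __builtin_clz(x) - 24 : 8;
    }

    static int clz16(uint16_t x)
    {
        return x ? __builtin_clz(x) - 16 : 16;
    }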
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
To carry out the comparison, we can reuse the existing gvec comparison
function. In case the CC is to be computed, save the result vector
and compute the CC lazily. The result is a vector consisting of all 1's
for elements that matched and 0's for elements that didn't match.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Fairly easy to implement: we can make use of the existing CC helpers
cmps64 and cmpu64 - we simply have to sign-extend the elements.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Time to introduce read_vec_element_i32 and write_vec_element_i32.
Take proper care of adding the carry. We can perform both additions,
including the carry, via tcg_gen_add2_i32().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR AVERAGE but without sign extension.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Handle 32/64-bit elements via gvec expansion and the 8/16 bits via
ool helpers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Easy, as we can reuse existing gvec helpers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Similar to VECTOR ADD COMPUTE CARRY, however 128-bit handling only.
Courtesy of Richard H.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Only slightly ugly, perform two additions. At least it is only supported
for 128 bit elements.
Introduce gen_gvec128_4_i64() similar to gen_gvec128_3_i64().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
128-bit handling courtesy of Richard H.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Introduce two types of fancy new helpers that will be reused a couple of
times:
1. gen_gvec_fn_3: Call an existing tcg_gen_gvec_X function with 3
parameters, simplifying parameter passing
2. gen_gvec128_3_i64: Call a function that performs 128 bit calculations
using two 64 bit values per vector.
Luckily, for VECTOR ADD we already have everything we need.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
In target/i386/hvf/hvf.c, a break statement was probably missing in
`hvf_vcpu_exec()`, in handling EXIT_REASON_HLT.
These lines seemed to be equivalent to `kvm_handle_halt()`.
Signed-off-by: Chen Zhang <tgfbeta@me.com>
Message-Id: <087F1D9C-109D-41D1-BE2C-CE5D840C981B@me.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Optimize rotate_x() using tcg_gen_extract_i32(). We can now free the
'sz' tcg_temp earlier. Since it is allocated with tcg_const_i32(),
free it with tcg_temp_free_i32().
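The pairing rule in a nutshell (a fragment, not the full rotate_x() change):

    TCGv_i32 sz = tcg_const_i32(size);
    /* ... emit the extract/deposit ops that consume sz ... */
    tcg_temp_free_i32(sz);   /* tcg_const_i32() pairs with the _i32 free */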
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190310003428.11723-6-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The function gen_get_ccr() returns a tcg_temp created with
tcg_temp_new(). Free it with tcg_temp_free().
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190310003428.11723-4-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Switch the m68k target from the old unassigned_access hook
to the transaction_failed hook.
The notable difference is that rather than it being called
for all physical memory accesses which fail (including
those made by DMA devices or by the gdbstub), it is only
called for those made by the CPU via its MMU. (In previous
commits we put in explicit checks for the direct physical
loads made by the target/m68k code which will no longer
be handled by calling the unassigned_access hook.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20181210165636.28366-4-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In get_physical_address(), use address_space_ldl() and
address_space_stl() instead of ldl_phys() and stl_phys().
This allows us to check whether the memory access failed.
For the moment, we simply return -1 in this case;
add a TODO comment that we should ideally generate the
appropriate kind of fault.
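A fragment showing the shape of the check (the variable names are
illustrative; the API is the regular address_space_ldl() with a MemTxResult
out-parameter):

    MemTxResult txres;
    uint32_t entry = address_space_ldl(cs->as, entry_addr,
                                       MEMTXATTRS_UNSPECIFIED, &txres);
    if (txres != MEMTX_OK) {
        /* TODO: generate the appropriate kind of fault. */
        return -1;
    }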
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20181210165636.28366-3-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In dump_address_map(), use address_space_ldl() instead of ldl_phys().
This allows us to check whether the memory access failed.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20181210165636.28366-2-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190510' into staging
Add CPUClass::tlb_fill.
Improve tlb_vaddr_to_host for use by ARM SVE no-fault loads.
# gpg: Signature made Fri 10 May 2019 19:48:37 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20190510: (27 commits)
tcg: Use tlb_fill probe from tlb_vaddr_to_host
tcg: Remove CPUClass::handle_mmu_fault
tcg: Use CPUClass::tlb_fill in cputlb.c
target/xtensa: Convert to CPUClass::tlb_fill
target/unicore32: Convert to CPUClass::tlb_fill
target/tricore: Convert to CPUClass::tlb_fill
target/tilegx: Convert to CPUClass::tlb_fill
target/sparc: Convert to CPUClass::tlb_fill
target/sh4: Convert to CPUClass::tlb_fill
target/s390x: Convert to CPUClass::tlb_fill
target/riscv: Convert to CPUClass::tlb_fill
target/ppc: Convert to CPUClass::tlb_fill
target/openrisc: Convert to CPUClass::tlb_fill
target/nios2: Convert to CPUClass::tlb_fill
target/moxie: Convert to CPUClass::tlb_fill
target/mips: Convert to CPUClass::tlb_fill
target/mips: Tidy control flow in mips_cpu_handle_mmu_fault
target/mips: Pass a valid error to raise_mmu_exception for user-only
target/microblaze: Convert to CPUClass::tlb_fill
target/m68k: Convert to CPUClass::tlb_fill
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Exclusive Instructions provide a general-purpose mechanism for
atomic updates of memory-based synchronization variables that can be
used for exclusion algorithms.
Use cmpxchg-based implementation that is sufficient for the typical use
of exclusive access in atomic operations.
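A conceptual sketch in plain C11 atomics (not the TCG code): the exclusive
load remembers the address and value it observed, and the exclusive store
succeeds only if a compare-and-swap from that value still works, which is
enough for the usual atomic read-modify-write loops.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint32_t *addr;   /* address of the last exclusive load */
        uint32_t val;             /* value observed by that load */
    } ExclusiveState;

    static uint32_t load_exclusive(ExclusiveState *ex, _Atomic uint32_t *p)
    {
        ex->addr = p;
        ex->val = atomic_load(p);
        return ex->val;
    }

    static bool store_exclusive(ExclusiveState *ex, _Atomic uint32_t *p,
                                uint32_t newval)
    {
        return p == ex->addr &&
               atomic_compare_exchange_strong(p, &ex->val, newval);
    }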
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Block prefetch option adds a bunch of non-privileged opcodes that may be
implemented as nops since QEMU doesn't model caches.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20190423102145.14812-2-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.
Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.
Convert all existing vector aware front ends.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cleaned up with scripts/clean-header-guards.pl.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190315145123.28030-9-armbru@redhat.com>
We commonly define the header guard symbol without an explicit value.
Normalize the exceptions.
Done with scripts/clean-header-guards.pl.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190315145123.28030-8-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Leading underscores are ill-advised because such identifiers are
reserved. Trailing underscores are merely ugly. Strip both.
Our header guards commonly end in _H. Normalize the exceptions.
Done with scripts/clean-header-guards.pl.
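A normalized guard, for illustration (the path hw/foo/bar.h is made up):

    #ifndef HW_FOO_BAR_H
    #define HW_FOO_BAR_H

    /* declarations ... */

    #endif /* HW_FOO_BAR_H */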
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190315145123.28030-7-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Changes to slirp/ dropped, as we're about to spin it off]
Header guard symbols should match their file name to make guard
collisions less likely.
Cleaned up with scripts/clean-header-guards.pl, followed by some
renaming of new guard symbols picked by the script to better ones.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190315145123.28030-6-armbru@redhat.com>
[Rebase to master: update include/hw/net/ne2000-isa.h]
scripts/clean-header-guards.pl warns these headers use reserved
identifier _XTENSA_CORE_CONFIGURATION_H as header guard symbol. It
additionally warns the guard doesn't match the file name.
Reuse of the same guard symbol in multiple headers is okay as long as
they cannot be included together.
Since we can avoid guard symbol reuse easily, do so: use the guard
symbol scripts/clean-header-guards.pl picks, less the TARGET_ prefix.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190315145123.28030-5-armbru@redhat.com>
The Memory Protection Unit Option (MPU) is a combined instruction and
data memory protection unit with more protection flexibility than the
Region Protection Option or the Region Translation Option but without
any translation capability. It does no demand paging and does not
reference a memory-based page table.
Add memory protection unit option, internal state, SRs and opcodes.
Implement MPU entries dumping in dump_mmu.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Add SRs and rsr/wsr/xsr opcodes defined by the parity/ECC xtensa option.
The implementation is trivial since we don't emulate parity/ECC yet.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
IDMA and scatter/gather features introduced new IRQ types that
overlay_tool.h needs in order to initialize the Xtensa configuration.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Remove declarations of the internal mmu_helper functions from cpu.h,
make these functions static and shuffle them.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
SR numbers are not unique: different Xtensa options may reuse an SR number
for different purposes. Introduce generic rsr/wsr functions and an xsr
template and use them instead of centralized SR access functions. Change
prototypes of specific rsr/wsr functions to match XtensaOpcodeOp and use
them instead of centralized SR access functions. Put xtensa option that
introduces SR into the second opcode description parameter and use it to
test for rsr/wsr/xsr opcode validity. Extract SR and UR names for the
xtensa_cpu_dump_state from libisa. Merge SRs and URs in the dump.
Register names of used SR/UR in init_libisa and use these names for TCG
globals referencing these SR/UR.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.
But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can now use the CPUClass hook instead of a named function.
Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove the user-only functions, as we no longer
have a user-only config.
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Note that env->pc is removed from the qemu_log as that value is garbage.
The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from riscv_raise_exception.
Cc: qemu-riscv@nongnu.org
Cc: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-ppc@nongnu.org
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove the leftover debugging cpu_dump_state.
Cc: Chris Wulff <crwulff@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove the user-only functions, as we don't have a user-only config.
Fix the unconditional call to tlb_set_page, which was made even if the
translation failed.
Cc: Anthony Green <green@moxielogic.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Note that env->active_tc.PC is removed from the qemu_log as that value
is garbage. The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from do_raise_exception_err.
Cc: Aleksandar Markovic <amarkovic@wavecomp.com>
Cc: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since the only non-negative TLBRET_* value is TLBRET_MATCH,
the subsequent test for ret < 0 is useless. Use early return
to allow subsequent blocks to be unindented.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
At present we give ret = 0, or TLBRET_MATCH. This gets matched
by the default case, which falls through to TLBRET_BADADDR.
However, it makes more sense to use a proper value. All of the
tlb-related exceptions are handled identically in cpu_loop.c,
so TLBRET_BADADDR is as good as any other. Retain it.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Michael Walle <michael@walle.cc>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We do not support probing, but we do not need it yet either.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Remove dumping of cpu state. Remove logging of PC, as that
value is garbage until cpu_restore_state.
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
It's either "GNU *Library* General Public License version 2" or "GNU
Lesser General Public License version *2.1*", but there was no "version
2.0" of the "Lesser" license. So assume that version 2.1 is meant here.
Message-Id: <1550073530-4138-1-git-send-email-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
It's either "GNU *Library* General Public License version 2" or "GNU
Lesser General Public License version *2.1*", but there was no "version
2.0" of the "Lesser" license. So assume that version 2.1 is meant here.
Acked-by: Stafford Horne <shorne@gmail.com>
Message-Id: <1550073577-4248-1-git-send-email-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190507' into staging
target-arm queue:
* Stop using variable length array in dc_zva
* Implement M-profile XPSR GE bits
* Don't enable ARMV7M_EXCP_DEBUG from reset
* armv7m_nvic: NS BFAR and BFSR are RAZ/WI if BFHFNMINS == 0
* armv7m_nvic: Check subpriority in nvic_recompute_state_secure()
* fix various minor issues to allow building for Windows-on-ARM64
* aspeed: Set SDRAM size
* Allow system registers for KVM guests to be changed by QEMU code
* raspi: Diagnose requests for too much RAM
* virt: Support firmware configuration with -blockdev
# gpg: Signature made Tue 07 May 2019 12:59:30 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20190507:
target/arm: Stop using variable length array in dc_zva
target/arm: Implement XPSR GE bits
hw/intc/armv7m_nvic: Don't enable ARMV7M_EXCP_DEBUG from reset
hw/intc/armv7m_nvic: NS BFAR and BFSR are RAZ/WI if BFHFNMINS == 0
hw/arm/armv7m_nvic: Check subpriority in nvic_recompute_state_secure()
osdep: Fix mingw compilation regarding stdio formats
util/cacheinfo: Use uint64_t on LLP64 model to satisfy Windows ARM64
qga: Fix mingw compilation warnings on enum conversion
QEMU_PACKED: Remove gcc_struct attribute in Windows non x86 targets
arm: aspeed: Set SDRAM size
arm: Allow system registers for KVM guests to be changed by QEMU code
hw/arm/raspi: Diagnose requests for too much RAM
hw/arm/virt: Support firmware configuration with -blockdev
pflash_cfi01: New pflash_cfi01_legacy_drive()
pc: Rearrange pc_system_firmware_init()'s legacy -drive loop
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently the dc_zva helper function uses a variable length
array. In fact we know (as the comment above remarks) that
the length of this array is bounded because the architecture
limits the block size and QEMU limits the target page size.
Use a fixed array size and assert that we don't run off it.
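A generic sketch of the pattern (the names and the bound are illustrative,
not the actual dc_zva code):

    #include <assert.h>

    #define MAX_BLOCK_PAGES 8   /* assumed worst case: block size / page size */

    static void process_block(void *pages[], int npages)
    {
        void *hostaddr[MAX_BLOCK_PAGES];   /* was: void *hostaddr[npages]; */

        assert(npages <= MAX_BLOCK_PAGES);
        for (int i = 0; i < npages; i++) {
            hostaddr[i] = pages[i];        /* per-page work goes here */
        }
    }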
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190503120448.13385-1-peter.maydell@linaro.org
In the M-profile architecture, if the CPU implements the DSP extension
then the XPSR has GE bits, in the same way as the A-profile CPSR. When
we added DSP extension support we forgot to add support for reading
and writing the GE bits, which are stored in env->GE. We did put in
the code to add XPSR_GE to the mask of bits to update in the v7m_msr
helper, but forgot it in v7m_mrs. We also must not allow the XPSR we
pull off the stack on exception return to set the nonexistent GE bits.
Correct these errors:
* read and write env->GE in xpsr_read() and xpsr_write()
* only set GE bits on exception return if DSP present
* read GE bits for MRS if DSP present
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190430131439.25251-5-peter.maydell@linaro.org
At the moment the Arm implementations of kvm_arch_{get,put}_registers()
don't support having QEMU change the values of system registers
(aka coprocessor registers for AArch32). This is because although
kvm_arch_get_registers() calls write_list_to_cpustate() to
update the CPU state struct fields (so QEMU code can read the
values in the usual way), kvm_arch_put_registers() does not
call write_cpustate_to_list(), meaning that any changes to
the CPU state struct fields will not be passed back to KVM.
The rationale for this design is documented in a comment in the
AArch32 kvm_arch_put_registers() -- writing the values in the
cpregs list into the CPU state struct is "lossy" because the
write of a register might not succeed, and so if we blindly
copy the CPU state values back again we will incorrectly
change register values for the guest. The assumption was that
no QEMU code would need to write to the registers.
However, when we implemented debug support for KVM guests, we
broke that assumption: the code to handle "set the guest up
to take a breakpoint exception" does so by updating various
guest registers including ESR_EL1.
Support this by making kvm_arch_put_registers() synchronize
CPU state back into the list. We sync only those registers
where the initial write succeeds, which should be sufficient.
This commit is the same as commit 823e1b3818 which we
had to revert in commit 942f99c825, except that the bug
which was preventing EDK2 guest firmware running has been fixed:
kvm_arm_reset_vcpu() now calls write_list_to_cpustate().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Eric Auger <eric.auger@redhat.com>
The EXCP_DMP trap is considered legacy.
"In PA-RISC 1.1 (Second Edition) and later revisions, processors must use
traps 26, 27, and 28 which provide equivalent functionality"

Signed-off-by: Nick Hudson <skrll@netbsd.org>
Message-Id: <20190423063621.8203-3-nick.hudson@gmx.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These instructions are present on pcxl and pcxl2 machines,
and are used by NetBSD and OpenBSD. See
https://parisc.wiki.kernel.org/images-parisc/a/a9/Pcxl2_ers.pdf
page 13-9 (195/206)
Signed-off-by: Nick Hudson <skrll@netbsd.org>
Message-Id: <20190423063621.8203-2-nick.hudson@gmx.co.uk>
[rth: Use extending loads, locally managed temporaries.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Enable the FPU by default for the Cortex-M4 and Cortex-M33.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-27-peter.maydell@linaro.org
Implement the VLLDM instruction for v7M for the FPU present case.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-26-peter.maydell@linaro.org
Implement the VLSTM instruction for v7M for the FPU present case.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().
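The core of the check looks roughly like this (helper and field names are indicative of the idea, not lifted verbatim; is_secure is a placeholder for the current bank selector):

    /* On the first FP instruction executed inside the handler: */
    if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
        /* Space was reserved on exception entry but nothing was pushed.
         * Save the FP state now (s0-s15, FPSCR, and more if needed),
         * then clear LSPACT so later FP instructions in the handler
         * proceed normally. */
        v7m_preserve_fp_state(env);
        env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
    }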
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
Pushing registers to the stack for v7M needs to handle three cases:
* the "normal" case where we pend exceptions
* an "ignore faults" case where we set FSR bits but
do not pend exceptions (this is used when we are
handling some kinds of derived exception on exception entry)
* a "lazy FP stacking" case, where different FSR bits
are set and the exception is pended differently
Implement this by changing the existing flag argument that
tells us whether to ignore faults or not into an enum that
specifies which of the 3 modes we should handle.
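The interface change itself is small; a sketch (enumerator names as described above, stack-write signature approximated):

    typedef enum StackingMode {
        STACK_NORMAL,      /* pend derived exceptions as usual */
        STACK_IGNFAULTS,   /* set FSR bits but do not pend */
        STACK_LAZYFP,      /* lazy FP stacking: different FSR bits,
                            * exception pended differently */
    } StackingMode;

    /* Previously this took "bool ignfault" as its last argument. */
    static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
                                ARMMMUIdx mmu_idx, StackingMode mode);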
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-23-peter.maydell@linaro.org
In the v7M architecture, if an exception is generated in the process
of doing the lazy stacking of FP registers, the handling of
possible escalation to HardFault is treated differently to the normal
approach: it works based on the saved information about exception
readiness that was stored in the FPCCR when the stack frame was
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
which pends exceptions during lazy stacking, and implements
this logic.
This corresponds to the pseudocode TakePreserveFPException().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-22-peter.maydell@linaro.org
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.
We are going to need this for the lazy-FP-stacking code.
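A sketch of the new helper and of the existing function becoming a wrapper around it (the ARM_MMU_IDX_M_* encoding is assumed from target/arm, not reproduced exactly):

    ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
                                  bool secstate, bool priv, bool negpri)
    {
        ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;

        if (priv) {
            mmu_idx |= ARM_MMU_IDX_M_PRIV;
        }
        if (negpri) {
            mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
        }
        if (secstate) {
            mmu_idx |= ARM_MMU_IDX_M_S;
        }
        return mmu_idx;
    }

    ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
                                                    bool secstate, bool priv)
    {
        bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);

        return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
    }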
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-21-peter.maydell@linaro.org
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().
Implement this with a new TB flag which tracks whether we
need to create a new FP context.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
The M-profile FPCCR.S bit indicates the security status of
the floating point context. In the pseudocode ExecuteFPCheck()
function it is unconditionally set to match the current
security state whenever a floating point instruction is
executed.
Implement this by adding a new TB flag which tracks whether
FPCCR.S is different from the current security state, so
that we only need to emit the code to update it in the
less-common case when it is not already set correctly.
Note that we will add the handling for the other work done
by ExecuteFPCheck() in later commits.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.
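Conceptually (using the FIELD() macro from hw/registerfields.h; the bit positions here are illustrative):

    /* These two fields deliberately occupy the same TB flag bits.  That
     * is safe because no XScale CPU ever had VFP, so for any given CPU
     * only one interpretation is ever live. */
    FIELD(TBFLAG_A32, VECSTRIDE,   12, 2)   /* VFP vector stride */
    FIELD(TBFLAG_A32, XSCALE_CPAR, 12, 2)   /* XScale coprocessor access */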
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
Move the NS TBFLAG down from bit 19 to bit 6, which has not
been used since commit c1e3781090 in 2015, when we
started passing the entire MMU index in the TB flags rather
than just a 'privilege level' bit.
This rearrangement is not strictly necessary, but means that
we can put M-profile-only bits next to each other rather
than scattered across the flag word.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
Handle floating point registers in exception return.
This corresponds to pseudocode functions ValidateExceptionReturn(),
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-16-peter.maydell@linaro.org
The magic value pushed onto the callee stack as an integrity
check is different if floating point is present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-15-peter.maydell@linaro.org
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
Implement the code which updates the FPCCR register on an
exception entry where we are going to use lazy FP stacking.
We have to defer to the NVIC to determine whether the
various exceptions are currently ready or not.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
Handle floating point registers in exception entry.
This corresponds to the FP-specific parts of the pseudocode
functions ActivateException() and PushStack().
We defer the code corresponding to UpdateFPCCR() to a later patch.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
Currently the code in v7m_push_stack() which detects a violation
of the v8M stack limit simply returns early if it does so. This
is OK for the current integer-only code, but won't work for the
floating point handling we're about to add. We need to continue
executing the rest of the function so that we check for other
exceptions like not having permission to use the FPU and so
that we correctly set the FPCCR state if we are doing lazy
stacking. Refactor to avoid the early return.
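Structurally, the refactor is along these lines (fault-raising and frame-pushing details elided; the helper names and locals here are placeholders):

    static bool v7m_push_stack(ARMCPU *cpu)
    {
        bool stacked_ok = true;
        bool limitviol = false;

        if (frameptr < stack_limit) {
            /* Previously we raised the STKOF fault and returned here,
             * skipping the FPU-permission and lazy-stacking handling. */
            raise_stkof_fault(cpu);                             /* placeholder */
            limitviol = true;
            stacked_ok = false;
        }

        if (!limitviol) {
            stacked_ok = push_integer_frame(cpu) && stacked_ok; /* placeholder */
        }

        /* FP permission checks and FPCCR / lazy-stacking updates now
         * run in either case. */

        return !stacked_ok;
    }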
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
The M-profile CONTROL register has two bits -- SFPA and FPCA --
which relate to floating-point support, and should be RES0 otherwise.
Handle them correctly in the MSR/MRS register access code.
Neither is banked between security states, so they are stored
in v7m.control[M_REG_S] regardless of current security state.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-9-peter.maydell@linaro.org
If the floating point extension is present, then the SG instruction
must clear the CONTROL_S.SFPA bit. Implement this.
(On a no-FPU system the bit will always be zero, so we don't need
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
Correct the decode of the M-profile "coprocessor and
floating-point instructions" space:
* op0 == 0b11 is always unallocated
* if the CPU has an FPU then all insns with op1 == 0b101
are floating point and go to disas_vfp_insn()
For the moment we leave VLLDM and VLSTM as NOPs; in
a later commit we will fill in the proper implementation
for the case where an FPU is present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-7-peter.maydell@linaro.org
Like AArch64, M-profile floating point has no FPEXC enable
bit to gate floating point; so always set the VFPEN TB flag.
M-profile also has CPACR and NSACR similar to A-profile;
they behave slightly differently:
* the CPACR is banked between Secure and Non-Secure
* if the NSACR forces a trap then this is taken to
the Secure state, not the Non-Secure state
Honour the CPACR and NSACR settings. The NSACR handling
requires us to borrow the exception.target_el field
(usually meaningless for M profile) to distinguish the
NOCP UsageFault taken to Secure state from the more
usual fault taken to the current security state.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
The only "system register" that M-profile floating point exposes
via the VMRS/VMSR instructions is FPSCR, and it does not have
the odd special case for rd==15. Add a check to ensure we only
expose FPSCR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-5-peter.maydell@linaro.org
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.
The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.
Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
* the M profile CPACR is banked between security states
* it preserves the invariant that M profile uses no state
inside the cp15 substruct
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
Enforce that for M-profile various FPSCR bits which are RES0 there
but have defined meanings on A-profile are never settable. This
ensures that M-profile code can't enable the A-profile behaviour
(notably vector length/stride handling) by accident.
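In the FPSCR write path this amounts to masking the A-profile-only fields for M profile; a sketch (bit positions per the A-profile FPSCR layout, surrounding update code elided):

    void vfp_set_fpscr(CPUARMState *env, uint32_t val)
    {
        if (arm_feature(env, ARM_FEATURE_M)) {
            /* LEN is bits [18:16], STRIDE is bits [21:20]; both are
             * RES0 for M profile, so never let them become set. */
            val &= ~((7U << 16) | (3U << 20));
        }
        /* ... normal FPSCR update follows ... */
    }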
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-2-peter.maydell@linaro.org
This patch adds support for libgloss semihosting to Nios II bare-metal
emulation. The specification for the protocol can be found in the
libgloss sources.
Signed-off-by: Sandra Loosemore <sandra@codesourcery.com>
Signed-off-by: Julian Brown <julian@codesourcery.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1554321185-2825-3-git-send-email-sandra@codesourcery.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.1-20190426' into staging
ppc patch queue 2019-04-26
Here's the first ppc target pull request for qemu-4.1. This has a
number of things that have accumulated while qemu-4.0 was frozen.
* A number of emulated MMU improvements from Ben Herrenschmidt
* Assorted cleanups from Greg Kurz
* A large set of mostly mechanical cleanups from me to make target/ppc
much closer to compliant with the modern coding style
* Support for passthrough of NVIDIA GPUs using NVLink2
As well as some other assorted fixes.
# gpg: Signature made Fri 26 Apr 2019 07:02:19 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-4.1-20190426: (36 commits)
target/ppc: improve performance of large BAT invalidations
ppc/hash32: Rework R and C bit updates
ppc/hash64: Rework R and C bit updates
ppc/spapr: Use proper HPTE accessors for H_READ
target/ppc: Don't check UPRT in radix mode when in HV real mode
target/ppc/kvm: Convert DPRINTF to traces
target/ppc/trace-events: Fix trivial typo
spapr: Drop duplicate PCI swizzle code
spapr_pci: Get rid of duplicate code for node name creation
target/ppc: Style fixes for translate/spe-impl.inc.c
target/ppc: Style fixes for translate/vmx-impl.inc.c
target/ppc: Style fixes for translate/vsx-impl.inc.c
target/ppc: Style fixes for translate/fp-impl.inc.c
target/ppc: Style fixes for translate.c
target/ppc: Style fixes for translate_init.inc.c
target/ppc: Style fixes for monitor.c
target/ppc: Style fixes for mmu_helper.c
target/ppc: Style fixes for mmu-hash64.[ch]
target/ppc: Style fixes for mmu-hash32.[ch]
target/ppc: Style fixes for misc_helper.c
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Performing a complete flush is ~100 times faster than flushing
256 MiB of 4 KiB pages. Set a limit of 1024 pages and perform a complete
flush afterwards.
This patch significantly speeds up AIX 5.1 and NetBSD-ofppc.
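The heuristic is simply a threshold on the number of pages (helper names from QEMU's generic TLB API; the surrounding invalidation code is elided and the limit name is illustrative):

    #define FLUSH_PAGE_LIMIT 1024

    if (size > FLUSH_PAGE_LIMIT * TARGET_PAGE_SIZE) {
        /* One global flush is far cheaper than thousands of
         * per-page flushes. */
        tlb_flush(CPU(cpu));
    } else {
        for (target_ulong addr = base; addr < base + size;
             addr += TARGET_PAGE_SIZE) {
            tlb_flush_page(CPU(cpu), addr);
        }
    }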
Signed-off-by: Artyom Tarasenko <atar4qemu@gmail.com>
Message-Id: <1555103178-21894-4-git-send-email-atar4qemu@gmail.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With MT-TCG, we are now running translation in a racy way, so we
need to mimic hardware when it comes to updating the R and C bits
by doing byte stores.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190411080004.8690-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With MT-TCG, we are now running translation in a racy way, so we
need to mimic hardware when it comes to updating the R and C bits
by doing byte stores.
The current "store_hpte" abstraction is ill suited for this, we
replace it with two separate callbacks for setting R and C.
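The essential idea, sketched (the byte-offset macro and bit value are illustrative, not the exact hash64 layout constants):

    /* Set the Reference bit of an in-memory HPTE with a single byte
     * store, so a racing update to the Change bit (or to the rest of
     * the entry) cannot be lost -- mimicking what real hardware does. */
    static void spapr_hpte_set_r(PowerPCCPU *cpu, hwaddr hpte_addr)
    {
        /* HPTE_DW1_R_BYTE names the byte of the second doubleword that
         * holds the R bit (illustrative). */
        stb_phys(CPU(cpu)->as, hpte_addr + HPTE_DW1_R_BYTE, 0x01);
    }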
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190411080004.8690-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It appears that during kexec, we run for a while in hypervisor
real mode with LPCR:HR set and LPCR:UPRT clear, which trips
the assertion in ppc_radix64_handle_mmu_fault().
First, this shouldn't be an assertion; it's a guest error.
Then we shouldn't be checking these things in hypervisor real
mode (or in virtual hypervisor guest real mode which is similar)
as the real HW won't use those LPCR bits in those cases anyway,
so technically it's ok to have this discrepancy.
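The shape of the new check (predicate, fault-raising helper, and DSISR value are approximations, not the exact target/ppc names):

    if (!real_mode && !(env->spr[SPR_LPCR] & LPCR_UPRT)) {
        /* Guest misconfiguration, not a QEMU bug: log it and raise a
         * storage interrupt instead of the previous assert(). */
        qemu_log_mask(LOG_GUEST_ERROR,
                      "%s: radix mode with LPCR:UPRT clear\n", __func__);
        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG); /* approximate */
        return 1;
    }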
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190411080004.8690-2-clg@kaod.org>
[dwg: Fix for 32-bit builds]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>