Opcodes in different slots may read and write the same resources (registers,
states). In the absence of resource dependency loops it must be possible
to sort the opcodes to avoid interference.
Record the resources used by each opcode in the bundle, build an opcode
dependency graph and use a topological sort to order its nodes. On
success, translate the opcodes in sorted order; on failure, report the
problem and raise an invalid opcode exception.
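As an illustration of the scheme, here is a minimal sketch with made-up types
(not the actual target/xtensa code); the dependency rule that a reader must be
translated before a writer of the same resource is an assumption of the sketch:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_SLOTS 8

    typedef struct {
        uint32_t reads;    /* bitmask of resources this opcode reads  */
        uint32_t writes;   /* bitmask of resources this opcode writes */
    } SlotOp;

    /* Assumed rule: a reader must be translated before a writer of the
     * same resource, so op b depends on op a if b writes anything a reads. */
    static bool depends_on(const SlotOp *b, const SlotOp *a)
    {
        return (b->writes & a->reads) != 0;
    }

    /* Kahn's topological sort: fill order[] with a safe translation order,
     * return false on a dependency loop (-> invalid opcode exception). */
    static bool sort_bundle(const SlotOp *op, int n, int order[])
    {
        int in_deg[MAX_SLOTS] = { 0 };
        bool done[MAX_SLOTS] = { false };

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i != j && depends_on(&op[i], &op[j])) {
                    in_deg[i]++;
                }
            }
        }
        for (int k = 0; k < n; k++) {
            int pick = -1;

            for (int i = 0; i < n; i++) {
                if (!done[i] && in_deg[i] == 0) {
                    pick = i;
                    break;
                }
            }
            if (pick < 0) {
                return false;       /* cycle in the dependency graph */
            }
            done[pick] = true;
            order[k] = pick;
            for (int i = 0; i < n; i++) {
                if (!done[i] && depends_on(&op[i], &op[pick])) {
                    in_deg[i]--;
                }
            }
        }
        return true;
    }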
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that float16_to_float32 rightly squashes an SNaN to a QNaN.
But pickNaNMulAdd, for ARM, selects SNaNs first, so we have to
preserve the SNaN long enough for the correct NaN to be selected.
Hence float16_to_float32_by_bits.
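For reference, the bit-level widening this is about looks roughly like the
sketch below (a hypothetical helper, not the actual float16_to_float32_by_bits;
denormal inputs are omitted). The key point is that the NaN fraction, including
the quiet bit, is copied verbatim instead of being silenced:

    #include <stdint.h>

    /* Widen an IEEE binary16 to binary32 without silencing SNaNs.
     * Denormal (exp == 0, frac != 0) handling is omitted in this sketch. */
    static uint32_t f16_to_f32_bits(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h >> 15) << 31;
        uint32_t exp  = (h >> 10) & 0x1f;
        uint32_t frac = h & 0x3ff;

        if (exp == 0x1f) {
            /* Inf or NaN: keep the payload (and the quiet bit) as-is. */
            return sign | (0xffu << 23) | (frac << 13);
        }
        if (exp == 0) {
            /* Zero (denormals not handled here). */
            return sign;
        }
        /* Normal number: rebias the exponent from 15 to 127. */
        return sign | ((exp + (127 - 15)) << 23) | (frac << 13);
    }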
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 823e1b3818,
which introduces a regression running EDK2 guest firmware
under KVM:
error: kvm run failed Function not implemented
PC=000000013f5a6208 X00=00000000404003c4 X01=000000000000003a
X02=0000000000000000 X03=00000000404003c4 X04=0000000000000000
X05=0000000096000046 X06=000000013d2ef270 X07=000000013e3d1710
X08=09010755ffaf8ba8 X09=ffaf8b9cfeeb5468 X10=feeb546409010756
X11=09010757ffaf8b90 X12=feeb50680903068b X13=090306a1ffaf8bc0
X14=0000000000000000 X15=0000000000000000 X16=000000013f872da0
X17=00000000ffffa6ab X18=0000000000000000 X19=000000013f5a92d0
X20=000000013f5a7a78 X21=000000000000003a X22=000000013f5a7ab2
X23=000000013f5a92e8 X24=000000013f631090 X25=0000000000000010
X26=0000000000000100 X27=000000013f89501b X28=000000013e3d14e0
X29=000000013e3d12a0 X30=000000013f5a2518 SP=000000013b7be0b0
PSTATE=404003c4 -Z-- EL1t
with
[ 3507.926571] kvm [35042]: load/store instruction decoding not implemented
in the host dmesg.
Revert the change for the moment until we can investigate the
cause of the regression.
Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is a set of VFP instructions which we implement in
disas_vfp_v8_insn() and gate on the ARM_FEATURE_V8 bit.
These were all first introduced in v8 for A-profile, but in
M-profile they appeared in v7M. Gate them on the MVFR2
FPMisc field instead, and rename the function appropriately.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-3-peter.maydell@linaro.org
Instead of gating the A32/T32 FP16 conversion instructions on
the ARM_FEATURE_VFP_FP16 flag, switch to our new approach of
looking at ID register bits. In this case MVFR1 fields FPHP
and SIMDHP indicate the presence of these insns.
This change doesn't alter behaviour for any of our CPUs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-2-peter.maydell@linaro.org
Currently the Arm arm-powerctl.h APIs allow:
* arm_set_cpu_on(), which powers on a CPU and sets its
initial PC and other startup state
* arm_reset_cpu(), which resets a CPU which is already on
(and fails if the CPU is powered off)
but there is no way to say "power on a CPU as if it had
just come out of reset and don't do anything else to it".
Add a new function arm_set_cpu_on_and_reset(), which does this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219125808.25174-5-peter.maydell@linaro.org
Make the M-profile "init-svtor" property be settable after realize.
This matches the hardware, where this is a config signal which
is sampled on CPU reset and can thus be changed between one
reset and another. To do this we have to change the API we
use to add the property.
(We will need this capability for the SSE-200.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219125808.25174-4-peter.maydell@linaro.org
Set up the MMI code to be compiled only for TARGET_MIPS64. This is
needed so that the GPRs are 64-bit and, combined with the MMI
registers, form full 128-bit registers.
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1551183797-13570-2-git-send-email-mateja.marjanovic@rt-rk.com>
No guest support yet
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-13-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(Might need more patch splitting)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-12-clg@kaod.org>
[dwg: Hack to fix compile with some earlier include tweaks of mine]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
That "b" means "base address" and thus shouldn't be in the name
of actual entries and related constants.
This patch keeps the synthetic patb_entry field of the spapr
virtual hypervisor unchanged until I figure out if that has
an impact on the migration stream.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-11-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Our TCG TLB only tags whether an access is HV or guest, so it must
be flushed when the LPIDR is changed.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-10-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Let's use the generic helper tlb_flush_all_cpus_synced() instead
of iterating the CPUs ourselves.
We do lose the optimization of clearing the other CPUs' "need flush"
flags, but this shouldn't be a problem in practice.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-9-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 (arch v3) slightly changes the HPTE format. The B bits move
from the first to the second half of the HPTE, and the AVPN/ARPN
are slightly shorter.
However, under SPAPR, the hypercalls still take the old format
(and probably will for the foreseeable future).
The simplest way to support this is thus to convert the HPTEs from
new to old format when reading them if the MMU model is v3 and there
is no virtual hypervisor, leaving the rest of the code unchanged.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-8-clg@kaod.org>
[dwg: Moved function to .c since there was no real need for it in the .h]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With MTTCG, we can have MMU lookups happening at the same time
as the guest modifying the page tables.
Since the HPTEs of the hash table MMU contain two words (or
doublewords on 64-bit), we need to make sure we read them
in the right order, with the correct memory barrier.
Additionally, when using emulated SPAPR mode, the hypercalls
writing to the hash table must also perform the updates in
the right order.
Note: This part is still not entirely correct.
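The ordering needed here is the usual two-word publish/consume pattern. A
generic sketch of it follows (made-up names and valid-bit position, not the
actual ppc hash-MMU code):

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint64_t pte0;   /* contains the VALID bit */
        _Atomic uint64_t pte1;
    } HashPTE;

    #define HPTE_VALID (1ull << 0)   /* bit position is an assumption */

    /* Writer (e.g. an H_ENTER-style hypercall): publish pte1 before
     * making pte0 valid, so a reader never sees a valid pte0 paired
     * with a stale pte1. */
    static void hpte_store(HashPTE *p, uint64_t pte0, uint64_t pte1)
    {
        atomic_store_explicit(&p->pte1, pte1, memory_order_relaxed);
        atomic_store_explicit(&p->pte0, pte0 | HPTE_VALID,
                              memory_order_release);
    }

    /* Reader (MMU lookup): load pte0 with acquire semantics so the
     * following pte1 load cannot be reordered before it. */
    static int hpte_load(HashPTE *p, uint64_t *pte0, uint64_t *pte1)
    {
        uint64_t v0 = atomic_load_explicit(&p->pte0, memory_order_acquire);

        if (!(v0 & HPTE_VALID)) {
            return 0;
        }
        *pte1 = atomic_load_explicit(&p->pte1, memory_order_relaxed);
        *pte0 = v0;
        return 1;
    }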
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Historically the 64-bit server MMU supports two ways of configuring the
guest "real mode" mapping:
- The "RMA", which is a single chunk of physically contiguous
memory remapped as guest real, controlled by the RMLS
field in the LPCR register and the RMOR register.
- The "VRMA", which uses special PTEs inserted in the partition
hash table by the hypervisor.
POWER9 deprecates the former, which is reflected by the filtering
done in ppc_store_lpcr() which effectively prevents setting of
the RMLS field.
However, when using fully emulated SPAPR machines, our qemu code
currently only knows how to define the guest real mode memory using
RMLS.
Thus you cannot run a SPAPR machine anymore with a POWER9 CPU
model today.
This works around it with a quirk in ppc_store_lpcr() to continue
allowing the RMLS field to be set when using a virtual hypervisor.
Ultimately we will want to implement configuring a VRMA instead
which will also be necessary if we want to migrate a SPAPR guest
between TCG and KVM but this is a lot more work.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that LPCR:HR is set properly for SPAPR, use it for deciding
the translation type, which also works for bare metal
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The HW relies on LPCR:HR along with the PATE to determine whether
to use Radix or Hash mode. In fact it uses LPCR:HR more commonly
than the PATE.
For us, it's also more efficient to do so, especially since unlike
the HW we do not maintain a cache of the current PATE and HV PATE
in a generic place.
Prepare the ground for that by ensuring that LPCR:HR is set
properly on SPAPR machines.
Another option would have been to use a callback to get the PATE,
but that gets messy when implementing bare metal support; it's
much simpler (and faster) to use LPCR.
Since existing migration streams may not have it, fix it up in
spapr_post_load() as well based on the pseudo-PATE entry that
we keep.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This controls whether the External Interrupt (0x500) can be
delivered to the hypervisor or not.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-11-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Adds support for the Hypervisor directed interrupts in addition to the
OS ones.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: - modified the icp_realize() and xive_tctx_realize() to take
into account explicitly the POWER9 interrupt model
- introduced a specific power9_set_irq for POWER9 ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215161648.9600-10-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds support for delivering that exception
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-9-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It's very easy for the CPU-specific has_work() implementation
and the logic in ppc_hw_interrupt() to be subtly out of sync.
This can occasionally allow a CPU to wake up from a PM state
and resume executing past the PM instruction when it should
resume at the 0x100 vector.
This patch detects when that happens and aborts, making it a lot
easier to catch such bugs when testing rather than chasing obscure
guest misbehaviour.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-8-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
And use it to get the correct HILE bit in HID0
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
To better reflect what this does, as it's specific to some of the
P7/P8/P9 PM states, not generic.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-6-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This moves the code to handle waking up from the 0x100 vector
from powerpc_excp() to a separate function, as the former is
already way too big as it is.
No functional change.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
STOP must act differently based on PSSCR:EC on POWER9. When it is set,
STOP acts like the P7/P8 power management instructions and wakes up at
0x100 based on the wakeup conditions in the LPCR.
When PSSCR:EC is clear, however, it will wake up at the next instruction
after STOP (if EE is clear) or take the corresponding interrupts (if
EE is set).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When issuing a power management instruction, we set MSR:EE
to force ppc_hw_interrupt() into calling powerpc_excp()
to deal with the fact that on P7 and P8, the system reset
caused by the wakeup needs to be generated regardless of
the MSR:EE value (using LPCR only).
This, however, means that the OS will see a bogus SRR1:EE
value, which is a problem. It also prevents properly
implementing P9 STOP "light".
So fix this by instead putting some logic in ppc_hw_interrupt()
to decide whether to deliver or not by taking into account the
fact that we are waking up from sleep.
The LPCR isn't checked as this is done in the has_work() test.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Those instructions currently raise an exception from within
the helper. This tends to result in a bogus nip value in
the env context (typically the beginning of the TB). Such
a helper needs a gen_update_nip() first.
This fixes it with a different approach: throw the exception from
translate.c instead of the helper, using gen_exception_nip(), which
does the right thing. Exception EXCP_HLT is also used instead of
POWERPC_EXCP_STOP to effectively exit the CPU execution loop.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg : modified the commit log to comment the use of EXCP_HLT instead
of POWERPC_EXCP_STOP]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215161648.9600-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-21-2019-v2' into staging
MIPS queue for February 21st, 2019, v2
# gpg: Signature made Thu 21 Feb 2019 18:37:04 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-feb-21-2019-v2:
target/mips: fulong2e: Dynamically generate SPD EEPROM data
target/mips: fulong2e: Fix bios flash size
hw/pci-host/bonito.c: Add PCI mem region mapped at the correct address
target/mips: implement QMP query-cpu-definitions command
tests/tcg: target/mips: Add wrappers for MSA integer compare instructions
tests/tcg: target/mips: Change directory name 'bit-counting' to 'bit-count'
tests/tcg: target/mips: Correct path to headers in some test source files
hw/misc: mips_itu: Fix 32/64 bit issue in a line involving shift operator
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch enables QMP-based querying of the available CPU types for
MIPS and MIPS64 platforms.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed a couple of comment typos]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are lots of special cases within these insns. Split the
major argument decode/loading/saving into no_output (compares),
rd_is_dp, and rm_is_dp.
We still need to special case argument load for compare (rd as
input, rm as zero) and vcvt fixed (rd as input+output), but lots
of special cases do disappear.
Now that we have a full switch at the beginning, hoist the ISA
checks from the code generation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move all of the fp helpers out of helper.c into a new file.
This is code movement only. Since helper.c has no copyright
header, take the one from cpu.h for the new file.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For opcodes 0-5, move some if conditions into the structure
of a switch statement. For opcodes 6 & 7, decode everything
at once with a second switch.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was introduced by commit bf8d09694c ("target/arm: Don't clear
supported PMU events when initializing PMCEID1") and identified by
Coverity (CID 1398645).
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190219144621.450-1-aaron@os.amperecomputing.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The "background region" for a v8M MPU is a default which will be used
(if enabled, and if the access is privileged) if the access does
not match any specific MPU region. We were incorrectly using it
always (by putting the condition at the wrong nesting level). This
meant that we would always return the default background permissions
rather than the correct permissions for a specific region, and also
that we would not return the right information in response to a
TT instruction.
Move the check for the background region to the same place in the
logic as the equivalent v8M MPUCheck() pseudocode puts it.
This in turn means we must adjust the condition we use to detect
matches in multiple regions to avoid false-positives.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190214113408.10214-1-peter.maydell@linaro.org
FLIX adds branch and loop instruction variants with 15- and 18-bit-wide
target offsets. Implement them as additional names for the ordinary
branch/loop opcodes.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
There are opcodes that differ only in encoding or in the possible range of
immediate arguments. Allow multiple names for a single opcode translation
table entry to reduce code duplication in such cases.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The requirement for alphabetical opcode sorting in opcode tables is awkward
and does not allow sharing an implementation between multiple opcodes.
Use hash tables to find opcodes by name. Move the implementation from
translate.c to helper.c, its only user, and remove the declaration from
cpu.h.
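The lookup itself is a straightforward GHashTable keyed by mnemonic; a minimal
sketch follows (hypothetical structures, not the actual xtensa tables), which
also shows why several names can share one implementation:

    #include <glib.h>
    #include <stdint.h>

    typedef struct {
        const char *name;
        void (*translate)(void *dc, const uint32_t arg[]);
    } XtensaOpcodeOps;

    /* Build a name -> ops table once per opcode set.  Registering the same
     * ops under several mnemonics is just several inserts. */
    static GHashTable *build_opcode_map(const XtensaOpcodeOps *ops, gsize n)
    {
        GHashTable *map = g_hash_table_new(g_str_hash, g_str_equal);

        for (gsize i = 0; i < n; i++) {
            g_hash_table_insert(map, (gpointer)ops[i].name, (gpointer)&ops[i]);
        }
        return map;
    }

    static const XtensaOpcodeOps *find_opcode(GHashTable *map, const char *name)
    {
        return g_hash_table_lookup(map, name);
    }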
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Don't run xtensa_finalize_config at the time of core registration;
instead, run it at CPU class initialization.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The xtensa-modules part of the test_mmuhifi_c3 core is missing the fixes
that return XTENSA_UNDEFINED for undefined opcodes and mark all data
structures static. Run the sed script from target/xtensa/import_core.sh on
it. This fixes the test_sr tests for missing special registers.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.0-20190219' into staging
ppc patch queue 2019-02-19
Here's the next batch of ppc and spapr patches. Highlights are:
* A bunch of improvements to TCG handling of vector instructions from
Richard Henderson and Marc Cave-Ayland
* Cleanup to the XICS interrupt controller from Greg Kurz, removing
the special KVM subclasses which were a bad idea
* Some refinements to the XIVE interrupt controller from Cédric Le
Goater
* Fix from Fabiano Rosas for a really dumb buffer overflow in the
device tree code for memory hotplug
* Code for allowing access to SPRs from the gdb stub from Fabiano
Rosas
* Assorted minor fixes and cleanups
# gpg: Signature made Mon 18 Feb 2019 13:47:54 GMT
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-4.0-20190219: (43 commits)
target/ppc: convert vmin* and vmax* to vector operations
target/ppc: convert vadd*s and vsub*s to vector operations
target/ppc: Split out VSCR_SAT to a vector field
target/ppc: Add set_vscr_sat
target/ppc: Use mtvscr/mfvscr for vmstate
target/ppc: Add helper_mfvscr
target/ppc: Remove vscr_nj and vscr_sat
target/ppc: Use helper_mtvscr for reset and gdb
target/ppc: Pass integer to helper_mtvscr
target/ppc: convert xxsel to vector operations
target/ppc: convert xxspltw to vector operations
target/ppc: convert xxspltib to vector operations
target/ppc: convert VSX logical operations to vector operations
target/ppc: convert vsplt[bhw] to use vector operations
target/ppc: convert vspltis[bhw] to use vector operations
target/ppc: convert vaddu[b,h,w,d] and vsubu[b,h,w,d] over to use vector operations
target/ppc: convert VMX logical instructions to use vector operations
xics: Drop the KVM ICS class
spapr/irq: Use the "simple" ICS class for KVM
xics: Handle KVM interrupt presentation from "simple" ICS code
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2019-02-18' into staging
QAPI patches for 2019-02-18
# gpg: Signature made Mon 18 Feb 2019 13:44:30 GMT
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2019-02-18:
qapi: move RTC_CHANGE to the target schema
qmp: Deprecate query-events in favor of query-qmp-schema
Revert "qapi-events: add 'if' condition to implicit event enum"
qapi: remove qmp_unregister_command()
qapi: make query-cpu-definitions depend on specific targets
qapi: make query-cpu-model-expansion depend on s390 or x86
qapi: make query-gic-capabilities depend on TARGET_ARM
target.json: add a note about query-cpu* not being s390x-specific
qapi: make s390 commands depend on TARGET_S390X
qapi: make rtc-reset-reinjection and SEV depend on TARGET_I386
qapi: New module target.json
build: Deal with all of QAPI's .o in qapi/Makefile.objs
build-sys: move qmp-introspect per target
qapi: Generate QAPIEvent stuff into separate files
qapi: Prepare for system modules other than 'builtin'
qapi: Clean up modular built-in code generation a bit
qapi: Fix up documentation for recent commit a95291007b
qapi: Belatedly document modular code generation
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move rtc-reset-reinjection and SEV into target.json and make them
conditional on TARGET_I386.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190214152251.2073-10-armbru@redhat.com>
Introduce the z14 GA2 CPU model for QEMU. There are no new features
introduced with this model; it inherits the same feature set as the
z14 GA1.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190212011657.18324-3-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Latest systems and host kernels support mepoch, which is a
feature that was meant to be supported for z14 GA1 from the
get-go. Let's copy it to the z14 GA1 default CPU model.
Machines s390-ccw-virtio-3.1 and older will retain the old CPU
models and will not provide this bit or the extended PTFF
functions in the default model.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Message-Id: <20190212011657.18324-2-walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The extended PTFF features (qsie, qtoue, stoe, stoue) are dependent
on the multiple-epoch facility (mepoch). Let's print a warning if these
features are enabled without mepoch.
While we're at it, let's move the FEAT_GROUP_INIT for mepochptff down
the s390_feature_groups list so it can be properly indexed with its
generated S390FeatGroup enum.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Message-Id: <20190212011657.18324-1-walling@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we now always have PCI support, let's add it to the "qemu" CPU model,
taking care of backwards compatibility.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190212112323.15904-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is a non-privileged instruction that was only implemented
for system mode. However, the stck instruction is used by glibc,
so this was causing SIGILL for programs run under debian stretch.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190212053044.29015-3-richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We will need these from CONFIG_USER_ONLY as well,
which cannot access include/hw/.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190212053044.29015-2-richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We tried to make pci support optional on s390x in the past;
unfortunately, we still require the s390 phb to be created
unconditionally due to backwards compatibility issues.
Instead of sinking more effort into this (including compat
handling for older machines etc.) for non-obvious gains, let's
just make CONFIG_PCI something that is always set on s390x.
Note that you can still fence off pci for the _guest_ if you
provide a cpu model without the zpci feature.
Message-Id: <20190211113255.3837-1-cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The license information in these files is rather confusing. The text
declares LGPL first, but then says that contributions after 2012 are
licensed under the GPL instead. How should the average user who just
downloaded the release tarball know which part is now GPL and which
is LGPL?
Looking at the text of the LGPL (see COPYING.LIB in the top directory),
the license clearly states how this should be done instead:
"3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License."
Thus let's clean up the confusing statements and use the proper GPL
text only.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1549456893-16589-1-git-send-email-thuth@redhat.com>
Acked-by: Laurent Vivier <laurent@vivier.eu>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-18-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-17-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Change the representation of VSCR_SAT such that it is easy
to set from vector code.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-16-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is required before changing the representation of the register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is required before changing the representation of the register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-14-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is required before changing the representation of the register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-13-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These macros are no longer used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-12-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Not setting flush_to_zero from gdb_set_avr_reg was a bug.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-11-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can re-use this helper elsewhere if we're not passing
in an entire vector register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-10-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-9-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190215100058.20015-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The ISA 2.06/2.07 Power Management instructions (doze, nap & rvwinkle)
don't exist on POWER9, so don't enable them.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190128094625.4428-13-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PPC BRANCH exception could bubble up, but this is a QEMU-internal exception
and QEMU then crashed. Instead it should trigger a TRACE exception, according to
the PPC 2.07 book. It could happen only when using branch stepping, which is not
commonly used.
Change gen_prep_dbgex to trigger TRACE. The excp argument is now removed,
since the type of exception can be inferred from the singlestep_enabled flags.
Also remove the guards around gen_exception, since they are unnecessary.
Fixes: 0e3bf48909 ("ppc: add DBCR based debugging")
Signed-off-by: Roman Kapl <rka@sysgo.com>
Message-Id: <20190212121255.2279-1-rka@sysgo.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Some debug stuff we don't need to keep there
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190128094625.4428-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
According to the BookE docs, invalid bits (while undefined behaviour) should
not raise an exception but be ignored. This seems to be implementation
dependent though, and QEMU currently does what e500 CPUs do and raises an
exception for invalid bits. Unfortunately some versions of libstdc++
(and so all programs compiled with it) have lwsync on PPC440, which is
invalid, but on real hardware it's just executed as msync, ignoring the
invalid bits (maybe that's why it went undetected), while such programs
fail on QEMU. This patch changes the invalid mask of msync to allow these
programs to run, but keeps generating an exception on e500 cores to follow
what the hardware does.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows reading and writing of SPRs via GDB:
(gdb) p/x $srr1
$1 = 0x8000000002803033
(gdb) p/x $pvr
$2 = 0x4b0201
(gdb) set $pvr=0x4b0000
(gdb) p/x $pvr
$3 = 0x4b0000
The `info` command can also be used:
(gdb) info registers spr
For this purpose, GDB needs to be provided with an XML description of
the registers (see the gdb-xml directory for examples) and a set of
callbacks for reading and writing the registers must be defined.
The XML file in this case is created dynamically, based on the SPRs
already defined in the machine. This way we avoid the need for several
XML files to suit each possible ppc machine.
The gdb_{get,set}_spr_reg callbacks take an index based on the order
the registers appear in the XML file. This index does not match the
actual location of the registers in the env->spr array so the
gdb_find_spr_idx function does that conversion.
Note: GDB currently needs to know the guest endianness in order to
properly print the registers values. This is done automatically by GDB
when provided with the ELF file or explicitly with the `set endian
<big|little>` command.
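A sketch of the index-remapping idea described above (hypothetical fields and
names, not the actual target/ppc code; host endianness assumed, where the real
callbacks respect guest byte order): the n-th register emitted into the XML
remembers which env->spr slot it refers to, and the callbacks translate the GDB
index through that table:

    #include <stdint.h>
    #include <string.h>

    #define NB_SPRS 1024

    typedef struct {
        uint64_t spr[NB_SPRS];      /* sparse architectural SPR file */
        int gdb_spr_map[NB_SPRS];   /* XML order -> env->spr index   */
        int gdb_spr_count;          /* how many SPRs the XML lists   */
    } CPUEnvSketch;

    /* Called while generating the XML: remember, for the n-th <reg>
     * emitted, which SPR it describes. */
    static void gdb_register_spr(CPUEnvSketch *env, int spr_index)
    {
        env->gdb_spr_map[env->gdb_spr_count++] = spr_index;
    }

    /* GDB read callback: 'n' is the position in the XML file. */
    static int gdb_get_spr_reg(CPUEnvSketch *env, uint8_t *buf, int n)
    {
        if (n < 0 || n >= env->gdb_spr_count) {
            return 0;
        }
        memcpy(buf, &env->spr[env->gdb_spr_map[n]], sizeof(uint64_t));
        return sizeof(uint64_t);
    }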
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Fortunately, the functions affected are so far only called from SVE,
so there is no tail to be cleared. But as we convert more of AdvSIMD
to gvec, this will matter.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For same-sign saturation, we have tcg vector operations. We can
compute the QC bit by comparing the saturated value against the
unsaturated value.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Change the representation of this field such that it is easy
to set from vector code.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Given that we mask bits properly on set, there is no reason
to mask them again on get. We failed to clear the exception
status bits, 0x9f, which means that the wrong value would be
returned on get. Except in the (probably normal) case in which
the set clears all of the bits.
Simplify the code in set to also clear the RES0 bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Minimize the code within a macro by splitting out a helper function.
Use deposit32 instead of manual bit manipulation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The components of this register are stored in several
different locations.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are now unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 32-bit PMIN/PMAX has been decomposed to scalars,
and so can be trivially expanded inline.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since we're now handling a == b generically, we no longer need
to do it by hand within target/arm/.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At the moment the Arm implementations of kvm_arch_{get,put}_registers()
don't support having QEMU change the values of system registers
(aka coprocessor registers for AArch32). This is because although
kvm_arch_get_registers() calls write_list_to_cpustate() to
update the CPU state struct fields (so QEMU code can read the
values in the usual way), kvm_arch_put_registers() does not
call write_cpustate_to_list(), meaning that any changes to
the CPU state struct fields will not be passed back to KVM.
The rationale for this design is documented in a comment in the
AArch32 kvm_arch_put_registers() -- writing the values in the
cpregs list into the CPU state struct is "lossy" because the
write of a register might not succeed, and so if we blindly
copy the CPU state values back again we will incorrectly
change register values for the guest. The assumption was that
no QEMU code would need to write to the registers.
However, when we implemented debug support for KVM guests, we
broke that assumption: the code to handle "set the guest up
to take a breakpoint exception" does so by updating various
guest registers including ESR_EL1.
Support this by making kvm_arch_put_registers() synchronize
CPU state back into the list. We sync only those registers
where the initial write succeeds, which should be sufficient.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Dongjiu Geng <gengdongjiu@huawei.com>
There are a whole bunch more registers in the CPUID space which are
currently not used but are exposed as RAZ. To avoid too much
duplication we expand ARMCPRegUserSpaceInfo to understand glob
patterns so we only need one entry to tweak whole ranges of registers.
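The matching itself is plain glob matching; for illustration only (the real
ARMCPRegUserSpaceInfo handling has its own code), the effect of one pattern
entry is equivalent to:

    #include <fnmatch.h>
    #include <stdbool.h>

    /* Does this user-space entry apply to the named register?
     * A pattern like "ID_AA64PFR*_EL1" covers a whole range with one entry. */
    static bool uspace_entry_matches(const char *pattern, const char *regname)
    {
        return fnmatch(pattern, regname, 0) == 0;
    }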
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-5-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As this is a single register we could expose it with a simple ifdef
but we use the existing modify_arm_cp_regs mechanism for consistency.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-4-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A number of CPUID registers are exposed to userspace by modern Linux
kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's
user-mode emulation we don't need to emulate the kernel's trap but just
return the value the trap would have returned. To avoid too much #ifdef
hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs)
before defining the registers. The modify routine is driven by a
simple data structure which describes which bits are exported and
which are fixed.
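A sketch of what such a data-structure-driven rewrite boils down to (made-up
struct and helper names, not the actual modify_arm_cp_regs): pass through the
exported bits of the real value and force everything else to a fixed constant:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        const char *name;       /* which register this rule applies to       */
        uint64_t exported_bits; /* bits passed through from the real value   */
        uint64_t fixed_bits;    /* value presented for all the other bits    */
    } UserSpaceRegRule;

    static uint64_t apply_userspace_rule(const UserSpaceRegRule *rules,
                                         size_t n, const char *regname,
                                         uint64_t real_value)
    {
        for (size_t i = 0; i < n; i++) {
            if (strcmp(rules[i].name, regname) == 0) {
                return (real_value & rules[i].exported_bits)
                       | (rules[i].fixed_bits & ~rules[i].exported_bits);
            }
        }
        return 0;   /* unknown ID registers read as zero in this sketch */
    }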
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-3-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Although technically not visible to userspace, the kernel does make
them visible via a trap-and-emulate ABI. We provide a new permission
mask (PL0U_R) which maps to PL0_R for CONFIG_USER builds, and adjust
the minimum permission check accordingly.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-2-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The lo,hi order is different from the comments, and commit 1ec182c333
("target/arm: Convert to HAVE_CMPXCHG128") changed the original code
logic. So just restore the old code logic from before that commit:
do_paired_cmpxchg64_be():
cmpv = int128_make128(env->exclusive_high, env->exclusive_val);
newv = int128_make128(new_hi, new_lo);
This fixes a bug that would only be visible for big-endian
AArch64 guest code.
Fixes: 1ec182c333 ("target/arm: Convert to HAVE_CMPXCHG128")
Signed-off-by: Catherine Ho <catherine.hecx@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1548985244-24523-1-git-send-email-catherine.hecx@gmail.com
[PMM: added note that bug only affects BE guests]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HACR_EL2 is a register with IMPDEF behaviour, which allows
implementation specific trapping to EL2. Implement it as RAZ/WI,
since QEMU's implementation has no extra traps. This also
matches what h/w implementations like Cortex-A53 and A57 do.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190205181218.8995-1-peter.maydell@linaro.org
Introduce MTTCG-enabled QEMU builds for mips32, mipsn32, and mips64.
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Hold BQL whenever mips_vpe_wake() is invoked.
Without this patch, MIPS MT with MTTCG enabled triggers an abort in
tcg_handle_interrupt() due to an unlocked access to cpu_interrupt().
This patch makes sure that the BQL is held in this case.
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Make sure BQL is held for all interrupt requests.
For MTTCG-enabled configurations, handling soft and hard interrupts
between vCPUs must be properly locked. By acquiring BQL, make sure
all paths triggering an IRQ are synchronized.
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Completely rewrite conditional stores handling to use cmpxchg.
This eliminates the need for separate implementations of SC instruction
emulation for user and system emulation.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Compare only virtual addresses in LL/SC sequence emulation.
Until this patch, physical addresses had been compared in the SC part of
the LL/SC sequence, even though such comparisons could be avoided. Getting
rid of them allows throwing away the SC helpers and having a common SC
implementation in user and system mode, avoiding the need for two
separate implementations selected by #ifdef CONFIG_USER_ONLY.
Correct guest software should not rely on LL/SC if it accesses the
same physical address via different virtual addresses or if the page
mapping gets changed between LL and SC by manipulating TLB entries.
The MIPS Instruction Set Manual clearly says that an RMW sequence must
use the same address in the LL and SC (virtual address, physical
address, cacheability and coherency attributes must be identical).
Otherwise, the result of the SC is not predictable. This patch takes
advantage of this fact and removes the virtual->physical address
translation from the SC helper.
lladdr served as the Coprocessor 0 LLAddr register, which captures the
physical address of the most recent LL instruction, and lladdr was also
used for comparison with the following SC's physical address. This patch
changes the meaning of lladdr: it now only keeps the virtual address of
the most recent LL. Additionally, a CP0_LLAddr field is introduced, which
is the actual Coprocessor 0 LLAddr register that the guest can access.
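Putting the two changes together, the SC path reduces to something like the
sketch below (hypothetical names, 32-bit word case, a user-mode-style host
pointer, and the GCC __atomic builtin; this is not the actual MIPS helper):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t lladdr;    /* virtual address of the last LL */
        uint32_t llval;     /* value loaded by the last LL    */
    } CPUStateSketch;

    /* Store-conditional: succeeds (returns 1) only if the address matches
     * the preceding LL and the memory word still holds the loaded value. */
    static uint32_t emulate_sc(CPUStateSketch *env, uint32_t *host_addr,
                               uint64_t vaddr, uint32_t new_val)
    {
        uint32_t expected = env->llval;

        if (vaddr != env->lladdr) {
            return 0;
        }
        return __atomic_compare_exchange_n(host_addr, &expected, new_val,
                                           false, __ATOMIC_SEQ_CST,
                                           __ATOMIC_SEQ_CST) ? 1 : 0;
    }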
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-sf1' into staging
RISC-V Patches for the 4.0 Soft Freeze, Part 1
This patch set contains a handful of patches I've collected over the
last few weeks. There's nothing really fundamental, but I thought it
would be good to send these out now as there are some other patch sets
on the mailing list that are getting ready to go.
As far as the actual patches, there's:
* A set that cleans up our FS dirty-mode handling.
* Support for writing MISA.
* The removal of Michael as a maintainer.
* A fix to {m,s}counteren handling.
* A fix to make sure the kernel's start address is computed correctly on
32-bit targets.
This makes my "RISC-V Patches for 3.2, Part 3" pull request defunct, as
it contains the same patches but based on a newer master. As usual,
I've tested this using a Fedora boot on the latest Linux. This patch
set does not include Bastian's decodetree patches because there were
some merge conflicts and while I've cleaned them up I want to get a
round of review first.
# gpg: Signature made Wed 13 Feb 2019 15:37:50 GMT
# gpg: using RSA key 00CE76D1834960DFCE886DF8EF4CA1502CCBAB41
# gpg: issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg: aka "Palmer Dabbelt <palmer@sifive.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-4.0-sf1:
riscv: Ensure the kernel start address is correctly cast
target/riscv: fix counter-enable checks in ctr()
MAINTAINERS: Remove Michael Clark as a RISC-V Maintainer
RISC-V: Add misa runtime write support
RISC-V: Add misa.MAFD checks to translate
RISC-V: Add misa to DisasContext
RISC-V: Add priv_ver to DisasContext
RISC-V: Use riscv prefix consistently on cpu helpers
RISC-V: Implement mstatus.TSR/TW/TVM
RISC-V: Mark mstatus.fs dirty
RISC-V: Split out mstatus_fs from tb_flags
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It looks like the operands were exchanged. The HP bootrom tests the
following sequence:
0x00000000f0004064: ldil L%-66666800,r7
0x00000000f0004068: addi 19f,r7,r7
0x00000000f000406c: addi -1,r0,rp
0x00000000f0004070: addi f,r0,r4
0x00000000f0004074: addi 1,r4,r5
0x00000000f0004078: dcor rp,r6
0x00000000f000407c: cmpb,<>,n r6,r7,0xf000411
This returned 0x66666661 instead of the expected 0x9999999f in QEMU.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211181907.2219-6-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These conditions include the signed overflow bit. See page 5-3
of the Parisc 1.1 Architecture Reference Manual for details.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
[rth: More changes for c == 3, to compute (N^V)|Z properly.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We will be fixing do_cond vs signed overflow, which requires
that do_log_cond not rely on do_cond.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When QEMU is compiled with -O0, these functions are inlined, which
causes a wrong restart address to be generated for the TB.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211181907.2219-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now that the implementation is entirely within the generated
decode function, eliminate the wrapper.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With decodetree.py, the specializations would conflict so we
must have a single entry point for all variants of OR.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert the BREAK instruction first, as a starting point for the
decodetree conversion.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of returning DisasJumpType, immediately store it.
Return true in preparation for conversion to the decodetree script.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Access to a counter in U-mode is permitted only if the corresponding
bit is set in both mcounteren and scounteren. The current code
ignores mcounteren and checks only scounteren for U-mode access.
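For illustration, the intended check amounts to the sketch below (the function name and field accesses are assumptions, not the actual target/riscv code):

    /* A U-mode access to counter 'ctr_index' is permitted only when the
     * corresponding bit is set in both mcounteren and scounteren. */
    static bool u_mode_counter_enabled(CPURISCVState *env, int ctr_index)
    {
        target_ulong mask = (target_ulong)1 << ctr_index;
        return (env->mcounteren & mask) && (env->scounteren & mask);
    }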
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This patch adds support for writing misa. misa is validated based
on rules in the ISA specification. 'E' is mutually exclusive with
all other extensions. 'D' depends on 'F', so the 'D' bit is dropped
if 'F' is not present. A conservative approach to consistency is
taken by flushing the translation cache on misa writes. misa_mask
is added to the CPU struct to store the original set of extensions.
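For illustration, the validation described above boils down to something like the following sketch (the helper name and the exact handling of conflicting bits are assumptions, not the actual code):

    static target_ulong validate_misa(CPURISCVState *env, CPUState *cs,
                                      target_ulong val)
    {
        val &= env->misa_mask;                /* never enable unsupported extensions */
        if ((val & RVE) && (val & ~(target_ulong)RVE)) {
            val &= ~(target_ulong)RVE;        /* 'E' is mutually exclusive with the rest */
        }
        if ((val & RVD) && !(val & RVF)) {
            val &= ~(target_ulong)RVD;        /* 'D' depends on 'F' */
        }
        tb_flush(cs);                         /* conservative: flush the translation cache */
        return val;
    }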
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Add misa checks for the M, A, F and D extensions, generating an illegal
instruction exception when an instruction from an absent extension is
executed. This improves emulation accuracy for harts with a limited set
of extensions.
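The added guards are roughly of the following shape (a sketch; has_ext() reads the misa copy that the DisasContext patch described below keeps in the translation context):

    static bool has_ext(DisasContext *ctx, target_ulong ext)
    {
        return (ctx->misa & ext) != 0;
    }

    /* e.g. when translating an M-extension multiply: */
    if (!has_ext(ctx, RVM)) {
        gen_exception_illegal(ctx);
        return;
    }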
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
gen methods should access state from DisasContext. Add misa
field to the DisasContext struct and remove CPURISCVState
argument from all gen methods.
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The gen methods should access state from DisasContext. Add priv_ver
field to the DisasContext struct.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
* Add riscv prefix to raise_exception function
* Add riscv prefix to CSR read/write functions
* Add riscv prefix to signal handler function
* Add riscv prefix to get fflags function
* Remove redundant declaration of riscv_cpu_init
and rename cpu_riscv_init to riscv_cpu_init
* Rename riscv_set_mode to riscv_cpu_set_mode
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This adds the necessary minimum to support S-mode
virtualization for priv ISA >= v1.10
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Co-authored-by: Matthew Suozzo <msuozzo@google.com>
Co-authored-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Modified from Richard Henderson's patch [1] to integrate
with the new control and status register implementation.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg07034.html
Note: the f* CSRs already mark mstatus.FS dirty using
env->mstatus |= mstatus.FS so the bug in the first
spin of this patch has been fixed in a prior commit.
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Merge gen_callwi and gen_callw into their only users, translate_callw
and translate_callxw. Extract jump slot adjustment logic into a separate
function and use it in gen_jumpi and translate_callw.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Use libisa to extract whether an opcode uses windowed registers and
construct the mask based on that. This only leaves a special case for
the 'entry' opcode, as it needs to probe a register dynamically.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
xtensa-modules.c files produced by recent Tensilica tools have the
Opcode_*_encode_fns arrays defined as static. Don't add an extra
'static' in front of them when importing.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
It's either "GNU *Library* General Public License version 2" or "GNU
Lesser General Public License version *2.1*", but there was no "version
2.0" of the "Lesser" license. So assume that version 2.1 is meant here.
Also the files mentioned the GPL instead of the LGPL after declaring
that the files are licensed under the LGPL, so change these spots to
use LGPL, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1549266858-5043-1-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The PA-RISC specification says: "Setting the PSW Q-bit, PSW{28}, to 1
with this instruction, if it was not already 1, is an undefined
operation." However, at least HP-UX 10.20 sets the Q bit from 0 to 1
with the SSM instruction. I tested this on both an HP9000/712 and an
HP9000/785/C3750; both machines set the Q bit from 0 to 1 without
raising an exception. This makes HP-UX 10.20 progress a little bit further.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190129191402.29539-1-svens@stackframe.org>
[rth: Add a comment to the code as well.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While doing 'set $pcoqh=0xf0000000' I triggered the assertion below.
The argument order for deposit64() is wrong, and val needs to be
moved to the end.
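For reference, deposit64() takes the start bit and field length before the field value, so the fix amounts to a reordering along these lines (operands shown are illustrative):

    /* wrong: the value lands in the start/length arguments and trips the assert */
    reg = deposit64(reg, val, 0, 32);
    /* right: start bit, field length, then the value to deposit */
    reg = deposit64(reg, 0, 32, val);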
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190128165333.3814-1-svens@stackframe.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* Implement Armv8.5-BTI extension for system emulation mode
* Implement the PR_PAC_RESET_KEYS prctl() for linux-user mode's Armv8.3-PAuth support
* Support TBI (top-byte-ignore) properly for linux-user mode
* gdbstub: allow killing QEMU via vKill command
* hw/arm/boot: Support DTB autoload for firmware-only boots
* target/arm: Make FPSCR/FPCR trapped-exception bits RAZ/WI
-----BEGIN PGP SIGNATURE-----
iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAlxZwhYZHHBldGVyLm1h
eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3uoCEACm4ds3KGV+bA1dtC367Th7
UKsxiVJ6xD7d1BaN3TyofkLTp0aJZgaYKqFjFhnagP8JoDgerdbno0xZ/9Wu2xIC
CFopSO9MP373wdBy/fbIoYoiSle0P/Gofk40C8HnbRb+C4mj2X0oft33DVlo50RK
p7JZSvHUqvCGrWLqeIKjJ1R+U31/dBx9Xprcg75EOiGGc+9Urb9w3Zmj11QE/eHh
X6cFMM6xUn2aDYLRkbyHNjSOADHehtC/UhnHOpnsiQSnIfkYudF/pwDOPuBjmZkm
9rv8DzR9KvAy2ybbD4lrywH7W00QAnS2COVmpcFidWe9ur+glPMCk2XAuuWOK5J3
+WFsWxCg3VuZ74PL/DclKNp7QqgKhloTV0Q1TPp0z894HYca8faIXEJLoLtHSR7+
2cAFJ7vaj9FMw4wn6XGrcx7olhPy81BhzM2g0eTSOb8T8Fe1mje0Q9Zg5Sc+sTYj
5vIjl37fHclBjxhnD3+F+ZQ+P2GK4/5gFPY9gezXPQWJAtD80MLBn6h/fvT8tTJm
DlaMqFrIiQatllQDm9DmhNJg8707t6m8A6FIjhRDbfjS8Uw45Q+H2KdUfnVQJP8N
1ZCbTGRfhl4qwUy3jigYM448W10/T1l4/0BFcXdt/qc6C3QSjsqfHPlsft4XIaRf
rUgbqfNBHr/i1n+SN5L4KQ==
=XEy2
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190205' into staging
target-arm queue:
* Implement Armv8.5-BTI extension for system emulation mode
* Implement the PR_PAC_RESET_KEYS prctl() for linux-user mode's Armv8.3-PAuth support
* Support TBI (top-byte-ignore) properly for linux-user mode
* gdbstub: allow killing QEMU via vKill command
* hw/arm/boot: Support DTB autoload for firmware-only boots
* target/arm: Make FPSCR/FPCR trapped-exception bits RAZ/WI
# gpg: Signature made Tue 05 Feb 2019 17:04:22 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20190205: (22 commits)
target/arm: Make FPSCR/FPCR trapped-exception bits RAZ/WI
hw/arm/boot: Support DTB autoload for firmware-only boots
hw/arm/boot: Clarify why arm_setup_firmware_boot() doesn't set env->boot_info
hw/arm/boot: Factor out "set up firmware boot" code
hw/arm/boot: Factor out "direct kernel boot" code into its own function
hw/arm/boot: Fix block comment style in arm_load_kernel()
gdbstub: allow killing QEMU via vKill command
target/arm: Enable TBI for user-only
target/arm: Compute TB_FLAGS for TBI for user-only
target/arm: Clean TBI for data operations in the translator
target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore
tests/tcg/aarch64: Add pauth smoke test
linux-user: Implement PR_PAC_RESET_KEYS
target/arm: Enable BTI for -cpu max
target/arm: Set btype for indirect branches
target/arm: Reset btype for direct branches
target/arm: Default handling of BTYPE during translation
target/arm: Cache the GP bit for a page in MemTxAttrs
exec: Add target-specific tlb bits to MemTxAttrs
target/arm: Add BT and BTYPE to tb->flags
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The {IOE, DZE, OFE, UFE, IXE, IDE} bits in the FPSCR/FPCR are for
enabling trapped IEEE floating point exceptions (where IEEE exception
conditions cause a CPU exception rather than updating the FPSR status
bits). QEMU doesn't implement this (and nor does the hardware we're
modelling), but for implementations which don't implement trapped
exception handling these control bits are supposed to be RAZ/WI.
This allows guest code to test for whether the feature is present
by trying to write to the bit and checking whether it sticks.
QEMU is incorrectly making these bits read as written. Make them
RAZ/WI as the architecture requires.
In particular this was causing problems for the NetBSD automatic
test suite.
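A minimal sketch of the resulting behaviour (the function name is illustrative; the mask covers IOE/DZE/OFE/UFE/IXE at bits 8-12 and IDE at bit 15):

    #define FPCR_TRAP_ENABLE_MASK 0x9f00      /* IOE, DZE, OFE, UFE, IXE, IDE */

    static void fpcr_write_sketch(CPUARMState *env, uint32_t val)
    {
        val &= ~FPCR_TRAP_ENABLE_MASK;        /* RAZ/WI: writes to these bits are ignored */
        /* ... store the remaining FPSCR/FPCR bits as before ... */
    }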
Reported-by: Martin Husemann <martin@netbsd.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190131130700.28392-1-peter.maydell@linaro.org
This has been enabled in the linux kernel since v3.11
(commit d50240a5f6cea, 2013-09-03,
"arm64: mm: permit use of tagged pointers at EL0").
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190204132126.3255-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enables, but does not turn on, TBI for CONFIG_USER_ONLY.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190204132126.3255-4-richard.henderson@linaro.org
[PMM: adjusted #ifdeffery to placate clang, which otherwise complains
about static functions that are unused in the CONFIG_USER_ONLY build]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will allow TBI to be used in user-only mode, as well as
avoid ping-ponging the softmmu TLB when TBI is in use. It
will also enable other armv8 extensions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190204132126.3255-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out gen_top_byte_ignore in preparation for handling these
data accesses; the new tbflags field is not yet honored.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190204132126.3255-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is all of the non-exception cases of DISAS_NORETURN.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190128223118.5255-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The branch target exception for guarded pages has high priority,
and only 8 instructions are valid for that case. Perform this
check before doing any other decode.
Clear BTYPE after all insns that neither set BTYPE nor exit via
exception (DISAS_NORETURN).
Not yet handled are insns that exit via DISAS_NORETURN for some
other reason, like direct branches.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Caching the bit means that we will not have to re-walk the
page tables to look up the bit during translation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190128223118.5255-6-richard.henderson@linaro.org
[PMM: no need to OR in guarded bit status]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Place this in its own field within ENV, as that will
make it easier to reset from within TCG generated code.
With the change to pstate_read/write, exception entry
and return are automatically handled.
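Concretely, the fold-in/fold-out in pstate_read()/pstate_write() looks roughly like this (BTYPE occupies PSTATE bits [11:10]; a sketch, not the exact code):

    /* in pstate_read(): */
    pstate |= env->btype << 10;
    /* in pstate_write(): */
    env->btype = extract32(val, 10, 2);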
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Also create field definitions for id_aa64pfr1 from ARMv8.5.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190128223118.5255-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
vhost user blk discard/write zeroes features
misc cleanups and fixes all over the place
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-----BEGIN PGP SIGNATURE-----
iQEcBAABAgAGBQJcWbMUAAoJECgfDbjSjVRp2acH+wa8abfyQIpVMji+cdvcw7Wo
yJjCoPxX0y+tm7FGp6YjyyQhSzjUYRS+71NL6xkZVxs8yh4ZCbb/xDRYmrVG2cb6
6tUcqp1zlMp+w+kVKS4m4WTb0Z1bIODOi99tNBXxYjWY9lXF/hjZbhRpt6vL/NaP
hqYtp+wlhyjxsY/GFn5XZLY7F6+QKq/lsMJD80FvbtHBh/ngqmgyoouNwYzg1faW
M3wD5A3IOXLzU+MGK6uCtKK0+t3xzvaVJL7zChimKkn+y+Pg9T+s04zQS739bJRg
dYYrVK3IOt2tvVxBZnnUm9eBIxxElAxRssmERPe2i0mBRSXYKomuVZrXJ6ROhjA=
=TxWc
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pci, pc, virtio: fixes, cleanups, features
vhost user blk discard/write zeroes features
misc cleanups and fixes all over the place
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Tue 05 Feb 2019 16:00:20 GMT
# gpg: using RSA key 281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full]
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [full]
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
contrib/libvhost-user: cleanup casts
r2d: fix build on mingw
mmap-alloc: fix hugetlbfs misaligned length in ppc64
mmap-alloc: unfold qemu_ram_mmap()
i386, acpi: cleanup build_facs by removing second unused argument
fw_cfg: fix the life cycle and the name of "qemu_extra_params_fw"
acpi: Make TPM 2.0 with TIS available as MSFT0101
hw/virtio: Use CONFIG_VIRTIO_PCI switch instead of CONFIG_PCI
vhost-user-blk: add discard/write zeroes features support
contrib/vhost-user-blk: fix the compilation issue
pci/msi: export msi_is_masked()
intel_iommu: reset intr_enabled when system reset
intel_iommu: fix operator in vtd_switch_address_space
hw: virtio-pci: drop DO_UPCAST
include: update Linux headers to 4.21-rc1/5.0-rc1
scripts/update-linux-headers.sh: adjust for Linux 4.21-rc1 (or 5.0-rc1)
contrib/libvhost-user: switch to uint64_t
virtio: add checks for the size of the indirect table
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
A machine using the hvf accelerator with SMP sometimes hangs at boot
because all processors execute instructions at startup, including the
early I/O emulation. We should allow only the bootstrap processor to
initialize the machine and then wake up the other processors by
interrupt.
Signed-off-by: Heiher <r@hev.cc>
Message-Id: <20190123073402.28465-1-r@hev.cc>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The machine description we send is silently discarded by GDB, which
falls back to its default machine description, because the XML parse
fails on a <feature> element nested within another <feature>.
As a result, changes to the XML in the QEMU source code have no effect.
In addition, the default machine description includes fs_base, which
fails to be retrieved and breaks the whole register window. Add fs_base
and the other control registers.
Signed-off-by: Doug Gale <doug16k@gmail.com>
Message-Id: <20190124040457.2546-1-doug16k@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In 16-bit addressing mode, when Mod = 0 and R/M = 6, the decoded
displacement doesn't reach decode_linear_addr and gets lost.
Instructions that use this ModRM combination therefore always get a
pointer with zero offset from the beginning of the DS segment.
The change fixes drawing in F-BIRD from day 1 of '18 advent calendar.
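A sketch of the special case being fixed (identifier names are assumptions and do not necessarily match the hvf decoder):

    /* 16-bit addressing, Mod = 0, R/M = 6: a bare disp16 with no base
     * register, so the displacement must flow into the linear address. */
    if (decode->modrm.mod == 0 && decode->modrm.rm == 6) {
        op->ptr = decode->displacement;       /* offset within DS */
    }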
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20190125154743.14498-1-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
MPX support is being phased out by Intel and actually I am not sure that
OS X has ever enabled it in XCR0. Drop it from the Hypervisor.framework
acceleration.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commit 5131dc433d.
For now, the new instruction 'PCONFIG' will not be exposed to the guest.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1545227081-213696-3-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Processor tracing is not yet implemented for KVM and it will be an
opt-in feature requiring a special module parameter.
Disable it, because it is wrong to enable it by default and it is not
possible that anyone has ever actually used it.
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
PCONFIG is not available to guests; it must be specifically enabled
using the PCONFIG_ENABLE execution control. Disable it, because
no one can ever use it.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1545227081-213696-2-git-send-email-robert.hu@linux.intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- fix CPU wakeup on runstall changes; expose runstall as an IRQ line;
- place mini-bootloader at the BSP reset vector;
- expose CPU core frequency in XTFPGA board FPGA register;
- rearrange access to external interrupts of xtensa cores;
- add MX interrupt distributor and use it on SMP XTFPGA boards;
- add test_mmuhifi_c3 xtensa core variant;
- raise number of CPUs that can be instantiated on XTFPGA boards.
-----BEGIN PGP SIGNATURE-----
iQJHBAABCAAxFiEEK2eFS5jlMn3N6xfYUfnMkfg/oEQFAlxYi5QTHGpjbXZia2Jj
QGdtYWlsLmNvbQAKCRBR+cyR+D+gRCyJEACR/IQ7LVFYBczo450yLOoAPWU/8/iU
BmmeqM/BKZHBZN5HyS7zdBbDGHe/aQd+hlwG3hnxaJTfqTSOW0QxOmXkI2U5yQrl
SfGErboON0FnDreQuLlsRN0QQYI7pFJiwpUj5sqOYYghfbEfwmefUNzWlJr17AyC
AusLrGe3SBJuv40S78C+S7e4XlaGxPn5OwCvpvH+o8iDj9TCP7+vXV8N4fdackH+
i323F9OdNSmtKXL/lDVG2bf5eFw+koLOGPsqjdT/WIfRVg45VXd7ZFQeMZxuQrh4
8NwlQx9bxM4rbPBrUpsPDulic5udmMJvJ31CFe3YXu48skG90E3NxtOmUBGvTgMl
lpM9RzZfCZTAIfPqw0LHiQ4kaioMlAO40wtCNxqm78R5UL/FseuzNUN+eLz7qCL1
rTBKOQ5CJMFZWDkQjcwIYz54vi8QCt0jMWD4MzzIaU3Nf3Ak5uAaeEQE+b48lBvQ
0EhUzwh1Ea9An2JAyhAqlNqifs+0HsG4M5tqhVj+9IITlzbZe/zLswgYaHd5D75v
ElGoFov9Gi/GM5LxFzNRN1HwFCFWeHTt6sIHIo2oR1+mPrNhS+MJyqrnkKgIYOA/
plSJqO//iyS8wGL7c7UXVV3cixGOv7SSZDtSKFZb+sD7aML/y3DJnFTmABOl4kdt
OBYFg8+jcrma3g==
=8D60
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/xtensa/tags/20190204-xtensa' into staging
target/xtensa: SMP updates and various fixes
- fix CPU wakeup on runstall changes; expose runstall as an IRQ line;
- place mini-bootloader at the BSP reset vector;
- expose CPU core frequency in XTFPGA board FPGA register;
- rearrange access to external interrupts of xtensa cores;
- add MX interrupt distributor and use it on SMP XTFPGA boards;
- add test_mmuhifi_c3 xtensa core variant;
- raise number of CPUs that can be instantiated on XTFPGA boards.
# gpg: Signature made Mon 04 Feb 2019 18:59:32 GMT
# gpg: using RSA key 2B67854B98E5327DCDEB17D851F9CC91F83FA044
# gpg: issuer "jcmvbkbc@gmail.com"
# gpg: Good signature from "Max Filippov <filippov@cadence.com>" [unknown]
# gpg: aka "Max Filippov <max.filippov@cogentembedded.com>" [full]
# gpg: aka "Max Filippov <jcmvbkbc@gmail.com>" [full]
# Primary key fingerprint: 2B67 854B 98E5 327D CDEB 17D8 51F9 CC91 F83F A044
* remotes/xtensa/tags/20190204-xtensa:
hw/xtensa: xtfpga: raise CPU number limit
target/xtensa: add test_mmuhifi_c3 core
hw/xtensa: xtfpga: use MX PIC for SMP
target/xtensa: add MX interrupt controller
target/xtensa: expose core runstall as an IRQ line
target/xtensa: rearrange access to external interrupts
target/xtensa: drop function xtensa_timer_irq
target/xtensa: fix access to the INTERRUPT SR
hw/xtensa: xtfpga: use core frequency
hw/xtensa: xtfpga: fix bootloader placement in SMP
target/xtensa: add qemu_cpu_kick to xtensa_runstall
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As floating point registers overlay some vector registers and we want
to make use of the general tcg_gvec infrastructure that assumes vectors
are not stored in globals but in memory, don't model floating point
registers as globals anymore. This is then similar to how arm handles
it.
Reading/writing a floating point register means reading/writing memory now.
Break up the ugly in2_x2() handling that modifies both in1 and in2
into in2_x2l and in2_x2h. This makes things more readable. Also,
in1_x1() is ugly as it touches out/out2; get rid of that and use
prep_x1() instead.
As we are no longer able to use the original global variables for
out/out2, we have to use new temporary variables and write from them to
the target registers using wout_ helpers.
E.g. an instruction that reads and writes x1 will use
- prep_x1 to get the values into out/out2
- wout_x1 to write the values from out/out2
This special handling is needed for x1 as it is often used along with
other inputs, so in1/in2 is already used.
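For illustration, a register read now becomes an explicit load from env memory along these lines (the helper name and the exact field/offset are assumptions):

    static TCGv_i64 load_freg(int reg)
    {
        TCGv_i64 r = tcg_temp_new_i64();

        tcg_gen_ld_i64(r, cpu_env, offsetof(CPUS390XState, fregs[reg].ll));
        return r;
    }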
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190204154406.16122-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
I plan to deprecate the -mem-path option and replace it with a memory
backend; for that it's necessary to get rid of the mem_path global
variable. Do it for the s390x case, replacing it with an alternative
way to enable the 1MB hugepage capability.
To do that, replace qemu_mempath_getpagesize() with
qemu_getrampagesize(), which also checks for -mem-path-provided RAM.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <1548834906-133241-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
MTTCG should be enabled by default whenever the memory model allows
it. s390x was missing its definition of TCG_GUEST_DEFAULT_MO, meaning
the user had to specify --accel tcg,thread=multi manually.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: David Hildenbrand <david@redhat.com>
Message-Id: <20190118171848.27332-1-alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Following on from the previous work, there are numerous endian-related hacks
in int_helper.c that can now be replaced with Vsr* macros.
There are also a few places where the VECTOR_FOR_INORDER_I macro can be
replaced with a normal iterator since the processing order is irrelevant.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Richard points out that these macros suffer from a -fsanitize=shift bug in that
they improperly handle n == 0, turning it into a shift by 32/64 respectively.
Replace them with QEMU's existing ror32() and ror64() functions instead.
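A rotate written in the following style (see include/qemu/bitops.h for the real definitions) is well-defined for every shift count, including 0, because both shift amounts are masked:

    static inline uint32_t ror32(uint32_t word, unsigned int shift)
    {
        return (word >> (shift & 31)) | (word << (-shift & 31));
    }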
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
As pointed out by Richard: it does not need the mask argument, nor does it need
the recast argument. The masking is implied by the cast argument, and the
recast is implied by the assignment.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These macros can be eliminated by instead using the relevant Vsr* macros in
the few locations where they appear.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The original purpose of these macros was to correctly reference the high and low
parts of the VSRs regardless of the host endianness.
Replace these direct references to high and low parts with the relevant VsrD
macro instead, and completely remove the now-unused HI_IDX and LO_IDX macros.
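For illustration, these endian-agnostic accessors have the following general shape (a sketch; the real macros live in the target/ppc headers):

    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrD(i) u64[i]          /* element 0 is already the high doubleword */
    #else
    #define VsrD(i) u64[1 - (i)]    /* swap so that VsrD(0) is still the high half */
    #endif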
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current implementations make use of the endian-specific macros HI_IDX and
LO_IDX directly to calculate array offsets.
Rework the implementation to use the Vsr* macros so that these per-endian
references can be removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current implementations make use of the endian-specific macros MRGLO/MRGHI
and also reference HI_IDX and LO_IDX directly to calculate array offsets.
Rework the implementation to use the Vsr* macros so that these per-endian
references can be removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These fields have now been replaced by equivalents under the machine
data.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This prepares us for eliminating the use of direct array access within the VMX
instruction implementations.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It has been there since the enablement of PR KVM for PAPR, i.e. commit
f61b4bedaf in 2011. It is not clear why it was needed at that time, but
it is definitely not needed with the current code.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A flawed test led to the instructions always being treated as
unallocated encodings.
Fixes: https://bugs.launchpad.net/bugs/1813460
Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since QEMU does not (yet) support the ARMv8.2-LVA, Large Virtual
Address, extension, the VA address space is 48 bits plus a sign bit.
User mode can only handle the positive half of the address space, so
that makes a limit of 48 bits.
(With LVA, it would be 53 and 52 bits respectively.)
The incorrectly large address space conflicts with PAuth instructions,
which use bits 48-54 and 56-63 for the pointer authentication code. This
also conflicts with (as yet unsupported by QEMU) data tagging and with
the ARMv8.5-MTE extension.
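For reference, the user-mode pointer layout this relies on (48-bit VA, as described above):

    /* bits [47:0]  virtual address (user mode uses the positive half only)
     * bits [54:48] pointer authentication code, low part
     * bit  [55]    TTBR select
     * bits [63:56] pointer authentication code, high part (with TBI off) */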
Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Drop the pac properties. This approach cannot work as written
because the properties are applied before arm_cpu_reset, which
zeros SCTLR_EL1 (amongst everything else).
We can re-introduce the properties if they turn out to be useful.
But since linux 5.0 enables all of the keys, they may not be.
Fixes: 1ae9cfbd47
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, the set_pc logic was unclear: should callers use it directly
to apply a value to the PC, or should it perform additional checks, for
example setting the Thumb bit on Arm CPUs? Define set_pc as "configure
the PC, as was done in the ELF file" and implement the
synchronize_with_tb hook so that the PC is preserved after cpu_tb_exec.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190129121817.7109-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These bits become writable with the ARMv8.3-PAuth extension.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190129143511.12311-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make PMU overflow interrupts more accurate by using a timer to predict
when counters will overflow, rather than waiting for an event to occur
that would otherwise allow us to check them.
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190124162401.5111-3-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Whenever we notice that a counter overflow has occurred, send an
interrupt. This is made more reliable with the addition of a timer in a
follow-on commit.
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190124162401.5111-2-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In disas_simd_indexed(), for the case of "complex fp", each indexable
element is a complex pair, so the total size is twice that indicated
in the 'size' field in the encoding. We were trying to do this
"double the size" operation with a left shift by 1, but this is
incorrect because the 'size' field is a MO_8/MO_16/MO_32/MO_64
value, and doubling the size should be done by a simple increment.
This meant we were mishandling FCMLA (by element) of values where
the real and imaginary parts are 32-bit floats, and would incorrectly
UNDEF this encoding. (No other insns take this code path, and for
16-bit floats it happens that 1 << 1 and 1 + 1 give the same result.)
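In other words, with 'size' encoded as MO_8/MO_16/MO_32/MO_64 (0/1/2/3), a complex pair is one MO_* step larger, so doubling the element size amounts to:

    size += 1;      /* not: size <<= 1; for MO_32 that yields 4, which is invalid */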
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190129140411.682-3-peter.maydell@linaro.org
The FCMLA (by element) instruction exists in the
"vector x indexed element" encoding group, but not in
the "scalar x indexed element" group. Correctly UNDEF
the unallocated encodings.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190129140411.682-2-peter.maydell@linaro.org
In the AdvSIMD scalar x indexed element and vector x indexed element
encoding group, the SDOT and UDOT instructions are vector only,
and their opcode is unallocated in the scalar group. Correctly
UNDEF this unallocated encoding.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-8-peter.maydell@linaro.org
In the encoding groups
* floating-point data-processing (1 source)
* floating-point data-processing (2 source)
* floating-point data-processing (3 source)
* floating-point immediate
* floating-point compare
* floating-point conditional compare
* floating-point conditional select
bit 31 is M and bit 29 is S (and bit 30 is 0, already checked at
this point in the decode). None of these groups allocate any
encoding for M=1 or S=1. We checked this in disas_fp_compare(),
disas_fp_ccomp() and disas_fp_csel(), but missed it in disas_fp_1src(),
disas_fp_2src(), disas_fp_3src() and disas_fp_imm().
We also missed that in the fp immediate encoding the imm5 field
must be all zeroes.
Correctly UNDEF the unallocated encodings here.
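The added guard is essentially of this shape (a sketch using the usual decoder helpers):

    /* M is bit 31, S is bit 29; neither may be set in these groups */
    if (extract32(insn, 31, 1) || extract32(insn, 29, 1)) {
        unallocated_encoding(s);
        return;
    }
    /* and, for the fp immediate form, imm5 must be all zeroes: */
    if (extract32(insn, 5, 5) != 0) {
        unallocated_encoding(s);
        return;
    }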
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-7-peter.maydell@linaro.org