The AMD IOMMU does not (yet) support interrupt remapping. But
kvm_arch_fixup_msi_route assumes that all implementations do and crashes
when the AMD IOMMU is used in KVM mode.
Fixes: 8b5ed7dffa ("intel_iommu: add support for split irqchip")
Reported-by: Christopher Goldsworthy <christopher.goldsworthy@outlook.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <48ae78d8-58ec-8813-8680-6f407ea46041@siemens.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- mark instructions that affect active IRQ level;
- put the call to gen_check_interrupts right after the instruction
translation; when FLIX is enabled it will need to appear before
other exits from the TB as well;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Now that all logic for TB termination is extracted from rsr/wsr, their
return value is unused and may be dropped.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark instructions that require TB termination via slot 0;
- put TB termination right after the instruction translation loop, if
termination w/o TB linking wasn't requested;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Currently we only end the TB in icount mode, because accesses to CCOUNT
or writes to CCOMPARE are I/O operations. Simplify the behaviour a bit
and end the TB unconditionally.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Opcode decoding with libisa takes care of the valid range of grouped
SRs, such as CCOMPARE, IBREAKA, DBREAKA or DBREAKC. Turn the range
checks in the wsr implementations into assertions.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark all instructions that exit TB and require dynamic search for the
next TB;
- put TB termination right after the instruction translation loop;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark quos/quou/rems/remu instructions;
- drop parameter 0 from translate_quou and split translate_remu out of
it;
- put test for division by zero exception right after the coprocessor
exception test;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- add XtensaOpcodeOps::coprocessor with bitmask of coprocessors used by
the instruction;
- replace coprocessor id parameter of gen_check_cpenable with the
bitmask of used coprocessors;
- collect coprocessor IDs used by an instruction in the disassembly
loop;
- put test for coprocessor disabled exception after the alloca test;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark retw and retw.n instructions;
- extract the window underflow test from the retw helper;
- put underflow exception check generation right after the overflow
check;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- add ps.callinc to the TB flags, which allows testing all instructions
for window overflow statically;
- drop gen_window_check* functions; replace them with gen_window_check
that accepts a bitmask of used registers;
- add XtensaOpcodeOps::test_overflow that returns bitmask of implicitly
used registers; use it for entry and call{,x}{4,8,12};
- drop window overflow test from the entry helper;
- drop parameter 0 from translate_[di]cache and use translate_nop for
d/i cache opcodes that don't need memory accessibility check;
- add bitmask XtensaOpcodeOps::windowed_register_op that marks opcode
arguments that refer to windowed registers;
- translate the windowed_register_op mask to a mask of actually used
registers in the disassembly loop (sketched after this list);
- add check for window overflow right after the check for debug
exception;
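A minimal standalone sketch of that mask translation step (names and
types are illustrative, not the actual translator code):

    #include <stdint.h>

    /* Turn a mask of operand positions that refer to windowed registers
     * into a mask of the a-registers actually used by this instruction. */
    static uint32_t used_windowed_regs(uint32_t windowed_register_op,
                                       const unsigned *arg, unsigned n_args)
    {
        uint32_t regs = 0;
        unsigned i;

        for (i = 0; i < n_args; ++i) {
            if (windowed_register_op & (1u << i)) {
                regs |= 1u << arg[i];
            }
        }
        return regs; /* feed this to the single window overflow check */
    }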
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark break and break.n instructions;
- collect debug cause bits from parameter 0 of instructions marked for
debug exception;
- put debug exception check right after syscall check;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark privileged instructions;
- put single privileged instruction check after disassembly loop;
- translate_[di]cache: drop parameter 0, shift parameters one down;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- TB flags: add XTENSA_TBFLAG_CWOE that corresponds to the architectural
CWOE state;
- entry: move CWOE check from the helper to the test_ill_entry;
- retw: move CWOE check from the helper to the test_ill_retw;
- separate instruction disassembly loop and translation loop; save
disassembly results in local array;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The ARMv8 architecture defines that an AArch32 CPU starts
in SVC mode, unless EL2 is the highest available EL, in
which case it starts in Hyp mode. (In ARMv7 a CPU with EL2
but not EL3 was not a valid configuration, but we don't
specifically reject this if the user asks for one.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20180823135047.16525-1-peter.maydell@linaro.org
Not only are the sve-related tb_flags fields unused when SVE is
disabled, but not all of the cpu registers are initialized properly
for computing them. This can corrupt other fields by ORing in -1,
which might result in QEMU crashing.
This bug was not present in 3.0, but this patch is cc'd to
stable because adf92eab90, where the bug was
introduced, was marked for stable.
Fixes: adf92eab90
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180925' into staging
ppc patch queue 2018-09-25
Here are the accumulated ppc target patches for the last several
weeks. Highlights are:
* A number of 40p / PReP cleanups
* Preliminary irq rework on the pseries machine towards the new
XIVE interrupt controller
There are a few patches which make small changes to generic device and
arm code as prerequisites to the 40p interrupt routing cleanup. They
have acks from the relevant maintainers.
# gpg: Signature made Tue 25 Sep 2018 08:00:06 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180925:
40p: add fixed IRQ routing for LSI SCSI device
lsi53c895a: add optional external IRQ via qdev
scsi: remove unused lsi53c895a_create() and lsi53c810_create() functions
scsi: move lsi53c8xx_create() callers to lsi53c8xx_handle_legacy_cmdline()
scsi: add lsi53c8xx_handle_legacy_cmdline() function
sm501: Adjust endianness of pixel value in rectangle fill
spapr_pci: add an extra 'nr_msis' argument to spapr_populate_pci_dt
spapr: increase the size of the IRQ number space
spapr: introduce a spapr_irq class 'nr_msis' attribute
40p: use OR gate to wire up raven PCI interrupts
raven: some minor IRQ-related tidy-ups
hw/ppc: on 40p machine, change default firmware to OpenBIOS
target/ppc/cpu-models: Re-group the 970 CPUs together again
Record history of ppcemb target in common.json
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The addition of the POWER9 CPUs divided the entries for the 970 CPUs,
which is a little bit confusing when you look at the code. So let's
re-group the 970 CPUs together again, and since these chips are based
on the POWER4 processor, also move them in front of the POWER5
chips now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180907' into staging
ppc patch queue 2018-09-07
Here's another pull request for qemu-3.1. No real theme here, just an
assortment of various fixes. Probably the most notable thing is the
removal of the ppcemb target which has been deprecated for some time
now.
# gpg: Signature made Fri 07 Sep 2018 08:30:02 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180907:
target-ppc: Extend HWCAP2 bits for ISA 3.0
target/ppc/kvm: set vcpu as online/offline
Fix a deadlock case in the CPU hotplug flow
spapr: Correct reference count on spapr-cpu-core
mac_newworld: implement custom FWPathProvider
uninorth: add ofw-addr property to allow correct fw path generation
mac_oldworld: implement custom FWPathProvider
grackle: set device fw_name and address for correct fw path generation
macio: add addr property to macio IDE object
macio: add macio bus to help with fw path generation
macio: move MACIOIDEState type declarations to macio.h
spapr_pci: fix potential NULL pointer dereference
spapr: fix leak of rev array
ppc: Remove deprecated ppcemb target
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-pullreq-20180905' into staging
A misc collection of RISC-V related patches for 3.1.
# gpg: Signature made Wed 05 Sep 2018 23:06:55 BST
# gpg: using RSA key 21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-pullreq-20180905:
riscv: remove define cpu_init()
hw/riscv/spike: Set the soc device tree node as a simple-bus
hw/riscv/virtio: Set the soc device tree node as a simple-bus
target/riscv: call gen_goto_tb on DISAS_TOO_MANY
target/riscv: optimize indirect branches
target/riscv: optimize cross-page direct jumps in softmmu
RISC-V: Simplify riscv_cpu_local_irqs_pending
RISC-V: Use atomic_cmpxchg to update PLIC bitmaps
RISC-V: Improve page table walker spec compliance
RISC-V: Update address bits to support sv39 and sv48
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Complete the xtensa-semi chardev console implementation: allow reading
input characters from file descriptor 0 and handling the sys_select_one
simcall on it.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
s32c1i must load and store the value with target endianness, not host
endianness. Getting this wrong results in an infinite loop in atomic
cmpxchg sequences when the target endianness doesn't match the host
endianness.
Fixes: 9fb40342d4 ("target/xtensa: support MTTCG")
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
- FPU2000 defines rfr and wfr opcodes, not rfr.s and wfr.s;
- movcond.s uses an incorrect operand in tcg_gen_movcond: in case the
condition is not satisfied it must not change its argument 0.
Fixes: c04e1692e3 ("target/xtensa: extract FPU2000 opcode
translators")
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
cpu_init() was removed in 2.12, so drop the define that is now unused.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Performance impact of this and the previous commits, measured with
the very-easy-to-cross-compile rv8-bench:
https://github.com/rv8-io/rv8-bench
Host: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
- Key:
before: master
after1,2,3: the 3 commits in this series (i.e. 3 is this commit)
- User-mode:
bench before after1 after2 after3 final speedup
---------------------------------------------------------
aes 1.12s 1.12s 1.10s 1.00s 1.12
bigint 0.78s 0.78s 0.78s 0.78s 1
dhrystone 0.96s 0.97s 0.49s 0.49s 1.9591837
miniz 1.94s 1.94s 1.88s 1.86s 1.0430108
norx 0.51s 0.51s 0.49s 0.48s 1.0625
primes 0.85s 0.85s 0.84s 0.84s 1.0119048
qsort 4.87s 4.88s 1.86s 1.86s 2.6182796
sha512 0.76s 0.77s 0.64s 0.64s 1.1875
(after1 only applies to softmmu, so no surprises here)
- Full-system (fedora):
bench before after1 after2 after3 final speedup
---------------------------------------------------------
aes 2.68s 2.54s 2.60s 2.34s 1.1452991
bigint 1.61s 1.56s 1.55s 1.64s 0.98170732
dhrystone 1.78s 1.67s 1.25s 1.24s 1.4354839
miniz 3.53s 3.35s 3.28s 3.35s 1.0537313
norx 1.13s 1.09s 1.07s 1.06s 1.0660377
primes 15.37s 15.41s 15.20s 15.37s 1
qsort 7.20s 6.71s 3.85s 3.96s 1.8181818
sha512 1.07s 1.04s 0.90s 0.90s 1.1888889
SoftMMU slows things down, so the numbers are less sensitive.
Cross-page jumps improve things a little bit, though.
Note that I'm not showing averages here, just results from a
single run, so there isn't much to worry about with primes.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Set the newly added register (KVM_REG_PPC_ONLINE) to indicate whether the
vcpu is online (1) or offline (0).
KVM will use this information to set the RWMR register, which controls the
PURR and SPURR accumulation.
CC: paulus@samba.org
Signed-off-by: Nikunj A Dadhania <nikunj@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit is intended to improve readability.
There is no change to the logic.
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- Inline PTE_TABLE check for better readability
- Change access checks from ternary operator to if
- Improve readability of the User page U mode and SUM test
- Disallow non U mode from fetching from User pages
- Add reserved PTE flag check: W or W|X
- Add misaligned PPN check (both new checks are sketched below)
- Set READ protection for PTE X flag and mstatus.mxr
- Use memory_region_is_ram in pte update
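For illustration, a small standalone sketch of the two new PTE validity
checks (bit positions per the RISC-V privileged spec; this is a model,
not the QEMU walker itself):

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_R (1u << 1)
    #define PTE_W (1u << 2)
    #define PTE_X (1u << 3)

    /* W without R (i.e. W or W|X) is a reserved leaf PTE encoding. */
    static bool pte_reserved_wx(uint32_t pte)
    {
        return (pte & PTE_W) && !(pte & PTE_R);
    }

    /* A superpage PTE at level > 0 must have its low PPN bits zero;
     * ptidxbits is 9 for sv39/sv48 (10 for sv32). */
    static bool ppn_misaligned(uint64_t ppn, int level, int ptidxbits)
    {
        return level > 0 && (ppn & ((1ULL << (level * ptidxbits)) - 1)) != 0;
    }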
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In a few places translate.c contains non-breaking spaces (0xc2 0xa0)
instead of regular ones (0x20):
7c 7c c2 a0 63 63
7c 7c 20 63 63
| | c c
This confuses some text editors.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180822144039.5796-2-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
PACK fails on the test from the Principles of Operation: F1F2F3F4
becomes 0000234C instead of 0001234C due to an off-by-one error.
Furthermore, it overwrites one extra byte to the left of F1.
If len_dest is 0, then we only want to flip the 1st byte and never loop
over the rest. Therefore, the loop condition should be > and not >=.
If len_src is 1, then we should flip the 1st byte and pack the 2nd.
Since len_src is already decremented before the loop, the first
condition should be >=, and not >.
Likewise for len_src == 2 and the second condition.
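A minimal standalone model of the fixed loop structure (the real helper
operates on guest memory; lengths here are the encoded L1/L2 values,
i.e. byte counts minus one, and the source's rightmost zone supplies the
sign):

    #include <stdint.h>
    #include <stdio.h>

    static void pack_model(uint8_t *dest, int len_dest,
                           const uint8_t *src, int len_src)
    {
        const uint8_t *s = src + len_src;
        uint8_t *d = dest + len_dest;
        uint8_t b = *s--;

        len_src--;
        /* the rightmost byte only has its nibbles swapped */
        *d = (uint8_t)((b << 4) | (b >> 4));

        while (len_dest > 0) {      /* '>': len_dest == 0 flips one byte only */
            b = 0;
            if (len_src >= 0) {     /* '>=': len_src == 1 still packs a digit */
                b = *s-- & 0x0f;
                len_src--;
            }
            if (len_src >= 0) {     /* likewise for len_src == 2 */
                b |= (uint8_t)(*s-- << 4);
                len_src--;
            }
            len_dest--;
            *--d = b;
        }
    }

    int main(void)
    {
        uint8_t src[4] = { 0xF1, 0xF2, 0xF3, 0xC4 }, dst[4] = { 0 };

        pack_model(dst, 3, src, 3);
        /* prints 0001234C */
        printf("%02X%02X%02X%02X\n", dst[0], dst[1], dst[2], dst[3]);
        return 0;
    }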
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-7-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Improves "b213c9f5: target/s390x: Implement TRTR" by introducing the
intermediate functions, which are compatible with dx_helper type.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-6-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Suppose psw.mask=0x0000000080000000, cc=2, r1=0 and we do "ipm 1".
This command must touch only bits 32-39, so the expected output
is r1=0x20000000. However, QEMU currently yields r1=0x20008000,
because irrelevant parts of the PSW leak into r1 during the program
mask transfer.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-5-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
CSST is defined as:
C(0xc802, CSST, SSF, CASS, la1, a2, 0, 0, csst, 0)
This means that the first parameter is handled by in1_la1(), which
fills the addr1 field, not in1.
Furthermore, when extract32() is used for the alignment check, the
third parameter should specify the number of trailing bits that must
be 0. For FC these numbers are:
FC=0 (word, 4 bytes): 2
FC=1 (double word, 8 bytes): 3
FC=2 (quad word, 16 bytes): 4
For SC these numbers correspond to the size:
SC=0: 0
SC=1: 1
SC=2: 2
SC=3: 3
SC=4: 4
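A standalone sketch of the alignment rule listed above (assuming, as in
the instruction, that FC governs the first operand and SC the second):

    #include <stdbool.h>
    #include <stdint.h>

    static bool csst_operands_aligned(uint64_t addr1, uint64_t addr2,
                                      unsigned fc, unsigned sc)
    {
        /* addr1 needs fc + 2 trailing zero bits, addr2 needs sc of them */
        return (addr1 & ((1ULL << (fc + 2)) - 1)) == 0 &&
               (addr2 & ((1ULL << sc) - 1)) == 0;
    }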
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-4-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
These instructions are provided for compatibility purposes and are
used only by old software; in new code BAS and BASR are preferred.
The difference between the old and new instructions exists only in
24-bit mode.
In addition, fix BAS polluting the high 32 bits of the first operand in
the 24- and 31-bit addressing modes.
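A standalone sketch of the 24-/31-bit behaviour described above (not
the QEMU translator code): the link information may only replace
bits 32-63 of R1.

    #include <stdint.h>

    static uint64_t bas_set_link(uint64_t r1, uint32_t link_info)
    {
        /* bits 0-31 of R1 (the high half) must be left untouched */
        return (r1 & 0xffffffff00000000ULL) | link_info;
    }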
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-3-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
There is no known OS for ppc still around that uses page sizes below
4k, so it does not make much sense to keep wasting our time on building
and testing the ppcemb-softmmu target. It has been deprecated for two
releases, and nobody complained, so let's remove it now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add definition of the first nanoMIPS processor in QEMU.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Fix ERET/ERETNC so that ADEL exception can be raised.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Update BadInstr and BadInstrX registers for nanoMIPS. The same
support for pre-nanoMIPS remains unimplemented.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
A set of nanoMIPS instructions is not available if Config5 bit NMS
is set.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 6.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 5.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 4.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 2.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 1.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of MT ASE instructions for nanoMIPS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Use bits from configuration registers for availability control
of MT ASE instructions, rather than only ISA_MT bit in insn_flags.
This is done by adding a field in hflags for MT bit, and adding
functions check_mt() and check_cp0_mt().
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of various flavors of nanoMIPS 32-bit branch
instructions.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Implement support for the nanoMIPS LLWP/SCWP instructions. Besides
adding the core functionality of these instructions, this patch adds
support for availability control via configuration bit XNP.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Dimitrije Nikolic <dnikolic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add CP0_Config3 and CP0_Config5 to DisasContext structure. This is
needed for implementing availability control of various instructions.
Reviewed-by: "Aleksandar Markovic <amarkovic@wavecomp.com>"
Signed-off-by: "Aleksandar Markovic <amarkovic@wavecomp.com>"
Add emulation of various nanoMIPS load and store instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Implement emulation of the nanoMIPS EXTW instruction. EXTW is similar
to the MIPS r6 ALIGN instruction, except that it counts the other way
and in bits instead of bytes. We therefore generalise the gen_align()
function into a new gen_align_bits() function (which counts in bits
instead of bytes and optimises the case where bits equals the word
size), and implement gen_align() and a new gen_ext() based on it.
Since we need to check when the number of bits equals the word size,
the opc argument is replaced with a wordsz argument (either 32 or 64).
Signed-off-by: James Hogan <james.hogan@mips.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Add a helper for ROTX based on the pseudocode from the
architecture spec. This instruction was not present in previous
MIPS instruction sets.
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Add emulation of nanoMIPS instructions situated in pool p_lsx, and
emulation of LSA instruction as well.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of misc nanoMIPS instructions situated in pool32axf.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS instructions that are situated in pool32a0.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of basic floating point arithmetic for nanoMIPS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of LI48, ADDIU48, ADDIUGP48, ADDIUPC48, LWPC48, and
SWPC48 instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS instructions MOVE.P and MOVE.PREV.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of SIGRIE, SYSCALL, BREAK, SDBBP, ADDIU, ADDIUPC,
ADDIUGP.W, LWGP, SWGP, ORI, XORI, ANDI, and other instructions.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of SAVE16 and RESTORE.JRC16 instructions. Routines
gen_save(), gen_restore(), and gen_adjust_sp() are provided to support
this feature.
This patch also provides the function gen_op_addr_addi(), which will
be used in the emulation of some other nanoMIPS instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of NOT16, AND16, XOR16, OR16 instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of misc nanoMIPS 16-bit instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS 16-bit shift instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS 16-bit branch instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS 16-bit arithmetic instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add some basic utility functions and macros for nanoMIPS decoding
engine.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add an empty body and an invocation of decode_nanomips_opc() when the
ISA_NANOMIPS32 bit is set in ctx->insn_flags.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
The mode should be switched in cpu_state_reset() only if Config3.ISA is
3 (microMIPS). Config3.ISA is 1 for nanoMIPS processors, so no mode
change should happen for them.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add nanoMIPS opcodes for DSP ASE instruction pools and instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add nanoMIPS opcodes. nanoMIPS instructions are organized in so-called
instruction pools. Each pool contains a set of opcodes that, in turn,
can be instruction opcodes or instruction pool opcodes.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add ISA_NANOMIPS32 and CPU_NANOMIPS32 preprocessor constants.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Following the bulk conversion of the iwMMXt code, there are
just a handful of hard coded tabs in target/arm; fix them.
This is a whitespace-only patch.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-4-peter.maydell@linaro.org
Untabify the arm iwmmxt_helper.c. This affects only the iwMMXt code.
We haven't touched that code in years, so it's not going to get
fixed up by our "change when touched" process, and a bulk change is
not going to be too disruptive.
This commit was produced using Emacs "untabify" (plus one
by-hand removal of a space to fix a checkpatch nit); it is
a whitespace-only change.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-3-peter.maydell@linaro.org
Untabify the arm translate.c. This affects only some lines,
mostly comments, in the iwMMXt code. We haven't touched
that code in years, so it's not going to get fixed up
by our "change when touched" process, and a bulk change
is not going to be too disruptive.
This commit was produced using Emacs "untabify"; it is
a whitespace-only change.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-2-peter.maydell@linaro.org
On 32-bit exception entry, CPSR.J must always be set to 0
(see v7A Arm ARM DDI0406C.c B1.8.5). CPSR.IL must also
be cleared on 32-bit exception entry (see v8A Arm ARM
DDI0487C.a G1.10).
Clear these bits. (This fixes a bug which will never be noticed
by non-buggy guests.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-6-peter.maydell@linaro.org
Implement the necessary support code for taking exceptions
to Hyp mode in AArch32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-5-peter.maydell@linaro.org
Factor out the code which changes the CPU state so as to
actually take an exception to AArch32. We're going to want
to use this for handling exception entry to Hyp mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-4-peter.maydell@linaro.org
The AArch32 HCR and HCR2 registers alias HCR_EL2
bits [31:0] and [63:32]; implement them.
Since HCR2 exists in ARMv8 but not ARMv7, we need new
regdef arrays for "we have EL3, not EL2, we're ARMv8"
and "we have EL2, we're ARMv8" to hold the definitions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-3-peter.maydell@linaro.org
The v8 AArch32 HACTLR2 register maps to bits [63:32] of ACTLR_EL2.
We implement ACTLR_EL2 as RAZ/WI, so make HACTLR2 also RAZ/WI.
(We put the regdef next to ACTLR_EL2 as a reminder in case we
ever make ACTLR_EL2 something other than RAZ/WI).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-2-peter.maydell@linaro.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180814002653.12828-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180814002653.12828-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* x86 TCG fixes for 64-bit call gates (Andrew)
* qemu-guest-agent freeze-hook tweak (Christian)
* pm_smbus improvements (Corey)
* Move validation to pre_plug for pc-dimm (David)
* Fix memory leaks (Eduardo, Marc-André)
* synchronization profiler (Emilio)
* Convert the CPU list to RCU (Emilio)
* LSI support for PPR Extended Message (George)
* vhost-scsi support for protection information (Greg)
* Mark mptsas as a storage device in the help (Guenter)
* checkpatch tweak cherry-picked from Linux (me)
* Typos, cleanups and dead-code removal (Julia, Marc-André)
* qemu-pr-helper support for old libmultipath (Murilo)
* Annotate fallthroughs (me)
* MemoryRegionOps cleanup (me, Peter)
* Make s390 qtests independent from libqos, which doesn't actually support it (me)
* Make cpu_get_ticks independent from BQL (me)
* Introspection fixes (Thomas)
* Support QEMU_MODULE_DIR environment variable (ryang)
# gpg: Signature made Thu 23 Aug 2018 17:46:30 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
KVM: cleanup unnecessary #ifdef KVM_CAP_...
target/i386: update MPX flags when CPL changes
i2c: pm_smbus: Add the ability to force block transfer enable
i2c: pm_smbus: Don't delay host status register busy bit when interrupts are enabled
i2c: pm_smbus: Add interrupt handling
i2c: pm_smbus: Add block transfer capability
i2c: pm_smbus: Make the I2C block read command read-only
i2c: pm_smbus: Fix the semantics of block I2C transfers
i2c: pm_smbus: Clean up some style issues
pc-dimm: assign and verify the "addr" property during pre_plug
pc: drop memory region alignment check for 0
util/oslib-win32: indicate alignment for qemu_anon_ram_alloc()
pc-dimm: assign and verify the "slot" property during pre_plug
ipmi: Use proper struct reference for BT vmstate
vhost-scsi: expose 't10_pi' property for VIRTIO_SCSI_F_T10_PI
vhost-scsi: unify vhost-scsi get_features implementations
vhost-user-scsi: move host_features into VHostSCSICommon
cpus: allow cpu_get_ticks out of BQL
cpus: protect TimerState writes with a spinlock
seqlock: add QemuLockable support
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The capability macros are always defined, since they come from kernel
headers that are copied into the QEMU tree. Remove the unnecessary #ifdefs.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Iterating over the list without using atomics is undefined behaviour,
since the list can be modified concurrently by other threads (e.g.
every time a new thread is created in user-mode).
Fix it by implementing the CPU list as an RCU QTAILQ. This requires
a little bit of extra work to traverse the list in reverse order (see
the previous patch), but other than that the conversion is trivial.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-12-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The current implementation has three bugs:
* segment limits are not enforced in protected mode if the L bit is set
in the target segment descriptor
* segment limits are not enforced in compatibility mode (ljmp to 32-bit
code segment in long mode)
* #GP(new_cs) is generated rather than #GP(0)
Now the segment limits are enforced if we're not in long mode OR the
target code segment doesn't have the L bit set.
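A standalone restatement of that rule (not the QEMU helper itself); a
failing check must raise #GP(0), not #GP(new_cs).

    #include <stdbool.h>
    #include <stdint.h>

    static bool ljmp_limit_fault(uint32_t new_eip, uint32_t limit,
                                 bool long_mode, bool l_bit)
    {
        /* the limit applies unless we are in long mode AND the target
         * code segment is a 64-bit (L = 1) segment */
        return !(long_mode && l_bit) && new_eip > limit;
    }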
Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180816011903.39816-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently call gates are always treated as 32-bit gates. In IA-32e mode
(either compatibility or 64-bit submode), system segment descriptors are
always 64-bit. Treating them as 32-bit has the expected unfortunate
effect: only the lower 32 bits of the offset are loaded, the stack
pointer is truncated, a bad new stack pointer is loaded from the TSS (if
switching privilege levels), etc.
This change adds support for 64-bit call gates to the lcall and ljmp
instructions. Additionally, there should be a check for non-canonical
stack pointers, but I've omitted that since there don't seem to be
checks for non-canonical addresses elsewhere in this code.
I've left the raise_exception_err_ra lines unwrapped at 80 columns to
match the style in the rest of the file.
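For reference, a standalone sketch of where the upper offset bits live
in a 16-byte IA-32e call gate (the descriptor words are named here only
for illustration):

    #include <stdint.h>

    static uint64_t call_gate_target(uint32_t e1, uint32_t e2, uint32_t e3)
    {
        /* e1: selector + offset[15:0], e2: type/DPL/P + offset[31:16],
         * e3: offset[63:32]; treating the gate as 8 bytes drops e3. */
        return (e1 & 0xffff)
             | (e2 & 0xffff0000)
             | ((uint64_t)e3 << 32);
    }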
Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180819181725.34098-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reported by Coverity:
Error: RESOURCE_LEAK (CWE-772): [#def439]
qemu-2.12.0/target/i386/cpu.c:3179: alloc_fn: Storage is returned from allocation function "qdict_new".
qemu-2.12.0/qobject/qdict.c:34:5: alloc_fn: Storage is returned from allocation function "g_malloc0".
qemu-2.12.0/qobject/qdict.c:34:5: var_assign: Assigning: "qdict" = "g_malloc0(4120UL)".
qemu-2.12.0/qobject/qdict.c:37:5: return_alloc: Returning allocated memory "qdict".
qemu-2.12.0/target/i386/cpu.c:3179: var_assign: Assigning: "props" = storage returned from "qdict_new()".
qemu-2.12.0/target/i386/cpu.c:3217: leaked_storage: Variable "props" going out of scope leaks the storage it points to.
This was introduced by commit b8097deb35 ("i386: Improve
query-cpu-model-expansion full mode").
The leak is only theoretical: if ret->model->props is set to
props, the qapi_free_CpuModelExpansionInfo() call will free props
too in case of errors. The only way for this to not happen is if
we enter the default branch of the switch statement, which would
never happen because all CpuModelExpansionType values are being
handled.
It's still worth changing this to make the allocation logic
easier to follow and make the Coverity error go away. To make
everything simpler, initialize ret->model and ret->model->props
earlier in the function.
While at it, remove redundant check for !prop because prop is
always initialized at the beginning of the function.
Fixes: b8097deb35
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180816183509.8231-1-ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Many of these are marked as "intentional/fix required" because they
just need a fall-through comment added. This is exactly what this
patch does, except for target/mips/translate.c, where it is easier to
duplicate the code, and hw/audio/sb16.c, where I consulted the DOSBox
sources and decided to just remove the LOG_UNIMP before the fallthrough.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180821' into staging
ppc patch queue 2018-08-21
Here's my first ppc & spapr pull request for qemu-3.1. This contains
a bunch of things that have accumulated while 3.0 was in freeze.
Highlights are:
* SLOF firmware update
* A number of floating point cleanups from Richard Henderson and
Yasmin Beatriz
* A new model for assigning irq numbers on spapr, this is an
important preliminary step towards implementing the POWER9
"XIVE" interrupt controller
# gpg: Signature made Tue 21 Aug 2018 05:32:44 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180821: (26 commits)
ppc: add DBCR based debugging
spapr_pci: factorize the use of SPAPR_MACHINE_GET_CLASS()
mac_newworld: don't use legacy fw_cfg_init_mem() function
mac_oldworld: don't use legacy fw_cfg_init_mem() function
40p: don't use legacy fw_cfg_init_mem() function
qemu-doc: mark ppc/prep machine as deprecated
hw/ppc: deprecate the machine type 'prep', replaced by '40p'
spapr: introduce a IRQ controller backend to the machine
hw/ppc/ppc405_uc: Convert away from old_mmio
hw/ppc/ppc_boards: Don't use old_mmio for ref405ep_fpga
hw/ppc/prep: Remove ifdeffed-out stub of XCSR code
spapr: introduce a fixed IRQ number space
spapr: Add a pseries-3.1 machine type
target/ppc: simplify bcdadd/sub functions
xics: don't include "target/ppc/cpu-qom.h" in "hw/ppc/xics.h"
vfio/spapr: Allow backing bigger guest IOMMU pages with smaller physical pages
target/ppc: bcdsub fix sign when result is zero
target/ppc: Use non-arithmetic conversions for fp load/store
target/ppc: Honor fpscr_ze semantics and tidy fre, fresqrt
target/ppc: Tidy helper_fsqrt
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for DBCR (debug control register) based debugging as used on
BookE ppc. So far it supports only branch and single-step events, but these
are the important ones. GDB in a Linux guest can now do single-stepping.
Signed-off-by: Roman Kapl <rka@sysgo.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
After solving a corner case in bcdsub, this patch simplifies the logic
of both bcdadd/sub instructions by removing some unnecessary local flags.
This commit also rearranges some if-else conditions in bcdadd to make it
easier to read.
Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When the result of bcdsub is equal to zero, the result sign may be
set to negative in some cases, which does not follow the Power ISA
specification for decimal integer arithmetic instructions.
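A standalone sketch of the corrected sign selection (assuming the usual
BCD sign codes, with PS choosing between the two positive encodings):

    #include <stdbool.h>
    #include <stdint.h>

    static uint8_t bcd_result_sign(bool negative, bool is_zero, bool ps)
    {
        /* a zero result must carry the positive sign, regardless of the
         * signs of the operands */
        if (is_zero || !negative) {
            return ps ? 0xF : 0xC;
        }
        return 0xD;
    }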
Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Memory operations have no side effects on fp state.
The use of a "real" conversions between float64 and float32
would raise exceptions for SNaN and out-of-range inputs.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divide by zero, exception taken, leaves the destination register
unmodified. Therefore we must raise the exception before returning
from the respective helpers.
From helper_fre, divide by zero exception not taken, return the
documented +/- 0.5.
At the same time, tidy the invalid exception checking so that we
rely on softfloat for initial argument validation, and select the
kind of invalid operand exception only when we know we must.
At the same time, pass and return float64 values directly rather
than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tidy the invalid exception checking so that we rely on softfloat for
initial argument validation, and select the kind of invalid operand
exception only when we know we must. Pass and return float64 values
directly rather than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tidy the invalid exception checking so that we rely on softfloat for
initial argument validation, and select the kind of invalid operand
exception only when we know we must. Pass and return float64 values
directly rather than bounce through the CPU_DoubleU union.
Note that because we know float_flag_invalid was set, we do not have
to re-check the signs of the infinities.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tidy the invalid exception checking so that we rely on softfloat for
initial argument validation, and select the kind of invalid operand
exception only when we know we must. Pass and return float64 values
directly rather than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divide by zero, exception taken, leaves the destination register
unmodified. Therefore we must raise the exception before returning
from helper_fdiv. Move the check from do_float_check_status into
helper_fdiv.
At the same time, tidy the invalid exception checking so that we
rely on softfloat for initial argument validation, and select the
kind of invalid operand exception only when we know we must.
At the same time, pass and return float64 values directly rather
than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
While just setting the MSR bits is sufficient, we can tidy
the helper code by extracting the MSR test to a helper and
then forcing it true for user-only.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
QEMU has had huge page support for quite some time already, but KVM
memory management under s390x needed some changes to work with huge
backings.
Now that we have support, let's enable it if requested and
available. Otherwise we now properly tell the user that there is no
support and back out instead of failing to run the VM later on.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180802070201.257406-1-frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Provide the etoken facility. We need to handle cpu model, migration and
clear reset.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20180731090448.36662-3-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The "max" CPU model behaves like "-cpu host" when KVM is enabled, and like
a CPU with the maximum possible feature set when TCG is enabled.
While the "host" model can not be used under TCG ("kvm_required"), the
"max" model can and "Enables all features supported by the accelerator in
the current host".
So we can treat "host" just as a special case of "max" (like x86 does).
It differs from the "qemu" CPU model under TCG in that compatibility
handling will not be performed and that some experimental CPU features
not yet part of the "qemu" model might be indicated.
These are right now under TCG (see "qemu_MAX"):
- stfle53
- msa5-base
- zpci
This will result right now in the following warning when starting QEMU TCG
with the "max" model:
"qemu-system-s390x: warning: 'msa5-base' requires 'kimd-sha-512'."
The "qemu" model (used as default in QEMU under TCG) will continue to
work without such warnings. The "max" model in the current form
might be interesting for kvm-unit-tests (where we would e.g. now also
test "msa5-base").
The "max" model is neither static nor migration safe (like the "host"
model). It is independent of the machine but dependends on the accelerator.
It can be used to detect the maximum CPU model also under TCG from upper
layers without having to care about CPU model names for CPU model
expansion.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180725091233.3300-1-david@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[CH: minor wording changes]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This option has been deprecated for two releases; remove it.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The enumeration type S390FeatGroup is now generated as well.
This simplifies the definition of new feature groups
without requiring modifications to existing code.
Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Message-Id: <20180725143617.8731-1-mimu@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
ARMv7VE introduced the ERET instruction, which is necessary to
return from an exception taken to Hyp mode. Implement this.
In A32 encoding it is a completely new encoding; in T32 it
is an adjustment of the behaviour of the existing
"SUBS PC, LR, #<imm8>" instruction.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-10-peter.maydell@linaro.org
The MSR (banked) and MRS (banked) instructions allow accesses to ELR_Hyp
from either Monitor or Hyp mode. Our translate time check
was overly strict and only permitted access from Monitor mode.
The runtime check we do in msr_mrs_banked_exc_checks() had the
correct code in it, but never got there because of the earlier
"currmode == tgtmode" check. Special case ELR_Hyp.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-9-peter.maydell@linaro.org
The AArch32 HSR is the equivalent of AArch64 ESR_EL2;
we can implement it by marking our existing ESR_EL2 regdef
as STATE_BOTH. It also needs to be "RES0 from EL3 if
EL2 not implemented", so add the missing stanza to
el3_no_el2_cp_reginfo.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-8-peter.maydell@linaro.org
The AArch32 virtualization extensions support these fault address
registers:
* HDFAR: aliased with AArch64 FAR_EL2[31:0] and AArch32 DFAR(S)
* HIFAR: aliased with AArch64 FAR_EL2[63:32] and AArch32 IFAR(S)
Implement the accessors for these. This fixes in passing a bug
where we weren't implementing the "RES0 from EL3 if EL2 not
implemented" behaviour for AArch64 FAR_EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-7-peter.maydell@linaro.org
Implement the AArch32 HVBAR register; we can do this just by
making the existing VBAR_EL2 regdefs be STATE_BOTH.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-5-peter.maydell@linaro.org
ARMCPRegInfo structs will default to .cp = 15 if they
are ARM_CP_STATE_BOTH, but not if they are ARM_CP_STATE_AA32
(because a coprocessor number of 0 is valid for AArch32).
We forgot to explicitly set .cp = 15 for the HMAIR1 and
HAMAIR1 regdefs, which meant they would UNDEF when the guest
tried to access them under cp15.
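A hedged sketch of the kind of regdef entry described; apart from the
explicit .cp = 15 on an ARM_CP_STATE_AA32 entry, the encoding and
attribute fields shown are illustrative, not the actual definitions:

    /* Illustrative entry only: the point is that an AA32-only regdef
     * does not get the .cp = 15 default and must set it explicitly. */
    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32, .cp = 15,
      .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },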
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-3-peter.maydell@linaro.org
We implement the HAMAIR1 register as RAZ/WI; we had a typo in the
regdef, though, and were incorrectly naming it HMAIR1 (which is
a different register which we also implement as RAZ/WI).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-2-peter.maydell@linaro.org
If an instruction is conditional (like CBZ) and it is executed
conditionally (using the ITx instruction), a jump to an undefined
label is generated, and QEMU crashes.
CBZ in IT block is an UNPREDICTABLE behavior, but we should not
crash. Honouring the condition code is allowed by the spec in this
case (constrained unpredictable, ARMv8, section K1.1.7), and matches
what we do for other "UNPREDICTABLE inside an IT block" instructions.
Fix the 'skip on condition' code to create a new label only if it
does not already exist. Previously multiple labels were created, but
only the last one of them was set.
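A hedged sketch of the "create the label only once" pattern; the
translator-state fields here are illustrative rather than the exact
target/arm code:

    /* Reuse the single skip label if it already exists; allocating a
     * second label would leave the first one unset in the generated TB. */
    if (!dc->condjmp) {
        dc->condlabel = gen_new_label();
        dc->condjmp = 1;
    }
    tcg_gen_brcondi_i32(TCG_COND_EQ, reg, 0, dc->condlabel);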
Signed-off-by: Roman Kapl <rka@sysgo.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180816120533.6587-1-rka@sysgo.com
[PMM: fixed ^ 1 being applied to wrong argument, fixed typo]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
- move register counting to xtensa/gdbstub.c
- add symbolic names for register types and flags from GDB and use them
in register counting and access functions.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This fixes communication with gdb in the presence of type-5 (TIE state
mapped on user registers) and type-7 (special case of masked registers)
registers in the xtensa core config.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This fixes java in a linux-user chroot:
$ java --version
qemu-sh4: .../accel/tcg/cpu-exec.c:634: cpu_loop_exec_tb: Assertion `use_icount' failed.
qemu: uncaught target signal 6 (Aborted) - core dumped
Aborted (core dumped)
In gen_conditional_jump() in the GUSA_EXCLUSIVE part, we must reset
base.is_jmp to DISAS_NEXT after the gen_goto_tb() as it is done in
gen_delayed_conditional_jump() after the gen_jump().
Bug: https://bugs.launchpad.net/qemu/+bug/1768246
Fixes: 4834871bc9 ("target/sh4: Convert to DisasJumpType")
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20180811082328.11268-1-laurent@vivier.eu>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging
x86 queue, 2018-08-16
Bug fix:
* Some guests may crash when using "-cpu host" due to TOPOEXT,
disable it by default
Features:
* PV_SEND_IPI feature bit
* Icelake-{Server,Client} CPU models
* New CPUID feature bits: PV_SEND_IPI, WBNOINVD, PCONFIG, ARCH_CAPABILITIES
Documentation:
* docs/qemu-cpu-models.texi
# gpg: Signature made Fri 17 Aug 2018 02:33:09 BST
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-next-pull-request:
i386: Disable TOPOEXT by default on "-cpu host"
target-i386: adds PV_SEND_IPI CPUID feature bit
i386: Add new CPU model Icelake-{Server,Client}
i386: Add CPUID bit for WBNOINVD
i386: Add CPUID bit for PCONFIG
i386: Add CPUID bit and feature words for IA32_ARCH_CAPABILITIES MSR
i386: Add new MSR indices for IA32_PRED_CMD and IA32_ARCH_CAPABILITIES
docs: add guidance on configuring CPU models for x86
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-aug-2018' into staging
MIPS queue Aug 16, 2018
# gpg: Signature made Thu 16 Aug 2018 18:19:36 BST
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-aug-2018:
qemu-doc: Amend MIPS-related items
linux-user: Add preprocessor availability control to some syscalls
linux-user: Update MIPS syscall numbers up to kernel 4.18 headers
elf: Add ELF flags for MIPS machine variants
elf: Remove duplicate preprocessor constant definition
target/mips: Check ELPA flag only in some cases of MFHC0 and MTHC0
target/mips: Don't update BadVAddr register in Debug Mode
target/mips: Implement CP0 Config1.WR bit functionality
target/mips: Add CP0 BadInstrX register
target/mips: Update some CP0 registers bit definitions
target/mips: Fix two instances of shadow variables
target/mips: Mark switch fallthroughs with interpretable comments
target/mips: Avoid case statements formulated by ranges - part 2
target/mips: Avoid case statements formulated by ranges - part 1
MAINTAINERS: Update target/mips maintainer's email addresses
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MFHC0 and MTHC0 used to handle the EntryLo0 and EntryLo1 registers
only, so placing the ELPA flag checks before the switch statement was
technically correct. However, now that more registers are handled,
these checks should be moved so that they act only in the cases of
EntryLo0 and EntryLo1.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
BadVAddr should not be updated if (env->hflags & MIPS_HFLAG_DM) is
set.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add testing of the Config1.WR bit to the watch exception handling logic.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add CP0 BadInstrX register. This register will be used in nanoMIPS.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Update CP0 registers Config0, Config1, Config2, Config3,
Config4, and Config5 bit definitions.
Some of these bits will be utilized by upcoming nanoMIPS changes.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Fix two instances of shadow variables. This rids the entire
translate.c file of shadow variables.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Mark switch fallthroughs with comments in the cases where the
fallthrough is intentional.
The "/* fall through */" comments are interpreted by compilers and
other tools, which then do not issue warnings for those cases. For gcc,
the warning is turned on by -Wimplicit-fallthrough. With this patch,
there are no such warnings left in the target/mips directory. If such a
warning appears in the future, it should be checked whether the
fallthrough is intentional and, if so, marked with a comment similar to
those in this patch.
The comment must be placed just before the next "case", otherwise gcc
won't recognize it.
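As a minimal illustration (not code from the patch), this is the
placement gcc's -Wimplicit-fallthrough recognizes:

    static int classify(int op)
    {
        int flags = 0;

        switch (op) {
        case 0:
            flags |= 1;
            /* the marker must be the last thing before the next case */
            /* fall through */
        case 1:
            flags |= 2;
            break;
        default:
            break;
        }
        return flags;
    }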
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Remove "range style" case statements to make code analysis easier.
This patch handles cases when the values in the range in question
were not properly defined.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <amarkovic@wavecomp.com>
Remove "range style" case statements to make code analysis easier.
This is needed also for some upcoming nanoMIPS-related refactorings.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Enabling TOPOEXT is always allowed, but it can't be enabled
blindly by "-cpu host" because it may make guests crash if the
rest of the cache topology information isn't provided or isn't
consistent.
This addresses the bug reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1613277
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180809221852.15285-1-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The new CPU models mostly inherit features from the ancestor Skylake,
while adding new features: UMIP, new instructions (PCONFIG (server
only), WBNOINVD, AVX512_VBMI2, GFNI, AVX512_VNNI, VPCLMULQDQ, VAES,
AVX512_BITALG), Intel PT and 5-level paging (server only), as well as
IA32_PRED_CMD and SSBD support for speculative execution side channel
mitigations.
Note:
For 5-level paging, the guest physical address width can be configured
with the "phys-bits" parameter. Unless explicitly specified, we still
use its default value, even for the Icelake-Server CPU model.
At present, we hold off on exposing IA32_ARCH_CAPABILITIES to the
guest, because 1) this MSR actually presents more than one 'feature'
and maintainers are considering expanding the current feature
presentation from CPUIDs only to MSR bits; 2) a reasonable default
value for MSR_IA32_ARCH_CAPABILITIES needs to be settled first. These
two points go beyond the Icelake CPU model itself but are fundamental,
so split this work out and do it later.
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00774.html
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00796.html
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-6-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
WBNOINVD: Write back and do not invalidate cache, enumerated by
CPUID.(EAX=80000008H, ECX=0):EBX[bit 9].
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-5-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Support for the IA32_PRED_CMD MSR is already enumerated by the same
CPUID bit as SPEC_CTRL.
At present, mark CPUID_7_0_EDX_ARCH_CAPABILITIES as unmigratable, per
Paolo's comment.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-3-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
IA32_PRED_CMD MSR gives software a way to issue commands that affect the state
of indirect branch predictors. Enumerated by CPUID.(EAX=7H,ECX=0):EDX[26].
IA32_ARCH_CAPABILITIES MSR enumerates architectural features of RDCL_NO and
IBRS_ALL. Enumerated by CPUID.(EAX=07H, ECX=0):EDX[29].
https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-2-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
These insns require u=1; we failed to include that in the switch
cases. This probably happened during one of the rebases just before
the final commit.
Fixes: d17b7cdcf4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were using the wrong flush-to-zero bit for the non-half input.
Fixes: 46d33d1e3c
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16. The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.
Fixes: d81ce0ef2c
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define a "cortex-m0" ARMv6-M CPU model.
Most of the register reset values set by other CPU models are not
relevant for the cut-down ARMv6-M architecture.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180814162739.11814-3-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This allows the default (and maximum) vector length to be set from the
command line, which is extraordinarily helpful when debugging problems
that depend on vector length, without having to bake knowledge of
PR_SET_SVE_VL into every guest binary.
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Also fold the FPCR/FPSR state onto the same line as PSTATE,
and mention but do not dump disabled FPU state.
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With PC, there are 33 registers. Three per line lines up nicely
without overflowing 80 columns.
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The scaling should be solely on the memory operation size; the number
of registers being loaded does not come into the initial computation.
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The immediate should be scaled by the size of the memory reference,
not the size of the elements into which it is loaded.
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The expression (int) imm + (uint32_t) len_align turns into uint32_t
and thus with negative imm produces a memory operation at the wrong
offset. None of the numbers involved are particularly large, so
change everything to use int.
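A tiny standalone illustration (not the QEMU code) of the promotion at
work: mixing int and uint32_t promotes the sum to uint32_t, so a
negative immediate wraps to a huge offset.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int imm = -16;                   /* negative displacement */
        uint32_t len_align = 8;          /* aligned length */
        uint32_t bad = imm + len_align;  /* wraps to 4294967288 */
        int good = imm + (int)len_align; /* -8, as intended */
        printf("bad=%u good=%d\n", bad, good);
        return 0;
    }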
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.
Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.
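A rough standalone sketch (not the SVE helper itself) of the
increment-and-compare loop the pseudocode describes; with bound equal
to the maximum integer the comparison never fails, so every predicate
bit is set:

    #include <stdbool.h>
    #include <stdint.h>

    static void do_while_sketch(bool *pred, int num_bits,
                                uint64_t start, uint64_t bound)
    {
        uint64_t x = start;

        for (int i = 0; i < num_bits; i++) {
            pred[i] = (x <= bound); /* never false when bound == UINT64_MAX */
            x++;
        }
    }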
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Used the wrong temporary in the computation of subtractive overflow.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The normal vector element is sign-extended before
comparing with the wide vector element.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.
(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
Improve the exception-taken logging by logging in
v7m_exception_taken() the exception we're going to take
and whether it is secure/nonsecure.
This requires us to move logging at many callsites from after the
call to before it, so that the logging appears in a sensible order.
(This will make tail-chaining produce more useful logs; for the
current callers of v7m_exception_taken() we know which exception
we're going to take, so custom log messages at the callsite sufficed;
for tail-chaining only v7m_exception_taken() knows the exception
number that we're going to tail-chain to.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.
To avoid having to check E2H and TGE everywhere we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.
(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
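A hedged sketch of what such an accessor can look like, expressing the
TGE/E2H rule quoted above; this is an illustration, not necessarily the
exact code added:

    static inline bool arm_hcr_el2_imo(CPUARMState *env)
    {
        switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
        case HCR_TGE:
            /* TGE set, E2H clear: behaves as 1 except for direct reads */
            return true;
        case HCR_TGE | HCR_E2H:
            /* TGE and E2H both set: behaves as 0 */
            return false;
        default:
            return env->cp15.hcr_el2 & HCR_IMO;
        }
    }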
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180718095628.26442-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MSR handling is the only place where CONTROL.nPRIV is modified.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The instance_init function of the xtensa CPUs creates a memory region,
but does not set an owner, so the memory region is not destroyed
correctly when the CPU object is removed. This can happen when
introspecting the CPU devices, which then leaves a dangling memory
region object in the QOM tree. Make sure to set the right owner here
to fix this issue.
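A hedged sketch of the fix pattern, with an illustrative region field,
ops table and size; the point is passing the CPU object as the owner:

    static void xtensa_cpu_initfn_sketch(Object *obj)
    {
        XtensaCPU *cpu = XTENSA_CPU(obj);

        /* Owner is the CPU object, so the region is released with the
         * CPU instead of dangling after introspection. */
        memory_region_init_io(&cpu->impl_region /* illustrative field */,
                              OBJECT(cpu), &impl_ops /* illustrative ops */,
                              cpu, "xtensa-impl", 0x1000);
    }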
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 1532005320-17794-1-git-send-email-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently the migration code incorrectly treats a subsection with
no .needed function pointer as if it was the subsection list
terminator -- it is ignored and so is everything after it.
Work around this by giving various M profile vmstate structs
a 'needed' function that always returns true.
We reuse m_needed() for this, since it's always true here.
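A hedged sketch of the workaround, with an illustrative subsection name
and empty field list; the point is the .needed hook that always returns
true:

    static const VMStateDescription vmstate_m_example = {
        .name = "cpu/m/example",      /* illustrative subsection name */
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = m_needed,           /* always true for M-profile here */
        .fields = (VMStateField[]) {
            VMSTATE_END_OF_LIST()
        }
    };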
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180806123445.1459-4-peter.maydell@linaro.org
Since 86f0a186d6 the TYPE_ARM_HOST_CPU is only compiled when CONFIG_KVM
is enabled.
Remove the now redundant special-case introduced in a96c0514ab, to avoid:
$ qemu-system-aarch64 -machine virt -cpu \? | fgrep host
host
host (only available in KVM mode)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727132311.2777-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MSR_SMI_COUNT started being migrated in QEMU 2.12. Do not migrate it
on older machine types, or the subsection causes a load failure for
guests that use SMM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename DCACHE to DATA_CACHE and ICACHE to INSTRUCTION_CACHE.
This avoids a conflict with the Linux asm/cachectl.h macros and fixes
a build failure on mips hosts.
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180717194010.30096-1-ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
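A hedged sketch of the narrowing described above; the variable names
are illustrative rather than the actual M-profile MPU code:

    /* If the hit region does not cover the whole page, or another
     * region starts inside this page, shrink the size passed to the
     * TLB so the result is not cached for the entire page. */
    if (result_is_subpage) {
        page_size = 1;
    }
    tlb_set_page_with_attrs(cs, address, phys_addr, attrs,
                            prot, mmu_idx, page_size);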
Reported-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180716133302.25989-1-peter.maydell@linaro.org
Usually, when baselining two CPU models where one of them has base
CPU features disabled (e.g. z14-base,msa=off), we fall back to an older
model that did not have these features in its base model. We always try
to create a "sane" CPU model (as far as possible), and part of that is
that removing base features is not acceptable and is to be avoided.
Now, if we disable base features that were part of a z900, we're out of
luck. We won't find a CPU model and QEMU will segfault. This is a
scenario that should never happen in real life, but it can be used to
crash QEMU.
So let's properly report an error, instead of segfaulting, if we
baseline e.g.:
{ "execute": "query-cpu-model-baseline",
  "arguments" : { "modela": { "name": "z14-base", "props": {"esan3" : false}},
                  "modelb": { "name": "z14"}} }
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180718092330.19465-1-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
'I' was being double-incremented; correctly within the inner loop
and incorrectly within the outer loop.
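An illustrative standalone reduction of the bug pattern (not the SVE
code): the index must only advance in the inner loop.

    #include <stdio.h>

    int main(void)
    {
        int i = 0;
        for (int r = 0; r < 4; r++) {      /* outer loop: no i++ here */
            for (int c = 0; c < 4; c++) {
                printf("%d ", i);
                i++;                       /* the only place i advances */
            }
            /* i++;  <-- the extra outer-loop increment skipped elements */
        }
        printf("\n");
        return 0;
    }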
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180711103957.3040-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Hyper-V identifies vCPUs by Virtual Processor (VP) index which can be
queried by the guest via HV_X64_MSR_VP_INDEX msr. It is defined by the
spec as a sequential number which can't exceed the maximum number of
vCPUs per VM.
It has to be owned by QEMU in order to preserve it across migration.
However, the initial implementation in KVM didn't allow setting this
MSR, and KVM used its own notion of VP index. Fortunately, the way
vCPUs are created in QEMU/KVM makes it likely that the KVM value is
equal to QEMU's cpu_index.
So choose cpu_index as the value for vp_index, and push that to KVM on
kernels that support setting the MSR. On older ones that don't, query
the kernel value and assert that it's in sync with QEMU.
Besides, since handling errors from vCPU init at hotplug time is
impossible, disable vCPU hotplug.
This patch also introduces accessor functions to encapsulate the mapping
between a vCPU and its vp_index.
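A minimal sketch of such accessors, assuming vp_index is simply
cpu_index as described; the exact names and signatures in the patch may
differ:

    uint32_t hyperv_vp_index(X86CPU *cpu)
    {
        return CPU(cpu)->cpu_index;
    }

    X86CPU *hyperv_find_vcpu(uint32_t vp_index)
    {
        return X86_CPU(qemu_get_cpu(vp_index));
    }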
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180702134156.13404-3-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In Hyper-V-related code, vCPUs are identified by their VP (virtual
processor) index. Since it's customary for "vcpu_id" in QEMU to mean
APIC id, rename the respective variables to "vp_index" to make the
distinction clear.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180702134156.13404-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds field with content of KERNEL_GS_BASE MSR to QEMU note in
ELF dump.
On Windows, if all vCPUs are running usermode tasks at the time the dump
is created, this can be helpful in the discovery of guest system
structures during conversion of the ELF dump to a MEMORY.DMP dump.
Signed-off-by: Viktor Prutyanov <viktor.prutyanov@virtuozzo.com>
Message-Id: <20180714123000.11326-1-viktor.prutyanov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
Reported-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180709124535.1116-1-peter.maydell@linaro.org
The translator loop does not allow the tb_start hook to set
dc->base.is_jmp; the only hook allowed to do that is translate_insn.
Split the work between init_disas_context where we validate
the gUSA parameters, and translate_insn where we emit code.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use MAKE_64BIT_MASK instead of open-coding. Remove an odd
vector size check that is unlikely to be more profitable
than 3 64-bit integer stores. Correct the iteration for WORD
to avoid writing too much data.
Fixes RISU tests of PTRUE for VL 256.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180705191929.30773-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These instructions must perform the sve_access_check, but
since they are implemented as NOPs there is no generated
code to elide when the access check fails.
Fixes: Coverity issues 1393780 & 1393779.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When I try to build a ppc64 target on a ppc64 host (gcc 8.1.1), I have:
.../target/ppc/int_helper.c: In function 'helper_vinsertb':
.../target/ppc/int_helper.c:1954:32: error: array subscript 18446744073709551608 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)], \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../target/ppc/int_helper.c:1965:1: note: in expansion of macro 'VINSERT'
If we compare with the macro for ppc64le, we can see
sizeof(r->element[0]) should be used instead of sizeof(r->element).
And VINSERT uses only u8, u16, u32 and u64, so the maximum value
of sizeof(r->element[0]) is 8.
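A tiny standalone illustration (not the QEMU macro) of the sizeof
difference that causes the out-of-bounds offset:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t u8[16];
        /* sizeof on the whole array vs. one element: 16 vs. 1.  Using
         * the former where the latter was meant makes "8 - sizeof(...)"
         * negative and pushes the index out of bounds. */
        printf("array=%zu element=%zu\n", sizeof(u8), sizeof(u8[0]));
        return 0;
    }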
Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since commit d6dcc5583e, '-cpu ?' shows the description of the
X86_CPU_TYPE_NAME("max") for the host CPU model:
Enables all features supported by the accelerator in the current host
instead of the expected:
KVM processor with all supported host features
or
HVF processor with all supported host features
This is caused by the early use of kvm_enabled() and hvf_enabled() in
a class_init function. Since the accelerator isn't configured yet, both
helpers return false unconditionally.
A QEMU binary will only be compiled with one of these accelerators, not
both. The appropriate description can thus be decided at build time.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <153055056654.212317.4697363278304826913.stgit@bahia.lan>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/shorne/tags/pull-or-20180703' into staging
OpenRISC cleanups and Fixes for QEMU 3.0
Mostly patches from Richard Henderson fixing multiple things:
* Fix singlestepping in GDB.
* Use more TB linking.
* Fixes to exit TB after updating SPRs to enable registering of state
changes.
* Significant optimizations and refactors to the TLB
* Split out disassembly from translation.
* Add qemu-or1k to qemu-binfmt-conf.sh.
* Implement signal handling for linux-user.
Then there are a few fixups from me:
* Fix delay slot detection to match hardware; this was masking a bug
in the Linux kernel.
* Fix stores to the PIC mask register
# gpg: Signature made Tue 03 Jul 2018 14:44:10 BST
# gpg: using RSA key C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* remotes/shorne/tags/pull-or-20180703: (25 commits)
target/openrisc: Fix writes to interrupt mask register
target/openrisc: Fix delay slot exception flag to match spec
linux-user: Fix struct sigaltstack for openrisc
linux-user: Implement signals for openrisc
target/openrisc: Add support in scripts/qemu-binfmt-conf.sh
target/openrisc: Reorg tlb lookup
target/openrisc: Increase the TLB size
target/openrisc: Stub out handle_mmu_fault for softmmu
target/openrisc: Use identical sizes for ITLB and DTLB
target/openrisc: Fix cpu_mmu_index
target/openrisc: Fix tlb flushing in mtspr
target/openrisc: Reduce tlb to a single dimension
target/openrisc: Merge mmu_helper.c into mmu.c
target/openrisc: Remove indirect function calls for mmu
target/openrisc: Merge tlb allocation into CPUOpenRISCState
target/openrisc: Form the spr index from tcg
target/openrisc: Exit the TB after l.mtspr
target/openrisc: Split out is_user
target/openrisc: Link more translation blocks
target/openrisc: Fix singlestep_enabled
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180703' into staging
ppc patch queue 2018-07-03
Here's a last minute pull request before today's soft freeze. Ideally
I would have sent this earlier, but I was waiting for a couple of
extra fixes I knew were close. And the freeze crept up on me, like
always.
Most of the changes here are bugfixes in any case. There are some
cleanups as well, which have been in my staging tree for a little
while. There are a couple of truly new features (some extensions to
the sam460ex platform), but these are low risk, since they only affect
a new and not really stabilized machine type anyway.
Highlights are:
* Mac platform improvements from Mark Cave-Ayland
* Sam460ex improvements from BALATON Zoltan et al.
* XICS interrupt handler cleanups from Cédric Le Goater
* TCG improvements for atomic loads and stores from Richard
Henderson
* Assorted other bugfixes
# gpg: Signature made Tue 03 Jul 2018 06:55:22 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.0-20180703: (35 commits)
ppc: Include vga cirrus card into the compiling process
target/ppc: Relax reserved bitmask of indexed store instructions
target/ppc: set is_jmp on ppc_tr_breakpoint_check
spapr: compute default value of "hpt-max-page-size" later
target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
ppc440_uc: Basic emulation of PPC440 DMA controller
sam460ex: Add RTC device
hw/timer: Add basic M41T80 emulation
ppc4xx_i2c: Rewrite to model hardware more closely
hw/ppc: Give sam46ex its own config option
fpu_helper.c: fix setting FPSCR[FI] bit
target/ppc: Implement the rest of gen_st_atomic
target/ppc: Implement the rest of gen_ld_atomic
target/ppc: Use atomic min/max helpers
target/ppc: Use MO_ALIGN for EXIWX and ECOWX
target/ppc: Split out gen_st_atomic
target/ppc: Split out gen_ld_atomic
target/ppc: Split out gen_load_locked
target/ppc: Tidy gen_conditional_store
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# hw/ppc/spapr.c
The interrupt controller mask register (PICMR) allows writing any value
to any of the 32 interrupt mask bits. Writing a 0 masks the interrupt;
writing a 1 unmasks (enables) the interrupt.
For some reason the old code was ORing the written value into the
PICMR, meaning it was not possible to ever mask an interrupt once it
had been enabled.
I have tested this by running Linux 4.18 and my regular checks; I don't
see any issues.
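An illustrative reduction of the fix (hypothetical names, not the
actual QEMU code): a mask-register write must replace the old contents,
not OR into them, or a bit can never be cleared again.

    static uint32_t picmr;

    static void picmr_write(uint32_t value)
    {
        /* picmr |= value;    old, broken: interrupts stay enabled forever */
        picmr = value;        /* fixed: store exactly what was written */
    }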
Reported-by: Davidson Francis <davidsondfgl@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The delay slot exception flag is only set on the SR register during an
exception. Previously it was being set on both the ESR and SR, which
caused QEMU to differ from the spec. This was apparent because the
Linux kernel had a bug where it could boot on QEMU but not on real
hardware. The fixed logic now matches the hardware.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
All of the existing code was boilerplate from elsewhere,
and would crash the guest upon the first signal.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
---
v2:
Add a comment to the new definition of target_pt_regs.
Install the signal mask into the ucontext.
v3:
Incorporate feedback from Laurent.
While openrisc has a split i/d tlb, qemu does not. Perform a
lookup on both i & d tlbs in parallel and put the composite
rights into qemu's tlb. This avoids ping-ponging the qemu tlb
between EXEC and READ.
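A hedged sketch of the combined lookup, with illustrative names and
flags rather than the actual mmu.c code: the data-TLB hit contributes
read/write rights, the instruction-TLB hit contributes execute rights,
and the union is installed in QEMU's single TLB.

    static void install_combined_entry(CPUState *cs, target_ulong vaddr,
                                       hwaddr paddr, int mmu_idx,
                                       bool dtlb_hit, bool dtlb_writable,
                                       bool itlb_hit)
    {
        int prot = 0;

        if (dtlb_hit) {
            prot |= PAGE_READ | (dtlb_writable ? PAGE_WRITE : 0);
        }
        if (itlb_hit) {
            prot |= PAGE_EXEC;
        }
        tlb_set_page(cs, vaddr & TARGET_PAGE_MASK, paddr & TARGET_PAGE_MASK,
                     prot, mmu_idx, TARGET_PAGE_SIZE);
    }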
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The PPC440 User Manual says that if bit 31 is set, the contents of
CR[CR0] are undefined for indexed store instructions but this form is
not invalid. Other PPC variants conforming to recent ISAs where this
bit may be reserved should ignore reserved bits and not raise an
invalid instruction exception. In particular, MorphOS has an stwx
instruction
with bit 31 set and fails to boot currently because of this. With this
patch it gets further.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert
to TranslatorOps", 2018-02-16).
Fix it by setting is_jmp, so that we break from the translation loop
as originally intended.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In a future patch the machine code will need to retrieve the MMU
information from KVM during machine initialization before the CPUs
are created.
Actually, KVM_PPC_GET_SMMU_INFO is a VM class ioctl, and thus, we
don't need to have a CPU object around. We just need KVM to be
initialized, and we use the kvm_state global. This patch just does
that.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we're checking our MMU configuration is supported by KVM,
rather than adjusting it to KVM, it doesn't really make sense to
have a fallback for kvm_get_smmu_info(). If KVM is too old or buggy
to provide the details, we should rather treat this as an error.
This patch thus adds error reporting to kvm_get_smmu_info() and gets
rid of the fallback code. QEMU will now terminate if KVM fails to
provide MMU details. This may break some very old setups, but the
simplification is worth the sacrifice.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The FPSCR[FI] bit indicates if the last floating point instruction had
a result that was rounded. Each consecutive floating point instruction
is supposed to set this bit to the correct value. What currently
happens is that this bit is not set as often as it should be. I have
verified that this is the behavior of a real PowerPC 950. This patch
fixes the problem by setting this bit after each floating point
instruction.
https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html
Page 63, table 2-4, is where the description of this bit can be found.
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The store twin case was stubbed out. For now, implement it only within
a serial context, forcing parallel execution to synchronize. It would
be possible to implement with a cmpxchg loop, if we care, but the loose
alignment requirements (simply no crossing 32-byte boundary) might send
us back to the serial context anyway.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These cases were stubbed out. For now, implement them only within
a serial context, forcing parallel execution to synchronize. It
would be possible to implement these with cmpxchg loops, if we care.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These operations were previously unimplemented for ppc.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This avoids the need for gen_check_align entirely.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the guts of ST_ATOMIC to a function. Use foo_tl for the operations
instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an
explicit call to gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the guts of LD_ATOMIC to a function. Use foo_tl for the operations
instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an
explicit call to gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Leave only the minimal amount of code within the LDAR macro,
moving the rest of the code into gen_load_locked. Use MO_ALIGN
and remove the explicit call to gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Leave only the minimal amount of code within the STCX macro,
moving the rest of the code into gen_conditional_store.
Remove the explicit call to gen_check_align; the matching LDAX will
have already checked alignment, and we verify the same address.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Always use the gen_conditional_store implementation that uses
atomic_cmpxchg. Make sure to clear reserve_addr across most
interrupts crossing the cpu_loop.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running in a parallel context, we must use a helper in order
to perform the 128-bit atomic operation. When running in a serial
context, do the compare before the store.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Section 1.4 of the Power ISA v3.0B states that this insn is
single-copy atomic. As we cannot (yet) issue 128-bit stores
within TCG, use the generic helpers provided.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Section 1.4 of the Power ISA v3.0B states that both of these
instructions are single-copy atomic. As we cannot (yet) issue
128-bit loads within TCG, use the generic helpers provided.
Since TCG cannot (yet) return a 128-bit value, add a slot within
CPUPPCState for returning the high half of a 128-bit return value.
This solution is preferred to the helper assigning to architectural
registers directly, as it avoids clobbering all TCG live values.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows faults from MO_ALIGN to have the same effect
as from gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The architecture supports 128 TLB entries. There is no reason
not to provide all of them. In the process, fix a bug whereby
the configuration register that tells the operating system the
number of entries was not parameterized.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This hook is only used by CONFIG_USER_ONLY.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The sizes are already the same; however, we can improve things
by making them identical by design.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The code in cpu_mmu_index does not properly honor SR_DME.
This bug has been worked around elsewhere by flushing the
TLB more often than necessary, on state changes that should
instead be reflected in a change of mmu_index.
Fixing this means that we can respect the mmu_index that
is given to tlb_flush.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
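A hedged sketch of the intended behaviour (the MMU_*_IDX names are assumptions): instruction fetches consult SR_IME, data accesses consult SR_DME, and only when the relevant MMU is enabled does SR_SM choose between the user and supervisor indexes.

    /* Sketch only: index names are illustrative. */
    static inline int cpu_mmu_index_sketch(CPUOpenRISCState *env, bool ifetch)
    {
        int ret = MMU_NOMMU_IDX;                     /* MMU disabled */

        if (env->sr & (ifetch ? SR_IME : SR_DME)) {  /* honor SR_DME for data */
            ret = (env->sr & SR_SM) ? MMU_SUPERVISOR_IDX : MMU_USER_IDX;
        }
        return ret;
    }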
The previous code was confused: it avoided flushing the old entry
whenever the new entry was invalid. We need to flush the old page if
the old entry is valid and the new page if the new entry is valid.
This bug was masked by over-flushing elsewhere.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
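A hedged sketch of that rule (the function and its parameters are illustrative; the real code works on the decoded TLB entry fields):

    /* Sketch: flush the old page if the old entry was valid, and the new
     * page if the new entry is valid; neither condition implies the other. */
    static void flush_on_tlb_update_sketch(CPUState *cs,
                                           target_ulong old_vaddr, bool old_valid,
                                           target_ulong new_vaddr, bool new_valid)
    {
        if (old_valid) {
            tlb_flush_page(cs, old_vaddr);
        }
        if (new_valid) {
            tlb_flush_page(cs, new_vaddr);
        }
    }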
While we had defines for *_WAYS, we didn't define more than 1.
Reduce the complexity by eliminating this unused dimension.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
With tlb_fill in mmu.c, we can simplify things further.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
There is no reason to use an indirect branch instead
of simply testing the SR bits that control MMU state.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
There is no reason to allocate this separately. This was probably
copied from target/mips, which makes the same mistake.
While doing so, move the TLB into the clear-on-reset range. While not
all of the TLB bits are guaranteed zero on reset, all of the valid
bits are cleared, and the rest of the bits are unspecified.
Therefore clearing the whole of the TLB is correct.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Rather than pass base+offset to the helper, pass the full index.
In most cases the base is r0 and optimization yields a constant.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
A store to SR changes interrupt state, so we should return
to the main loop to recognize that state.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
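A hedged sketch of the mechanism (the helper name is an assumption; DISAS_EXIT stands for a per-target "return to the main loop" disposition, typically an alias of one of the DISAS_TARGET_* values):

    /* Sketch: after translating a store that can reach the SR, end the TB
     * without chaining, so cpu_exec re-evaluates interrupts with the new SR. */
    static void mark_sr_write_sketch(DisasContext *dc)
    {
        dc->base.is_jmp = DISAS_EXIT;   /* tb_stop exits the TB to the main loop */
    }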
This allows us to limit the number of ifdefs and isolate
the test for usermode.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Track direct jumps via dc->jmp_pc_imm. Use that in
preference to jmp_pc when possible. Emit goto_tb in
that case, and lookup_and_goto_tb otherwise.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
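A hedged sketch of the two cases (the cpu_pc/jmp_pc globals and the helper name are assumptions; a real implementation would also verify the destination is safe to chain to, e.g. within the same guest page):

    /* Sketch: chain TBs when the destination is known at translation time,
     * otherwise fall back to a dynamic lookup of the next TB. */
    static void gen_jump_sketch(DisasContext *dc, target_ulong dest_imm, bool have_imm)
    {
        if (have_imm) {
            tcg_gen_goto_tb(0);                  /* placeholder, patched when chained */
            tcg_gen_movi_tl(cpu_pc, dest_imm);
            tcg_gen_exit_tb(dc->base.tb, 0);
        } else {
            tcg_gen_mov_tl(cpu_pc, jmp_pc);      /* target only known at run time */
            tcg_gen_lookup_and_goto_ptr();
        }
        dc->base.is_jmp = DISAS_NORETURN;
    }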
We failed to store to cpu_pc before raising the exception,
which caused us to re-execute the same insn that we stepped.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
No need to use the interrupt mechanisms when we can
simply exit the TB directly.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
These values are unused.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Rather than emit disassembly while translating, reuse the
generated decoder to build a separate disassembler.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Correct the output of the "info mem" and "info tlb" monitor commands
so that canonical addresses are shown properly.
In 48-bit addressing mode, the upper 16 bits of linear addresses are
equal to bit 47. In 57-bit addressing mode (LA57), the upper 7 bits of
linear addresses are equal to bit 56.
Signed-off-by: Doug Gale <doug16k@gmail.com>
Message-Id: <20180617084025.29198-1-doug16k@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
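A hedged sketch of the sign extension involved (the function is illustrative; va_bits is 48 without LA57 and 57 with LA57):

    #include <stdint.h>

    /* Sketch: replicate the top implemented bit (bit 47 or bit 56) through
     * the upper bits to form the canonical address. */
    static uint64_t make_canonical(uint64_t addr, int va_bits)
    {
        int shift = 64 - va_bits;

        return (uint64_t)(((int64_t)(addr << shift)) >> shift);
    }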
This implements NPT support for SVM by hooking into
x86_cpu_handle_mmu_fault, where the stage-1 page table is read. Whether
we need to perform this second-stage translation, and how, is decided
during vmrun and stored in hflags2, along with nested_cr3 and
nested_pg_mode.
As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need
retaddr in that function. To avoid changing the signature of
cpu_handle_mmu_fault, this passes the value from tlb_fill to get_hphys
via the CPU state.
This was tested successfully via the Jailhouse hypervisor.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <567473a0-6005-5843-4c73-951f476085ca@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
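A hedged sketch of that plumbing (the env->retaddr field and the exact call sequence are assumptions for illustration):

    /* Sketch: tlb_fill records its host return address in the CPU state so
     * that get_hphys, reached via x86_cpu_handle_mmu_fault, can use it when
     * an NPT fault forces a direct cpu_vmexit. */
    void tlb_fill_sketch(CPUState *cs, target_ulong addr, int size,
                         MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
    {
        X86CPU *cpu = X86_CPU(cs);
        CPUX86State *env = &cpu->env;

        env->retaddr = retaddr;              /* consumed by get_hphys on NPT faults */
        if (x86_cpu_handle_mmu_fault(cs, addr, size, access_type, mmu_idx)) {
            raise_exception_err_ra(env, cs->exception_index, env->error_code, retaddr);
        }
    }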
It eases code review; the unit is explicit.
Patch generated using:
$ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/
and modified manually.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20180625124238.25339-33-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
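A hedged before/after illustration (the BOARD_SRAM_SIZE constant is hypothetical):

    #include "qemu/units.h"   /* KiB, MiB, GiB byte-size constants */

    /* Before: the unit is implicit in a magic number.
     *   #define BOARD_SRAM_SIZE 0x40000
     * After: the unit is explicit in the expression. */
    #define BOARD_SRAM_SIZE (256 * KiB)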
A break statement was missing when this feature was added in 89e71e873d
("target/openrisc: implement shadow registers"). This caused strange
issues, as we got writes into the translation block jump cache and
other bits of state.
Fixes: 89e71e873d ("target/openrisc: implement shadow registers")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Add support for Hyper-V TLB flush which recently got added to KVM.
Just like regular Hyper-V, we announce HV_EX_PROCESSOR_MASKS_RECOMMENDED
regardless of how many vCPUs we have. Windows is 'smart' and uses the
less expensive non-EX hypercall whenever possible (when it wants to
flush the TLB for all vCPUs, or when the maximum vCPU index in the set
of vCPUs requiring a flush is less than 64).
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20180610184927.19309-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
tcg_s390_tod_updated() is always called with the iothread lock held
(e.g. from S390TODClass->set(), via HELPER(sck) or on incoming
migration). The helper we call takes the lock itself, which is bad.
Let's change that by factoring out the update of the ckc timer. This
looks much nicer than having to call a helper from another function.
While touching it, we also make sure that env->ckc is updated even if
the new value is -1ULL; until now it would not have been modified in
that case.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180629170520.13671-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>