A machine using the hvf accelerator with SMP sometimes hangs at boot
because all processors execute instructions at startup,
including the early I/O emulation. We should allow only the bootstrap
processor to initialize the machine and then wake up the secondary
processors by interrupt.
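A minimal sketch of the idea (using QEMU's generic CPUState fields; the
actual hvf patch may differ in detail): keep the application processors
halted out of reset so that only the bootstrap processor runs the early
firmware, and let it wake the others by interrupt later.

    static void reset_vcpu_sketch(CPUState *cpu)
    {
        /* Only CPU 0 (the BSP) starts executing; the other CPUs stay
         * halted until the BSP wakes them up by interrupt. */
        if (cpu->cpu_index != 0) {
            cpu->halted = 1;
        }
    }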
Signed-off-by: Heiher <r@hev.cc>
Message-Id: <20190123073402.28465-1-r@hev.cc>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The machine description we send is silently thrown on the floor by GDB,
which then silently uses the default machine description, because the XML
parse fails on a <feature> nested within another <feature>.
As a result, changes to the XML in the QEMU source code have no effect.
In addition, the default machine description has fs_base, which fails to
be retrieved and breaks the whole register window. Add it and the
other control registers.
Signed-off-by: Doug Gale <doug16k@gmail.com>
Message-Id: <20190124040457.2546-1-doug16k@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In 16-bit addressing mode, when Mod = 0 and R/M = 6, the decoded
displacement never reaches decode_linear_addr and is lost. Instructions
that involve this ModRM combination therefore always get a pointer with
zero offset from the beginning of the DS segment.
The change fixes drawing in F-BIRD from day 1 of the '18 advent calendar.
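For illustration only (a simplified pseudo-decoder, not the actual hvf
decode code): with Mod = 0 and R/M = 6 the 16-bit effective address is the
bare disp16, and that displacement has to be carried into the linear
address calculation instead of being dropped.

    static uint16_t ea16_sketch(int mod, int rm, uint16_t disp16)
    {
        if (mod == 0 && rm == 6) {
            /* no base/index register: the effective address is disp16 */
            return disp16;
        }
        /* other Mod/R/M combinations add BX/BP/SI/DI plus disp8/disp16 */
        return 0;
    }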
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20190125154743.14498-1-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
MPX support is being phased out by Intel and actually I am not sure that
OS X has ever enabled it in XCR0. Drop it from the Hypervisor.framework
acceleration.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commit 5131dc433d.
The new instruction 'PCONFIG' will not be exposed to guests.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1545227081-213696-3-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Processor tracing is not yet implemented for KVM and it will be an
opt-in feature requiring a special module parameter.
Disable it, because it is wrong to enable it by default and
it is impossible that anyone has ever used it.
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
PCONFIG is not available to guests; it must be specifically enabled
using the PCONFIG_ENABLE execution control. Disable it, because
no one can ever use it.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1545227081-213696-2-git-send-email-robert.hu@linux.intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/xtensa/tags/20190204-xtensa' into staging
target/xtensa: SMP updates and various fixes
- fix CPU wakeup on runstall changes; expose runstall as an IRQ line;
- place mini-bootloader at the BSP reset vector;
- expose CPU core frequency in XTFPGA board FPGA register;
- rearrange access to external interrupts of xtensa cores;
- add MX interrupt distributor and use it on SMP XTFPGA boards;
- add test_mmuhifi_c3 xtensa core variant;
- raise number of CPUs that can be instantiated on XTFPGA boards.
# gpg: Signature made Mon 04 Feb 2019 18:59:32 GMT
# gpg: using RSA key 2B67854B98E5327DCDEB17D851F9CC91F83FA044
# gpg: issuer "jcmvbkbc@gmail.com"
# gpg: Good signature from "Max Filippov <filippov@cadence.com>" [unknown]
# gpg: aka "Max Filippov <max.filippov@cogentembedded.com>" [full]
# gpg: aka "Max Filippov <jcmvbkbc@gmail.com>" [full]
# Primary key fingerprint: 2B67 854B 98E5 327D CDEB 17D8 51F9 CC91 F83F A044
* remotes/xtensa/tags/20190204-xtensa:
hw/xtensa: xtfpga: raise CPU number limit
target/xtensa: add test_mmuhifi_c3 core
hw/xtensa: xtfpga: use MX PIC for SMP
target/xtensa: add MX interrupt controller
target/xtensa: expose core runstall as an IRQ line
target/xtensa: rearrange access to external interrupts
target/xtensa: drop function xtensa_timer_irq
target/xtensa: fix access to the INTERRUPT SR
hw/xtensa: xtfpga: use core frequency
hw/xtensa: xtfpga: fix bootloader placement in SMP
target/xtensa: add qemu_cpu_kick to xtensa_runstall
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As floating point registers overlay some vector registers and we want
to make use of the general tcg_gvec infrastructure that assumes vectors
are not stored in globals but in memory, don't model floating point
registers as globals anymore. This is then similar to how arm handles
it.
Reading/writing a floating point register means reading/writing memory now.
Break up the ugly in2_x2() handling that modifies both in1 and in2 into
in2_x2l and in2_x2h. This makes things more readable. Also, in1_x1() is
ugly as it touches out/out2; get rid of that and use prep_x1() instead.
As we are no longer able to use the original global variables for
out/out2, we have to use new temporary variables and write from them to
the target registers using wout_ helpers.
E.g. an instruction that reads and writes x1 will use
- prep_x1 to get the values into out/out2
- wout_x1 to write the values from out/out2
This special handling is needed for x1 as it is often used along with
other inputs, so in1/in2 is already used.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190204154406.16122-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
I plan to deprecate the -mem-path option and replace it with memory-backend;
for that it's necessary to get rid of the mem_path global variable.
Do it for the s390x case, replacing it with an alternative way to enable
the 1MB hugepage capability.
To do that, replace qemu_mempath_getpagesize() with qemu_getrampagesize(),
which also checks for -mem-path provided RAM.
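In essence the substitution is the following (sketch; the surrounding
s390x code is omitted):

    /* before: depends on the mem_path global */
    long pagesize = qemu_mempath_getpagesize(mem_path);

    /* after: no global needed, -mem-path backed RAM is still considered */
    long pagesize = qemu_getrampagesize();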
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <1548834906-133241-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
MTTCG should be enabled by default whenever the memory model allows
it. s390x was missing its definition of TCG_GUEST_DEFAULT_MO meaning
the user had to manually specify --accel tcg,thread=multi.
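The fix boils down to one definition in the s390x CPU header. The exact
mask has to match the architecture's memory model, so the value below is
only indicative (it mirrors what other strongly-ordered targets use):

    /* target/s390x/cpu.h (sketch) */
    #define TCG_GUEST_DEFAULT_MO      (TCG_MO_ALL & ~TCG_MO_ST_LD)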
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: David Hildenbrand <david@redhat.com>
Message-Id: <20190118171848.27332-1-alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Following on from the previous work, there are numerous endian-related hacks
in int_helper.c that can now be replaced with Vsr* macros.
There are also a few places where the VECTOR_FOR_INORDER_I macro can be
replaced with a normal iterator since the processing order is irrelevant.
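As an illustration of the pattern (not a literal hunk from this patch),
an endian-dependent access such as u64[HI_IDX]/u64[LO_IDX] becomes:

    /* r, a, b are vector register pointers; VsrD(0) is the most
     * significant doubleword regardless of host endianness */
    r->VsrD(0) = a->VsrD(0) + b->VsrD(0);
    r->VsrD(1) = a->VsrD(1) + b->VsrD(1);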
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Richard points out that these macros suffer from a -fsanitize=shift bug in that
they improperly handle n == 0 turning it into a shift by 32/64 respectively.
Replace them with QEMU's existing ror32() and ror64() functions instead.
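For reference, the helpers in qemu/bitops.h are well defined for a zero
rotate count, so a hand-rolled (x >> n) | (x << (32 - n)) simply becomes
something like (sketch):

    #include "qemu/bitops.h"

    static uint32_t rotate_example(uint32_t value, unsigned int n)
    {
        return ror32(value, n);   /* well-defined even when n == 0 */
    }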
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
As pointed out by Richard: it does not need the mask argument, nor does it need
the recast argument. The masking is implied by the cast argument, and the
recast is implied by the assignment.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These macros can be eliminated by instead using the relevant Vsr* macros in
the few locations where they appear.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The original purpose of these macros was to correctly reference the high and low
parts of the VSRs regardless of the host endianness.
Replace these direct references to high and low parts with the relevant VsrD
macro instead, and completely remove the now-unused HI_IDX and LO_IDX macros.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current implementations make use of the endian-specific macros HI_IDX and
LO_IDX directly to calculate array offsets.
Rework the implementation to use the Vsr* macros so that these per-endian
references can be removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current implementations make use of the endian-specific macros MRGLO/MRGHI
and also reference HI_IDX and LO_IDX directly to calculate array offsets.
Rework the implementation to use the Vsr* macros so that these per-endian
references can be removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These fields have now been replaced by equivalents under the machine
data.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This prepares us for eliminating the use of direct array access within the VMX
instruction implementations.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It has been there since the enablement of PR KVM for PAPR, i.e., commit
f61b4bedaf in 2011. It is not clear why it was needed at that time, but it
is definitely not needed with the current code.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A flawed test led to the instructions always being treated as
unallocated encodings.
Fixes: https://bugs.launchpad.net/bugs/1813460
Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since QEMU does not support the ARMv8.2-LVA (Large Virtual Address)
extension yet, the VA address space is 48 bits plus a sign bit. User
mode can only handle the positive half of the address space, so that
makes a limit of 48 bits.
(With LVA, it would be 53 and 52 bits respectively.)
The incorrectly large address space conflicts with PAuth instructions,
which use bits 48-54 and 56-63 for the pointer authentication code. This
also conflicts with (as yet unsupported by QEMU) data tagging and with
the ARMv8.5-MTE extension.
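One plausible shape of such a limit for user-only emulation (illustrative
sketch, not necessarily the exact hunk): cap guest virtual addresses at 48
bits so that bits 48-63 stay free for the PAC and for tags.

    /* target/arm/cpu.h, CONFIG_USER_ONLY (sketch) */
    #define TARGET_VIRT_ADDR_SPACE_BITS  48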
Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Drop the pac properties. This approach cannot work as written
because the properties are applied before arm_cpu_reset, which
zeros SCTLR_EL1 (amongst everything else).
We can re-introduce the properties if they turn out to be useful.
But since linux 5.0 enables all of the keys, they may not be.
Fixes: 1ae9cfbd47
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, the set_pc logic was unclear: it raised questions about whether
it should simply apply a value to the PC or also perform additional checks,
for example setting the Thumb bit on an Arm CPU. Define the set_pc logic
as “configure the PC, as was done in the ELF file” and implement the
synchronize_with_tb hook for preserving the PC after cpu_tb_exec.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190129121817.7109-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These bits become writable with the ARMv8.3-PAuth extension.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190129143511.12311-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make PMU overflow interrupts more accurate by using a timer to predict
when the counters will overflow, rather than waiting for an event that
happens to give us an opportunity to check them.
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190124162401.5111-3-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Whenever we notice that a counter overflow has occurred, send an
interrupt. This is made more reliable with the addition of a timer in a
follow-on commit.
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190124162401.5111-2-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In disas_simd_indexed(), for the case of "complex fp", each indexable
element is a complex pair, so the total size is twice that indicated
in the 'size' field in the encoding. We were trying to do this
"double the size" operation with a left shift by 1, but this is
incorrect because the 'size' field is a MO_8/MO_16/MO_32/MO_64
value, and doubling the size should be done by a simple increment.
This meant we were mishandling FCMLA (by element) of values where
the real and imaginary parts are 32-bit floats, and would incorrectly
UNDEF this encoding. (No other insns take this code path, and for
16-bit floats it happens that 1 << 1 and 1 + 1 are both the same).
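Concretely ('size' is the log2 of the element size in bytes; illustration
of the reasoning, not the exact diff):

    /* MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3 */
    int pair_size = size + 1;  /* 16-bit floats: 1 -> 2; 32-bit: 2 -> 3 */
    /* "size << 1" only happens to give the same result for MO_16
     * (1 << 1 == 1 + 1 == 2), which is why the bug went unnoticed. */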
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190129140411.682-3-peter.maydell@linaro.org
The FCMLA (by element) instruction exists in the
"vector x indexed element" encoding group, but not in
the "scalar x indexed element" group. Correctly UNDEF
the unallocated encodings.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190129140411.682-2-peter.maydell@linaro.org
In the AdvSIMD scalar x indexed element and vector x indexed element
encoding group, the SDOT and UDOT instructions are vector only,
and their opcode is unallocated in the scalar group. Correctly
UNDEF this unallocated encoding.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-8-peter.maydell@linaro.org
In the encoding groups
* floating-point data-processing (1 source)
* floating-point data-processing (2 source)
* floating-point data-processing (3 source)
* floating-point immediate
* floating-point compare
* floating-point conditional compare
* floating-point conditional select
bit 31 is M and bit 29 is S (and bit 30 is 0, already checked at
this point in the decode). None of these groups allocate any
encoding for M=1 or S=1. We checked this in disas_fp_compare(),
disas_fp_ccomp() and disas_fp_csel(), but missed it in disas_fp_1src(),
disas_fp_2src(), disas_fp_3src() and disas_fp_imm().
We also missed that in the fp immediate encoding the imm5 field
must be all zeroes.
Correctly UNDEF the unallocated encodings here.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-7-peter.maydell@linaro.org
In the "add/subtract (extended register)" encoding group, the "opt"
field in bits [23:22] must be zero. Correctly UNDEF the unallocated
encodings where this field is not zero.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-6-peter.maydell@linaro.org
In the AdvSIMD load/store single structure encodings, the
non-post-indexed case should have zeroes in [20:16] (which is the
Rm field for the post-indexed case). Bit 31 must also be zero
(a check we got right in ldst_multiple but not here). Correctly
UNDEF these unallocated encodings.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-5-peter.maydell@linaro.org
In the AdvSIMD load/store multiple structures encodings,
the non-post-indexed case should have zeroes in [20:16]
(which is the Rm field for the post-indexed case).
Correctly UNDEF the currently unallocated encodings which
have non-zeroes in those bits.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-4-peter.maydell@linaro.org
The PRFM prefetch insn in the load/store with imm9 encodings
requires idx field 0b00; we were underdecoding this by
only checking !is_unpriv (which is equivalent to idx != 2).
Correctly UNDEF the unallocated encodings where idx == 0b01
and 0b11 as well as 0b10.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-3-peter.maydell@linaro.org
The "system instructions" and "system register move" subcategories
of "branches, exception generating and system instructions" for A64
only apply if bits [23:22] are zero; other values are currently
unallocated. Correctly UNDEF these unallocated encodings.
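The added check has roughly this shape (simplified sketch using the
existing decode helpers):

    /* bits [23:22] must be zero for these subcategories */
    if (extract32(insn, 22, 2) != 0) {
        unallocated_encoding(s);
        return;
    }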
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190125182626.9221-2-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging
- add device category (edu, i8042, sd memory card)
- code clean-up
- LGPL information clean-up
- fix typo (acpi)
# gpg: Signature made Wed 30 Jan 2019 13:21:50 GMT
# gpg: using RSA key F30C38BD3F2FBE3C
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg: aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C
* remotes/vivier2/tags/trivial-branch-pull-request:
virtio-blk: remove duplicate definition of VirtIOBlock *s pointer
hw/block: clean up stale xen_disk trace entries
target/m68k: Fix LGPL information in the file headers
target/s390x: Fix LGPL version in the file header comments
tcg: Fix LGPL version number
target/tricore: Fix LGPL version number
target/openrisc: Fix LGPL version number
COPYING.LIB: Synchronize the LGPL 2.1 with the version from gnu.org
Don't talk about the LGPL if the file is licensed under the GPL
hw: sd: set category of the sd memory card
hw: input: set category of the i8042 device
typo: apci->acpi
hw: edu: set category of the edu device
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It's either "GNU *Library* General Public License version 2" or
"GNU Lesser General Public License version *2.1*", but there was
no "version 2.0" of the "Lesser" license. So assume that version
2.1 is meant here.
Also some files mention the GPL instead of the LGPL after declaring
that the files are licensed under the LGPL, so change these spots to
use LGPL, too.
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1548769438-28942-1-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
It's either "GNU *Library* General Public License version 2" or
"GNU Lesser General Public License version *2.1*", but there was
no "version 2.0" of the "Lesser" license. So assume that version
2.1 is meant here.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <1548769067-20792-1-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
It's either "GNU *Library* General Public version 2" or "GNU Lesser
General Public version *2.1*", but there was no "version 2.0" of the
"Lesser" library. So assume that version 2.1 is meant here.
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1548252536-6242-4-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
It's either "GNU *Library* General Public version 2" or "GNU Lesser
General Public version *2.1*", but there was no "version 2.0" of the
"Lesser" library. So assume that version 2.1 is meant here.
Cc: Stafford Horne <shorne@gmail.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1548252536-6242-3-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Some files claim that the code is licensed under the GPL, but then
suddenly suggest that the user should have a look at the LGPL.
That's of course nonsense; replace it with the correct GPL wording
instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1548255083-8190-1-git-send-email-thuth@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging
x86 queue, 2019-01-28
Two small CPU model updates:
* Enable NPT and NRIPSAVE on AMD CPUs
* Update stepping of Cascadelake-Server
# gpg: Signature made Mon 28 Jan 2019 19:36:52 GMT
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-next-pull-request:
i386: Enable NPT and NRIPSAVE for AMD CPUs
i386: Update stepping of Cascadelake-Server
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A bug was introduced during a respin of:
commit 57a4a11b2b
target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0
This patch introduced two calls to get_pmceid() during CPU
initialization - one each for PMCEID0 and PMCEID1. In addition to
building the register values, get_pmceid() clears an internal array
mapping event numbers to their implementations (supported_event_map)
before rebuilding it. This is an optimization since much of the logic is
shared. However, since it was called twice, the contents of
supported_event_map reflect only the events in PMCEID1 (the second call
to get_pmceid()).
Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back
into a single function call, and name it more appropriately since it is
doing more than simply generating the contents of the PMCEID[01]
registers.
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190123195814.29253-1-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The current behavior of v8m_security_lookup in helper.c only checks whether the
IDAU specifies a higher security if the SAU is enabled. If SAU.ALLNS is set to
1, this will lead to addresses being treated as non-secure, even though the
IDAU indicates that they must be secure.
This patch changes the behavior to also check the IDAU if the SAU is currently
disabled.
(This brings the behaviour here into line with the v8M Arm ARM
SecurityCheck() pseudocode.)
Signed-off-by: Thomas Roth <code@stacksmashing.net>
Message-id: CAGGekkuc+-tvp5RJP7CM+Jy_hJF7eiRHZ96132sb=hPPCappKg@mail.gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added pseudocode ref to the commit message, fixed comment style]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When tsz == 0, aarch32 selects the address space via exclusion,
and there are no "top_bits" remaining that require validation.
Fixes: ba97be9f4a
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190125184913.5970-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The runstall signal looks very much like a level-triggered IRQ line. Provide
an xtensa_get_runstall function that returns the runstall IRQ.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Replace xtensa_get_extint, which returns a single external IRQ descriptor,
with xtensa_get_extints, which returns a vector of all external IRQs.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
It's a one-liner used in a single place; move its implementation there
and remove its declaration.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Modern AMD CPUs support NPT and NRIPSAVE features and KVM exposes these
when present. NRIPSAVE appeared somewhere in the Opteron_G3 lifetime (e.g.
QuadCore AMD Opteron 2378 has it but QuadCore AMD Opteron HE 2344 doesn't);
NPT was introduced a bit earlier.
Add the FEAT_SVM leaf to Opteron_G4/G5 and EPYC/EPYC-IBPB cpu models.
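In the CPU model tables the change amounts to adding an SVM feature word
along these lines (one model shown as an example; treat the exact flag
names as illustrative):

    /* target/i386/cpu.c, builtin_x86_defs entry (sketch) */
    .features[FEAT_SVM] = CPUID_SVM_NPT | CPUID_SVM_NRIPSAVE,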
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20190121155051.5628-1-vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Update the stepping from 5 to 6, so that
the Cascadelake-Server CPU model can support AVX512VNNI
and the MSR-based features exposed by ARCH_CAPABILITIES.
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20181227024304.12182-2-tao3.xu@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-january-25-2019' into staging
MIPS queue for January 25, 2019
# gpg: Signature made Fri 25 Jan 2019 13:25:57 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-january-25-2019:
docs/qemu-cpu-models: Add MIPS/nanoMIPS QEMU supported CPU models
qemu-doc: Add nanoMIPS ISA information
tests: tcg: mips: Remove old directories
tests: tcg: mips: Add two new Makefiles
tests: tcg: mips: Move source files to new locations
MAINTAINERS: Update MIPS sections
target/mips: Add I6500 core configuration
target/mips: nanoMIPS: Fix branch handling
disas: nanoMIPS: Amend DSP instructions related comments
target/mips: Extend gen_scwp() functionality to support EVA
target/mips: Correct the second argument type of cpu_supports_isa()
target/mips: nanoMIPS: Rename macros for extracting 3-bit-coded GPR numbers
target/mips: nanoMIPS: Remove an unused macro
target/mips: nanoMIPS: Remove duplicate macro definitions
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
INTERRUPT special register may be changed both by the core (by writing
to INTSET and INTCLEAR registers) and by external events (by triggering
and clearing HW IRQs). In MTTCG this state must be protected from
concurrent access, otherwise interrupts may be lost or spurious
interrupts may be detected.
Use atomic operations to change INTSET SR.
Fix wsr.intset so that it doesn't clear any bits.
Fix wsr.intclear so that it doesn't clear the bit that corresponds to NMI.
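The core of the fix is to make the read-modify-write of the INTERRUPT SR
atomic; roughly (sketch, with 'bits' standing for the value being set or
cleared):

    /* set bits without losing a concurrent update from a HW IRQ */
    atomic_or(&env->sregs[INTSET], bits);

    /* clear bits, likewise; the NMI bit must never be cleared here */
    atomic_and(&env->sregs[INTSET], ~bits);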
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
When xtensa_runstall is called to unstall a core it needs to kick it
after clearing the runstall flag; otherwise the core doesn't start
immediately. There's also no point in clearing CPU_INTERRUPT_HALT, so
drop it.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Add the I6500 core configuration. Note that this configuration is
supported only on a best-effort basis due to the lack of certain
features in QEMU.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Fix nanoMIPS branch handling.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Extend gen_scwp() functionality to support EVA by adding an
additional argument, modify internals of the function to handle
new functionality, and accordingly change its invocations.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
"insn_flags" bitfield was expanded from 32-bit to 64-bit in commit
f9c9cd63e3. However, this was not reflected in the second argument
of the function cpu_supports_isa(). By chance, this did not cause
any wrong behavior, since the upper halves of all values currently
passed as the second argument are zero. However, this is still
a bug waiting to happen. Correct this by changing the type of the
second argument to be 64-bit.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Rename macros for extracting 3-bit-coded GPR numbers, to achieve
better consistency with the nanoMIPS documentation.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Several macros were defined twice, with identical values, so remove the
duplicates. They were previously added in 80845edf37.
This reverts commit 6bfa9f4c9c.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
When using the e6500 CPU, QEMU generates a fatal error after
complaining about registering SPR 604 twice.
Building and testing with commit
9b2e891ec5 shows the issue:
qemu-system-ppc64 --version
QEMU emulator version 3.1.50 (v3.1.0-456-g9b2e891ec5-dirty)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
qemu-system-ppc64 -M none -cpu e6500
Error: Trying to register SPR 604 (25c) twice !
Signed-off-by: Jon Diekema <jon.diekema@ge.com>
Message-Id: <CALvuzg43uSodseEHjNaRcPFBKKPTY2mcppUbYgiLL=QO9RxX_Q@mail.gmail.com>
[removed duplicated mail header in the commit message]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Add MicroBlaze CPU properties to enable exceptions on failed
bus accesses.
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Switch the microblaze target from the old unassigned_access hook
to the transaction_failed hook.
The notable difference is that rather than it being called
for all physical memory accesses which fail (including
those made by DMA devices or by the gdbstub), it is only
called for those made by the CPU via its MMU. For
microblaze this makes no difference because none of the
target CPU code needs to make loads or stores by physical
address.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
[EI: Add space in qemu_log()]
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
When compiling the ppc code with clang and -std=gnu99, there are a
couple of warnings/errors like this one:
CC ppc64-softmmu/hw/intc/xics.o
In file included from hw/intc/xics.c:35:
include/hw/ppc/xics.h:43:25: error: redefinition of typedef 'ICPState' is a C11 feature
[-Werror,-Wtypedef-redefinition]
typedef struct ICPState ICPState;
^
target/ppc/cpu.h:1181:25: note: previous definition is here
typedef struct ICPState ICPState;
^
Work around the problems by including the proper headers in spapr.h
and by using struct forward declarations in cpu.h.
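The general pattern of the workaround (sketch, not the literal diff): keep
a single typedef in the header that owns the type and use plain struct
forward declarations elsewhere.

    /* include/hw/ppc/xics.h: the one and only typedef */
    typedef struct ICPState ICPState;

    /* target/ppc/cpu.h: use a plain forward declaration instead of a
     * second typedef; users there say "struct ICPState *" */
    struct ICPState;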
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-january-17-2019-v2' into staging
MIPS queue for January 17, 2019 - v2
# gpg: Signature made Fri 18 Jan 2019 15:55:35 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-january-17-2019-v2:
target/mips: Introduce 32 R5900 multimedia registers
target/mips: Rename 'rn' to 'register_name'
target/mips: Add CP0 register MemoryMapID
target/mips: Amend preprocessor constants for CP0 registers
target/mips: Update ITU to handle bus errors
target/mips: Update ITU to utilize SAARI and SAAR CP0 registers
target/mips: Add field and R/W access to ITU control register ICR0
target/mips: Provide R/W access to SAARI and SAAR CP0 registers
target/mips: Add fields for SAARI and SAAR CP0 registers
target/mips: Use preprocessor constants for 32 major CP0 registers
target/mips: Add preprocessor constants for 32 major CP0 registers
target/mips: Move comment containing summary of CP0 registers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This both advertises that we support four counters and enables them,
because pmu_num_counters() reads this value from PMCR.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-13-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The instruction event is only enabled when icount is used; cycles are
always supported. Always defining get_cycle_count (but altering its
behavior depending on CONFIG_USER_ONLY) allows us to remove some
CONFIG_USER_ONLY #defines throughout the rest of the code.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-12-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add arrays to hold the registers, the definitions themselves, access
functions, and logic to reset counters when PMCR.P is set. Update
filtering code to support counters other than PMCCNTR. Support migration
with raw read/write functions.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-11-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This commit doesn't add any supported events, but provides the framework
for adding them. We store the pm_event structs in a simple array, and
provide the mapping from the event numbers to array indexes in the
supported_event_map array. Because the value of PMCEID[01] depends upon
which events are supported at runtime, generate it dynamically.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-10-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is immediately necessary for the PMUv3 implementation to check
ID_DFR0.PerfMon to enable/disable specific features, but defines the
full complement of fields for possible future use elsewhere.
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-8-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an array for PMOVSSET so we only define it for v7ve+ platforms
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-7-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to only
return 'true' if the specified counter is enabled and neither prohibited
nor filtered.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-5-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because of the PMU's design, many register accesses have side effects
which are inter-related, meaning that the normal method of saving CP
registers can result in inconsistent state. These side-effects are
largely handled in pmu_op_start/finish functions which can be called
before and after the state is saved/restored. By doing this and adding
raw read/write functions for the affected registers, we avoid
migration-related inconsistencies.
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-4-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
pmccntr_read and pmccntr_write contained duplicate code that was already
being handled by pmccntr_sync. Consolidate the duplicated code into two
functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to
c15_ccnt in CPUARMState so that we can simultaneously save both the
architectural register value and the last underlying cycle count - this
ensures time isn't lost and will also allow us to access the 'old'
architectural register value in order to detect overflows in later
patches.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-3-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can perform this with fewer operations.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add 4 attributes that control the EL1 enable bits, as we may not
always want to turn on pointer authentication with -cpu max.
However, by default they are enabled.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the main crypto routine, an implementation of QARMA.
This matches, as much as possible, ARM pseudocode.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-28-richard.henderson@linaro.org
[PMM: fixed minor checkpatch nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is not really functional yet, because the crypto is not yet
implemented. This, however, follows the AddPAC pseudo function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is not really functional yet, because the crypto is not yet
implemented. This, however, follows the Auth pseudo function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Stripping out the authentication data does not require any crypto,
it merely requires the virtual address parameters.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The arm_regime_tbi{0,1} functions are replaceable with the new function
by giving the lowest and highest address.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use TBID in aa64_va_parameters depending on the data parameter.
This automatically updates all existing users of the function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will want to check TBI for I and D simultaneously.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need to reuse this from helper-a64.c. Provide a stub
definition for CONFIG_USER_ONLY. This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will shortly want to talk about TBI as it relates to data.
Passing around a pair of variables is less convenient than a
single variable.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out functions to extract the virtual address parameters.
Let the functions choose T0 or T1 address space half, if present.
Extract (most of) the control bits that vary between EL or Tx.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-19-richard.henderson@linaro.org
[PMM: fixed minor checkpatch comment nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The pattern
ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
is computing the full ARMMMUIdx, stripping off the ARM bits,
and then putting them back.
Avoid the extra two steps with the appropriate helper function.
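Assuming the helper introduced here is arm_mmu_idx(), the pattern above
collapses to (sketch):

    ARMMMUIdx mmu_idx = arm_mmu_idx(env);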
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is, or will shortly become, too big to inline.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Not that there are any stores involved, but why argue with ARM's
naming convention.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-15-richard.henderson@linaro.org
[fixed trivial comment nit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will enable PAuth decode in a subsequent patch.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is only used by AArch64. Code movement only.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now properly signals unallocated for REV64 with SF=0.
Allows for the opcode2 field to be decoded shortly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The cryptographic internals are stubbed out for now,
but the enable and trap bits are checked.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This path uses cpu_loop_exit_restore to unwind current processor state.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are 5 bits of state that could be added, but to save
space within tbflags, add only a single enable bit.
Helpers will determine the rest of the state at runtime.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Post v8.4 bits taken from SysReg_v85_xml-00bet8.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add storage space for the 5 encryption keys.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-2-richard.henderson@linaro.org
[PMM: use 0xf rather than -1 in FIELD_DP64() expressions to
avoid clang warnings about implicit truncation from int to
bitfield changing the value]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In U-boot, we switch from S-SVC -> Mon -> Hyp mode when we want to
enter Hyp mode. The change into Hyp mode is done by doing an
exception return from Mon. This doesn't work with current QEMU.
The problem is that in bad_mode_switch() we refuse to allow
the change of mode.
Note that bad_mode_switch() is used to do validation for two situations:
(1) changes to mode by instructions writing to CPSR.M
(ie not exception take/return) -- this corresponds to the
Armv8 Arm ARM pseudocode Arch32.WriteModeByInstr
(2) changes to mode by exception return
Attempting to enter or leave Hyp mode via case (1) is forbidden in
v8 and UNPREDICTABLE in v7, and QEMU is correct to disallow it
there. However, we're already doing that check at the top of the
bad_mode_switch() function, so if that passes then we should allow
the case (2) exception return mode changes to switch into Hyp mode.
We want to test whether we're trying to return to the nonexistent
"secure Hyp" mode, so we need to look at arm_is_secure_below_el3()
rather than arm_is_secure(), since the latter is always true if
we're in Mon (EL3).
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190109152430.32359-1-agraf@suse.de
[PMM: rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 32 R5900 128-bit registers are split into two 64-bit halves:
the lower halves are the GPRs and the upper halves are accessible
by the R5900-specific multimedia instructions.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Fredrik Noring <noring@nocrew.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Rename 'rn' to 'register_name' in CP0-related handlers.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add CP0 register MemoryMapID. Only the data field is added.
The corresponding functionality will be added in future
patches.
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Correct existing CP0-related preprocessor constants: replace
"CPO" with "CP0" (from letter "O" to digit "0") where needed.
In addition, add preprocessor constants for CP0 subregisters.
The names of the subregisters were chosen to be in sync with
the table of corresponding assembler mnemonics found in the
documentation for I6500 and I6400 (release 1.0).
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Update ITU to utilize SAARI and SAAR CP0 registers.
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Provide R/W access to SAARI and SAAR CP0 registers.
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add fields for SAARI and SAAR CP0 registers.
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Use preprocessor constants for 32 major CP0 registers.
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add preprocessor constants for 32 major CP0 registers.
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Move the comment containing the summary of CP0 registers. The checkpatch
script reported some tabs in the resulting diff, so convert
these tabs to spaces too.
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
The architecture specifies specification exceptions for all
unavailable subcodes.
The presence of subcodes is indicated by a query subcode; for
example, subcode 6 indicates that subcodes 3-6 are available.
So future systems might call new subcodes to check for new
features. This should not trigger a hardware error; instead,
we return the architected specification exception.
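A sketch of the intended behaviour (illustrative pseudo-C; both helpers are
placeholders, not actual QEMU functions):

    /* Unavailable subcodes must not stop the machine with a hardware
     * error; they take the architected specification exception. */
    static void handle_subcode(unsigned int subcode)
    {
        switch (subcode) {
        case 3 ... 6:                        /* subcodes we implement */
            handle_known_subcode(subcode);   /* placeholder */
            break;
        default:
            raise_specification_exception(); /* placeholder */
            break;
        }
    }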
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Cc: qemu-stable@nongnu.org
Message-Id: <20190111113657.66195-3-frankja@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Unlike hardware features, Hyper-V .feat_names are commented out, and it is
not obvious why we do that. Document the current status quo.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20181221141604.16935-1-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Some downstream distributions of QEMU set host-phys-bits=on by
default. This worked very well for most use cases, because
phys-bits really didn't have huge consequences. The only
difference was on the CPUID data seen by guests, and on the
handling of reserved bits.
This changed in KVM commit 855feb673640 ("KVM: MMU: Add 5 level
EPT & Shadow page table support"). Now choosing a large
phys-bits value for a VM has bigger impact: it will make KVM use
5-level EPT even when it's not really necessary. This means
using the host phys-bits value may not be the best choice.
Management software could address this problem by manually
configuring phys-bits depending on the size of the VM and the
amount of MMIO address space required for hotplug. But this is
not trivial to implement.
However, there's another workaround that would work for most
cases: keep using the host phys-bits value, but only if it's
smaller than 48. This patch makes this possible by introducing a
new "-cpu" option: "host-phys-bits-limit". Management software
or users can make sure they will always use 4-level EPT using:
"host-phys-bits=on,host-phys-bits-limit=48".
This behavior is still not enabled by default because QEMU
doesn't enable host-phys-bits=on by default. But users,
management software, or downstream distributions may choose to
change their defaults using the new option.
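For example, passing something like

-cpu host,host-phys-bits=on,host-phys-bits-limit=48

to QEMU (together with KVM) caps the guest physical address width at 48 bits
even on hosts that report more.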
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20181211192527.13254-1-ehabkost@redhat.com>
[ehabkost: removed test code while some issues are addressed]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
MPX support is being phased out by Intel; GCC has dropped it and Linux
is going to do the same. Even though KVM will have special code
to support MPX after the kernel proper stops enabling it in XCR0,
we probably also want to deprecate that in a few years. As a start,
do not enable it by default for any named CPU model starting with
the 4.0 machine types; this includes Skylake, Icelake and Cascadelake.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20181220121100.21554-1-pbonzini@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The missing functionality was added ~3 years ago with the Linux commit
46896c73c1a4 ("KVM: svm: add support for RDTSCP")
so reenable RDTSCP support on those CPU models.
Opteron_G2, being family 15, model 6, doesn't have RDTSCP support
(the real hardware doesn't have it; K8 got RDTSCP support with the NPT
models, i.e., models >= 0x40).
Document the host's minimum required kernel version, while at it.
Signed-off-by: Borislav Petkov <bp@suse.de>
Message-ID: <20181212200803.GG6653@zn.tnic>
[ehabkost: moved compat properties code to pc.c]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It was found that QMP users of QEMU (e.g. libvirt) may need
HV_CPUID_ENLIGHTMENT_INFO.EAX/HV_CPUID_NESTED_FEATURES.EAX information. In
particular, 'hv_tlbflush' and 'hv_evmcs' enlightenments are only exposed in
HV_CPUID_ENLIGHTMENT_INFO.EAX.
HV_CPUID_NESTED_FEATURES.EAX is exposed for two reasons: convenience
(we don't need to export it from hyperv_handle_properties()) and
future-proofing for the Enlightened MSR-Bitmap, PV EPT invalidation and
direct virtual flush features.
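For reference, these are the enlightenments that users would request with
"-cpu" flags along the lines of

-cpu host,hv_tlbflush,hv_evmcs

(possibly together with their prerequisite hv_* options; the exact property
spelling depends on the QEMU version in use).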
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20181126135958.20956-1-vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This patch set contains a handful of Michael's CSR-related cleanups,
which should allow us to proceed with more outstanding bug fixes that
depend on them.
Additionally, there is a patch that turns on USB. This works for me
when the kernel has the appropriate drivers (which will soon be in
defconfig) and I pass
-device usb-ehci
-drive id=my_usb_disk,file=usbdisk.img,if=none,format=raw
-device usb-storage,drive=my_usb_disk
to QEMU.
-----BEGIN PGP SIGNATURE-----
iQJHBAABCAAxFiEEAM520YNJYN/OiG3470yhUCzLq0EFAlw42s4THHBhbG1lckBk
YWJiZWx0LmNvbQAKCRDvTKFQLMurQS/2D/sFgvFwdh4UtLlNpaPMMTdfC6rh3TBJ
jGsTjKSt2PBi6v/5PcVSCYGqxKdNRmZCWO1QOuXBY0mtgs2it7Cy8gTj8587vx30
MZpwDdQyahUH2ekhUbWhoCcxB/VTaQuSiBn2z0BdYgUuYNDvBCDHVPzEOX4dCsfW
COKdgqKkGVHWS8jM6TQx18BlmWy7ZyBPYKE4vXLx7rGc06wuV6IHfJjJz9A9mT2D
C+olQe2xMxOOIKvViODN4q4p8XEcoZ4X8HZHS+XZqPUsdqq6XOj0NbcvzuWLe+r6
CSvj6wJeT2vndl7IxOc387esDYQT9gcpzHBr689VKZ8wsx8C6yGJbZ1ZdBMBMHzz
Vin/2wooXAVAEH5HR9vw/VKfcigGPIJ0nq6Ia3BJeYRMmhTvICwgabCJQ2keSYcD
xdv0OyplH6pZpvfsfDJFL377+qC8Rtr38SvLA4twvwkGwTNMjupombRV83HMC5F1
z9BkXgiiZiE4VZIWR6fhPDcg1zV4OJyuI3q/aKN8WM9yvOwYD8o7XGNo3M1zejvO
wTbWTt7xSvzFLUKik2etX6zgBz+myoJ6zpT7VbvjXh8q4pLBWZgAJJH7qkTc0A8k
L18Mo/wA9OPeVQ6mDxNS8IrJGYr4EehYG3bbtPgLhElO0IyP/z4YqGYBpJ6YRVXe
4EuQXMIaDcj1/w==
=JmSx
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-3.2-part2' into staging
RISC-V Updates for 3.2, Part 2
This patch set contains a handful of Michael's CSR-related cleanups,
which should allow us to proceed with more outstanding bug fixes that
depend on them.
Additionally, there is a patch that turns on USB. This works for me
when the kernel has the appropriate drivers (which will soon be in
defconfig) and I pass
-device usb-ehci
-drive id=my_usb_disk,file=usbdisk.img,if=none,format=raw
-device usb-storage,drive=my_usb_disk
to QEMU.
# gpg: Signature made Fri 11 Jan 2019 18:05:02 GMT
# gpg: using RSA key EF4CA1502CCBAB41
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>"
# gpg: aka "Palmer Dabbelt <palmer@sifive.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-3.2-part2:
default-configs: Enable USB support for RISC-V machines
RISC-V: Implement existential predicates for CSRs
RISC-V: Implement atomic mip/sip CSR updates
RISC-V: Implement modular CSR helper interface
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move helper functions related to interrupt and exception handling from
op_helper.c and helper.c to exc_helper.c. No functional changes.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Move HELPER functions related to native debugging from op_helper.c to
dbg_helper.c. No functional changes.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Move MMU-related helper functions from op_helper.c and helper.c to
mmu_helper.c. No functional changes.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Move helper functions related to register windows from op_helper.c to
win_helper.c. No functional changes.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Don't invalidate the TB containing the end of a zero-overhead loop when
LBEG or LEND change. Instead, encode in the TB cs_base the distance from
the start of the page where the TB starts to LEND, and generate loopback
code when the next PC matches the encoded LEND. Distance to a destination within the
same page and up to a maximum instruction length into the next page is
encoded literally, otherwise it's zero. The distance from LEND to LBEG
is also encoded in the cs_base: it's encoded literally when less than
256 or as 0 otherwise. This allows for TB chaining for the loopback
branch at the end of a loop for the most common loop sizes.
With this change the resulting emulation speed is about 10% higher in
softmmu mode on uClibc-ng and LTP tests. Emulation speed in Linux
user mode is a few percent lower because there's no direct TB chaining
between different memory pages. Testing with a lower limit on the direct TB
chaining range shows a gradual slowdown to ~15% for the block size of 64
bytes and ~50% for the block size of 32 bytes.
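A rough sketch of the encoding rules described above (the field widths and
layout here are assumptions chosen for illustration, not the actual
target/xtensa encoding):

    #include <stdint.h>

    /* Pack the loop geometry into a cs_base-like value.  Assumed layout:
     * the low 8 bits hold the LEND->LBEG distance (when < 256); the
     * remaining bits hold the distance from the TB's page start to LEND
     * (when reachable), zero otherwise. */
    static uint32_t pack_loop_cs_base(uint32_t tb_pc, uint32_t lbeg,
                                      uint32_t lend, uint32_t page_size,
                                      uint32_t max_insn_len)
    {
        uint32_t page_start = tb_pc & ~(page_size - 1);
        uint32_t lend_dist = 0;
        uint32_t loop_back = 0;

        /* Encoded literally only when LEND is within the same page or at
         * most one maximum instruction length into the next page. */
        if (lend - page_start < page_size + max_insn_len) {
            lend_dist = lend - page_start;
        }
        /* LEND->LBEG distance is encoded literally when less than 256. */
        if (lend - lbeg < 256) {
            loop_back = lend - lbeg;
        }
        return (lend_dist << 8) | loop_back;
    }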
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Most files that have TABs only contain a handful of them. Change
them to spaces so that we don't confuse people.
disas, standard-headers, linux-headers and libdecnumber are imported
from other projects and probably should be exempted from the check.
Outside those, after this patch the following files still contain both
8-space and TAB sequences at the beginning of the line. Many of them
have a majority of TABs, or were initially committed with all tabs.
bsd-user/i386/target_syscall.h
bsd-user/x86_64/target_syscall.h
crypto/aes.c
hw/audio/fmopl.c
hw/audio/fmopl.h
hw/block/tc58128.c
hw/display/cirrus_vga.c
hw/display/xenfb.c
hw/dma/etraxfs_dma.c
hw/intc/sh_intc.c
hw/misc/mst_fpga.c
hw/net/pcnet.c
hw/sh4/sh7750.c
hw/timer/m48t59.c
hw/timer/sh_timer.c
include/crypto/aes.h
include/disas/bfd.h
include/hw/sh4/sh.h
libdecnumber/decNumber.c
linux-headers/asm-generic/unistd.h
linux-headers/linux/kvm.h
linux-user/alpha/target_syscall.h
linux-user/arm/nwfpe/double_cpdo.c
linux-user/arm/nwfpe/fpa11_cpdt.c
linux-user/arm/nwfpe/fpa11_cprt.c
linux-user/arm/nwfpe/fpa11.h
linux-user/flat.h
linux-user/flatload.c
linux-user/i386/target_syscall.h
linux-user/ppc/target_syscall.h
linux-user/sparc/target_syscall.h
linux-user/syscall.c
linux-user/syscall_defs.h
linux-user/x86_64/target_syscall.h
slirp/cksum.c
slirp/if.c
slirp/ip.h
slirp/ip_icmp.c
slirp/ip_icmp.h
slirp/ip_input.c
slirp/ip_output.c
slirp/mbuf.c
slirp/misc.c
slirp/sbuf.c
slirp/socket.c
slirp/socket.h
slirp/tcp_input.c
slirp/tcpip.h
slirp/tcp_output.c
slirp/tcp_subr.c
slirp/tcp_timer.c
slirp/tftp.c
slirp/udp.c
slirp/udp.h
target/cris/cpu.h
target/cris/mmu.c
target/cris/op_helper.c
target/sh4/helper.c
target/sh4/op_helper.c
target/sh4/translate.c
tcg/sparc/tcg-target.inc.c
tests/tcg/cris/check_addo.c
tests/tcg/cris/check_moveq.c
tests/tcg/cris/check_swap.c
tests/tcg/multiarch/test-mmap.c
ui/vnc-enc-hextile-template.h
ui/vnc-enc-zywrle.h
util/envlist.c
util/readline.c
The following have only TABs:
bsd-user/i386/target_signal.h
bsd-user/sparc64/target_signal.h
bsd-user/sparc64/target_syscall.h
bsd-user/sparc/target_signal.h
bsd-user/sparc/target_syscall.h
bsd-user/x86_64/target_signal.h
crypto/desrfb.c
hw/audio/intel-hda-defs.h
hw/core/uboot_image.h
hw/sh4/sh7750_regnames.c
hw/sh4/sh7750_regs.h
include/hw/cris/etraxfs_dma.h
linux-user/alpha/termbits.h
linux-user/arm/nwfpe/fpopcode.h
linux-user/arm/nwfpe/fpsr.h
linux-user/arm/syscall_nr.h
linux-user/arm/target_signal.h
linux-user/cris/target_signal.h
linux-user/i386/target_signal.h
linux-user/linux_loop.h
linux-user/m68k/target_signal.h
linux-user/microblaze/target_signal.h
linux-user/mips64/target_signal.h
linux-user/mips/target_signal.h
linux-user/mips/target_syscall.h
linux-user/mips/termbits.h
linux-user/ppc/target_signal.h
linux-user/sh4/target_signal.h
linux-user/sh4/termbits.h
linux-user/sparc64/target_syscall.h
linux-user/sparc/target_signal.h
linux-user/x86_64/target_signal.h
linux-user/x86_64/termbits.h
pc-bios/optionrom/optionrom.h
slirp/mbuf.h
slirp/misc.h
slirp/sbuf.h
slirp/tcp.h
slirp/tcp_timer.h
slirp/tcp_var.h
target/i386/svm.h
target/sparc/asi.h
target/xtensa/core-dc232b/xtensa-modules.inc.c
target/xtensa/core-dc233c/xtensa-modules.inc.c
target/xtensa/core-de212/core-isa.h
target/xtensa/core-de212/xtensa-modules.inc.c
target/xtensa/core-fsf/xtensa-modules.inc.c
target/xtensa/core-sample_controller/core-isa.h
target/xtensa/core-sample_controller/xtensa-modules.inc.c
target/xtensa/core-test_kc705_be/core-isa.h
target/xtensa/core-test_kc705_be/xtensa-modules.inc.c
tests/tcg/cris/check_abs.c
tests/tcg/cris/check_addc.c
tests/tcg/cris/check_addcm.c
tests/tcg/cris/check_addoq.c
tests/tcg/cris/check_bound.c
tests/tcg/cris/check_ftag.c
tests/tcg/cris/check_int64.c
tests/tcg/cris/check_lz.c
tests/tcg/cris/check_openpf5.c
tests/tcg/cris/check_sigalrm.c
tests/tcg/cris/crisutils.h
tests/tcg/cris/sys.c
tests/tcg/i386/test-i386-ssse3.c
ui/vgafont.h
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20181213223737.11793-3-pbonzini@redhat.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Eric Blake <eblake@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Stefan Markovic <smarkovic@wavecomp.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most list head structs need not be given a name. In most cases the
name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV
or reverse iteration, but this does not apply to lists of other kinds,
and even for QTAILQ in practice this is only rarely needed. In addition,
we will soon reimplement those macros completely so that they do not
need a name for the head struct. So clean up everything, not giving a
name except in the rare case where it is necessary.
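A minimal sketch of the resulting style, using the qemu/queue.h macros (the
Device element type here is made up for illustration):

    #include "qemu/queue.h"

    typedef struct Device Device;           /* made-up example element */
    struct Device {
        int id;
        QTAILQ_ENTRY(Device) next;
    };

    /* No name for the head struct: insertion and forward iteration don't
     * need one; a name only matters for QTAILQ_LAST/QTAILQ_PREV-style
     * reverse access. */
    static QTAILQ_HEAD(, Device) devices = QTAILQ_HEAD_INITIALIZER(devices);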
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel HAXM now supports 32-bit and 64-bit Linux hosts. This patch includes
the corresponding userland changes.
Since the Darwin userland backend is POSIX-compliant, the hax-darwin.{c,h}
files have been renamed to hax-posix.{c,h}. This prefix is consistent with
the naming used in the rest of QEMU.
Signed-off-by: Alexandro Sanchez Bach <asanchez@kryptoslogic.com>
Message-Id: <20181115013331.65820-1-asanchez@kryptoslogic.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CSR predicate functions are added to the CSR table.
mstatus.FS and counter-enable checks are moved
to predicate functions, and two new predicates are
added: one checks misa.S for the s* CSRs and one
checks a new PMP CPU feature for the pmp* CSRs.
Processors that don't implement S-mode will trap
on access to s* CSRs and processors that don't
implement PMP will trap on accesses to pmp* CSRs.
PMP checks are disabled in riscv_cpu_handle_mmu_fault
when the PMP CPU feature is not present.
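A sketch of what such predicates might look like (the helper names
riscv_has_ext()/riscv_feature() and the return convention are assumptions
used for illustration, not a claim about the exact csr.c code):

    /* Predicate fails (non-zero, helper names assumed) when the S
     * extension or the PMP feature is absent, so accesses to the
     * corresponding CSRs trap. */
    static int smode(CPURISCVState *env, int csrno)
    {
        return riscv_has_ext(env, RVS) ? 0 : -1;
    }

    static int pmp(CPURISCVState *env, int csrno)
    {
        return riscv_feature(env, RISCV_FEATURE_PMP) ? 0 : -1;
    }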
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Use the new CSR read/modify/write interface to implement
atomic updates to mip/sip.
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Now that the 'intc' pointer is only used by the XICS interrupt mode,
let's make things clear and use a XICS type and name.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
which will be used by the machine only when the XIVE interrupt mode is
in use.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that the VMX and VSR register sets have been combined, the same macros can
be used to access both AVR and VSR field members.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The VSX register array is a block of 64 128-bit registers where the first 32
registers consist of the existing 64-bit FP registers extended to 128-bit
using new VSR registers, and the last 32 registers are the VMX 128-bit
registers, as shown below:
        64-bit               64-bit
+--------------------+--------------------+
|        FP0         |                    | VSR0
+--------------------+--------------------+
|        FP1         |                    | VSR1
+--------------------+--------------------+
|        ...         |        ...         | ...
+--------------------+--------------------+
|        FP30        |                    | VSR30
+--------------------+--------------------+
|        FP31        |                    | VSR31
+--------------------+--------------------+
|                   VMX0                  | VSR32
+-----------------------------------------+
|                   VMX1                  | VSR33
+-----------------------------------------+
|                   ...                   | ...
+-----------------------------------------+
|                  VMX30                  | VSR62
+-----------------------------------------+
|                  VMX31                  | VSR63
+-----------------------------------------+
In order to allow for future conversion of VSX instructions to use TCG vector
operations, recreate the same layout using an aligned version of the existing
vsr register array.
Since the old fpr and avr register arrays are removed, the existing callers
must also be updated to use the correct offset in the vsr register array. This
also includes switching the relevant VMState fields over to using subarrays
to make sure that migration is preserved.
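A simplified sketch of the layout and of the kind of accessors callers are
switched to (the type and helper names below are illustrative, and host-endian
element ordering is ignored):

    #include <stdint.h>

    /* Sketch: one aligned array of 128-bit registers, with FP0..31 in the
     * low halves of VSR0..31 and VMX0..31 occupying VSR32..63. */
    typedef union {
        uint64_t u64[2];
        uint32_t u32[4];
    } vsr_t;

    typedef struct {
        vsr_t vsr[64];
        /* ... other CPU state ... */
    } cpu_state_sketch_t;

    static inline uint64_t *fpr_ptr(cpu_state_sketch_t *env, int i)
    {
        return &env->vsr[i].u64[0];     /* FP register i */
    }

    static inline vsr_t *avr_ptr(cpu_state_sketch_t *env, int i)
    {
        return &env->vsr[32 + i];       /* VMX register i */
    }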
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since the VSX registers are actually a superset of the VMX registers, they
can be represented by the same type. Merge ppc_avr_t into ppc_vsr_t and change
ppc_avr_t to be a simple typedef alias.
Note that due to a difference in the naming of the float32 member between
ppc_avr_t and ppc_vsr_t, references to the ppc_avr_t f member must be replaced
with f32 instead.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of accessing the FPR, VMX and VSX registers through static arrays of
TCGv_i64 globals, remove those globals and change the helpers to load/store
data directly from/to cpu_env.
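A sketch of the pattern (assuming the standard tcg_gen_ld_i64()/
tcg_gen_st_i64() helpers; the offset expression is illustrative and ignores
host-endian element ordering):

    /* Read/write FPR n directly from CPUPPCState through cpu_env instead
     * of going through a TCGv_i64 global. */
    static void get_fpr(TCGv_i64 dst, int regno)
    {
        tcg_gen_ld_i64(dst, cpu_env,
                       offsetof(CPUPPCState, vsr[regno].u64[0]));
    }

    static void set_fpr(int regno, TCGv_i64 src)
    {
        tcg_gen_st_i64(src, cpu_env,
                       offsetof(CPUPPCState, vsr[regno].u64[0]));
    }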
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>