Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to return
'true' only if the specified counter is enabled and neither prohibited
nor filtered.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-5-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
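A minimal self-contained sketch of the predicate described above; the struct
layout and field names below are hypothetical (they do not match QEMU's
internals), and only the shape of the check is the point:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t pmcr;        /* global control; bit 0 = E (enable)            */
        uint32_t pmcntenset;  /* per-counter enable bits                       */
        bool prohibited;      /* e.g. counting disallowed in the current Security state */
        bool filtered;        /* e.g. PMEVTYPER filter excludes the current EL */
    } PMUSketch;

    static bool pmu_counter_enabled_sketch(const PMUSketch *s, unsigned ctr)
    {
        /* enabled globally and per-counter, and neither prohibited nor filtered */
        return (s->pmcr & 1u) &&
               (s->pmcntenset & (1u << ctr)) &&
               !s->prohibited &&
               !s->filtered;
    }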
Because of the PMU's design, many register accesses have side effects
which are inter-related, meaning that the normal method of saving CP
registers can result in inconsistent state. These side effects are
largely handled in pmu_op_start/finish functions which can be called
before and after the state is saved/restored. By doing this and adding
raw read/write functions for the affected registers, we avoid
migration-related inconsistencies.
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-4-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
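As a rough illustration of the bracketing described above (the names are
modeled on the commit message; the bodies and wiring are a sketch, not QEMU's
actual code):

    /* Illustrative only: resolve the interrelated side effects once before
     * the registers are saved/restored raw, and re-establish counting once
     * afterwards, so the raw values are internally consistent. */
    typedef struct PMUSketchState PMUSketchState;

    void pmu_op_start_sketch(PMUSketchState *s);      /* fold live counts into regs */
    void pmu_op_finish_sketch(PMUSketchState *s);     /* resume counting afterwards */
    void save_registers_raw_sketch(PMUSketchState *s);

    static void migrate_pmu_sketch(PMUSketchState *s)
    {
        pmu_op_start_sketch(s);         /* before state is saved */
        save_registers_raw_sketch(s);   /* raw accessors, no side effects */
        pmu_op_finish_sketch(s);        /* after state is saved */
    }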
pmccntr_read and pmccntr_write contained duplicate code that was already
being handled by pmccntr_sync. Consolidate the duplicated code into two
functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to
c15_ccnt in CPUARMState so that we can simultaneously save both the
architectural register value and the last underlying cycle count - this
ensures time isn't lost and will also allow us to access the 'old'
architectural register value in order to detect overflows in later
patches.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-3-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
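To illustrate the companion-field idea, a self-contained sketch (the state
layout and the cycle source are hypothetical, not QEMU's actual fields):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t ccnt;       /* architectural PMCCNTR value                  */
        uint64_t ccnt_base;  /* underlying cycle count at the last (re)start */
        bool     counting;
    } CCNTSketch;

    extern uint64_t underlying_cycles(void);   /* hypothetical cycle source */

    /* Fold the elapsed cycles into the architectural value. */
    static void pmccntr_op_start_sketch(CCNTSketch *s)
    {
        if (s->counting) {
            s->ccnt += underlying_cycles() - s->ccnt_base;
        }
    }

    /* Re-arm counting from 'now', so no time between start and finish is lost. */
    static void pmccntr_op_finish_sketch(CCNTSketch *s)
    {
        if (s->counting) {
            s->ccnt_base = underlying_cycles();
        }
    }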
In some cases it may be helpful to modify state before saving it for
migration, and then modify the state back after it has been saved. The
existing pre_save function provides half of this functionality. This
patch adds a post_save function to provide the second half.
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 20181211151945.29137-2-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
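A hypothetical use of the new hook; the device, field and helper names below
are made up for illustration, and only the pre_save/post_save pairing reflects
the interface this patch adds:

    typedef struct {
        uint64_t counter;
        /* ... plus live, non-migratable bookkeeping ... */
    } MyDevSketch;

    void mydev_fold_live_state(MyDevSketch *s);    /* hypothetical */
    void mydev_resume_live_state(MyDevSketch *s);  /* hypothetical */

    static int mydev_pre_save(void *opaque)
    {
        MyDevSketch *s = opaque;
        mydev_fold_live_state(s);     /* put state into its migratable form */
        return 0;
    }

    static int mydev_post_save(void *opaque)
    {
        MyDevSketch *s = opaque;
        mydev_resume_live_state(s);   /* undo pre_save on the still-running source */
        return 0;
    }

    static const VMStateDescription vmstate_mydev_sketch = {
        .name = "mydev",
        .version_id = 1,
        .minimum_version_id = 1,
        .pre_save = mydev_pre_save,
        .post_save = mydev_post_save,   /* the hook added by this patch */
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(counter, MyDevSketch),
            VMSTATE_END_OF_LIST()
        },
    };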
We can perform this with fewer operations.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add 4 attributes that control the EL1 enable bits, as we may not
always want to turn on pointer authentication with -cpu max.
However, by default they are enabled.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the main crypto routine, an implementation of QARMA.
This matches, as much as possible, the ARM pseudocode.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-28-richard.henderson@linaro.org
[PMM: fixed minor checkpatch nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is not really functional yet, because the crypto is not yet
implemented. This, however, follows the AddPAC pseudo function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is not really functional yet, because the crypto is not yet
implemented. This, however, follows the Auth pseudo function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Stripping out the authentication data does not require any crypto,
it merely requires the virtual address parameters.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
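A self-contained sketch of the idea (QEMU derives the PAC bit positions from
the translation regime parameters; here a 48-bit VA and no tag byte are
assumed, so the numbers are illustrative):

    #include <stdint.h>

    /* The PAC lives in the otherwise-unused upper pointer bits, so stripping
     * it just rewrites those bits with the canonical extension of the top
     * address bit -- no crypto needed. */
    static uint64_t strip_pac_sketch(uint64_t ptr)
    {
        const unsigned va_bits = 48;                   /* assumed VA size */
        uint64_t mask = ~(uint64_t)0 << va_bits;       /* bits [63:48]    */
        uint64_t select = (ptr >> (va_bits - 1)) & 1;  /* top address bit */

        return select ? (ptr | mask) : (ptr & ~mask);
    }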
The arm_regime_tbi{0,1} functions are replaceable with the new function
by giving the lowest and highest address.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use TBID in aa64_va_parameters depending on the data parameter.
This automatically updates all existing users of the function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will want to check TBI for I and D simultaneously.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need to reuse this from helper-a64.c. Provide a stub
definition for CONFIG_USER_ONLY. This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will shortly want to talk about TBI as it relates to data.
Passing around a pair of variables is less convenient than a
single variable.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out functions to extract the virtual address parameters.
Let the functions choose T0 or T1 address space half, if present.
Extract (most of) the control bits that vary between EL or Tx.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-19-richard.henderson@linaro.org
[PMM: fixed minor checkpatch comment nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
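A simplified, self-contained illustration of the T0/T1 selection (the struct
is hypothetical; the TCR_EL1 bit positions are per the architecture, and the
many other control bits the real function extracts are omitted):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        unsigned tsz;    /* region size field for the selected half */
        bool     tbi;    /* top-byte-ignore for the selected half   */
        bool     select; /* 0 = TTBR0 half, 1 = TTBR1 half          */
    } VAParamsSketch;

    static VAParamsSketch va_parameters_sketch(uint64_t va, uint64_t tcr)
    {
        VAParamsSketch p;

        p.select = (va >> 55) & 1;            /* bit 55 picks the half      */
        if (!p.select) {
            p.tsz = tcr & 0x3f;               /* TCR_EL1.T0SZ, bits [5:0]   */
            p.tbi = (tcr >> 37) & 1;          /* TCR_EL1.TBI0, bit 37       */
        } else {
            p.tsz = (tcr >> 16) & 0x3f;       /* TCR_EL1.T1SZ, bits [21:16] */
            p.tbi = (tcr >> 38) & 1;          /* TCR_EL1.TBI1, bit 38       */
        }
        return p;
    }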
While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The pattern
ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
is computing the full ARMMMUIdx, stripping off the ARM bits,
and then putting them back.
Avoid the extra two steps with the appropriate helper function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is, or will shortly become, too big to inline.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Not that there are any stores involved, but why argue with ARM's
naming convention?
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-15-richard.henderson@linaro.org
[fixed trivial comment nit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will enable PAuth decode in a subsequent patch.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is only used by AArch64. Code movement only.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now properly signals unallocated for REV64 with SF=0.
Allows for the opcode2 field to be decoded shortly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The cryptographic internals are stubbed out for now,
but the enable and trap bits are checked.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
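To illustrate the enable-bit half of that check, a small sketch (the helper is
hypothetical; the bit positions are the Armv8.3 SCTLR_ELx enable bits for the
four keys, and the trap bits are not shown):

    #include <stdbool.h>
    #include <stdint.h>

    static bool pauth_key_enabled_sketch(uint64_t sctlr, char key)
    {
        switch (key) {
        case 'A': return (sctlr >> 31) & 1;   /* SCTLR_ELx.EnIA */
        case 'B': return (sctlr >> 30) & 1;   /* SCTLR_ELx.EnIB */
        case 'a': return (sctlr >> 27) & 1;   /* SCTLR_ELx.EnDA */
        case 'b': return (sctlr >> 13) & 1;   /* SCTLR_ELx.EnDB */
        default:  return false;
        }
    }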
This path uses cpu_loop_exit_restore to unwind the current processor state.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are 5 bits of state that could be added, but to save
space within tbflags, add only a single enable bit.
Helpers will determine the rest of the state at runtime.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Post v8.4 bits taken from SysReg_v85_xml-00bet8.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add storage space for the 5 encryption keys.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-2-richard.henderson@linaro.org
[PMM: use 0xf rather than -1 in FIELD_DP64() expressions to
avoid clang warnings about implicit truncation from int to
bitfield changing the value]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The PHY behind the MAC of an Aspeed SoC can be controlled using two
different MDC/MDIO interfaces. The same registers, PHYCR (MAC60) and
PHYDATA (MAC64), are involved, but they have a different layout.
BIT31 of the Feature Register (MAC40) controls which MDC/MDIO
interface is active.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20190111125759.31577-1-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In U-boot, we switch from S-SVC -> Mon -> Hyp mode when we want to
enter Hyp mode. The change into Hyp mode is done by doing an
exception return from Mon. This doesn't work with current QEMU.
The problem is that in bad_mode_switch() we refuse to allow
the change of mode.
Note that bad_mode_switch() is used to do validation for two situations:
(1) changes to mode by instructions writing to CPSR.M
(i.e. not exception take/return) -- this corresponds to the
Armv8 Arm ARM pseudocode AArch32.WriteModeByInstr
(2) changes to mode by exception return
Attempting to enter or leave Hyp mode via case (1) is forbidden in
v8 and UNPREDICTABLE in v7, and QEMU is correct to disallow it
there. However, we're already doing that check at the top of the
bad_mode_switch() function, so if that passes then we should allow
the case (2) exception return mode changes to switch into Hyp mode.
We want to test whether we're trying to return to the nonexistent
"secure Hyp" mode, so we need to look at arm_is_secure_below_el3()
rather than arm_is_secure(), since the latter is always true if
we're in Mon (EL3).
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190109152430.32359-1-agraf@suse.de
[PMM: rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
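A simplified sketch of the case (2) decision described above (not the literal
QEMU code):

    #include <stdbool.h>

    /* An exception return to Hyp is rejected only when EL2 does not exist or
     * when it would land in the nonexistent "Secure Hyp" mode -- hence the
     * use of arm_is_secure_below_el3() rather than arm_is_secure(), which is
     * always true while executing in Mon (EL3). */
    static bool hyp_return_is_bad_sketch(bool have_el2, bool secure_below_el3)
    {
        return !have_el2 || secure_below_el3;
    }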
Let's report that IO-coherent access is supported for translation
table walks, descriptor fetches and queues by setting the COHACC
override flag. Without that, we observe wrong command opcodes.
The DT description also advertises DMA coherency.
Fixes: a703b4f6c1 ("hw/arm/virt-acpi-build: Add smmuv3 node in IORT table")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20190107101041.765-1-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the device is disabled, the internal circuitry keeps the data
register loaded and doesn't update it.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190104182057.8778-1-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It was assumed that mesa provides the necessary X11 includes,
but this is not always the case, as mesa can be configured without X11 support.
Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190116113751.17177-1-alex.kanavin@gmail.com
[ kraxel: codestyle fix (long line) ]
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
When the size and format of the display surface stay the same, we can just
tag the guest display as dirty and be done with it.
There is no need to resize the vnc server display or to touch the
vnc client dirty bits. On the next refresh cycle
vnc_refresh_server_surface() will check for actual display content
changes and update the client dirty bits as needed.
The desktop resize and framebuffer format notifications to the vnc
client will be skipped too.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20190116101049.8929-1-kraxel@redhat.com
Modern desktop environments can render icons at very large sizes,
especially with high DPI screens. Providing a 32x32 pixel bitmap is
nowhere near sufficient anymore.
When displayed in GNOME shell the QEMU icon looks awful, having been
scaled up to at least 4x its base size. This is compounded by the fact
that the BMP file doesn't do transparency, so while we've removed white
pixels, we still have anti-aliased nearly-white pixels which make the
logo look appalling on black backgrounds.
Loading a high resolution PNG icon addresses both problems, but requires
use of the extra SDL2_image library.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20190110120047.25369-4-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
The icon associated with a GtkWindow is just a hint to window managers
and not all of them will honour it. Some will instead want to show the
icon listed by the .desktop file. The desktop file is located based on
the application ID, which is set using g_set_prgname. QEMU has not
historically provided a desktop file or set its app ID, so it got a
broken icon in GNOME shell, which is now fixed.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20190110120047.25369-3-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
QEMU currently installs logos to $prefix/share/qemu/, which means no GUI
toolkit or applications can find them by default.
The accepted standards for desktop applications declare that application
logos / icons should be installed under $prefix/share/icons, so use this
directory location.
Pre-rendered icons are provided at the standard sizes expected for GUI
applications, along with the scalable SVG, to ensure maximum portability.
The PNGs are rendered from the SVG using inkscape; however, this is not
wired up into the default make rules to avoid requiring inkscape as a
mandatory tool in build systems / developer workstations.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20190110120047.25369-2-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Fix Commit a5d2f6f877 (contrib/rdmacm-mux: Add implementation
of RDMA User MAD multiplexer).
The above commit introduces a new contrib target, adding a global dependency
on the libumad library when the pvrdma configuration option is enabled.
Clang forbids it:
clang-6.0: error: -libumad: 'linker' input unused
[-Werror,-Wunused-command-line-argument]
Fix by limiting the scope to the rdmacm-mux target itself.
Reported-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20190118124614.24548-4-marcel.apfelbaum@gmail.com>
Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Do not initialize structs with {0}, since some
clang versions do not support it.
Use the {} construct instead.
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20190118124614.24548-3-marcel.apfelbaum@gmail.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
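A small example of the kind of construct involved (illustrative only; whether
a given clang version warns depends on its -Wmissing-braces behaviour):

    struct inner { int a, b; };
    struct outer { struct inner in; int c; };

    struct outer o1 = {0};   /* some clang versions: "suggest braces around
                                initialization of subobject" */
    struct outer o2 = {};    /* empty initializer (GNU extension, C23):
                                zero-initializes everything, no warning */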
The flag is not recognized by some clang versions.
Add proper constraints in code instead.
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20190118124614.24548-2-marcel.apfelbaum@gmail.com>
Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
In rdma_rm_get_backend_gid_index(), 'sgid_idx' is used
to index the array 'dev_res->port.gid_tbl', whose size is
MAX_PORT_GIDS. Currently, 'sgid_idx' may be MAX_PORT_GIDS,
causing an off-by-one issue.
Spotted by Coverity: CID 1398594
Signed-off-by: Li Qiang <liq3ea@163.com>
Message-Id: <20190103131251.49271-1-liq3ea@163.com>
Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
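The corresponding bounds check, as a self-contained sketch (the constant's
value is made up; only the use of '>=' to also reject an index equal to the
array size reflects the fix):

    #include <errno.h>
    #include <stdint.h>

    #define MAX_PORT_GIDS 255   /* hypothetical value for the sketch */

    static int check_sgid_idx_sketch(uint32_t sgid_idx)
    {
        if (sgid_idx >= MAX_PORT_GIDS) {   /* index must be strictly below the size */
            return -EINVAL;
        }
        return (int)sgid_idx;
    }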
To cover the case where fini() is called even when init() fails, make
sure objects are not NULL before calling non-NULL-safe destructors.
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190116151538.14088-1-yuval.shaia@oracle.com>
Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
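As an illustration of the guarded-teardown pattern (the struct is
hypothetical; the libibverbs destructors shown are examples of calls that are
typically not NULL-safe):

    #include <infiniband/verbs.h>
    #include <stddef.h>

    typedef struct {
        struct ibv_comp_channel *comp_ch;
        struct ibv_pd *pd;
    } BackendSketch;

    static void backend_fini_sketch(BackendSketch *be)
    {
        /* fini() may run even after a failed init(), so only destroy what
         * was actually created */
        if (be->comp_ch) {
            ibv_destroy_comp_channel(be->comp_ch);
            be->comp_ch = NULL;
        }
        if (be->pd) {
            ibv_dealloc_pd(be->pd);
            be->pd = NULL;
        }
    }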
The function handles errors internally; callers have nothing to do with
the return value.
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Message-Id: <20190109202140.4051-1-yuval.shaia@oracle.com>
Reviewed-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>