The register test walks all the registers to verify they are initially
0 where appropriate. However, if the MAC address is set in the register
space, those registers should not be checked against 0.
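A minimal sketch of the adjusted walk, with hypothetical helper names
(is_mac_addr_reg() and read_reg() stand in for the actual qtest
accessors):

    /* Walk the register space, skipping the MAC address registers,
     * which the board may preset to a non-zero value. */
    for (int i = 0; i < num_regs; i++) {
        if (is_mac_addr_reg(i)) {   /* hypothetical predicate */
            continue;
        }
        g_assert_cmpuint(read_reg(i), ==, 0);
    }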
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Signed-off-by: Patrick Venture <venture@google.com>
Message-Id: <20220906163138.2831353-1-venture@google.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The sed processing of build/config-host.mak no longer seems to be
needed, and the 32-bit build has no such processing either. Drop it.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Message-Id: <20220908132817.1831008-5-bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Update NetBSD to 9.3
Signed-off-by: Brad Smith <brad@comstyle.com>
Message-Id: <YxacoSbT1cZR4SKr@humpty.home.comstyle.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220901110414.2892954-1-marcandre.lureau@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Support the fw_cfg DMA function for the LoongArch virt machine.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Acked-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220908094623.73051-3-yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Remove the VGA device from the LoongArch machine init; we will
support other display devices in the future.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Acked-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220908094623.73051-2-yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
In CID 1432593, Coverity complains that the result of qdict_crumple()
might leak if it is not a dictionary. This is not a practical concern
since the test would fail immediately with a NULL pointer dereference
in qdict_size().
However, it is not nice to depend on qdict_size() crashing, so add an
explicit assertion that the crumpled object was indeed a dictionary.
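A sketch of the hardened check; qdict_crumple(), qobject_to() and
error_abort are the real QAPI/GLib facilities, while the surrounding
variables are illustrative:

    QObject *obj = qdict_crumple(src, &error_abort);
    QDict *dst = qobject_to(QDict, obj);

    /* Fail with a clear assertion instead of crashing inside
     * qdict_size() if crumpling did not produce a dictionary. */
    g_assert(dst);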
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The test-visitor-serialization list tests use an "if" to pick either the
first element of the list or the next one. This was done presumably to mimic the
code that creates the list, which has to fill in either the head pointer
or the next pointer of the last element. However, the code in the insert
phase is a pretty standard singly-linked list insertion, while the one
in the visit phase looks weird and even looks at the first item twice:
this is confusing because the test puts in 32 items and finishes with
an assertion that i == 33.
So, move the "else" step into a separate switch statement, and change
the do...while loop to a while loop, because cur_head has already been
initialized beforehand.
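The resulting shape, as an illustrative sketch rather than the exact
test code:

    i = 1;   /* cur_head already points at the first element */
    while (cur_head) {
        /* ... visit cur_head, checking its payload against i ... */
        cur_head = cur_head->next;   /* plain singly-linked traversal */
        i++;
    }
    g_assert_cmpint(i, ==, 33);      /* 32 items visited */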
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
test_bit uses header->type as an offset; if the file incorrectly specifies a
type greater than 127, smbios_entry_add will read and write garbage.
To fix this, just pass the smbios data through, assuming the user knows what
to do. Reported by Coverity as CID 1487255.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Overwriting "path" in the second call to g_strdup_printf() causes a memory leak,
even if the variable itself is g_autofree.
Reported by Coverity as CID 1460454.
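In miniature (the format strings are illustrative; the g_autofree
cleanup only frees whatever value the variable holds when it goes out
of scope):

    g_autofree char *path = g_strdup_printf("%s/first", dir);
    /* ... */
    path = g_strdup_printf("%s/second", dir);   /* first string leaks */

Using a second g_autofree variable for the second string (or
g_free()-ing the old value before reassigning) releases both
allocations.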
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reported by Coverity as CID 1490142. Since the size is constant and the
lifetime is the same as the StatsDescriptors struct, embed the struct
directly instead of using a separate allocation.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Many instructions which load/store 128-bit values are supposed to
raise #GP when the memory operand isn't 16-byte aligned. This includes:
- Instructions explicitly requiring memory alignment (Exceptions Type 1
in the "AVX and SSE Instruction Exception Specification" section of
the SDM)
- Legacy SSE instructions that load/store 128-bit values (Exceptions
Types 2 and 4).
This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access
hooks that simulate a #GP exception in qemu-user and qemu-system,
respectively.
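A sketch of the translate-time side; MO_ALIGN_16 and the DisasContext
fields are real, but the call site and the align flag are abbreviated
from the description:

    /* Request 16-byte alignment on the memory op; an unaligned access
     * then reaches the new hooks, which raise #GP rather than the
     * generic unaligned-access behaviour. */
    tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, s->mem_index,
                        MO_LEUQ | (align ? MO_ALIGN_16 : 0));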
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The following scenario can happen if QEMU sets more RESET flags while
the KVM_RESET_DIRTY_RINGS ioctl is ongoing on another host CPU:
    CPU0                     CPU1                     CPU2
    ------------------------ ------------------------ ------------------------
                                                      fill gfn0
                                                      store-rel flags for gfn0
                                                      fill gfn1
                                                      store-rel flags for gfn1
    load-acq flags for gfn0
    set RESET for gfn0
    load-acq flags for gfn1
    set RESET for gfn1
    do ioctl! ----------->
                             ioctl(RESET_RINGS)
                                                      fill gfn2
                                                      store-rel flags for gfn2
    load-acq flags for gfn2
    set RESET for gfn2
                             process gfn0
                             process gfn1
                             process gfn2
    do ioctl!
    etc.
The three load-acquires in CPU0 synchronize with the three
store-releases in CPU2, but CPU0 and CPU1 are only synchronized up to
gfn1, and CPU1 may miss gfn2's fields other than flags.
The kernel must be able to cope with invalid values of the fields, and
userspace *will* invoke the ioctl once more. However, once the RESET flag
is cleared on gfn2, it is lost forever, therefore in the above scenario
CPU1 must read the correct value of gfn2's fields.
Therefore RESET must be set with a store-release, that will synchronize
with KVM's load-acquire in CPU1.
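A sketch of that store, using QEMU's real qatomic_store_release()
helper and the KVM dirty-ring UAPI flag:

    /* Publish RESET with release semantics so that the kernel's
     * load-acquire of flags also observes the entry's other fields. */
    qatomic_store_release(&gfn->flags, KVM_DIRTY_GFN_F_RESET);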
Cc: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'pull-hmp-20220915a' of https://gitlab.com/dagrh/qemu into staging
HMP pull 2022-09-15
A set of 3 small additions/fixes.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* tag 'pull-hmp-20220915a' of https://gitlab.com/dagrh/qemu:
hmp: Fix ordering of text
monitor/hmp: print trace as option in help for log command
monitor: Support specified vCPU registers
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Fix the ordering of the help text so it's always after the commands
being defined. A few had got out of order. Keep 'info' at the end.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The following is printed as part of the help information on the
qemu-system-x86_64 command line when CONFIG_TRACE_LOG is enabled:
----------------------------
$ qemu-system-x86_64 -d help
... ...
trace:PATTERN enable trace events
Use "-d trace:help" to get a list of trace events.
----------------------------
However, the "trace:PATTERN" option is only printed by
"qemu-system-x86_64 -d help"; it is missing from the HMP "help log"
command.
Fixes: c84ea00dc2 ("log: add "-d trace:PATTERN"")
Cc: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Message-Id: <20220831213943.8155-1-dongli.zhang@oracle.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Originally we had to get all the vCPU registers and parse out the
specified one. To improve the performance of this usage, allow the
user to specify a vCPU id when querying registers.
Running a VM with 16 vCPUs and using a bcc tool to track the latency
of 'hmp_info_registers':
'info registers -a' takes about 3ms;
'info registers 12' takes about 150us.
Cc: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Message-Id: <20220802073720.1236988-2-pizhenwei@bytedance.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Currently armv7m_load_kernel() takes the size of the block of memory
where it should load the initial guest image, but assumes that it
should always load it at address 0. This happens to be true of all
our M-profile boards at the moment, but it isn't guaranteed to always
be so: M-profile CPUs can be configured (via init-svtor and
init-nsvtor, which match equivalent hardware configuration signals)
to have the initial vector table at any address, not just zero. (For
instance the Teeny board has the boot ROM at address 0x0200_0000.)
Add a base address argument to armv7m_load_kernel(), so that
callers now pass in both base address and size. All the current
callers pass 0, so this is not a behaviour change.
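A sketch of an updated call site (the argument order shown here, base
before size, is illustrative of the description above):

    /* Existing boards keep their behaviour by passing base address 0. */
    armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename,
                       0, mem_size);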
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220823160417.3858216-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Arm system emulation targets always have TARGET_BIG_ENDIAN clear, so
there is no need to have handling in armv7m_load_kernel() for the
case when it is defined. Remove the unnecessary code.
Side notes:
* our M-profile implementation is always little-endian (that is, it
makes the IMPDEF choice that the read-only AIRCR.ENDIANNESS is 0)
* if we did want to handle big-endian ELF files here we should do it
the way that hw/arm/boot.c:arm_load_elf() does, by looking at the
ELF header to see what endianness the file itself is
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220823160417.3858216-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Update the ID registers for TCG's '-cpu max' to report a FEAT_PMUv3p5
compliant PMU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-11-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With FEAT_PMUv3p5, the event counters are now 64 bit, rather than 32
bit. (Previously, only the cycle counter could be 64 bit, and other
event counters were always 32 bits). For any given event counter,
whether the overflow event is noted for overflow from bit 31 or from
bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and
MDCR_EL2.HPMN.
Implement the 64-bit event counter handling. We choose to make our
counters always 64 bits, and mask out the top 32 bits on read or
write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.
(Note that the changes to pmevcntr_op_start() and
pmevcntr_op_finish() bring their logic closer into line with that of
pmccntr_op_start() and pmccntr_op_finish(), which already had to cope
with the overflow being either at 32 or 64 bits.)
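A sketch of the read path (env->cp15.c14_pmevcntr is the real backing
array; the exact feature predicate name is assumed):

    uint64_t ret = env->cp15.c14_pmevcntr[counter];
    if (!cpu_isar_feature(any_pmuv3p5, env_archcpu(env))) {
        ret &= 0xffffffff;   /* event counters are 32-bit pre-PMUv3p5 */
    }
    return ret;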
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
FEAT_PMUv3p5 introduces new bits which disable the cycle
counter from counting:
* MDCR_EL2.HCCD disables the counter when in EL2
* MDCR_EL3.SCCD disables the counter when Secure
Add the code to support these bits.
(Note that there is a third documented counter-disable
bit, MDCR_EL3.MCCD, which disables the counter when in
EL3. This is not present until FEAT_PMUv3p7, so is
out of scope for now.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Our feature test functions that check the PMU version are named
isar_feature_{aa32,aa64,any}_pmu_8_{1,4}. This doesn't match the
current Arm ARM official feature names, which are FEAT_PMUv3p1 and
FEAT_PMUv3p4. Rename these functions to _pmuv3p1 and _pmuv3p4.
This commit was created with:
sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In pmccntr_op_finish() and pmevcntr_op_finish() we calculate the next
point at which we will get an overflow and need to fire the PMU
interrupt or set the overflow flag. We do this by calculating the
number of nanoseconds to the overflow event and then adding it to
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL). However, we don't check
whether that signed addition overflows, which can happen if the next
PMU interrupt would happen massively far in the future (250 years or
more).
Since QEMU assumes that "when the QEMU_CLOCK_VIRTUAL rolls over" is
"never", the sensible behaviour in this situation is simply to not
try to set the timer if it would be beyond that point. Detect the
overflow, and skip setting the timer in that case.
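A sketch of the check, using the real sadd64_overflow() helper from
qemu/host-utils.h (the delta variable is illustrative):

    int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    int64_t next;

    if (!sadd64_overflow(now, delta_ns, &next)) {
        timer_mod_anticipate_ns(cpu->pmu_timer, next);
    }
    /* else: the overflow is beyond QEMU_CLOCK_VIRTUAL's range,
     * i.e. "never"; leave the timer unset. */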
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The logic in pmu_counter_enabled() for handling the 'prohibit event
counting' bits MDCR_EL2.HPMD and MDCR_EL3.SPME is written in a way
that assumes that EL2 is never Secure. This used to be true, but the
architecture now permits Secure EL2, and QEMU can emulate this.
Refactor the prohibit logic so that we effectively OR together
the various prohibit bits when they apply, rather than trying to
construct an if-else ladder where any particular state of the CPU
ends up in exactly one branch of the ladder.
This fixes the Secure EL2 case and also is a better structure for
adding the FEAT_PMUv3p5 bits MDCR_EL2.HCCD and MDCR_EL3.SCCD.
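The restructured logic, sketched with the real MDCR bit definitions
but with the surrounding conditions abbreviated:

    bool prohibited = false;

    if (counter_is_secure) {
        prohibited = prohibited || !(env->cp15.mdcr_el3 & MDCR_SPME);
    }
    if (el == 2) {
        prohibited = prohibited || (env->cp15.mdcr_el2 & MDCR_HPMD);
    }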
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architecture requires that if PMCR.LC is set (for a 64-bit cycle
counter) then PMCR.D (which enables the clock divider so the counter
ticks every 64 cycles rather than every cycle) should be ignored. We
were always honouring PMCR.D; fix the bug so we correctly ignore it
in this situation.
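The fix in sketch form (PMCRD and PMCRLC are the real bit definitions
in target/arm):

    /* PMCR.D divides the cycle count by 64, but must be ignored
     * when PMCR.LC selects the 64-bit cycle counter. */
    if ((env->cp15.c9_pmcr & PMCRD) && !(env->cp15.c9_pmcr & PMCRLC)) {
        cycles /= 64;
    }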
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The PMU cycle and event counter infrastructure design requires that
operations on the PMU register fields are wrapped in pmu_op_start()
and pmu_op_finish() calls (or their more specific pmmcntr and
pmevcntr equivalents). This includes any changes to registers which
affect whether the counter should be enabled or disabled, but we
forgot to do this.
The effect of this bug is that in sequences like:
* disable the cycle counter (PMCCNTR) using the PMCNTEN register
* write a value such as 0xfffff000 to the PMCCNTR
* restart the counter by writing to PMCNTEN
the value written to the cycle counter is corrupted, and it starts
counting from the wrong place. (Essentially, we fail to record that
the QEMU_CLOCK_VIRTUAL timestamp when the counter should be considered
to have started counting is the point when PMCNTEN is written to enable
the counter.)
Add the necessary bracketing calls, so that updates to the various
registers which affect whether the PMU is counting are handled
correctly.
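A sketch of the bracketing on one such write; pmu_op_start(),
pmu_op_finish() and pmcntenset_write() are the real names, with the
body abbreviated:

    static void pmcntenset_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                 uint64_t value)
    {
        pmu_op_start(env);    /* fold elapsed cycles into the counters */
        env->cp15.c9_pmcnten |= value & pmu_counter_mask(env);
        pmu_op_finish(env);   /* re-baseline from the current time */
    }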
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
pmu_counter_mask() accidentally returns a value with bits [63:32]
set, because the expression it returns is evaluated as a signed value
that gets sign-extended to 64 bits. Force the whole expression to be
evaluated with 64-bit arithmetic with ULL suffixes.
The main effect of this bug was that a guest could write to the bits
in the high half of registers like PMCNTENSET_EL0 that are supposed
to be RES0.
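The bug in miniature (sketched from the description, not copied from
the source):

    /* Before: (1 << 31) is a negative int, so converting the result
     * to the uint64_t return type sign-extends bits [63:32] to 1. */
    return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);

    /* After: force 64-bit unsigned arithmetic throughout. */
    return (1ULL << 31) | ((1ULL << pmu_num_counters(env)) - 1);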
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When the cycle counter overflows, we are intended to set bit 31 in PMOVSR
to indicate this. However a missing ULL suffix means that we end up
setting all of bits 63-31. Fix the bug.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fix a missing space before a comment terminator.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architectural feature FEAT_ETS (Enhanced Translation
Synchronization) is a set of tightened guarantees about memory
ordering involving translation table walks:
* if memory access RW1 is ordered-before memory access RW2 then it
is also ordered-before any translation table walk generated by RW2
that generates a translation fault, address size fault or access
fault
* TLB maintenance on non-exec-permission translations is guaranteed
complete after a DSB (i.e. it does not need the context
synchronization event that you have to have if you don’t have
FEAT_ETS)
For QEMU’s implementation we don’t reorder translation table walk
accesses, and we guarantee to finish the TLB maintenance as soon as
the TLB op is done (the tlb_flush functions will complete by the end
of the TB, and TLB ops always end the TB because they’re sysreg
writes).
So we’re already compliant and all we need to do is say so in the ID
registers for the 'max' CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement
it. We don't have any CPUs yet with features that need to be
advertised here, but plumbing in the ID register gives it the right
name when debugging and will help in future when we do add a CPU that
has non-zero ID_DFR1 fields.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined.
Implement this; we want to be able to use it to report to
the guest that we implement FEAT_ETS.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The code that reads the AArch32 ID registers from KVM in
kvm_arm_get_host_cpu_features() does so almost but not quite in
encoding order. Move the read of ID_PFR2 down so it's really in
encoding order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In the AArch32 ID register scheme, coprocessor registers with
encoding cp15, 0, c0, c{0-7}, {0-7} are all in the space covered by
what in v6 and v7 was called the "CPUID scheme", and are supposed to
RAZ if they're not allocated to a specific ID register. For our
pre-v8 CPUs we get this right, because the regdefs in
id_pre_v8_midr_cp_reginfo[] cover these RAZ requirements. However
for v8 we failed to put in the necessary patterns to cover this, so
we end up UNDEFing on everything we didn't have an ID register for.
This is a problem because in Armv8 some encodings in 0, c0, c3, {0-7}
are now being used for new ID registers, and guests might thus start
trying to read them. (We already have one of these: ID_PFR2.)
For v8 CPUs, we already have regdefs for 0, c0, c{0-2}, {0-7} (that
is, the space is completely allocated with no reserved spaces). Add
entries to v8_idregs[] covering 0, c0, c3, {0-7}:
* c3, {0-2} is the reserved AArch32 space corresponding to the
AArch64 MVFR[012]_EL1
* c3, {3,5,6,7} are reserved RAZ for both AArch32 and AArch64
(in fact some of these are given defined meanings in Armv8.6,
but we don't implement them yet)
* c3, 4 is ID_PFR2 (already defined)
We then programmatically add RAZ patterns for AArch32 for
0, c0, c{4..15}, {0-7}:
* c4-c7 are unused, and not shared with AArch64 (these
are the encodings corresponding to where the AArch64
specific ID registers live in the system register space)
* c8-c15 weren't required to RAZ in v6/v7, but v8 extends
the AArch32 reserved-should-RAZ space to cover these;
the equivalent area of the AArch64 sysreg space is not
defined as must-RAZ
Note that the architecture allows some registers in this space
to return an UNKNOWN value; we always return 0.
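One such wildcard entry, sketched with the real ARMCPRegInfo fields
(the name and exact encoding shown are illustrative):

    { .name = "RES_0_C0_C3_3", .state = ARM_CP_STATE_AA32,
      .cp = 15, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
      .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },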
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In more recent Raspbian OS Linux kernels, the fb driver gives up
immediately if RPI_FIRMWARE_FRAMEBUFFER_GET_NUM_DISPLAYS fails or no
displays are reported.
This change simply always reports one display. It makes bcm2835_fb work
again with these more recent kernels.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Enrik Berkhan <Enrik.Berkhan@inka.de>
Message-Id: <20220812143519.59134-1-Enrik.Berkhan@inka.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add the Cortex-A35 core and enable it for the virt board.
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Joe Komlodi <komlodi@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220819002015.1663247-1-wuhaotsh@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The riscv target incorrectly enabled semihosting always, whether the
user asked for it or not. Call semihosting_enabled() passing the
correct value to the is_userspace argument, which fixes this and also
handles the userspace=on argument. Because we do this at translate
time, we no longer need to check the privilege level in
riscv_cpu_do_interrupt().
Note that this is a behaviour change: we used to default to
semihosting being enabled, and now the user must pass
"-semihosting-config enable=on" if they want it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220822141230.3658237-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
xtensa semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
nios2 semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
MIPS semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of never permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled(), instead of manually checking and
always forbidding semihosting if the guest is in userspace.
(Note that target/m68k doesn't support semihosting at all
in the linux-user build.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of never permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled(), instead of manually checking and
always forbidding semihosting if the guest is in userspace and this
isn't the linux-user build.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>