target/riscv: Use env_archcpu for better performance

RISCV_CPU(cs) uses a checked cast.  When QOM cast debugging is enabled,
this adds about 5% total overhead when emulating RV64 on an x86-64 host.
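
For context, env_archcpu() recovers the RISCVCPU that embeds the
CPURISCVState using constant pointer arithmetic, whereas RISCV_CPU() is
a QOM cast which, with cast debugging enabled, performs a runtime type
check (object_dynamic_cast_assert) on every call.  A minimal sketch of
the cheap path, using simplified stand-in struct definitions rather
than the real QEMU headers:

  /* Simplified stand-ins, not the actual QEMU definitions. */
  #include <stddef.h>

  #define container_of(ptr, type, member) \
      ((type *) ((char *) (ptr) - offsetof(type, member)))

  typedef struct CPURISCVState {
      int placeholder;          /* architectural registers, CSRs, ... */
  } CPURISCVState;

  typedef struct RISCVCPU {
      /* QOM parent object and other fields precede env in the real struct */
      CPURISCVState env;        /* CPU state embedded in the CPU object */
  } RISCVCPU;

  /* Cheap path: recover the enclosing RISCVCPU with constant pointer
   * arithmetic; no runtime type check is performed. */
  static inline RISCVCPU *env_archcpu(CPURISCVState *env)
  {
      return container_of(env, RISCVCPU, env);
  }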

The test uses a RISC-V guest with 16 vCPUs, 16 GB of guest RAM, and a
virtio-blk disk.  The guest has a copy of the qemu source tree, and the
test compiles that tree with 'make clean; time make -j16'.

Before making this change, the compile step took 449 & 447 seconds over
two consecutive runs.

After making this change: 428 & 421 seconds.

The saving is just over 5% (896 seconds in total before, 849 after).

Thanks: Paolo Bonzini
Thanks: Philippe Mathieu-Daudé
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20231009124859.3373696-2-rjones@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
parent 9b9741c38f
commit 614c9466a2

@@ -65,8 +65,7 @@ int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
 void cpu_get_tb_cpu_state(CPURISCVState *env, vaddr *pc,
                           uint64_t *cs_base, uint32_t *pflags)
 {
-    CPUState *cs = env_cpu(env);
-    RISCVCPU *cpu = RISCV_CPU(cs);
+    RISCVCPU *cpu = env_archcpu(env);
     RISCVExtStatus fs, vs;
     uint32_t flags = 0;