In preparation for more efficient setting of this field.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Use the bit number for SR constants instead of a bit mask. This makes
it possible to also use the constants for shifts.
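A minimal sketch of the idea (the names and bit positions below are
illustrative, not the actual target-sh4/cpu.h definitions): defining the
SR flags by bit number lets the same constant build a mask and serve as a
shift count.

    #include <stdint.h>

    /* Illustrative bit positions, not the real definitions. */
    #define SR_MD 30   /* processor mode  */
    #define SR_BL 28   /* exception block */

    static inline int sr_md(uint32_t sr)
    {
        /* The bit number works directly as a shift count... */
        return (sr >> SR_MD) & 1;
    }

    static inline uint32_t sr_md_mask(void)
    {
        /* ...and a mask is recovered with a single shift when needed. */
        return 1u << SR_MD;
    }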
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Note that while such functions may exist both for *-user and softmmu,
only *-user uses the CPUState hook, while softmmu reuses the prototype
for calling it directly.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Change breakpoint_invalidate() argument to CPUState alongside.
Since all targets now assign a softmmu-only field, we can drop helpers
cpu_class_set_{do_unassigned_access,vmsd}() and device_class_set_vmsd().
Prepares for changing cpu_memory_rw_debug() argument to CPUState.
Acked-by: Max Filippov <jcmvbkbc@gmail.com> (for xtensa)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Since commit 878096eeb2 (cpu: Turn
cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no
longer needed.
Add documentation and make the functions available through qemu/log.h
outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h
was not yet possible due to convoluted include paths, so that some
devices grow an implicit and unneeded dependency on qom/cpu.h for now.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Reviewed-by: Richard Henderson <rth@twiddle.net>
[AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow overriding the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific). Replace it with a finger-friendly,
standards-conformant hwaddr.
Outstanding patchsets can be fixed up with the command
git rebase -i --exec 'find -name "*.[ch]"
    | xargs sed -i "s/target_phys_addr_t/hwaddr/g"' origin
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Scripted conversion:
sed -i "s/CPUState/CPUSH4State/g" target-sh4/*.[hc]
sed -i "s/#define CPUSH4State/#define CPUState/" target-sh4/cpu.h
Signed-off-by: Andreas Färber <afaerber@suse.de>
Acked-by: Anthony Liguori <aliguori@us.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Dong Xu Wang <wdongxu@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Parameter is_softmmu (and its evil mutant twin brother is_softmuu)
is not used in the cpu_*_handle_mmu_fault() functions, so remove them
and adjust the callers.
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
All targets except SH4 have the same cpu_halted() routine, and it has
only one caller. It is therefore a good candidate for inlining.
The difference is the handling of intr_at_halt, which is necessary
to ignore SR.BL when sleeping. Move intr_at_halt handling out of it by
setting this variable while executing the sleep instruction and
clearing it when the CPU has been woken up by an interrupt, whatever the
state of SR.BL. Also rename this variable to in_sleep.
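A rough sketch of the intended flow, with made-up type and helper names
rather than the actual target-sh4 code:

    #include <stdbool.h>

    /* Hypothetical, pared-down state; the real CPUSH4State has far more. */
    typedef struct {
        bool halted;
        bool in_sleep;   /* set by the sleep instruction, cleared on wake-up */
    } ToyCPU;

    static void do_sleep(ToyCPU *cpu)
    {
        /* The sleep instruction itself records that we are sleeping... */
        cpu->in_sleep = true;
        cpu->halted = true;
    }

    static void on_interrupt(ToyCPU *cpu)
    {
        /* ...and an interrupt wakes the CPU regardless of SR.BL while asleep. */
        if (cpu->halted && cpu->in_sleep) {
            cpu->halted = false;
            cpu->in_sleep = false;
        }
    }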
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Update the PTEH register to contain the VPN at which an MMU
exception occurred, as specified by the SH4 reference.
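A hedged sketch of the update (the PTEH layout, VPN in bits 31:10 and ASID
in bits 7:0, follows the SH-4 manual; the helper name is made up):

    #include <stdint.h>

    #define PTEH_VPN_MASK  0xfffffc00u
    #define PTEH_ASID_MASK 0x000000ffu

    /* Record the faulting virtual page number in PTEH while preserving
     * the current ASID, so the exception handler finds both. */
    static uint32_t pteh_on_mmu_fault(uint32_t pteh, uint32_t fault_addr)
    {
        return (fault_addr & PTEH_VPN_MASK) | (pteh & PTEH_ASID_MASK);
    }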
Signed-off-by: Alexandre Courbot <gnurou@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Exception index of address read error should be 0x0e0.
Signed-off-by: Alexandre Courbot <gnurou@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
In cpu_sh4_invalidate_tlb, the UTLB was invalidated twice and the
ITLB left unchanged, probably because of an unfortunate copy/paste.
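A minimal sketch of the corrected shape (entry counts and field names are
illustrative, not the actual cpu_sh4_invalidate_tlb()):

    #define UTLB_SIZE 64
    #define ITLB_SIZE 4

    typedef struct { int v; } tlb_t;   /* only the valid bit, for illustration */

    static void invalidate_all_tlbs(tlb_t *utlb, tlb_t *itlb)
    {
        int i;

        /* Invalidate the UTLB once... */
        for (i = 0; i < UTLB_SIZE; i++) {
            utlb[i].v = 0;
        }
        /* ...and the ITLB as well, instead of looping over the UTLB twice. */
        for (i = 0; i < ITLB_SIZE; i++) {
            itlb[i].v = 0;
        }
    }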
Signed-off-by: Alexandre Courbot <gnurou@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Fix incorrect usage of ! and & in MMU-related functions. Thanks to Blue
Swirl for reporting the issue.
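The classic pattern behind such bugs, as a generic illustration rather than
the exact lines that were fixed: ! binds tighter than &, so the negation has
to wrap the whole masked test.

    #include <stdbool.h>
    #include <stdint.h>

    #define FLAG_V 0x1u   /* illustrative valid bit */

    static bool entry_invalid(uint32_t entry)
    {
        /* Wrong: "!entry & FLAG_V" evaluates (!entry) & FLAG_V. */
        /* Right: negate the masked bit as a whole, as done here. */
        return !(entry & FLAG_V);
    }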
Reported-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
SH4 uses 16-bit instructions, which means most constants are
loaded through a constant pool at the end of the subroutine. The same
memory page is therefore accessed in both exec and read mode.
With the current implementation, a QEMU TLB entry is set to read or
read/write mode after a UTLB search and to exec mode after an ITLB
search, which causes a lot of TLB exceptions to switch from read or
read/write to exec mode and vice versa.
This patch optimizes that by already setting the QEMU TLB entry to read
or read/write mode when a UTLB entry is copied into the ITLB (during an
ITLB miss). This improves the emulation speed by about 14%.
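A sketch of the optimization under assumed names (the real logic lives in
target-sh4/helper.c and differs in detail): when an ITLB miss is resolved
from the UTLB, the QEMU TLB entry also gets the read/write permissions, so a
later data access to the same page does not fault again.

    /* Illustrative protection bits mirroring QEMU's PAGE_* flags. */
    #define PROT_READ  0x1
    #define PROT_WRITE 0x2
    #define PROT_EXEC  0x4

    /* Hypothetical helper: derive the QEMU TLB protection when a UTLB entry
     * is copied into the ITLB; pr is the UTLB protection key and its low
     * bit is assumed to select writability. */
    static int itlb_fill_prot(int pr)
    {
        int prot = PROT_EXEC | PROT_READ;   /* exec because this is an ITLB fill */

        if (pr & 1) {
            prot |= PROT_WRITE;             /* grant writes up front as well */
        }
        return prot;
    }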
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Some Linux kernels seem to implement ITLB/UTLB flushing by
writing all TLB entries through the memory-mapped interface instead
of writing 1 to MMUCR.TI.
Implement the memory-mapped ITLB write interface so that such kernels can
boot. This fixes https://bugs.launchpad.net/bugs/700774.
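A rough sketch of decoding such a store (the base address and bit layout
follow the SH-4 manual's ITLB address array, but treat them and the handler
name as assumptions here):

    #include <stdint.h>

    #define ITLB_ADDR_ARRAY_BASE 0xf2000000u   /* ITLB address array */

    typedef struct {
        uint32_t vpn;
        int v;
    } itlb_entry;

    /* Bits 9:8 of the address select one of the four ITLB entries; the
     * stored value carries the VPN (bits 31:10) and the V bit (bit 8). */
    static void itlb_addr_array_write(itlb_entry *itlb, uint32_t addr,
                                      uint32_t val)
    {
        if ((addr & 0xff000000u) == ITLB_ADDR_ARRAY_BASE) {
            unsigned index = (addr >> 8) & 3;

            itlb[index].vpn = (val >> 10) & 0x3fffff;
            itlb[index].v   = (val >> 8) & 1;
        }
    }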
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When building with -DNDEBUG, assert(0) will not stop execution,
so it must not be used for abnormal termination.
Use cpu_abort() when in CPU context, abort() otherwise.
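A minimal illustration of the distinction (the helper below is a stand-in,
not QEMU's actual cpu_abort(), which also dumps CPU state):

    #include <stdio.h>
    #include <stdlib.h>

    static void toy_cpu_abort(const char *msg)
    {
        fprintf(stderr, "qemu: fatal: %s\n", msg);
        abort();   /* abort() terminates even with -DNDEBUG */
    }

    static void handle_impossible_case(void)
    {
        /* Previously: assert(0); -- compiled away under -DNDEBUG. */
        toy_cpu_abort("unexpected case in translator");
    }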
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
QEMU uses a fixed page size for the CPU TLB. If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.
When the guest invalidates the TLB by virtual address, we must invalidate
all entries covered by the large page. However, the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.
Implementing a full variable-size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages. If the guest invalidates this region then flush the
whole TLB.
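A simplified model of the bookkeeping (the field and function names are
invented; the real code sits in the softmmu TLB handling):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t vaddr_t;

    /* Per-CPU record of the region that may contain large-page mappings. */
    typedef struct {
        vaddr_t lp_addr;   /* representative address of the region      */
        vaddr_t lp_mask;   /* common prefix mask of all large pages seen */
    } LargePageState;

    static void note_large_page(LargePageState *s, vaddr_t addr,
                                vaddr_t page_mask)
    {
        if (s->lp_mask == 0) {
            /* First large page: remember it exactly. */
            s->lp_addr = addr & page_mask;
            s->lp_mask = page_mask;
            return;
        }
        /* Widen the mask until one addr/mask pair covers both mappings. */
        vaddr_t mask = s->lp_mask & page_mask;
        while ((s->lp_addr & mask) != (addr & mask)) {
            mask <<= 1;
        }
        s->lp_addr &= mask;
        s->lp_mask = mask;
    }

    /* On a per-page invalidate: if the address falls inside the recorded
     * region, give up and flush the whole TLB instead of one entry. */
    static bool must_flush_all(const LargePageState *s, vaddr_t addr)
    {
        return s->lp_mask != 0 && (addr & s->lp_mask) == s->lp_addr;
    }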
Signed-off-by: Paul Brook <paul@codesourcery.com>
env->exception_index should be cleared with -1, not 0.
See also 821b19fe92.
Spotted by Igor Kovalenko.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
On SH4, the ITLB and UTLB configurations are memory mapped, so loading
ITLB entries from the UTLB has to be simulated correctly. For that, the QEMU
TLB has to handle the execute (ITLB) and read/write (UTLB) permissions
separately.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
With the current code, the QEMU TLB is set up to match the read/write
mode of the MMU fault. This means that when a read access is done, the page
is set up in read-only mode. When the page is later accessed in write
mode, an MMU fault happens and the page is switched to write-only
mode. This flip-flop causes a lot of calls to the MMU code and slows
down the emulation.
This patch changes the MMU emulation so that the QEMU TLB is set up
to match the UTLB protection key. This impressively increases the
speed of the emulation.
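A hedged sketch of mapping the UTLB protection key to QEMU TLB permissions
(the PR encoding, writability in bit 0 and user access in bit 1, follows the
SH-4 manual; the names are illustrative):

    /* Stand-ins for QEMU's PAGE_READ / PAGE_WRITE flags. */
    #define PROT_READ  0x1
    #define PROT_WRITE 0x2

    static int prot_from_utlb_pr(int pr, int user_mode)
    {
        if (user_mode && !(pr & 2)) {
            return 0;                        /* privileged-only page */
        }
        /* Grant read and, if the key allows it, write up front, so a later
         * write does not have to go through the MMU fault path again. */
        return (pr & 1) ? (PROT_READ | PROT_WRITE) : PROT_READ;
    }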
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
There is an ITLB access violation if SR_MD=0 (user mode) while
the high bit of the protection key is 0 (privileged mode).
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b72.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Include assert.h from qemu-common.h and remove other direct uses.
cpu-all.h still needs to include it because of the dyngen-exec.h hacks.
Signed-off-by: Paul Brook <paul@codesourcery.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6970 c046a42c-6fe2-441c-8c8c-71466251a162
The entire U0 area is assumed to be cacheable.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6969 c046a42c-6fe2-441c-8c8c-71466251a162
Author: Vladimir Prus <vladimir@codesourcery.com>
Fix movcal.l/ocbi emulation.
* target-sh4/cpu.h (memory_content): New.
(CPUSH4State): New fields movcal_backup and movcal_backup_tail.
* target-sh4/helper.h (helper_movcal)
(helper_discard_movcal_backup, helper_ocbi): New.
* target-sh4/op_helper.c (helper_movcal)
(helper_discard_movcal_backup, helper_ocbi): New.
* target-sh4/translate.c (DisasContext): New field has_movcal.
(sh4_defs): Update CVR for SH7785.
(cpu_sh4_init): Initialize env->movcal_backup_tail.
(_decode_opc): Discard movca.l-backup.
Make use of helper_movcal and helper_ocbi.
(gen_intermediate_code_internal): Initialize has_movcal to 1.
Thanks to Shin-ichiro KAWASAKI and Paul Mundt for valuable feedback.
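A sketch of the backup bookkeeping described above (the struct layout is a
guess at the shape of memory_content, and the helpers only mirror what
helper_movcal and helper_discard_movcal_backup are said to do):

    #include <stdint.h>
    #include <stdlib.h>

    /* Each movca.l records the old memory contents so that a later ocbi
     * on the same cache line can restore them. */
    typedef struct memory_content {
        uint32_t address;
        uint32_t value;
        struct memory_content *next;
    } memory_content;

    /* Push one backup record (mirrors the idea behind helper_movcal). */
    static void movcal_backup_push(memory_content **head, uint32_t addr,
                                   uint32_t old_value)
    {
        memory_content *mc = malloc(sizeof(*mc));

        mc->address = addr;
        mc->value = old_value;
        mc->next = *head;
        *head = mc;
    }

    /* Drop all records (mirrors helper_discard_movcal_backup). */
    static void movcal_backup_discard(memory_content **head)
    {
        while (*head) {
            memory_content *mc = *head;

            *head = mc->next;
            free(mc);
        }
    }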
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6966 c046a42c-6fe2-441c-8c8c-71466251a162