Commit Graph

1365 Commits

Author SHA1 Message Date
Emilio G. Cota
f9f46db444 target/hppa: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.
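
A schematic before/after of the pattern, with "tb" standing for the
TranslationBlock being translated (illustrative, not the exact hppa hunks):

    /* before: the generated code baked in the current global state */
    bool parallel = parallel_cpus;

    /* after: derive it from the TB's own compile flags, so the translated
     * code stays valid regardless of later changes to the system state */
    bool parallel = (tb_cflags(tb) & CF_PARALLEL) != 0;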

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:41 -07:00
Emilio G. Cota
2399d4e7ce target/arm: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:41 -07:00
Emilio G. Cota
c5a49c63fa tcg: convert tb->cflags reads to tb_cflags(tb)
Convert all existing readers of tb->cflags to tb_cflags, so that we
use atomic_read and therefore avoid undefined behaviour in C11.

Note that the remaining setters/getters of the field are protected
by tb_lock, and therefore do not need conversion.
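
A sketch of what such an accessor boils down to, shown with standard C11
atomics for a self-contained illustration (the in-tree version uses QEMU's
atomic_read() macro and the real TranslationBlock):

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct TranslationBlock {
        _Atomic uint32_t cflags;
        /* other fields elided */
    } TranslationBlock;

    /* Readers go through an atomic load to avoid C11 data-race UB;
     * writers remain protected by tb_lock as noted above. */
    static inline uint32_t tb_cflags(TranslationBlock *tb)
    {
        return atomic_load_explicit(&tb->cflags, memory_order_relaxed);
    }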

Luckily all readers access the field via 'tb->cflags' (so no foo.cflags,
bar->cflags in the code base), which makes the conversion easily
scriptable:

FILES=$(git grep 'tb->cflags' target include/exec/gen-icount.h \
	 accel/tcg/translator.c | cut -f1 -d':' | sort | uniq)

perl -pi -e 's/([^.>])tb->cflags/$1tb_cflags(tb)/g' $FILES
perl -pi -e 's/([a-z->.]*)(->|\.)tb->cflags/tb_cflags($1$2tb)/g' $FILES

Then manually fixed the few errors that checkpatch reported.

Compile-tested for all targets.

Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 13:53:41 -07:00
Richard Henderson
55c3ceef61 qom: Introduce CPUClass.tcg_initialize
Move target cpu tcg initialization to common code,
called from cpu_exec_realizefn.

Acked-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 22:00:13 +02:00
Richard Henderson
11f4e8f8bf tcg: Remove TCGV_EQUAL*
When we used structures for TCGv_*, we needed a macro in order to
perform a comparison.  Now that we use pointers, this is just clutter.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 21:50:15 +02:00
Richard Henderson
dc41aa7d34 tcg: Remove GET_TCGV_* and MAKE_TCGV_*
The GET and MAKE functions weren't really specific enough.
We now have a full complement of functions that convert exactly
between temporaries, arguments, tcgv pointers, and indices.

The target/sparc change is also a bug fix, which would have affected
a host that defines TCG_TARGET_HAS_extr[lh]_i64_i32, i.e. MIPS64.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24 21:49:30 +02:00
Stafford Horne
6b4bbd6aeb openrisc/cputimer: Preparation for Multicore
In order to support multicore systems we move some of the previously
static state variables into the state of each core.

On the other hand, in order to allow timers to be synced between each
core, the ttcr (tick timer count register) is moved out of the core.
This does not match the real hardware spec, which has a separate timer
counter per core, but it seems the simplest way to keep each clock in sync.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2017-10-21 06:35:47 +09:00
Stafford Horne
8c949951ed target/openrisc: Make coreid and numcores variable
Previously, coreid and numcores were hard-coded as 0 and 1 respectively,
as OpenRISC QEMU did not have multicore support.

Multicore support is now being added so these registers need to have
configured values.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2017-10-21 06:35:47 +09:00
David Hildenbrand
2bcf018340 s390x/tcg: low-address protection support
This is a neat way to implement low address protection, whereby
only the first 512 bytes of the first two pages (each 4096 bytes) of
every address space are protected.
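
A self-contained sketch of that range check (the helper name is chosen here
for illustration only):

    #include <stdbool.h>
    #include <stdint.h>

    /* Low-address protection covers bytes 0-511 of each of the first two
     * 4096-byte pages, i.e. 0x0000-0x01ff and 0x1000-0x11ff. */
    static bool is_low_address(uint64_t addr)
    {
        return addr <= 0x1ff || (addr >= 0x1000 && addr <= 0x11ff);
    }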

Store a tec of 0 for the access exception; this is what is defined by
Enhanced Suppression on Protection in case of a low address protection
event (bit 61 set to 0, rest undefined).

We have to make sure to pass the access address, not the masked page
address, into mmu_translate*().

Drop the check from testblock, so we can properly test this via
kvm-unit-tests.

This will check every access going through one of the MMUs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171016202358.3633-3-david@redhat.com>
[CH: restored error message for access register mode]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Halil Pasic
6bb6f19473 s390x: refactor error handling for MSCH handler
Simplify the error handling of the MSCH.  Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.

Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-8-pasic@linux.vnet.ibm.com>
[CH: fix return code for fctl != 0]
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Halil Pasic
ae9f1be3bd s390x: refactor error handling for HSCH handler
Simplify the error handling of the HSCH.  Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.

Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-7-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Halil Pasic
773314426e s390x: refactor error handling for CSCH handler
Simplify the error handling of the CSCH.  Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.

Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-6-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Halil Pasic
963764081d s390x: refactor error handling for XSCH handler
Simplify the error handling of the XSCH.  Let the code detecting the
condition tell (in a less ambiguous way) how it's to be handled. No
changes in behavior.

Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-5-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Halil Pasic
66dc50f705 s390x: improve error handling for SSCH and RSCH
Simplify the error handling of the SSCH and RSCH handlers, avoiding
arbitrary and cryptic error codes being used to tell how the instruction
is supposed to end.  Let the code detecting the condition tell how it's
to be handled in a less ambiguous way.  It's best to handle SSCH and RSCH
in one go as the emulation of the two shares a lot of code.

For passthrough this change isn't pure refactoring, but changes the way
kernel reported EFAULT is handled. After clarifying the kernel interface
we decided that EFAULT shall be mapped to unit exception.  Same goes for
unexpected error codes and absence of required ORB flags.

Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20171017140453.51099-4-pasic@linux.vnet.ibm.com>
Tested-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
[CH: cosmetic changes]
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Igor Mammedov
32dc6aa061 s390x: move s390x_new_cpu() into board code
s390-virtio-ccw.c is the sole user of s390x_new_cpu(),
so move this helper there.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1508253203-119237-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Igor Mammedov
ac7e4cbbab s390x: fix cpu object reference leak in s390x_new_cpu()
object_new() returns cpu with refcnt == 1 and after realize
refcnt == 2*. s390x_new_cpu(), as the owner of the first reference,
should have released it on exit in both cases (on error and on
success) to avoid leaking it. Do so for both cases.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1508247680-98800-2-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
741a4ec186 target/s390x: special handling when starting a CPU with WAIT PSW
When we try to start a CPU with a WAIT PSW, we have to take care that
TCG will actually try to continue executing instructions.

We must therefore really only unhalt the CPU if we don't have a WAIT
PSW. Also document the special order for restart interrupts, which
load a new PSW and change the state to operating.

To keep KVM working, simply don't look at the WAIT bit when loading
the PSW. Otherwise the behavior of a restart interrupt when a CPU is
stopped would be changed.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-31-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
f74990a5d0 s390x/tcg: refactor stfl(e) to use s390_get_feat_block()
Refactor it to use s390_get_feat_block(). Directly write into the mapped
lowcore with stfl and make sure it is really only compiled if needed.

While at it, add an alignment check for STFLE and avoid
potential_page_fault() by properly restoring the CPU state.

Due to s390_get_feat_block(), we will now also indicate the
"Configuration-z-architectural-mode", which is the right thing to do
with the new SIGP code.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-30-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
0fc60ca58a s390x/tcg: unlock NMI
Nothing hinders us anymore from unlocking the restart code (used for
NMI).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-29-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
ca26c5d32b s390x/cpumodel: allow to enable SENSE RUNNING STATUS for qemu
Now that we properly implement it, allow enabling it.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-28-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
11b0079cec s390x/tcg: switch to new SIGP handling code
This effectively enables experimental SMP support. Floating interrupts are
still a mess, so allow it but print a big warning. There also seems
to be a problem with CPU hotplug (after the main loop started).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-27-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[CH: changed insn-data.def as pointed out by Richard]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
b376a5545a s390x/tcg: flush the tlb on SIGP SET PREFIX
Thanks to Aurelien Jarno for doing this in his prototype.

We can flush the whole TLB as this should happen really rarely.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-26-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
b1ab5f6068 s390x/tcg: implement STOP and RESET interrupts for TCG
Implement them like KVM implements/handles them. Both can only be
triggered via SIGP instructions. RESET has (almost) the lowest priority if
the CPU is running, and the highest if the CPU is STOPPED. This is handled
in SIGP code already. On delivery, we only have to care about the
"CPU running" scenario.

STOP is defined to be delivered after all other interrupts have been
delivered. Therefore it has the actual lowest priority.

As both can wake up a CPU if sleeping, indicate them correctly to
external code (e.g. cpu_has_work()).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-25-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
a6880d213b s390x/tcg: implement SIGP CONDITIONAL EMERGENCY SIGNAL
Mostly analogous to the kernel/KVM version (so I assume the checks are
correct :) ). As a preparation for TCG.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-24-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
c50105d47c s390x/tcg: implement SIGP EMERGENCY SIGNAL
As preparation for TCG.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-23-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
070aa1a493 s390x/tcg: implement SIGP EXTERNAL CALL
As preparation for TCG.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-22-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
302230fc44 s390x/tcg: implement SIGP SENSE
Add it as preparation for TCG. Sensing could later be done completely
lockless.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-21-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
d1b468bc88 s390x/tcg: implement SIGP SENSE RUNNING STATUS
Preparation for TCG; for KVM this is completely handled in the
kernel.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-20-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
3047f8b549 s390x/kvm: factor out actual handling of STOP interrupts
For KVM, the KVM module decides when a STOP can be performed (when the
STOP interrupt can be processed). Factor it out so we can use it
later for TCG.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-19-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
74b4c74d5e s390x/kvm: factor out SIGP code into sigp.c
We want to use the same code base for TCG, so let's cleanly factor it
out.

The sigp mutex is currently not really needed, as everything is
protected by the iothread mutex. But this could change later, so leave
it in place and initialize it properly from common code.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
a7a2b8e3d5 s390x/kvm: drop two debug prints
Preparation for moving it out of kvm.c.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
f875cb0c21 s390x/kvm: factor out storing of adtl CPU status
This is called from the SIGP code that is about to be factored out, so
let's move it. Add a FIXME for TCG code in the future.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
cf729baaec s390x/kvm: factor out storing of CPU status
Factor it out into s390_store_status(), to be used also by TCG later on.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-14-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
eabcea18f8 s390x/kvm: generalize SIGP stop and restart interrupt injection
Preparation for factoring it out into !kvm code.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
27292ff18d s390x/kvm: pass ipb directly into handle_sigp()
No need to pass kvm_run. Pass the parameters in alphabetical order.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
c6892047dc target/s390x: interpret PSW_MASK_WAIT only for TCG
KVM handles the wait PSW itself and triggers a WAIT ICPT in case it
really wants to sleep (disabled wait).

This will later allow us to change the order of loading a restart
interrupt and setting a CPU to OPERATING on SIGP RESTART without
changing KVM behavior.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-11-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
f1cbfe6a73 s390x/tcg: handle WAIT PSWs during interrupt injection
If we encounter a WAIT PSW, we have to halt immediately. Using
cpu_loop_exit() at this point feels wrong. Simply leaving
cs->exception_index set doesn't result in an immediate stop.

This is also necessary to properly handle SIGP STOP interrupts later.

The CPU_INTERRUPT_HALT will be processed immediately and will properly
set the CPU to halted (also resetting cs->exception_index to EXCP_HLT).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-10-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
83f7f32901 target/s390x: factor out handling of WAIT PSW into s390_handle_wait()
This will now also detect crashes under TCG. We can directly use
cpu->env.psw.addr instead of kvm_run, as we do a cpu_synchronize_state().

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
2b3394f13d s390x/tcg: a CPU cannot switch state due to an interrupt
Going to OPERATING here looks wrong. A CPU should never even be
!OPERATING at this point. Unhalting will already be done in
cpu_handle_halt() if there is work, so we can drop this statement
completely.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
4beab67146 s390x/tcg: STOPPED cpus can never wake up
Interrupts can't wake such CPUs up. SIGP from other CPUs has to be used
to toggle the state.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
9dec238826 s390x/tcg: take care of external interrupt subclasses
We can now let go of INTERRUPT_EXT. When cr0 changes, we have to
revalidate if we now have a pending external interrupt, just like
when the PSW (or SYSTEM MASK only) changes.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
8417f904ba s390x/tcg: rework checking for deliverable interrupts
Currently, enabling/disabling of interrupts is not really supported.

Let's improve interrupt handling code by explicitly checking for
deliverable interrupts only. This is the first step. Checking for
external interrupt subclasses will be done next.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
14ca122e75 s390x/tcg: injection of emergency signals and external calls
Preparation for the new TCG SIGP code. In particular, prepare for
indicating that another external call is already pending.

Take care of interrupt priority.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
d516f74c99 s390x/tcg: cleanup service interrupt injection
There are still some leftovers from old virtio interrupts in there.
Most importantly, we don't have to queue service interrupts anymore.
Just like KVM, we can simply multiplex the SCLP service interrupts and
avoid the queue.

Also, now only valid parameters/cpu_addr will be stored on service
interrupts.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-3-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
David Hildenbrand
6482b0ffd1 s390x/tcg: turn INTERRUPT_EXT into a mask
External interrupts are currently all handled like floating external
interrupts: they are queued. Let's prepare for a split of floating
and local interrupts by turning INTERRUPT_EXT into a mask.

While we can have various floating external interrupts of one kind, there
is usually only one (or a fixed number) of each local external interrupt.

So turn INTERRUPT_EXT into a mask and properly indicate the kind of
external interrupt. Floating interrupts will have to be moved out of
the CPU instance later, once we have SMP support.

The only floating external interrupts used right now are SERVICE
interrupts, so let's use that name. Following patches will clean up
SERVICE interrupt injection.

This gets rid of the ugly special handling for cpu timer and clock
comparator interrupts. And we really only store the parameters as
defined by the PoP.
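
A sketch of the direction; the bit names and the pending_int field below are
illustrative assumptions, not necessarily the exact identifiers in the patch:

    /* one bit per kind of local external interrupt, instead of a single
     * INTERRUPT_EXT flag plus a queue */
    #define INTERRUPT_EXT_SERVICE        (1 << 0)
    #define INTERRUPT_EXT_CPU_TIMER      (1 << 1)
    #define INTERRUPT_EXT_CLOCK_COMP     (1 << 2)
    #define INTERRUPT_EXT_EMERGENCY      (1 << 3)
    #define INTERRUPT_EXT_EXTERNAL_CALL  (1 << 4)

    /* raise: env->pending_int |= INTERRUPT_EXT_SERVICE;
     * test:  env->pending_int & INTERRUPT_EXT_SERVICE
     * clear: env->pending_int &= ~INTERRUPT_EXT_SERVICE; */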

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Marc-André Lureau
96f64aa878 S390: use g_new() family of functions
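
The conversion follows the usual GLib pattern (illustrative, not the exact
hunks; "n" and the variable name are placeholders):

    /* before: untyped allocation with an explicit sizeof */
    cpu_states = g_malloc0(sizeof(S390CPU *) * n);

    /* after: g_new0() is type-checked and avoids the manual sizeof */
    cpu_states = g_new0(S390CPU *, n);
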
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: more changes in hw/s390x/css.c, added target/s390x/cpu_models.c]
Message-Id: <20171006235023.11952-27-f4bug@amsat.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 13:32:10 +02:00
Peter Maydell
a8b392ac9a * TCG 8-byte atomic accesses bugfix (Andrew)
* Report disk rotation rate (Daniel)
 * Report invalid scsi-disk block size configuration (Mark)
 * KVM and memory API MemoryListener fixes (David, Maxime, Peter Xu)
 * x86 CPU hotplug crash fix (Igor)
 * Load/store API documentation (Peter Maydell)
 * Small fixes by myself and Thomas
 * qdev DEVICE_DELETED deferral (Michael)
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAlnnJUgUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroMifwf/dTZwtGqvAV4+jezCiZ3MTknz39dM
 HOGnD3m2xy04QT5LHiwDmaLFXy1y/AUVQm79JMPN4dKoFvtruREoWUq8EU0FCsLZ
 PkdCbJuXKGiBYMRXkQQxeT8lAyaBQwZdc+O9mYuOrSGZOQscA7SxgClYmzVdVzcy
 ZNTqkuaw1NDIAapdfGv94WLza4Nb8XX8bFwohgkf4mLDXifhjYHQTbBTfB0NqPxH
 Rk3HU+wgYUCJRYXpvktESgzRo5sm1aozCRq3f0Y6RV12ylgF6GG4CyN7YcKRn8eh
 NZbyehHiF5YU2kuvO9SmAB+FqM2+aMtq8uuNuI1Nxgd222MOVaChyWc3jg==
 =gmUj
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* TCG 8-byte atomic accesses bugfix (Andrew)
* Report disk rotation rate (Daniel)
* Report invalid scsi-disk block size configuration (Mark)
* KVM and memory API MemoryListener fixes (David, Maxime, Peter Xu)
* x86 CPU hotplug crash fix (Igor)
* Load/store API documentation (Peter Maydell)
* Small fixes by myself and Thomas
* qdev DEVICE_DELETED deferral (Michael)

# gpg: Signature made Wed 18 Oct 2017 10:56:24 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (29 commits)
  scsi: reject configurations with logical block size > physical block size
  qdev: defer DEVICE_DEL event until instance_finalize()
  Revert "qdev: Free QemuOpts when the QOM path goes away"
  qdev: store DeviceState's canonical path to use when unparenting
  qemu-pr-helper: use new libmultipath API
  watch_mem_write: implement 8-byte accesses
  notdirty_mem_write: implement 8-byte accesses
  memory: reuse section_from_flat_range()
  kvm: simplify kvm_align_section()
  kvm: region_add and region_del is not called on updates
  kvm: fix error message when failing to unregister slot
  kvm: tolerate non-existing slot for log_start/log_stop/log_sync
  kvm: fix alignment of ram address
  memory: call log_start after region_add
  target/i386: trap on instructions longer than 15 bytes
  target/i386: introduce x86_ld*_code
  tco: add trace events
  docs/devel/loads-stores.rst: Document our various load and store APIs
  nios2: define tcg_env
  build: remove CONFIG_LIBDECNUMBER
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-19 15:38:07 +01:00
Peter Maydell
f2a48d696c Linux-user updates for Qemu 2.11
-----BEGIN PGP SIGNATURE-----
 
 iQJLBAABCAA1FiEE/4IDyMORmK4FgUHvtEiQ3t48m8AFAlnnRv4XHHJpa3Uudm9p
 cGlvQGxpbmFyby5vcmcACgkQtEiQ3t48m8D1hw/8Cu/gXcWzHPxoqU/Bz9lSMS0+
 1UICiGBVNBcrs7QapG+Z4mI4goGcxjFKFEFgHuuvQTb91ONR43jIx4cNryOlu3jS
 9njiHkgY7EeKXf84uPToie9WiXvf2IMI6wKdqJ53htJ8/Y1putbJ2CGf7c8vGIGh
 9WhNGbiQyhpWU3xrEThVptUz+CVOqXxBYVKff0krG7ZSyODhrybPDkp8D020rXa1
 O7oc8pAvwpYWiqpp+iiEU5PXFB7DoLUldefqqZQw+Rgntu6sEU+Rum7ykA2d6jBR
 o0YldbJeU2eq2Kh9EqJHDbTZwevTeul6NEcgd6ZoJkTDhOSDK9mMd6jkkRUHzf+u
 OAlGFFmfdqf9NE5S38XIMRHykwgMtWzQNU2MVLtalkEcadbOy7RnY/lOE/uEACh4
 Ao/GPMDCar7nYR2Ga/P9LXFhR90hbvvvRAB10l7TIgE6/EJJUJs9E0HxC5ktsonv
 gC4tMTejhCwe61Neb0Z9b2gxSkwznJYe9OdTa13CENXMAy3AqPUTsDY91Y80HdpX
 3rE869VSHMIQNuyNecCrUeHyKYkLoB+qGmavtO7m1RNEUeWTPP2WSDMCVKxVT4bg
 ol0UeTOwy0hGm2MUwn3scL+yMEVWjcEyInWDZCCGIvJMxfnwkqAMTO0xZXAIE8n0
 bBHdQ4xVsujMIDmQcO0=
 =9fWY
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/riku/tags/pull-linux-user-20171018' into staging

Linux-user updates for Qemu 2.11

# gpg: Signature made Wed 18 Oct 2017 13:20:14 BST
# gpg:                using RSA key 0xB44890DEDE3C9BC0
# gpg: Good signature from "Riku Voipio <riku.voipio@iki.fi>"
# gpg:                 aka "Riku Voipio <riku.voipio@linaro.org>"
# Primary key fingerprint: FF82 03C8 C391 98AE 0581  41EF B448 90DE DE3C 9BC0

* remotes/riku/tags/pull-linux-user-20171018:
  linux-user: Fix TARGET_MTIOCTOP/MTIOCGET/MTIOCPOS values
  linux-user/main: support dfilter
  linux-user: Fix target FS_IOC_GETFLAGS and FS_IOC_SETFLAGS numbers
  linux-user/sh4: Reduce TARGET_VIRT_ADDR_SPACE_BITS to 31
  linux-user: Tidy and enforce reserved_va initialization
  tcg: Fix off-by-one in assert in page_set_flags
  linux-user: Allow -R values up to 0xffff0000 for 32-bit ARM guests
  linux-user: remove duplicate break in syscall
  target/m68k,linux-user: manage FP registers in ucontext
  linux-user: fix O_TMPFILE handling

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-19 14:39:30 +01:00
Igor Mammedov
2e9c10eba0 ppc: spapr: use generic cpu_model parsing
use the generic cpu_model parsing introduced by
 (6063d4c0f vl.c: convert cpu_model to cpu type and set of global properties before machine_init())

it allows us to:
  * replace sPAPRMachineClass::tcg_default_cpu with
    MachineClass::default_cpu_type
  * drop cpu_parse_cpu_model() from hw/ppc/spapr.c and reuse
    the one in vl.c
  * simplify spapr_get_cpu_core_type() by removing the
    no-longer-needed recursion, since alias lookup now happens
    earlier in vl.c and spapr_get_cpu_core_type() only works
    with the cpu type resulting from it
  * spapr no longer needs to parse or depend on
    MachineState::cpu_model (which is being phased out); all that
    parsing is done by generic code and a target-specific callback

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[dwg: Correct minor compile error]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:01 +11:00
Igor Mammedov
b918f885ae ppc: move ppc_cpu_lookup_alias() before its first user
The next commit will drop the ppc_cpu_lookup_alias() declaration from the
header and make it static, which would break its last user,
ppc_cpu_class_by_name(), since ppc_cpu_class_by_name() is defined before
ppc_cpu_lookup_alias().

To avoid this, move ppc_cpu_lookup_alias() right before
ppc_cpu_class_by_name().

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:01 +11:00
Igor Mammedov
5bbb264186 ppc: spapr: register 'host' core type along with the rest of core types
Consolidate 'host' core type registration by moving it from
KVM-specific code into spapr_cpu_core.c, similar to how it is
done in the x86 target.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:00 +11:00
Igor Mammedov
b51d3c8818 ppc: spapr: use cpu type name directly
Replace sPAPRCPUCoreClass::cpu_class with the cpu type name,
since it was only needed to get the type name at the points
where it was accessed.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:00 +11:00
Igor Mammedov
b8e999673b ppc: move '-cpu foo,compat=xxx' parsing into ppc_cpu_parse_featurestr()
There is a dedicated callback, CPUClass::parse_features, whose
purpose is to convert -cpu features into a set of global
properties AND to deal with compat/legacy features that can't
be directly translated into CPU properties.

Create a ppc variant of it (ppc_cpu_parse_featurestr) and
move the 'compat=val' handling from spapr_cpu_core.c into it.
That removes the dependency of board/core code on cpu_model
parsing and lets us reuse the common -cpu parsing
introduced by 6063d4c0.

Set the "max-cpu-compat" property only if it exists; in practice
this should limit the 'compat' hack to the spapr machine and avoid
including machine/spapr headers in target/ppc/cpu.c.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:00 +11:00
Sandipan Das
af1c259f6d target/ppc: Fix carry flag setting for shift algebraic instructions
For POWER ISA v3.0, the XER bit CA32 needs to be set by the shift
right algebraic instructions whenever the CA bit is to be set. This
change affects the following instructions:
  * Shift Right Algebraic Word (sraw[.])
  * Shift Right Algebraic Word Immediate (srawi[.])
  * Shift Right Algebraic Doubleword (srad[.])
  * Shift Right Algebraic Doubleword Immediate (sradi[.])
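
A self-contained sketch of the CA/CA32 computation for srawi, one of the
affected instructions (function name and factoring are assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    /* srawi: CA is set when the source is negative and any 1-bits are
     * shifted out; ISA v3.0 additionally requires CA32 to mirror CA. */
    static void srawi_carry(int32_t rs, unsigned sh, bool *ca, bool *ca32)
    {
        uint32_t shifted_out = sh ? ((uint32_t)rs & ((1u << sh) - 1)) : 0;

        *ca   = (rs < 0) && (shifted_out != 0);
        *ca32 = *ca;   /* the fix: set CA32 whenever CA is set */
    }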

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:00 +11:00
David Gibson
1ed9c8af50 target/ppc: Add POWER9 DD2.0 model information
At the moment the only POWER9 model which is listed in qemu is v1.0 (aka
"DD1").  This is a very early (read, buggy) version which will never be
released to the public - it was included in qemu only for the convenience
of those doing bringup on the early silicon.  For bonus points, we actually
had its PVR incorrect in the table (0x004e0000 instead of 0x004e0100).  We
also never actually implemented the differences in behaviour (read, bugs)
that marked DD1 in qemu.

Now that we know the PVR for the substantially better v2.0 (DD2) chip,
include it and make it the default POWER9 in qemu.  For the time being we
leave the DD1 definition in place for the poor souls (read, me) who still
need to work with DD1 hardware.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:00 +11:00
Thomas Huth
7ff26aa6c6 target/ppc: Remove unused PPC 460 and 460F definitions
We don't have any 460 or 460F CPUs in QEMU, so the init functions
are just dead code. Let's simply remove them (translate_init.c
is already big enough without them).

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 10:34:00 +11:00
Paolo Bonzini
b066c53757 target/i386: trap on instructions longer than 15 bytes
Besides being more correct, arbitrarily long instructions allow the
generation of a translation block that spans three pages.  This
confuses the generator and even allows ring 3 code to poison the
translation block cache and inject code into other processes that are
in guest ring 3.

This is an improved (and more invasive) fix for commit 30663fd ("tcg/i386:
Check the size of instruction being translated", 2017-03-24).  In addition
to being more precise (and generating the right exception, which is #GP
rather than #UD), it distinguishes better between page faults and too long
instructions, as shown by this test case:

    #include <sys/mman.h>
    #include <string.h>
    #include <stdio.h>

    int main()
    {
            char *x = mmap(NULL, 8192, PROT_READ|PROT_WRITE|PROT_EXEC,
                           MAP_PRIVATE|MAP_ANON, -1, 0);
            memset(x, 0x66, 4096);
            x[4096] = 0x90;
            x[4097] = 0xc3;
            char *i = x + 4096 - 15;
            mprotect(x + 4096, 4096, PROT_READ|PROT_WRITE);
            ((void(*)(void)) i) ();
    }

... which produces a #GP without the mprotect, and a #PF with it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 18:03:53 +02:00
Paolo Bonzini
e3af7c788b target/i386: introduce x86_ld*_code
These take care of advancing s->pc, and will provide a unified point
at which to check for the 15-byte instruction length limit.
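
A hedged sketch of the shape of the byte-fetch variant; cpu_ldub_code() is
QEMU's code loader, the rest is illustrative:

    /* Fetch one code byte and advance the decode PC; this gives a single
     * place where the 15-byte instruction length limit can be enforced. */
    static uint8_t x86_ldub_code(CPUX86State *env, DisasContext *s)
    {
        return cpu_ldub_code(env, s->pc++);
    }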

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 18:03:53 +02:00
Paolo Bonzini
17bd9597be nios2: define tcg_env
This should be done by all targets and, since commit 53f6672bcf
("gen-icount: use tcg_ctx.tcg_env instead of cpu_env", 2017-06-30),
is causing the NIOS2 target to hang.

This is because the test for "should I exit to the main loop"
was being done with the correct offset to the icount decrementer,
but using TCG temporary 0 (the frame pointer) rather than the
env pointer.
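
The fix therefore amounts to a one-line assignment in the target's TCG init
function (sketched from the description above):

    /* point generic code such as gen-icount.h at the real env pointer
     * instead of TCG temporary 0 */
    tcg_ctx.tcg_env = cpu_env;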

Cc: qemu-stable@nongnu.org
Cc: Marek Vasut <marex@denx.de>
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 18:03:52 +02:00
Paolo Bonzini
7271a81949 build: remove CONFIG_LIBDECNUMBER
It is used by all PPC targets; we can give the directory its own
Makefile.objs file, and include it directly from target/ppc.
target/s390 can do the same when it starts using it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 18:03:52 +02:00
Richard Henderson
cc1b3960a1 linux-user/sh4: Reduce TARGET_VIRT_ADDR_SPACE_BITS to 31
The real kernel has TASK_SIZE as 0x7c000000, due to quirks with
a couple of SH parts.  But nominally user-space is limited to 2GB.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170708025030.15845-4-rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2017-10-16 16:00:56 +03:00
Richard Henderson
18e80c55bb linux-user: Tidy and enforce reserved_va initialization
We had a check using TARGET_VIRT_ADDR_SPACE_BITS to make sure
that the allocation coming in from the command-line option was
not too large, but that didn't include target-specific knowledge
about other restrictions on user-space.

Remove several target-specific hacks in linux-user/main.c.

For MIPS and Nios, we can replace them with proper adjustments
to the respective target's TARGET_VIRT_ADDR_SPACE_BITS definition.

For ARM, we had no existing ifdef but I suspect that the current
default value of 0xf7000000 was chosen with this in mind.  Define
a workable value in linux-user/arm/, and also document why the
special case is required.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20170708025030.15845-3-rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2017-10-16 16:00:56 +03:00
Peter Maydell
76eff04d16 target/arm: Implement SG instruction corner cases
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
 * SG instruction in NS memory (behaves as a NOP)
 * SG in S memory but CPU already secure (clears IT bits and
   does nothing else)
 * SG instruction in v8M without Security Extension (NOP)

These can be implemented in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
dcf14dfb70 target/arm: Support some Thumb insns being always unconditional
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
5b8d7289e9 target-arm: Simplify insn_crosses_page()
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
 * it's only called from thumb_tr_translate_insn() so we know
   for certain that we're looking at a Thumb insn
 * the caller's check for dc->pc >= dc->next_page_start - 3
   means that dc->pc can't possibly be 4 aligned, so there's
   no need to check that (the check was partly there to ensure
   that we didn't treat an ARM insn as Thumb, I think)
 * we now have thumb_insn_is_16bit() which lets us do a precise
   check of the length of the next insn, rather than opencoding
   an inaccurate check

Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
296e5a0a6c target/arm: Pull Thumb insn word loads up to top level
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space.  To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn.  The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code.  (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
6b8acf256d target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
d02a8698d7 target/arm: Implement secure function return
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
3e3fa230e3 target/arm: Implement BLXNS
Implement the BLXNS instruction, which allows secure code to
call non-secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
333e10c51e target/arm: Implement SG instruction
Implement the SG instruction, which we emulate 'by hand' in the
exception handling code path.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-3-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Peter Maydell
b9f587d62c target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org
2017-10-12 13:23:14 +01:00
Emilio G. Cota
7f11636dbe tcg: remove addr argument from lookup_tb_ptr
It is unlikely that we will ever want to call this helper passing
an argument other than the current PC. So just remove the argument,
and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-10 07:37:10 -07:00
Todd Eisenberger
e0dd5fd41a x86: Correct translation of some rdgsbase and wrgsbase encodings
It looks like there was a transcription error when writing this code
initially.  The code previously only decoded src or dst of rax.  This
resolves
https://bugs.launchpad.net/qemu/+bug/1719984.

Signed-off-by: Todd Eisenberger <teisenbe@google.com>
Message-Id: <CAP26EVRNVb=Mq=O3s51w7fDhGVmf-e3XFFA73MRzc5b4qKBA4g@mail.gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-10-09 23:29:20 -03:00
Philippe Mathieu-Daudé
8301ea444a qom/cpu: move cpu_model null check to cpu_class_by_name()
and clean every implementation.

Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170917232842.14544-1-f4bug@amsat.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-10-09 23:21:52 -03:00
Peter Maydell
b81ac0eb63 target/arm: Factor out "get mmuidx for specified security state"
For the SG instruction and secure function return we are going
to want to do memory accesses using the MMU index of the CPU
in secure state, even though the CPU is currently in non-secure
state. Write arm_v7m_mmu_idx_for_secstate() to do this job,
and use it in cpu_mmu_index().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-17-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:49 +01:00
Peter Maydell
fe768788d2 target/arm: Fix calculation of secure mm_idx values
In cpu_mmu_index() we try to do this:
        if (env->v7m.secure) {
            mmu_idx += ARMMMUIdx_MSUser;
        }
but it will give the wrong answer, because ARMMMUIdx_MSUser
includes the 0x40 ARM_MMU_IDX_M field, and so does the
mmu_idx we're adding to, and we'll end up with 0x8n rather
than 0x4n. This error is then nullified by the call to
arm_to_core_mmu_idx() which masks out the high part, but
we're about to factor out the code that calculates the
ARMMMUIdx values so it can be used without passing it through
arm_to_core_mmu_idx(), so fix this bug first.
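
A worked illustration of the arithmetic; the low nibbles are made up, only
the shared 0x40 flag matters:

    #define ARM_MMU_IDX_M 0x40

    /* Adding two values that both contain the 0x40 ARM_MMU_IDX_M flag
     * yields an 0x8n value instead of the intended 0x4n. */
    static int broken_sum(void)
    {
        int mmu_idx = ARM_MMU_IDX_M | 0x2;  /* hypothetical current index */
        int ms_user = ARM_MMU_IDX_M | 0x6;  /* stand-in for ARMMMUIdx_MSUser */

        return mmu_idx + ms_user;           /* 0x88, not 0x48 */
    }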

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:49 +01:00
Peter Maydell
35337cc391 target/arm: Implement security attribute lookups for memory accesses
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.

In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this because a mismatch between
address attributes and CPU state means either:
 * some kind of fault (usually a SecureFault, but in theory
   perhaps a UserFault for unaligned access to Device memory)
 * execution of the SG instruction in NS state from a
   Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(), which
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG;
it also doesn't include implementation of the IDAU, which
is a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:49 +01:00
Peter Maydell
9901c576f6 nvic: Implement Security Attribution Unit registers
Implement the register interface for the SAU: SAU_CTRL,
SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
actual behaviour is implemented here; registers just
read back as written.

When the CPU definition for Cortex-M33 is eventually
added, its initfn will set cpu->sau_sregion, in the same
way that we currently set cpu->pmsav7_dregion for the
M3 and M4.

Number of SAU regions is typically a configurable
CPU parameter, but this patch doesn't provide a
QEMU CPU property for it. We can easily add one when
we have a board that requires it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:49 +01:00
Peter Maydell
d3392718e1 target/arm: Add v8M support to exception entry code
Add support for v8M and in particular the security extension
to the exception entry code. This requires changes to:
 * calculation of the exception-return magic LR value
 * push the callee-saves registers in certain cases
 * clear registers when taking non-secure exceptions to avoid
   leaking information from the interrupted secure code
 * switch to the correct security state on entry
 * use the vector table for the security state we're targeting

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:49 +01:00
Peter Maydell
907bedb3f3 target/arm: Add support for restoring v8M additional state context
For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
bfb2eb5278 target/arm: Update excret sanity checks for v8M
In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
bed079da04 target/arm: Add new-in-v8M SFSR and SFAR
Add the new M profile Secure Fault Status Register
and Secure Fault Address Register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
4e4259d3c5 target/arm: Don't warn about exception return with PC low bit set for v8M
In the v8M architecture, return from an exception to a PC which
has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
cb484f9a6e target/arm: Warn about restoring to unaligned stack
Attempting to do an exception return with an exception frame that
is not 8-aligned is UNPREDICTABLE in v8M; warn about this.
(It is not UNPREDICTABLE in v7M, and our implementation can
handle the merely-4-aligned case fine, so we don't need to
do anything except warn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
224e0c300a target/arm: Check for xPSR mismatch usage faults earlier for v8M
ARM v8M specifies that the INVPC usage fault for mismatched
xPSR exception field and handler mode bit should be checked
before updating the PSR and SP, so that the fault is taken
with the existing stack frame rather than by pushing a new one.
Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault
check should happen later, we have to retain the original
code for that check rather than being able to merge the two.
(The distinction is architecturally visible but only in
very obscure corner cases like attempting an invalid exception
return with an exception frame in read only memory.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
3f0cddeee1 target/arm: Restore SPSEL to correct CONTROL register on exception return
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
value should be restored to the SPSEL bit in the CONTROL register
banked specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate() which behaves like
write_v7m_control_spsel() but allows the caller to specify which
CONTROL bank to use, reimplement write_v7m_control_spsel() in
terms of it, and use it in exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:48 +01:00
Peter Maydell
3919e60b6e target/arm: Restore security state on exception return
Now that we can handle the CONTROL.SPSEL bit not necessarily being
in sync with the current stack pointer, we can restore the correct
security state on exception return. This happens before we start
to read registers off the stack frame, but after we have taken
possible usage faults for bad exception return magic values and
updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:47 +01:00
Peter Maydell
de2db7ec89 target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
In the v7M architecture, there is an invariant that if the CPU is
in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
This in turn means that the current stack pointer is always
indicated by CONTROL.SPSEL, even though Handler mode always uses
the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now
be nonzero in Handler mode (though Handler mode still always
uses the Main stack pointer). In preparation for this change,
change how we handle this bit: rename switch_v7m_sp() to
the now more accurate write_v7m_control_spsel(), and make it
check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch
active SP on exception exit from before we pop the exception
frame to after it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:47 +01:00
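
The rule being prepared for can be summarised in one predicate (illustrative C,
not the QEMU helper itself):

    #include <stdbool.h>

    /* Handler mode always runs on the Main stack; SPSEL only selects the
     * Process stack while in Thread mode. In v7M an invariant additionally
     * guarantees SPSEL == 0 whenever handler_mode is true; v8M drops that. */
    static bool using_process_stack(bool handler_mode, bool spsel)
    {
        return !handler_mode && spsel;
    }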
Peter Maydell
5b5223997c target/arm: Don't switch to target stack early in v7M exception return
Currently our M profile exception return code switches to the
target stack pointer relatively early in the process, before
it tries to pop the exception frame off the stack. This is
awkward for v8M for two reasons:
 * in v8M the process vs main stack pointer is not selected
   purely by the value of CONTROL.SPSEL, so updating SPSEL
   and relying on that to switch to the right stack pointer
   won't work
 * the stack we should be reading the stack frame from and
   the stack we will eventually switch to might not be the
   same if the guest is doing strange things

Change our exception return code to use a 'frame pointer'
to read the exception frame rather than assuming that we
can switch the live stack pointer this early.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org
2017-10-06 16:46:47 +01:00
Jan Kiszka
77077a8300 arm: Fix SMC reporting to EL2 when QEMU provides PSCI
This properly forwards SMC events to EL2 when PSCI is provided by QEMU
itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on
suggestions by Peter Maydell.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 4f243068-aaea-776f-d18f-f9e05e7be9cd@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-06 16:46:47 +01:00
Cornelia Huck
8986db4922 s390x/tcg: initialize machine check queue
Just as for external interrupts and I/O interrupts, we need to
initialize mchk_index during cpu reset.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
Collin L. Walling
7edd4a4967 s390/kvm: Support for get/set of extended TOD-Clock for guest
Provides an interface for getting and setting the guest's extended
TOD-Clock via a single ioctl to kvm. If the ioctl fails because it
is not supported by kvm, then we fall back to the old style of
retrieving the clock via two ioctls.

Signed-off-by: Collin L. Walling <walling@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split failure change from epoch index change]
Message-Id: <20171004105751.24655-2-borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
[some cosmetic fixes]
2017-10-06 10:53:02 +02:00
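
The fallback logic, sketched with hypothetical wrappers (the helper names below
are illustrative only, not the actual QEMU or KVM interfaces):

    #include <errno.h>
    #include <stdint.h>

    /* Prefer the single extended-TOD ioctl; fall back to the legacy pair of
     * ioctls when the kernel does not know about the new interface. */
    static int get_guest_tod(uint8_t *epoch_idx, uint64_t *tod)
    {
        int r = kvm_get_tod_ext(epoch_idx, tod);   /* hypothetical wrapper */
        if (r == -ENXIO) {                         /* not supported by kvm */
            *epoch_idx = 0;
            r = kvm_get_tod_legacy(tod);           /* hypothetical two-ioctl path */
        }
        return r;
    }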
David Hildenbrand
86b5ab3909 s390x/tcg: make STFL store into the lowcore
Using virtual memory access is wrong and will soon include low-address
protection checks, which are to be bypassed for STFL.

STFL is a privileged instruction and using LowCore requires
!CONFIG_USER_ONLY, so add the ifdef and move the declaration to the
right place.

This was originally part of a bigger STFL(E) refactoring.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170927170027.8539-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
f42dc44a14 s390x: introduce and use S390_MAX_CPUS
Will be handy in the future.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
1e70ba24a9 target/s390x: get rid of next_core_id
core_id is not needed by linux-user, as the core_id a.k.a. CPU address
is only accessible from kernel space.

Therefore, drop next_core_id and make cpu_index get autoassigned again
for linux-user.

While at it, shield core_id and cpuid completely from linux-user. cpuid
can also only be queried from kernel space.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
c547a757f4 s390x/cpumodel: fix max STFL(E) bit number
Not that it would matter in the near future, but it is actually 2048
bytes, therefore 16384 possible bits.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
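
In other words (macro names here are illustrative, not the actual cpumodel
constants):

    /* The facility list is 2048 bytes long, so it can hold
     * 2048 * 8 = 16384 facility bits (numbered 0..16383). */
    #define FAC_LIST_BYTES 2048
    #define FAC_LIST_BITS  (FAC_LIST_BYTES * 8)   /* 16384 */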
David Hildenbrand
c5b934303c s390x: raise CPU hotplug irq after really hotplugged
Let's move it into the machine, so we trigger the IRQ after setting
ms->possible_cpus (which SCLP uses to construct the list of
online CPUs).

This also fixes a problem reported by Thomas Huth, whereby qemu can be
crashed using the none machine:

qemu-s390x-softmmu -M none -monitor stdio
-> device_add qemu-s390-cpu

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
8eb82de91b s390x/tcg: make idte/ipte use the new _real mmu
We don't wrap addresses in the mmu for the _real case, therefore the
behavior should be unchanged.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-7-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
e26131c904 s390x/tcg: make testblock use the new _real mmu
Low address protection checks will be moved into the mmu later.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
4ae433417e s390x/tcg: make stora(g) use the new _real mmu
As we properly handle the return address now, we can drop
potential_page_fault().

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
34499dadc1 s390x/tcg: make lura(g) use the new _real mmu.
It looks like lurag was loading only 32 bits instead of 64 bits.

As we properly handle the return address now, we can drop
potential_page_fault().

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
fb66944df9 s390x/tcg: add MMU for real addresses
This makes it easy to access real addresses (prefix) and in addition
checks for valid memory addresses, a check that is missing when using
e.g. stl_phys().

We can later reuse it to implement low address protection checks (then
we might even decide to introduce yet another MMU for absolute
addresses, just for handling storage keys and low address protection).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-3-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
David Hildenbrand
0bd695a960 s390x/tcg: fix checking for invalid memory check
It should have been a >=, but let's directly perform a proper access
check to also be able to deal with hotplugged memory later.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:02 +02:00
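
A minimal illustration of the off-by-one (plain C, not the actual s390x helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* The first out-of-range address is ram_size itself, so the test has to
     * be '>='; with '>' an access at exactly ram_size slips through. */
    static bool address_out_of_range(uint64_t addr, uint64_t ram_size)
    {
        return addr >= ram_size;
    }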
David Hildenbrand
947a38bd6f s390x/kvm: fix and cleanup storing CPU status
env->psa is a 64-bit value, but we copy only 4 bytes of it into the save
area, so 0 always ends up being stored.

Let's try to reduce such errors by using a proper structure. While at
it, use the correct cpu-to-big-endian conversion (and get_psw_mask()), as we will be
reusing this code for TCG soon.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170922140338.6068-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:01 +02:00
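
A standalone illustration of the bug class (not QEMU code): copying 4 raw bytes
of a 64-bit field is not the same as storing it as a 32-bit big-endian word.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void store_be32(uint8_t *dst, uint32_t v)
    {
        dst[0] = v >> 24; dst[1] = v >> 16; dst[2] = v >> 8; dst[3] = v;
    }

    int main(void)
    {
        uint64_t psa = 0x0001e000;        /* prefix value, fits in 32 bits */
        uint8_t buggy[4], fixed[4];

        memcpy(buggy, &psa, 4);           /* raw copy: wrong on BE and LE hosts */
        store_be32(fixed, (uint32_t)psa); /* what the big-endian save area expects */

        printf("buggy %02x%02x%02x%02x, fixed %02x%02x%02x%02x\n",
               buggy[0], buggy[1], buggy[2], buggy[3],
               fixed[0], fixed[1], fixed[2], fixed[3]);
        return 0;
    }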
Igor Mammedov
b6805e127c s390x: use generic cpu_model parsing
Define default CPU type in generic way in machine class_init
and let common machine code handle cpu_model parsing.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <1505998749-269631-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:01 +02:00
David Hildenbrand
7705c75048 s390x/tcg: add basic MSA features
The STFLE bits for the MSA (extension) facilities simply indicate that
the respective instructions can be executed. The QUERY subfunction can then
be used to identify which features exactly are available.

Availability of subfunctions can also vary on real hardware. For now, we
simply implement a CPU model without any available subfunctions except
QUERY (which is always around).

As all MSA functions behave quite similarly, we can use one translation
handler for now. Prepare the code for implementation of actual subfunctions.

At least MSA is helpful for now, as older Linux kernels require this
facility when compiled for a z9 model. Allow the facilities to be enabled
for the qemu cpu model.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:01 +02:00
David Hildenbrand
7634d658e6 s390x/tcg: move wrap_address() to internal.h
We want to use it in another file.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:01 +02:00
David Hildenbrand
6b257354c4 s390x/tcg: implement spm (SET PROGRAM MASK)
This instruction was missing and is used inside Linux in the context of CPACF.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 10:53:01 +02:00
Gerd Hoffmann
e2f82e924d console: purge curses bits from console.h
Handle the translation from vga chars to curses chars in curses_update()
instead of console_write_ch().  Purge any curses support bits from
ui/console.h include file.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170927103811.19249-1-kraxel@redhat.com
2017-09-29 10:36:33 +02:00
Peter Maydell
ab16152926 Migration pull 2017-09-27
-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJZy64HAAoJEAUWMx68W/3nTqwP/A5Gx4Qwkv5KKdpM0YLq//d+
 OODmzl7Ni3a5Up1ETqGdLb84estrgY+5DISp73Rkt4a5tbT7+XKrhb4qD+93NnTe
 zynY9in4C1jGxYm7YzeOhwSeIiuLZMTCLQlGdYw7/nunIFwkItUEvAFx3AG1WCJe
 2Mk0lvmg4LikruDDMdzqZaJu7h5RU5sQjA7SsyrTBdsN7tNWl3rKLYGXwgzv0uz5
 n2xkUgzvvnj1Bk/Adojkn05yxA86xKD/4rhFED9fjNVSjAGHMrHIWOJ70V26Cg5w
 3gJ+5mesWsH+erf0JFYv0S38SyFbmIOE39Nn13D/d0o1x89P8B8cgqbi3ADTKM77
 875wuIVnZzi2vIwVdxXQ9GHQ79cpXwr2fOfQ2rjT6Ll95K+u/MQG86fQiO0eJW+0
 KwQVCwwh+HmCUcCogMuxAc9+F8C8qolwCi/9QXwS2yLBElHKaWDIMyTce36cW9d7
 cZaKIOeSJUGNFoaWZnXN88MRuOYbdywTl+GddVAW3+VJCTYV2oi0o5fsTfxXy5AV
 y7uYo/pcSj2gSZJ5GairMlB6p5iXnE8yusi1e4ZKA1x1TaSHSb6zR59lRUFr+j/L
 JhUCfA85v5/elGqgkYp6UhSzFDJ2ID2oSEMQTIzfVrinOXtnf2KEh33YMbUH5qyo
 yHVEu12uPe9rE6A0vWlu
 =/+LV
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20170927a' into staging

Migration pull 2017-09-27

# gpg: Signature made Wed 27 Sep 2017 14:56:23 BST
# gpg:                using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A  9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20170927a:
  migration: Route more error paths
  migration: Route errors up through vmstate_save
  migration: wire vmstate_save_state errors up to vmstate_subsection_save
  migration: Check field save returns
  migration: check pre_save return in vmstate_save_state
  migration: pre_save return int
  migration: disable auto-converge during bulk block migration

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-27 22:44:51 +01:00
Peter Maydell
1d89344081 ppc patch queue 2017-09-27
Contains
  * a number of Mac machine type fixes
  * a number of embedded machine type fixes (preliminary to adding the
    Sam460ex board)
  * an important fix for handling of migration with KVM PR
  * assorted other minor fixes and cleanups
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAlnLVgAACgkQbDjKyiDZ
 s5IKHw/+OO+VZOfQ7/OUKED1VwgYG51Z9roX1wkfx5lOmJRT38KZ0XpHKBw1gdV0
 3UkqH4jqYBSWqjlIUS4vFreG6NbRrxAD1nmctGI1xDBL3Cjw8Xt28vwJ7MgOzaS9
 VW+ga+1exH5x3svBaYPwoePEXI9YoFQkSh27+xAi8n+csB6D0AMquvY8iJsAf+kS
 IQY5m8QH5dlu1hJSEssQrfvY3s/zlxm41Mamm3n0i5rpl+a+vyu8jmZCBrKbhqjL
 XfiYCKssdknppW8FMmGbh7ycwf2yQuFG273KUQRXdJgiAdGp9gRObufbdblp4Q9u
 ZJCQn/MPBMVSAV5YIrMKwmgzlev26d5uFpFAyOlX+B4qZv5XRNnD3762Lv2LtSFU
 YYYMJNA3pnmrA6cQ9Q0fSXilZj+AblPdaGhc+HZeRL84K1A4RLx6wRi/axMsxoBb
 MBpF9Dm5wMLudybwLFIOlNdktQPcYCYzBsPb1V7lYITRL9PyoyMmIA4mRSGJ494T
 H1/dbUzqGBvKAQyx2Dk73V9aoBk03v1BUuCGKgF0e5THp6yWJytbxhzOq7MX6JkG
 Diwyzf7D7v2pvfZ5chdn79M01zsHFT6bnR+PBpGSmGavrT4E3qf/pRdhBcsBOc3n
 w/triA7Ae4sq2DHQqHXwQd+om70OBRxqQUkaa30dKA6+kj5fj5c=
 =NL90
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.11-20170927' into staging

ppc patch queue 2017-09-27

Contains
 * a number of Mac machine type fixes
 * a number of embedded machine type fixes (preliminary to adding the
   Sam460ex board)
 * an important fix for handling of migration with KVM PR
 * assorted other minor fixes and cleanups

# gpg: Signature made Wed 27 Sep 2017 08:40:48 BST
# gpg:                using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.11-20170927: (26 commits)
  macio: use object link between MACIO_IDE and MAC_DBDMA object
  macio: pass channel into MACIOIDEState via qdev property
  mac_dbdma: remove DBDMA_init() function
  mac_dbdma: QOMify
  mac_dbdma: remove unused IO fields from DBDMAState
  spapr: fix the value of SDR1 in kvmppc_put_books_sregs()
  ppc/pnv: check for OPAL firmware file presence
  ppc: remove all unused CPU definitions
  ppc: remove unused CPU definitions
  spapr_pci: make index property mandatory
  macio: convert pmac_ide_ops from old_mmio
  ppc/pnv: Improve macro parenthesization
  spapr: introduce helpers to migrate HPT chunks and the end marker
  ppc/kvm: generalize the use of kvmppc_get_htab_fd()
  ppc/kvm: change kvmppc_get_htab_fd() to return -errno on error
  ppc: Fix OpenPIC model
  ppc/ide/macio: Add missing registers
  ppc/mac: More rework of the DBDMA emulation
  ppc/mac: Advertise a high clock frequency for NewWorld Macs
  ppc: QOMify g3beige machine
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-27 18:20:31 +01:00
Dr. David Alan Gilbert
44b1ff319c migration: pre_save return int
Modify the pre_save method on VMStateDescription to return an int
rather than void so that it potentially can fail.

Changed zillions of devices to make them return 0; the only
case where I've made it return non-0 is hw/intc/s390_flic_kvm.c, which
already had an error_report/return case.

Note: If you add an error exit in your pre_save you must emit
an error_report to say why.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170925112917.21340-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-09-27 11:35:59 +01:00
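
A hedged sketch of the new convention for a hypothetical device (MyDevState and
its fields are made up; the VMState macros and error_report() are the usual
QEMU ones):

    static int mydev_pre_save(void *opaque)
    {
        MyDevState *s = opaque;

        if (s->request_in_flight) {
            /* As the note above says: an error exit must explain itself. */
            error_report("mydev: cannot save state with a request in flight");
            return -EBUSY;
        }
        return 0;   /* the overwhelmingly common case */
    }

    static const VMStateDescription vmstate_mydev = {
        .name = "mydev",
        .version_id = 1,
        .minimum_version_id = 1,
        .pre_save = mydev_pre_save,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(counter, MyDevState),
            VMSTATE_END_OF_LIST()
        },
    };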
Christian Borntraeger
9dacc90846 s390x/cpumodel: remove ais from z14 default model-> also for 2.10.1
We disabled ais for 2.10, so let's also remove it from the z14
default model.

Fixes: 3f2d07b3b0 ("s390x/ais: for 2.10 stable: disable ais facility")
CC: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20170927072030.35737-2-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-27 11:13:32 +02:00
Greg Kurz
1ec26c757d spapr: fix the value of SDR1 in kvmppc_put_books_sregs()
When running with KVM PR, if a new HPT is allocated we need to inform
KVM about the HPT address and size. This is currently done by hacking
the value of SDR1 and pushing it to KVM in several places.

Also, migration breaks the guest since it is very unlikely the HPT has
the same address in source and destination, but we push the incoming
value of SDR1 to KVM anyway.

This patch introduces a new virtual hypervisor hook so that the spapr
code can provide the correct value of SDR1 to be pushed to KVM each
time kvmppc_put_books_sregs() is called.

This allows us to get rid of all the hacking in the spapr/kvmppc code
and fixes migration of nested KVM PR.

Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
John Snow
5aec066c41 ppc: remove all unused CPU definitions
Remove *all* unused CPU definitions as indicated by compile-time
`#if 0` constructs.

Signed-off-by: John Snow <jsnow@redhat.com>
[dwg: Removed some additional now-useless comments]
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
John Snow
53a04e8e79 ppc: remove unused CPU definitions
Following commit aef77960, remove now-unused definitions from
cpu-models.h.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
Greg Kurz
14b0d74887 ppc/kvm: generalize the use of kvmppc_get_htab_fd()
The use of KVM_PPC_GET_HTAB_FD is open-coded in kvmppc_read_hptes()
and kvmppc_write_hpte().

This patch modifies kvmppc_get_htab_fd() so that it can be used
everywhere we need to access the in-kernel htab:
- add an index argument
  => only kvmppc_read_hptes() passes an actual index, all other users
     pass 0
- add an errp argument to propagate error messages to the caller.
  => spapr migration code prints the error
  => hpte helpers pass &error_abort to keep the current behavior
     of hw_error()

While here, this also fixes a bug in kvmppc_write_hpte() so that it
opens the htab fd for writing instead of reading as it currently does.
This never broke anything because we currently never call this code,
as explained in the changelog of commit c138593380:

"This support updating htab managed by the hypervisor. Currently
 we don't have any user for this feature. This actually bring the
 store_hpte interface in-line with the load_hpte one. We may want
 to use this when we want to emulate henter hcall in qemu for HV
 kvm."

The above is still true today.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
Greg Kurz
82be8e7394 ppc/kvm: change kvmppc_get_htab_fd() to return -errno on error
When kvmppc_get_htab_fd() fails, its return value is propagated up to
qemu_savevm_state_iterate() or to qemu_savevm_state_complete_precopy().
All savevm handlers expect to receive a negative errno on error.

Let's patch kvmppc_get_htab_fd() accordingly.

While here, let's change htab_load() in the spapr code to also
propagate the error, since it doesn't make sense to abort() if
we couldn't get the htab fd from KVM.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
BALATON Zoltan
81bb29ace5 ppc: Add 460EX embedded CPU
Despite its name, it is a CPU based on the 440 core.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
Greg Kurz
712b25c4cb ppc/kvm: drop kvmppc_has_cap_htab_fd()
It never got used since its introduction (commit 7c43bca004).

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
Greg Kurz
6977afda16 ppc/kvm: check some capabilities with kvm_vm_check_extension()
The following capabilities are VM specific:
- KVM_CAP_PPC_SMT_POSSIBLE
- KVM_CAP_PPC_HTAB_FD
- KVM_CAP_PPC_ALLOC_HTAB

If both KVM HV and KVM PR are present, checking them always returns
the HV value, even if we explicitly requested to use PR.

This has no visible effect for KVM_CAP_PPC_ALLOC_HTAB, because we also
try the KVM_PPC_ALLOCATE_HTAB ioctl, which is only supported by HV. As
a consequence, the spapr code doesn't even check KVM_CAP_PPC_HTAB_FD.

However, this will cause kvmppc_hint_smt_possible(), introduced by
commit fa98fbfcdf, to report several VSMT modes (e.g., Available
VSMT modes: 8 4 2 1) whereas PR only supports mode 1.

This patch fixes all three anyway to use kvm_vm_check_extension(). It
is okay since the VM is already created at the time kvm_arch_init() or
kvmppc_reset_htab() is called.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27 13:05:41 +10:00
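
Sketch of the change at a typical call site, as I understand the helpers (the
variable names are illustrative):

    /* In kvm_arch_init(): query VM-scoped capabilities on the VM fd, so the
     * answer reflects the flavour (HV or PR) actually backing this VM rather
     * than whatever the system-wide check happens to report. */
    cap_htab_fd = kvm_vm_check_extension(s, KVM_CAP_PPC_HTAB_FD);
    cap_ppc_smt_possible = kvm_vm_check_extension(s, KVM_CAP_PPC_SMT_POSSIBLE);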
Alistair Francis
2c5b1d2a47 target/xtensa: Use the pre-defined MEMTXATTRS_UNSPECIFIED macro
Instead of using the hardcoded (MemTxAttrs){0} for no memory attributes
let's use the already defined MEMTXATTRS_UNSPECIFIED macro instead.

This is technically a change of behaviour as MEMTXATTRS_UNSPECIFIED sets
the unspecified field to 1, but it doesn't look like anything is
checking this field.

Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-09-26 09:11:22 +03:00
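
For reference, the shape of the change at an illustrative call site (not
necessarily the xtensa one), with the macro expansion as I recall it from
include/exec/memattrs.h:

    - address_space_rw(as, addr, (MemTxAttrs){0}, buf, len, is_write);
    + address_space_rw(as, addr, MEMTXATTRS_UNSPECIFIED, buf, len, is_write);

    /* The macro expands to roughly:
     *   ((MemTxAttrs) { .unspecified = 1 })
     * hence the (currently unobserved) change of behaviour noted above. */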
Peter Maydell
460b6c8e58 * Speed up AddressSpaceDispatch creation (Alexey)
* Fix kvm.c assert (David)
 * Memory fixes and further speedup (me)
 * Persistent reservation manager infrastructure (me)
 * virtio-serial: add enable_backend callback (Pavel)
 * chardev GMainContext fixes (Peter)
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAlnFX3UUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroMq2wf/Z7i67tTQYhaY7trAehdGDLSa6C4m
 0xAex+DVJrpfxFHLINkktx9NpvyZbQ/PuA0+5W10qmfPVF3hddTgLL3Dcg5xkQOh
 qNa2pFPMTn2T4eEdAANycNEF3nz8at5EnZ5anW2uMS41iDMq6aBjPhDgvi/iyG4w
 GBeZFjUUXQ8Wtp5fZJ1RgV/2PFg3W1REodvM143Ge84UUmnltf/snmx3NMQWw5wu
 coZFSIpcachMRxZ+bbLtJnCoRWG+8lkmTXYkswRWGez+WniscR0898RRpD0lJgIA
 cgeX5Cg/EbBIpwcqjsW2018WlsH5qp4rb6wVuqTY2kzbG+FUyKSqxSwGZw==
 =9GLQ
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Speed up AddressSpaceDispatch creation (Alexey)
* Fix kvm.c assert (David)
* Memory fixes and further speedup (me)
* Persistent reservation manager infrastructure (me)
* virtio-serial: add enable_backend callback (Pavel)
* chardev GMainContext fixes (Peter)

# gpg: Signature made Fri 22 Sep 2017 20:07:33 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (32 commits)
  chardev: remove context in chr_update_read_handler
  chardev: use per-dev context for io_add_watch_poll
  chardev: add Chardev.gcontext field
  chardev: new qemu_chr_be_update_read_handlers()
  scsi: add persistent reservation manager using qemu-pr-helper
  scsi: add multipath support to qemu-pr-helper
  scsi: build qemu-pr-helper
  scsi, file-posix: add support for persistent reservation management
  memory: Share special empty FlatView
  memory: seek FlatView sharing candidates among children subregions
  memory: trace FlatView creation and destruction
  memory: Create FlatView directly
  memory: Get rid of address_space_init_shareable
  memory: Rework "info mtree" to print flat views and dispatch trees
  memory: Do not allocate FlatView in address_space_init
  memory: Share FlatView's and dispatch trees between address spaces
  memory: Move address_space_update_ioeventfds
  memory: Alloc dispatch tree where topology is generared
  memory: Store physical root MR in FlatView
  memory: Rename mem_begin/mem_commit/mem_add helpers
  ...

# Conflicts:
#	configure
2017-09-23 12:55:40 +01:00
Christian Borntraeger
3f2d07b3b0 s390x/ais: for 2.10 stable: disable ais facility
The migration interface for ais was introduced with kernel 4.13
but the capability itself had been active since 4.12. As migration
support is considered necessary, let's disable ais in the 2.10
stable version. A proper fix and re-enablement will be done
for qemu 2.11.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20170921140834.14233-2-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-22 09:25:21 +02:00
Alexey Kardashevskiy
b516572f31 memory: Get rid of address_space_init_shareable
Since FlatViews are now shared and ASes are not, this gets rid of
address_space_init_shareable().

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-17-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-22 01:06:51 +02:00
Peter Maydell
4ce31af4ae target/arm: Remove out of date ARM ARM section references in A64 decoder
In the A64 decoder, we have a lot of references to section numbers
from version A.a of the v8A ARM ARM (DDI0487). This version of the
document is now long obsolete (we are currently on revision B.a),
and various intervening versions renumbered all the sections.

The most recent B.a version of the document doesn't assign
section numbers at all to the individual instruction classes
in the way that the various A.x versions did. The simplest thing
to do is just to delete all the out of date C.x.x references.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20170915150849.23557-1-peter.maydell@linaro.org
2017-09-21 16:32:25 +01:00
Peter Maydell
5cb18069d7 nvic: Support banked exceptions in acknowledge and complete
Update armv7m_nvic_acknowledge_irq() and armv7m_nvic_complete_irq()
to handle banked exceptions:
 * acknowledge needs to use the correct vector, which may be
   in sec_vectors[]
 * acknowledge needs to return to its caller whether the
   exception should be taken to secure or non-secure state
 * complete needs its caller to tell it whether the exception
   being completed is a secure one or not

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-20-git-send-email-peter.maydell@linaro.org
2017-09-21 16:31:09 +01:00
Peter Maydell
5d4791991d target/arm: Handle banking in negative-execution-priority check in cpu_mmu_index()
Now that we have a banked FAULTMASK register and banked exceptions,
we can implement the correct check in cpu_mmu_index() for whether
the MPU_CTRL.HFNMIENA bit's effect should apply. This bit causes
handlers which have requested a negative execution priority to run
with the MPU disabled. In v8M the test has to check this for the
current security state and so takes account of banking.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-17-git-send-email-peter.maydell@linaro.org
2017-09-21 16:31:09 +01:00
Peter Maydell
2fb50a3340 nvic: Make set_pending and clear_pending take a secure parameter
Make the armv7m_nvic_set_pending() and armv7m_nvic_clear_pending()
functions take a bool indicating whether to pend the secure
or non-secure version of a banked interrupt, and update the
callsites accordingly.

In most callsites we can simply pass the correct security
state in; in a couple of cases we use TODO comments to indicate
that we will return the code in a subsequent commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-10-git-send-email-peter.maydell@linaro.org
2017-09-21 16:31:09 +01:00
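
The updated interface and a representative call site, roughly (prototypes per
the commit description; the call shown is illustrative, not a specific hunk):

    void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
    void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure);

    /* e.g. pending a banked exception for the current security state: */
    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SYSTICK, env->v7m.secure);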
Peter Maydell
3b2e934463 nvic: Implement AIRCR changes for v8M
The Application Interrupt and Reset Control Register has some changes
for v8M:
 * new bits SYSRESETREQS, BFHFNMINS and PRIS: these all have
   real state if the security extension is implemented and otherwise
   are constant
 * the PRIGROUP field is banked between security states
 * non-secure code can be blocked from using the SYSRESET bit
   to reset the system if SYSRESETREQS is set

Implement the new state and the changes to register read and write.
For the moment we ignore the effects of the secure PRIGROUP.
We will implement the effects of PRIS and BFHFNMINS later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-6-git-send-email-peter.maydell@linaro.org
2017-09-21 16:29:27 +01:00
Peter Maydell
50f11062d4 target/arm: Implement MSR/MRS access to NS banked registers
In v8M the MSR and MRS instructions have extra register value
encodings to allow secure code to access the non-secure banked
version of various special registers.

(We don't implement the MSPLIM_NS or PSPLIM_NS aliases, because
we don't currently implement the stack limit registers at all.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-2-git-send-email-peter.maydell@linaro.org
2017-09-21 16:28:23 +01:00
Eric Blake
2a2be359c4 mips: Improve macro parenthesization
Although none of the existing macro call-sites were broken,
it's always better to write macros that properly parenthesize
arguments that can be complex expressions, so that the intended
order of operations is not broken.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:25:41 +01:00
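
A generic illustration of why the parenthesization matters (not the actual
mips macros):

    /* Without parentheses, a compound argument re-associates: */
    #define ASID_BAD(x)   (x & 0xff)
    #define ASID_GOOD(x)  ((x) & 0xff)

    /* ASID_BAD(hi | lo)  expands to (hi | lo & 0xff)   == hi | (lo & 0xff)  */
    /* ASID_GOOD(hi | lo) expands to ((hi | lo) & 0xff) -- the intended mask */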
Igor Mammedov
c4c8146cfd mips: replace cpu_mips_init() with cpu_generic_init()
cpu_mips_init() now reimplements a subset of cpu_generic_init()'s
tasks, so just drop it and use cpu_generic_init() directly.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: use internal.h instead of cpu.h]
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:25:37 +01:00
Igor Mammedov
41da212c9c mips: MIPSCPU model subclasses
Register separate QOM types for each mips cpu model,
so it would be possible to reuse generic CPU creation
routines.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: use internal.h, use void* to hold cpu_def in MIPSCPUClass,
 mark MIPSCPU abstract, address Eduardo Habkost review]
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:25:30 +01:00
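
Roughly the registration pattern this enables (an illustrative sketch; the
function and struct names are approximations, not the exact target/mips code):

    /* One QOM type per CPU model, all inheriting from the abstract MIPSCPU
     * type, so generic helpers can instantiate a model purely by type name. */
    static void mips_register_cpudef_type(const mips_def_t *def)
    {
        char *typename = g_strdup_printf("%s-" TYPE_MIPS_CPU, def->name);
        TypeInfo ti = {
            .name = typename,
            .parent = TYPE_MIPS_CPU,
            .class_init = mips_cpu_cpudef_class_init,  /* stashes 'def' in the class */
            .class_data = (void *)def,
        };

        type_register(&ti);
        g_free(typename);
    }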
Philippe Mathieu-Daudé
df4dc10284 mips: call cpu_mips_realize_env() from mips_cpu_realizefn()
This changes the order between cpu_mips_realize_env() and
cpu_exec_initfn(), but cpu_exec_initfn() doesn't have anything that
depends on cpu_mips_realize_env() being called first.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:25:27 +01:00
Philippe Mathieu-Daudé
27e38392ca mips: split cpu_mips_realize_env() out of cpu_mips_init()
So that it can be used in mips_cpu_realizefn() in the next commit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:24:34 +01:00
Philippe Mathieu-Daudé
26aa3d9aec mips: introduce internal.h and cleanup cpu.h
No logical change, only code movement (and a comment typo fix).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:24:34 +01:00
Philippe Mathieu-Daudé
5502b66fc7 mips: move hw/mips/cputimer.c to target/mips/
This timer is a required part of the MIPS32/MIPS64 System Control coprocessor
(CP0). Moving it with the other architecture related files will allow an opaque
use of CPUMIPSState* in the next commit (introduce "internal.h").

Also remove it from 'user' targets and drop an unnecessary include.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
2017-09-21 13:24:34 +01:00
Peter Maydell
d3f5433c7b Machine/CPU/NUMA queue, 2017-09-19
-----BEGIN PGP SIGNATURE-----
 
 iQIcBAABCAAGBQJZwXs9AAoJECgHk2+YTcWmS18P/1OceEmetatwBQ6YWURA0VfU
 CB+nCHxijlWXU8UiwoTVDOXc11P00V5VO0mMFouUbv+o+/80qyAfNl2DhDVq7ZXo
 nhIFHKXUR1BX3YbcqfIonH8xCMGtrAexghhhaGPQbx450PqmGyyjBsZsXttNge1S
 Yt+jzgD9drsZPYw4ZLJjT/tnWI/+8kDPn37jVujzRzApTD9/fq+77ZZq7q25RzQH
 ISa5OXOQk6pq+tPHvaIFXfOqfhILcpM7u/X7MYVwiA1oBqcLXTDCjDX3cKavKade
 +i3yrH7ahNgUO1DyT2bMZX6NAFoZDBrwlYgpkw8n+Yf+EUcdPDOHHEcEeaMpdTGx
 wgWbQrs+xzIg/ulRb8Qqe9FwdXGQbehfFGofd+gnGQ0XuxekT82in+ucMOivQO3x
 W/azGnzoz6D83stJFIZ93S69SRswqBuj2R8mu821yzqx1EUSNXgfKXz9OPwwFed5
 El9YN127F/VAyp0av1CsOg1XgqIujMUGRxGf7eQBfkh1R3C/g2XNPTvR3yaY9L5B
 zuMJfWLF6r6zL53ymt7/9UVEim295Lia3mNGS5/Min5QGms7edphsDsrubzXsZGq
 2owWfAU/KeDH9gNVNNkZdLcEcS5TEz+2oGPR5oeDeB/QlzVdNQ3FeTVlzFavxNQa
 8nrzeFcw7VNrIx2gdvsY
 =LQ8U
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/ehabkost/tags/machine-next-pull-request' into staging

Machine/CPU/NUMA queue, 2017-09-19

# gpg: Signature made Tue 19 Sep 2017 21:17:01 BST
# gpg:                using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/machine-next-pull-request:
  MAINTAINERS: Update git URLs for my trees
  hw/acpi-build: Fix SRAT memory building in case of node 0 without RAM
  NUMA: Replace MAX_NODES with nb_numa_nodes in for loop
  numa: cpu: calculate/set default node-ids after all -numa CLI options are parsed
  arm: drop intermediate cpu_model -> cpu type parsing and use cpu type directly
  pc: use generic cpu_model parsing
  vl.c: convert cpu_model to cpu type and set of global properties before machine_init()
  cpu: make cpu_generic_init() abort QEMU on error
  qom: cpus: split cpu_generic_init() on feature parsing and cpu creation parts
  hostmem-file: Add "discard-data" option
  osdep: Define QEMU_MADV_REMOVE
  vl: Clean up user-creatable objects when exiting

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-20 17:35:36 +01:00
Cornelia Huck
8ad9087c4a s390x/ccw: create s390 phb for compat reasons as well
d32bd032d8 ("s390x/ccw: create s390 phb conditionally") made
registering the s390 pci host bridge conditional on the presence
of the zpci facility bit. Sadly, that breaks migration from
machines that did not use the cpu model (2.7 and previous).

Create the s390 phb for pre-cpu model machines as well: We can
tweak s390_has_feat() to always indicate the zpci facility bit
when no cpu model is available (on 2.7 and previous compat machines).

Fixes: d32bd032d8 ("s390x/ccw: create s390 phb conditionally")
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
a1422723f7 s390x: allow CPU hotplug in random core-id order
SCLP correctly indicates the core-id (a.k.a. CPU address) for each available
CPU.

As the core-id corresponds to cpu_index, a newly created kvm vcpu also
gets assigned this core-id as its vcpu id, so SIGP in the kernel works
correctly (it uses the vcpu id to look up the correct CPU).

So there should be nothing hindering us from hotplugging CPUs in random
core-id order.

This now makes sure that the output from "query-hotpluggable-cpus"
is completely true. Until now, a specific order was implicit. Performance-wise,
hotplugging CPUs in non-sequential order might not be the best thing
to do, as VCPU lookup inside KVM might be a little slower. But that
doesn't hinder us from supporting it.

next_core_id is now used by linux user only.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-23-david@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
524d18d8bd s390x: get rid of cpu_s390x_create()
Now that there is only one user of cpu_s390x_create() left, make cpu
creation look like on x86.
- Perform the model/properties split and checks in s390_init_cpus()
- Parse features only once without having to remember if already parsed
- Pass only the typename to s390x_new_cpu()
- Use the typename of an existing CPU for hotplug via cpu-add

Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-21-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
0347ab8469 s390x: allow cpu hotplug via device_add
E.g. the following now works:
    device_add host-s390-cpu,id=cpu1,core-id=1

The system will perform the same checks as when using cpu_add:
- If the core_id is already in use
- If the next sequential core_id isn't used
- If core-id >= max_cpu is specified

In addition, mixed CPU models are checked. E.g. if starting with
-cpu host and trying to hotplug "qemu-s390-cpu":
    "Mixed CPU models are not supported on s390x."

Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
99aa6bf29b s390x: print CPU definitions in sorted order
Other architectures provide nicely sorted lists; let's do it similarly on
s390x.

While at it, clean up the code we have to touch either way.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
b599fef28e target/s390x: rename next_cpu_id to next_core_id
Adapt to the new term "core_id". While at it, fix the type and drop the
initialization to 0 (which is superfluous).

Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
ca5c1457d6 target/s390x: use "core-id" for cpu number/address/id handling
Some time ago we discussed that using "id" as property name is not the
right thing to do, as it is a reserved property for other devices and
will not work with device_add.

Switch to the term "core-id" instead, and use it as an equivalent to
"CPU address" mentioned in the PoP. There is no such thing as cpu number,
so rename env.cpu_num to env.core_id. We use "core-id" as this is the
common term to use for device_add later on (x86 and ppc).

We can get rid of cpu->id now. Keep cpu_index and env->core_id in sync.
cpu_index was already implicitly used by e.g. cpu_exists(), so keeping
both in sync seems to be the right thing to do.

cpu_index will now no longer automatically get set via
cpu_exec_realizefn(). Up to now, we were lucky that both implicitly stayed
in sync.

Our new cpu property "core-id" can be a static property. Range checks can
be avoided by using the correct type and the "setting after realized"
check is done implicitly.

device_add will later need the reserved "id" property. Hotplugging a CPU
on s390x will then be: "device_add host-s390-cpu,id=cpu2,core-id=2".

Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
88556edd74 target/s390x: set cpu->id for linux user when realizing
scc->next_cpu_id is updated when realizing. Setting it just before that
point looks cleaner.

Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:32 +02:00
David Hildenbrand
e0b1a8a14e target/s390x: use program_interrupt() in per_check_exception()
Clean it up by reusing program_interrupt(). Add a concern regarding
ilen.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:31 +02:00
David Hildenbrand
525f4b65c7 target/s390x: use trigger_pgm_exception() in s390_cpu_handle_mmu_fault()
This looks cleaner. linux-user will not use the ilen field, so setting
it doesn't do any harm.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:31 +02:00
David Hildenbrand
53d8e91d64 s390x: move sclp_service_call() to sclp.h
Implemented in sclp.c, so let's move it to the right include file.
Also adjust some includes.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-9-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-19 18:31:31 +02:00