Remove unneeded usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers and retrieving the env_ptr only where it is actually needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
    sed -i \
        's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
        $I;
done
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This patch adds support for the PER Breaking-Event-Address register. Like
real hardware, it saves the current PSW address when the PSW address is
changed by an instruction. We have to take care of optimizations QEMU
does: a branch to the next instruction is still a branch.
This register is copied to low core memory when a program exception
happens.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
For the PER storage-alteration event we can use the QEMU watchpoint
infrastructure. When PER is enabled or a PER control register is changed,
we enable the corresponding watchpoints. When a watchpoint fires, we
record the event. Unfortunately the current code does not provide the
address space used to trigger the watchpoint. For now we assume it comes
from the default ASC.
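A minimal sketch of the idea using QEMU's generic watchpoint API; the helper
name and the PER control-register bit name are illustrative, not necessarily
those of the patch:

    /* Recompute the PER storage-alteration watchpoint from CR9-CR11.
     * Error handling and a wrapping address range are not shown. */
    static void recompute_per_watchpoints(S390CPU *cpu)
    {
        CPUS390XState *env = &cpu->env;
        CPUState *cs = CPU(cpu);

        cpu_watchpoint_remove_all(cs, BP_CPU);
        if ((env->psw.mask & PSW_MASK_PER) &&
            (env->cregs[9] & PER_CR9_EVENT_STORE)) {   /* assumed bit name */
            cpu_watchpoint_insert(cs, env->cregs[10],
                                  env->cregs[11] - env->cregs[10] + 1,
                                  BP_MEM_WRITE | BP_CPU, NULL);
        }
    }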
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds basic support to generate PER exceptions. It adds two
fields to the cpu structure to record the PER address and the PER
code & ATMID values. When an exception is triggered and a PER event is
pending, the two PER values are copied to the lowcore area.
At the end of an instruction, a helper checks for a possible pending
PER event and triggers an exception in that case. For that to work
with branches, we need to disable TB chaining when PER is activated.
Fortunately it is already in the TB flags.
Finally in case of a SERVICE CALL exception, we need to trigger the PER
exception immediately after.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This function checks whether an address lies between the PER starting
address and the PER ending address, taking care of a possible
address range loop.
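A minimal sketch of such a check (the helper name is illustrative):

    /* Return true if addr lies in the PER range [start, end].  If
     * start > end, the range wraps past the top of the address space. */
    static inline bool per_address_in_range(uint64_t addr,
                                            uint64_t start, uint64_t end)
    {
        if (start <= end) {
            return addr >= start && addr <= end;
        }
        return addr >= start || addr <= end;
    }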
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This function returns the ATMID field that is stored in the
per_perc_atmid lowcore entry.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
In TCG mode we should store the CC value in env->cc_op. However, do it
unconditionally because:
- the tcg_enabled function is not inlined
- it's probably faster to always store the value, especially given it
is likely in the same cache line as env->psw.mask.
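A simplified sketch of what the PSW load path then does (wait-state handling
and other details are omitted):

    void load_psw(CPUS390XState *env, uint64_t mask, uint64_t addr)
    {
        /* The CC sits in PSW mask bits 18-19; store it unconditionally
         * instead of calling the non-inlined tcg_enabled(). */
        env->cc_op = (mask >> 44) & 3;
        env->psw.mask = mask;
        env->psw.addr = addr;
    }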
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
This removes the corresponding error messages in TCG mode and allows
simplifying the s390_assign_subch_ioeventfd() function.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Intercept the diag288 requests from kvm guests, and hand the
requested command to the diag288 watchdog device for further
handling.
Signed-off-by: Xu Wang <gesaint@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Use constants to define the MMU indexes, and add a function to do
the reverse conversion of cpu_mmu_index.
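A sketch of the shape of the change; the constant and helper names follow
common QEMU conventions and are assumptions here:

    #define MMU_PRIMARY_IDX    0
    #define MMU_SECONDARY_IDX  1
    #define MMU_HOME_IDX       2

    /* Reverse conversion: map an MMU index back to a PSW ASC value. */
    static inline uint64_t mmu_idx_to_asc(int mmu_idx)
    {
        switch (mmu_idx) {
        case MMU_PRIMARY_IDX:
            return PSW_ASC_PRIMARY;
        case MMU_SECONDARY_IDX:
            return PSW_ASC_SECONDARY;
        case MMU_HOME_IDX:
            return PSW_ASC_HOME;
        default:
            abort();
        }
    }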
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
The cpu_mmu_index function wrongly looks at the PSW P bit to determine
the MMU index, while this bit actually only controls the use of
privileged instructions. The addressing mode has to be detected by
looking at the PSW ASC bits instead.
This used to work more or less correctly up to kernel 3.6 as the kernel
was running in primary space and userland in secondary space. Since
kernel 3.7 the default is to run the kernel in home space and userland
in primary space. While the current QEMU code seems to work, it opens
some security issues, like accessing the lowcore memory in R/W mode from
a userspace process once it has been accessed by the kernel (it is then
cached by the QEMU TLB).
At the same time change the MMU_USER_IDX value so that it matches the
value used in recent kernels.
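A sketch of the corrected lookup, deriving the index from the PSW ASC bits
(reusing the index constants from the previous sketch; access-register mode
is not covered here):

    static inline int cpu_mmu_index(CPUS390XState *env)
    {
        switch (env->psw.mask & PSW_MASK_ASC) {
        case PSW_ASC_PRIMARY:
            return MMU_PRIMARY_IDX;
        case PSW_ASC_SECONDARY:
            return MMU_SECONDARY_IDX;
        case PSW_ASC_HOME:
            return MMU_HOME_IDX;
        default:
            /* Access-register mode is left out of this sketch. */
            abort();
        }
    }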
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
Add a tod2time function similar to the time2tod one, instead of open
coding the conversion.
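One TOD-clock unit is 2^-12 microseconds, i.e. 125/512 ns, so the two
conversions boil down to:

    static inline uint64_t time2tod(uint64_t ns)
    {
        return (ns << 9) / 125;
    }

    static inline uint64_t tod2time(uint64_t t)
    {
        return (t * 125) >> 9;
    }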
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
When migrating a guest, be sure to include the vector registers.
The vector registers are defined in a subsection, similar to the
existing subsection for floating point registers. Since the
floating point registers are always present (and thus migrated),
we can skip them when performing the migration of the vector
registers which may or may not be present.
Suggested-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Add handling for the Store Additional Status at Address order
that exists for the Signal Processor (SIGP) instruction.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Handle the actual syncing of the vector registers with kernel space,
via the get/put register IOCTLs.
The vector registers that were introduced with the z13 overlay
the existing floating point registers. FP registers 0-15 are
the high-halves of vector registers 0-15. Thus, remove the
freg fields and replace them with the equivalent vector field
to avoid errors in duplication. Moreover, synchronize either the
vector registers via kvm_sync_regs, or floating point registers
via the GET/SET FPU IOCTLs.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Provide a routine to access the correct floating point register,
to simplify future expansion.
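A minimal sketch of such an accessor, assuming the vector registers are kept
as pairs of 64-bit halves and that FP register n overlays the first half of
vector register n (field names are illustrative):

    static inline uint64_t *get_freg(CPUS390XState *env, int nr)
    {
        /* FP register n is the leftmost doubleword of vector register n. */
        return &env->vregs[nr][0];
    }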
Suggested-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch adds support to migrate vcpu interrupts.
We use ioctl KVM_S390_GET_IRQ_STATE and _SET_IRQ_STATE
to get/set the complete interrupt state for a vcpu.
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Access register mode is one of the modes that control dynamic address
translation. In this mode the address space is specified by values of
the access registers. The effective address-space-control element is
obtained from the result of the access register translation. See
the "Access-Register Introduction" section of the chapter 5 "Program
Execution" in "Principles of Operations" for more details.
When the CPU is in AR mode, the s390_cpu_virt_mem_rw() function must
know which access register number to use for address translation.
This patch does several things:
- add new parameter 'uint8_t ar' to that function
- decode ar number from intercepted instructions
- pass the ar number to s390_cpu_virt_mem_rw(), which in turn passes it
to the KVM_S390_MEM_OP ioctl.
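A sketch of the resulting interface (the exact prototype is an assumption;
only the new 'ar' parameter is taken from the patch description):

    /* 'ar' selects the access register used for translation when the
     * CPU is in access-register mode; callers pass the base-register
     * field decoded from the intercepted instruction. */
    int s390_cpu_virt_mem_rw(S390CPU *cpu, vaddr laddr, uint8_t ar,
                             void *hostbuf, int len, bool is_write);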
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Add code to make use of the new ioctl for reading from / writing to
virtual guest memory. By using the ioctl, the memory accesses are now
protected with the so-called ipte-lock in the kernel.
[CH: moved error message into kvm_s390_mem_op()]
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
KVM prefills the SYSIB returned by STSI 3.2.2. This patch allows
userspace to intercept execution and fill in the values that are
known to QEMU: machine name (8 chars), extended machine name (256
chars), extended machine name encoding (equals 2 for UTF-8) and UUID.
The STSI 3.2.2 QEMU handler also finds the highest virtualization level
in the level-3 virtualization stack that doesn't support Extended Names
(the Ext Name delimiter) and propagates a zero Ext Name to all levels
below it, because such a level is not capable of managing the Extended
Names of lower levels.
Signed-off-by: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Synchronizes the guest TOD clock across a migration by sending the guest TOD
clock value to the destination system. If the guest TOD clock is not preserved
across a migration then the guest's view of time will snap backwards if the
destination host clock is behind the source host clock. This will cause the
guest to hang immediately upon resuming on the destination system.
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Message-Id: <1425912968-54387-1-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20150310' into staging
s390x/kvm: Features and fixes for 2.3
- an extension to the elf loader to allow relocations
- make the ccw bios relocatable. This allows for bigger ramdisks
or smaller guests
- Handle all slow SIGPs in QEMU (instead of kernel) for better
compliance and correctness
- tell the KVM module the maximum guest size. This allows KVM
to reduce the number of page table levels
- Several fixes/cleanups
# gpg: Signature made Wed Mar 11 10:17:13 2015 GMT using RSA key ID B5A61C7C
# gpg: Good signature from "Christian Borntraeger (IBM) <borntraeger@de.ibm.com>"
* remotes/borntraeger/tags/s390x-20150310:
s390-ccw: rebuild BIOS
s390/bios: Make the s390-ccw.img relocatable
elf-loader: Provide the possibility to relocate s390 ELF files
s390-ccw.img: Reinitialize guessing on reboot
s390-ccw.img: Allow bigger ramdisk sizes or offsets
s390x/kvm: passing max memory size to accelerator
virtio-ccw: Convert to realize()
virtio-s390: Convert to realize()
virtio-s390: s390_virtio_device_init() can't fail, simplify
s390x/kvm: enable the new SIGP handling in user space
s390x/kvm: deliver SIGP RESTART directly if stopped
s390x: add function to deliver restart irqs
s390x/kvm: SIGP START is only applicable when STOPPED
s390x/kvm: implement handling of new SIGP orders
s390x/kvm: trace all SIGP orders
s390x/kvm: helper to set the SIGP status in SigpInfo
s390x/kvm: pass the SIGP instruction parameter to the SIGP handler
s390x/kvm: more details for SIGP handler with one destination vcpu
s390x: introduce defines for SIGP condition codes
synchronize Linux headers to 4.0-rc3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With "KVM: s390: Allow userspace to limit guest memory size" KVM is able to
do some optimizations based on the guest memory limit.
The guest memory limit is computed from the initial memory definition plus
any hotplugged memory.
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Guenther Hutzl <hutzl@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Message-Id: <1425570981-40609-3-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
This patch adds a helper function to deliver restart irqs. To make it usable
by kvm, the psw load/store methods have to perform the special cc-code
handling only when running with tcg.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <1424783731-43426-9-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
This patch adds handling code for the following SIGP orders:
- SIGP SET ARCHITECTURE
- SIGP SET PREFIX
- SIGP STOP
- SIGP STOP AND STORE STATUS
- SIGP STORE STATUS AT ADDRESS
SIGP STOP (AND STORE STATUS) are the only orders that can stay pending forever
(and may only be interrupted by resets), so special care has to be taken about
them. Their status also has to be tracked within QEMU. This patch takes
care of migrating this status (e.g. if migration happens during a SIGP STOP).
Due to the BQL, only one VCPU is currently able to execute SIGP handlers at a
time. According to the PoP, BUSY should be returned if another SIGP order is
currently being executed on a VCPU. This can only be implemented when the BQL
does not protect all handlers. For now, all SIGP orders on all VCPUs will be
serialized, which will be okay for the first shot.
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <1424783731-43426-7-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
This patch introduces defines for the SIGP condition codes and replaces all
occurrences of numeral condition codes with the new defines.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <1424783731-43426-2-git-send-email-jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The function is now not used anymore, so it can be removed safely.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Change the handler for STCRW to use the new logical memory access
functions. Since STCRW is suppressed on protection/access exceptions,
we also have to make sure to re-queue the CRW in case it could not be
written to memory.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Change the TSCH handler to use the new logical memory access functions.
Since the channel should not be updated in case of a protection or access
exception while writing to the guest memory, the css_do_tsch() has to be
split up into two parts, one for retrieving the IRB and one for the update.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The schib parameter of css_do_msch() can be declared as const to
make it clear that it does not get modified by this function.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
According to the POP specification, the parameter blocks of various
functions like the IO instructions are accessed with logical addresses.
Thus we need a function that can read or write a buffer from/to the
guest's logical address space.
This patch now provides a function that can be used to access virtual
guest memory by using the mmu_translate function of QEMU to convert
the virtual addresses to physical.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Program access exceptions are defined to deliver a translation exception
code in the low-core. Add a function trigger_access_exception() that
generates the proper program interrupt on both KVM and non-KVM systems
and switch the existing code to use it.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Bit 52 in a page table entry always has to be zero, otherwise a translation
specification exception has to be recognized.
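s390 numbers bits starting from the MSB, so bit 52 of a 64-bit entry
corresponds to the mask 1ULL << (63 - 52) = 0x800; a sketch of the check:

    static inline bool pte_bit52_set(uint64_t pt_entry)
    {
        return pt_entry & (1ULL << (63 - 52));   /* 0x800 */
    }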
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
If the "DAT-protection" bit is set in the region table entry and EDAT is
enabled, only read accesses are allowed in the corresponding memory area.
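A sketch of the effect on the resulting page permissions (the helper name and
the direct use of the bit mask are illustrative; the DAT-protection bit is
bit 54 of the entry, i.e. mask 0x200):

    static inline int apply_region_protection(uint64_t region_entry,
                                              bool edat, int flags)
    {
        if (edat && (region_entry & 0x200)) {   /* DAT-protection bit */
            flags &= ~PAGE_WRITE;               /* area becomes read-only */
        }
        return flags;
    }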
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
When a fault occurs during the MMU lookup in s390_cpu_get_phys_page_debug(),
the trigger_page_fault() function writes the translation exception code
into the lowcore - something you would not expect during a memory access
by the debugger. Ease this problem by adding an additional parameter to
mmu_translate() which can be used to specify whether a program check and
the translation exception code should be injected or not.
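A sketch of the extended signature (the parameter list apart from 'exc' is an
assumption):

    int mmu_translate(CPUS390XState *env, target_ulong vaddr, int rw,
                      uint64_t asc, target_ulong *raddr, int *flags,
                      bool exc);

    /* s390_cpu_get_phys_page_debug() can then pass exc == false so that
     * no program check is injected and the lowcore stays untouched. */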
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The ASCEs have a table length field and the region entries have
table length and offset fields which must be checked during
translation to see whether the given virtual address is really
covered by the translation table.
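For the region tables this boils down to comparing the upper bits of the
11-bit table index against the 2-bit table-length field, which counts blocks
of 512 entries; a sketch (the helper name is illustrative):

    static inline bool region_index_in_table(uint64_t index, uint64_t tl)
    {
        /* 'index' is the 11-bit table index taken from the virtual
         * address; 'tl' is the table-length field of the higher-level
         * entry (or of the ASCE). */
        return (index >> 9) <= tl;
    }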
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
helper.c is quite overcrowded already, so let's move the MMU
translation to a separate file instead (like it has been done
with the other targets already).
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The implementation had been incomplete, as we did not store the
machine type. Note that the machine_type member is still unset
during initialization, so this has no effect yet.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
The css functions are only used from ioinst.c and other files that are
only built for CONFIG_SOFTMMU. So we do not need the dummy wrappers for
the CONFIG_USER_ONLY target in the cpu.h header.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@us.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch reuses kvm_s390_reset_vcpu() to get rid of some CONFIG_KVM and
CONFIG_USER_ONLY ifdefs in cpu.c.
In order to get rid of CONFIG_USER_ONLY, kvm_s390_reset_vcpu() has to provide a
dummy implementation - the two definitions are moved to the proper section in
cpu.h.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
CC: Andreas Faerber <afaerber@suse.de>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let QEMU propagate the cpu state to kvm. If kvm doesn't yet support it, it is
silently ignored as kvm will still handle the cpu state itself in that case.
The state is not synced back, thus kvm won't have a chance to actively modify
the cpu state. To do so, control has to be given back to QEMU (which is already
done so in all relevant cases).
Setting of the cpu state can fail either because kvm doesn't support the
interface yet, or because the state is invalid/not supported. Failed attempts
will be traced.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
CC: Andreas Faerber <afaerber@suse.de>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch makes sure that halting a cpu and stopping a cpu are two different
things. Stopping a cpu will also set the cpu halted - this is needed for common
infrastructure to work (note that the stop and stopped flag cannot be used for
our purpose because they are already used by other mechanisms).
A cpu can be halted ("waiting") when it is operating. If interrupts are
disabled, this is called a "disabled wait", as it can't be woken up anymore. A
stopped cpu is treated like a "disabled wait" cpu, but in order to prepare for a
proper cpu state synchronization with the kvm part, we need to track the real
logical state of a cpu.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: Andreas Faerber <afaerber@suse.de>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Until now, when a s390 cpu was stopped or halted, the number of running
CPUs was tracked in a global variable. This was problematic for migration,
so Jason came up with a per-cpu running state.
As it turns out, we want to track the full logical state of a target vcpu,
so we need real s390 cpu states.
This patch is based on an initial patch by Jason Herne, but was heavily
rewritten when adding the cpu states STOPPED and OPERATING. On the way we
move add_del_running to cpu.c (the declaration is already in cpu.h) and
modify the users where appropriate.
Please note that the cpu is still set to be stopped when it is
halted, which is wrong. This will be fixed in the next patch. The LOAD and
CHECK-STOP states will not be used in the first step.
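A sketch of the state values (the names follow the description above; the
exact encoding is an assumption):

    #define CPU_STATE_UNINITIALIZED   0x00
    #define CPU_STATE_STOPPED         0x01
    #define CPU_STATE_CHECK_STOP      0x02
    #define CPU_STATE_OPERATING       0x03
    #define CPU_STATE_LOAD            0x04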
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
[folded Jason's patch into David's patch to avoid add/remove same lines]
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: Andreas Faerber <afaerber@suse.de>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch reduces the core registers to the psw and the general purpose
registers. The fpc and ac registers are handled as coprocessor registers by gdb.
This allows reusing the feature xml files taken from gdb without further
modification and matches what other architectures do.
The target.xml is now generated and provided to the gdb client. Therefore, the
client doesn't have to guess which registers are available at which logical
register number.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Add memory information to read SCP info and add handlers for
Read Storage Element Information, Attach Storage Element,
Assign Storage and Unassign Storage.
Signed-off-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
When determining the memory increment size, use the maxmem size if
it was specified.
Signed-off-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Currently, load_normal_reset() and modified_clear_reset() as triggered
by a guest vcpu will initiate cpu resets on the current vcpu thread for
all cpus. The reset should happen on the individual vcpu thread
instead, so let's use run_on_cpu() for this.
This avoids calls to synchronize_rcu() in the kernel.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>