The only change here is moving the decision about rtas-size
to QEMU.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Linux kernels call "ibm,nmi-interlock" in their system reset handlers
contrary to PAPR. Returning an error because the CPU does not hold the
interlock here causes Linux to print warning messages. PowerVM returns
success in this case, so do the same for now.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-9-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PAPR requires that if "ibm,nmi-register" succeeds, then the hypervisor
delivers all system reset and machine check exceptions to the registered
addresses.
System Resets are delivered with registers set to the architected state,
and with no interlock.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-8-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Provide for an alternate delivery location; -1 defaults to the
architected address.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-7-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There should no longer be a reason to prevent TCG from providing FWNMI.
System Reset interrupts are generated for the guest with the nmi monitor
command and H_SIGNAL_SYS_RESET. Machine Checks cannot be injected
currently, but this could be implemented with the mce monitor command,
similarly to i386.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-6-npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
[dwg: Re-enable FWNMI in qtests, since that now works]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
FWNMI machine check delivery misses a few things that will make it fail
with TCG at least (which we would like to allow in future to improve
testing).
It's not nice to scatter interrupt delivery logic around the tree, so
move it to excp_helper.c and share code where possible.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-5-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The FWNMI option must deliver system reset interrupts to their
registered address, and there are a few constraints on the handler
addresses specified in PAPR. Add the system reset address state and
checks.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-4-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The option is called "FWNMI", and it involves more than just machine
checks; machine checks can also be delivered without the FWNMI option.
Rename various things to reflect that.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-3-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ppc_cpu_do_system_reset delivers a system reset interrupt to the guest,
which is certainly not what is intended here. Panic the guest like other
failure cases here do.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-2-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In the spapr code we've been gradually moving towards a convention that
functions which create pieces of the device tree are called spapr_dt_*().
This patch speeds that along by renaming most of the things that don't yet
match that so that they do.
For now we leave the *_dt_populate() functions which are actual methods
used in the DRCClass::dt_populate method.
While we're there we remove a few comments that don't really say anything
useful.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
This is currently called from spapr_dt_cas_updates(), which is a
hangover from when we created this only as a diff to the DT at CAS time.
Now that we fully rebuild the DT at CAS time, just create it along
with the rest of the properties in /chosen.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently this node with information about hotpluggable memory is created
from spapr_dt_cas_updates(). But that's just a hangover from when we
created it only as a diff to the device tree at CAS time. Now that we
fully rebuild the DT at CAS time, it makes more sense to create this along
with the rest of the memory information in the device tree.
So, move it to spapr_populate_memory(). The patch is huge, but it's nearly
all just code motion.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
At the moment SLOF reserves space for RTAS and instantiates the RTAS blob,
which is a 20-byte binary blob that calls a hypercall. The rest of the RTAS
area is a log which SLOF has no idea about, but QEMU does.
This moves RTAS sizing to QEMU, which overrides the size from SLOF.
The only remaining problem is that SLOF copies the number of bytes it
reserved (2KB for now) so QEMU needs to reserve at least this much;
SLOF will be fixed separately to check that rtas-size from QEMU is
enough for those 20 bytes for the H_RTAS hcall.
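A minimal libfdt sketch of the idea; the property names follow PAPR conventions, but the helper name, node lookup and the 2048-byte size are illustrative, not the exact QEMU code:

    #include <libfdt.h>

    /*
     * Sketch: QEMU, not SLOF, decides how big the RTAS area is and
     * advertises it to the guest via the /rtas node.  The 2048-byte
     * size matches the minimum mentioned above; treat it as
     * illustrative.
     */
    static int spapr_dt_rtas_size_sketch(void *fdt, uint32_t rtas_addr)
    {
        const uint32_t rtas_size = 2048;
        int rtas = fdt_path_offset(fdt, "/rtas");

        if (rtas < 0) {
            return rtas;
        }

        /* Where the RTAS blob lives and how much room QEMU reserved */
        if (fdt_setprop_cell(fdt, rtas, "linux,rtas-base", rtas_addr) < 0 ||
            fdt_setprop_cell(fdt, rtas, "linux,rtas-entry", rtas_addr) < 0 ||
            fdt_setprop_cell(fdt, rtas, "rtas-size", rtas_size) < 0) {
            return -FDT_ERR_NOSPACE;
        }
        return 0;
    }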
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20200316011841.99970-1-aik@ozlabs.ru>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This mainly fixes virtio-serial, both with and without
iommu-platform enabled.
The full list of changes is:
Alexey Kardashevskiy (3):
llfw: Fix debug printf warnings
virtio-serial: Close device completely
version: update to 20200312
Cédric Le Goater (1):
virtio: Fix typo in virtio_serial_init()
Greg Kurz (2):
virtio-serial: Don't override some words
virtio-serial: Rework shutdown sequence
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
At the moment "pseries" starts in SLOF which only expects the FDT blob
pointer in r3. As we are going to introduce a OpenFirmware support in
QEMU, we will be booting OF clients directly and these expect a stack
pointer in r1, Linux looks at r3/r4 for the initramdisk location
(although vmlinux can find this from the device tree but zImage from
distro kernels cannot).
This extends spapr_cpu_set_entry_state() to take more registers. This
should cause no behavioral change.
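A hedged, self-contained sketch of what the extended entry state carries; the struct and function names are stand-ins, not the actual spapr_cpu_set_entry_state() signature:

    #include <stdint.h>

    typedef uint64_t target_ulong;

    /* Stand-in for the vCPU register file; illustrative only. */
    typedef struct {
        target_ulong nip;       /* entry point */
        target_ulong gpr[32];   /* general purpose registers */
    } EntryState;

    /*
     * Covers both boot paths described above: SLOF only consumes the
     * FDT pointer in r3, OF clients also want a stack pointer in r1,
     * and zImage kernels read the initrd location from r3/r4.
     */
    static void set_entry_state(EntryState *es, target_ulong nip,
                                target_ulong r1, target_ulong r3,
                                target_ulong r4)
    {
        es->nip = nip;
        es->gpr[1] = r1;   /* stack pointer for OF clients */
        es->gpr[3] = r3;   /* FDT pointer (SLOF) or initrd base (zImage) */
        es->gpr[4] = r4;   /* initrd size, when applicable */
    }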
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20200310050733.29805-2-aik@ozlabs.ru>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On ppc64, rlwinm cannot just AND with the mask when the shift value is
zero, Mask Begin is greater than Mask End, and high bits are set to 1.
Note that PowerISA 3.0B says that ROTL32 is used for `rlwinm', and
ROTL32 is defined (in 3.3.14) so that the rotated value has two
copies of the lower word of the source value.
This seems to be another incarnation of the fix from 820724d170
("target-ppc: Fix rlwimi, rlwinm, rlwnm again"), except I leave
optimization when Mask value is less than 32 bits.
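A self-contained demonstration of the architected semantics (a standalone model of the ISA definition, not the TCG translation): with SH = 0 and MB > ME the wrap-around mask covers the upper word, which must then contain a copy of the lower word of the source.

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* 64-bit mask with ones from Power bit x (0 = MSB) through Power
     * bit y, wrapping around when x > y, as in the ISA's MASK(x, y). */
    static uint64_t power_mask(int x, int y)
    {
        uint64_t from_x = ~UINT64_C(0) >> x;          /* bits x..63 */
        uint64_t to_y   = ~UINT64_C(0) << (63 - y);   /* bits 0..y  */
        return (x <= y) ? (from_x & to_y) : (from_x | to_y);
    }

    /* rlwinm rA,rS,SH,MB,ME: ROTL32 operates on the low word replicated
     * into both halves of a 64-bit value, then the result is ANDed with
     * MASK(MB+32, ME+32). */
    static uint64_t rlwinm(uint64_t rs, int sh, int mb, int me)
    {
        uint64_t lo = rs & UINT64_C(0xffffffff);
        uint64_t rot = (lo << 32) | lo;   /* two copies of the low word */

        if (sh) {
            rot = (rot << sh) | (rot >> (64 - sh));
        }
        return rot & power_mask(mb + 32, me + 32);
    }

    int main(void)
    {
        /* SH = 0, MB (22) > ME (8): the mask wraps into the upper word,
         * so the result has high bits set even though nothing rotated. */
        printf("0x%016" PRIx64 "\n",
               rlwinm(UINT64_C(0x00000000ffffffff), 0, 22, 8));
        return 0;
    }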
Fixes: 7b4d326f47 ("target-ppc: Use the new deposit and extract ops")
Cc: qemu-stable@nongnu.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Message-Id: <20200309204557.14836-1-vt@altlinux.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The "ibm,xive-lisn-ranges" defines ranges of interrupt numbers that
the guest can use to configure IPIs. It starts at 0 today but it could
change to some other offset. Make clear which IRQ range we are
exposing by using SPAPR_IRQ_IPI in the property definition.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20200306123307.1348-1-clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-8-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Depending on the length of sense data, vscsi_send_rsp() can
overrun the buffer size.
Do not copy more than SRP_MAX_IU_DATA_LEN bytes, and assert
that vscsi_send_iu() is always called with a size in range.
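A hedged sketch of the bounded copy; the constants are illustrative stand-ins, as the real bounds come from hw/scsi/srp.h and hw/scsi/viosrp.h:

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative sizes only; not the real header values. */
    #define SRP_MAX_IU_LEN       256
    #define SRP_MAX_IU_DATA_LEN  (SRP_MAX_IU_LEN - 64)

    /* Copy sense data into the response IU without ever writing past
     * the IU payload. */
    static size_t copy_sense_bounded(uint8_t *iu_data, const uint8_t *sense,
                                     size_t sense_len)
    {
        size_t len = sense_len;

        if (len > SRP_MAX_IU_DATA_LEN) {
            len = SRP_MAX_IU_DATA_LEN;   /* clamp instead of overrunning */
        }
        memcpy(iu_data, sense, len);
        return len;
    }

    static void send_iu_checked(const uint8_t *iu, size_t total_len)
    {
        /* The send path asserts the size is in range, as described above. */
        assert(total_len <= SRP_MAX_IU_LEN);
        (void)iu;   /* transmission elided in this sketch */
    }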
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-7-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The 'union srp_iu' is meant as a pointer to any SRP Information
Unit type; it is not related to the size of a VIO DMA buffer.
Use a plain buffer for the VIO DMA read/write calls.
We can remove the reserved buffer from the 'union srp_iu'.
This issue was noticed when replacing the zero-length arrays
from hw/scsi/srp.h with flexible array member,
'clang -fsanitize=undefined' reported:
hw/scsi/spapr_vscsi.c:69:29: error: field 'iu' with variable sized type 'union viosrp_iu' not at the end of a struct or class is a GNU extension [-Werror,-Wgnu-variable-sized-type-not-at-end]
union viosrp_iu iu;
^
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-6-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Introduce the req_iu() helper which returns a pointer to
the viosrp_iu union held in the vscsi_req structure.
This simplifies the next patch.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-5-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We already have an 'iu' pointer, use it
(this simplifies the next commit).
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-4-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Replace sizeof() on the flexible-array unions srp_iu/viosrp_iu with the
SRP_MAX_IU_LEN definition, which is what this code actually meant
to use.
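A small standalone demonstration of why sizeof() stops being the right bound once a type gains a flexible array member; the types and the 256-byte bound are illustrative:

    #include <stdio.h>

    /* Old style: fixed-size payload, sizeof() covers the whole IU. */
    union iu_fixed { unsigned char type; unsigned char buf[64]; };

    /* New style: flexible payload, sizeof() covers only the fixed part. */
    struct iu_flex { unsigned char type; unsigned char data[]; };

    #define SRP_MAX_IU_LEN 256   /* explicit bound, illustrative value */

    int main(void)
    {
        printf("sizeof(union iu_fixed) = %zu\n", sizeof(union iu_fixed));  /* 64 */
        printf("sizeof(struct iu_flex) = %zu\n", sizeof(struct iu_flex));  /* 1: payload not counted */
        printf("SRP_MAX_IU_LEN         = %d\n", SRP_MAX_IU_LEN);
        return 0;
    }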
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-3-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This header uses the srp_* structures declared in "hw/scsi/srp.h".
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200305121253.19078-2-philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the calculation of the Real Mode Area (RMA) size into a helper
function. While we're there clean it up and correct it in a few ways:
* Add comments making it clearer where the various constraints come from
* Remove a pointless check that the RMA fits within Node 0 (we've just
clamped it so that it does)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
In spapr_machine_init() we clamp the size of the RMA to 16GiB and the
comment saying why doesn't make a whole lot of sense. In fact, this was
done because the real mode handling code elsewhere limited the RMA in TCG
mode to the maximum value configurable in LPCR[RMLS], 16GiB.
But,
* Actually LPCR[RMLS] has been able to encode a 256GiB size for a very
long time, we just didn't implement it properly in the softmmu
* LPCR[RMLS] shouldn't really be relevant anyway, it only was because we
used to abuse the RMOR based translation mode in order to handle the
fact that we're not modelling the hypervisor parts of the cpu
We've now removed those limitations in the modelling so the 16GiB clamp no
longer serves a function. However, we can't just remove the limit
universally: that would break migration to earlier qemu versions, where
the 16GiB RMLS limit still applies, no matter how bad the reasons for it
are.
So, we replace the 16GiB clamp, with a clamp to a limit defined in the
machine type class. We set it to 16 GiB for machine types 4.2 and earlier,
but set it to 0 meaning unlimited for the new 5.0 machine type.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The Real Mode Area (RMA) is the part of memory which a guest can access
when in real (MMU off) mode. Of course, for a guest under KVM, the MMU
isn't really turned off, it's just in a special translation mode - Virtual
Real Mode Area (VRMA) - which looks like real mode in guest mode.
The mechanics of how this works when using the hash MMU (HPT) put a
constraint on the size of the RMA, which depends on the size of the
HPT. So, the latter part of spapr_setup_hpt_and_vrma() clamps the RMA
we advertise to the guest based on this VRMA limit.
There are several things wrong with this:
1) spapr_setup_hpt_and_vrma() doesn't actually clamp, it takes the minimum
of Node 0 memory size and the VRMA limit. That will *often* work the
same as clamping, but there can be other constraints on RMA size which
supersede Node 0 memory size. We have real bugs caused by this
(currently worked around in the guest kernel)
2) Some callers of spapr_setup_hpt_and_vrma() are in a situation where
we're past the point that we can actually advertise an RMA limit to the
guest
3) But most fundamentally, the VRMA limit depends on host configuration
(page size) which shouldn't be visible to the guest, but this partially
exposes it. This can cause problems with migration in certain edge
cases, although we will mostly get away with it.
In practice, this clamping is almost never applied anyway. With 64kiB
pages and the normal rules for sizing of the HPT, the theoretical VRMA
limit will be 4x(guest memory size) and so never hit. It will hit with
4kiB pages, where it will be (guest memory size)/4. However all mainstream
distro kernels for POWER have used a 64kiB page size for at least 10 years.
So, simply replace this logic with a check that the RMA we've calculated
based only on guest visible configuration will fit within the host implied
VRMA limit. This can break if running HPT guests on a host kernel with
4kiB page size. As noted that's very rare. There also exist several
possible workarounds:
* Change the host kernel to use 64kiB pages
* Use radix MMU (RPT) guests instead of HPT
* Use 64kiB hugepages on the host to back guest memory
* Increase the guest memory size so that the RMA hits one of the fixed
limits before the RMA limit. This is relatively easy on POWER8 which
has a 16GiB limit, harder on POWER9 which has a 1TiB limit.
* Use a guest NUMA configuration which artificially constrains the RMA
within the VRMA limit (the RMA must always fit within Node 0).
Previously, on KVM, we also temporarily reduced the rma_size to 256M so
that we'd load the kernel and initrd safely, regardless of the VRMA
limit. This was a) confusing, b) could significantly limit the size of
images we could load and c) introduced a behavioural difference between
KVM and TCG. So we remove that as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Greg Kurz <groug@kaod.org>
This function calculates the maximum size of the RMA as implied by the
host's page size and the structure of the VRMA (there are a number of other
constraints on the RMA size which will supersede this one in many
circumstances).
The current interface takes the current RMA size estimate, and clamps it
to the VRMA derived size. The only current caller passes in an arguably
wrong value (it will match the current RMA estimate in some but not all
cases).
We want to fix that, but for now just keep concerns separated by having the
KVM helper function just return the VRMA derived limit, and let the caller
combine it with other constraints. We call the new function
kvmppc_vrma_limit() to more clearly indicate its limited responsibility.
The helper should only ever be called in the KVM enabled case, so replace
its !CONFIG_KVM stub with an assert() rather than a dummy value.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cedric Le Goater <clg@fr.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
MIN_RMA_SLOF records the minimum amount of RMA that the SLOF firmware
requires. It lets us give a meaningful error if the RMA ends up too small,
rather than just letting SLOF crash.
It's currently stored as a number of megabytes, which is unusual for global
constants. Move the megabyte scaling into the definition of the constant
itself, as most other size constants do.
Change from M to MiB in the associated message while we're at it.
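A minimal sketch of the constant change, assuming the 128 MiB value SLOF has historically needed; the value and surrounding code are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    #define MiB (1024ULL * 1024ULL)

    /* Scaling folded into the constant itself rather than at each use
     * site; 128 MiB is an assumed value for illustration. */
    #define MIN_RMA_SLOF (128 * MiB)

    int main(void)
    {
        uint64_t rma_size = 64 * MiB;   /* hypothetical, deliberately too small */

        if (rma_size < MIN_RMA_SLOF) {
            fprintf(stderr,
                    "qemu: pSeries SLOF firmware requires >= %llu MiB of RMA\n",
                    (unsigned long long)(MIN_RMA_SLOF / MiB));
            return 1;
        }
        return 0;
    }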
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Currently, we construct the SLBE used for VRMA translations when the LPCR
is written (which controls some bits in the SLBE), then use it later for
translations.
This is a bit complex and confusing - simplify it by simply constructing
the SLBE directly from the LPCR when we need it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
When the LPCR is written, we update the env->rmls field with the RMA limit
it implies. Simplify things by just calculating the value directly from
the LPCR value when we need it.
It's possible this is a little slower, but it's unlikely to be significant,
since this is only for real mode accesses in a translation configuration
that's not used very often, and the whole thing is behind the qemu TLB
anyway. Therefore, keeping the number of state variables down and not
having to worry about making sure it's always in sync seems the better
option.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The table of RMA limits based on the LPCR[RMLS] field is slightly wrong.
We're missing the RMLS == 0 => 256 GiB RMA option, which is available on
POWER8, so add that.
The comment that goes with the table is much more wrong. We *don't* filter
invalid RMLS values when writing the LPCR, and there's not really a
sensible way to do so. Furthermore, while in theory the set of RMLS values
is implementation dependent, it seems in practice the same set has been
available since around POWER4+ up until POWER8, the last model which
supports RMLS at all. So, correct that as well.
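A sketch of the resulting lookup table, including the RMLS == 0 entry added here; the encodings below are as documented for POWER7/POWER8, but treat the exact values as illustrative rather than authoritative:

    #include <stdint.h>

    #define MiB (1024ULL * 1024ULL)
    #define GiB (1024ULL * MiB)

    /* RMA limit implied by LPCR[RMLS].  Unlisted encodings are reserved
     * and map to 0, meaning no real mode access is allowed. */
    static const uint64_t rma_sizes[16] = {
        [0] = 256 * GiB,   /* the entry this patch adds */
        [1] = 16 * GiB,
        [2] = 1 * GiB,
        [3] = 64 * MiB,
        [4] = 256 * MiB,
        [7] = 128 * MiB,
        [8] = 32 * MiB,
    };

    static uint64_t rmls_limit(unsigned rmls)
    {
        return rma_sizes[rmls & 0xf];
    }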
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently we use a big switch statement in ppc_hash64_update_rmls() to work
out what the right RMA limit is based on the LPCR[RMLS] field. There's no
formula for this - it's just an arbitrary mapping defined by the existing
CPU implementations - but we can make it a bit more readable by using a
lookup table rather than a switch. In addition we can use the MiB/GiB
symbols to make it a bit clearer.
While there we add a bit of clarity and rationale to the comment about
what happens if the LPCR[RMLS] doesn't contain a valid value.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
When we store the Logical Partitioning Control Register (LPCR) we have a
big switch statement to work out which are valid bits for the cpu model
we're emulating.
As well as being ugly, this isn't really conceptually correct, since it is
based on the mmu_model variable, whereas the LPCR isn't (only) about the
MMU, so mmu_model is basically just acting as a proxy for the cpu model.
Handle this in a simpler way, by adding a suitable lpcr_mask to the QOM
class.
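A hedged sketch of the approach with stand-in types; the real code hangs lpcr_mask off the PowerPC CPU QOM class:

    #include <stdint.h>

    /* Stand-ins for the QOM class/instance split; illustrative only. */
    typedef struct {
        uint64_t lpcr_mask;   /* LPCR bits this CPU model implements */
    } CpuClassSketch;

    typedef struct {
        const CpuClassSketch *cc;
        uint64_t lpcr;
    } CpuSketch;

    /* Storing the LPCR now just filters through the per-model mask
     * instead of switching on mmu_model. */
    static void store_lpcr(CpuSketch *cpu, uint64_t val)
    {
        cpu->lpcr = val & cpu->cc->lpcr_mask;
    }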
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Currently we create the Real Mode Offset Register (RMOR) on all Book3S cpus
from POWER7 onwards. However the translation mode which the RMOR controls
is no longer supported in POWER9, and so the register has been removed from
the architecture.
Remove it from our model on POWER9 and POWER10.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
For the "pseries" machine, we use "virtual hypervisor" mode where we
only model the CPU in non-hypervisor privileged mode. This means that
we need guest physical addresses within the modelled cpu to be treated
as absolute physical addresses.
We used to do that by clearing LPCR[VPM0] and setting LPCR[RMLS] to a high
limit so that the old offset based translation for guest mode applied,
which does what we need. However, POWER9 has removed support for that
translation mode, which meant we had some ugly hacks to keep it working.
We now explicitly handle this sort of translation for virtual hypervisor
mode, so the hacks aren't necessary. We don't need to set VPM0 and RMLS
from the machine type code - they're now ignored in vhyp mode. On the cpu
side we don't need to allow LPCR[RMLS] to be set on POWER9 in vhyp mode -
that was only there to allow the hack on the machine side.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
When running guests under a hypervisor, the hypervisor obviously needs to
be protected from guest accesses even if those are in what the guest
considers real mode (translation off). The POWER hardware provides two
ways of doing that: The old way has guest real mode accesses simply offset
and bounds checked into host addresses. It works, but requires that a
significant chunk of the guest's memory - the RMA - be physically
contiguous in the host, which is pretty inconvenient. The new way, known
as VRMA, has guest real mode accesses translated in roughly the normal way
but with some special parameters.
In POWER7 and POWER8 the LPCR[VPM0] bit selected between the two modes, but
in POWER9 only VRMA mode is supported and LPCR[VPM0] no longer exists. We
handle that difference in behaviour in ppc_hash64_set_isi(), but not in
other places where we blindly check LPCR[VPM0].
Correct those instances with a new helper to tell if we should be in VRMA
mode.
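A self-contained sketch of such a predicate; the enum, the bit position and the function name are stand-ins, not the exact QEMU definitions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-ins; the real definitions live in target/ppc. */
    typedef enum { MMU_2_07, MMU_3_00 /* POWER9 */ } MmuModel;
    #define LPCR_VPM0 (UINT64_C(1) << 62)   /* bit position illustrative */

    typedef struct {
        MmuModel mmu_model;
        uint64_t lpcr;
    } CpuEnvSketch;

    /* Should hash-MMU real mode use VRMA translation?  POWER9 dropped
     * the RMOR-based mode, so VRMA is the only option there; earlier
     * CPUs select it with LPCR[VPM0]. */
    static bool hash64_use_vrma(const CpuEnvSketch *env)
    {
        if (env->mmu_model == MMU_3_00) {
            return true;
        }
        return (env->lpcr & LPCR_VPM0) != 0;
    }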
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
On ppc we have the concept of virtual hypervisor ("vhyp") mode, where we
only model the non-hypervisor-privileged parts of the cpu. Essentially we
model the hypervisor's behaviour from the point of view of a guest OS, but
we don't model the hypervisor's execution.
In particular, in this mode, qemu's notion of target physical address is
a guest physical address from the vcpu's point of view. So accesses in
guest real mode don't require translation. If we were modelling the
hypervisor mode, we'd need to translate the guest physical address into
a host physical address.
Currently, we handle this sloppily: we rely on setting up the virtual LPCR
and RMOR registers so that GPAs are simply HPAs plus an offset, which we
set to zero. This is already conceptually dubious, since the LPCR and RMOR
registers don't exist in the non-hypervisor portion of the CPU. It gets
worse with POWER9, where RMOR and LPCR[VPM0] no longer exist at all.
Clean this up by explicitly handling the vhyp case. While we're there,
remove some unnecessary nesting of if statements that made the logic to
select the correct real mode behaviour a bit less clear than it could be.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
The PowerPC 970 CPU was a cut-down POWER4, which had hypervisor capability.
However, it can be (and often was) strapped into "Apple mode", where the
hypervisor capabilities were disabled (essentially putting it always in
hypervisor mode).
That's actually the only mode of the 970 we support in qemu, and we're
unlikely to change that any time soon. However, we do have a partial
implementation of the 970's HID4 register which affects things only
relevant for hypervisor mode.
That stub is also really ugly, since it attempts to duplicate the effects
of HID4 by re-encoding it into the LPCR register used in newer CPUs, but
in a really confusing way.
Just get rid of it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Commit a4f30719a8, way back in 2007, noted that "PowerPC hypervisor mode is not
fundamentally available only for PowerPC 64" and added a 32-bit version
of the MSR[HV] bit.
But nothing was ever really done with that; there is no meaningful support
for 32-bit hypervisor mode 13 years later. Let's stop pretending and just
remove the stubs.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200228123303.14540-1-philmd@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This fixes a Coverity issue:
CID 1419883: Error handling issues (CHECKED_RETURN)
Calling "qemu_uuid_parse" without checking return value
nvdimm_set_uuid() already verifies whether the user-provided uuid is valid,
so there is no need to check its validity again during pre-plug validation.
As this is a false positive in this case, assert that the uuid is valid, to
be safe. Also, use error_abort if the QOM accessor encounters an error while
fetching the uuid property.
Reported-by: Coverity (CID 1419883)
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Message-Id: <158281096564.89540.4507375445765515529.stgit@lep8c.aus.stglabs.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Server class POWER CPUs have a "compat" property, which was obsoleted
by commit 7843c0d60d and replaced by a "max-cpu-compat" property on the
pseries machine type. A hack was introduced so that passing "compat" to
-cpu would still produce the desired effect, for the sake of backward
compatibility: it strips the "compat" option from the CPU properties
and applies it internally to the pseries machine. The accessors of the
"compat" property were updated to do nothing but warn the user about its
deprecated status when doing something like:
$ qemu-system-ppc64 -global POWER9-family-powerpc64-cpu.compat=power9
qemu-system-ppc64: warning: CPU 'compat' property is deprecated and has no
effect; use max-cpu-compat machine property instead
This was merged during the QEMU 2.10 timeframe, a few weeks before we
formalized our deprecation process. As a consequence, the "compat"
property fell through the cracks and was never listed in the officially
deprecated features.
We are now eight QEMU versions later; it is high time to mention it
in qemu-deprecated.texi. Also, since -global XXX-powerpc64-cpu.compat=
has been emitting warnings since QEMU 2.10 and the usual way of setting
CPU properties is with -cpu, completely remove the "compat" property.
Keep the hack so that -cpu XXX,compat= stays functional some more time,
as required by our deprecation process.
The now empty powerpc_servercpu_properties[] list which was introduced
for "compat" and never had any other use is removed on the way. We can
re-add it in the future if the need for a server class POWER CPU specific
property arises again.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <158274357799.140275.12263135811731647490.stgit@bahia.lan>
[dwg: Convert from .texi to .rst to match upstream change]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If a hot plug or unplug request is pending at CAS, we currently trigger
a CAS reboot, which severely increases the guest boot time. This is
because SLOF doesn't handle hot plug events and we had no way to fix
the FDT that gets presented to the guest.
We can do better thanks to recent changes in QEMU and SLOF:
- we now return a full FDT to SLOF during CAS
- SLOF was fixed to correctly detect any device that was either added or
removed since boot time and to update its internal DT accordingly.
The right solution is to process all pending hot plug/unplug requests
during CAS: convert hot plugged devices to cold plugged devices and
remove the hot unplugged ones, which is exactly what spapr_drc_reset()
does. Also clear all hot plug events that are currently queued since
they're no longer relevant.
Note that SLOF cannot currently populate hot plugged PCI bridges or PHBs
at CAS. Until this limitation is lifted, SLOF will reset the machine when
this scenario occurs: this will allow the FDT to be fully processed when
SLOF is started again (i.e. the same effect as the CAS reboot that would
occur anyway without this patch).
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <158257222352.4102917.8984214333937947307.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds vTPM support, full-FDT-rebuild-on-CAS fixes and
basic ext4 support.
The full changelog is:
Alexey Kardashevskiy (10):
disk-label: Prepare for extenting
disk-label: Support Linux GPT partition type
ext2: Prepare for extending
ext2: Rename group-desc-size
ext2: Read size of group descriptors
ext2: Read all 64bit of inode number
ext2/4: Add basic extent tree support
elf64: Add LE64 ABIv1/2 support for loading images to given address
fdt: Fix creating new nodes at H_CAS
version: update to 20200221
Greg Kurz (2):
fdt: Fix update of "interrupt-controller" node at CAS
fdt: Delete nodes of devices removed between boot and CAS
Stefan Berger (8):
slof: Implement SLOF_get_keystroke() and SLOF_reset()
slof: Make linker script variables accessible
qemu: Make print_version variable accessible
tpm: Add TPM CRQ driver implementation
tpm: Add sha256 implementation
tcgbios: Add TPM 2.0 support and firmware API
tcgbios: Implement menu to clear TPM 2 and activate its PCR banks
tcgbios: Measure the GPT table
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The bochs-display mmio bar has some sub-regions with the actual hardware
registers. What happens when the guest accesses something outside those
regions depends on the architecture. On x86 those reads succeed (and
return 0xff I think). On risc-v qemu aborts.
This patch adds handlers for the parent region, to make the wanted
behavior explicit and to make things consistent across architectures.
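A hedged sketch of what such catch-all handlers do (QEMU's existing unassigned_io_ops provides this behaviour); the functions below are a standalone illustration, not the device's actual code:

    #include <stdint.h>

    /* Catch-all handlers for parts of the mmio BAR not covered by a
     * register sub-region: reads return all-ones, writes are ignored,
     * so behaviour no longer depends on how the target architecture
     * treats unassigned accesses. */
    static uint64_t unassigned_mmio_read(void *opaque, uint64_t addr,
                                         unsigned size)
    {
        (void)opaque; (void)addr; (void)size;
        return ~UINT64_C(0);   /* reads of unbacked registers float high */
    }

    static void unassigned_mmio_write(void *opaque, uint64_t addr,
                                      uint64_t val, unsigned size)
    {
        /* writes to unbacked registers are silently dropped */
        (void)opaque; (void)addr; (void)val; (void)size;
    }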
v2:
- use existing unassigned_io_ops.
- also cover stdvga.
Cc: Alistair Francis <alistair23@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20200309100009.17624-1-kraxel@redhat.com
The current positive limit for the saturation nonlinearity is
only correct if the type of the result has 8 bits or less.
Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-id: 20200308193321.20668-5-vr_qemu@t-online.de
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>