When all we do with an Error we receive into a local variable is
propagate it somewhere else, we can just as well receive it there
right away. The previous two commits did that for sufficiently simple
cases with Coccinelle. Do it for several more manually.
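
For illustration, the simplest instance of the pattern (hypothetical
function foo() returning a success/failure value, as the functions
converted by the previous commits do) changes roughly like this:

    /* before: receive into a local Error, then propagate */
    Error *err = NULL;
    foo(arg, &err);
    if (err) {
        error_propagate(errp, err);
        return;
    }

    /* after: receive the error at its final destination right away */
    if (!foo(arg, errp)) {
        return;
    }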
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200707160613.848843-37-armbru@redhat.com>
Convert

    foo(..., &err);
    if (err) {
        ...
    }

to

    if (!foo(..., &err)) {
        ...
    }
for qdev_realize(), qdev_realize_and_unref(), qbus_realize() and their
wrappers isa_realize_and_unref(), pci_realize_and_unref(),
sysbus_realize(), sysbus_realize_and_unref(), usb_realize_and_unref().
Coccinelle script:
    @@
    identifier fun = {
        isa_realize_and_unref, pci_realize_and_unref, qbus_realize,
        qdev_realize, qdev_realize_and_unref, sysbus_realize,
        sysbus_realize_and_unref, usb_realize_and_unref
    };
    expression list args, args2;
    typedef Error;
    Error *err;
    @@
    -    fun(args, &err, args2);
    -    if (err)
    +    if (!fun(args, &err, args2))
         {
             ...
         }
Chokes on hw/arm/musicpal.c's lcd_refresh() with the unhelpful error
message "no position information". Nothing to convert there; skipped.
Fails to convert hw/arm/armsse.c, because Coccinelle gets confused by
ARMSSE being used both as typedef and function-like macro there.
Converted manually.
A few line breaks tidied up manually.
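
In a caller, the transformation looks roughly like this (hypothetical
device, shown only for illustration):

    -    sysbus_realize(SYS_BUS_DEVICE(dev), &err);
    -    if (err) {
    +    if (!sysbus_realize(SYS_BUS_DEVICE(dev), &err)) {
             error_propagate(errp, err);
             return;
         }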
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20200707160613.848843-5-armbru@redhat.com>
All remaining conversions to qdev_realize() are for bus-less devices.
Coccinelle script:
    // only correct for bus-less @dev!
    @@
    expression errp;
    expression dev;
    @@
    -    qdev_init_nofail(dev);
    +    qdev_realize(dev, NULL, &error_fatal);

    @ depends on !(file in "hw/core/qdev.c") && !(file in "hw/core/bus.c")@
    expression errp;
    expression dev;
    symbol true;
    @@
    -    object_property_set_bool(OBJECT(dev), true, "realized", errp);
    +    qdev_realize(DEVICE(dev), NULL, errp);

    @ depends on !(file in "hw/core/qdev.c") && !(file in "hw/core/bus.c")@
    expression errp;
    expression dev;
    symbol true;
    @@
    -    object_property_set_bool(dev, true, "realized", errp);
    +    qdev_realize(DEVICE(dev), NULL, errp);
Note that Coccinelle chokes on ARMSSE typedef vs. macro in
hw/arm/armsse.c. Worked around by temporarily renaming the macro for
the spatch run.
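
The "only correct for bus-less @dev" caveat exists because qdev_realize()
takes the bus to plug the device into as its second argument; NULL means
"no bus". A device that actually sits on a bus must keep passing that bus
(sketch, names illustrative):

    /* bus-less device: NULL bus is correct */
    qdev_realize(DEVICE(dev), NULL, errp);

    /* device on a bus: the bus must be passed explicitly */
    qdev_realize(DEVICE(dev), BUS(bus), errp);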
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200610053247.1583243-57-armbru@redhat.com>
Same transformation as in the previous commit. Manual, because
convincing Coccinelle to transform these cases is somewhere between
not worthwhile and infeasible (at least for me).
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200610053247.1583243-13-armbru@redhat.com>
The OBJECT() macro is defined as:

    #define OBJECT(obj) ((Object *)(obj))
Remove the unnecessary OBJECT() casts when we already know the
pointer is of Object type.
Patch created mechanically using spatch with this script:
    @@
    typedef Object;
    Object *o;
    @@
    -    OBJECT(o)
    +    o
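
A typical hunk produced by the script (hypothetical caller where "obj"
already has type Object *):

    -    object_unparent(OBJECT(obj));
    +    object_unparent(obj);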
Acked-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: John Snow <jsnow@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20200512070020.22782-3-f4bug@amsat.org>
[Trivial rebase conflict in hw/s390x/sclp.c resolved]
The only way object_property_add() can fail is when a property with
the same name already exists. Since our property names are all
hardcoded, failure is a programming error, and the appropriate way to
handle it is passing &error_abort.
Same for its variants, except for object_property_add_child(), which
additionally fails when the child already has a parent. Parentage is
also under program control, so this is a programming error, too.
We have a bit over 500 callers. Almost half of them pass
&error_abort, slightly fewer ignore errors, one test case handles
errors, and the remaining few callers pass them to their own callers.
The previous few commits demonstrated once again that ignoring
programming errors is a bad idea.
Of the few ones that pass on errors, several violate the Error API.
The Error ** argument must be NULL, &error_abort, &error_fatal, or a
pointer to a variable containing NULL. Passing an argument of the
latter kind twice without clearing it in between is wrong: if the
first call sets an error, it no longer points to NULL for the second
call. ich9_pm_add_properties(), sparc32_ledma_realize(),
sparc32_dma_realize(), xilinx_axidma_realize(), xilinx_enet_realize()
are wrong that way.
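
For illustration, the broken pattern boils down to this (hypothetical
helper add_prop(); not quoted from the files above):

    Error *err = NULL;

    add_prop(obj, "dma", &err);
    add_prop(obj, "irq", &err);   /* wrong: err may already be non-NULL */
    if (err) {
        error_propagate(errp, err);
        return;
    }

If the first call fails, the second call receives a pointer to a non-NULL
Error, which the Error API forbids.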
When the one appropriate choice of argument is &error_abort, letting
users pick the argument is a bad idea.
Drop parameter @errp and assert the preconditions instead.
There's one exception to "duplicate property name is a programming
error": the way object_property_add() implements the magic (and
undocumented) "automatic arrayification". Don't drop @errp there.
Instead, rename object_property_add() to object_property_try_add(),
and add the obvious wrapper object_property_add().
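
The resulting split can be sketched like this (parameter list reproduced
from memory; treat it as a sketch rather than the exact upstream
prototypes):

    /* Fails (sets *errp) if a property with @name already exists. */
    ObjectProperty *object_property_try_add(Object *obj, const char *name,
                                            const char *type,
                                            ObjectPropertyAccessor *get,
                                            ObjectPropertyAccessor *set,
                                            ObjectPropertyRelease *release,
                                            void *opaque, Error **errp);

    /* Common case: a duplicate name is a programming error, so abort. */
    ObjectProperty *object_property_add(Object *obj, const char *name,
                                        const char *type,
                                        ObjectPropertyAccessor *get,
                                        ObjectPropertyAccessor *set,
                                        ObjectPropertyRelease *release,
                                        void *opaque)
    {
        return object_property_try_add(obj, name, type, get, set, release,
                                       opaque, &error_abort);
    }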
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200505152926.18877-15-armbru@redhat.com>
[Two semantic rebase conflicts resolved]
SCLP for a protected guest is done over the SIDAD, so we need to use
the s390_cpu_pv_mem_* functions to access the SIDAD instead of guest
memory when reading/writing SCBs.
To not confuse the sclp emulation, we set 0x4000 as the SCCB address,
since the function that injects the sclp external interrupt would
reject a zero sccb address.
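
The resulting access pattern is roughly the following (a sketch; the
helper signatures are assumed here, not quoted from the tree):

    /* cpu is the calling S390CPU, work_sccb a local SCCB buffer */
    if (s390_is_pv()) {
        /* protected guest: the SCCB is read from the SIDAD */
        s390_cpu_pv_mem_read(cpu, 0, &work_sccb, sizeof(SCCB));
    } else {
        s390_cpu_virt_mem_read(cpu, sccb_addr, 0, &work_sccb, sizeof(SCCB));
    }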
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20200319131921.2367-10-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Older QEMU versions fixed up the ram size to match what can be reported
via sclp. We need to mimic this behaviour for machine types 4.2 and
older so that inbound migration does not fail for memory sizes that do
not fit. Old machines with properly aligned memory sizes are not affected.
Alignment table:

    VM size (<=) | Alignment
    --------------------------
           1020M | 1M
           2040M | 2M
           4080M | 4M
           8160M | 8M
          16320M | 16M
          32640M | 32M
          65280M | 64M
         130560M | 128M
         261120M | 256M
         522240M | 512M
        1044480M | 1G
        2088960M | 2G
        4177920M | 4G
        8355840M | 8G
Suggested action is to replace unaligned -m value with a suitable
aligned one or if a change to a newer machine type is possible, use a
machine version >= 5.0.
A future version might remove the compatibility handling.
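
The table follows from limiting pre-5.0 machines to at most 1020 storage
increments; the compat fixup amounts to something like this (illustrative
sketch, not the literal code):

    ram_addr_t fixup_ram_size(ram_addr_t sz)
    {
        int shift = 20;                  /* start with 1M increments */
        ram_addr_t newsz;

        /* grow the increment size until at most 1020 increments remain */
        while ((sz >> shift) > 1020) {
            shift++;
        }
        /* round down to a multiple of the increment size */
        newsz = (sz >> shift) << shift;
        return newsz;
    }

For example, -m 50000 needs 64M increments (50000M / 32M > 1020) and is
fixed up to 49984M (781 * 64M).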
For machine types >= 5.0 we can simply use an increment size of 1M and
use the full range of increment number which allows for all possible
memory sizes. The old limitation of having a maximum of 1020 increments
was added for standby memory, which we no longer support. With that we
can now support even weird memory sizes like 10001234 MB.
As we no longer fix up maxram_size either, make other users use ram_size
instead. Keep using maxram_size when setting the maximum ram size in KVM,
as that will come in handy in the future when supporting memory hotplug
(in contrast, storage keys and storage attributes for hotplugged memory
will have to be migrated per RAM block in the future).
Fixes: 3a12fc61af ("390x/s390-virtio-ccw: use memdev for RAM")
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20200401123754.109602-1-borntraeger@de.ibm.com>
[CH: fixed up message on memory size fixup]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
'out' label can be replaced by 'return' with the appropriate
value. The 'r' integer, which is used solely to set the
return value for this label, can also be removed.
CC: Cornelia Huck <cohuck@redhat.com>
CC: Halil Pasic <pasic@linux.ibm.com>
CC: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200106182425.20312-39-danielhb413@gmail.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Return the correct error code when the SCCB buffer is too small to
contain all of the output, for the Read SCP Information and
Read CPU Information commands.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
Message-Id: <1569591203-15258-5-git-send-email-imbrenda@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Requests over 4k are not a spec exception.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
Message-Id: <1569591203-15258-4-git-send-email-imbrenda@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
All sclp codes need to be checked for page boundary violations.
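
The check amounts to verifying that the SCCB, starting address plus
length, stays within one 4k page, roughly (illustrative only):

    /* sccb is the guest address, work_sccb.h.length the SCCB length */
    if ((sccb + be16_to_cpu(work_sccb.h.length) - 1) / 4096 != sccb / 4096) {
        /* reject with the SCCB-boundary-violation response code */
    }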
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
Message-Id: <1569591203-15258-3-git-send-email-imbrenda@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Invalid command checking has to be done before the boundary check.
Refactoring it now allows us to insert the boundary check at the correct
place later.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
Message-Id: <1569591203-15258-2-git-send-email-imbrenda@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The global smp variables in s390x are replaced with smp machine properties.
A local variable of the same name is introduced at the declaration site
if it is used widely in the context, or the reference is replaced on the
spot if it is only used once. No semantic changes.
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20190518205428.90532-7-like.xu@linux.intel.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
[ehabkost: fix build failure at VCPU_IRQ_BUF_SIZE]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We clamp down ram_size to match the sclp increment size. We do
not do the same for maxram_size, which means for large guests
with some sizes (e.g. -m 50000) maxram_size differs from ram_size.
This can break other code (e.g. CMMA migration) which uses maxram_size
to calculate the number of pages and then throws some errors.
Fixes: 82fab5c5b9 ("s390x/sclp: remove memory hotplug support")
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: qemu-stable@nongnu.org
CC: David Hildenbrand <david@redhat.com>
Message-Id: <1532959766-53343-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
From an architecture point of view, nothing can be mapped into the address
space on s390x. All there is is memory. Therefore there is also not really
an interface to communicate such information to the guest. All we can do is
specify the maximum ram address and guests can probe in that range if
memory is available and usable (TPROT).
Memory hotplug is also strange. The guest can decide at some point in
time to add/remove memory in some range. While the hypervisor can refuse
to online an increment, all increments have to be predefined and there is
no way of telling the guest about a newly "hotplugged" increment. So if we
specify right now e.g.

    -m 2G,slots=2,maxmem=20G

an ordinary Fedora guest will happily online (hotplug) all memory,
resulting in a guest consuming 20G. So it really behaves rather like

    -m 22G
There is no way to hotplug memory from the outside like on other
architectures. This is of course bad for upper management layers.
As the guest can create/delete memory regions while it is running, of
course migration support is not available and tricky to implement.
With virtualization, it is different. We might want to map something
into guest address space (e.g. fake DAX devices) and not detect it
automatically as memory. So we really want to use the maxmem and slots
parameter just like on all other architectures. Such devices will have
to expose the applicable memory range themselves. To finally be able to
provide memory hotplug to guests, we will need a new paravirtualized
interface to do that (e.g. something into the direction of virtio-mem).
This implies that maxmem cannot be used for s390x memory hotplug
anymore and has to go. This simplifies the code quite a bit.
As migration support is not working, this change cannot really break
migration as guests without slots and maxmem don't see the SCLP
features. Also, the ram size calculation does not change.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180219174231.10874-1-david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
[CH: tweaked patch description, as discussed on list]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Linux crashes right now if maxmem > mem is specified on the command line.
On s390x, the guest can hotplug memory itself right now - very weird -
and e.g. Fedora 27 will simply add all memory it can when booting.
So now, we have at least the same behavior on TCG and KVM.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171218224616.21030-3-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The highest cpu address is not the same as max_cpus. max_cpus
counts from 1 while the cpu address starts at 0.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Message-Id: <20171219082807.84494-1-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The "sclp" device is just an internal device that can not be instantiated
by the users. If they try to use it, they only get a simple error message:
$ qemu-system-s390x -nographic -device sclp
qemu-system-s390x: Option '-device s390-sclp-event-facility' cannot be
handled by this machine
Since sclp_init() tries to create a TYPE_SCLP_EVENT_FACILITY which is
a non-pluggable sysbus device, there is really no way that the "sclp"
device can be used by the user, so let's set the user_creatable = false
accordingly.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1507125199-22562-1-git-send-email-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Reviewed-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is the first step to allow hot plugging of CPUs in a non-sequential
order. Whether a cpu is available ("plugged") can be decided directly by
looking at the cpu state pointer.
This makes sure that only cpus actually attached to the machine are
reported.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-22-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
s390-stattrib.c needs the definition of TARGET_PAGE_SIZE; provide it via cpu.h.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170818114353.13455-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Not needed at that point.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170818114353.13455-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If we do not provide zpci, pci reconfiguration via sclp is not available
either. I/O adapter configuration, however, should always be present.
Rename the values that refer to I/O adapter configuration (instead of only
pci) to make things clearer.
Move length checking of the sccb for I/O adapter configuration into the
common sclp code (out of the pci code). This also fixes an issue where
the pci code would refer to a field in the sccb before checking whether
it was actually long enough.
Check for the adapter type in the sccb and return unrecognized adapter
type if the guest tries to issue I/O adapter configure/deconfigure for
a type other than pci or for pci if the zpci facility is not provided.
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Use the new functions memory_region_init_{ram,rom,rom_device}()
instead of manually calling the _nomigrate() version and then
vmstate_register_ram_global().
Patch automatically created using coccinelle script:

    spatch --in-place -sp_file scripts/coccinelle/memory-region-init-ram.cocci -dir hw
(As it turns out, there are no instances of the rom and
rom_device functions that are caught by this script.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1499438577-7674-8-git-send-email-peter.maydell@linaro.org
Rename memory_region_init_ram() to memory_region_init_ram_nomigrate().
This leaves the way clear for us to provide a memory_region_init_ram()
which does handle migration.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1499438577-7674-4-git-send-email-peter.maydell@linaro.org
This patch removes the redundant "qemu:" prefix from messages passed to error functions. The link to the bite-sized task is:
http://wiki.qemu-project.org/Contribute/BiteSizedTasks#Error_checking
Signed-off-by: Ishani Chugh <chugh.ishani@research.iiit.ac.in>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
LOADPARM has two copies:
1. in SCP Information Block
2. in IPL Information Parameter Block
So, update SCLP intrinsics now. We always store LOADPARM in SCP
information block even if we don't have a valid IPL Information
Parameter Block.
Initial patch from Eugene (jno) Dvurechenski.
Signed-off-by: Eugene (jno) Dvurechenski <jno@linux.vnet.ibm.com>
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch is the result of coccinelle script
scripts/coccinelle/typecast.cocci
CC: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The mha is provided in the CPU model, so get any CPU and extract the value.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-18-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
If we have a lowest ibc, we can indicate the ibc to the guest.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-17-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We have three different blocks in the SCLP read-SCP information response
that indicate sclp features. Let's prepare propagation.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-16-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's factor out the common code of "read cpu info" and "read scp
info". This will make the introduction of new cpu entry fields easier.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-14-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
With the current code a simple sclp command takes about 13000 ns. The
biggest part seems to be the resolver of the object model. By caching
the sclp device, the time for an sclp command goes down to 2500 ns.
In real life scenarios, this change doubles the speed of the sclp
console when sending single-byte output to /dev/console.
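
The caching amounts to resolving the device once and remembering it,
roughly (sketch, assuming the TYPE_SCLP abstract type and the usual QOM
path resolver):

    static inline SCLPDevice *get_sclp_device(void)
    {
        static SCLPDevice *sclp;

        if (!sclp) {
            /* resolve via the object model once, then reuse */
            sclp = SCLP(object_resolve_path_type("", TYPE_SCLP, NULL));
        }
        return sclp;
    }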
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Split s390_sclp_configure() into separate functions for sclp
configuring and deconfiguring in order to make the code more readable.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the
Error typedef. Since then, we've moved to include qemu/osdep.h
everywhere. Its file comment explains: "To avoid getting into
possible circular include dependencies, this file should not include
any other QEMU headers, with the exceptions of config-host.h,
compiler.h, os-posix.h and os-win32.h, all of which are doing a
similar job to this file and are under similar constraints."
qapi/error.h doesn't do a similar job, and it doesn't adhere to
similar constraints: it includes qapi-types.h. That's in excess of
100KiB of crap most .c files don't actually need.
Add the typedef to qemu/typedefs.h, and include that instead of
qapi/error.h. Include qapi/error.h in .c files that need it and don't
get it now. Include qapi-types.h in qom/object.h for uint16List.
Update scripts/clean-includes accordingly. Update it further to match
reality: replace config.h by config-target.h, add sysemu/os-posix.h,
sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h
comment quoted above similarly.
This reduces the number of objects depending on qapi/error.h from "all
of them" to less than a third. Unfortunately, the number depending on
qapi-types.h shrinks only a little. More work is needed for that one.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
[Fix compilation without the spice devel packages. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1453832250-766-8-git-send-email-peter.maydell@linaro.org
The init of a device should have no side effects. Therefore move
registering of the event facility into the realize function, so
multiple instances of the SCLP device can be created e.g. for
introspection.
Add some more detail as to why we have to add it to the sysbus
at all.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's follow established naming practice here as well.
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Markus Armbruster <armbru@pond.sub.org>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1450452927-8346-23-git-send-email-armbru@redhat.com>
Symptom:

    $ qemu-system-x86_64 -m 10000000
    Unexpected error in ram_block_add() at /work/armbru/qemu/exec.c:1456:
    upstream-qemu: cannot set up guest memory 'pc.ram': Cannot allocate memory
    Aborted (core dumped)
Root cause: commit ef701d7 screwed up handling of out-of-memory
conditions. Before the commit, we report the error and exit(1), in
one place, ram_block_add(). The commit lifts the error handling up
the call chain some, to three places. Fine. Except it uses
&error_abort in these places, changing the behavior from exit(1) to
abort(), and thus undoing the work of commit 3922825 "exec: Don't
abort when we can't allocate guest memory".
The three places are:
* memory_region_init_ram()

  Commit 4994653 (right after commit ef701d7) lifted the error
  handling further, through memory_region_init_ram(), multiplying the
  incorrect use of &error_abort. Later on, imitation of existing
  (bad) code may have created more.

* memory_region_init_ram_ptr()

  The &error_abort is still there.

* memory_region_init_rom_device()

  Doesn't need fixing, because commit 33e0eb5 (soon after commit
  ef701d7) lifted the error handling further, and in the process
  changed it from &error_abort to passing it up the call chain.
  Correct, because the callers are realize() methods.
Fix the error handling after memory_region_init_ram() with a
Coccinelle semantic patch:
    @r@
    expression mr, owner, name, size, err;
    position p;
    @@
        memory_region_init_ram(mr, owner, name, size,
    (
    -        &error_abort
    +        &error_fatal
    |
             err@p
    )
        );

    @script:python@
    p << r.p;
    @@
    print "%s:%s:%s" % (p[0].file, p[0].line, p[0].column)
When the last argument is &error_abort, it gets replaced by
&error_fatal. This is the fix.
If the last argument is anything else, its position is reported. This
lets us check the fix is complete. Four positions get reported:
* ram_backend_memory_alloc()

  Error is passed up the call chain, ultimately through
  user_creatable_complete(). As far as I can tell, its callers all
  handle the error sanely.

* fsl_imx25_realize(), fsl_imx31_realize(), dp8393x_realize()

  DeviceClass.realize() methods, errors handled sanely further up the
  call chain.
We're good. Test case again behaves:

    $ qemu-system-x86_64 -m 10000000
    qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory
    [Exit 1 ]
The next commits will repair the rest of commit ef701d7's damage.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1441983105-26376-3-git-send-email-armbru@redhat.com>
Reviewed-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
rnmax can be directly calculated using machine->maxram_size.
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's calculate it once and reuse it.
Suggested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The restrictions for memory calculation belong to the sclp device.
Let's move the calculation to that point, so we are able to unify it for
both s390 machines. The sclp device is the first device to be initialized.
It performs the calculation and safely stores it in the machine, where
other parts of the system can access and reuse it.
The memory hotplug device is now only created when it is really needed.
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
If no memory hotplug device was created, the sclp command facility is
not exposed (SCLP_FC_ASSIGN_ATTACH_READ_STOR). We therefore have no
memory hotplug and should correctly report SCLP_RC_INVALID_SCLP_COMMAND
if any such command is executed.
This gets rid of these ugly asserts that could have been triggered
for the s390-virtio machine.
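
The intended behaviour boils down to this (sketch; "mhd" stands for the
memory hotplug device, NULL when none was created):

    if (!mhd) {
        sccb->h.response_code = cpu_to_be16(SCLP_RC_INVALID_SCLP_COMMAND);
        return;
    }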
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
ram_slots and maxram_size have already been parsed and verified by
common code for us.
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's make that function a method of the new sclp device, keeping
the wrapper for existing users.
We can now let go of get_event_facility().
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>