All devices that register a screen dump callback via
graphic_console_init() are updated.
The new argument is not used in this commit. Error handling will
be added to each device individually later.
This change is a preparation for converting the screendump command
to QAPI.
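For illustration, a device-side callback then has roughly this shape
(a sketch only: the device name is made up, and it assumes the new
argument is an Error ** pointer that is accepted but ignored for now):

    typedef struct FooState FooState;   /* hypothetical device state */

    static void foo_screen_dump(void *opaque, const char *filename,
                                Error **errp)
    {
        FooState *s = opaque;

        (void)s;
        (void)errp;   /* accepted but unused; per-device error handling
                       * is wired up in later commits */
        /* ... write the current frame buffer to 'filename' ... */
    }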
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Switch the console only if needed, and also pass down whether the console
was switched or not, because a displaysurface redraw is only needed when
the console was switched.
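On the device side this boils down to something like the following
(sketch; it assumes the flag arrives as a bool parameter of the screen
dump callback, and the names are illustrative):

    static void mydev_screen_dump(void *opaque, const char *filename,
                                  bool cswitch)
    {
        MyDevState *s = opaque;

        if (cswitch) {
            mydev_invalidate_display(s);  /* full redraw only if switched */
        }
        /* ... dump the (now up-to-date) surface to 'filename' ... */
    }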
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Replace device_init() with generalized type_init().
While at it, unify naming convention: type_init([$prefix_]register_types)
Also, type_init() is a function, so add a preceding blank line where
necessary and don't put a semicolon after the closing brace.
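The resulting boilerplate in each file looks roughly like this (foo is a
placeholder; the point is the convention, not any particular device):

    static void foo_register_types(void)
    {
        type_register_static(&foo_type_info);
    }

    type_init(foo_register_types)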
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Anthony Liguori <anthony@codemonkey.ws>
Cc: malc <av1474@comtv.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Instead of each device knowing or guessing the guest page size,
just pass the desired size of the dirtied memory area.
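As a sketch (device and field names are made up, and today's type names
are used), a framebuffer device simply reports the exact byte range it
touched and lets the core handle any page-granularity bookkeeping:

    static void fb_mark_dirty(FBState *s, hwaddr addr, hwaddr len)
    {
        /* addr/len are byte offsets into the device's RAM region */
        memory_region_set_dirty(&s->vram, addr, len);
    }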
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
This was done in a mostly automated fashion. I did it in three steps and then
rebased it into a single step which avoids repeatedly touching every file in
the tree.
The first step was a sed-based addition of the parent type to the subclass
registration functions.
The second step was another sed-based removal of subclass registration functions
while also adding virtual functions from the base class into a class_init
function as appropriate.
Finally, a python script was used to convert the DeviceInfo structures and
qdev_register_subclass functions to TypeInfo structures, class_init functions,
and type_register_static calls.
We are almost fully converted to QOM after this commit.
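The per-device end result has roughly this shape (sysbus example; the
foo names are placeholders and the details vary by device):

    static void foo_class_init(ObjectClass *klass, void *data)
    {
        SysBusDeviceClass *k = SYS_BUS_DEVICE_CLASS(klass);

        k->init = foo_init;            /* previously set via DeviceInfo */
    }

    static TypeInfo foo_info = {
        .name          = "foo",
        .parent        = TYPE_SYS_BUS_DEVICE,
        .instance_size = sizeof(FooState),
        .class_init    = foo_class_init,
    };

    static void foo_register_types(void)
    {
        type_register_static(&foo_info);
    }

    type_init(foo_register_types)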
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This converts three devices because apic and ioapic are subclasses of sysbus.
Converting subclasses independently of their base class is prohibitively hard.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Instead of each target knowing or guessing the guest page size,
just pass the desired size of the dirtied memory area.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Currently creating a memory region automatically registers it for
live migration. This differs from other state (which is enumerated
in a VMStateDescription structure) and ties the live migration code
into the memory core.
Decouple the two by introducing a separate API, vmstate_register_ram(),
for registering a RAM block for migration. Currently the same
implementation is reused, but later it can be moved into a separate list,
and registrations can be moved to VMStateDescription blocks.
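Call sites end up doing roughly the following (sketch; the device and
state names are made up):

    static void foo_register_ram(FooState *s, DeviceState *dev)
    {
        /* s->vram was already created as a RAM MemoryRegion elsewhere;
         * this just registers the block for live migration explicitly */
        vmstate_register_ram(&s->vram, dev);
    }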
Signed-off-by: Avi Kivity <avi@redhat.com>
As stated before, devices can be little, big or native endian. The
target endianness is not of their concern, so we need to push things
down a level.
This patch adds a parameter to cpu_register_io_memory that allows a
device to choose its endianness. For now, all devices simply choose
native endian, because that's the same behavior as before.
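A typical call site therefore becomes (sketch; foo_mem_read/foo_mem_write
stand for the device's existing read/write tables and s for its state):

    /* new: the device states its endianness explicitly */
    s->io_index = cpu_register_io_memory(foo_mem_read, foo_mem_write, s,
                                         DEVICE_NATIVE_ENDIAN);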
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
These will be used to generate unique id strings for ramblocks. The name
field is required; the device pointer is optional, as most callers don't
have a device. When there's no device, or the device isn't a child of
a bus implementing BusInfo.get_dev_path, the name should be unique for
the platform.
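Call sites then look roughly like this (sketch; the exact signature is
not quoted here, so take the argument order and names as illustrative):

    /* device RAM: the DeviceState lets the id include a stable dev path */
    s->vram_offset = qemu_ram_alloc(&dev->qdev, "foo.vram", vram_size);

    /* no device available: the name alone must be platform-unique */
    ram_offset = qemu_ram_alloc(NULL, "pc.ram", ram_size);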
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add type safety to qdev reset handlers, by declaring them as
DeviceState * rather than void *.
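A handler then looks like this (sketch; the state type and the way it is
recovered from the DeviceState are illustrative):

    static void foo_reset(DeviceState *dev)
    {
        FooState *s = container_of(dev, FooState, qdev);

        s->enabled = false;   /* hypothetical: back to power-on state */
    }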
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b72.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This naming was used in the kvm tree and is easier to remember.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Sorry folks, but it has to be. One more of these invasive qdev patches.
We have a serious design bug in the qdev interface: device init
callbacks can't signal failure because the init() callback has no
return value. This patch fixes it.
We already have one case in-tree where this is needed:
try -device virtio-blk-pci (without drive= specified) and watch qemu
segfault. This patch fixes that crash, too.
With usb+scsi being converted to qdev we'll get more devices where the
init callback can fail for various reasons.
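With the int return, a device can bail out cleanly instead of crashing
later; a sketch (names and the exact callback prototype are simplified
and illustrative):

    static int foo_init(DeviceState *dev)
    {
        FooState *s = container_of(dev, FooState, qdev);

        if (!s->conf.bs) {                      /* e.g. no drive= given */
            fprintf(stderr, "foo: drive property not set\n");
            return -1;                          /* device creation fails */
        }
        return 0;
    }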
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This patch is a major overhaul of the device properties. The properties
are saved directly in the device state struct now; the linked list of
property values is gone.
Advantages:
* We don't have to maintain the list with the property values.
* The value in the property list and the value actually used by
the device can't go out of sync any more (used to happen for
the pci.devfn == -1 case) because there is only one place where
the value is stored.
* A record describing the property is required now; you can't set
random properties any more.
There are bus-specific and device-specific properties. The former
should be used for properties common to all bus drivers. A typical
use case is bus addressing, e.g. pci.devfn and i2c.address.
Properties have a PropertyInfo struct attached with name, size and
function pointers to parse and print properties. A few common property
types have PropertyInfos defined in qdev-properties.c. Drivers are free
to implement their own very special property parsers if needed.
Properties can have default values. If unset they are zero-filled.
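A driver then declares its properties as a static table, roughly like
this (the DEFINE_PROP_* spelling below is the one used in today's tree
and is meant purely as illustration):

    static Property foo_properties[] = {
        DEFINE_PROP_UINT32("addr", FooState, addr, 0),     /* default 0  */
        DEFINE_PROP_INT32("devfn", FooState, devfn, -1),   /* default -1 */
        DEFINE_PROP_END_OF_LIST(),
    };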
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This reverts commit 8217606e6e (and
updates later added users of qemu_register_reset), we solved the
problem it originally addressed less invasively.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The parameter is always zero except when registering the three internal
io regions (ROM, unassigned, notdirty). Remove the parameter to reduce
the API's power, thus facilitating future change.
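Sketch of the call-site change (assuming the parameter in question is
the leading io_index of cpu_register_io_memory(), with 0 meaning
"allocate one for me"):

    /* before: a fixed index could in principle be requested */
    index = cpu_register_io_memory(0, foo_read, foo_write, s);

    /* after: the index is always allocated internally */
    index = cpu_register_io_memory(foo_read, foo_write, s);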
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add the parameter 'order' to qemu_register_reset and sort callbacks on
registration. On system reset, callbacks with lower order will be
invoked before those with higher order. Update all existing users to the
standard order 0.
Note: At least for x86, the existing users seem to assume that handlers
are called in their registration order. Therefore, the patch preserves
this property. If someone feels bored, (s)he could try to identify this
dependency and express it properly on callback registration.
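Sketch of an updated call site (the handler and opaque are placeholders):

    /* all existing users keep their relative ordering at order 0 */
    qemu_register_reset(foo_reset, 0, s);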
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This patch adds a DisplayAllocator interface that allows display
frontends (sdl in particular) to provide a preallocated display buffer
for the graphical backend to use.
Whenever a graphical backend cannot use
qemu_create_displaysurface_from because its own internal pixel format
cannot be exported directly (text mode or graphical mode with color
depth 8 or 24), it creates another display buffer in memory using
qemu_create_displaysurface and does the conversion.
This new buffer needs to be blitted into the sdl surface buffer every time
we need to update portions of the screen.
We can avoid this by using the DisplayAllocator interface: sdl provides
its own implementation of qemu_create_displaysurface, giving back the sdl
surface buffer directly (as we used to do before the DisplayState
changes).
Since the buffer returned by sdl could be in bgr format, we need to put
the handlers for that case back in.
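For reference, the interface amounts to a small set of hooks along these
lines (sketch; the member names are illustrative rather than a verbatim
quote of the header):

    typedef struct DisplayAllocator {
        DisplaySurface *(*create_displaysurface)(int width, int height);
        DisplaySurface *(*resize_displaysurface)(DisplaySurface *surface,
                                                 int width, int height);
        void (*free_displaysurface)(DisplaySurface *surface);
    } DisplayAllocator;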
This approach is good if the two following conditions are true:
1) the sdl surface is a software surface that resides in main memory;
2) the host display color depth is either 16 or 32 bpp.
If the first condition is false, we can get bad performance when using
sdl and vnc together.
If the second condition is false, performance is certainly not going to
improve, but it shouldn't get worse either.
The first condition is always true, at least on linux/X11 systems, and I
believe it is true on other platforms as well.
The second condition is true in the vast majority of cases.
This patch should also have the good side effect of solving the sdl
2D slowness malc was reporting on MacOS, because SDL_BlitSurface is not
going to be called anymore when the guest is in text mode or 24bpp.
However, the root problem is still present, so I suspect we may
still see some slowness on MacOS when the guest is in 32 or 16 bpp.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6839 c046a42c-6fe2-441c-8c8c-71466251a162
Patch 5/7
This patch changes the graphical_console_init function to return an
allocated DisplayState instead of a QEMUConsole.
This patch contains just the graphical_console_init change and a few
other modifications, mainly in console.c and vl.c.
It was necessary to move the initialization of the display frontends
(e.g. sdl and vnc) after machine->init in vl.c.
This patch does *not* include any required changes to any device; those
changes come with the following patches.
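A device-side call then looks roughly like this (sketch; the callback
names are placeholders):

    /* the device gets back a DisplayState it can render into */
    s->ds = graphic_console_init(foo_update_display,
                                 foo_invalidate_display,
                                 foo_screen_dump,
                                 foo_text_update,
                                 s);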
Patch 6/7
This patch changes the QEMUMachine init functions not to take a
DisplayState as an argument, because it is not needed any more.
In a few places the graphic hardware initialization function was called
only if the DisplayState was not NULL; now it is always called.
Apart from these cases, the rest are all mechanical substitutions.
Patch 7/7
This patch updates the graphic device code to use the new
graphical_console_init function.
As in the previous patch, in a few places graphical_console_init was
called only if the DisplayState was not NULL; now it is always called.
Apart from these cases, the rest are all mechanical substitutions.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6344 c046a42c-6fe2-441c-8c8c-71466251a162
Do not handle bgr host displays in the backends.
Right now a bgr flag exists so that sdl can set it if the SDL_Surface
is bgr; afterwards the graphic device (e.g. vga.c) does the needed
conversion.
With this patch series, it is sdl that is responsible for rendering the
format provided by the graphic device, which must provide a
DisplaySurface (ds->surface) in 16 or 32 bpp, rgb.
sdl then creates an SDL_Surface from the given DisplaySurface and
blits it into the main SDL_Surface using SDL_BlitSurface.
Everything is handled by sdl transparently, because SDL_BlitSurface is
perfectly capable of handling bgr displays by itself.
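Conceptually, the sdl side does something like the following with plain
SDL 1.2 calls (sketch, not the literal code; rmask/gmask/bmask come from
the host surface's pixel format, rec is the dirty rectangle and
real_screen is the main sdl surface):

    SDL_Surface *tmp;

    /* wrap the DisplaySurface pixels and let SDL convert rgb<->bgr */
    tmp = SDL_CreateRGBSurfaceFrom(ds_get_data(ds),
                                   ds_get_width(ds), ds_get_height(ds),
                                   ds_get_bits_per_pixel(ds),
                                   ds_get_linesize(ds),
                                   rmask, gmask, bmask, 0);
    SDL_BlitSurface(tmp, &rec, real_screen, &rec);
    SDL_FreeSurface(tmp);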
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6335 c046a42c-6fe2-441c-8c8c-71466251a162