At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b72.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This naming was used in the kvm tree and is easier to remember.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Sorry folks, but it has to be. One more of these invasive qdev patches.
We have a serious design bug in the qdev interface: device init
callbacks can't signal failure because the init() callback has no
return value. This patch fixes it.
We already have one case in-tree where this is needed:
try -device virtio-blk-pci (without drive= specified) and watch qemu
segfault. This patch fixes that, too.
With usb+scsi being converted to qdev we'll get more devices where the
init callback can fail for various reasons.
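The shape of the change, as a rough sketch (the typedef and the example
device function below are illustrative, not the exact tree contents):

    /* before: no way to report failure from a device init callback */
    typedef void (*qdev_initfn)(DeviceState *dev, DeviceInfo *info);

    /* after: 0 on success, -1 on failure; qdev_init() can then
     * propagate the error instead of continuing with a broken device */
    typedef int (*qdev_initfn)(DeviceState *dev, DeviceInfo *info);

    /* illustrative device init using the new contract */
    static int my_device_init(DeviceState *dev, DeviceInfo *info)
    {
        MyDeviceState *s = (MyDeviceState *)dev;  /* qdev is the first member */

        if (!s->drive) {        /* e.g. no drive= property was given */
            return -1;          /* fail cleanly instead of segfaulting later */
        }
        /* ... normal initialization ... */
        return 0;
    }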
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This patch is a major overhaul of the device properties. The properties
are now saved directly in the device state struct; the linked list of
property values is gone.
Advantages:
* We don't have to maintain the list with the property values.
* The value in the property list and the value actually used by
the device can't go out of sync any more (used to happen for
the pci.devfn == -1 case) because there is only one place where
the value is stored.
* A record describing the property is required now; you can't set
random properties any more.
There are bus-specific and device-specific properties. The former
should be used for properties common to all devices on a given bus. The
typical use case is bus addressing, e.g. pci.devfn and i2c.address.
Each property has a PropertyInfo struct attached, with a name, a size
and function pointers to parse and print the property. A few common
property types have PropertyInfos defined in qdev-properties.c. Drivers
are free to implement their own very special property parsers if needed.
Properties can have default values. If unset they are zero-filled.
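A rough sketch of what this looks like from a driver's point of view
(the field layout follows the description above; see qdev.h and
qdev-properties.c for the real definitions, the device is made up):

    /* property values now live directly in the device state struct */
    typedef struct MyDeviceState {
        DeviceState qdev;
        uint32_t addr;
    } MyDeviceState;

    /* the property record: name, type (PropertyInfo) and where the
     * value lives inside the state struct */
    static Property my_device_properties[] = {
        {
            .name   = "addr",
            .info   = &qdev_prop_uint32,  /* common type from qdev-properties.c */
            .offset = offsetof(MyDeviceState, addr),
            /* no default given -> the field stays zero-filled */
        },
        { /* end of list */ },
    };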
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This reverts commit 8217606e6e (and
updates later added users of qemu_register_reset); we solved the
problem it originally addressed in a less invasive way.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The parameter is always zero except when registering the three internal
io regions (ROM, unassigned, notdirty). Remove the parameter to reduce
the API's power, thus facilitating future change.
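Assuming the function in question is cpu_register_io_memory (the text
above does not name it), the change boils down to dropping the index
argument; the prototypes are sketched roughly:

    /* before: callers could ask for a specific slot */
    int cpu_register_io_memory(int io_index,
                               CPUReadMemoryFunc **mem_read,
                               CPUWriteMemoryFunc **mem_write,
                               void *opaque);

    /* after: the slot is always allocated internally */
    int cpu_register_io_memory(CPUReadMemoryFunc **mem_read,
                               CPUWriteMemoryFunc **mem_write,
                               void *opaque);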
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Add the parameter 'order' to qemu_register_reset and sort callbacks on
registration. On system reset, callbacks with lower order will be
invoked before those with higher order. Update all existing users to the
standard order 0.
Note: At least for x86, the existing users seem to assume that handlers
are called in their registration order. Therefore, the patch preserves
this property. If someone feels bored, (s)he could try to identify this
dependency and express it properly on callback registration.
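A minimal sketch of the registration side (the entry struct and list
handling here are illustrative, not the actual vl.c code; it assumes
QEMU's QEMUResetHandler typedef and qemu_mallocz allocator):

    typedef struct QEMUResetEntry {
        QEMUResetHandler *func;
        void *opaque;
        int order;
        struct QEMUResetEntry *next;
    } QEMUResetEntry;

    static QEMUResetEntry *reset_handlers;

    void qemu_register_reset(QEMUResetHandler *func, int order, void *opaque)
    {
        QEMUResetEntry **pre, *re = qemu_mallocz(sizeof(*re));

        re->func = func;
        re->opaque = opaque;
        re->order = order;
        /* keep the list sorted: lower order first, registration order
         * preserved among entries with equal order */
        for (pre = &reset_handlers; *pre && (*pre)->order <= order;
             pre = &(*pre)->next) {
            ;
        }
        re->next = *pre;
        *pre = re;
    }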
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Hi all,
this patch adds a DisplayAllocator interface that allows display
frontends (sdl in particular) to provide a preallocated display buffer
for the graphical backend to use.
Whenever a graphical backend cannot use
qemu_create_displaysurface_from because its own internal pixel format
cannot be exported directly (text mode or graphical mode with color
depth 8 or 24), it creates another display buffer in memory using
qemu_create_displaysurface and does the conversion.
This new buffer needs to be blitted into the sdl surface buffer every time
we need to update portions of the screen.
We can avoid this using the DisplayAllocator interface: sdl provides its
own implementation of qemu_create_displaysurface, giving back the sdl
surface buffer directly (as we used to do before the DisplayState
changes).
Since the buffer returned by sdl could be in bgr format, we need to put
the handlers for that case back in.
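The interface itself is small; roughly (only a sketch, see console.h
for the actual declaration):

    struct DisplayAllocator {
        DisplaySurface *(*create_displaysurface)(int width, int height);
        DisplaySurface *(*resize_displaysurface)(DisplaySurface *surface,
                                                 int width, int height);
        void (*free_displaysurface)(DisplaySurface *surface);
    };

sdl registers an allocator whose create_displaysurface returns a
DisplaySurface backed directly by the SDL_Surface pixels, so the
graphical backend renders straight into the buffer that ends up on
screen.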
This approach is good if the two following conditions are true:
1) the sdl surface is a software surface that resides in main memory;
2) the host display color depth is either 16 or 32 bpp.
If the first condition is false, we can get bad performance using sdl
and vnc together.
If the second condition is false, performance is certainly not going to
improve, but it shouldn't get worse either.
The first condition is always true, at least on linux/X11 systems, and
I believe it is true on other platforms as well.
The second condition is true in the vast majority of cases.
This patch should also have the good side effect of solving the sdl
2D slowness malc was reporting on MacOS, because SDL_BlitSurface is not
going to be called anymore when the guest is in text mode or 24bpp.
However, the root problem is still present, so I suspect we may
still see some slowness on MacOS when the guest is in 32 or 16 bpp.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6839 c046a42c-6fe2-441c-8c8c-71466251a162
Patch 5/7
This patch changes the graphical_console_init function to return an
allocated DisplayState instead of a QEMUConsole.
This patch contains just the graphical_console_init change and a few
other modifications, mainly in console.c and vl.c.
It was necessary to move the initialization of the display frontends
(e.g. sdl and vnc) after machine->init in vl.c.
This patch does *not* include any required changes to any device; those
changes come with the following patches.
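The shape of the change, sketched as before/after prototypes (the hw
callback typedef names here are a guess and may differ slightly from
the tree):

    /* before: the caller passes in the DisplayState and gets a console */
    QEMUConsole *graphical_console_init(DisplayState *ds,
                                        vga_hw_update_ptr update,
                                        vga_hw_invalidate_ptr invalidate,
                                        vga_hw_screen_dump_ptr screen_dump,
                                        vga_hw_text_update_ptr text_update,
                                        void *opaque);

    /* after: the DisplayState is allocated inside and returned */
    DisplayState *graphical_console_init(vga_hw_update_ptr update,
                                         vga_hw_invalidate_ptr invalidate,
                                         vga_hw_screen_dump_ptr screen_dump,
                                         vga_hw_text_update_ptr text_update,
                                         void *opaque);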
Patch 6/7
This patch changes the QEMUMachine init functions not to take a
DisplayState as an argument because it is not needed any more.
In a few places the graphic hardware initialization function was called
only if the DisplayState was not NULL; now it is always called.
Apart from these cases, the rest are all mechanical substitutions.
Patch 7/7
This patch updates the graphic device code to use the new
graphical_console_init function.
As in the previous patch, in a few places graphical_console_init was
called only if the DisplayState was not NULL; now it is always called.
Apart from these cases, the rest are all mechanical substitutions.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6344 c046a42c-6fe2-441c-8c8c-71466251a162
Do not handle bgr host displays in the backends.
Right now a bgr flag exists so that sdl can set it if the SDL_Surface
is bgr.
The graphic device (e.g. vga.c) then does the needed conversion.
With this patch series it is sdl that is responsible for rendering the
format provided by the graphic device; the graphic device must provide
a DisplaySurface (ds->surface) in 16 or 32 bpp, rgb.
Afterwards sdl creates an SDL_Surface from the given DisplaySurface and
blits it into the main SDL_Surface using SDL_BlitSurface.
Everything is handled by sdl transparently, because SDL_BlitSurface is
perfectly capable of handling bgr displays by itself.
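Roughly what sdl does now, as a sketch using SDL 1.2 calls; the mask
values would come from the surface's pixel format, and real_screen,
dest_rect and the masks are made-up names for this example:

    /* wrap the DisplaySurface pixels in a temporary SDL_Surface ... */
    SDL_Surface *guest = SDL_CreateRGBSurfaceFrom(ds_get_data(ds),
                                                  ds_get_width(ds),
                                                  ds_get_height(ds),
                                                  ds_get_bits_per_pixel(ds),
                                                  ds_get_linesize(ds),
                                                  rmask, gmask, bmask, 0);

    /* ... and let SDL do the blit, converting to bgr if the host
     * display needs it */
    SDL_BlitSurface(guest, NULL, real_screen, &dest_rect);
    SDL_FreeSurface(guest);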
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6335 c046a42c-6fe2-441c-8c8c-71466251a162