Move the AIOCB allocation code to use a dedicated structure, AIOPool.
AIOCB-specific information, such as the AIOCB size and cancellation routine, is
moved into the pool.
At present, there is exactly one pool per block format driver, maintaining
the status quo.
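To make the idea concrete, here is a minimal sketch of such a pool; the
type, field and function names below are illustrative, not the exact
definitions introduced by the patch:

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct AIOCBHeader AIOCBHeader;

    typedef struct AIOPool {
        size_t aiocb_size;                 /* size of the driver's AIOCB type */
        void (*cancel)(AIOCBHeader *acb);  /* driver-specific cancellation hook */
        AIOCBHeader *free_aiocb;           /* free list of recycled AIOCBs */
    } AIOPool;

    struct AIOCBHeader {
        AIOPool *pool;                     /* pool this AIOCB was taken from */
        AIOCBHeader *next;                 /* next element on the free list */
    };

    /* Allocate an AIOCB from the pool, reusing a recycled one when possible. */
    static AIOCBHeader *aio_pool_get(AIOPool *pool)
    {
        AIOCBHeader *acb;

        if (pool->free_aiocb) {
            acb = pool->free_aiocb;
            pool->free_aiocb = acb->next;
        } else {
            acb = calloc(1, pool->aiocb_size);
        }
        if (acb) {
            acb->pool = pool;
        }
        return acb;
    }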
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6870 c046a42c-6fe2-441c-8c8c-71466251a162
There may be cases where the guest does not want the avail queue
interrupt, even when it's empty. For the virtio-net case, the
guest may use a different buffering scheme or decide polling for
used buffers is more efficient. This can be accomplished by simply
checking whether the guest has acknowledged the existing
notify-on-empty flag.
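A hedged sketch of the check; the feature bit number comes from the
virtio spec, while the function and parameter names are made up for
illustration:

    #include <stdint.h>

    #define VIRTIO_F_NOTIFY_ON_EMPTY 24   /* feature bit from the virtio spec */

    /* Raise the "avail queue is empty" interrupt only if the guest acked
     * the feature; a guest that polls simply never acknowledges the bit. */
    static int should_notify_on_empty(uint32_t guest_features, int ring_is_empty)
    {
        if (!ring_is_empty) {
            return 0;
        }
        return (guest_features & (1u << VIRTIO_F_NOTIFY_ON_EMPTY)) != 0;
    }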
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6865 c046a42c-6fe2-441c-8c8c-71466251a162
The RXDMT0 interrupt is supposed to fire when the number of free
RX descriptors drops to some fraction of the total descriptors.
However, in practice it seems like we're adding this interrupt
cause on every RX. Fix the logic to treat (tail - head) as the
number of free entries rather than the number of used entries.
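Roughly, the corrected accounting looks like this (simplified sketch,
not the actual e1000.c hunk):

    #include <stdint.h>

    /* Descriptors between head and tail are still free for the NIC to fill. */
    static uint32_t rx_free_descs(uint32_t head, uint32_t tail, uint32_t ring_size)
    {
        return (tail - head + ring_size) % ring_size;
    }

    /* RXDMT0 should only fire once the free count drops to the threshold. */
    static int rxdmt0_due(uint32_t head, uint32_t tail, uint32_t ring_size,
                          uint32_t threshold)
    {
        return rx_free_descs(head, tail, ring_size) <= threshold;
    }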
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6864 c046a42c-6fe2-441c-8c8c-71466251a162
According to the Intel specs, lsl performs a check against NULL for the
provided selector, just like lar does. helper_lar() includes the
corresponding code; helper_lsl() was lacking it so far.
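The missing check boils down to something like the following
(simplified sketch; helper_lsl() of course operates on the full
descriptor state):

    #include <stdint.h>

    /* Return 0 if LSL must fail (ZF stays cleared), 1 to go on with the
     * usual descriptor checks -- mirroring what helper_lar() already does. */
    static int lsl_selector_ok(uint16_t selector)
    {
        if ((selector & 0xfffc) == 0) {   /* NULL selector */
            return 0;
        }
        return 1;
    }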
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6863 c046a42c-6fe2-441c-8c8c-71466251a162
This patch makes the vnc server code skip screen refreshes in case
there is data in the output buffer. This reduces the refresh rate to
throttle the bandwidth needed in case the network link is saturated.
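The decision itself is tiny; a sketch with an illustrative struct (the
real code keeps the output buffer in its per-client state):

    #include <stddef.h>

    typedef struct VncClientState {
        size_t output_len;   /* bytes queued but not yet written to the socket */
    } VncClientState;

    /* Skip the periodic refresh while earlier updates are still queued;
     * a later timer tick catches up once the buffer drains. */
    static int vnc_client_wants_refresh(const VncClientState *vc)
    {
        return vc->output_len == 0;
    }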
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6862 c046a42c-6fe2-441c-8c8c-71466251a162
This patch kills the old_data hack in the qemu vnc server and replaces
it with a clean separation of the guest-visible display surface and
the vnc server display surface. Both guest and server surface have
their own dirty bitmap for tracking screen updates.
Workflow is this:
(1) The guest writes to the guest surface. With shared buffers being
active, the guest writes are directly visible to the vnc server code.
Note that this may happen in parallel to the vnc server code running
(today only in xenfb, once we have vcpu threads in qemu also for
other display adapters).
(2) vnc_update() callback tags the specified area in the guest dirty
map.
(3) vnc_update_client() will first walk through the guest dirty map. It
compares the guest and server surfaces for all regions tagged dirty,
and if the screen content really did change, the server surface and
dirty map are updated (see the sketch below).
Note: the old code used old_data in a similar way, so this does *not*
introduce an extra memcpy.
(4) Then vnc_update_client() will send the updates to the vnc client
using the server surface and dirty map.
Note: the old code used the guest-visible surface instead, causing
screen corruption when guest screen updates ran in parallel.
The separate dirty bitmap also has the nice effect that forced screen
updates can be done cleanly by simply tagging the area in both guest and
server dirty map. The old, hackish way was memset(old_data, 42, size)
to trick the code checking for screen changes.
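A very rough sketch of step (3), with made-up types and a per-scanline
dirty map for brevity (the real code tracks dirty state at a finer
granularity):

    #include <string.h>

    struct surface {
        unsigned char *data;
        int linesize;        /* bytes per scanline */
        int height;
    };

    /* Copy dirty guest scanlines to the server surface, tagging the server
     * dirty map only for lines whose pixels actually changed. */
    static void sync_dirty_lines(const struct surface *guest,
                                 struct surface *server,
                                 unsigned char *guest_dirty,
                                 unsigned char *server_dirty)
    {
        for (int y = 0; y < guest->height; y++) {
            if (!guest_dirty[y]) {
                continue;
            }
            guest_dirty[y] = 0;
            const unsigned char *src = guest->data + y * guest->linesize;
            unsigned char *dst = server->data + y * server->linesize;
            if (memcmp(src, dst, guest->linesize) != 0) {
                memcpy(dst, src, guest->linesize);
                server_dirty[y] = 1;    /* step (4) will send this line */
            }
        }
    }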
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6860 c046a42c-6fe2-441c-8c8c-71466251a162
Most 64 bit architectures I'm aware of support running 32 bit code
of the same architecture as well.
So x86_64 can run i386 code easily and ppc64 can run ppc code.
Unfortunately, the current checks are pretty strict, so you can only
load e.g. an x86_64 elf binary on qemu-system-x86_64, but not an i386 one.
This can get really annoying. I first encountered this issue with
my multiboot patch, where qemu-system-x86_64 was unable to load an
i386 elf binary because the elf loader rejected it.
The same thing happened again on PPC64 now. The firmware we're loading
is a PPC32 elf binary, as it's shared with PPC32. But the platform is
PPC64.
Right now there is a hack for this in the ppc cpu.h definition that
simply sets the type to PPC32 in system emulation mode. While that
works fine for the firmware, it's no good if you also want to load a
PPC64 kernel with -kernel.
So in order to solve this mess, I figured the easiest way is to make
the elf loader aware of platforms that are backwards compatible. For
now I was only sure that x86_64 does i386 and ppc64 does ppc32, but
maybe there are other combinations too.
This patch is a prerequisite for having a working -kernel option on
PPC64.
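The compatibility check amounts to something like this (sketch only;
the EM_* values are the standard ELF machine numbers, the function
name is made up):

    #define EM_386     3
    #define EM_PPC    20
    #define EM_PPC64  21
    #define EM_X86_64 62

    static int elf_machine_ok(int target_machine, int file_machine)
    {
        if (file_machine == target_machine) {
            return 1;
        }
        /* 64-bit targets known to run their 32-bit counterpart's code. */
        if (target_machine == EM_X86_64 && file_machine == EM_386) {
            return 1;
        }
        if (target_machine == EM_PPC64 && file_machine == EM_PPC) {
            return 1;
        }
        return 0;
    }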
Signed-off-by: Alexander Graf <alex@csgraf.de>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6855 c046a42c-6fe2-441c-8c8c-71466251a162
A pci config write may remap the vga linear frame buffer, confusing the
memory slot dirty logging logic.
Fixes Windows with -vga std.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6852 c046a42c-6fe2-441c-8c8c-71466251a162
Otherwise, slot tracking gets confused.
This fixes a screen corruption bug with Ubuntu guest installation.
Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6851 c046a42c-6fe2-441c-8c8c-71466251a162
When checking that the size of the control virtqueue return field
is sufficient, use the correct sg list.
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6845 c046a42c-6fe2-441c-8c8c-71466251a162
As previously discussed, this patch removes the non-portable use of
asprintf(), replacing it with malloc+snprintf.
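The substitution pattern looks roughly like this (hypothetical helper,
not one of the actual hunks); a first snprintf() pass sizes the buffer,
a second one formats into it:

    #include <stdio.h>
    #include <stdlib.h>

    static char *format_name(const char *base, int index)
    {
        int len = snprintf(NULL, 0, "%s-%d", base, index);
        char *buf = malloc(len + 1);

        if (buf) {
            snprintf(buf, len + 1, "%s-%d", base, index);
        }
        return buf;   /* caller frees */
    }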
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6843 c046a42c-6fe2-441c-8c8c-71466251a162
Provide an empty line as the last entry in the command line history,
just as bash does, for example.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6842 c046a42c-6fe2-441c-8c8c-71466251a162
This patch adds and uses #defines for the remaining hardcoded PCI
device IDs. It also moves definitions taken from linux/pci_ids.h
into a separate header (hw/pci_ids.h), removes the 'RTL' from
PCI_DEVICE_ID_REALTEK_RTL8029, and renames PCI_DEVICE_ID_FSL_E500
to PCI_DEVICE_ID_MPC8533E to match Linux's definition.
Changes in v2:
* Don't use C99-style comments
* Move definitions from linux/pci_ids.h into a separate header
* Rename PCI_DEVICE_ID_FSL_E500 to PCI_DEVICE_ID_MPC8533E
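For illustration of the naming style only (the actual values and the
full list live in hw/pci_ids.h):

    #define PCI_VENDOR_ID_REALTEK         0x10ec
    #define PCI_DEVICE_ID_REALTEK_8029    0x8029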
Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6841 c046a42c-6fe2-441c-8c8c-71466251a162
Since vga_draw_graphic is only called by vga_hw_update when the console
associated with the graphic card is active, we don't need to check if
the current console is active using is_graphic_console.
I suspect I introduced these checks when the console switching mechanism
didn't work as it does now.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6840 c046a42c-6fe2-441c-8c8c-71466251a162
This patch adds a DisplayAllocator interface that allows display
frontends (sdl in particular) to provide a preallocated display buffer
for the graphical backend to use.
Whenever a graphical backend cannot use
qemu_create_displaysurface_from because its own internal pixel format
cannot be exported directly (text mode or graphical mode with color
depth 8 or 24), it creates another display buffer in memory using
qemu_create_displaysurface and does the conversion.
This new buffer needs to be blitted into the sdl surface buffer every time
we need to update portions of the screen.
We can avoid this using the DisplayAllocator interface: sdl provides its
own implementation of qemu_create_displaysurface, giving back the sdl
surface buffer directly (as we used to do before the DisplayState
changes).
Since the buffer returned by sdl could be in bgr format, we need to put
the handlers for that case back in.
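The interface is essentially a small set of hooks the frontend can
provide; a sketch of its shape (field names are approximate):

    typedef struct DisplaySurface DisplaySurface;

    typedef struct DisplayAllocator {
        DisplaySurface *(*create_displaysurface)(int width, int height);
        DisplaySurface *(*resize_displaysurface)(DisplaySurface *surface,
                                                 int width, int height);
        void (*free_displaysurface)(DisplaySurface *surface);
    } DisplayAllocator;

A backend that needs a new buffer asks the registered allocator instead
of allocating one itself, so sdl can hand out its own surface memory.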
This approach is good if the two following conditions are true:
1) the sdl surface is a software surface that resides in main memory;
2) the host display color depth is either 16 or 32 bpp.
If the first condition is false, we can see bad performance when using
sdl and vnc together.
If the second condition is false, performance is certainly not going to
improve, but it shouldn't get worse either.
The first condition is always true, at least on linux/X11 systems, and
I believe it is true on other platforms as well.
The second condition is true in the vast majority of the cases.
This patch should also have the good side effect of solving the sdl
2D slowness malc was reporting on MacOS, because SDL_BlitSurface is not
going to be called anymore when the guest is in text mode or 24bpp.
However, the root problem is still present, so I suspect we may
still see some slowness on MacOS when the guest is in 32 or 16 bpp.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6839 c046a42c-6fe2-441c-8c8c-71466251a162
Rename bswap_i32 into bswap32_i32 and bswap_i64 into bswap64_i64
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6829 c046a42c-6fe2-441c-8c8c-71466251a162
The changes introduced by r6824 broke a subtle, and admittedly obscure, aspect
of the block API. While bdrv_{pread,pwrite} return the number of bytes read
or written upon success, bdrv_{read,write} return zero upon success.
When using bdrv_pread for bdrv_read, special care must be taken to handle this
case.
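A hypothetical helper showing the kind of care needed (assuming the
block API of the time, with 512-byte sectors):

    #include "block.h"   /* BlockDriverState, bdrv_pread() */

    static int read_sectors_via_pread(BlockDriverState *bs, int64_t sector_num,
                                      uint8_t *buf, int nb_sectors)
    {
        int ret = bdrv_pread(bs, sector_num * 512, buf, nb_sectors * 512);

        if (ret < 0) {
            return ret;   /* propagate the error code unchanged */
        }
        return 0;         /* bdrv_read()-style success, drop the byte count */
    }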
This fixes certain guest images (notably linux-0.2 provided on the qemu
website).
Reported-by: malc <av1474@comtv.ru>
Reported-by: Herve Poussineau <hpoussin@reactos.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6828 c046a42c-6fe2-441c-8c8c-71466251a162
From: Xiantao Zhang <xiantao.zhang@intel.com>
Date: Tue, 3 Mar 2009 13:33:13 +0800
Subject: [PATCH] Split ioapic logic from the current apic.
Add a new ioapic.c to hold ioapic's logic, and also
make it work for ia64.
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
---
Makefile.target | 2 +-
hw/apic.c | 237 +++----------------------------------------------
hw/ioapic.c | 263 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
hw/pc.h | 5 +-
4 files changed, 281 insertions(+), 226 deletions(-)
create mode 100644 hw/ioapic.c
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6827 c046a42c-6fe2-441c-8c8c-71466251a162
Ported from the KVM tree: Synchronize the qemu cpu state with kvm's
before invoking various monitor info commands (like 'info registers').
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6826 c046a42c-6fe2-441c-8c8c-71466251a162
This is a backport of the guest debugging support for the KVM
accelerator that is now part of the KVM tree. It implements the reworked
KVM kernel API for guest debugging (KVM_CAP_SET_GUEST_DEBUG) which is
not yet part of any mainline kernel but will probably be 2.6.30 stuff.
So far only x86 is supported, but PPC is expected to catch up soon.
Core features are:
- unlimited soft-breakpoints via code patching (see the sketch below)
- hardware-assisted x86 breakpoints and watchpoints
Changes in this version:
- use generic hook cpu_synchronize_state to transfer registers between
user space and kvm
- push kvm_sw_breakpoints into KVMState
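The "soft-breakpoints via code patching" item boils down to the classic
int3 trick; a generic illustration of the technique, not the kvm-all.c
code:

    #include <stdint.h>

    struct sw_breakpoint {
        uint64_t pc;            /* guest address being patched */
        uint8_t  saved_insn;    /* original byte, restored on removal */
    };

    static void insert_sw_breakpoint(struct sw_breakpoint *bp,
                                     uint8_t *guest_mem, uint64_t pc)
    {
        bp->pc = pc;
        bp->saved_insn = guest_mem[pc];   /* remember what we overwrite */
        guest_mem[pc] = 0xcc;             /* int3: trap back to the debugger */
    }

    static void remove_sw_breakpoint(const struct sw_breakpoint *bp,
                                     uint8_t *guest_mem)
    {
        guest_mem[bp->pc] = bp->saved_insn;
    }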
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6825 c046a42c-6fe2-441c-8c8c-71466251a162
Now that scsi generic no longer uses bdrv_pread() and bdrv_pwrite(), we can
drop the corresponding internal APIs, which overlap bdrv_read()/bdrv_write()
and, being byte oriented, are unnatural for a block device.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6824 c046a42c-6fe2-441c-8c8c-71466251a162
Add an internal API for the generic block layer to send scsi generic commands
to block format drivers. This means block format drivers no longer need
to consider overloaded nb_sectors parameters.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6823 c046a42c-6fe2-441c-8c8c-71466251a162
When a scsi device is backed by a scsi generic device instead of an
ordinary host block device, the block API is abused in a couple of annoying
ways:
- nb_sectors is negative, and specifies a byte count instead of a sector count
- offset is ignored, since scsi-generic is essentially a packet protocol
This overloading makes hacking the block layer difficult. Remove it by
introducing a new explicit API for scsi-generic devices. The new API
is still backed by the old implementation, but at least the users are
insulated.
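A hypothetical shape for such an API (names invented for illustration;
the point is that the packet and its direction are spelled out instead
of being smuggled through a negative nb_sectors):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct BlockDriverState BlockDriverState;

    typedef struct ScsiGenericRequest {
        const uint8_t *cdb;     /* SCSI command descriptor block */
        size_t cdb_len;
        uint8_t *buf;           /* data-in or data-out buffer */
        size_t buf_len;
        int to_device;          /* 1 = write to device, 0 = read from device */
    } ScsiGenericRequest;

    /* Send one packet; returns 0 on success, negative errno on failure. */
    int bdrv_sg_send(BlockDriverState *bs, ScsiGenericRequest *req);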
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6822 c046a42c-6fe2-441c-8c8c-71466251a162
This series is broken by design as it requires expensive IO operations at
open time causing very long delays when starting a virtual machine for the
first time.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6816 c046a42c-6fe2-441c-8c8c-71466251a162
This series is broken by design as it requires expensive IO operations at
open time causing very long delays when starting a virtual machine for the
first time.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6815 c046a42c-6fe2-441c-8c8c-71466251a162
This series is broken by design as it requires expensive IO operations at
open time causing very long delays when starting a virtual machine for the
first time.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6814 c046a42c-6fe2-441c-8c8c-71466251a162
This series is broken by design as it requires expensive IO operations at
open time causing very long delays when starting a virtual machine for the
first time.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6813 c046a42c-6fe2-441c-8c8c-71466251a162
This series is broken by design as it requires expensive IO operations at
open time causing very long delays when starting a virtual machine for the
first time.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6812 c046a42c-6fe2-441c-8c8c-71466251a162
Emulating fldl on arm doesn't seem to work too well. The problem is the
way qemu_ld64 is translated to arm instructions:
tcg_out_ld32_12(s, COND_AL, data_reg, addr_reg, 0);
tcg_out_ld32_12(s, COND_AL, data_reg2, addr_reg, 4);
Consider the case where data_reg==0, data_reg2==1, and addr_reg==0. The first
load overwrites addr_reg. So let's guard it with an if (data_reg == addr_reg),
as sketched below.
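In the context of the snippet above, the guarded version might look
like this (sketch; the actual fix lives in tcg/arm/tcg-target.c):

    if (data_reg == addr_reg) {
        /* Low word would clobber the address: load the high word first. */
        tcg_out_ld32_12(s, COND_AL, data_reg2, addr_reg, 4);
        tcg_out_ld32_12(s, COND_AL, data_reg,  addr_reg, 0);
    } else {
        tcg_out_ld32_12(s, COND_AL, data_reg,  addr_reg, 0);
        tcg_out_ld32_12(s, COND_AL, data_reg2, addr_reg, 4);
    }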
(Pablo Virolainen)
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6808 c046a42c-6fe2-441c-8c8c-71466251a162