Once there, use a better variable name.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Use qemu_invalidate_entry in cpu_physical_memory_unmap.
Do not lock mapcache entries in qemu_get_ram_ptr if the address falls in
the ramblock with offset == 0. We don't need to do that because the
callers of qemu_get_ram_ptr either try to map an entire block other than
the main ramblock, or map up to the end of a page to implement a single
read or write within the main ramblock.
If we don't lock mapcache entries in qemu_get_ram_ptr, we don't need to
call qemu_invalidate_entry in qemu_put_ram_ptr anymore, because we are
left with only the few long-lived block mappings requested by devices.
Also move the call to qemu_ram_addr_from_mapcache at the beginning of
qemu_ram_addr_from_host.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Introduce qemu_ram_ptr_length that takes an address and a size as
parameters rather than just an address.
Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
once rather than calling qemu_get_ram_ptr one time per page.
This is not only more efficient but also simplifies the logic of
the function.
Currently we are relying on the fact that all the pages are mapped
contiguously in qemu's address space: we have a check to make sure that
the virtual address returned by qemu_get_ram_ptr from the second call on
is consecutive. Now we make this more explicit by replacing all the
calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length,
passing a size argument.
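As an illustration, the per-page loop inside cpu_physical_memory_map can collapse into something like the following sketch (the exact qemu_ram_ptr_length signature is assumed from this series; start_ram_addr, todo and plen are just example variables):

    /* Sketch: map the whole requested range in one call instead of
     * calling qemu_get_ram_ptr once per page and checking that the
     * returned host pointers are contiguous. */
    ram_addr_t raddr = start_ram_addr;   /* ram offset of the first page */
    ram_addr_t rlen = todo;              /* bytes the caller asked for */
    void *host = qemu_ram_ptr_length(raddr, &rlen);
    *plen = rlen;                        /* may be less than requested */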
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: agraf@suse.de
CC: anthony@codemonkey.ws
Signed-off-by: Alexander Graf <agraf@suse.de>
Replace xen_map_block with qemu_map_cache with the appropriate locking
and size parameters.
Replace xen_unmap_block with qemu_invalidate_entry.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
There is no need for qemu_map_cache_unlock, just use
qemu_invalidate_entry instead.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
When a phys memory client registers and we play catchup by walking
the page tables, we can make a huge improvement in the number of
times the set_memory callback is called by batching contiguous
pages together. With a 4G guest, this reduces the number of callbacks
at registration from 1048866 to 296.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* rth/axp-next: (26 commits)
target-alpha: Implement TLB flush primitives.
target-alpha: Use a fixed frequency for the RPCC in system mode.
target-alpha: Trap for unassigned and unaligned addresses.
target-alpha: Remap PIO space for 43-bit KSEG for EV6.
target-alpha: Implement cpu_alpha_handle_mmu_fault for system mode.
target-alpha: Implement more CALL_PAL values inline.
target-alpha: Disable interrupts properly.
target-alpha: All ISA checks to use TB->FLAGS.
target-alpha: Swap shadow registers moving to/from PALmode.
target-alpha: Implement do_interrupt for system mode.
target-alpha: Add IPRs to be used by the emulation PALcode.
target-alpha: Use kernel mmu_idx for pal_mode.
target-alpha: Add various symbolic constants.
target-alpha: Use do_restore_state for arithmetic exceptions.
target-alpha: Tidy up arithmetic exceptions.
target-alpha: Tidy exception constants.
target-alpha: Enable the alpha-softmmu target.
target-alpha: Rationalize internal processor registers.
target-alpha: Merge HW_REI and HW_RET implementations.
target-alpha: Cleanup MMU modes.
...
This patch removes all references to signal.h when qemu-common.h is included
as they become redundant.
Signed-off-by: Alexandre Raymond <cerbere@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Required for regions mapped via qemu_ram_alloc_from_ptr(). VFIO
and ivshmem will make use of this to remove mappings when devices
are hot unplugged.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
While trying out the > 64GB guest RAM patch, I hit some virtual address
limitations of my host system, which resulted in mmap failing. Unfortunately,
qemu didn't tell me about this failure, but just used the NULL pointer
happily, resulting in either segmentation faults or other fun errors.
To spare other users from tracing this down, let's print a nice message
instead so the user can figure out what's wrong from there.
Signed-off-by: Alexander Graf <agraf@suse.de>
the current s390x qemu memory layout is
0x1000000: guest start
0x80000000: qemu binary
which limits the amount of available memory to <2GB.
This patch moves the guest pages to 32GB to not collide with the binary
and to leave some space for the program break of qemu.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
a call to qemu_put_ram_ptr, the pointer may be unmapped from QEMU when
used with Xen.
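A hedged usage sketch (under Xen the put may drop a mapcache lock; elsewhere it is effectively a no-op; ram_addr, buf and len are example variables):

    uint8_t *p = qemu_get_ram_ptr(ram_addr);  /* may lock a mapcache entry */
    memcpy(p, buf, len);
    qemu_put_ram_ptr(p);                      /* Xen may now unmap the entry */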
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
On an IA32 host or IA32 PAE host, at present, we generally can't create
an HVM guest with more than 2G of memory, because it's almost
impossible for Qemu to find a large enough and contiguous virtual
address space to map an HVM guest's whole physical address space.
The attached patch fixes this issue using dynamic mapping based on
small blocks of memory.
Each call to qemu_get_ram_ptr makes a call to qemu_map_cache with the
lock option, so mapcache will not unmap these ram_ptrs.
Blocks that do not belong to the RAM, but usually to a device ROM or to
a framebuffer, are handled in a separate function, so the whole RAMBlock
can be mapped.
Signed-off-by: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
When we're trying to get a newly registered phys memory client updated
with the current page mappings, we end up passing the region offset
(a ram_addr_t) as the start address rather than the actual guest
physical memory address (target_phys_addr_t). If your guest has less
than 3.5G of memory, these are coincidentally the same thing. If
there's more, the region offset for the memory above 4G starts over
at 0, so the set_memory client will overwrite its lower memory entries.
Instead, keep track of the guest physical address as we're walking the
tables and pass that to the set_memory client.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
When we register a physical memory client, we try to walk the page
tables, calling the set_memory hook for every entry. Effectively
playing catchup for the client for everything already registered.
With this typo, we only walk the 2nd entry of the l1 table,
typically missing all of the registered memory.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This allows overriding the interrupt handling of QEMU in system mode.
KVM will make use of it to set a specialized handler.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Both have only two lines in common, and we will convert the system
service into a callback which is of no use for user mode operation.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
CC: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
The previous patch removed the need for parameter puc.
It is now unused, so remove it.
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Using cpu_physical_memory_read, cpu_physical_memory_write and ldub_phys
improves readability and allows removing some type casts.
lduw_phys and ldl_phys were not used because both require aligned
addresses. Therefore it is not possible to simply replace existing
calls by one of these functions.
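For example, an access that previously cast buffers for cpu_physical_memory_rw() can be written roughly as follows (sketch; 'addr' is an example target_phys_addr_t):

    uint8_t buf[4];
    cpu_physical_memory_read(addr, buf, sizeof(buf));   /* was _rw(..., 0) */
    buf[0] ^= 1;
    cpu_physical_memory_write(addr, buf, sizeof(buf));  /* was _rw(..., 1) */
    uint8_t b = ldub_phys(addr);                        /* single byte, no alignment requirement */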
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
All other type casts in calls of cpu_physical_memory_write are
used by hardware emulations and will be fixed by separate patches.
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Based on patch by Glauber Costa:
To allow management applications like libvirt to apply CPU affinities to
the VCPU threads, expose their ID via info cpus. This patch provides the
pre-existing and used interface from qemu-kvm.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
This both detects invalid invocations of qemu_ram_free and
qemu_ram_remap when mem_path is non-NULL and fixes a build error on
s390 ("'area' may be used uninitialized in this function").
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
CC: Alexander Graf <agraf@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
qemu_ram_remap() unmaps the specified RAM pages, then re-maps these
pages again. This is used by KVM HWPoison support to clear HWPoisoned
page tables across guest rebooting, so that a new page may be
allocated later to recover the memory error.
[ Jan: style fixlets, WIN32 fix ]
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
We have qemu_cpu_self and qemu_thread_self. The latter is retrieving the
current thread, the former is checking for equality (using CPUState). We
also have qemu_thread_equal which is only used like qemu_cpu_self.
This refactors the interfaces, creating qemu_cpu_is_self and
qemu_thread_is_self as well as qemu_thread_get_self.
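The resulting call pattern looks roughly like this (sketch; do_work() is a hypothetical helper, and the qemu_thread_* prototypes are assumed from this patch):

    /* Run on the right vcpu thread: do the work directly if we already
     * are that thread, otherwise kick the vcpu and let it do the work. */
    if (qemu_cpu_is_self(env)) {
        do_work(env);
    } else {
        qemu_cpu_kick(env);
    }

    static QemuThread io_thread;
    qemu_thread_get_self(&io_thread);              /* e.g. record the iothread at startup */
    /* later, possibly from another thread: */
    int on_io_thread = qemu_thread_is_self(&io_thread);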
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
When commit f471a17e9d converted the
ram_blocks structure to a QLIST, it also removed the conditional check before
moving the current block to the beginning of the list.
In the common use case where ram_blocks has a few blocks with only one
frequently accessed (the main RAM), this has a performance impact as it
performs the useless list operations on each call (which are on a really
hot path).
On my machine emulation (ARM on amd64), this patch reduces the
percentage of CPU time spent in qemu_get_ram_ptr from 6.3% to 2.1% in the
profiling of a full boot.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
In order to use log_start/log_stop with Xen as well in the vga code,
these two operations have been put in CPUPhysMemoryClient.
The two new functions cpu_physical_log_start and cpu_physical_log_stop are
used in hw/vga.c and replace kvm_log_start/stop. With this, vga no
longer depends on the kvm header.
[ Jan: rebasing and style fixlets ]
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
This function is only used within exec.c, so no need to make it public.
Signed-off-by: Tristan Gingold <gingold@adacore.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
With current OpenBSD, code_gen_buffer was mapped 8GB away from the
text segment. Then any helpers were beyond the 2GB range of the call
instruction generated by TCG, and so the calls would go nowhere,
leading to a segfault.
Fix by specifying an address for the code_gen_buffer,
hopefully free and near the helpers.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
As stated before, devices can be little, big or native endian. The
target endianness is not of their concern, so we need to push things
down a level.
This patch adds a parameter to cpu_register_io_memory that allows a
device to choose its endianness. For now, all devices simply choose
native endian, because that's the same behavior as before.
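A registration call then carries one extra argument, roughly (sketch; my_mmio_read, my_mmio_write and opaque are placeholders for a device's existing callbacks):

    int io_index = cpu_register_io_memory(my_mmio_read, my_mmio_write, opaque,
                                          DEVICE_NATIVE_ENDIAN);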
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The way we're currently modeling mmio is too simplified. We assume that
every device has the same endianness as the target CPU. In reality,
most devices are little endian (all PCI and ISA ones I'm aware of). Some
are big endian (special system devices) and a very small fraction is
target-native endian (fw_cfg).
So instead of assuming every device to be native endianness, let's move
to a model where the device tells us which endianness it's in.
That way we can compile the devices only once and get rid of all the ugly
swapping code; the swap will be done by the underlying layer.
For the sake of readability, this patch only introduces the helper framework
but doesn't allow the registering code to set its endianness yet.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Remove the debugging fprintf() slipped in via the following commit:
commit b2e0a138e7
Author: Michael S. Tsirkin <mst@redhat.com>
Date: Mon Nov 22 19:52:34 2010 +0200
migration: stable ram block ordering
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This makes ram block ordering under migration stable, ordered by offset.
This is especially useful for migration to exec, for debugging.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Jason Wang <jasowang@redhat.com>
fprintf_function uses format checking with GCC_FMT_ATTR.
It is declared in qemu-common.h and used in cpu-all.h
(which is included from cpu.h), so qemu-common.h must
be included earlier. Some redundant include statements
for standard include files were removed.
Fix also two format errors (ptrdiff_t needs %td).
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
is_softmmu was removed with commit
d4c430a80f,
so remove it now from debug code, too.
Fix also the format specifier for paddr
in the same line of code.
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
vl.c has a Sun-specific hack to supply a prototype for madvise(),
but the call site has apparently moved to arch_init.c.
Haiku doesn't implement madvise() in favor of posix_madvise().
OpenBSD and Solaris 10 don't implement posix_madvise() but madvise().
MinGW implements neither.
Check for madvise() and posix_madvise() in configure and supply qemu_madvise()
as wrapper. Prefer madvise() over posix_madvise() due to flag availability.
Convert all callers to use qemu_madvise() and QEMU_MADV_*.
Note that on Solaris the warning is fixed by moving the madvise() prototype,
not by qemu_madvise() itself. It helps with porting though, and it simplifies
most call sites.
v7 -> v8:
* Some versions of MinGW have no sys/mman.h header. Reported by Blue Swirl.
v6 -> v7:
* Adopt madvise() rather than posix_madvise() semantics for returning errors.
* Use EINVAL in place of ENOTSUP.
v5 -> v6:
* Replace two leftover instances of POSIX_MADV_NORMAL with QEMU_MADV_INVALID.
Spotted by Blue Swirl.
v4 -> v5:
* Introduce QEMU_MADV_INVALID, suggested by Alexander Graf.
Note that this relies on -1 not being a valid advice value.
v3 -> v4:
* Eliminate #ifdefs at qemu_madvise() call sites. Requested by Blue Swirl.
This will currently break the check in kvm-all.c by calling madvise() with
a supported flag, which will not fail. Ideas/patches welcome.
v2 -> v3:
* Reuse the *_MADV_* defines for QEMU_MADV_*. Suggested by Alexander Graf.
* Add configure check for madvise(), too.
Add defines to Makefile, not QEMU_CFLAGS.
Convert all callers, untested. Suggested by Blue Swirl.
* Keep Solaris' madvise() prototype around. Pointed out by Alexander Graf.
* Display configure check results.
v1 -> v2:
* Don't rely on posix_madvise() availability, add qemu_madvise().
Suggested by Blue Swirl.
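A hedged sketch of the resulting call-site pattern (qemu_madvise() and the QEMU_MADV_* constants are the names introduced by this patch; host_addr and length are example variables):

    /* Hint that a guest RAM range is no longer needed.  On hosts with
     * neither madvise() nor posix_madvise() this simply returns an error,
     * which callers may ignore since the advice is only an optimization. */
    if (qemu_madvise(host_addr, length, QEMU_MADV_DONTNEED) < 0) {
        /* not fatal */
    }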
Signed-off-by: Andreas Färber <afaerber@opensolaris.org>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
It is possible that subpage mmio is registered over an existing memory
page. When this happens, "memory" will hold a real memory address and not an
index into the io_mem array, so the next access to the page will generate a
segfault. It is uncommon to have part of a page accessed as
memory and part as mmio, but qemu shouldn't crash even when a guest does
stupid things. So let's just pretend that the rest of the page is
unassigned if the guest configures part of the memory page as mmio.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Since most of the code in qemu_ram_alloc() and
qemu_ram_alloc_from_ptr() is duplicated, let
qemu_ram_alloc_from_ptr() switch on void *host, and change
qemu_ram_alloc() into a wrapper.
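The wrapper shape this describes is roughly (sketch; the dev/name parameters come from the ramblock-naming patches further down this log):

    ram_addr_t qemu_ram_alloc(DeviceState *dev, const char *name, ram_addr_t size)
    {
        /* A NULL host pointer means "allocate the backing memory for me". */
        return qemu_ram_alloc_from_ptr(dev, name, size, NULL);
    }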
Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Provide a function to add an allocated region of memory to the qemu RAM.
This patch is copied from Marcelo's qemu_ram_map() in qemu-kvm and given the
clearer name qemu_ram_alloc_from_ptr().
Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Both values are only used in exec.c, so there is no need
to make them globally available.
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
With gcc 4.2.1-sjlj (mingw32-2) I get this warning:
/src/qemu/exec.c: In function 'qemu_ram_alloc':
/src/qemu/exec.c:2777: warning: 'offset' may be used uninitialized in this function
Fix by initializing the variable.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Now that we have a working qemu_ram_free() and the primary runtime
user of it has been updated, don't be lenient about duplicate id strings.
We also shouldn't need to create them on demand at the target.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Now that we can support a ram_addr_t space with holes, we can implement
qemu_ram_free().
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
With these two pieces in place, we can start naming ramblocks. When
the device is present and it lives on a bus that provides a device
path, we concatenate the path and the provided name. Otherwise we
just use name. The resulting id string must be unique. For now we
assume an allocation for the same name and size is a device that has
been removed and reinserted and return the same block. This will go
away once qemu_ram_free() is implemented.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
These will be used to generate unique id strings for ramblocks. The name
field is required, the device pointer is optional as most callers don't
have a device. When there's no device or the device isn't a child of
a bus implementing BusInfo.get_dev_path, the name should be unique for
the platform.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When available, we'd like to be able to access the DeviceState
when registering a savevm. For buses with a get_dev_path()
function, this will allow us to create more unique savevm
id strings.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
We currently need this either to allocate the next ram_addr_t for a
new block, or to compute the total memory to be migrated. Both can be
calculated without needing this to keep us in a contiguous address space.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This patch avoids handling write watchpoints on read-only memory access.
It also breaks out of the watchpoint search loop once the setup for
handling the watchpoint later is done.
Signed-off-by: Jun Koi <junkoi2004@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This makes the RAM block list easier to manipulate. Also incorporate
relevant variables into the RAMList struct.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Chris Wright <chrisw@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This allows the use of direct calls to the helpers,
and a direct branch back to the epilogue.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This will allow backends to make intelligent choices about how
to implement GUEST_BASE.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Implement the "functions may be omitted with NULL pointer"
interface mentioned in the function block comment by transforming
NULL entries in the read/write arrays into calls to the
unassigned_mem family of functions.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
exec.c has a comment 'XXX: optimize' for lduw_phys/stw_phys,
so let's do it, along the lines of stl_phys.
The reason to address 16 bit accesses specifically is that virtio relies
on these accesses to be done atomically, using memset as we do now
breaks this assumption, which is reported to cause qemu with kvm
to read wrong index values under stress.
https://bugzilla.redhat.com/show_bug.cgi?id=525323
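The gist of the optimization, as a sketch (assuming the address resolves to RAM; the real code also handles MMIO and dirty-bitmap updates, and addr_to_ram_addr() is a hypothetical stand-in for the phys-page lookup):

    void stw_phys(target_phys_addr_t addr, uint32_t val)
    {
        uint8_t *ptr = qemu_get_ram_ptr(addr_to_ram_addr(addr));
        stw_p(ptr, val);   /* one 16-bit store: readers never see a torn value */
    }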
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The usermode PAGE_RESERVED code is not required by the current mmap
implementation, and is already broken when guest_base != 0.
Unfortunately the bsd emulation still uses the old mmap implementation,
so we can't rip it out altogether.
Signed-off-by: Paul Brook <paul@codesourcery.com>
Greatly simplify the subpage implementation by not supporting
multiple devices at the same address at different widths. We
don't need full copies of mem_read/mem_write/opaque for each
address, only a single index back into the main io_mem_* arrays.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
V2 that uses endaddr = end-of-guest-address-space if !h2g_valid(endaddr)
after I found out that indeed works; and also disables the FreeBSD 6.x
/compat/linux/proc/self/maps fallback because it can return partial lines
if (at least I think that's the reason) the mappings change between
subsequent read() calls.
Signed-off-by: Juergen Lock <nox@jelal.kn-bremen.de>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Replaces direct phys_ram_dirty access with wrapper functions to prevent
direct access to the phys_ram_dirty bitmap.
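The wrappers have this general shape (sketch based on the surrounding code; the helper names exist in this tree, the bodies shown are illustrative):

    static inline int cpu_physical_memory_get_dirty(ram_addr_t addr,
                                                    int dirty_flags)
    {
        return phys_ram_dirty[addr >> TARGET_PAGE_BITS] & dirty_flags;
    }

    static inline void cpu_physical_memory_set_dirty(ram_addr_t addr)
    {
        phys_ram_dirty[addr >> TARGET_PAGE_BITS] = 0xff;
    }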
Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: OHMURA Kei <ohmura.kei@lab.ntt.co.jp>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Historically the qemu tlb "addend" field was used for both RAM and IO accesses,
so needed to be able to hold both host addresses (unsigned long) and guest
physical addresses (target_phys_addr_t). However since the introduction of
the iotlb field it has only been used for RAM accesses.
This means we can change the type of addend to unsigned long, and remove
associated hacks in the big-endian TCG backends.
We can also remove the host dependence from target_phys_addr_t.
Signed-off-by: Paul Brook <paul@codesourcery.com>
On ia64, the default memory alignment is not enough for code
alignment. To fix that, force static_code_gen_buffer alignment
to CODE_GEN_ALIGN.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When the host page size is bigger that the target one, unprotecting a
page should:
- mark all the target pages corresponding to the host page as writable
- invalidate all TBs corresponding to the host page (and not the target
page)
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Use kinfo_getvmmap(3) on FreeBSD >= 7.x and /compat/linux/proc on older
FreeBSD. (kinfo_getvmmap is preferred since /compat/linux/proc is
usually only mounted on hosts also using the Linuxolator.)
This patch is a bit hacky because the includes needed for kinfo_getvmmap
conflict with other definitions in exec.c by default so I had to `trick
around' a little, but I built the result in FreeBSD 6.4-stable and
7.2-stable tbs and on 8-stable on the host so the hacks at least
should be stable. (If this is a problem maybe we could also move the
kinfo_getvmmap invocations into a separate source file but that would
be more work...)
Signed-off-by: Juergen Lock <nox@jelal.kn-bremen.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Arrange various declarations so that non-CPU code can also access
them; adjust users.
Move CPU specific code to cpus.c.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
QEMU uses a fixed page size for the CPU TLB. If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.
When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page. However the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.
Implementing a full variable size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages. If the guest invalidates this region then flush the
whole TLB.
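A sketch of the address/mask idea (the tlb_flush_addr/tlb_flush_mask field names and the tlb_flush_one() helper are assumed here, not quoted from the patch):

    void tlb_flush_page(CPUState *env, target_ulong addr)
    {
        /* If the address may fall inside a previously mapped large page,
         * we cannot tell which TLB entries it produced: flush everything. */
        if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
            tlb_flush(env, 1);
            return;
        }
        tlb_flush_one(env, addr);   /* the existing single-address path */
    }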
Signed-off-by: Paul Brook <paul@codesourcery.com>
The multi-level pagetable code fails to iterate over all entries because
of the L2_BITS vs. L2_SIZE thinko.
Signed-off-by: Paul Brook <paul@codesourcery.com>
Fixes warning:
CC sparc-bsd-user/exec.o
/src/qemu/exec.c: In function `page_check_range':
/src/qemu/exec.c:2375: warning: comparison is always true due to limited range of data type
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The page tracking code in exec.c is used by both userspace and system
emulation. Userspace emulation uses it to track virtual pages, and
system emulation to track ram pages. Introduce a new type to hold this
kind of address.
Signed-off-by: Paul Brook <paul@codesourcery.com>
The addr < end comparison prevents iterating over the last
page in the guest address space; an iteration based on
length avoids this problem.
At the same time, assert that the given address is in the
guest address space.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Define L1_MAP_ADDR_SPACE_BITS to be either the virtual address size
(in user mode) or physical address size (in system mode), and use
that to size l1_map. This rewrites page_find_alloc, page_flush_tb,
and walk_memory_regions.
Use TARGET_PHYS_ADDR_SPACE_BITS for the physical memory map based
off of l1_phys_map. This rewrites page_phys_find_alloc and
phys_page_for_each.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Removes a set of ifdefs from exec.c.
Introduce TARGET_VIRT_ADDR_SPACE_BITS for all targets other
than Alpha. This will be used for page_find_alloc, which is
supposed to be using virtual addresses in the first place.
Signed-off-by: Richard Henderson <rth@twiddle.net>
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks:
- cpu_synchronize_all_states in qemu_savevm_state_complete
(initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
(writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
(writeback after system reset)
These writeback points + the existing one of VCPU exec after
cpu_synchronize_state map on three levels of writeback:
- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE (on init or vmload, all VCPUs stopped as well)
This level is passed to the arch-specific VCPU state writing function
that will decide which concrete substates need to be written. That way,
no writer of load, save or reset functions that interact with in-kernel
KVM states will ever have to worry about synchronization again. That
also means that a lot of reasons for races, segfaults and deadlocks are
eliminated.
cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing of VCPU states that are
also tracked by in-kernel KVM subsystems.
Consequently, this patch removes many cpu_synchronize_state calls that
are now redundant, just like remaining explicit register syncs.
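The generic hooks simply iterate over all VCPUs, roughly like this (sketch; the per-CPU cpu_synchronize_* helpers are the ones this patch introduces or relies on, and the first_cpu/next_cpu walk mirrors existing code):

    void cpu_synchronize_all_states(void)
    {
        CPUState *cpu;

        for (cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
            cpu_synchronize_state(cpu);          /* read state out of the kernel */
        }
    }

    void cpu_synchronize_all_post_reset(void)
    {
        CPUState *cpu;

        for (cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
            cpu_synchronize_post_reset(cpu);     /* KVM_PUT_RESET_STATE writeback */
        }
    }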
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Port qemu-kvm's -mem-path and -mem-prealloc options. These are useful
for backing guest memory with huge pages via hugetlbfs.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
CC: john cooper <john.cooper@redhat.com>
Userspace doesn't have physical memory, so cpu_physical_memory_rw
makes no sense. This is only used to implement cpu_memory_rw_debug, so
just implement that directly instead.
Signed-off-by: Paul Brook <paul@codesourcery.com>
Userspace emulation doesn't have a physical address space, so
l1_phys_map makes no sense. This code is never actually used, so don't
try and build it.
Signed-off-by: Paul Brook <paul@codesourcery.com>
remove direct kvm calls from exec.c, make
kvm use memory notifiers framework instead.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This adds notifiers for phys memory changes: a set of callbacks that
vhost can register to update the kernel accordingly. Down the road, kvm
code can be switched to use these as well, instead of calling kvm code
directly from exec.c as is done now.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Qemu may hang in host_signal_handler after qemu has committed
seppuku with cpu_abort(). But at this stage we are not really
interested in a target process coredump anymore, so unregister
host_signal_handler to die gracefully.
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
The default action of coalesced MMIO is to cache the write in a buffer until:
1. The buffer is full.
2. Or there is an exit to QEmu for other reasons.
But this can result in very late writes under some conditions:
1. Each write to the MMIO content is small.
2. The interval between writes is big.
3. There is no need for input or for accessing other devices frequently.
This issue was observed in an experimental embedded system. The test image
simply prints "test" every second. The output in QEmu meets expectations,
but the output in KVM is delayed for seconds.
Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit vcpu exit to QEmu to
handle this issue.
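The hook itself is tiny; a hedged sketch (vga_redraw() is a placeholder for the existing redraw code, and the flush helper name follows the wrapper added for this purpose):

    static void vga_update_display(void *opaque)
    {
        VGACommonState *s = opaque;

        qemu_flush_coalesced_mmio_buffer();  /* drain pending coalesced writes first */
        vga_redraw(s);                       /* then render from up-to-date VRAM */
    }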
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Win32 suffers from a very big memory leak when dealing with SCSI devices.
Each read/write request allocates memory with qemu_memalign (ie
VirtualAlloc) but frees it with qemu_free (ie free).
Pair all qemu_memalign() calls with qemu_vfree() to prevent such leaks.
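The rule this enforces, in sketch form:

    void *buf = qemu_memalign(512, len);   /* VirtualAlloc() on Win32 */
    /* ... use buf for the SCSI transfer ... */
    qemu_vfree(buf);                        /* VirtualFree() on Win32, never plain free() */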
Signed-off-by: Herve Poussineau <hpoussin@reactos.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Fixes receiving signals when guest code is being executed in a tight
loop. For an example, try interrupting the following code with ctrl-c.
http://nchipin.kos.to/test-loop.c
The tight loop is of course brainless, but it is also exactly how the waitpid* testcases
are implemented.
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
The limit of iomem areas is quite low. Without the
debug print, it is quite hard to figure out why more
devices are not getting registered.
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
/tmp doesn't exist under win32. Ease the pain of win32 development slightly.
From: Juha Riihimäki <juha.riihimaki@nokia.com>
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
KVM on S390x requires the virtual address space of the guest's RAM to be
within the first 256GB.
The general direction I'd like to see KVM on S390 move in is that this requirement
is loosened, but for now that's what we're stuck with.
So let's just hack up qemu_ram_alloc until KVM behaves nicely :-).
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Call MADV_MERGEABLE on guest memory allocations. MADV_MERGEABLE will be
available starting in Linux 2.6.32. This system call registers a region of
virtual address space with Linux as a candidate for transparent memory
sharing.
Patchworks-ID: 35447
Signed-off-by: Izik Eidus <ieidus@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
We don't require full pages in cpu_register_physical_memory,
except for RAM.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
In the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b72.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc923584,
f40d753718,
96555a96d7 and
3990d09adf but the fixes were fragile.
Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
cpu_synchronize_state() is a little unreadable since the 'modified'
argument isn't self-explanatory. Simplify it by making it always
synchronize the kernel state into qemu, and automatically flush the
registers back to the kernel if they've been synchronized on this
exit.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
kqemu introduces a number of restrictions on the i386 target. The worst is that
it prevents large memory from working in the default build.
Furthermore, kqemu is fundamentally flawed in a number of ways. It relies on
the TSC as a time source which will not be reliable on a multiple processor
system in userspace. Since most modern processors are multicore, this severely
limits the utility of kqemu.
kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel. If someone can
implement work arounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.
N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.
Paul, please Ack or Nack this patch.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
On Win32 the setvbuf function requires the last parameter to be a size between 2 and INT_MAX bytes, so the calls always failed. Since the whole point of the calls is to set line-buffered mode for the file handle and that's not supported on Win32 anyway, conditionally remove them.
Signed-off-by: Filip Navara <filip.navara@gmail.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
I used the following command to enable debugging:
perl -p -i -e 's/^\/\/#define DEBUG/#define DEBUG/g' * */* */*/*
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Use the static empty variable s_cputlb_empty_entry to clear entries,
and also reset the addend member when clearing entries.
This helps when running with valgrind/memcheck.
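The pattern, roughly (sketch; the field list matches CPUTLBEntry as used here):

    static const CPUTLBEntry s_cputlb_empty_entry = {
        .addr_read  = -1,
        .addr_write = -1,
        .addr_code  = -1,
        .addend     = -1,
    };

    /* clearing an entry now also resets addend */
    env->tlb_table[mmu_idx][i] = s_cputlb_empty_entry;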
Signed-off-by: igor.v.kovalenko@gmail.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
fix memory leak in cpu_unregister_map_client() and cpu_notify_map_clients().
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Maximum alignment for Win32 is 16, so don't try
to set it to 32. Otherwise the compiler complains:
exec.c:102: warning: alignment of 'code_gen_prologue'
is greater than maximum object file alignment. Using 16
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
remove unnecessary #if NB_MMU_MODES by using a loop.
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
There are some people interested in, given a cpu number,
picking its CPUState. KVM is an example, although not yet in tree.
This patch provides a way of doing that.
Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Move io_mem_init() downwards to avoid a forward declaration. No code change.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The parameter is always zero except when registering the three internal
io regions (ROM, unassigned, notdirty). Remove the parameter to reduce
the API's power, thus facilitating future change.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When the target process is killed with a signal (a signal that
should dump core), a coredump file is created. This file is
similar to a coredump generated by Linux (there are a few exceptions
though).
Riku Voipio: added support for rlimit
Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
Signed-off-by: Riku Voipio <riku.voipio@iki.fi>
When debugging multi-threaded programs, QEMU's gdb stub would report the
correct number of threads (the qfThreadInfo and qsThreadInfo packets).
However, the stub was unable to actually switch between threads (the T
packet), since it would report every thread except the first as being
dead. Furthermore, the stub relied upon cpu_index as a reliable means
of assigning IDs to the threads. This was a bad idea; if you have this
sequence of events:
initial thread created
new thread #1
new thread #2
thread #1 exits
new thread #3
thread #3 will have the same cpu_index as thread #1, which would confuse
GDB. (This problem is partly due to the remote protocol not having a
good way to send thread creation/destruction events.)
We fix this by using the host thread ID for the identifier passed to GDB
when debugging a multi-threaded userspace program. The thread ID might
wrap, but the same sort of problems with wrapping thread IDs would come
up with debugging programs natively, so this doesn't represent a
problem.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
This patch adds the missing hooks to allow live migration in KVM mode.
It adds proper synchronization before/after saving/restoring the VCPU
states (note: PPC is untested), hooks into
cpu_physical_memory_set_dirty_tracking() to enable dirty memory logging
at KVM level, and synchronizes that dirty log into QEMU's view before
running ram_live_save().
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Extend kvm_physical_sync_dirty_bitmap() so that it can sync across
multiple slots. Useful for updating the whole dirty log during
migration. Moreover, properly pass errors down the whole call chain.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Avi Kivity wrote:
> Suggest wrapping in a function and hiding it deep inside kvm-all.c.
>
Done in v2:
---------->
If the KVM MMU is asynchronous (kernel does not support MMU_NOTIFIER),
we have to avoid COW for the guest memory. Otherwise we risk serious
breakage when guest pages change their physical locations due to COW
after fork. Seen when forking smbd during runtime via -smb.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
To notify cpu of pending interrupt.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7243 c046a42c-6fe2-441c-8c8c-71466251a162
adds a -numa command line parameter and sets a QEMU global array with
the memory sizes. The CPU-to-node assignment is written into the
CPUState. If no specific values for memory and CPUs are given,
all resources will be split equally across all nodes.
This code currently supports only up to 64 virtual CPUs.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7210 c046a42c-6fe2-441c-8c8c-71466251a162
This is necessary for alpha because it has 4 protection levels and pal mode.
Signed-off-by: Tristan Gingold <gingold@adacore.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7028 c046a42c-6fe2-441c-8c8c-71466251a162
Enhance cpu_memory_rw_debug so that it can write even to ROM regions.
This allows modifying ROM via gdb (I see no point in denying this to the
user), and it will enable us to drop kvm_patch_opcode_byte().
Credits go to Avi for suggesting this.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6905 c046a42c-6fe2-441c-8c8c-71466251a162
This is a backport of the guest debugging support for the KVM
accelerator that is now part of the KVM tree. It implements the reworked
KVM kernel API for guest debugging (KVM_CAP_SET_GUEST_DEBUG) which is
not yet part of any mainline kernel but will probably be 2.6.30 stuff.
So far supported is x86, but PPC is expected to catch up soon.
Core features are:
- unlimited soft-breakpoints via code patching
- hardware-assisted x86 breakpoints and watchpoints
Changes in this version:
- use generic hook cpu_synchronize_state to transfer registers between
user space and kvm
- push kvm_sw_breakpoints into KVMState
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6825 c046a42c-6fe2-441c-8c8c-71466251a162
CPU_INTERRUPT_EXIT is not set anymore in env->interrupt_request since
revision 6728. Make sure the bit is cleared on VM load.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6756 c046a42c-6fe2-441c-8c8c-71466251a162
and process termination in legacy applications. Try to guess which we want
based on the presence of multiple threads.
Also implement locking when modifying the CPU list.
Signed-off-by: Paul Brook <paul@codesourcery.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6735 c046a42c-6fe2-441c-8c8c-71466251a162
env->interrupt_request is accessed at the bit level from both the main code
and the signal handler, making a race condition possible even on CISC CPUs.
This causes freeze of QEMU under high load when running the dyntick
clock.
The patch below moves the bit corresponding to CPU_INTERRUPT_EXIT into a
separate variable, declared as volatile sig_atomic_t, so it should
work even on RISC CPUs.
We may want to move the cpu_interrupt(env, CPU_INTERRUPT_EXIT) case into
its own function and get rid of CPU_INTERRUPT_EXIT. That can be done
later, I wanted to keep the patch short for easier review.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6728 c046a42c-6fe2-441c-8c8c-71466251a162
KVM uses cpu_physical_memory_rw() to access the I/O devices. When a
read or write with a length of 8 bytes is requested, it is split into 2
4-byte accesses.
This has been broken in revision 5849. After this revision, only the
first 4 bytes are actually read/written to the device, as the target
address is changed, so on the next iteration of the loop the next 4
bytes are actually read/written elsewhere (in the RAM for the graphic
card).
This patch fixes screen corruption (and most probably data corruption)
with FreeBSD/amd64. Bug #2556746 in KVM bugzilla.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6628 c046a42c-6fe2-441c-8c8c-71466251a162
So drivers can clear their mem io table entries on exit back to unassigned
state.
Also make the io mem index allocation dynamic.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6601 c046a42c-6fe2-441c-8c8c-71466251a162
Original idea & code by Kevin Wolf, split up into two patches and with more
archs added.
This patch introduces a flag to log CPU resets. Useful for tracing
unexpected resets (such as those triggered by x86 triple faults).
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6452 c046a42c-6fe2-441c-8c8c-71466251a162
The target memory mapping API may fail if the bounce buffer resources
are exhausted. Add a notification mechanism to allow clients to retry
the mapping operation when resources become available again.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6395 c046a42c-6fe2-441c-8c8c-71466251a162
Devices accessing large amounts of memory (as with DMA) will wish to obtain
a pointer to guest memory rather than access it indirectly via
cpu_physical_memory_rw(). Add a new API to convert target addresses to
host pointers.
In case the target address does not correspond to RAM, a bounce buffer is
allocated. To prevent the guest from causing the host to allocate unbounded
amounts of bounce buffer, this memory is limited (currently to one page).
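A hedged usage sketch of the new API (signatures as introduced by this patch; dma_addr and dma_len are example variables):

    target_phys_addr_t addr = dma_addr;
    target_phys_addr_t maplen = dma_len;
    int is_write = 1;
    void *host = cpu_physical_memory_map(addr, &maplen, is_write);
    if (!host) {
        /* bounce buffer exhausted: register a map client and retry later */
    } else {
        /* DMA directly into 'host' for up to 'maplen' bytes ... */
        cpu_physical_memory_unmap(host, maplen, is_write, maplen);
    }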
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6394 c046a42c-6fe2-441c-8c8c-71466251a162
This is a large patch that changes all occurrences of logfile/loglevel
global variables to use the new qemu_log*() macros.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6338 c046a42c-6fe2-441c-8c8c-71466251a162
Latest changes to the cpu_breakpoint/watchpoint API broke cpu_copy. This
patch fixes it by cloning the breakpoint and watchpoint lists
appropriately.
Thanks to Lionel Landwerlin for pointing out.
Signed-off-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6321 c046a42c-6fe2-441c-8c8c-71466251a162
The attached patch updates the FSF address in the GPL/LGPL boilerplate
in most GPL/LGPLed files, and also in COPYING.LIB.
Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6162 c046a42c-6fe2-441c-8c8c-71466251a162
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6140 c046a42c-6fe2-441c-8c8c-71466251a162
MMIO exits are more expensive in KVM or Xen than in QEMU because they
involve, at least, privilege transitions. However, MMIO write
operations can be effectively batched if those writes do not have side
effects.
Good examples of this include VGA pixel operations when in a planar
mode. As it turns out, we can get a nice boost in other areas too.
Laurent mentioned a 9.7% performance boost in iperf with the coalesced
MMIO changes for the e1000 when he originally posted this work for KVM.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5961 c046a42c-6fe2-441c-8c8c-71466251a162
Paul's comment on my first approach to fix the h2g usage in
page_find_alloc finally opened my eyes to what the code is actually
supposed to do:
With the help of h2g_valid we can now cleanly check if a freshly allocated
page (for host usage) is guest-reachable and, in case it is, mark it
reserved in the guest's address range.
Signed-off-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5957 c046a42c-6fe2-441c-8c8c-71466251a162
Unfortunately this range is so narrow that I'm not sure if it makes more
sense to always use a memory-load-to-pc kind of branch instead.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5844 c046a42c-6fe2-441c-8c8c-71466251a162
This switches cpu_break/watchpoint_* to TAILQ wrappers, simplifying the
code and also fixing a use after release issue in
cpu_break/watchpoint_remove_all.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5799 c046a42c-6fe2-441c-8c8c-71466251a162
Hypervisors like KVM perform badly while doing mmio in
a loop, because it generates an exit on each access.
This is the case with VGA, which results in very bad
performance.
In this patch, we map the linear frame buffer as RAM,
make sure it has dirty region tracking enabled, and then
just let the region be written.
Cleanups suggestions by:
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5793 c046a42c-6fe2-441c-8c8c-71466251a162
ENOBUFS is not defined on Win32. Use ENOMEM instead which is more portable.
This was reported by Hervé Poussineau.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5749 c046a42c-6fe2-441c-8c8c-71466251a162
Add another breakpoint/watchpoint type to BP_GDB: BP_CPU. This type is
intended for hardware-assisted break/watchpoint emulations like the x86
architecture requires.
To keep the highest priority for BP_GDB breakpoints, this type is
always inserted at the head of break/watchpoint lists, thus is found
first when looking up the origin of a debug interruption.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5746 c046a42c-6fe2-441c-8c8c-71466251a162
When one watchpoint is hit, others might have triggered as well. To
support users of the watchpoint API which need to detect such cases,
the BP_WATCHPOINT_HIT flag is introduced and maintained.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5744 c046a42c-6fe2-441c-8c8c-71466251a162
Now that we can properly restore the pc on watchpoint hits, there is no
more need for prematurely terminating TBs if watchpoints are present.
Remove all related bits.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5742 c046a42c-6fe2-441c-8c8c-71466251a162
In order to provide accurate information about the triggering
instruction, this patch adds the required bits to restore the pc if the
access happened inside a TB. With the BP_STOP_BEFORE_ACCESS flag, the
watchpoint user can control if the debug trap should be issued on or
after the accessing instruction.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5741 c046a42c-6fe2-441c-8c8c-71466251a162
This adds length support for watchpoints. To keep things simple, only
aligned watchpoints are accepted.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5740 c046a42c-6fe2-441c-8c8c-71466251a162
This patch prepares the QEMU cpu_watchpoint/breakpoint API to allow the
succeeding enhancements this series comes with.
First of all, it overcomes MAX_BREAKPOINTS/MAX_WATCHPOINTS by switching
to dynamically allocated data structures that are kept in linked lists.
This also allows returning a stable reference to the related objects,
required for the later introduced x86 debug register support.
Breakpoints and watchpoints are stored with their full information set
and an additional flag field that makes them easily extensible for use
beyond pure guest debugging.
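After the full series the API returns a stable reference, along these lines (sketch; the flag names are the ones introduced in these patches):

    CPUWatchpoint *wp;

    cpu_watchpoint_insert(env, addr, 4, BP_GDB, &wp);  /* 4-byte, GDB-owned */
    /* later, remove exactly this watchpoint via the stable reference: */
    cpu_watchpoint_remove_by_ref(env, wp);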
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5738 c046a42c-6fe2-441c-8c8c-71466251a162
This patch refactors the way the CPU state is handled that is associated
with a TB. The basic motivation is to move more arch specific code out
of generic files. Specifically the long #ifdef clutter in tb_find_fast()
has to be overcome in order to avoid duplicating it for the gdb
watchpoint fixes (patch "Restore pc on watchpoint hits").
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5736 c046a42c-6fe2-441c-8c8c-71466251a162
Unfortunately, -linux-user doesn't use osdep as it replaces some of those
functions with specific ones. The #ifdef code in exec.c needs to
remain in place, so instead of introducing a qemu_getpagesize() let's just
use getpagesize() in the non-Windows implementation of qemu_vmalloc.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5703 c046a42c-6fe2-441c-8c8c-71466251a162
This patch adds very basic KVM support. KVM is a kernel module for Linux that
allows userspace programs to make use of hardware virtualization support. It
currently supports x86 hardware virtualization using Intel VT-x or AMD-V. It
also supports IA64 VT-i, PPC 440, and S390.
This patch only implements the bare minimum support to get a guest booting. It
has very little impact on the rest of QEMU and attempts to integrate nicely with
the rest of QEMU.
Even though this implementation is basic, it is significantly faster than TCG.
Booting and shutting down a Linux guest:
w/TCG: 1:32.36 elapsed 84% CPU
w/KVM: 0:31.14 elapsed 59% CPU
Right now, KVM is disabled by default and must be explicitly enabled with
-enable-kvm. We can enable it by default later when we have had better
testing.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5627 c046a42c-6fe2-441c-8c8c-71466251a162
Move up the wrap around test because the line
'end = TARGET_PAGE_ALIGN(start+len);'
can interfere with it.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5563 c046a42c-6fe2-441c-8c8c-71466251a162
This patch adds a dirty tracking bit for live migration. We use 0x08 because
kqemu uses 0x04.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5433 c046a42c-6fe2-441c-8c8c-71466251a162
Don't truncate the code_gen_buffer_size calculation to int, as it will give
unpredictable results on 64 bit systems when booting large guests.
Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5310 c046a42c-6fe2-441c-8c8c-71466251a162
On some cases, such as under KVM, tb_invalidate_phys_page_range()
may be called for large addresses, when qemu is configured with more than
4GB of RAM.
In these cases, qemu was crashing because it was using an index too
large for l1_map[], which supports only 32-bit addresses when compiling
without CONFIG_USER_ONLY.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5227 c046a42c-6fe2-441c-8c8c-71466251a162