Commit Graph

935 Commits

Author SHA1 Message Date
Alexey Kardashevskiy
7dca8043f3 memory: give name to every AddressSpace
The "info mtree" command in QEMU console prints only "memory" and "I/O"
address spaces while there are actually a lot more other AddressSpace
structs created by PCI and VIO devices. Those devices do not normally
have names and therefore not present in "info mtree" output.

The patch fixes this.
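
A rough sketch of the result, assuming the post-patch prototype in which
address_space_init() gains a name argument (QEMU-internal API; the header
and device names below are illustrative):

  /* Sketch only: create a named address space for a device so that
   * "info mtree" can now print it. */
  #include "exec/memory.h"

  static AddressSpace dev_as;

  static void dev_init_dma(MemoryRegion *root)
  {
      /* "pci-dev-dma" is just an example name */
      address_space_init(&dev_as, root, "pci-dev-dma");
  }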

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:39:52 +02:00
Paolo Bonzini
df32fd1c9f dma: eliminate DMAContext
The DMAContext is a simple pointer to an AddressSpace that is now always
already available.  Make everyone hold the address space directly,
and clean up the DMA API to use the AddressSpace directly.
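
After the change, a device doing DMA holds an AddressSpace and passes it
straight to the DMA helpers; a hedged sketch (the dma_memory_read()
prototype shown is an assumption based on this description):

  /* Sketch: DMA helpers take the AddressSpace directly, with no
   * intermediate DMAContext (QEMU-internal header assumed). */
  #include "sysemu/dma.h"

  static void dev_fetch_descriptor(AddressSpace *as, dma_addr_t addr,
                                   void *desc, dma_addr_t len)
  {
      dma_memory_read(as, addr, desc, len);
  }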

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:39:52 +02:00
Paolo Bonzini
24addbc76d dma: eliminate old-style IOMMU support
The translate function in the DMAContext is now always NULL.
Remove every reference to it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
Avi Kivity
3095115744 memory: iommu support
Add a new memory region type that translates addresses it is given,
then forwards them to a target address space.  This is similar to
an alias, except that the mapping is more flexible than a linear
translation and truncation, and also less efficient since the
translation happens at runtime.

The implementation uses an AddressSpace mapping the target region to
avoid hierarchical dispatch all the way to the resolved region; only
iommu regions are looked up dynamically.
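
A minimal sketch of the idea, assuming a 2013-era shape of the IOMMU hooks;
the struct and field names are best-effort reconstructions and may differ
from the real API, and lookup_in_device_table() is a hypothetical
device-specific helper:

  /* Sketch: an IOMMU MemoryRegion resolves addresses at run time through a
   * translate callback instead of a fixed linear alias. */
  static IOMMUTLBEntry my_iommu_translate(MemoryRegion *iommu, hwaddr addr,
                                          bool is_write)
  {
      IOMMUTLBEntry entry = {
          .iova            = addr & ~0xfffULL,  /* guest address, page aligned */
          .translated_addr = lookup_in_device_table(addr) & ~0xfffULL,
          .addr_mask       = 0xfff,             /* 4K translation granule */
          .perm            = IOMMU_RW,          /* is_write ignored in sketch */
      };
      return entry;
  }

  static const MemoryRegionIOMMUOps my_iommu_ops = {
      .translate = my_iommu_translate,
  };

  /* Registered with something like:
   *   memory_region_init_iommu(&mr, &my_iommu_ops, "my-iommu", UINT64_MAX);
   */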

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
[Modified to put translation in address_space_translate; assume
 IOMMUs are not reachable from TCG. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
Paolo Bonzini
052e87b073 memory: make section size a 128-bit integer
So far, the size of all regions passed to listeners could fit in 64 bits,
because artificial regions (containers and aliases) are eliminated by
the memory core, leaving only device regions, which have reasonable sizes.

An IOMMU, however, cannot be eliminated by the memory core, and may have
an artificial size; hence we may need 65 bits to represent its size.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
Paolo Bonzini
733d5ef527 exec: reorganize mem_add to match Int128 version
When adding support for 2^64-byte sections, we will have to change
the structure of mem_add to avoid failures in int128_get64.
Reorganize the code now before introducing Int128.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
Paolo Bonzini
99b9cc0679 Revert "memory: limit sections in the radix tree to the actual address space size"
This reverts commit 86a8623692.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Paolo Bonzini
5c8a00ce18 exec: return MemoryRegion from address_space_translate
Only address_space_translate_for_iotlb needs to return the section.
Every caller of address_space_translate now uses only section->mr,
so return it directly.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
acc9d80b26 exec: Implement subpage_read/write via address_space_rw
This will allow adding support for unaligned memory regions: the subpage
container region can activate unaligned support unconditionally because
the read/write handler will now ensure that accesses are split as
required by calling address_space_rw. We can furthermore drop the
special handling of RAM subpages, address_space_rw takes care of this
already.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
90260c6c09 exec: Resolve subpages in one step except for IOTLB fills
Except for the case of setting the IOTLB entry in TCG mode, we can avoid
the subpage dispatching handlers and do the resolution directly on
address_space_lookup_region. An IOTLB entry describes a full page, not
only the region that the first access to a sub-divided page may return.

This patch therefore introduces a special translation function,
address_space_translate_for_iotlb, that avoids the subpage resolutions.
In contrast, callers of the existing address_space_translate service
will now always receive the terminal memory region section. This will be
important for breaking the BQL and for enabling unaligned memory regions.
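
After the series as a whole, the two entry points look roughly like this;
the prototypes are paraphrased from the descriptions in this log and should
be treated as approximations rather than the exact declarations:

  /* Regular accesses resolve subpages down to the terminal memory region. */
  MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
                                        hwaddr *xlat, hwaddr *plen,
                                        bool is_write);

  /* The IOTLB fill path stops at the section, because a TLB entry must
   * describe the full page rather than one sub-divided piece of it. */
  MemoryRegionSection *address_space_translate_for_iotlb(AddressSpace *as,
                                                         hwaddr addr,
                                                         hwaddr *xlat,
                                                         hwaddr *plen);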

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
f52cc46742 exec: Allow unaligned address_space_rw
This will be needed for some corner cases with para-virtual I/O ports.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Paolo Bonzini
1db8abb102 memory: move private types to exec.c
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
9f029603ab memory: Introduce address_space_lookup_region
This introduces a wrapper for phys_page_find (before we complicate
address_space_translate with IOMMU translation).  This function will
also encapsulate locking and reference counting when we introduce
BQL-free dispatching.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Peter Maydell
3752a03648 exec.c: address_space_translate: handle access to addr 0 of 2^64 sized region
The memory API allows a MemoryRegion's size to be 2^64, as a special
case (otherwise the size always fits in a 64 bit integer). This meant
that attempts to access address zero in a 2^64 sized region would
assert in address_space_translate():

  #3  0x00007ffff3e4d192 in __GI___assert_fail (assertion=0x555555a43f32
    "!a.hi", file=0x555555a43ef0 "include/qemu/int128.h", line=18,
    function=0x555555a4439f "int128_get64") at assert.c:103
  #4  0x0000555555877642 in int128_get64 (a=...)
    at include/qemu/int128.h:18
  #5  0x00005555558782f2 in address_space_translate (as=0x55555668d140,
    addr=0, xlat=0x7fffafac9918, plen=0x7fffafac9920, is_write=false)
    at exec.c:221

Fix this by doing the 'min' operation in 128-bit arithmetic
rather than 64-bit arithmetic (we know the result of the 'min'
definitely fits in 64 bits because one of the inputs did).
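
The fix amounts to something like the following, sketched from the
description above using QEMU's Int128 helpers rather than quoted from the
literal patch (section_size and section_start stand in for the fields of
the resolved section):

  /* Compute how much of the (possibly 2^64-sized) section is left after
   * 'addr' in 128-bit arithmetic, then clamp *plen; the result fits in
   * 64 bits because the incoming *plen did. */
  Int128 diff = int128_sub(section_size,
                           int128_make64(addr - section_start));
  *plen = int128_get64(int128_min(int128_make64(*plen), diff));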

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Paolo Bonzini
fd8aaa767a memory: add return value to address_space_rw/read/write
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:34 +02:00
Paolo Bonzini
791af8c861 memory: propagate errors on I/O dispatch
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:32 +02:00
Paolo Bonzini
a649b9168c exec: just use io_mem_read/io_mem_write for 8-byte I/O accesses
The memory API is able to split it into two 4-byte accesses.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:29 +02:00
Paolo Bonzini
968a5627c8 memory: correctly handle endian-swapped 64-bit accesses
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:26 +02:00
Paolo Bonzini
51644ab70b memory: add address_space_access_valid
The old-style IOMMU lets you check whether an access is valid in a
given DMAContext.  There is no equivalent for AddressSpace in the
memory API, so implement it with a lookup of the dispatch tree.
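
A hedged example of the check this enables (the exact prototype is an
assumption based on the description):

  /* Ask the memory core whether a DMA window is backed by something that
   * will accept the access, before actually dispatching it. */
  if (!address_space_access_valid(as, buf_addr, buf_len, true /* is_write */)) {
      /* fail the request instead of touching unassigned memory */
      return -1;
  }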

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:16 +02:00
Paolo Bonzini
c353e4cc08 exec: implement .valid.accepts for subpages
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:14 +02:00
Paolo Bonzini
82f2563fc8 exec: introduce memory_access_size
This will be used by address_space_access_valid too.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:08 +02:00
Paolo Bonzini
2bbfa05d20 exec: introduce memory_access_is_direct
After the previous patches, this is a common test for all read/write
functions.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:04 +02:00
Paolo Bonzini
d17d45e95f exec: expect mr->ops to be initialized for ROM
There is no need to use the special phys_section_rom section.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:01 +02:00
Paolo Bonzini
d197063fcf memory: move unassigned_mem_ops to memory.c
reservation_ops is already doing the same thing.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:56 +02:00
Paolo Bonzini
149f54b53b memory: add address_space_translate
Using phys_page_find to translate an AddressSpace to a MemoryRegionSection
is unwieldy.  It requires passing the page index rather than the address,
and later memory_region_section_addr has to be called.  Replace
memory_region_section_addr with a function that does all of it: call
phys_page_find, compute the offset within the region, and check how
big the current mapping is.  This way, a large flat region can be written
with a single lookup rather than a page at a time.

address_space_translate will also provide a single point where IOMMU
forwarding is implemented.
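
In use, the pattern looks roughly like the loop below; access_region() is a
hypothetical helper standing in for the actual read/write dispatch, and the
return type of address_space_translate changed later in this series, so
treat the details as approximate:

  while (len > 0) {
      hwaddr xlat, plen = len;
      MemoryRegion *mr = address_space_translate(as, addr, &xlat, &plen,
                                                 is_write);
      /* one lookup covers the whole contiguous mapping, not one page */
      access_region(mr, xlat, buf, plen, is_write);
      addr += plen;
      buf  += plen;
      len  -= plen;
  }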

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:50 +02:00
Paolo Bonzini
b018ddf633 memory: dispatch unassigned accesses based on .valid.accepts
This provides the basics for detecting accesses to unassigned memory
as soon as they happen, and also for a simple implementation of
address_space_access_valid.
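
A sketch of the mechanism with MemoryRegionOps; the callback signature is an
assumption based on the 2013-era API:

  /* Reject every access, so a reference to unassigned memory is detected
   * at dispatch time instead of silently returning junk. */
  static bool unassigned_accepts(void *opaque, hwaddr addr,
                                 unsigned size, bool is_write)
  {
      return false;
  }

  static const MemoryRegionOps unassigned_ops_sketch = {
      .valid.accepts = unassigned_accepts,
      .endianness = DEVICE_NATIVE_ENDIAN,
  };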

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:47 +02:00
Paolo Bonzini
bf8d516639 exec: do not use error_mem_read
We will soon reach this case when doing (unaligned) accesses that
span partly past the end of memory.  We do not want to crash in
that case.

unassigned_mem_ops and rom_mem_ops are now the same.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:44 +02:00
Paolo Bonzini
0844e00762 exec: make io_mem_unassigned private
There is no reason to avoid a recompile before accessing unassigned
memory.  In the end it will be treated as MMIO anyway.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:41 +02:00
Paolo Bonzini
ae4e43e80f exec: drop useless #if
This code is only compiled for softmmu targets.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:34 +02:00
Paolo Bonzini
2a8e749909 exec: eliminate io_mem_ram
It is never used, the IOTLB always goes through io_mem_notdirty.

In fact in softmmu_template.h, if it were, QEMU would crash just
below the tests, as soon as io_mem_read/write dispatches to
error_mem_read/write.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:21 +02:00
Paolo Bonzini
fd2989341e memory: clean up phys_page_find
Remove the goto.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-24 18:43:54 +02:00
Avi Kivity
86a8623692 memory: limit sections in the radix tree to the actual address space size
The radix tree is statically sized to fit TARGET_PHYS_ADDR_SPACE_BITS.
If a larger memory region is registered, it will overflow.

Fix by limiting any section in the radix tree to the supported size.

This problem was not observed earlier since artificial regions (containers
and aliases) are eliminated by the memory core, leaving only device regions
which have reasonable sizes.  An IOMMU however cannot be eliminated by the
memory core, and may have an artificial size.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
[ Fail the build if TARGET_PHYS_ADDR_SPACE_BITS is too large - Paolo ]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-24 18:43:35 +02:00
Paolo Bonzini
68f3f65b09 memory: assert that PhysPageEntry's ptr does not overflow
While sized to 15 bits in PhysPageEntry, the ptr field is ORed into the
iotlb entries together with a page-aligned pointer.  The ptr field must
not overflow into this page-aligned value; assert that it is smaller than
the page size.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-24 18:42:30 +02:00
Paolo Bonzini
8b0d6711a2 exec: eliminate stq_phys_notdirty
It is not used anywhere.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-24 18:42:27 +02:00
Paolo Bonzini
4f39178b3a exec: eliminate qemu_put_ram_ptr
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-24 18:42:19 +02:00
Paolo Bonzini
bbcfd2913c exec: remove obsolete comment
See how we call memory_region_section_addr two lines below to
convert a physical address to a base address in the region.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-24 18:42:07 +02:00
Paolo Bonzini
e7a09b92b7 osdep: introduce qemu_anon_ram_free to free qemu_anon_ram_alloc-ed memory
We switched from qemu_memalign to mmap(), but did not modify
qemu_vfree() to do a munmap() instead of free().  We could not do that
anyway, because qemu_vfree() frees memory allocated by qemu_{mem,block}align.

Introduce a new function that does the munmap(); luckily, the size is
available in the RAMBlock.
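
The new function presumably reduces to something like this sketch (the real
implementation may add tracing and platform-specific cases):

  #include <stddef.h>
  #include <sys/mman.h>

  /* Free memory obtained from the mmap-based anonymous allocator; the size
   * is taken from the owning RAMBlock by the caller. */
  void qemu_anon_ram_free(void *ptr, size_t size)
  {
      if (ptr) {
          munmap(ptr, size);
      }
  }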

Reported-by: Amos Kong <akong@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Message-id: 1368454796-14989-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2013-05-14 08:53:31 -05:00
Paolo Bonzini
6eebf958ab osdep, kvm: rename low-level RAM allocation functions
This is preparatory to the introduction of a separate freeing API.

Reported-by: Amos Kong <akong@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Message-id: 1368454796-14989-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2013-05-14 08:53:31 -05:00
Michael S. Tsirkin
d6b9e0d60c cpu: Add qemu_for_each_cpu()
Wrapper to avoid open-coded loops and to make CPUState iteration
independent of CPUArchState.
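
Usage looks roughly like this; the prototype is assumed from the description
and count_cpus() is just an illustrative caller:

  /* Count CPUs without open-coding the list walk or touching CPUArchState. */
  static void count_one_cpu(CPUState *cpu, void *data)
  {
      int *n = data;
      (*n)++;
  }

  static int count_cpus(void)
  {
      int n = 0;
      qemu_for_each_cpu(count_one_cpu, &n);
      return n;
  }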

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-01 13:04:18 +02:00
Paolo Bonzini
0d09e41a51 hw: move headers to include/
Many of these should be cleaned up with proper qdev-/QOM-ification.
Right now there are many catch-all headers in include/hw/ARCH depending
on cpu.h, and this makes it necessary to compile these files per-target.
However, fixing this does not belong in these patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-04-08 18:13:10 +02:00
Stefan Hajnoczi
49cd9ac6a1 exec: assert that RAMBlock size is non-zero
find_ram_offset() does not handle size=0 gracefully.  It hands out the
same RAMBlock offset multiple times, leading to obscure failures later
on.

Add an assert to warn early if something is incorrectly allocating a
zero size RAMBlock.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2013-03-26 21:02:17 +02:00
Anthony Liguori
3d34a4110c Merge remote-tracking branch 'afaerber/qom-cpu' into staging
# By Andreas Färber (16) and Igor Mammedov (1)
# Via Andreas Färber
* afaerber/qom-cpu:
  target-lm32: Update VMStateDescription to LM32CPU
  target-arm: Override do_interrupt for ARMv7-M profile
  cpu: Replace do_interrupt() by CPUClass::do_interrupt method
  cpu: Pass CPUState to cpu_interrupt()
  exec: Pass CPUState to cpu_reset_interrupt()
  cpu: Move halted and interrupt_request fields to CPUState
  target-cris/helper.c: Update Coding Style
  target-i386: Update VMStateDescription to X86CPU
  cpu: Introduce cpu_class_set_vmsd()
  cpu: Register VMStateDescription through CPUState
  stubs: Add a vmstate_dummy struct for CONFIG_USER_ONLY
  vmstate: Make vmstate_register() static inline
  target-sh4: Move PVR/PRR/CVR into SuperHCPUClass
  target-sh4: Introduce SuperHCPU subclasses
  cpus: Replace open-coded CPU loop in qmp_memsave() with qemu_get_cpu()
  monitor: Use qemu_get_cpu() in monitor_set_cpu()
  cpu: Fix qemu_get_cpu() to return NULL if CPU not found
2013-03-14 14:50:58 -05:00
Peter Feiner
8ca761f661 exec: make -mem-path filenames deterministic
Adds ramblocks' names to their backing files when using -mem-path.  Eases
introspection and debugging.

Signed-off-by: Peter Feiner <peter@gridcentric.ca>
Message-id: 1362423265-15855-1-git-send-email-peter@gridcentric.ca
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2013-03-12 13:42:52 -05:00
Andreas Färber
c3affe5670 cpu: Pass CPUState to cpu_interrupt()
Move it to qom/cpu.h to avoid issues with include order.

Change pc_acpi_smi_interrupt() opaque to X86CPU.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:55 +01:00
Andreas Färber
d8ed887bdc exec: Pass CPUState to cpu_reset_interrupt()
Move it to qom/cpu.c to avoid build failures depending on include order
of cpu-qom.h and exec/cpu-all.h.

Change opaques of various ..._irq_handler() functions to the
appropriate CPU type to facilitate using cpu_reset_interrupt().

Fix Coding Style issues while at it (missing braces, indentation).

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:55 +01:00
Andreas Färber
259186a7d2 cpu: Move halted and interrupt_request fields to CPUState
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.

Pass PowerPCCPU to kvmppc_handle_halt().

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:55 +01:00
Andreas Färber
b170fce3dd cpu: Register VMStateDescription through CPUState
In comparison to DeviceClass::vmsd, CPU VMState is split in two,
"cpu_common" and "cpu", and uses cpu_index as instance_id instead of -1.
Therefore add a CPU-specific CPUClass::vmsd field.

Unlike the legacy CPUArchState registration, rather register CPUState.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
2013-03-12 10:35:54 +01:00
Igor Mammedov
d76fddaeee cpu: Fix qemu_get_cpu() to return NULL if CPU not found
Commit 55e5c2850 breaks the CPU-not-found return value, and returns
the CPU corresponding to the last non-NULL env.
Fix it by returning the CPU only if its env is not NULL; otherwise the CPU
is not found and the function should return NULL.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12 10:35:53 +01:00
Peter Maydell
378df4b237 Handle CPU interrupts by inline checking of a flag
Fix some of the nasty TCG race conditions and crashes by implementing
cpu_exit() as setting a flag which is checked at the start of each TB.
This avoids crashes if a thread or signal handler calls cpu_exit()
while the execution thread is itself modifying the TB graph (which
may happen in system emulation mode as well as in linux-user mode
with a multithreaded guest binary).

This fixes the crashes seen in LP:668799; however there are another
class of crashes described in LP:1098729 which stem from the fact
that in linux-user with a multithreaded guest all threads will
use and modify the same global TCG data structures (including the
generated code buffer) without any kind of locking. This means that
multithreaded guest binaries are still in the "unsupported"
category.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-03-03 14:28:47 +00:00
Andreas Färber
907a5e32f2 cputlb: Pass CPUState to cpu_unlink_tb()
CPUArchState is no longer needed.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-02-16 14:51:00 +01:00
Andreas Färber
fcd7d0034b cpu: Move exit_request field to CPUState
Since it was located before the breakpoints field, it needs to be reset.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-02-16 14:51:00 +01:00
Stefan Weil
e4ada48242 Replace non-portable asprintf by g_strdup_printf
g_strdup_printf already handles OOM errors, so some error handling in
QEMU code can be removed.
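
For reference, the replacement pattern; the g_strdup_printf() call is plain
glib, while make_path() is just an illustrative wrapper:

  #include <glib.h>

  static char *make_path(const char *dir, int index)
  {
      /* g_strdup_printf() aborts on out-of-memory itself, so no
       * asprintf-style error check is needed; free with g_free(). */
      return g_strdup_printf("%s/%d", dir, index);
  }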

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-19 10:24:43 +00:00
Andreas Färber
38d8f5c84e exec: Return CPUState from qemu_get_cpu()
Move the declaration to qemu/cpu.h and add documentation.
The implementation still depends on CPUArchState for CPU iteration.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-01-15 04:09:14 +01:00
Andreas Färber
55e5c28502 cpu: Move cpu_index field to CPUState
Note that target-alpha accesses this field from TCG, now using a
negative offset. Therefore the field is placed last in CPUState.

Pass PowerPCCPU to [kvm]ppc_fixup_cpu() to facilitate this change.

Move common parts of mips cpu_state_reset() to mips_cpu_reset().

Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
[AF: Rebased onto ppc CPU subclasses and openpic changes]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-01-15 04:09:13 +01:00
Andreas Färber
1b1ed8dc40 cpu: Move numa_node field to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-01-15 04:09:13 +01:00
Paolo Bonzini
5708fc6655 stubs: fully replace qemu-tool.c and qemu-user.c
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-01-12 17:19:08 +01:00
Blue Swirl
8e4a424b30 Revert "virtio-pci: replace byte swap hack"
This reverts commit 9807caccd6.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-06 18:30:17 +00:00
Blue Swirl
9807caccd6 virtio-pci: replace byte swap hack
Remove byte swaps by declaring the config space
as native endian.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-01-06 08:24:26 +00:00
Umesh Deshpande
b2a8658ef5 protect the ramlist with a separate mutex
Add the new mutex that protects shared state between ram_save_live
and the iothread.  If the iothread mutex has to be taken together
with the ramlist mutex, the iothread mutex shall always be taken _outside_.
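
A sketch of that ordering; the ramlist lock helpers are assumptions based on
this description, while qemu_mutex_lock_iothread() is the existing BQL helper:

  /* When both locks are needed: iothread (BQL) first, ramlist second,
   * released in reverse order. */
  qemu_mutex_lock_iothread();
  qemu_mutex_lock_ramlist();
  /* ... walk or modify the RAM block list ... */
  qemu_mutex_unlock_ramlist();
  qemu_mutex_unlock_iothread();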

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
2012-12-20 23:08:47 +01:00
Umesh Deshpande
f798b07f51 add a version number to ram_list
This will be used to detect if last_block might have become invalid
across different calls to ram_save_live.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Umesh Deshpande <udeshpan@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
2012-12-20 23:08:47 +01:00
Paolo Bonzini
abb26d63e7 exec: sort the memory from biggest to smallest
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20 23:08:47 +01:00
Paolo Bonzini
a3161038a1 exec: change RAM list to a TAILQ
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20 23:08:47 +01:00
Paolo Bonzini
0d6d3c87a2 exec: change ramlist from MRU order to a 1-item cache
Most of the time, only 2 items will be active (from/to for a string operation,
or code/data).  But TCG guests likely won't have gigabytes of memory, so
this actually goes down to 1 item.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20 23:08:40 +01:00
Paolo Bonzini
9c17d615a6 softmmu: move include files to include/sysemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:32:45 +01:00
Paolo Bonzini
1de7afc984 misc: move include files to include/qemu/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:32:39 +01:00
Paolo Bonzini
022c62cbbc exec: move include files to include/exec/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:31:31 +01:00
Paolo Bonzini
077805fa92 janitor: do not rely on indirect inclusions of or from qemu-char.h
Various header files rely on qemu-char.h including qemu-config.h or
main-loop.h, but they really do not need qemu-char.h at all (particularly
interesting is the case of the block layer!).  Clean this up, and also
add missing inclusions of qemu-char.h itself.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19 08:29:52 +01:00
Blue Swirl
5b6dd8683d exec: move TB handling to translate-all.c
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-16 08:28:41 +00:00
Blue Swirl
5a3165263a exec: extract TB watchpoint check
Will be moved by the next patch.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-16 08:28:29 +00:00
Blue Swirl
44209fc4ed exec: fix coding style
Fix coding style in areas to be moved by later patches.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-16 08:28:16 +00:00
Richard Henderson
0be4835b49 exec: Advise huge pages for the TCG code gen buffer
After allocating 32MB or more of contiguous memory, huge pages
would seem to be ideal.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-08 14:18:37 +00:00
Peter Maydell
9e11908f12 dma: Define dma_context_memory and use in sysbus-ohci
Define a new global dma_context_memory which is a DMAContext corresponding
to the global address_space_memory AddressSpace. This can be used by
sysbus peripherals like sysbus-ohci which need to do DMA.

In particular, use it in the sysbus-ohci device, which fixes a
segfault when attempting to use that device.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
2012-11-12 16:44:57 +01:00
Blue Swirl
ef84755ebb Merge branch 'trivial-patches' of git://github.com/stefanha/qemu
* 'trivial-patches' of git://github.com/stefanha/qemu:
  pc: Drop redundant test for ROM memory region
  exec: make some functions static
  target-ppc: make some functions static
  ppc: add missing static
  vnc: add missing static
  vl.c: add missing static
  target-sparc: make do_unaligned_access static
  m68k: Return semihosting errno values correctly
  cadence_uart: More debug information

Conflicts:
	target-m68k/m68k-semi.c
2012-11-03 12:55:05 +00:00
Yeongkyoon Lee
fdbb84d133 tcg: Add extended GETPC mechanism for MMU helpers with ldst optimization
Add GETPC_EXT, which is used by MMU helpers to selectively calculate the code
address of the guest memory access when called from qemu_ld/st optimized code
or a C function. Currently, it supports only i386 and x86-64 hosts.

Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-03 09:44:20 +00:00
Blue Swirl
8b9c99d9dc exec: make some functions static
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2012-11-01 19:49:45 +01:00
Andreas Färber
9f09e18a6d cpu: Move thread_id to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-10-31 04:12:23 +01:00
Andreas Färber
c08d7424d6 cpus: Pass CPUState to qemu_cpu_kick()
CPUArchState is no longer needed there.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-10-31 01:02:45 +01:00
Andreas Färber
60e82579c7 cpus: Pass CPUState to qemu_cpu_is_self()
Change return type to bool, move to include/qemu/cpu.h and
add documentation.

Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
[AF: Updated new caller qemu_in_vcpu_thread()]
2012-10-31 01:02:39 +01:00
Avi Kivity
a8170e5e97 Rename target_phys_addr_t to hwaddr
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific).  Replace it with a finger-friendly,
standards conformant hwaddr.

Outstanding patchsets can be fixed up with the command

  git rebase -i --exec 'find -name "*.[ch]"
                        | xargs sed -i "s/target_phys_addr_t/hwaddr/g"' origin

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-10-23 08:58:25 -05:00
Luiz Capitulino
ad0b5321f1 Call MADV_HUGEPAGE for guest RAM allocations
This makes it possible for QEMU to use transparent huge pages (THP)
when transparent_hugepage/enabled=madvise. Otherwise THP is only
used when it's enabled system wide.
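
The underlying mechanism is the standard Linux madvise(2) hint; a
self-contained sketch, not the exact QEMU code path:

  #include <stddef.h>
  #include <sys/mman.h>

  /* Mark an anonymous region as a good THP candidate, which matters when
   * transparent_hugepage/enabled is set to "madvise" rather than "always". */
  static void advise_hugepages(void *area, size_t size)
  {
  #ifdef MADV_HUGEPAGE
      madvise(area, size, MADV_HUGEPAGE);
  #endif
  }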

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-10-22 13:26:34 -05:00
Anthony Liguori
f526f3c315 Merge remote-tracking branch 'quintela/migration-next-20121017' into staging
* quintela/migration-next-20121017: (41 commits)
  cpus: create qemu_in_vcpu_thread()
  savevm: make qemu_file_put_notify() return errors
  savevm: un-export qemu_file_set_error()
  block-migration: handle errors with the return codes correctly
  block-migration:  Switch meaning of return value
  block-migration: make flush_blks() return errors
  buffered_file: buffered_put_buffer() don't need to set last_error
  savevm: Only qemu_fflush() can generate errors
  savevm: make qemu_fill_buffer() be consistent
  savevm: unexport qemu_ftell()
  savevm: unfold qemu_fclose_internal()
  savevm: make qemu_fflush() return an error code
  savevm: Remove qemu_fseek()
  virtio-net: use qemu_get_buffer() in a temp buffer
  savevm: unexport qemu_fflush
  migration: make migrate_fd_wait_for_unfreeze() return errors
  buffered_file: make buffered_flush return the error code
  buffered_file: callers of buffered_flush() already check for errors
  buffered_file: We can access directly to bandwidth_limit
  buffered_file: unfold migrate_fd_close
  ...
2012-10-22 13:26:23 -05:00
Anthony Liguori
d3e2efc5b5 Merge remote-tracking branch 'qemu-kvm/memory/dma' into staging
* qemu-kvm/memory/dma: (23 commits)
  pci: honor PCI_COMMAND_MASTER
  pci: give each device its own address space
  memory: add address_space_destroy()
  dma: make dma access its own address space
  memory: per-AddressSpace dispatch
  s390: avoid reaching into memory core internals
  memory: use AddressSpace for MemoryListener filtering
  memory: move tcg flush into a tcg memory listener
  memory: move address_space_memory and address_space_io out of memory core
  memory: manage coalesced mmio via a MemoryListener
  xen: drop no-op MemoryListener callbacks
  kvm: drop no-op MemoryListener callbacks
  xen_pt: drop no-op MemoryListener callbacks
  vfio: drop no-op MemoryListener callbacks
  memory: drop no-op MemoryListener callbacks
  memory: provide defaults for MemoryListener operations
  memory: maintain a list of address spaces
  memory: export AddressSpace
  memory: prepare AddressSpace for exporting
  xen_pt: use separate MemoryListeners for memory and I/O
  ...
2012-10-22 13:26:07 -05:00
Avi Kivity
83f3c25142 memory: add address_space_destroy()
Since address spaces can be created dynamically by device hotplug, they
can also be destroyed dynamically.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 14:50:08 +02:00
Avi Kivity
ac1970fbe8 memory: per-AddressSpace dispatch
Currently we use a global radix tree to dispatch memory access.  This only
works with a single address space; to support multiple address spaces we
make the radix tree a member of AddressSpace (via an intermediate structure
AddressSpaceDispatch to avoid exposing too many internals).

A side effect is that address_space_io also gains a dispatch table.  When
we remove all the pre-memory-API I/O registrations, we can use that for
dispatching I/O and get rid of the original I/O dispatch.
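
Conceptually, the layout becomes something like the sketch below; the real
structures live in exec.c and carry more state than shown here, so the field
names are only indicative:

  /* Each AddressSpace owns its own dispatch radix tree instead of sharing
   * one global table. */
  typedef struct AddressSpaceDispatch {
      PhysPageEntry phys_map;   /* root of this address space's radix tree */
      /* listener, per-AS section table, ... */
  } AddressSpaceDispatch;

  struct AddressSpace {
      MemoryRegion *root;
      AddressSpaceDispatch *dispatch;
      /* ... */
  };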

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 14:50:08 +02:00
Avi Kivity
f6790af6bc memory: use AddressSpace for MemoryListener filtering
Using the AddressSpace type reduces confusion, as you can't accidentally
supply the MemoryRegion you're interested in.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 14:50:07 +02:00
Avi Kivity
1d71148eac memory: move tcg flush into a tcg memory listener
We plan to make the core listener listen to all address spaces; this
will cause many more flushes than necessary.  Prepare for that by
moving the flush into a tcg-specific listener.

Later we can avoid registering the listener if tcg is disabled.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 14:50:07 +02:00
Avi Kivity
2673a5da25 memory: move address_space_memory and address_space_io out of memory core
With this change, memory.c no longer knows anything about special address
spaces, so it is prepared for AddressSpace based DMA.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 14:50:07 +02:00
Avi Kivity
95d2994a2f memory: manage coalesced mmio via a MemoryListener
Instead of calling a global function on coalesced mmio changes, which
routes the call to kvm if enabled, add coalesced mmio hooks to
MemoryListener and make kvm use that instead.

The motivation is support for multiple address spaces (which means we
need to filter the call on the right address space), but the result
is cleaner as well.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 14:50:00 +02:00
Richard Henderson
74d590c8e9 exec: Make MIN_CODE_GEN_BUFFER_SIZE private to exec.c
It is used nowhere else, and the corresponding MAX_CODE_GEN_BUFFER_SIZE
also lives there.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 07:54:04 +00:00
Richard Henderson
4438c8a946 exec: Allocate code_gen_prologue from code_gen_buffer
We had a hack for arm and sparc, allocating code_gen_prologue in a
special section, which honestly does no good in certain cases.
We've already got limits on code_gen_buffer_size to ensure that all
TBs can use direct branches between themselves; reuse this limit to
ensure the prologue is also reachable.

As a bonus, we get to avoid marking a page of the main executable's
data segment as executable.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 07:54:04 +00:00
Richard Henderson
405def1846 exec: Do not use absolute address hints for code_gen_buffer with -fpie
The hard-coded addresses inside alloc_code_gen_buffer only make sense
if we're building an executable that will actually run at the address
we've put into the linker scripts.

When we're building with -fpie, the executable will run at some
random location chosen by the kernel.  We get better placement for
the code_gen_buffer if we allow the kernel to place the memory,
as it will tend to to place it near the executable, based on the
PROT_EXEC bit.

Since code_gen_prologue is always inside the executable, this effect
is easily seen at the end of most TBs, with the exit_tb opcode, and
with any calls to helper functions.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 07:54:04 +00:00
Richard Henderson
3d85a72fd8 exec: Don't make DEFAULT_CODE_GEN_BUFFER_SIZE too large
For ARM we cap the buffer size to 16MB.  Do not allocate 32MB in that case.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 07:54:04 +00:00
Richard Henderson
f1bc0bcc9d exec: Split up and tidy code_gen_buffer
It now consists of:

A macro definition of MAX_CODE_GEN_BUFFER_SIZE with host-specific values,

A function size_code_gen_buffer that applies most of the reasoning for
choosing a buffer size,

Three variations of a function alloc_code_gen_buffer that contain all
of the logic for allocating executable memory via a given allocation
mechanism.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 07:54:04 +00:00
Juan Quintela
652d7ec291 ram: Export last_ram_offset()
It is the only way of knowing the RAM size.

Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2012-10-17 18:34:58 +02:00
Avi Kivity
9a2c913b77 memory: drop no-op MemoryListener callbacks
Removes quite a bit of useless code.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-15 11:43:07 +02:00
Avi Kivity
7762c2c1e0 memory: rename 'exec-obsolete.h'
exec-obsolete.h used to hold pre-memory-API functions that were used from
device code prior to the transition to the memory API.  Now that the
transition is complete, the name no longer describes the file.  The
functions still need to be merged better into the memory core, but there's
no danger of anyone using them.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-15 11:43:05 +02:00
Peter Maydell
6fd2a026fb cpu_dump_state: move DUMP_FPU and DUMP_CCOP flags from x86-only to generic
Move the DUMP_FPU and DUMP_CCOP flags for cpu_dump_state() from being
x86-specific flags to being generic ones. This allows us to drop some
TARGET_I386 ifdefs in various places, and means that we can (potentially)
be more consistent across architectures about which monitor commands or
debug abort printouts include FPU register contents and info about
QEMU's condition-code optimisations.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2012-10-05 15:04:43 +01:00
Anthony PERARD
e226939de5 exec, memory: Call to xen_modified_memory.
This patch adds some calls to xen_modified_memory to notify Xen about dirty bits
during migration.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Avi Kivity <avi@redhat.com>
2012-10-03 13:49:22 +00:00
Anthony PERARD
51d7a9eb2b exec: Introduce helper to set dirty flags.
This new helper/hook is used in the next patch to add an extra call in a single
place.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Avi Kivity <avi@redhat.com>
2012-10-03 13:49:05 +00:00
Richard Henderson
9b9c37c364 tcg-sparc: Assume v9 cpu always, i.e. force v8plus in 32-bit mode.
Current code doesn't actually work in 32-bit mode at all.  Since
no one really noticed, drop the complication of v7 and v8 cpus.
Eliminate the --sparc_cpu configure option and standardize macro
testing on TCG_TARGET_REG_BITS / HOST_LONG_BITS.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:16 +02:00
Richard Henderson
d5dd696fe3 tcg-sparc: Don't MAP_FIXED on top of the program
The address we pick in sparc64.ld is also 0x60000000, so doing a fixed map
on top of that is guaranteed to blow up.  Choosing 0x40000000 is exactly
right for the max of code_gen_buffer_size set below.

No need to ever use MAP_FIXED.  While getting our desired address helps
optimize the generated code, we won't fail if we don't get it.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 22:02:16 +02:00
David Gibson
0b57e28713 cpu_physical_memory_write_rom() needs to do TB invalidates
cpu_physical_memory_write_rom(), despite the name, can also be used to
write images into RAM - and will often be used that way if the machine
uses load_image_targphys() into RAM addresses.

However, cpu_physical_memory_write_rom(), unlike cpu_physical_memory_rw(),
doesn't invalidate any cached TBs which might be affected by the region
written.

This was breaking reset (under full emu) on the pseries machine: we loaded
our firmware image into RAM and, while executing, it rewrote the code at
the entry point (correctly causing a TB invalidate/refresh).  When we
reset, the firmware image was reloaded, but the TB from the rewrite was
still active and caused us to get an illegal instruction trap.

This patch fixes the bug by duplicating the tb invalidate code from
cpu_physical_memory_rw() in cpu_physical_memory_write_rom().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-09-17 10:18:48 -05:00
Luiz Capitulino
8490fc78e7 add -machine mem-merge=on|off option
It allows disabling memory merge support (KSM on Linux), which is
otherwise enabled by default.
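
Usage (the option name comes from this commit's subject; the machine type
and binary are just examples):

  qemu-system-x86_64 -machine pc,mem-merge=off ...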

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-09-17 10:18:47 -05:00
Jason Baron
ddb97f1deb memory: add -machine dump-guest-core=on|off
Add a new '[,dump-guest-core=on|off]' option to the '-machine' option. When
'dump-guest-core=off' is specified, guest memory is omitted from the core dump.
The default behavior continues to be to include guest memory when a core dump is
triggered. In my testing, this brought the core dump size down from 384MB to 6MB
on a 2GB guest.

Is anything additional required to preserve this setting for migration or
savevm? I don't believe so.
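
Usage (option name from this commit; the rest of the command line is just an
example):

  qemu-system-x86_64 -machine pc,dump-guest-core=off -m 2G ...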

Changelog:
v3:
    Eliminate globals as per Anthony's suggestion
    set no dump from qemu_ram_remap() as well
v2:
    move the option from -m to -machine, rename option dump -> dump-guest-core

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-08-16 13:41:15 -05:00
Igor Mitsyanko
5fda043f9c exec.c: fix dirty bitmap reallocation
For each newly created RAM block, the dirty bitmap is reallocated with g_realloc, which
makes no promises about the initial content of the new extra data in the returned buffer.
In theory, we initialize this new data with a cpu_physical_memory_set_dirty_range() call.
The problem is that cpu_physical_memory_set_dirty_range() has the side effect of
incrementing the ram_list.dirty_pages variable, but only for pages which are not already
dirty. And page "cleanliness" is determined using the same not-yet-initialized dirty
bitmap we have just reallocated. This results in an inconsistency between the real dirty
page count and the value in ram_list.dirty_pages, which in turn could (and will) result
in errors during VM migration.
Zero-initialize the new dirty bitmap bytes to fix this problem.

Signed-off-by: Igor Mitsyanko <i.mitsyanko@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-08-11 12:23:46 +00:00
Peter Maydell
c308efe63a exec.c: Remove out of date comment
Remove an out of date comment: this comment used to be attached to
cpu_register_physical_memory_log(), before commit 0f0cb164 accidentally
inserted a couple of other functions between the comment and its function.
It is in any case obsolete since (a) the function arguments it refers
to have been replaced with a single MemoryRegionSection* argument and
(b) the inability to handle regions whose offset_within_address_space
and offset_within_region aren't equally aligned was fixed as part of
the rewrite of this code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 14:25:22 +01:00
Tyler Hall
69b67646bc exec.c: Use subpages for large unaligned mappings
Registering a multi-page memory region that is non-page-aligned results
in a subpage from the start to the page boundary, some number of full
pages, and possibly another subpage from the last page boundary to the
end. The full pages will have a value for offset_within_region that is
not a multiple of TARGET_PAGE_SIZE. Accesses through softmmu are unable
to handle this and will segfault.

Handling full pages through subpages is not optimal, but only
non-page-aligned mappings take the penalty.

Signed-off-by: Tyler Hall <tylerwhall@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 14:25:22 +01:00
Tyler Hall
adb2a9b5d4 exec.c: Fix off-by-one error in register_subpage
subpage_register() expects "end" to be the last byte in the mapping.
Registering a non-page-aligned memory region that extends up to or
beyond a page boundary causes subpage_register() to silently fail
through the (end >= PAGE_SIZE) check.

This bug does not cause noticeable problems for mappings that do not
extend to a page boundary, though they do register an extra byte.

Signed-off-by: Tyler Hall <tylerwhall@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 14:25:22 +01:00
Anthony Liguori
09f06a6c60 Merge remote-tracking branch 'qemu-kvm/uq/master' into staging
* qemu-kvm/uq/master:
  virtio: move common irqfd handling out of virtio-pci
  virtio: move common ioeventfd handling out of virtio-pci
  event_notifier: add event_notifier_set_handler
  memory: pass EventNotifier, not eventfd
  ivshmem: wrap ivshmem_del_eventfd loops with transaction
  ivshmem: use EventNotifier and memory API
  event_notifier: add event_notifier_init_fd
  event_notifier: remove event_notifier_test
  event_notifier: add event_notifier_set
  apic: Defer interrupt updates to VCPU thread
  apic: Reevaluate pending interrupts on LVT_LINT0 changes
  apic: Resolve potential endless loop around apic_update_irq
  kvm: expose tsc deadline timer feature to guest
  kvm_pv_eoi: add flag support
  kvm: Don't abort on kvm_irqchip_add_msi_route()
2012-07-18 14:44:43 -05:00
Paolo Bonzini
753d5e14c4 memory: pass EventNotifier, not eventfd
Under Win32, EventNotifiers will not have event_notifier_get_fd, so we
cannot call it in common code such as hw/virtio-pci.c.  Pass a pointer to
the notifier, and only retrieve the file descriptor in kvm-specific code.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-07-12 14:08:10 +03:00
Christian Borntraeger
fdec991857 s390: autodetect map private
By default qemu will use MAP_PRIVATE for guest pages. This will write-protect
pages and thus break on s390 systems that don't support this feature.
Therefore qemu has a hack to always use MAP_SHARED for s390. But MAP_SHARED
has other problems (no dirty page tracking, a lot more swap overhead, etc.).
Newer systems allow the distinction via KVM_CAP_S390_COW. With this feature
qemu can use the standard qemu alloc if available; otherwise it will use
the old s390 hack.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-07-10 18:27:33 +02:00
Juan Quintela
1720aeee72 dirty bitmap: abstract its use
Always use accessors to read/set the dirty bitmap.

Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-06-29 13:31:07 +02:00
Juan Quintela
d24981d37e Only TCG needs TLB handling
Refactor the code that is only needed for tcg into a static function.
Call that only when tcg is enabled.  We can't refactor to a dummy
function in the kvm case, as qemu can be compiled at the same time
with tcg and kvm.

Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-06-29 13:27:28 +02:00
Blue Swirl
5726c27fa9 qemu-log: move logging to qemu-log.c
Move logging functions from exec.c to qemu-log.c,
compile it only once.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-06-21 18:45:16 +00:00
Anthony Liguori
09e5ab6360 qdev: Use wrapper for qdev_get_path
This makes it easier to remove it from BusInfo.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Drop now unnecessary NULL initialization in scsibus_get_dev_path()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-06-18 15:14:38 +02:00
Anthony Liguori
3525c42fd3 Merge remote-tracking branch 'stefanha/trivial-patches' into staging
* stefanha/trivial-patches:
  configure: report missing libraries for virtfs
  trace/simple.c: fix deprecated glib2 interface
  Clarify comments of tb_invalidate_phys_[page_]range
2012-06-11 12:15:51 -05:00
Max Filippov
9d70c4b7b8 exec: fix TB invalidation after breakpoint insertion/deletion
tb_invalidate_phys_addr has to be called with the exact physical address of
the breakpoint we add/remove, not just the page's base address.
Otherwise we easily fail to flush the right TB.

This breakage was introduced by commit f3705d5329 ("memory: make
phys_page_find() return an unadjusted section").

This appeared to work for some guest architectures because their
cpu_get_phys_page_debug implementation returns full translated physical
address, not just the base of the TARGET_PAGE_SIZE-sized page.

Reported-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-06-09 10:49:19 +00:00
Jan Kiszka
8e0fdce32d Clarify comments of tb_invalidate_phys_[page_]range
They could suggest that all TBs of the page containing the range would
be invalidated.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-06-08 09:32:26 +01:00
Wen Congyang
76f3553883 Add API to check whether a physical address is I/O address
This API will be used in the following patch.

Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
2012-06-04 13:49:33 -03:00
Alexander Graf
77a8f1a512 linux-user: Fix stale tbs after mmap
If we execute linux-user code that does the following:

  * A = mmap()
  * execute code in A
  * munmap(A)
  * B = mmap(), but mmap returns the same address as A
  * execute code in B

we end up executing a stale cached tb that contains translated code
from A, while we want new code from B.

This patch adds a TB flush for mmap'ed regions, before we return them,
avoiding the whole issue. It also adds a flush for munmap, so that we
don't execute stale TBs instead of getting a segfault.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-19 15:49:40 +00:00
Blue Swirl
fd06257351 memory: move functions is_romd and section_addr to memory API
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:07 +00:00
Blue Swirl
cc5bea608d cputlb: prepare private memory API for public consumption
Fold is_ram_rom and is_ram_rom_romd() into callers.

Change is_romd() and section_addr() to take MemoryRegion
instead of MemoryRegionSection for consistency and
use memory_region_ prefix.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:05 +00:00
Blue Swirl
0cac1b66c8 cputlb: move TLB handling to a separate file
Move TLB handling and softmmu code load helpers to cputlb.c,
compile only for softmmu targets.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:04 +00:00
Blue Swirl
e554861766 exec: prepare for splitting
Make s_cputlb_empty_entry 'const'.

Rename tlb_flush_jmp_cache() to tb_flush_jmp_cache().

Refactor code to add cpu_tlb_reset_dirty_all(),
memory_region_section_get_iotlb() and
memory_region_is_unassigned().

Remove unused cpu_tlb_update_dirty().

Fix coding style in areas to be moved.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:02 +00:00
Stefan Weil
8efe0ca83e w64: Use uintptr_t in exec.c
Replace all type casts to 'long' or 'unsigned long' by 'intptr_t' or 'uintptr_t'.

For type casts which are only used to extract the lower bits of an address
or to modify those bits, signedness does not matter. There I always use 'uintptr_t'.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:17 +02:00
Stefan Weil
6840981dfb w64: Use larger alignment for section with generated code
The MinGW-w64 compiler allows __attribute__((aligned (32))).

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:16 +02:00
Stefan Weil
c6d506742f w64: Fix data types in cpu-all.h, exec.c
w64 needs uintptr_t instead of unsigned long.
For other hosts, nothing changes.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:16 +02:00
Max Filippov
1e7855a558 exec: provide tb_invalidate_phys_addr function
Allow TB invalidation by its physical address, extract implementation
from the breakpoint_invalidate function.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 15:25:36 +00:00
Blue Swirl
2050396801 Use uintptr_t for various op related functions
Use uintptr_t instead of void * or unsigned long in
several op related functions, env->mem_io_pc and
GETPC() macro.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 14:23:37 +00:00
Stefan Weil
6375e09e79 w64: Fix data type of tb_next and other variables used for host addresses
QEMU host addresses must use uintptr_t to be portable for hosts with
an unusual size of long (w64).

tb_jmp_offset is a uint16_t value, therefore the local variable offset
in function tb_set_jmp_target was changed from unsigned long to uint16_t.

The type cast to long in function tb_add_jump now also uses uintptr_t.
For the bit operation used here, the signedness of the type cast does
not matter.

Some remaining unsigned long values are either only used for ARM assembler
code or will be fixed in a later patch for PPC.

v2:
Fix signature of tb_find_pc in exec.c, too (hint from Blue Swirl, thanks).
There remain lots of other long / unsigned long in exec.c which must be
replaced by uintptr_t. This will be done in a separate patch. Here
only one of these type casts is fixed.

v3:
Also fix signature of page_unprotect.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-07 11:27:45 +00:00
Richard Henderson
813da6277c tcg: Use the GDB JIT debugging interface.
This allows us to generate unwind info for the dynamically generated
code in the code_gen_buffer.  Only i386 is converted at this point.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-24 13:07:48 +00:00
Anthony PERARD
0a1b357f15 exec: fix guest memory access for Xen
In cpu_physical_memory_rw, a change was introduced so that qemu_get_ram_ptr is
no longer called with the ram addr we want to access, but only with the
section address. This patch fixes this. (All other calls to qemu_get_ram_ptr are
already made with the right address.)

This patch fixes Xen guest.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 19:13:30 +02:00
Avi Kivity
32b089808f memory: check for watchpoints when getting code ram_addr
The code to get the ram_addr from a (tlb entry, vaddr) pair
checks that the resulting memory is not MMIO, but neglects to
check whether the region is hidden by a watchpoint page.

Add the missing check.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 11:15:01 +02:00
Avi Kivity
7859cc6e39 exec: fix write tlb entry misused as iotlb
A couple of code paths check the lower bits of CPUTLBEntry::addr_write
against io_mem_ram as a way of looking for a dirty RAM page.  This works
by accident since the value is zero, which matches all clear bits for
TLB_INVALID, TLB_MMIO, and TLB_NOTDIRTY (indicating dirty RAM).

Make it work by design by checking for the proper bits.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 11:15:00 +02:00
Blue Swirl
e141ab52d2 softmmu templates: optionally pass CPUState to memory access functions
Optionally, make memory access helpers take a parameter for CPUState
instead of relying on global env.

On most targets, perform simple moves to reorder registers. On i386,
switch from regparm(3) calling convention to standard stack-based
version.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18 12:21:52 +00:00
Andreas Färber
9349b4f9fd Rename CPUState -> CPUArchState
Scripted conversion:
  for file in *.[hc] hw/*.[hc] hw/kvm/*.[hc] linux-user/*.[hc] linux-user/m68k/*.[hc] bsd-user/*.[hc] darwin-user/*.[hc] tcg/*/*.[hc] target-*/cpu.h; do
    sed -i "s/CPUState/CPUArchState/g" $file
  done

All occurrences of CPUArchState are expected to be replaced by QOM CPUState,
once all targets are QOM'ified and common fields have been extracted.

Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
2012-03-14 22:20:27 +01:00
Avi Kivity
97161e177b memory: get rid of cpu_register_io_memory()
The return value of cpu_register_io_memory() is no longer used anywhere, so
we can remove it and all associated data and code.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:16:39 +02:00
Avi Kivity
37ec01d433 memory: dispatch directly via MemoryRegion
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:06:11 +02:00
Avi Kivity
ce5d64c2d0 exec: fix code tlb entry misused as iotlb in get_page_addr_code()
get_page_addr_code() reads a code tlb entry, but interprets it as an
iotlb entry.  This works by accident since the low bits of a RAM code
tlb entry are clear, and match a RAM iotlb entry.  This accident is
about to unhappen, so fix the code to use an iotlb entry (using the
code entry with TLB_MMIO may fail if the page is a watchpoint).

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 18:54:20 +02:00
Avi Kivity
aa102231f0 memory: store section indices in iotlb instead of io indices
A step towards eliminating io indices.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 17:06:55 +02:00
Avi Kivity
f3705d5329 memory: make phys_page_find() return an unadjusted section
We'd like to store the section index in the iotlb, so we can't
adjust it before returning.  Return an unadjusted section and
instead introduce section_addr(), which does the adjustment later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 16:16:34 +02:00
Avi Kivity
a2d335214a memory: fix I/O port aliases
Commit e58ac72b6a0 ("ioport: change portio_list not to use
memory_region_set_offset()") started using aliases of I/O memory
regions.  Since the IORange used for the I/O was contained in the
target region, the alias information (specifically, the offset
into the region) was lost.  This broke -vga std.

Fix by allocating an independent object to hold the IORange and
also the new offset.

Note that I/O memory regions were conceptually broken wrt aliases
in a different way: an alias can cause the same region to appear
twice in an address space, but we had just one IORange to service it.
This patch fixes that problem as well, since we can now have multiple
IORange/MemoryRegion associations.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 17:40:12 +02:00
Blue Swirl
b3e54c689c Merge branch 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa
* 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa:
  target-xtensa: add breakpoint tests
  target-xtensa: add DEBUG_SECTION to overlay tool
  target-xtensa: add DBREAK data breakpoints
  exec: let cpu_watchpoint_insert accept larger watchpoints
  exec: fix check_watchpoint exiting cpu_loop
  exec: add missing breaks to the watch_mem_write
  target-xtensa: add ICOUNT SR and debug exception
  target-xtensa: implement instruction breakpoints
  target-xtensa: add DEBUGCAUSE SR and configuration
  target-xtensa: fetch 3rd opcode byte only when needed
  target-xtensa: implement info tlb monitor command
  target-xtensa: define TLB_TEMPLATE for MMU-less cores
2012-03-03 17:53:41 +00:00
Avi Kivity
07f07b31e5 memory: allow phys_map tree paths to terminate early
When storing large contiguous ranges in phys_map, all values tend to
be the same pointers to a single MemoryRegionSection.  Collapse them
by marking nodes with level > 0 as leaves.  This reduces tree memory
usage dramatically.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
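
As a rough sketch of the idea above (hypothetical Node type and nibble-indexed tree, not the actual phys_map code): marking an interior node as a leaf lets a lookup stop as soon as the whole subtree is known to map to the same section.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical radix-tree node: a leaf may appear at any level, so a
     * lookup over a large uniform range terminates early instead of
     * descending all the way down. */
    typedef struct Node {
        bool is_leaf;
        union {
            void *section;            /* leaf: shared section pointer */
            struct Node *child[16];   /* interior: children, 4 bits per level */
        } u;
    } Node;

    void *lookup(Node *root, unsigned long index, int levels)
    {
        Node *n = root;
        for (int level = levels - 1; level >= 0; level--) {
            if (n->is_leaf) {
                return n->u.section;  /* early exit: subtree is uniform */
            }
            n = n->u.child[(index >> (level * 4)) & 0xf];
            if (n == NULL) {
                return NULL;          /* hole in the map */
            }
        }
        return n->is_leaf ? n->u.section : NULL;
    }

    int main(void)
    {
        static Node leaf = { .is_leaf = true };
        static Node root;             /* zero-initialized interior node */
        static int dummy_section;

        leaf.u.section = &dummy_section;
        root.u.child[3] = &leaf;      /* everything under nibble 3 is uniform */

        /* resolves in one step, without walking all five levels */
        return lookup(&root, 0x3abcd, 5) == &dummy_section ? 0 : 1;
    }
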
Avi Kivity
c19e8800d4 memory: unify PhysPageEntry::node and ::leaf
They have the same type, unify them.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
2999097bf1 memory: change phys_page_set() to set multiple pages
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
f7bf546118 memory: switch phys_page_set() to a recursive implementation
Setting multiple pages at once requires backtracking to previous
nodes; easiest to achieve via recursion.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
a391843286 memory: replace phys_page_find_alloc() with phys_page_set()
By giving the function the value we want to set, we make it
more flexible for the next patch.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
0f0cb164cc memory: simplify multipage/subpage registration
Instead of considering subpage on a per-page basis, split each section
into a subpage head, multipage body, and subpage tail, and register
each separately.  This simplifies the registration functions.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
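
A standalone sketch of the head/body/tail split described above, with made-up helper names (register_subpage/register_multipage) and a fixed 4K page size assumed for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL
    #define PAGE_MASK (PAGE_SIZE - 1)

    static void register_subpage(uint64_t start, uint64_t size)
    {
        printf("subpage   %#llx +%#llx\n", (unsigned long long)start,
               (unsigned long long)size);
    }

    static void register_multipage(uint64_t start, uint64_t size)
    {
        printf("multipage %#llx +%#llx\n", (unsigned long long)start,
               (unsigned long long)size);
    }

    /* Split [start, start+size) into an unaligned head, a page-aligned
     * body and an unaligned tail, registering each piece separately. */
    static void register_section(uint64_t start, uint64_t size)
    {
        uint64_t end  = start + size;
        uint64_t body = (start + PAGE_MASK) & ~PAGE_MASK; /* first aligned page */
        uint64_t tail = end & ~PAGE_MASK;                 /* trailing partial page */

        if (body > end) {                 /* whole section fits in one page */
            register_subpage(start, size);
            return;
        }
        if (start != body) {
            register_subpage(start, body - start);        /* head */
        }
        if (tail > body) {
            register_multipage(body, tail - body);        /* body */
        }
        if (tail != end) {
            register_subpage(tail, end - tail);           /* tail */
        }
    }

    int main(void)
    {
        register_section(0x1800, 0x5000);   /* head + body + tail example */
        return 0;
    }
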
Avi Kivity
31ab2b4a46 memory: give phys_page_find() its own tree search loop
We'll change phys_page_find_alloc() soon, but phys_page_find()
doesn't need to bear the consequences.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
06ef3525e1 memory: make phys_page_find() return a MemoryRegionSection
We no longer describe memory in terms of individual pages; use sections
throughout instead.

PhysPageDesc no longer used - remove.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
117712c3e4 memory: move tlb flush to MemoryListener commit callback
This way, if we have several changes in a single transaction, we flush just
once.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
717cb7b259 memory: unify the two branches of cpu_register_physical_memory_log()
Identical except that the second branch knows it's not modifying an existing
subpage.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
8636b9295b memory: fix RAM subpages in newly initialized pages
If the first subpage installed in a page is RAM, then we install it as
a full page, instead of a subpage.  Fix by not special casing RAM.

The issue dates to commit db7b5426a4, which introduced subpages.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
d6f2ea22a0 memory: compress phys_map node pointers to 16 bits
Use an expanding vector to store nodes.  Allocation is baroque due to g_renew()
potentially invalidating pointers; this will be addressed later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
5312bd8b31 memory: store MemoryRegionSection pointers in phys_map
Instead of storing PhysPageDesc, store pointers to MemoryRegionSections.
The various offsets (phys_offset & ~TARGET_PAGE_MASK,
PHYS_OFFSET & TARGET_PAGE_MASK, region_offset) can all be synthesized
from the information in a MemoryRegionSection.  Adjust phys_page_find()
to synthesize a PhysPageDesc.

The upshot is that phys_map now contains uniform values, so it's easier
to generate and compress.

The end result is somewhat clumsy but this will be improved as we
propagate MemoryRegionSections throughout the code instead of transforming
them to PhysPageDesc.

The MemoryRegionSection pointers are stored as uint16_t offsets in an
array.  This saves space (when we also compress node pointers) and is
more cache friendly.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
4346ae3e28 memory: unify phys_map last level with intermediate levels
This lays the groundwork for storing leaf data in intermediate levels,
saving space.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
3eef53df6b memory: remove first level of l1_phys_map
L1 and the lower levels in l1_phys_map are equivalent, except that L1 has
a different size, and is always allocated.  Simplify the code by removing
L1.  This leaves us with a tree composed solely of L2 tables, but that
problem can be renamed away later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
54688b1ec1 memory: change memory registration to rebuild the memory map on each change
Instead of incrementally building the memory map, rebuild it every time.
This allows later simplification, since the code need not consider overlaying
a previous mapping.  It is also RCU friendly.

With large memory guests this can get expensive, since the operation is
O(mem size), but this will be optimized later.

As a side effect subpage and L2 leaks are fixed here.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
50c1e1491e memory: support stateless memory listeners
Current memory listeners are incremental; that is, they are expected to
maintain their own state, and receive callbacks for changes to that state.

This patch adds support for stateless listeners; these work by receiving
a ->begin() callback (which tells them that new state is coming), a
sequence of ->region_add() and ->region_nop() callbacks, and then a
->commit() callback which signifies the end of the new state.  They should
ignore ->region_del() callbacks.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:42 +02:00
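
A minimal sketch of the callback protocol described above, with hypothetical Listener and Region types standing in for the real structures: the core announces a complete new map between begin() and commit(), and a stateless listener rebuilds its view from scratch each time.

    #include <stdio.h>

    typedef struct Region { unsigned long start, size; } Region;

    typedef struct Listener {
        void (*begin)(struct Listener *l);
        void (*region_add)(struct Listener *l, const Region *r);
        void (*region_nop)(struct Listener *l, const Region *r);
        void (*commit)(struct Listener *l);
    } Listener;

    static void my_begin(Listener *l)
    {
        (void)l;
        printf("drop old state\n");
    }

    static void my_region_add(Listener *l, const Region *r)
    {
        (void)l;
        printf("add  %#lx +%#lx\n", r->start, r->size);
    }

    static void my_region_nop(Listener *l, const Region *r)
    {
        /* unchanged region: a stateless listener records it like an add */
        (void)l;
        printf("keep %#lx +%#lx\n", r->start, r->size);
    }

    static void my_commit(Listener *l)
    {
        (void)l;
        printf("switch to new state\n");
    }

    int main(void)
    {
        Listener l = { my_begin, my_region_add, my_region_nop, my_commit };
        Region r1 = { 0x0, 0x100000 }, r2 = { 0x100000, 0x1000 };

        l.begin(&l);               /* new state is coming */
        l.region_add(&l, &r1);     /* region present in the new map */
        l.region_nop(&l, &r2);     /* region unchanged from the old map */
        l.commit(&l);              /* new state is complete */
        return 0;
    }
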
Avi Kivity
4855d41a61 memory: split memory listener for the two address spaces
The memory and I/O address spaces do different things, so split them into
two memory listeners.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:42 +02:00
Avi Kivity
7376e5827a memory: allow MemoryListeners to observe a specific address space
Ignore any regions not belonging to a specified address space.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:42 +02:00
Avi Kivity
9363274709 memory: use a MemoryListener for core memory map updates too
This transforms memory.c into a library which can then be unit tested
easily, by feeding it inputs and listening to its outputs.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-02-29 13:44:42 +02:00
Avi Kivity
d7ec83e6b5 memory: don't pass ->readable attribute to cpu_register_physical_memory_log
It can be derived from the MemoryRegion itself (which is why it is not
used there).

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-02-29 13:44:42 +02:00
Max Filippov
0dc23828f1 exec: let cpu_watchpoint_insert accept larger watchpoints
Make cpu_watchpoint_insert accept watchpoints of any power-of-two size
up to the target page size.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2012-02-20 20:07:11 +04:00
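
A small sketch of the kind of length check this implies (assuming a 4K target page size; watchpoint_check is a made-up name, not the QEMU function): accept any power-of-two length up to the page size, and require the address to be aligned to that length.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096u   /* assumption for this sketch */

    static int watchpoint_check(uint64_t addr, uint64_t len)
    {
        if (len == 0 || len > TARGET_PAGE_SIZE ||
            (len & (len - 1)) != 0 ||        /* not a power of two */
            (addr & (len - 1)) != 0) {       /* not naturally aligned */
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               watchpoint_check(0x1000, 8),     /* ok */
               watchpoint_check(0x1001, 8),     /* misaligned */
               watchpoint_check(0x1000, 8192)); /* larger than a page */
        return 0;
    }
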
Max Filippov
488d65772c exec: fix check_watchpoint exiting cpu_loop
In case of BP_STOP_BEFORE_ACCESS watchpoint check_watchpoint intends to
signal EXCP_DEBUG exception on exit from cpu loop, but later overwrites
exception code by the cpu_resume_from_signal call.

Use cpu_loop_exit with BP_STOP_BEFORE_ACCESS watchpoints.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2012-02-20 20:07:11 +04:00
Max Filippov
6736415047 exec: add missing breaks to the watch_mem_write
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Meador Inge <meadori@codesourcery.com>
2012-02-20 20:07:02 +04:00
Peter Maydell
771124e1a6 exec.c: Clarify comment about tlb_flush() flush_global parameter
Clarify the comment about tlb_flush()'s flush_global parameter,
so it is clearer what it does and why it is OK that the implementation
currently ignores it.

Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-02-01 14:45:01 -06:00
Benjamin Herrenschmidt
82afa58641 virtio-pci: Fix endianness of virtio config
The virtio config area in PIO space is a bit special. The initial
header is little endian but the rest (device specific) is guest
native endian.

The PIO accessors for PCI on machines that don't have native IO ports
assume that all PIO is little endian, which works fine for everything
except the above.

A complicated way to fix it would be to split the BAR into two memory
regions with different endianess settings, but this isn't practical
to do, besides, the PIO code doesn't honor region endianness anyway
(I have a patch for that too but it isn't necessary at this stage).

So I decided to go for the quick fix instead which consists of
reverting the swap in virtio-pci in selected places, hoping that when
we eventually do a "v2" of the virtio protocols, we sort that out once
and for all using a fixed endian setting for everything.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
[agraf: keep virtio in libhw and determine endianness through a
        helper function in exec.c]
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
2012-01-21 05:17:01 +01:00
Aurelien Jarno
5c84bd904b tcg-arm: fix a typo in comments
ARM still doesn't support 16GB buffers in 32-bit mode; replace 16GB
with 16MB in the comment.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-01-13 10:36:59 +00:00
Avi Kivity
11c7ef0c73 Remove IO_MEM_SHIFT
We no longer use any of the lower bits of a ram_addr, so we might as well
use them for the io table index.  This increases the number of potential
I/O handlers by a factor of 8.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
75c578dcaa Drop IO_MEM_ROMD
Unlike ->readonly, ->readable is not inherited from aliases, so we can simply
query the memory region.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
b3b00c78d8 Remove IO_MEM_SUBPAGE
Replace with a MemoryRegion flag.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
a621f38de8 Direct dispatch through MemoryRegion
Now that all mmio goes through MemoryRegions, we can convert
io_mem_opaque to be a MemoryRegion pointer, and remove the thunks
that convert from old-style CPU{Read,Write}MemoryFunc to MemoryRegionOps.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
1ec9b909ff Convert io_mem_watch to be a MemoryRegion
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
de712f9469 Convert IO_MEM_SUBPAGE_RAM to be a MemoryRegion
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
70c68e44bc Convert the subpage wrapper to be a MemoryRegion
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
dd81124bf6 Switch cpu_register_physical_memory_log() to use MemoryRegions
Still internally using ram_addr.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
0e0df1e24d Convert IO_MEM_{RAM,ROM,UNASSIGNED,NOTDIRTY} to MemoryRegions
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions.  These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
d39e822265 Uninline get_page_addr_code()
Its use of IO_MEM_ROM and friends will later cause #include loops; and it
is too large to merit inlining.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
1d393fa2d1 Avoid range comparisons on io index types
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM).  Avoid these as they make moving to objects harder.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
2774c6d0ae Fix wrong region_offset when overlaying a page with another
cpu_register_physical_memory_log() does not update region_offset
if a page was previously registered for the same address.  This
could cause mmio accesses going to the wrong place, by using the
old region_offset.

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
acbbec5d43 memory: move mmio access to functions
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
f1f6e3b86e exec: make phys_page_find() return a temporary
Instead of returning a PhysPageDesc pointer, return a temporary.
This lets us move away from actually storing PhysPageDesc's, and
instead synthesising them when needed.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
be675c9720 memory: move endianness compensation to memory core
Instead of doing device endianness compensation in cpu_register_io_memory(),
do it in the memory core.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
8f77558f22 memory: obsolete cpu_physical_memory_[gs]et_dirty_tracking()
The getter is no longer used, so it is completely removed.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04 13:34:49 +02:00
Avi Kivity
7c63736603 Store MemoryRegion in RAMBlock
As a step in moving live migration from RAMBlocks to MemoryRegions,
store the MemoryRegion in a RAMBlock.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04 13:34:48 +02:00
Avi Kivity
c5705a7728 vmstate, memory: decouple vmstate from memory API
Currently creating a memory region automatically registers it for
live migration.  This differs from other state (which is enumerated
in a VMStateDescription structure) and ties the live migration code
into the memory core.

Decouple the two by introducing a separate API, vmstate_register_ram(),
for registering a RAM block for migration.  Currently the same
implementation is reused, but later it can be moved into a separate list,
and registrations can be moved to VMStateDescription blocks.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04 13:34:48 +02:00
Avi Kivity
586c6230c0 Remove cpu_get_physical_page_desc()
No longer used.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-03 19:19:28 +02:00
Avi Kivity
dcd97e33af memory: remove CPUPhysMemoryClient
No longer used.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-03 19:19:27 +02:00
Avi Kivity
7664e80c84 memory: add API for observing updates to the physical memory map
Add an API that allows a client to observe changes in the global
memory map:
 - region added (possibly with logging enabled)
 - region removed (possibly with logging enabled)
 - logging started on a region
 - logging stopped on a region
 - global logging started
 - global logging removed

This API will eventually replace cpu_register_physical_memory_client().

Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-20 14:14:07 +02:00
Avi Kivity
67d95c153b memory: move obsolete exec.c functions to a private header
This will help avoid accidental usage.

Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-19 17:28:54 +02:00
Avi Kivity
fce537d4a7 memory, xen: pass MemoryRegion to xen_ram_alloc()
Currently xen_ram_alloc() relies on ram_addr, which is going away.
Give it something else to use as a cookie.

Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-19 17:23:24 +02:00
Alex Rozenman
5ab97b7f81 phys_page_find_alloc: Use correct initial region_offset.
This fixes a common bug with the initial region_offset value.
Usually, the pages are re-assigned afterwards, so the bug
has a very small effect on regular QEMU use flows.

Signed-off-by: Alex Rozenman <Alex_Rozenman@mentor.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-12-15 10:22:40 -06:00
Andreas Färber
56384e8b1e exec.c: Fix subpage memory access to RAM MemoryRegion
Commit 95c318f5e1 (Fix segfault in mmio
subpage handling code.) prevented a segfault by making all subpage
registrations over an existing memory page perform an unassigned access.
Symptoms were writes not taking effect and reads returning zero.

Very small page sizes are not currently supported either,
so subpage memory areas cannot fully be avoided.

Therefore change the previous fix to use a new IO_MEM_SUBPAGE_RAM
instead of IO_MEM_UNASSIGNED. Suggested by Avi.

Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Avi Kivity <avi@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-12-15 09:27:23 -06:00
Dr. David Alan Gilbert
222f23f508 tcg/arm: remove fixed map code buffer restriction
On ARM, don't map the code buffer at a fixed location, and fix up the
call/goto tcg routines to let it do long jumps.

Mapping the code buffer at a fixed address could sometimes result in it being
mapped over the top of the heap with pretty random results.

Signed-off-by: Dr. David Alan Gilbert <david.gilbert@linaro.org>
Signed-off-by: Andrzej Zaborowski <andrew.zaborowski@intel.com>
2011-12-14 21:58:18 +01:00
Stefan Weil
daf767b16a w32: Disable buffering for log file
W32 does not support line buffering, but it supports unbuffered output.

Unbuffered output is better for writing to qemu.log than fully buffered
output because it also shows the latest log messages when an application
crash occurs.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-12-10 17:05:48 +00:00
Paolo Bonzini
b3c4bbe56d Make cpu_single_env thread-local
Make cpu_single_env thread-local. This fixes a regression
in handling of multi-threaded programs in linux-user mode
(bug 823902).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[Peter Maydell: rename tls_cpu_single_env to cpu_single_env]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-11-01 10:58:08 -05:00
Alex Williamson
3e837b2c05 Error check find_ram_offset
Spotted via code review, we initialize offset to 0 to avoid a
compiler warning, but in the unlikely case that offset is
never set to something else, we should abort instead of returning
a value that will almost certainly cause problems.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-11-01 10:58:08 -05:00
陳韋任
8f355d6775 exec.c: Remove useless comment
phys_ram_size was removed back in QEMU 0.12, so remove the now-useless
comment.

Signed-off-by: Chen Wen-Ren <chenwj@iis.sinica.edu.tw>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-10-26 13:38:36 +01:00
Paolo Bonzini
946fb27c1d qemu-timer: move icount to cpus.c
None of this is needed by tools, and most of it can even be made static
inside cpus.c.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2011-10-21 18:14:30 +02:00
Blue Swirl
3917149d96 Move GETPC from dyngen-exec.h to exec-all.h
GETPC() can be used even from outside of helper code. Move the macro to
a more accessible location. Avoid a compile warning from redefining it in exec.c.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-01 09:31:43 +00:00
Stefan Weil
8b3692d136 Remove qemu_host_page_bits
It was introduced with commit 54936004fd
as host_page_bits but never used.

Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-09-21 10:50:59 +01:00
Anthony Liguori
7267c0947d Use glib memory allocation and free functions
qemu_malloc/qemu_free no longer exist after this commit.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-20 23:01:08 -05:00
Paolo Bonzini
85d59fef9d fix QLIST usage for RAM list
Spotted while reviewing the migration thread patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-08-12 13:07:58 +01:00
Avi Kivity
309cb471c8 Integrate I/O memory regions into qemu
get_system_io() returns the root I/O memory region.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-08 10:15:53 -05:00
Tobias Nygren
9f4b09a4cd Use mmap to allocate execute memory
Use mmap to allocate executable memory on NetBSD as well.

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-08-07 09:57:05 +00:00
Jan Kiszka
d5ab9713d2 Avoid allocating TCG resources in non-TCG mode
Do not allocate TCG-only resources like the translation buffer when
running over KVM or XEN. Saves a "few" bytes in the qemu address space
and is also conceptually cleaner.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-05 10:57:36 -05:00
Avi Kivity
8417cebfda memory: use signed arithmetic
When trying to map an alias of a ram region, where the alias starts at
address A and we map it into address B, and A > B, we had an arithmetic
underflow.  Because we use unsigned arithmetic, the underflow converted
into a large number which failed addrrange_intersects() tests.

The concrete example which triggered this was cirrus vga mapping
the framebuffer at offsets 0xc0000-0xc7fff (relative to the start of
the framebuffer) into offset 0xa0000 (relative to the system address space
start).

With our favorite analogy of a windowing system, this is equivalent to
dragging a subwindow off the left edge of the screen, and failing to clip
it into its parent window which is on screen.

Fix by switching to signed arithmetic.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-05 10:57:36 -05:00
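
A standalone demonstration of the underflow described above (illustrative numbers taken from the commit message; this is not the memory-core code): clipping a range that starts "to the left" of its parent needs a signed start, otherwise the subtraction wraps to a huge unsigned value and every intersection test fails.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t alias_in_parent = 0xc0000;   /* A: offset inside the alias */
        uint64_t mapped_at       = 0xa0000;   /* B: where it lands */

        uint64_t u_start = mapped_at - alias_in_parent;   /* wraps around */
        int64_t  s_start = (int64_t)mapped_at - (int64_t)alias_in_parent;

        printf("unsigned start: 0x%" PRIx64 "  (bogus, fails intersection)\n",
               u_start);
        printf("signed start:   %" PRId64 "  (negative, clipped correctly)\n",
               s_start);
        return 0;
    }
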
Anthony Liguori
3046c98404 Merge remote-tracking branch 'agraf/xen-next' into staging 2011-07-29 09:42:12 -05:00
Avi Kivity
62152b8a01 exec.c: initialize memory map
Allocate the root memory region and initialize it.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-07-29 08:25:44 -05:00
Anthony PERARD
f15fbc4bd1 cpu-common: Have a ram_addr_t of uint64 with Xen.
In the Xen case, memory can be bigger than the host memory. That means a
32-bit host (and QEMU) should be able to handle a 64-bit RAM address.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-26 06:57:28 +02:00
Anthony PERARD
8ca5692df4 exec.c: Use ram_addr_t in cpu_physical_memory_rw(...).
As the variables pd and addr1 inside cpu_physical_memory_rw are meant to
hold a RAM address, they should be of type ram_addr_t instead of
unsigned long.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-26 06:43:10 +02:00
Blue Swirl
b14ef7c9ab Fix unassigned memory access handling
cea5f9a28f exposed bugs in unassigned memory
access handling. Fix them by always passing CPUState to the handlers.

Reported-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-07-20 21:28:08 +00:00
Stefano Stabellini
8ab934f93b qemu_ram_ptr_length: take ram_addr_t as arguments
qemu_ram_ptr_length should take ram_addr_t as argument rather than
target_phys_addr_t because it is doing comparisons with RAMBlock addresses.

cpu_physical_memory_map should create a ram_addr_t address to pass to
qemu_ram_ptr_length from PhysPageDesc phys_offset.

Remove code after abort() in qemu_ram_ptr_length.

Changes in v2:

- handle 0 size in qemu_ram_ptr_length;

- rename addr1 to raddr;

- initialize raddr to ULONG_MAX.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:25 +02:00
Jan Kiszka
868bb33faa xen: Fold CONFIG_XEN_MAPCACHE into CONFIG_XEN
Xen won't be enabled if there is no backend support available for the
host. And that also means the map cache will work. So drop the separate
config switch and move the required stubs over to xen-stub.c.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:24 +02:00
Jan Kiszka
e41d7c691a xen: Clean up map cache API naming
The map cache is a Xen thing, so its API should make this clear.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:24 +02:00
Peter Maydell
a884da8a06 exec.c: Fix calculation of code_gen_buffer_max_size
When calculating the point at which we should not try to put another
TB into the code gen buffer, we have to allow not just for OPC_MAX_SIZE
but OPC_BUF_SIZE. This is because the target translate.c will only
stop when an instruction has put it past the OPC_MAX_SIZE limit, so
we have to include the MAX_OP_PER_INSTR margin which that final insn
might have used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-07-12 20:29:08 +00:00
Alexander Graf
1e78bcc19c exec: add endian specific phys ld/st functions
Device code some times needs to access physical memory and does that
through the ld./st._phys functions. However, these are the exact same
functions that the CPU uses to access memory, which means they will
be endianness swapped depending on the target CPU.

However, devices don't know about the CPU's endianness, but instead
access memory directly using their own interface to the memory bus,
so they need some way to read data with their native endianness.

This patch adds _le and _be functions to ld./st._phys.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-07-12 20:00:24 +00:00
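
A standalone sketch of why fixed-endianness accessors help (the helper names below are made up, and the real functions operate on guest physical addresses rather than host buffers): assembling the value byte by byte gives a result independent of host and target endianness.

    #include <stdint.h>
    #include <stdio.h>

    /* Little-endian 32-bit load from a byte buffer. */
    static uint32_t ldl_le_from_bytes(const uint8_t *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    /* Big-endian 32-bit load from a byte buffer. */
    static uint32_t ldl_be_from_bytes(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8) | (uint32_t)p[3];
    }

    int main(void)
    {
        uint8_t buf[4] = { 0x78, 0x56, 0x34, 0x12 }; /* LE encoding of 0x12345678 */
        printf("le: 0x%08x  be: 0x%08x\n",
               ldl_le_from_bytes(buf), ldl_be_from_bytes(buf));
        return 0;
    }
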
Anthony Liguori
bb820c03e2 Merge remote-tracking branch 'stefanha/trivial-patches' into staging 2011-06-27 11:25:23 -05:00
Blue Swirl
2b41f10e18 Remove exec-all.h include directives
Most exec-all.h include directives are now useless, remove them.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-06-26 18:25:35 +00:00
Juan Quintela
4429ab4419 exec: last_first_tb was only used in !ONLY_USER case
Once there, use a better variable name.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-06-24 15:34:52 +01:00
Anthony Liguori
fdba9594df Merge remote-tracking branch 'mst/for_anthony' into staging
Conflicts:
	hw/usb-uhci.c
2011-06-22 07:11:09 -05:00
Stefano Stabellini
712c2b4149 xen: mapcache performance improvements
Use qemu_invalidate_entry in cpu_physical_memory_unmap.

Do not lock mapcache entries in qemu_get_ram_ptr if the address falls in
the ramblock with offset == 0. We don't need to do that because the
callers of qemu_get_ram_ptr either map an entire block (other than the
main ramblock), or map only up to the end of a page to implement a single
read or write in the main ramblock.
If we don't lock mapcache entries in qemu_get_ram_ptr, we no longer need to
call qemu_invalidate_entry in qemu_put_ram_ptr, because we are left with
only a few long-lived block mappings requested by devices.

Also move the call to qemu_ram_addr_from_mapcache at the beginning of
qemu_ram_addr_from_host.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:05 +02:00
Stefano Stabellini
38bee5dc94 exec.c: refactor cpu_physical_memory_map
Introduce qemu_ram_ptr_length that takes an address and a size as
parameters rather than just an address.

Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length only
once rather than calling qemu_get_ram_ptr one time per page.
This is not only more efficient but also tries to simplify the logic of
the function.
Currently we are relying on the fact that all the pages are mapped
contiguously in qemu's address space: we have a check to make sure that
the virtual address returned by qemu_get_ram_ptr from the second call on
is consecutive. Now we are making this more explicit replacing all the
calls to qemu_get_ram_ptr with a single call to qemu_ram_ptr_length
passing a size argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: agraf@suse.de
CC: anthony@codemonkey.ws
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:05 +02:00
Stefano Stabellini
6506e4f995 xen: remove xen_map_block and xen_unmap_block
Replace xen_map_block with qemu_map_cache with the appropriate locking
and size parameters.
Replace xen_unmap_block with qemu_invalidate_entry.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:05 +02:00
Stefano Stabellini
cd306087e5 xen: remove qemu_map_cache_unlock
There is no need for qemu_map_cache_unlock, just use
qemu_invalidate_entry instead.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-06-19 04:40:04 +02:00
Michael S. Tsirkin
befeac45d4 Merge remote-tracking branch 'origin/master' into pci
Conflicts:
	hw/virtio-pci.c
2011-06-15 18:27:15 +03:00
Alex Williamson
2173a75fb7 CPUPhysMemoryClient: batch addresses in catchup
When a phys memory client registers and we play catchup by walking
the page tables, we can make a huge improvement in the number of
times the set_memory callback is called by batching contiguous
pages together.  With a 4G guest, this reduces the number of callbacks
at registration from 1048866 to 296.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-06-12 10:33:27 +03:00
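
A rough sketch of the batching idea (the page_kind array and the set_memory callback below are hypothetical; the real code walks the l1/l2 tables instead): contiguous pages with the same backing are merged into a single range before invoking the callback.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    static void set_memory(uint64_t start, uint64_t size, int kind)
    {
        printf("set_memory %#llx +%#llx kind=%d\n",
               (unsigned long long)start, (unsigned long long)size, kind);
    }

    static void replay(const int *page_kind, unsigned npages)
    {
        unsigned i = 0;
        while (i < npages) {
            unsigned j = i + 1;
            while (j < npages && page_kind[j] == page_kind[i]) {
                j++;                       /* extend the contiguous run */
            }
            set_memory((uint64_t)i * PAGE_SIZE,
                       (uint64_t)(j - i) * PAGE_SIZE, page_kind[i]);
            i = j;
        }
    }

    int main(void)
    {
        int kinds[] = { 1, 1, 1, 2, 2, 1 };   /* 6 pages -> 3 callbacks */
        replay(kinds, 6);
        return 0;
    }
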
Edgar E. Iglesias
448293961f Merge remote branch 'rth/axp-next' into alpha-merge
* rth/axp-next: (26 commits)
  target-alpha: Implement TLB flush primitives.
  target-alpha: Use a fixed frequency for the RPCC in system mode.
  target-alpha: Trap for unassigned and unaligned addresses.
  target-alpha: Remap PIO space for 43-bit KSEG for EV6.
  target-alpha: Implement cpu_alpha_handle_mmu_fault for system mode.
  target-alpha: Implement more CALL_PAL values inline.
  target-alpha: Disable interrupts properly.
  target-alpha: All ISA checks to use TB->FLAGS.
  target-alpha: Swap shadow registers moving to/from PALmode.
  target-alpha: Implement do_interrupt for system mode.
  target-alpha: Add IPRs to be used by the emulation PALcode.
  target-alpha: Use kernel mmu_idx for pal_mode.
  target-alpha: Add various symbolic constants.
  target-alpha: Use do_restore_state for arithmetic exceptions.
  target-alpha: Tidy up arithmetic exceptions.
  target-alpha: Tidy exception constants.
  target-alpha: Enable the alpha-softmmu target.
  target-alpha: Rationalize internal processor registers.
  target-alpha: Merge HW_REI and HW_RET implementations.
  target-alpha: Cleanup MMU modes.
  ...
2011-06-10 22:21:14 +02:00
Alexandre Raymond
9bf0960a9a Fix compilation warning due to missing header for sigaction (followup)
This patch removes all references to signal.h when qemu-common.h is included
as they become redundant.

Signed-off-by: Alexandre Raymond <cerbere@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-06-08 09:04:29 +01:00
Alex Williamson
1f2e98b62d exec: Implement qemu_ram_free_from_ptr()
Required for regions mapped via qemu_ram_alloc_from_ptr().  VFIO
and ivshmem will make use of this to remove mappings when devices
are hot unplugged.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2011-06-03 22:59:15 +02:00
Richard Henderson
5b4504079a target-alpha: Trap for unassigned and unaligned addresses.
Signed-off-by: Richard Henderson <rth@twiddle.net>
2011-05-31 10:18:06 -07:00
Aurelien Jarno
6eba5c82cf Merge branch 'trivial-patches' of git://repo.or.cz/qemu/stefanha
* 'trivial-patches' of git://repo.or.cz/qemu/stefanha:
  Fix typos in comments (chek -> check)
  hw/sd.c: Don't complain about SDIO commands CMD52/CMD53
  hw/realview.c: Remove duplicate #include line
  piix_pci: fix piix3_set_irq_pic()
2011-05-23 22:36:17 +02:00
Stefan Weil
a57d23e4f7 Fix typos in comments (chek -> check)
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-05-22 22:31:45 +01:00
Alexander Graf
fb8b273579 s390x: complain when allocating ram fails
While trying out the > 64GB guest RAM patch, I hit some virtual address
limitations of my host system, which resulted in mmap failing. Unfortunately,
qemu didn't tell me about this failure, but just used the NULL pointer
happily, resulting in either segmentation faults or other fun errors.

To spare other users from tracing this down, let's print a nice message
instead so the user can figure out what's wrong from there.

Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-20 17:35:13 +02:00
Christian Borntraeger
ff83678aee s390x: change mapping base to allow guests > 2GB
the current s390x qemu memory layout is

0x1000000: guest start
0x80000000: qemu binary

which limits the amount of available memory to <2GB.
This patch moves the guest pages to 32GB to not collide with the binary
and to leave some space for the program break of qemu.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-20 17:35:13 +02:00
Anthony PERARD
050a0ddf39 Introduce qemu_put_ram_ptr
This function allows unlocking a ram_ptr given by qemu_get_ram_ptr. After
a call to qemu_put_ram_ptr, the pointer may be unmapped from QEMU when
used with Xen.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-08 10:10:01 +02:00
Jun Nakajima
432d268c05 xen: Introduce the Xen mapcache
On an IA32 or IA32 PAE host we currently can't create an HVM guest with
more than 2G of memory, because it is generally almost impossible for
QEMU to find a large enough contiguous virtual address space to map an
HVM guest's whole physical address space. This patch fixes the issue
using dynamic mapping based on small blocks of memory.

Each call to qemu_get_ram_ptr makes a call to qemu_map_cache with the
lock option, so mapcache will not unmap these ram_ptr.

Blocks that do not belong to RAM, but usually to a device ROM or to a
framebuffer, are handled in a separate function, so the whole RAMBlock
can be mapped.

Signed-off-by: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-05-08 10:10:01 +02:00
Michael S. Tsirkin
5300f1a548 Merge remote branch 'origin/master' into pci
Conflicts:
	exec.c
2011-05-05 16:39:47 +03:00
Alex Williamson
8d4c78e7c8 CPUPhysMemoryClient: Pass guest physical address not region offset
When we're trying to get a newly registered phys memory client updated
with the current page mappings, we end up passing the region offset
(a ram_addr_t) as the start address rather than the actual guest
physical memory address (target_phys_addr_t).  If your guest has less
than 3.5G of memory, these are coincidentally the same thing.  If
there's more, the region offset for the memory above 4G starts over
at 0, so the set_memory client will overwrite its lower memory entries.

Instead, keep track of the guest physical address as we're walking the
tables and pass that to the set_memory client.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-05-05 16:23:12 +03:00
Alex Williamson
c2f42bf003 CPUPhysMemoryClient: Fix typo in phys memory client registration
When we register a physical memory client, we try to walk the page
tables, calling the set_memory hook for every entry.  Effectively
playing catchup for the client for everything already registered.
With this typo, we only walk the 2nd entry of the l1 table,
typically missing all of the registered memory.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-05-05 16:21:46 +03:00
Jan Kiszka
ec6959d046 Redirect cpu_interrupt to callback handler
This allows to override the interrupt handling of QEMU in system mode.
KVM will make use of it to set a specialized handler.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-05-02 09:38:35 -03:00
Jan Kiszka
97ffbd8d9d Break up user and system cpu_interrupt implementations
Both have only two lines in common, and we will convert the system
service into a callback which is of no use for user mode operation.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
CC: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-05-02 09:38:35 -03:00
Stefan Weil
618ba8e6a1 Remove unused function parameter from cpu_restore_state
The previous patch removed the need for parameter puc.
It is now unused, so remove it.

Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
2011-04-20 10:37:03 +02:00
Stefan Weil
54f7b4a396 Replace cpu_physical_memory_rw where possible
Using cpu_physical_memory_read, cpu_physical_memory_write and ldub_phys
improves readability and allows removing some type casts.

lduw_phys and ldl_phys were not used because both require aligned
addresses. Therefore it is not possible to simply replace existing
calls by one of these functions.

Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2011-04-12 21:51:50 +02:00
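
A standalone sketch of the readability argument (fake_ram, phys_rw and the wrappers below are stand-ins, not the QEMU API): dedicated read/write helpers let call sites drop the is_write flag and the type casts.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint8_t fake_ram[64];

    /* Generic entry point, flag selects direction. */
    static void phys_rw(uint64_t addr, void *buf, size_t len, int is_write)
    {
        if (is_write) {
            memcpy(fake_ram + addr, buf, len);
        } else {
            memcpy(buf, fake_ram + addr, len);
        }
    }

    /* Thin wrappers: clearer call sites, no casts, no boolean flag. */
    static void phys_read(uint64_t addr, void *buf, size_t len)
    {
        phys_rw(addr, buf, len, 0);
    }

    static void phys_write(uint64_t addr, const void *buf, size_t len)
    {
        phys_rw(addr, (void *)buf, len, 1);
    }

    int main(void)
    {
        uint32_t v = 0xdeadbeef, back = 0;
        phys_write(8, &v, sizeof(v));
        phys_read(8, &back, sizeof(back));
        printf("0x%08x\n", back);
        return 0;
    }
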
Stefan Weil
71d2b725e1 exec: Remove a type cast which is no longer needed
All other type casts in calls of cpu_physical_memory_write are
used by hardware emulations and will be fixed by separate patches.

Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2011-04-12 21:51:50 +02:00
Edgar E. Iglesias
3b8e6a2db1 exec: Handle registrations of the entire address space
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2011-04-07 10:53:41 +02:00
Michael S. Tsirkin
0fd542fb7d cpu: add set_memory flag to request dirty logging
Pass the flag to all cpu notifiers, doing
nothing at this point. Will be used by
follow-up patches.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-04-06 22:28:40 +03:00
Jan Kiszka
dc7a09cfe4 Expose thread_id in info cpus
Based on a patch by Glauber Costa:

To allow management applications like libvirt to apply CPU affinities to
the VCPU threads, expose their ID via info cpus. This patch provides the
pre-existing and used interface from qemu-kvm.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-03-16 17:11:07 -03:00
Jan Kiszka
fd28aa1323 s390: Detect invalid invocations of qemu_ram_free/remap
This both detects invalid invocations of qemu_ram_free and
qemu_ram_remap when mem_path is non-NULL and fixes a build error on
s390 ("'area' may be used uninitialized in this function").

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
CC: Alexander Graf <agraf@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-03-15 14:36:25 -03:00
Huang Ying
cd19cfa236 Add qemu_ram_remap
qemu_ram_remap() unmaps the specified RAM pages, then re-maps these
pages again.  This is used by KVM HWPoison support to clear HWPoisoned
page tables across guest rebooting, so that a new page may be
allocated later to recover the memory error.

[ Jan: style fixlets, WIN32 fix ]

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-03-15 01:19:06 -03:00
Jan Kiszka
b7680cb607 Refactor thread retrieval and check
We have qemu_cpu_self and qemu_thread_self. The latter is retrieving the
current thread, the former is checking for equality (using CPUState). We
also have qemu_thread_equal which is only used like qemu_cpu_self.

This refactors the interfaces, creating qemu_cpu_is_self and
qemu_thread_is_self as well as qemu_thread_get_self.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-03-13 14:44:21 +00:00
Vincent Palatin
7d82af38b7 Fix performance regression in qemu_get_ram_ptr
When the commit f471a17e9d converted the
ram_blocks structure to QLIST, it also removed the conditional check before
switching the current block at the beginning of the list.

In the common use case where ram_blocks has a few blocks with only one
frequently accessed (the main RAM), this has a performance impact as it
performs the useless list operations on each call (which are on a really
hot path).

On my machine emulation (ARM on amd64), this patch reduces the
percentage of CPU time spent in qemu_get_ram_ptr from 6.3% to 2.1% in the
profiling of a full boot.

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-03-10 16:12:21 -06:00
Anthony PERARD
e5896b12e2 Introduce log_start/log_stop in CPUPhysMemoryClient
In order to use log_start/log_stop with Xen as well in the vga code,
this two operations have been put in CPUPhysMemoryClient.

The two new functions cpu_physical_log_start,cpu_physical_log_stop are
used in hw/vga.c and replace the kvm_log_start/stop. With this, vga does
no longer depends on kvm header.

[ Jan: rebasing and style fixlets ]

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-02-14 12:39:47 -02:00
Tristan Gingold
d1a1eb7472 Make tb_alloc static
This function is only used within exec.c, so no need to make it public.

Signed-off-by: Tristan Gingold <gingold@adacore.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2011-02-10 18:17:43 +01:00
Blue Swirl
4cd31ad264 tcg/sparc64: fix segfault
With current OpenBSD, code_gen_buffer was mapped 8GB away from the
text segment. Then any helpers were beyond the 2GB range of the call
instruction generated by TCG, so the calls would go nowhere,
leading to a segfault.

Fix by specifying an address for the code_gen_buffer,
hopefully free and nearby the helpers.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-01-16 08:32:27 +00:00
Brad
cbb608a5c8 Use mmap() within code_gen_alloc() for OpenBSD.
Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-12-21 19:44:54 +00:00
Alexander Graf
2507c12ab0 Add endianness as io mem parameter
As stated before, devices can be little, big or native endian. The
target endianness is not of their concern, so we need to push things
down a level.

This patch adds a parameter to cpu_register_io_memory that allows a
device to choose its endianness. For now, all devices simply choose
native endian, because that's the same behavior as before.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-12-11 15:24:25 +00:00
Alexander Graf
dd310534e3 exec: introduce endianness swapped mmio
The way we're currently modeling mmio is too simplified. We assume that
every device has the same endianness as the target CPU. In reality,
most devices are little endian (all PCI and ISA ones I'm aware of). Some
are big endian (special system devices) and a very little fraction is
target native endian (fw_cfg).

So instead of assuming every device to be native endianness, let's move
to a model where the device tells us which endianness it's in.

That way we can compile the devices only once and get rid of all the ugly
swapping code; the swap will be done by the underlying layer.

For the sake of readability, this patch only introduces the helper framework
but doesn't allow the registering code to set its endianness yet.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-12-11 15:24:25 +00:00
Stefan Hajnoczi
db1923de60 exec: Remove debugging fprintf() that slipped into qemu_ram_alloc_from_ptr()
Remove the debugging fprintf() that slipped in via the following commit:

    commit b2e0a138e7
    Author: Michael S. Tsirkin <mst@redhat.com>
    Date:   Mon Nov 22 19:52:34 2010 +0200

        migration: stable ram block ordering

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-12-03 11:50:20 -06:00
Michael S. Tsirkin
b2e0a138e7 migration: stable ram block ordering
This makes ram block ordering under migration stable, ordered by offset.
This is especially useful for migration to exec, for debugging.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Jason Wang <jasowang@redhat.com>
2010-12-02 21:13:39 +02:00
Stefan Weil
055403b2a7 exec: Use fprintf_function for dump_exec_info (format checking)
fprintf_function uses format checking with GCC_FMT_ATTR.

It is declared in qemu-common.h and used in cpu-all.h
(which is included from cpu.h), so qemu-common.h must
be included earlier. Some redundant include statements
for standard include files were removed.

Fix also two format errors (ptrdiff_t needs %td).

Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-10-30 08:01:59 +00:00
Marcelo Tosatti
e890261f67 Export qemu_ram_addr_from_host
To be used by next patches.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-10-20 16:15:04 -05:00
Stefan Weil
7fd3f49440 exec: Fix compilation error for debug code
is_softmmu was removed with commit
d4c430a80f,
so remove it now from debug code, too.

Fix also the format specifier for paddr
in the same line of code.

Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-10-03 06:41:09 +00:00
Andreas Färber
e78815a554 Introduce qemu_madvise()
vl.c has a Sun-specific hack to supply a prototype for madvise(),
but the call site has apparently moved to arch_init.c.

Haiku doesn't implement madvise() in favor of posix_madvise().
OpenBSD and Solaris 10 don't implement posix_madvise() but madvise().
MinGW implements neither.

Check for madvise() and posix_madvise() in configure and supply qemu_madvise()
as wrapper. Prefer madvise() over posix_madvise() due to flag availability.
Convert all callers to use qemu_madvise() and QEMU_MADV_*.

Note that on Solaris the warning is fixed by moving the madvise() prototype,
not by qemu_madvise() itself. It helps with porting though, and it simplifies
most call sites.

v7 -> v8:
* Some versions of MinGW have no sys/mman.h header. Reported by Blue Swirl.

v6 -> v7:
* Adopt madvise() rather than posix_madvise() semantics for returning errors.
* Use EINVAL in place of ENOTSUP.

v5 -> v6:
* Replace two leftover instances of POSIX_MADV_NORMAL with QEMU_MADV_INVALID.
  Spotted by Blue Swirl.

v4 -> v5:
* Introduce QEMU_MADV_INVALID, suggested by Alexander Graf.
  Note that this relies on -1 not being a valid advice value.

v3 -> v4:
* Eliminate #ifdefs at qemu_advise() call sites. Requested by Blue Swirl.
  This will currently break the check in kvm-all.c by calling madvise() with
  a supported flag, which will not fail. Ideas/patches welcome.

v2 -> v3:
* Reuse the *_MADV_* defines for QEMU_MADV_*. Suggested by Alexander Graf.
* Add configure check for madvise(), too.
  Add defines to Makefile, not QEMU_CFLAGS.
  Convert all callers, untested. Suggested by Blue Swirl.
* Keep Solaris' madvise() prototype around. Pointed out by Alexander Graf.
* Display configure check results.

v1 -> v2:
* Don't rely on posix_madvise() availability, add qemu_madvise().
  Suggested by Blue Swirl.

Signed-off-by: Andreas Färber <afaerber@opensolaris.org>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-09-25 11:26:05 +00:00
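
A rough standalone sketch of the wrapper idea (my_madvise is a made-up name, and the real configure checks and QEMU_MADV_* mapping are omitted): prefer madvise() when it is available, fall back to posix_madvise(), otherwise fail with EINVAL.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static int my_madvise(void *addr, size_t len, int advice)
    {
    #if defined(MADV_DONTNEED)
        return madvise(addr, len, advice);          /* preferred */
    #elif defined(POSIX_MADV_DONTNEED)
        return posix_madvise(addr, len, advice);    /* fallback */
    #else
        errno = EINVAL;                             /* neither available */
        return -1;
    #endif
    }

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            return 1;
        }
    #ifdef MADV_DONTNEED
        printf("madvise -> %d\n", my_madvise(p, page, MADV_DONTNEED));
    #endif
        munmap(p, page);
        return 0;
    }
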
Gleb Natapov
95c318f5e1 Fix segfault in mmio subpage handling code.
It is possible that subpage mmio is registered over an existing memory
page. When this happens, "memory" will hold a real memory address rather
than an index into the io_mem array, so the next access to the page will
generate a segfault. It is uncommon to have part of a page accessed as
memory and part as mmio, but qemu shouldn't crash even when the guest does
stupid things. So let's just pretend that the rest of the page is
unassigned if the guest configures part of the memory page as mmio.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-08-28 08:47:23 +00:00
Yoshiaki Tamura
6977dfe6af exec: remove code duplication in qemu_ram_alloc() and qemu_ram_alloc_from_ptr()
Since most of the code in qemu_ram_alloc() and
qemu_ram_alloc_from_ptr() is duplicated, let
qemu_ram_alloc_from_ptr() switch by checking void *host, and change
qemu_ram_alloc() into a wrapper.

Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-08-22 16:19:00 -05:00
Yoshiaki Tamura
9742bf26b1 exec: replace tabs by spaces.
Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-08-22 16:19:00 -05:00
Cam Macdonell
84b89d782f Add qemu_ram_alloc_from_ptr function
Provide a function to add an allocated region of memory to the qemu RAM.

This patch is copied from Marcelo's qemu_ram_map() in qemu-kvm and given the
clearer name qemu_ram_alloc_from_ptr().

Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-08-10 16:25:15 -05:00
Stefan Weil
24ab68ac72 Declare code_gen_ptr, code_gen_max_blocks 'static'
Both values are only used in exec.c, so there is no need
to make them globally available.

Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-07-22 05:52:10 +02:00
Blue Swirl
09d7ae9000 Fix warning about uninitialized variable
With gcc 4.2.1-sjlj (mingw32-2) I get this warning:
/src/qemu/exec.c: In function 'qemu_ram_alloc':
/src/qemu/exec.c:2777: warning: 'offset' may be used uninitialized in this function

Fix by initializing the variable.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-07-07 19:37:53 +00:00
Alex Williamson
fb787f81e7 ramblocks: No more being lazy about duplicate names
Now that we have a working qemu_ram_free() and the primary runtime
user of it has been updated, don't be lenient about duplicate id strings.
We also shouldn't need to create them on demand at the target.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:29 -05:00
Alex Williamson
04b1665372 qemu_ram_free: Implement it
Now that we can support a ram_addr_t space with holes, we can implement
qemu_ram_free().

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:28 -05:00
Alex Williamson
cc9e98cb8f ramblocks: Make use of DeviceState pointer and BusInfo.get_dev_path
With these two pieces in place, we can start naming ramblocks.  When
the device is present and it lives on a bus that provides a device
path, we concatenate the path and the provided name.  Otherwise we
just use name.  The resulting id string must be unique.  For now we
assume an allocation for the same name and size is a device that has
been removed and reinserted and return the same block.  This will go
away once qemu_ram_free() is implemented.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:28 -05:00
Alex Williamson
1724f04985 qemu_ram_alloc: Add DeviceState and name parameters
These will be used to generate unique id strings for ramblocks.  The name
field is required, the device pointer is optional as most callers don't
have a device.  When there's no device or the device isn't a child of
a bus implementing BusInfo.get_dev_path, the name should be unique for
the platform.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:28 -05:00
Alex Williamson
0be71e324f savevm: Add DeviceState param
When available, we'd like to be able to access the DeviceState
when registering a savevm.  For buses with a get_dev_path()
function, this will allow us to create more unique savevm
id strings.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:28 -05:00
Alex Williamson
d17b5288d9 Remove uses of ram.last_offset (aka last_ram_offset)
We currently need this either to allocate the next ram_addr_t for a
new block, or for the total memory to be migrated.  Both of these can be
calculated without it, removing the need to keep a contiguous address space.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:27 -05:00
Jun Koi
bf298f83c3 A bit optimization for tlb_set_page()
This patch avoids handling write watchpoints on read-only memory accesses.
It also breaks out of the watchpoint search loop once the setup for
handling the watchpoint later is done.

Signed-off-by: Jun Koi <junkoi2004@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-06-30 20:25:55 +02:00
Alex Williamson
f471a17e9d ram_blocks: Convert to a QLIST
This makes the RAM block list easier to manipulate.  Also incorporate
relevant variables into the RAMList struct.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Chris Wright <chrisw@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-06-14 11:12:53 -05:00
Richard Henderson
eba0b89379 tcg-s390: Allocate the code_gen_buffer near the main program.
This allows the use of direct calls to the helpers,
and a direct branch back to the epilogue.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-06-11 09:30:48 +02:00
Aurelien Jarno
239fda311a tcg: get rid of copy_size in TCGOpDef
copy_size is a left-over from the dyngen era, remove it.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-06-09 16:10:50 +02:00
Richard Henderson
9002ec794e tcg: Initialize the prologue after GUEST_BASE is fixed.
This will allow backends to make intelligent choices about how
to implement GUEST_BASE.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-05-21 18:41:21 +02:00
Marcelo Tosatti
618a568da4 Fix -mem-path with hugetlbfs
Fallback to qemu_vmalloc in case file_ram_alloc fails.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-05-11 14:02:21 -03:00
Richard Henderson
3cab721d0e Fill in unassigned mem read/write callbacks.
Implement the "functions may be omitted with NULL pointer"
interface mentioned in the function block comment by transforming
NULL entries in the read/write arrays into calls to the
unassigned_mem family of functions.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-05-07 16:58:38 +00:00
Michael S. Tsirkin
733f0b02c8 qemu: address todo comment in exec.c
exec.c has a comment 'XXX: optimize' for lduw_phys/stw_phys,
so let's do it, along the lines of stl_phys.

The reason to address 16 bit accesses specifically is that virtio relies
on these accesses being done atomically; using memset as we do now
breaks this assumption, which is reported to cause qemu with kvm
to read wrong index values under stress.

https://bugzilla.redhat.com/show_bug.cgi?id=525323

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-05-06 07:28:49 +02:00
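As a hedged illustration of why the commit above singles out 16-bit accesses: a generic byte-by-byte copy path can tear a 16-bit guest-visible index, while a single aligned 16-bit load or store is not torn on the host CPUs QEMU targets. The helpers below are assumptions for illustration, not the real lduw_phys/stw_phys.

    #include <stdint.h>
    #include <string.h>

    /* A generic copy path may move the two bytes separately (think of a
     * byte-wise loop), so a concurrent reader can observe a torn value. */
    static void store_u16_bytewise(uint8_t *dst, uint16_t val)
    {
        memcpy(dst, &val, sizeof(val));
    }

    /* One aligned 16-bit store: not torn on common host CPUs. */
    static void store_u16_whole(uint16_t *dst, uint16_t val)
    {
        *(volatile uint16_t *)dst = val;
    }

    static uint16_t load_u16_whole(const uint16_t *src)
    {
        return *(const volatile uint16_t *)src;
    }

    int main(void)
    {
        uint16_t avail_idx = 0;                        /* stands in for a virtio ring index */
        store_u16_bytewise((uint8_t *)&avail_idx, 7);  /* may not be atomic */
        store_u16_whole(&avail_idx, 42);               /* single 16-bit store */
        return load_u16_whole(&avail_idx) == 42 ? 0 : 1;
    }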
Richard Henderson
3e0650a9c9 Fix zero-length write(2).
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-05-06 06:45:12 +02:00
Paul Brook
2e9a5713f0 Remove PAGE_RESERVED
The usermode PAGE_RESERVED code is not required by the current mmap
implementation, and is already broken when guest_base != 0.
Unfortunately the bsd emulation still uses the old mmap implementation,
so we can't rip it out altogether.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-05-05 16:32:59 +01:00
Richard Henderson
f64052478e Remove IO_MEM_SUBWIDTH.
Greatly simplify the subpage implementation by not supporting
multiple devices at the same address at different widths.  We
don't need full copies of mem_read/mem_write/opaque for each
address, only a single index back into the main io_mem_* arrays.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-04-25 12:59:33 +00:00
Jun Koi
24f7fb19b3 Cleanup dead code
This patch removes some dead code in exec.c

Signed-off-by: Jun Koi <junkoi2004@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-04-11 20:15:01 +00:00
Aurelien Jarno
fd43690777 Revert "Avoid page_set_flags() assert in qemu-user host page protection code"
This reverts commit 01c0bef162.

(breaks build on 32-bit hosts)
2010-04-10 17:20:36 +02:00
Juergen Lock
01c0bef162 Avoid page_set_flags() assert in qemu-user host page protection code
V2 uses endaddr = end-of-guest-address-space if !h2g_valid(endaddr),
after I found out that this indeed works; it also disables the FreeBSD 6.x
/compat/linux/proc/self/maps fallback because it can return partial lines
if (at least I think that's the reason) the mappings change between
subsequent read() calls.

Signed-off-by: Juergen Lock <nox@jelal.kn-bremen.de>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-04-09 22:01:30 +02:00
Yoshiaki Tamura
f7c11b5350 Replace direct phys_ram_dirty access with wrapper functions.
Replaces direct phys_ram_dirty access with wrapper functions to prevent
direct access to the phys_ram_dirty bitmap.

Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: OHMURA Kei <ohmura.kei@lab.ntt.co.jp>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-04-08 11:11:21 +02:00
Paul Brook
355b194369 Split TLB addend and target_phys_addr_t
Historically the qemu tlb "addend" field was used for both RAM and IO accesses,
so needed to be able to hold both host addresses (unsigned long) and guest
physical addresses (target_phys_addr_t).  However since the introduction of
the iotlb field it has only been used for RAM accesses.

This means we can change the type of addend to unsigned long, and remove
associated hacks in the big-endian TCG backends.

We can also remove the host dependence from target_phys_addr_t.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-04-05 00:28:53 +01:00
Aurelien Jarno
ebf50fb3b9 tcg: align static_code_gen_buffer to CODE_GEN_ALIGN
On ia64, the default memory alignment is not enough for code
alignment. To fix that, force static_code_gen_buffer alignment
to CODE_GEN_ALIGN.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-04-01 21:51:59 +02:00
Aurelien Jarno
45d679d643 linux-user: fix page_unprotect when host page size > target page size
When the host page size is bigger than the target one, unprotecting a
page should:
- mark all the target pages corresponding to the host page as writable
- invalidate all TBs corresponding to the host page (and not the target
  page)

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-04-01 21:51:59 +02:00
Juergen Lock
f01576f185 Get bsd-user host page protection code working on FreeBSD hosts
Use kinfo_getvmmap(3) on FreeBSD >= 7.x and /compat/linux/proc on older
FreeBSD.  (kinfo_getvmmap is preferred since /compat/linux/proc is
usually only mounted on hosts also using the Linuxolator.)

This patch is a bit hacky because the includes needed for kinfo_getvmmap
conflict with other definitions in exec.c by default, so I had to `trick
around' a little, but I built the result in FreeBSD 6.4-stable and
7.2-stable tbs and on 8-stable on the host, so the hacks at least
should be stable.  (If this is a problem maybe we could also move the
kinfo_getvmmap invocations into a separate source file, but that would
be more work...)

Signed-off-by: Juergen Lock <nox@jelal.kn-bremen.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-03-30 17:45:10 +00:00
Blue Swirl
29e922b61f Compile qemu-timer only once
Arrange various declarations so that non-CPU code can also access
them, and adjust users.

Move CPU specific code to cpus.c.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-03-29 19:24:00 +00:00
Aurelien Jarno
91dbed4ba1 exec: remove dead code
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-03-28 18:47:25 +02:00
Michael Tokarev
6adc05492f be more specific in -mem-path error messages
Signed-Off-By: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-03-27 15:26:36 +01:00
Paul Brook
d4c430a80f Large page TLB flush
QEMU uses a fixed page size for the CPU TLB.  If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.

When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page.  However the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.

Implementing a full variable size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages.  If the guest invalidates this region then flush the
whole TLB.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-17 02:44:41 +00:00
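A toy sketch of the address/mask bookkeeping described in the commit above: remember one (address, mask) pair that covers every large page seen so far, and fall back to a full TLB flush when an invalidation address might fall inside that region. Field and function names are illustrative assumptions, not QEMU's actual ones.

    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 0x1000            /* assume 4 KiB base pages */

    static uint64_t large_page_addr = (uint64_t)-1;   /* no large page seen yet */
    static uint64_t large_page_mask = (uint64_t)-1;

    /* Called when a TLB entry is installed for a page of 'size' bytes. */
    static void note_large_page(uint64_t vaddr, uint64_t size)
    {
        uint64_t mask = ~(size - 1);

        if (size <= TARGET_PAGE_SIZE) {
            return;                            /* base-size page: nothing to track */
        }
        if (large_page_addr == (uint64_t)-1) { /* first large page seen */
            large_page_addr = vaddr & mask;
            large_page_mask = mask;
            return;
        }
        /* Widen the recorded region until it covers the old and the new page. */
        mask &= large_page_mask;
        while ((vaddr ^ large_page_addr) & mask) {
            mask <<= 1;
        }
        large_page_addr &= mask;
        large_page_mask = mask;
    }

    /* Invalidate-by-address: flush everything if we might hit a large page. */
    static void tlb_flush_page(uint64_t vaddr)
    {
        if ((vaddr & large_page_mask) == large_page_addr) {
            printf("address may lie in a large page: full TLB flush\n");
            large_page_addr = large_page_mask = (uint64_t)-1;
            return;
        }
        printf("single-entry invalidation for %#llx\n",
               (unsigned long long)(vaddr & ~(uint64_t)(TARGET_PAGE_SIZE - 1)));
    }

    int main(void)
    {
        note_large_page(0x40000000, 2ull << 20);   /* a 2 MiB guest mapping */
        tlb_flush_page(0x40001000);                /* falls inside it: full flush */
        tlb_flush_page(0x12345000);                /* ordinary page */
        return 0;
    }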
Paul Brook
7296abaccc Fix pagetable code
The multi-level pagetable code fails to iterate over all entries because
of the L2_BITS vs. L2_SIZE thinko.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-14 14:58:46 +00:00
Blue Swirl
338e9e6ce5 Fix more wrong usermode virtual address types
Fixes warning:
  CC    sparc-bsd-user/exec.o
/src/qemu/exec.c: In function `page_check_range':
/src/qemu/exec.c:2375: warning: comparison is always true due to limited range of data type

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-03-13 09:48:08 +00:00
Paul Brook
b480d9b74d Fix usermode virtual address type
Usermode virtual addresses are abi_ulong, not target_ulong.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-12 23:25:52 +00:00
Paul Brook
b3755a915e Disable physical memory handling in userspace emulation.
Code to handle physical memory access is not meaningful in usermode emulation,
so disable it.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-12 18:34:25 +00:00
Paul Brook
41c1b1c9eb Add tb_page_addr_t
The page tracking code in exec.c is used by both userspace and system
emulation.  Userspace emulation uses it to track virtual pages, and
system emulation to track ram pages.  Introduce a new type to hold this
kind of address.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-12 17:23:50 +00:00
Richard Henderson
376a790970 Fix last page errors in page_check_range and page_set_flags.
The addr < end comparison prevents iterating over the last
page in the guest address space; an iteration based on
length avoids this problem.

At the same time, assert that the given address is in the
guest address space.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2010-03-12 16:31:32 +00:00
Richard Henderson
5cd2c5b6ad Implement multi-level page tables.
Define L1_MAP_ADDR_SPACE_BITS to be either the virtual address size
(in user mode) or physical address size (in system mode), and use
that to size l1_map.  This rewrites page_find_alloc, page_flush_tb,
and walk_memory_regions.

Use TARGET_PHYS_ADDR_SPACE_BITS for the physical memory map based
off of l1_phys_map.  This rewrites page_phys_find_alloc and
phys_page_for_each.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2010-03-12 16:31:09 +00:00
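A small self-contained sketch of the multi-level lookup idea from the commit above, using two fixed levels for brevity. The constants, the PageDesc contents, and the page_find_alloc() signature here are assumptions chosen for illustration, not the real l1_map code.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_BITS 12                       /* 4 KiB pages */
    #define L2_BITS   10                       /* 1024 entries per leaf table */
    #define L2_SIZE   (1 << L2_BITS)
    #define L1_BITS   10
    #define L1_SIZE   (1 << L1_BITS)

    typedef struct PageDesc { int flags; } PageDesc;

    static PageDesc *l1_map[L1_SIZE];          /* top level; leaves allocated lazily */

    /* Return the descriptor for 'addr', allocating the leaf table on demand
     * when 'alloc' is non-zero (the same shape as a page_find_alloc helper). */
    static PageDesc *page_find_alloc(uint64_t addr, int alloc)
    {
        uint64_t index = addr >> PAGE_BITS;
        PageDesc **lp = &l1_map[(index >> L2_BITS) & (L1_SIZE - 1)];

        if (!*lp) {
            if (!alloc) {
                return NULL;
            }
            *lp = calloc(L2_SIZE, sizeof(PageDesc));
        }
        return *lp ? &(*lp)[index & (L2_SIZE - 1)] : NULL;
    }

    int main(void)
    {
        PageDesc *p = page_find_alloc(0x7f001234, 1);
        if (p) {
            p->flags = 1;
            printf("second lookup sees flags=%d\n",
                   page_find_alloc(0x7f001234, 0)->flags);
        }
        printf("untouched page gives %p\n", (void *)page_find_alloc(0x10000000, 0));
        return 0;
    }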
Richard Henderson
5270589032 Move TARGET_PHYS_ADDR_SPACE_BITS to target-*/cpu.h.
Removes a set of ifdefs from exec.c.

Introduce TARGET_VIRT_ADDR_SPACE_BITS for all targets other
than Alpha.  This will be used for page_find_alloc, which is
supposed to be using virtual addresses in the first place.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2010-03-12 16:28:24 +00:00
Jan Kiszka
ea375f9ab8 KVM: Rework VCPU state writeback API
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks:

- cpu_synchronize_all_states in qemu_savevm_state_complete
  (initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
  (writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
  (writeback after system reset)

These writeback points + the existing one of VCPU exec after
cpu_synchronize_state map on three levels of writeback:

- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE   (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE    (on init or vmload, all VCPUs stopped as well)

This level is passed to the arch-specific VCPU state writing function
that will decide which concrete substates need to be written. That way,
no writer of load, save or reset functions that interact with in-kernel
KVM states will ever have to worry about synchronization again. That
also means that a lot of reasons for races, segfaults and deadlocks are
eliminated.

cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing of VCPU states that are
also tracked by in-kernel KVM subsystems.

Consequently, this patch removes many cpu_synchronize_state calls that
are now redundant, just like remaining explicit register syncs.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-03-04 00:29:28 -03:00
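A hedged sketch of the writeback-level idea described above: one generic call passes a level down, and the arch-specific writer decides which substates to push to the kernel. The enum values echo the three levels listed in the commit message; the function is a stand-in, not the actual kvm_arch_put_registers() interface.

    #include <stdio.h>

    typedef enum {
        PUT_RUNTIME_STATE,   /* during runtime, other VCPUs keep running */
        PUT_RESET_STATE,     /* synchronous system reset, all VCPUs stopped */
        PUT_FULL_STATE       /* init or vmload, all VCPUs stopped as well */
    } PutLevel;

    /* Stand-in for an arch-specific register writer: the level decides how
     * much state gets pushed into the kernel. */
    static void arch_put_registers(PutLevel level)
    {
        printf("writing runtime registers\n");
        if (level >= PUT_RESET_STATE) {
            printf("also writing reset-only state\n");
        }
        if (level >= PUT_FULL_STATE) {
            printf("also writing rarely-changing full state\n");
        }
    }

    int main(void)
    {
        arch_put_registers(PUT_RUNTIME_STATE);   /* e.g. after a normal VCPU exit */
        arch_put_registers(PUT_FULL_STATE);      /* e.g. after vmload */
        return 0;
    }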
Marcelo Tosatti
c902760fb2 Add option to use file backed guest memory
Port qemu-kvm's -mem-path and -mem-prealloc options. These are useful
for backing guest memory with huge pages via hugetlbfs.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
CC: john cooper <john.cooper@redhat.com>
2010-03-04 00:28:47 -03:00
Paul Brook
c527ee8fc8 Avoid tlb_set_page in userspace emulation
tlb_set_page isn't meaningful for userspace emulation, so remove it.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-01 04:40:29 +00:00
Paul Brook
c04b2b7899 Move subpage definitions
Move definitions for subpage handling into !CONFIG_USER_ONLY code.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-01 04:40:22 +00:00
Paul Brook
a68fe89caf Remove bogus cpu_physical_memory_rw
Userspace doesn't have physical memory, so cpu_physical_memory_rw
makes no sense.  This is only used to implement cpu_memory_rw_debug, so
just implement that directly instead.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-01 00:08:59 +00:00
Paul Brook
6d9a13042d Remove l1_phys_map from userspace emulation
Userspace emulation doesn't have a physical address space, so
l1_phys_map makes no sense. This code is never actually used, so don't
try and build it.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-02-28 23:55:53 +00:00
Paul Brook
94df27fd2f Fix userspace breakpoint invalidation
Remove bogus virtual->physical address translation in
breakpoint_invalidate for userspace emulation.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-02-28 23:47:45 +00:00
Michael S. Tsirkin
7b8f3b7834 kvm: move kvm to use memory notifiers
remove direct kvm calls from exec.c, and make
kvm use the memory notifiers framework instead.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-02-09 16:56:13 -06:00
Michael S. Tsirkin
f6f3fbcab0 qemu: memory notifiers
This adds notifiers for phys memory changes: a set of callbacks that
vhost can register to update the kernel accordingly.  Down the road, kvm
code can be switched to use these as well, instead of calling kvm code
directly from exec.c as is done now.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-02-09 16:56:13 -06:00
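A minimal sketch of the notifier pattern the commit above introduces: clients register a struct of callbacks and the memory core walks the list on every region change, instead of calling kvm or vhost directly. The struct layout and names below are assumptions for illustration, not the actual QEMU client API.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct PhysMemClient PhysMemClient;
    struct PhysMemClient {
        void (*region_add)(PhysMemClient *c, uint64_t start, uint64_t size);
        void (*region_del)(PhysMemClient *c, uint64_t start, uint64_t size);
        PhysMemClient *next;
    };

    static PhysMemClient *clients;             /* linked list of registered clients */

    static void register_client(PhysMemClient *c)
    {
        c->next = clients;
        clients = c;
    }

    /* The memory core calls this instead of calling kvm/vhost directly. */
    static void notify_region_add(uint64_t start, uint64_t size)
    {
        for (PhysMemClient *c = clients; c; c = c->next) {
            c->region_add(c, start, size);
        }
    }

    /* Example client standing in for vhost or kvm. */
    static void my_region_add(PhysMemClient *c, uint64_t start, uint64_t size)
    {
        (void)c;
        printf("client saw new region: start=%#llx size=%#llx\n",
               (unsigned long long)start, (unsigned long long)size);
    }

    int main(void)
    {
        static PhysMemClient vhost_like = { .region_add = my_region_add };
        register_client(&vhost_like);
        notify_region_add(0xc0000000, 64u << 20);   /* pretend 64 MiB appeared */
        return 0;
    }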
Anthony Liguori
8217d94586 Merge remote branch 'qemu-kvm/uq/master' into staging-tmp 2010-02-08 10:06:54 -06:00
Riku Voipio
fd052bf63a linux-user: remove signal handler before calling abort()
Qemu may hang in host_signal_handler after qemu has done
seppuku with cpu_abort(). But at this stage we are not really
interested in the target process coredump anymore, so unregister
host_signal_handler to die gracefully.

Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
2010-02-06 17:19:43 +01:00
Riku Voipio
cab1b4bdc7 fix locking error with current_tb
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
2010-02-06 17:19:43 +01:00
Paolo Bonzini
a484156557 exec.c: dead assignments
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-02-05 18:13:10 +00:00
Sheng Yang
62a2744ca0 kvm: Flush coalesced MMIO buffer periodically
The default behaviour of coalesced MMIO is to cache writes in a buffer until:
1. The buffer is full.
2. Or there is an exit to QEmu for other reasons.

But this can result in very late writes when:
1. Each individual MMIO write is small.
2. The interval between writes is big.
3. There is no frequent need for input or for accessing other devices.

This issue was observed in an experimental embedded system. The test image
simply prints "test" every second. The output in QEmu meets expectations,
but the output in KVM is delayed for seconds.

Per Avi's suggestion, I hooked flushing of the coalesced MMIO buffer into the
VGA update handler. This way, we don't need an explicit vcpu exit to QEmu to
handle this issue.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2010-02-03 19:47:33 -02:00
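A simplified sketch of the flush-on-periodic-handler approach described above: coalesced writes queue up in a ring filled by a producer, and a periodic display update drains them so they are not delayed indefinitely. The ring layout and names are assumptions, not the KVM coalesced-MMIO ABI.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t addr;
        uint32_t data;
    } CoalescedWrite;

    #define RING_ENTRIES 64
    static CoalescedWrite ring[RING_ENTRIES];
    static unsigned ring_first, ring_last;      /* consumer / producer indices */

    static void mmio_write(uint64_t addr, uint32_t data)
    {
        /* the device model's handling of one MMIO write would go here */
        printf("mmio write %#x -> %#llx\n", data, (unsigned long long)addr);
    }

    /* Drain everything the producer has queued since the last flush. */
    static void flush_coalesced_mmio_buffer(void)
    {
        while (ring_first != ring_last) {
            CoalescedWrite *w = &ring[ring_first % RING_ENTRIES];
            mmio_write(w->addr, w->data);
            ring_first++;
        }
    }

    /* Periodic handler, analogous to hooking the flush into the VGA update. */
    static void display_update(void)
    {
        flush_coalesced_mmio_buffer();
        /* ... repaint the screen ... */
    }

    int main(void)
    {
        ring[ring_last++ % RING_ENTRIES] = (CoalescedWrite){ 0xa0000, 0x11 };
        ring[ring_last++ % RING_ENTRIES] = (CoalescedWrite){ 0xa0004, 0x22 };
        display_update();                       /* queued writes reach the device now */
        return 0;
    }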
Herve Poussineau
f8a83245d9 win32: pair qemu_memalign() with qemu_vfree()
Win32 suffers from a very big memory leak when dealing with SCSI devices.
Each read/write request allocates memory with qemu_memalign (ie
VirtualAlloc) but frees it with qemu_free (ie free).
Pair all qemu_memalign() calls with qemu_vfree() to prevent such leaks.

Signed-off-by: Herve Poussineau <hpoussin@reactos.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-01-26 16:41:06 -06:00
Riku Voipio
f76cfe56d9 linux-user: enable tb unlinking when compiled with NPTL
Fixes receiving signals when guest code is being executed in a tight
loop. For an example, try interrupting the following code with ctrl-c.

http://nchipin.kos.to/test-loop.c

The tight loop is of course brainless, but it is also exactly how the waitpid* testcases
are implemented.

Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2009-12-19 19:45:26 +01:00
Riku Voipio
c6703b4761 Give an error when running out of iomem areas.
The limit of iomem areas is quite low. Without the
debug print, it is quite hard to figure out why more
devices are not getting registered.

Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2009-12-18 23:23:56 +01:00
Juha Riihimäki
1e8b27ca85 Fix win32 log file location
/tmp doesn't exist under win32. Ease the pain of win32 development slightly.

From: Juha Riihimäki <juha.riihimaki@nokia.com>
Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2009-12-18 23:23:56 +01:00
Alexander Graf
6b02494d64 Allocate physical memory in low virtual address space
KVM on S390x requires the virtual address space of the guest's RAM to be
within the first 256GB.

The general direction I'd like to see KVM on S390 move is that this requirement
is loosened, but for now that's what we're stuck with.

So let's just hack up qemu_ram_alloc until KVM behaves nicely :-).

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2009-12-05 17:36:02 +01:00
Aurelien Jarno
a167ba5085 Add support for GNU/kFreeBSD
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2009-11-29 18:00:41 +01:00
Izik Eidus
ccb167e9d7 ksm support
Call MADV_MERGEABLE on guest memory allocations.  MADV_MERGEABLE will be
available starting in Linux 2.6.32.  This system call registers a region of
virtual address space with Linux as a candidate for transparent memory
sharing.

Patchworks-ID: 35447
Signed-off-by: Izik Eidus <ieidus@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-15 09:32:04 -05:00
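A tiny standalone example of the call the commit above describes, assuming a Linux host where KSM is available (the MADV_MERGEABLE guard keeps it building elsewhere). This only illustrates the madvise() usage; it is not QEMU's actual allocation path.

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 16 * 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
    #ifdef MADV_MERGEABLE
        if (madvise(p, len, MADV_MERGEABLE) != 0) {
            perror("madvise(MADV_MERGEABLE)");  /* non-fatal: kernel may lack KSM */
        }
    #endif
        printf("region at %p registered as a KSM merge candidate\n", p);
        return 0;
    }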
Michael S. Tsirkin
8f2498f9f6 fix comment on cpu_register_physical_memory_offset
We don't require full pages in cpu_register_physical_memory,
except for RAM.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-05 09:32:51 -05:00
Juan Quintela
d4bfa4d7c6 vmstate: remove const from pre_save() functions
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-05 09:32:37 -05:00
Juan Quintela
e59fb3741b vmstate: add version_id argument to post_load
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-05 09:32:36 -05:00
Anthony Liguori
c227f0995e Revert "Get rid of _t suffix"
In the very least, a change like this requires discussion on the list.

The naming convention is goofy and it causes a massive merge problem.  Something
like this _must_ be presented on the list first so people can provide input
and cope with it.

This reverts commit 99a0949b72.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-01 16:12:16 -05:00
malc
99a0949b72 Get rid of _t suffix
Some not-so-obvious bits, slirp and Xen, were left alone for the time
being.

Signed-off-by: malc <av1474@comtv.ru>
2009-10-01 22:45:02 +04:00
Blue Swirl
72cf2d4f0e Fix sys-queue.h conflict for good
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc923584,
f40d753718,
96555a96d7 and
3990d09adf but the fixes were fragile.

Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-09-12 07:36:22 +00:00
Juan Quintela
e7f4eff7fb vmstate: port cpu_common
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-09-11 11:10:05 -05:00
Edgar E. Iglesias
faed1c2a23 microblaze: Trap on bus accesses to unmapped areas.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
2009-09-03 13:25:09 +02:00
Avi Kivity
4c0960c0c4 kvm: Simplify cpu_synchronize_state()
cpu_synchronize_state() is a little unreadable since the 'modified'
argument isn't self-explanatory.  Simplify it by making it always
synchronize the kernel state into qemu, and automatically flush the
registers back to the kernel if they've been synchronized on this
exit.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-08-27 20:35:30 -05:00
Blue Swirl
d60efc6b0d Make CPURead/WriteFunc structure 'const'
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-08-25 18:29:31 +00:00
Anthony Liguori
4a1418e07b Unbreak large mem support by removing kqemu
kqemu introduces a number of restrictions on the i386 target.  The worst is that
it prevents large memory from working in the default build.

Furthermore, kqemu is fundamentally flawed in a number of ways.  It relies on
the TSC as a time source which will not be reliable on a multiple processor
system in userspace.  Since most modern processors are multicore, this severely
limits the utility of kqemu.

kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel.  If someone can
implement work arounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.

N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.

Paul, please Ack or Nack this patch.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-08-24 08:02:55 -05:00
Blue Swirl
660f11be54 Fix Sparse warnings: "Using plain integer as NULL pointer"
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-07-31 21:16:51 +00:00
Juan Quintela
2f7bb8780a rename USE_NPTL to CONFIG_USE_NPTL
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-27 14:10:55 -05:00
Filip Navara
bf65f53fba Remove setvbuf(<handle>, NULL, _IOLBF, 0) calls for Win32
On Win32 the setvbuf function requires the last parameter to be a size between 2 and INT_MAX bytes, so the calls always failed. Since the whole point of the calls is to set line-buffered mode for the file handle and that's not supported on Win32 anyway, conditionally remove them.

Signed-off-by: Filip Navara <filip.navara@gmail.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-27 14:09:15 -05:00
Blue Swirl
0bf9e31af1 Fix most warnings (errors with -Werror) when debugging is enabled
I used the following command to enable debugging:
perl -p -i -e 's/^\/\/#define DEBUG/#define DEBUG/g' * */* */*/*

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-07-20 17:19:25 +00:00
Igor Kovalenko
0873898472 tlb flush cleanup
Use the static empty variable s_cputlb_empty_entry to clear entries,
and also reset the addend member when clearing entries.
This helps when running with valgrind/memcheck.

Signed-off-by: igor.v.kovalenko@gmail.com

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-07-16 17:28:50 -05:00
Blue Swirl
8167ee8839 Update to a hopefully more future proof FSF address
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-07-16 20:47:01 +00:00
Isaku Yamahata
34d5e948e8 cpu_unregister_map_client: fix memory leak.
fix memory leak in cpu_unregister_map_client() and cpu_notify_map_clients().

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-29 14:18:06 -05:00
Stefan Weil
f8e2af11d9 Win32: Reduce section alignment for Windows.
Maximum alignment for Win32 is 16, so don't try
to set it to 32. Otherwise the compiler complains:

exec.c:102: warning: alignment of 'code_gen_prologue'
is greater than maximum object file alignment.  Using 16

Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-22 10:15:31 -05:00
Isaku Yamahata
cfde4bd931 exec.c: remove unnecessary #if NB_MMU_MODES
remove unnecessary #if NB_MMU_MODES by using a loop.

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-16 15:52:38 -05:00
Glauber Costa
950f147249 provide cpu_index to env mapping
There are some people interested in, given a cpu number,
picking its CPUState. KVM is an example, although not yet in tree.
This patch provides a way of doing that.

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-16 15:36:47 -05:00
Avi Kivity
e9179ce1a0 Rearrange io_mem_init()
Move io_mem_init() downwards to avoid a forward declaration.  No code change.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-16 15:18:38 -05:00
Avi Kivity
1eed09cb4a Remove io_index argument from cpu_register_io_memory()
The parameter is always zero except when registering the three internal
io regions (ROM, unassigned, notdirty).  Remove the parameter to reduce
the API's power, thus facilitating future change.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-16 15:18:37 -05:00
Mika Westerberg
edf8e2af14 linux-user: implemented ELF coredump support for ARM target
When the target process is killed with a signal (one that
should dump core), a coredump file is created.  This file is
similar to the coredump generated by Linux (there are a few exceptions
though).

Riku Voipio: added support for rlimit

Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
Signed-off-by: Riku Voipio <riku.voipio@iki.fi>
2009-06-16 16:56:28 +03:00
Nathan Froyd
1e9fa73016 fix gdbstub support for multiple threads in usermode, v3
When debugging multi-threaded programs, QEMU's gdb stub would report the
correct number of threads (the qfThreadInfo and qsThreadInfo packets).
However, the stub was unable to actually switch between threads (the T
packet), since it would report every thread except the first as being
dead.  Furthermore, the stub relied upon cpu_index as a reliable means
of assigning IDs to the threads.  This was a bad idea; if you have this
sequence of events:

initial thread created
new thread #1
new thread #2
thread #1 exits
new thread #3

thread #3 will have the same cpu_index as thread #1, which would confuse
GDB.  (This problem is partly due to the remote protocol not having a
good way to send thread creation/destruction events.)

We fix this by using the host thread ID for the identifier passed to GDB
when debugging a multi-threaded userspace program.  The thread ID might
wrap, but the same sort of problems with wrapping thread IDs would come
up with debugging programs natively, so this doesn't represent a
problem.

Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
2009-06-04 10:04:49 +01:00
Jan Kiszka
b0a46a333a kvm: Add missing bits to support live migration
This patch adds the missing hooks to allow live migration in KVM mode.
It adds proper synchronization before/after saving/restoring the VCPU
states (note: PPC is untested), hooks into
cpu_physical_memory_set_dirty_tracking() to enable dirty memory logging
at the KVM level, and synchronizes that dirty log into QEMU's view before
running ram_live_save().

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:33 -05:00
Jan Kiszka
151f7749f2 kvm: Rework dirty bitmap synchronization
Extend kvm_physical_sync_dirty_bitmap() so that it can sync across
multiple slots. Useful for updating the whole dirty log during
migration. Moreover, properly pass errors down the whole call chain.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-22 10:50:33 -05:00
Stuart Brady
ccbb4d44fc Fix typos in comments in exec.c
This patch fixes several typos in comments in exec.c:

            longet -> longer
       recommanded -> recommended
        ajustments -> adjustments
   inconsistancies -> inconsistencies
           phsical -> physical
       positionned -> positioned
       succesfully -> successfully
      regon_offset -> region_offset

and also:

      start_region -> start_addr

Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
2009-05-03 21:58:28 +03:00
Jan Kiszka
6f0437e8de kvm: Avoid COW if KVM MMU is asynchronous
Avi Kivity wrote:
> Suggest wrapping in a function and hiding it deep inside kvm-all.c.
>

Done in v2:

---------->

If the KVM MMU is asynchronous (kernel does not support MMU_NOTIFIER),
we have to avoid COW for the guest memory. Otherwise we risk serious
breakage when guest pages change their physical locations due to COW
after fork. Seen when forking smbd during runtime via -smb.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-05-01 09:44:11 -05:00
Paul Brook
0b4e6e3e78 Remove cpu_get_io_memory_{read,write}.
Signed-off-by: Paul Brook <paul@codesourcery.com>
2009-04-30 18:39:07 +01:00
aliguori
8edac960a7 qemu: introduce qemu_cpu_kick (Marcelo Tosatti)
To notify cpu of pending interrupt.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7243 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-24 18:03:45 +00:00
aliguori
268a362c63 added -numa cmdline parameter parser (Andre Przywara)
adds a -numa command line parameter and sets a QEMU global array with
the memory sizes. The CPU-to-node assignment is written into the
CPUState. If no specific values for memory and CPUs are given,
all resources will be split equally across all nodes.
This code currently supports only up to 64 virtual CPUs.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7210 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-21 22:30:27 +00:00
blueswir1
640f42e4e9 kqemu: merge CONFIG_KQEMU and USE_KQEMU
Basically a recursive ":%s/USE_KQEMU/CONFIG_KQEMU/g".

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7189 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-19 10:18:01 +00:00
pbrook
94a6b54fd6 Implement dynamic guest ram allocation.
Signed-off-by: Paul Brook <paul@codesourcery.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7088 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-11 17:15:54 +00:00
pbrook
5579c7f37e Remove code phys_ram_base uses.
Signed-off-by: Paul Brook <paul@codesourcery.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7085 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-11 14:47:08 +00:00
pbrook
dc828ca1b5 Cleanup SPARC/TCX framebuffer allocation.
Signed-off-by: Paul Brook <paul@codesourcery.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7059 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-09 22:21:07 +00:00
aurel32
e37e6ee6e1 Allow 5 mmu indexes.
This is necessary for alpha because it has 4 protection levels and pal mode.

Signed-off-by: Tristan Gingold <gingold@adacore.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7028 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-07 21:47:27 +00:00
blueswir1
b9e82a5946 Fix some win32 compile warnings
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6984 c046a42c-6fe2-441c-8c8c-71466251a162
2009-04-05 18:03:31 +00:00
aliguori
5e2972fdab ROM write access for debugging (Jan Kiszka)
Enhance cpu_memory_rw_debug so that it can write even to ROM regions.
This allows modifying ROM via gdb (I see no point in denying this to the
user), and it will enable us to drop kvm_patch_opcode_byte().

Credits go to Avi for suggesting this.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6905 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-28 17:51:36 +00:00
blueswir1
d78f399542 Delete some unused macros detected with -Wp,-Wunused-macros use
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6856 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-16 16:33:01 +00:00
aliguori
e22a25c936 Guest debugging support for KVM (Jan Kiszka)
This is a backport of the guest debugging support for the KVM
accelerator that is now part of the KVM tree. It implements the reworked
KVM kernel API for guest debugging (KVM_CAP_SET_GUEST_DEBUG) which is
not yet part of any mainline kernel but will probably be 2.6.30 stuff.
So far supported is x86, but PPC is expected to catch up soon.

Core features are:
 - unlimited soft-breakpoints via code patching
 - hardware-assisted x86 breakpoints and watchpoints

Changes in this version:
 - use generic hook cpu_synchronize_state to transfer registers between
   user space and kvm
 - push kvm_sw_breakpoints into KVMState

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6825 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-12 20:12:48 +00:00
aurel32
3098dba01c Use a dedicated function to request exit from execution loop
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6762 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 21:28:24 +00:00
aurel32
e47ce3f244 Clear CPU_INTERRUPT_EXIT on VM load
CPU_INTERRUPT_EXIT is not set anymore in env->interrupt_request since
revision 6728. Make sure the bit is cleared on VM load.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6756 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:57:31 +00:00
blueswir1
c5e97233e8 Support for DragonFly BSD (Hasso Tepper)
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6746 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 20:06:23 +00:00
blueswir1
511d2b140f Sparse fixes: NULL use, header order, ANSI prototypes, static
Fix Sparse warnings:
 * use NULL instead of plain 0
 * rearrange header include order to avoid redefining types accidentally
 * ANSIfy SLIRP
 * avoid "restrict" keyword
 * add static



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6736 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 15:32:56 +00:00
pbrook
c276471991 The _exit syscall is used for both thread termination in NPTL applications,
and process termination in legacy applications.  Try to guess which we want
based on the presence of multiple threads.

Also implement locking when modifying the CPU list.


Signed-off-by: Paul Brook <paul@codesourcery.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6735 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-07 15:24:59 +00:00
aurel32
be214e6c05 Fix race condition on access to env->interrupt_request
env->interrupt_request is accessed at the bit level from both main code
and the signal handler, making a race condition possible even on CISC CPUs.
This causes freeze of QEMU under high load when running the dyntick
clock.

The patch below moves the bit corresponding to CPU_INTERRUPT_EXIT into a
separate variable, declared as volatile sig_atomic_t, so it should
work even on RISC CPUs.

We may want to move the cpu_interrupt(env, CPU_INTERRUPT_EXIT) case in
its own function and get rid of CPU_INTERRUPT_EXIT. That can be done
later, I wanted to keep the patch short for easier review.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6728 c046a42c-6fe2-441c-8c8c-71466251a162
2009-03-06 21:48:00 +00:00
pbrook
67c4d23c4f Fix unassigned region offsets.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6639 c046a42c-6fe2-441c-8c8c-71466251a162
2009-02-23 13:16:07 +00:00
aurel32
6c2934db94 Fix cpu_physical_memory_rw() for 64-bit I/O accesses
KVM uses cpu_physical_memory_rw() to access the I/O devices. When a
read or write with a length of 8 bytes is requested, it is split into two
4-byte accesses.

This has been broken in revision 5849. After this revision, only the
first 4 bytes are actually read/written to the device, as the target
address is changed, so on the next iteration of the loop the next 4
bytes are actually read/written elsewhere (in the RAM for the graphic
card).

This patch fixes screen corruption (and most probably data corruption)
with FreeBSD/amd64. Bug #2556746 in KVM bugzilla.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6628 c046a42c-6fe2-441c-8c8c-71466251a162
2009-02-18 21:37:17 +00:00
aliguori
8871565764 qemu: add cpu_unregister_io_memory and make io mem table index dynamic (Marcelo Tosatti)
So drivers can clear their mem io table entries on exit back to unassigned
state.

Also make the io mem index allocation dynamic.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6601 c046a42c-6fe2-441c-8c8c-71466251a162
2009-02-11 15:20:58 +00:00
aliguori
1eec614b36 toplevel: remove error handling from qemu_malloc() callers (Avi Kivity)
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6531 c046a42c-6fe2-441c-8c8c-71466251a162
2009-02-05 22:06:18 +00:00
aliguori
eca1bdf415 Log reset events (Jan Kiszka)
Original idea&code by Kevin Wolf, split-up in two patches and added more
archs.

This patch introduces a flag to log CPU resets. Useful for tracing
unexpected resets (such as those triggered by x86 triple faults).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6452 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-26 19:54:31 +00:00
aliguori
ba223c29da Add map client retry notification (Avi Kivity)
The target memory mapping API may fail if the bounce buffer resources
are exhausted.  Add a notification mechanism to allow clients to retry
the mapping operation when resources become available again.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6395 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-22 16:59:16 +00:00
aliguori
6d16c2f88f Add target memory mapping API (Avi Kivity)
Devices accessing large amounts of memory (as with DMA) will wish to obtain
a pointer to guest memory rather than access it indirectly via
cpu_physical_memory_rw().  Add a new API to convert target addresses to
host pointers.

In case the target address does not correspond to RAM, a bounce buffer is
allocated.  To prevent the guest from causing the host to allocate unbounded
amounts of bounce buffer, this memory is limited (currently to one page).

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6394 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-22 16:59:11 +00:00
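A hedged sketch of the map/unmap pattern described above: hand the caller a direct pointer when the target address is RAM, otherwise fall back to a single bounded bounce buffer and make the caller retry if it is busy. The names, the one-page limit, and the flat RAM layout are assumptions for illustration, not the exact QEMU API.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BOUNCE_SIZE 4096

    static uint8_t guest_ram[1 << 20];          /* pretend guest RAM starts at 0 */
    static uint8_t bounce[BOUNCE_SIZE];
    static int bounce_in_use;

    /* Return a host pointer for [addr, addr + *len), shrinking *len if needed. */
    static void *phys_memory_map(uint64_t addr, uint64_t *len, int is_write)
    {
        (void)is_write;
        if (addr + *len <= sizeof(guest_ram)) {
            return guest_ram + addr;            /* fast path: direct RAM pointer */
        }
        if (bounce_in_use) {
            return NULL;                        /* caller must retry later */
        }
        bounce_in_use = 1;
        if (*len > BOUNCE_SIZE) {
            *len = BOUNCE_SIZE;                 /* bounce buffer is deliberately small */
        }
        /* for reads, a real implementation would fill 'bounce' from the device */
        return bounce;
    }

    static void phys_memory_unmap(void *p, uint64_t len, int is_write)
    {
        (void)len;
        if (p == bounce) {
            /* for writes, a real implementation would push 'bounce' to the device */
            (void)is_write;
            bounce_in_use = 0;
        }
    }

    int main(void)
    {
        uint64_t len = 512;
        void *p = phys_memory_map(0x1000, &len, 1);
        memset(p, 0xab, len);                   /* DMA-style direct access */
        phys_memory_unmap(p, len, 1);
        printf("first mapped byte is now %#x\n", guest_ram[0x1000]);
        return 0;
    }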
aliguori
31b1a7b4f5 global s/fflush(logfile)/qemu_log_flush()/ (Eduardo Habkost)
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6339 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-15 22:35:09 +00:00
aliguori
93fcfe39a0 Convert references to logfile/loglevel to use qemu_log*() macros
This is a large patch that changes all occurrences of logfile/loglevel
global variables to use the new qemu_log*() macros.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6338 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-15 22:34:14 +00:00
aliguori
5a38f08190 Adopt cpu_copy to new breakpoint API (Jan Kiszka)
Latest changes to the cpu_breakpoint/watchpoint API broke cpu_copy. This
patch fixes it by cloning the breakpoint and watchpoint lists
appropriately.

Thanks to Lionel Landwerlin for pointing out.

Signed-off-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6321 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-15 20:16:51 +00:00
aurel32
fad6cb1a56 Update FSF address in GPL/LGPL boilerplate
The attached patch updates the FSF address in the GPL/LGPL boilerplate
in most GPL/LGPLed files, and also in COPYING.LIB.

Signed-off-by: Stuart Brady <stuart.brady@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6162 c046a42c-6fe2-441c-8c8c-71466251a162
2009-01-04 22:05:52 +00:00
edgar_igl
0a6f8a6dd2 CRIS: Remove CRIS specific do_unassigned_access.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6140 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-29 14:39:57 +00:00
aliguori
f65ed4c152 KVM: Coalesced MMIO support
MMIO exits are more expensive in KVM or Xen than in QEMU because they 
involve, at least, privilege transitions.  However, MMIO write 
operations can be effectively batched if those writes do not have side 
effects.

Good examples of this include VGA pixel operations when in a planar 
mode.  As it turns out, we can get a nice boost in other areas too.  
Laurent mentioned a 9.7% performance boost in iperf with the coalesced 
MMIO changes for the e1000 when he originally posted this work for KVM.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5961 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-09 20:09:57 +00:00
aurel32
fb1c2cd7d9 linux-user: Fix h2g usage in page_find_alloc
Paul's comment on my first approach to fix the h2g usage in
page_find_alloc finally opened my eyes to what the code is actually
supposed to do:

With the help of h2g_valid we can now cleanly check whether a freshly allocated
page (for host usage) is guest-reachable and, in case it is, mark it
reserved in the guest's address range.

Signed-off-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5957 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-08 18:12:26 +00:00
pbrook
0e8f096751 Cosmetic cleanups to previous patch.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5852 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-02 09:02:15 +00:00
pbrook
8da3ff1809 Change MMIO callbacks to use offsets, not absolute addresses.
Signed-off-by: Paul Brook <paul@codesourcery.com>


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5849 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-01 18:59:50 +00:00
balrog
63d412465b Fix the comment added in r5844.
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5846 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-01 02:19:41 +00:00
balrog
1cb0661e00 arm: Reserve code buffer in memory range reachable for pc-relative branch.
Unfortunately this range is so narrow that I'm not sure if it makes more
sense to always use a memory-load-to-pc kind of branch instead.


git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5844 c046a42c-6fe2-441c-8c8c-71466251a162
2008-12-01 02:10:17 +00:00
aliguori
c0ce998e94 Use sys-queue.h for break/watchpoint management (Jan Kiszka)
This switches cpu_break/watchpoint_* to TAILQ wrappers, simplifying the
code and also fixing a use after release issue in
cpu_break/watchpoint_remove_all.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5799 c046a42c-6fe2-441c-8c8c-71466251a162
2008-11-25 22:13:57 +00:00
aliguori
2bec46dc97 vga optimization (Glauber Costa)
Hypervisors like KVM perform badly while doing mmio in
a loop, because each access generates an exit.
This is the case with VGA, which results in very bad
performance.

In this patch, we map the linear frame buffer as RAM,
make sure it has dirty region tracking enabled, and then
just let the region be written.

Cleanups suggestions by:
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5793 c046a42c-6fe2-441c-8c8c-71466251a162
2008-11-24 20:21:41 +00:00
aliguori
426cd5d6d2 Fix Windows build
ENOBUFS is not defined on Win32.  Use ENOMEM instead which is more portable.

This was reported by Hervé Poussineau.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5749 c046a42c-6fe2-441c-8c8c-71466251a162
2008-11-18 21:52:54 +00:00
aliguori
2dc9f4117c Introduce BP_CPU as a breakpoint type (Jan Kiszka)
Add another breakpoint/watchpoint type besides BP_GDB: BP_CPU. This type is
intended for hardware-assisted break/watchpoint emulations like the x86
architecture requires.

To keep the highest priority for BP_GDB breakpoints, this type is
always inserted at the head of break/watchpoint lists, thus is found
first when looking up the origin of a debug interruption.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5746 c046a42c-6fe2-441c-8c8c-71466251a162
2008-11-18 20:56:59 +00:00
aliguori
6e140f28c6 Introduce BP_WATCHPOINT_HIT flag (Jan Kiszka)
When one watchpoint is hit, others might have triggered as well. To
support users of the watchpoint API which need to detect such cases,
the BP_WATCHPOINT_HIT flag is introduced and maintained.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>



git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5744 c046a42c-6fe2-441c-8c8c-71466251a162
2008-11-18 20:37:55 +00:00