Commit Graph

19 Commits

Author SHA1 Message Date
Huang Ying
cd19cfa236 Add qemu_ram_remap
qemu_ram_remap() unmaps the specified RAM pages, then re-maps these
pages again.  This is used by KVM HWPoison support to clear HWPoisoned
page tables across guest rebooting, so that a new page may be
allocated later to recover the memory error.

[ Jan: style fixlets, WIN32 fix ]

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-03-15 01:19:06 -03:00
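
The unmap-and-remap idea behind qemu_ram_remap() can be pictured with a plain
POSIX sketch (not QEMU's actual implementation, which walks its internal RAM
block list and re-creates whatever kind of mapping the block originally had):
throw the old mapping away and put a fresh anonymous mapping at the same
address, so existing pointers into the region stay valid.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Drop the old page(s) at addr and map fresh zeroed memory in place.
     * MAP_FIXED reuses the same virtual address, so pointers into the
     * region stay valid but now see newly allocated pages.
     * (Error checking omitted for brevity.) */
    static void *remap_fixed(void *addr, size_t length)
    {
        munmap(addr, length);
        return mmap(addr, length, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    }

    int main(void)
    {
        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(p, 0xff, len);            /* pretend this page went bad  */
        p = remap_fixed(p, len);         /* same address, fresh backing */
        printf("byte after remap: %d\n", p[0]);   /* prints 0 */
        return 0;
    }
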
Anthony PERARD
e5896b12e2 Introduce log_start/log_stop in CPUPhysMemoryClient
In order to use log_start/log_stop with Xen as well in the vga code,
these two operations have been put in CPUPhysMemoryClient.

The two new functions, cpu_physical_log_start and cpu_physical_log_stop, are
used in hw/vga.c and replace kvm_log_start/stop. With this, vga no longer
depends on the kvm header.

[ Jan: rebasing and style fixlets ]

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-02-14 12:39:47 -02:00
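
The shape of that change can be sketched as follows. The struct and helpers
below are simplified stand-ins, with an invented field layout and a plain
array instead of QEMU's real client list; only the two new callbacks are
shown.

    #include <stdint.h>

    typedef uint64_t target_phys_addr_t;   /* stand-in typedefs */
    typedef uint64_t ram_addr_t;

    /* Simplified client: each registered listener can be told when dirty
     * logging starts or stops for a physical address range. */
    typedef struct CPUPhysMemoryClient {
        int (*log_start)(struct CPUPhysMemoryClient *client,
                         target_phys_addr_t phys_addr, ram_addr_t size);
        int (*log_stop)(struct CPUPhysMemoryClient *client,
                        target_phys_addr_t phys_addr, ram_addr_t size);
    } CPUPhysMemoryClient;

    static CPUPhysMemoryClient *clients[8];
    static int nb_clients;

    /* hw/vga.c calls these instead of kvm_log_start/kvm_log_stop, so the
     * accelerator-specific code sits behind the client list. */
    int cpu_physical_log_start(target_phys_addr_t start_addr, ram_addr_t size)
    {
        for (int i = 0; i < nb_clients; i++) {
            if (clients[i]->log_start) {
                clients[i]->log_start(clients[i], start_addr, size);
            }
        }
        return 0;
    }

    int cpu_physical_log_stop(target_phys_addr_t start_addr, ram_addr_t size)
    {
        for (int i = 0; i < nb_clients; i++) {
            if (clients[i]->log_stop) {
                clients[i]->log_stop(clients[i], start_addr, size);
            }
        }
        return 0;
    }
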
Alexander Graf
dd310534e3 exec: introduce endianness swapped mmio
The way we're currently modeling mmio is too simplified. We assume that
every device has the same endianness as the target CPU. In reality,
most devices are little endian (all PCI and ISA ones I'm aware of). Some
are big endian (special system devices) and a very small fraction is
target native endian (fw_cfg).

So instead of assuming every device to have native endianness, let's move
to a model where the device tells us which endianness it's in.

That way we can compile the devices only once and get rid of all the ugly
swapping in the devices; the swap will be done by the underlying layer.

For the sake of readability, this patch only introduces the helper framework
but doesn't allow the registering code to set its endianness yet.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-12-11 15:24:25 +00:00
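
The underlying trick is a byte-swapping shim wrapped around the device's own
read/write callbacks whenever its declared endianness differs from the
target's. A standalone sketch with made-up names, showing only a 32-bit read:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t (*mmio_read_fn)(void *opaque, uint64_t addr);

    static uint32_t bswap32(uint32_t v)
    {
        return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
               ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
    }

    /* Wrapper state: the device's own callback plus its opaque pointer. */
    typedef struct SwapEndianReader {
        mmio_read_fn read;
        void *opaque;
    } SwapEndianReader;

    /* Shim installed by the registration layer when the device's declared
     * endianness differs from the target CPU's: the device keeps returning
     * values in its own byte order and the shim fixes them up. */
    static uint32_t swapendian_read32(SwapEndianReader *s, uint64_t addr)
    {
        return bswap32(s->read(s->opaque, addr));
    }

    /* Example device register that always reads 0x12345678. */
    static uint32_t demo_read32(void *opaque, uint64_t addr)
    {
        (void)opaque; (void)addr;
        return 0x12345678u;
    }

    int main(void)
    {
        SwapEndianReader s = { demo_read32, NULL };
        printf("raw: 0x%08x swapped: 0x%08x\n",
               demo_read32(NULL, 0), swapendian_read32(&s, 0));
        return 0;
    }
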
Michael S. Tsirkin
b2e0a138e7 migration: stable ram block ordering
This makes ram block ordering under migration stable, ordered by offset.
This is especially useful for migration to exec, for debugging.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Jason Wang <jasowang@redhat.com>
2010-12-02 21:13:39 +02:00
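
The "ordered by offset" part is just a deterministic sort of the block
descriptors, so both ends of a migration walk them in the same order. A toy
sketch with a hypothetical Block struct and invented offsets:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Block {
        const char *idstr;
        uint64_t offset;
    } Block;

    /* Order blocks by offset so iteration is stable and reproducible. */
    static int cmp_offset(const void *a, const void *b)
    {
        const Block *x = a, *y = b;
        return (x->offset > y->offset) - (x->offset < y->offset);
    }

    int main(void)
    {
        Block blocks[] = {
            { "vga.vram", 0x8000000 },
            { "pc.ram",   0x0 },
            { "pc.bios",  0x4000000 },
        };
        qsort(blocks, 3, sizeof(blocks[0]), cmp_offset);
        for (int i = 0; i < 3; i++) {
            printf("%-10s at 0x%" PRIx64 "\n", blocks[i].idstr,
                   blocks[i].offset);
        }
        return 0;
    }
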
Marcelo Tosatti
e890261f67 Export qemu_ram_addr_from_host
To be used by next patches.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2010-10-20 16:15:04 -05:00
Cam Macdonell
84b89d782f Add qemu_ram_alloc_from_ptr function
Provide a function to add an allocated region of memory to the qemu RAM.

This patch is copied from Marcelo's qemu_ram_map() in qemu-kvm and given the
clearer name qemu_ram_alloc_from_ptr().

Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-08-10 16:25:15 -05:00
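
Conceptually the only difference from a normal RAM allocation is "record the
caller's pointer instead of allocating one". A toy sketch with a hypothetical
block table (real QEMU keeps a list of RAMBlock structures and chooses
offsets differently):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint64_t ram_addr_t;

    /* Toy RAM block table. */
    typedef struct RamRegion {
        void *host;          /* host virtual address backing the guest RAM */
        ram_addr_t offset;   /* position in the guest RAM address space    */
        ram_addr_t length;
    } RamRegion;

    static RamRegion regions[16];
    static int nb_regions;
    static ram_addr_t next_offset;

    /* The _from_ptr idea: the caller already owns the memory (for example a
     * shared-memory segment); record its pointer instead of allocating. */
    static ram_addr_t ram_alloc_from_ptr(void *host, ram_addr_t size)
    {
        RamRegion *r = &regions[nb_regions++];
        r->host = host;
        r->offset = next_offset;
        r->length = size;
        next_offset += size;
        return r->offset;
    }

    int main(void)
    {
        void *shared = malloc(1 << 20);        /* stands in for e.g. shm */
        ram_addr_t off = ram_alloc_from_ptr(shared, 1 << 20);
        printf("registered 1 MiB of caller-provided memory at offset %llu\n",
               (unsigned long long)off);
        free(shared);
        return 0;
    }
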
Alex Williamson
1724f04985 qemu_ram_alloc: Add DeviceState and name parameters
These will be used to generate unique id strings for ramblocks.  The name
field is required, the device pointer is optional as most callers don't
have a device.  When there's no device or the device isn't a child of
a bus implementing BusInfo.get_dev_path, the name should be unique for
the platform.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-07-06 10:36:28 -05:00
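
The resulting id string is essentially the optional device path, a slash, and
the name. A hedged sketch of that composition, with a stand-in DeviceState
and a fake PCI path in place of BusInfo.get_dev_path:

    #include <stdio.h>

    /* Stand-in for DeviceState: only the part the example needs. */
    typedef struct DeviceState {
        /* Returns a bus-unique path such as "0000:00:02.0", or NULL if the
         * parent bus has no get_dev_path implementation. */
        const char *(*get_dev_path)(struct DeviceState *dev);
    } DeviceState;

    /* Build the ramblock id: "<dev_path>/<name>" when a path is available,
     * otherwise just "<name>", which then must be unique per platform. */
    static void make_ramblock_id(char *buf, size_t len,
                                 DeviceState *dev, const char *name)
    {
        const char *path = (dev && dev->get_dev_path)
                               ? dev->get_dev_path(dev) : NULL;
        if (path) {
            snprintf(buf, len, "%s/%s", path, name);
        } else {
            snprintf(buf, len, "%s", name);
        }
    }

    static const char *pci_path(DeviceState *dev)
    {
        (void)dev;
        return "0000:00:02.0";
    }

    int main(void)
    {
        char id[64];
        DeviceState vga = { pci_path };
        make_ramblock_id(id, sizeof(id), &vga, "vga.vram");
        printf("%s\n", id);                     /* 0000:00:02.0/vga.vram */
        make_ramblock_id(id, sizeof(id), NULL, "pc.ram");
        printf("%s\n", id);                     /* pc.ram */
        return 0;
    }
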
Richard Henderson
f64052478e Remove IO_MEM_SUBWIDTH.
Greatly simplify the subpage implementation by not supporting
multiple devices at the same address at different widths.  We
don't need full copies of mem_read/mem_write/opaque for each
address, only a single index back into the main io_mem_* arrays.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-04-25 12:59:33 +00:00
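
The simplification can be pictured as replacing three parallel per-address
tables (read callbacks, write callbacks, opaque pointers) with one small
table of indices into the global io_mem_* arrays. A schematic sketch with
invented sizes and granularity:

    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096
    #define SUBSECTION       64                 /* illustrative granularity */
    #define NB_ENTRIES       (TARGET_PAGE_SIZE / SUBSECTION)

    typedef uint32_t (*io_read_fn)(void *opaque, uint64_t addr);

    /* Global io memory tables, indexed by a small io region index. */
    static io_read_fn io_mem_read[16];
    static void      *io_mem_opaque[16];

    /* Each sub-page slot stores just one index back into io_mem_*,
     * instead of carrying its own copies of read/write/opaque. */
    typedef struct SubPage {
        uint16_t ioidx[NB_ENTRIES];
    } SubPage;

    static uint32_t subpage_read(SubPage *sp, uint64_t addr)
    {
        unsigned slot = (unsigned)(addr & (TARGET_PAGE_SIZE - 1)) / SUBSECTION;
        unsigned idx  = sp->ioidx[slot];
        return io_mem_read[idx](io_mem_opaque[idx], addr);
    }

    static uint32_t demo_read(void *opaque, uint64_t addr)
    {
        (void)addr;
        return *(uint32_t *)opaque;             /* a fixed register value */
    }

    int main(void)
    {
        static uint32_t reg = 0xcafe;
        io_mem_read[1] = demo_read;             /* pretend index 1 is a device */
        io_mem_opaque[1] = &reg;

        SubPage sp = { { 0 } };
        sp.ioidx[2] = 1;                        /* bytes 128..191 -> device 1 */
        printf("0x%x\n", subpage_read(&sp, 130));   /* prints 0xcafe */
        return 0;
    }
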
Paolo Bonzini
37b76cfd93 move targphys.h and hw/poison.h inclusion to cpu-common.h
With more files from outside the hw/ directory being placed into
libhw, avoid the need to include hw/hw.h for the sake of target_phys_addr_t.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-04-09 18:55:55 +02:00
Aurelien Jarno
477ba62001 tcg: initial ia64 support
A few words about design choices:
* On IA64, instructions should be grouped by bundle, and dependencies
  between instructions declared. A first version of this code tried to
  schedule instructions automatically, but was very complex and too
  invasive for the current common TCG code (ops not ending at
  instruction boundaries, code retranslation breaking already generated
  code, etc.)  It was also not very efficient, as dependency information
  between TCG ops is not available.
  Instead, the current implementation does not try to fill the bundles
  by scheduling instructions, but fills them by providing ops that are
  not available as a single ia64 instruction and by offering 22-bit
  constant loading for most of the instructions. With both options the
  bundles are filled to approximately the same level.

* Up to 128 registers can be assigned to a function on IA64, but TCG
  limits this number to 64, which is actually more than enough. The
  register assignment is the following:
  - r0: used to map a constant argument with value 0
  - r1: global pointer
  - r2, r3: internal use
  - r4 to r6: not used to avoid saving them
  - r7: env structure
  - r8 to r11: free for TCG (call clobbered)
  - r12: stack pointer
  - r13: thread pointer
  - r14 to r31: free for TCG (call clobbered)
  - r32: reserved (return address)
  - r33: reserved (PFS)
  - r34 to r63: free for TCG

* The IA64 architecture has only 64-bit registers and no 32-bit
  instructions (the only exception being cmp4). Therefore 64-bit
  registers and instructions are used for 32-bit ops. The adopted
  strategy is the same as in the ABI, that is, the higher 32 bits are
  undefined. Most ops (and, or, add, shl, etc.) can directly use
  the 64-bit registers, while some others have to sign-extend (sar,
  div, etc.) or zero-extend (shr, divu, etc.) the register first.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2010-04-01 21:51:59 +02:00
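
The last bullet (32-bit ops on 64-bit registers with undefined upper halves)
is easy to demonstrate in plain C: and/or/add come out right regardless of
the upper bits, while shr needs a zero-extension and sar a sign-extension
first. A small self-contained illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Model a 64-bit host register whose upper 32 bits hold garbage. */
    static uint64_t dirty(uint32_t v) { return 0x1234567800000000ull | v; }

    int main(void)
    {
        uint32_t a = 0x80000000u;
        uint64_t r = dirty(a);

        /* add/and/or: the low 32 bits come out right despite the garbage. */
        printf("add low32: 0x%08x\n", (uint32_t)(r + 1));   /* 0x80000001 */

        /* 32-bit shr on a 64-bit register: zero-extend first, otherwise
         * garbage bits shift down into the result. */
        printf("shr wrong: 0x%08x\n",
               (uint32_t)(r >> 4));                         /* 0x88000000 */
        printf("shr right: 0x%08x\n",
               (uint32_t)((uint64_t)(uint32_t)r >> 4));     /* 0x08000000 */

        /* 32-bit sar: sign-extend from bit 31 first (right shift of a
         * negative value is arithmetic on common compilers). */
        printf("sar right: 0x%08x\n",
               (uint32_t)((int64_t)(int32_t)r >> 4));       /* 0xf8000000 */
        return 0;
    }
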
Blue Swirl
6842a08ee0 Compile pci only once
Move coalesced_mmio declarations to a more accessible location.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2010-03-21 19:47:13 +00:00
Paul Brook
b3755a915e Disable physical memory handling in userspace emulation.
Code to handle physical memory access is not meaningful in usermode emulation,
so disable it.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2010-03-12 18:34:25 +00:00
Michael S. Tsirkin
f6f3fbcab0 qemu: memory notifiers
This adds notifiers for phys memory changes: a set of callbacks that
vhost can register to update the kernel accordingly.  Down the road, kvm
code can be switched to use these as well, instead of calling kvm code
directly from exec.c as is done now.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2010-02-09 16:56:13 -06:00
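
The pattern is an ordinary notifier list: clients register a callback set
once, and the core memory code walks the list whenever a physical region
changes instead of calling kvm (or vhost) directly. A hedged sketch with
invented names and only a set_memory callback:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t hwaddr_t;   /* stand-in for the physical address type */

    /* One callback set per interested subsystem (vhost; later maybe kvm). */
    typedef struct MemListener {
        void (*set_memory)(struct MemListener *l, hwaddr_t start,
                           uint64_t size, int is_ram);
        struct MemListener *next;
    } MemListener;

    static MemListener *listeners;

    static void register_mem_listener(MemListener *l)
    {
        l->next = listeners;
        listeners = l;
    }

    /* Called by the core memory code whenever a region is (un)mapped;
     * every registered client is told, with no direct kvm/vhost calls. */
    static void notify_set_memory(hwaddr_t start, uint64_t size, int is_ram)
    {
        for (MemListener *l = listeners; l; l = l->next) {
            l->set_memory(l, start, size, is_ram);
        }
    }

    static void vhost_set_memory(MemListener *l, hwaddr_t start,
                                 uint64_t size, int is_ram)
    {
        (void)l;
        printf("vhost: update table, 0x%llx +0x%llx ram=%d\n",
               (unsigned long long)start, (unsigned long long)size, is_ram);
    }

    int main(void)
    {
        MemListener vhost = { vhost_set_memory, NULL };
        register_mem_listener(&vhost);
        notify_set_memory(0x0, 0x8000000, 1);   /* e.g. main RAM appears */
        return 0;
    }
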
Anthony Liguori
c227f0995e Revert "Get rid of _t suffix"
In the very least, a change like this requires discussion on the list.

The naming convention is goofy and it causes a massive merge problem.  Something
like this _must_ be presented on the list first so people can provide input
and cope with it.

This reverts commit 99a0949b72.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-10-01 16:12:16 -05:00
malc
99a0949b72 Get rid of _t suffix
Some not so obvious bits, slirp and Xen were left alone for the time
being.

Signed-off-by: malc <av1474@comtv.ru>
2009-10-01 22:45:02 +04:00
Blue Swirl
d60efc6b0d Make CPURead/WriteFunc structure 'const'
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2009-08-25 18:29:31 +00:00
Anthony Liguori
4a1418e07b Unbreak large mem support by removing kqemu
kqemu introduces a number of restrictions on the i386 target.  The worst is that
it prevents large memory from working in the default build.

Furthermore, kqemu is fundamentally flawed in a number of ways.  It relies on
the TSC as a time source, which will not be reliable on a multiprocessor
system in userspace.  Since most modern processors are multicore, this severely
limits the utility of kqemu.

kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel.  If someone can
implement workarounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.

N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.

Paul, please Ack or Nack this patch.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-08-24 08:02:55 -05:00
Avi Kivity
1eed09cb4a Remove io_index argument from cpu_register_io_memory()
The parameter is always zero except when registering the three internal
io regions (ROM, unassigned, notdirty).  Remove the parameter to reduce
the API's power, thus facilitating future change.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2009-06-16 15:18:37 -05:00
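
After the change a caller simply passes its read/write callback tables (one
entry per access width) and receives an io index, rather than choosing one.
The sketch below approximates that era's API with invented types and a local
table; it is not the real exec.c code:

    #include <stdint.h>
    #include <stdio.h>

    /* Period-style callback types: one table entry per access width
     * (byte, word, long). */
    typedef uint32_t (*CPUReadMemoryFunc)(void *opaque, uint64_t addr);
    typedef void (*CPUWriteMemoryFunc)(void *opaque, uint64_t addr,
                                       uint32_t val);

    typedef struct IoMemRegion {
        const CPUReadMemoryFunc  *read;
        const CPUWriteMemoryFunc *write;
        void *opaque;
    } IoMemRegion;

    static IoMemRegion io_mem_region[64];
    static int io_mem_nb = 3;     /* slots 0..2: ROM, unassigned, notdirty */

    /* New shape of the call: no caller-supplied io_index; the core hands
     * out the next free slot and returns it. */
    static int register_io_memory(const CPUReadMemoryFunc *mem_read,
                                  const CPUWriteMemoryFunc *mem_write,
                                  void *opaque)
    {
        io_mem_region[io_mem_nb].read   = mem_read;
        io_mem_region[io_mem_nb].write  = mem_write;
        io_mem_region[io_mem_nb].opaque = opaque;
        return io_mem_nb++;
    }

    static uint32_t rd(void *o, uint64_t a) { (void)o; (void)a; return 0; }
    static void wr(void *o, uint64_t a, uint32_t v)
    {
        (void)o; (void)a; (void)v;
    }

    int main(void)
    {
        static const CPUReadMemoryFunc  my_read[3]  = { rd, rd, rd };
        static const CPUWriteMemoryFunc my_write[3] = { wr, wr, wr };
        int idx = register_io_memory(my_read, my_write, NULL);
        printf("registered io region, got index %d\n", idx);
        return 0;
    }
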
Paul Brook
1ad2134f91 Hardware convenience library
The only target dependency for most hardware is sizeof(target_phys_addr_t).
Build these files into a convenience library, and use that instead of
building for every target.

Remove and poison various target specific macros to avoid bogus target
dependencies creeping back in.

Big/Little endian is not handled because devices should not know or care
about this to start with.

Signed-off-by: Paul Brook <paul@codesourcery.com>
2009-05-19 16:17:58 +01:00
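
The "poison" part refers to GCC's #pragma GCC poison, which turns any later
use of a named identifier into a hard compile error; that is how
target-specific macros are kept out of code that is built only once. A
minimal, self-contained illustration (the macro name is just an example of
the kind that was poisoned):

    /* poison-demo.c */
    #define TARGET_WORDS_BIGENDIAN 1

    /* After this pragma, any further mention of the identifier is a hard
     * compile error, so target-specific assumptions cannot silently creep
     * back into common code. */
    #pragma GCC poison TARGET_WORDS_BIGENDIAN

    int main(void)
    {
        /* int x = TARGET_WORDS_BIGENDIAN;
         *   ^ uncommenting this fails to build:
         *     error: attempt to use poisoned "TARGET_WORDS_BIGENDIAN" */
        return 0;
    }
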