Commit Graph

514 Commits

Author SHA1 Message Date
Blue Swirl
cc5bea608d cputlb: prepare private memory API for public consumption
Fold is_ram_rom() and is_ram_rom_romd() into callers.

Change is_romd() and section_addr() to take MemoryRegion
instead of MemoryRegionSection for consistency and
use memory_region_ prefix.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:05 +00:00
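
As an illustration of the renamed predicate, a minimal sketch with a toy MemoryRegion; the field names are assumptions, not the actual QEMU layout:

    #include <stdbool.h>

    /* Toy stand-in for QEMU's MemoryRegion; field names are assumed. */
    typedef struct MemoryRegion {
        bool rom_device;   /* ROM device: direct reads, MMIO writes */
        bool readable;     /* currently in directly-readable (ROM) mode */
    } MemoryRegion;

    /* Takes a MemoryRegion rather than a MemoryRegionSection, as above. */
    static bool memory_region_is_romd(MemoryRegion *mr)
    {
        return mr->rom_device && mr->readable;
    }
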
Blue Swirl
0cac1b66c8 cputlb: move TLB handling to a separate file
Move TLB handling and softmmu code load helpers to cputlb.c,
compile only for softmmu targets.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:04 +00:00
Blue Swirl
e554861766 exec: prepare for splitting
Make s_cputlb_empty_entry 'const'.

Rename tlb_flush_jmp_cache() to tb_flush_jmp_cache().

Refactor code to add cpu_tlb_reset_dirty_all(),
memory_region_section_get_iotlb() and
memory_region_is_unassigned().

Remove unused cpu_tlb_update_dirty().

Fix coding style in areas to be moved.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:02 +00:00
Stefan Weil
8efe0ca83e w64: Use uintptr_t in exec.c
Replace all type casts to 'long' or 'unsigned long' by 'intptr_t' or 'uintptr_t'.

For type casts which are only used to extract the lower bits of an address
or to modify those bits, signedness does not matter. There I always use 'uintptr_t'.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:17 +02:00
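
As a worked example of why signedness is irrelevant when only the low bits of an address are extracted or modified, a small standalone sketch (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int buf[4];
        void *p = &buf[1];

        /* Offset within a 4 KiB page; uintptr_t is as wide as a pointer on
           every host, including w64 where long is only 32 bits. */
        uintptr_t low = (uintptr_t)p & 0xfff;

        /* Clearing and re-adding those bits round-trips the pointer. */
        void *q = (void *)(((uintptr_t)p & ~(uintptr_t)0xfff) | low);

        printf("low bits 0x%lx, round-trip %s\n",
               (unsigned long)low, p == q ? "ok" : "broken");
        return 0;
    }
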
Stefan Weil
6840981dfb w64: Use larger alignment for section with generated code
The MinGW-w64 compiler allows __attribute__((aligned(32))).

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:16 +02:00
Stefan Weil
c6d506742f w64: Fix data types in cpu-all.h, exec.c
w64 needs uintptr_t instead of unsigned long.
For other hosts, nothing changes.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:16 +02:00
Max Filippov
1e7855a558 exec: provide tb_invalidate_phys_addr function
Allow TB invalidation by its physical address, extracting the implementation
from the breakpoint_invalidate function.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 15:25:36 +00:00
Blue Swirl
2050396801 Use uintptr_t for various op related functions
Use uintptr_t instead of void * or unsigned long in
several op related functions, env->mem_io_pc and
GETPC() macro.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 14:23:37 +00:00
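
A hedged sketch of the GETPC() idea (not necessarily the exact QEMU macro): the helper's return address is captured as an integer wide enough for any host pointer.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: __builtin_return_address(0) yields the address the current
       function will return to; holding it in uintptr_t avoids truncation on
       hosts where long is narrower than a pointer (w64). */
    #define GETPC_SKETCH()  ((uintptr_t)__builtin_return_address(0))

    static void helper_example(void)
    {
        uintptr_t retaddr = GETPC_SKETCH();
        printf("called from %p\n", (void *)retaddr);
    }

    int main(void)
    {
        helper_example();
        return 0;
    }
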
Stefan Weil
6375e09e79 w64: Fix data type of tb_next and other variables used for host addresses
QEMU host addresses must use uintptr_t to be portable for hosts with
an unusual size of long (w64).

tb_jmp_offset is a uint16_t value; therefore the local variable offset
in the function tb_set_jmp_target was changed from unsigned long to uint16_t.

The type cast to long in function tb_add_jump now also uses uintptr_t.
For the bit operation used here, the signedness of the type cast does
not matter.

Some remaining unsigned long values are either only used for ARM assembler
code or will be fixed in a later patch for PPC.

v2:
Fix signature of tb_find_pc in exec.c, too (hint from Blue Swirl, thanks).
There remain lots of other long / unsigned long in exec.c which must be
replaced by uintptr_t. This will be done in a separate patch. Here
only one of these type casts is fixed.

v3:
Also fix signature of page_unprotect.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-07 11:27:45 +00:00
Richard Henderson
813da6277c tcg: Use the GDB JIT debugging interface.
This allows us to generate unwind info for the dynamically generated
code in the code_gen_buffer.  Only i386 is converted at this point.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-24 13:07:48 +00:00
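
The GDB JIT interface being hooked here is documented by GDB; a minimal sketch of its shape (QEMU's actual registration code generates a full in-memory ELF image and is more involved):

    #include <stddef.h>
    #include <stdint.h>

    typedef enum {
        JIT_NOACTION = 0,
        JIT_REGISTER_FN,
        JIT_UNREGISTER_FN
    } jit_actions_t;

    struct jit_code_entry {
        struct jit_code_entry *next_entry;
        struct jit_code_entry *prev_entry;
        const char *symfile_addr;   /* in-memory object with symbols/unwind info */
        uint64_t symfile_size;
    };

    struct jit_descriptor {
        uint32_t version;
        uint32_t action_flag;
        struct jit_code_entry *relevant_entry;
        struct jit_code_entry *first_entry;
    };

    /* GDB sets a breakpoint on this symbol, so it must not be optimized away. */
    void __attribute__((noinline)) __jit_debug_register_code(void)
    {
        __asm__ volatile ("");
    }

    struct jit_descriptor __jit_debug_descriptor = { 1, 0, NULL, NULL };

    static void register_generated_code(struct jit_code_entry *entry)
    {
        entry->prev_entry = NULL;
        entry->next_entry = __jit_debug_descriptor.first_entry;
        if (entry->next_entry) {
            entry->next_entry->prev_entry = entry;
        }
        __jit_debug_descriptor.first_entry = entry;
        __jit_debug_descriptor.relevant_entry = entry;
        __jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
        __jit_debug_register_code();   /* an attached GDB reads the new entry here */
    }
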
Anthony PERARD
0a1b357f15 exec: fix guest memory access for Xen
In cpu_physical_memory_rw, a change was introduced so that qemu_get_ram_ptr is
no longer called with the ram addr we want to access, but only with the
section address. This patch fixes this. (All other calls to qemu_get_ram_ptr
already pass the right address.)

This patch fixes Xen guests.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 19:13:30 +02:00
Avi Kivity
32b089808f memory: check for watchpoints when getting code ram_addr
The code to get the ram_addr from a (tlb entry, vaddr) pair
checks that the resulting memory is not MMIO, but neglects to
check whether the region is hidden by a watchpoint page.

Add the missing check.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 11:15:01 +02:00
Avi Kivity
7859cc6e39 exec: fix write tlb entry misused as iotlb
A couple of code paths check the lower bits of CPUTLBEntry::addr_write
against io_mem_ram as a way of looking for a dirty RAM page.  This works
by accident since the value is zero, which matches all clear bits for
TLB_INVALID, TLB_MMIO, and TLB_NOTDIRTY (indicating dirty RAM).

Make it work by design by checking for the proper bits.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 11:15:00 +02:00
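
A sketch of the distinction, with the flag values assumed purely for illustration (the real definitions live in QEMU's CPU headers):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed flag bits kept in the low, sub-page bits of a TLB entry. */
    #define TLB_SKETCH_INVALID   (1u << 3)
    #define TLB_SKETCH_NOTDIRTY  (1u << 4)
    #define TLB_SKETCH_MMIO      (1u << 5)

    /* A write hits plain, already-dirty RAM when none of the special-handling
       bits are set.  Comparing the low bits against io_mem_ram only worked by
       accident because that index happened to be zero. */
    static bool tlb_write_is_dirty_ram(uintptr_t addr_write)
    {
        uintptr_t flags = TLB_SKETCH_INVALID | TLB_SKETCH_NOTDIRTY | TLB_SKETCH_MMIO;
        return (addr_write & flags) == 0;
    }
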
Blue Swirl
e141ab52d2 softmmu templates: optionally pass CPUState to memory access functions
Optionally, make memory access helpers take a parameter for CPUState
instead of relying on global env.

On most targets, perform simple moves to reorder registers. On i386,
switch from regparm(3) calling convention to standard stack-based
version.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18 12:21:52 +00:00
Andreas Färber
9349b4f9fd Rename CPUState -> CPUArchState
Scripted conversion:
  for file in *.[hc] hw/*.[hc] hw/kvm/*.[hc] linux-user/*.[hc] linux-user/m68k/*.[hc] bsd-user/*.[hc] darwin-user/*.[hc] tcg/*/*.[hc] target-*/cpu.h; do
    sed -i "s/CPUState/CPUArchState/g" $file
  done

All occurrences of CPUArchState are expected to be replaced by QOM CPUState,
once all targets are QOM'ified and common fields have been extracted.

Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
2012-03-14 22:20:27 +01:00
Avi Kivity
97161e177b memory: get rid of cpu_register_io_memory()
The return value of cpu_register_io_memory() is no longer used anywhere, so
we can remove it and all associated data and code.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:16:39 +02:00
Avi Kivity
37ec01d433 memory: dispatch directly via MemoryRegion
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:06:11 +02:00
Avi Kivity
ce5d64c2d0 exec: fix code tlb entry misused as iotlb in get_page_addr_code()
get_page_addr_code() reads a code tlb entry, but interprets it as an
iotlb entry.  This works by accident since the low bits of a RAM code
tlb entry are clear, and match a RAM iotlb entry.  This accident is
about to unhappen, so fix the code to use an iotlb entry (using the
code entry with TLB_MMIO may fail if the page is a watchpoint).

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 18:54:20 +02:00
Avi Kivity
aa102231f0 memory: store section indices in iotlb instead of io indices
A step towards eliminating io indices.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 17:06:55 +02:00
Avi Kivity
f3705d5329 memory: make phys_page_find() return an unadjusted section
We'd like to store the section index in the iotlb, so we can't
adjust it before returning.  Return an unadjusted section and
instead introduce section_addr(), which does the adjustment later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 16:16:34 +02:00
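
A sketch of what the deferred adjustment can look like, with the types and field names simplified from QEMU's:

    #include <stdint.h>

    typedef uint64_t hwaddr_sketch;   /* stand-in for target_phys_addr_t */

    typedef struct {
        hwaddr_sketch offset_within_region;         /* assumed field names */
        hwaddr_sketch offset_within_address_space;
    } MemoryRegionSectionSketch;

    /* Convert an absolute guest-physical address into an offset inside the
       section's MemoryRegion, after the unadjusted section was looked up. */
    static hwaddr_sketch section_addr(const MemoryRegionSectionSketch *section,
                                      hwaddr_sketch addr)
    {
        return addr - section->offset_within_address_space
                    + section->offset_within_region;
    }
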
Avi Kivity
a2d335214a memory: fix I/O port aliases
Commit e58ac72b6a0 ("ioport: change portio_list not to use
memory_region_set_offset()") started using aliases of I/O memory
regions.  Since the IORange used for the I/O was contained in the
target region, the alias information (specifically, the offset
into the region) was lost.  This broke -vga std.

Fix by allocating an independent object to hold the IORange and
also the new offset.

Note that I/O memory regions were conceptually broken wrt aliases
in a different way: an alias can cause the same region to appear
twice in an address space, but we had just one IORange to service it.
This patch fixes that problem as well, since we can now have multiple
IORange/MemoryRegion associations.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 17:40:12 +02:00
Blue Swirl
b3e54c689c Merge branch 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa
* 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa:
  target-xtensa: add breakpoint tests
  target-xtensa: add DEBUG_SECTION to overlay tool
  target-xtensa: add DBREAK data breakpoints
  exec: let cpu_watchpoint_insert accept larger watchpoints
  exec: fix check_watchpoint exiting cpu_loop
  exec: add missing breaks to the watch_mem_write
  target-xtensa: add ICOUNT SR and debug exception
  target-xtensa: implement instruction breakpoints
  target-xtensa: add DEBUGCAUSE SR and configuration
  target-xtensa: fetch 3rd opcode byte only when needed
  target-xtensa: implement info tlb monitor command
  target-xtensa: define TLB_TEMPLATE for MMU-less cores
2012-03-03 17:53:41 +00:00
Avi Kivity
07f07b31e5 memory: allow phys_map tree paths to terminate early
When storing large contiguous ranges in phys_map, all values tend to
be the same pointers to a single MemoryRegionSection.  Collapse them
by marking nodes with level > 0 as leaves.  This reduces tree memory
usage dramatically.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
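
A simplified sketch of the idea (structure and names are assumptions, not the real exec.c code): an interior node flagged as a leaf covers its entire subtree with one value, so lookups stop early for large uniform ranges.

    #include <stdint.h>

    #define L2_BITS 10
    #define L2_SIZE (1u << L2_BITS)

    typedef struct {
        unsigned is_leaf : 1;    /* 1: ptr is a section index for the whole range */
        unsigned ptr     : 15;   /* 0: ptr is the index of the child node table   */
    } PhysPageEntrySketch;

    /* nodes[i] is one table of L2_SIZE entries. */
    static uint16_t phys_page_lookup(PhysPageEntrySketch root,
                                     PhysPageEntrySketch (*nodes)[L2_SIZE],
                                     uint64_t page_index, int levels)
    {
        PhysPageEntrySketch pe = root;

        for (int level = levels - 1; level >= 0 && !pe.is_leaf; level--) {
            pe = nodes[pe.ptr][(page_index >> (level * L2_BITS)) & (L2_SIZE - 1)];
        }
        return pe.ptr;   /* the same section for every page under an early leaf */
    }
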
Avi Kivity
c19e8800d4 memory: unify PhysPageEntry::node and ::leaf
They have the same type, unify them.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
2999097bf1 memory: change phys_page_set() to set multiple pages
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
f7bf546118 memory: switch phys_page_set() to a recursive implementation
Setting multiple pages at once requires backtracking to previous
nodes; easiest to achieve via recursion.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
a391843286 memory: replace phys_page_find_alloc() with phys_page_set()
By giving the function the value we want to set, we make it
more flexible for the next patch.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
0f0cb164cc memory: simplify multipage/subpage registration
Instead of considering subpage on a per-page basis, split each section
into a subpage head, multipage body, and subpage tail, and register
each separately.  This simplifies the registration functions.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
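
A standalone sketch of the splitting idea under an assumed page size (not the actual registration code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE_SKETCH 4096ull
    #define PAGE_MASK_SKETCH (~(PAGE_SIZE_SKETCH - 1))

    static void register_one(uint64_t addr, uint64_t size, bool subpage)
    {
        printf("%-12s 0x%llx + 0x%llx\n", subpage ? "subpage" : "page-aligned",
               (unsigned long long)addr, (unsigned long long)size);
    }

    /* Split [addr, addr+size) into an unaligned head, a page-aligned body and
       an unaligned tail, so subpage handling never needs per-page reasoning. */
    static void register_split(uint64_t addr, uint64_t size)
    {
        uint64_t end = addr + size;
        uint64_t body_start = (addr + PAGE_SIZE_SKETCH - 1) & PAGE_MASK_SKETCH;
        uint64_t body_end = end & PAGE_MASK_SKETCH;

        if (body_start >= body_end) {            /* fits inside a single page */
            register_one(addr, size, true);
            return;
        }
        if (addr != body_start) {
            register_one(addr, body_start - addr, true);            /* head */
        }
        register_one(body_start, body_end - body_start, false);     /* body */
        if (body_end != end) {
            register_one(body_end, end - body_end, true);           /* tail */
        }
    }

    int main(void)
    {
        register_split(0x1f00, 0x3200);   /* head, three full pages, tail */
        return 0;
    }
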
Avi Kivity
31ab2b4a46 memory: give phys_page_find() its own tree search loop
We'll change phys_page_find_alloc() soon, but phys_page_find()
doesn't need to bear the consequences.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
06ef3525e1 memory: make phys_page_find() return a MemoryRegionSection
We no longer describe memory in terms of individual pages; use sections
throughout instead.

PhysPageDesc no longer used - remove.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
117712c3e4 memory: move tlb flush to MemoryListener commit callback
This way, if we have several changes in a single transaction, we flush just
once.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
717cb7b259 memory: unify the two branches of cpu_register_physical_memory_log()
Identical except that the second branch knows it's not modifying an existing
subpage.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
8636b9295b memory: fix RAM subpages in newly initialized pages
If the first subpage installed in a page is RAM, then we install it as
a full page, instead of a subpage.  Fix by not special casing RAM.

The issue dates to commit db7b5426a4, which introduced subpages.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
d6f2ea22a0 memory: compress phys_map node pointers to 16 bits
Use an expanding vector to store nodes.  Allocation is baroque due to g_renew()
potentially invalidating pointers; this will be addressed later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
5312bd8b31 memory: store MemoryRegionSection pointers in phys_map
Instead of storing PhysPageDesc, store pointers to MemoryRegionSections.
The various offsets (phys_offset & ~TARGET_PAGE_MASK,
phys_offset & TARGET_PAGE_MASK, region_offset) can all be synthesized
from the information in a MemoryRegionSection.  Adjust phys_page_find()
to synthesize a PhysPageDesc.

The upshot is that phys_map now contains uniform values, so it's easier
to generate and compress.

The end result is somewhat clumsy but this will be improved as we
propagate MemoryRegionSections throughout the code instead of transforming
them to PhysPageDesc.

The MemoryRegionSection pointers are stored as uint16_t offsets in an
array.  This saves space (when we also compress node pointers) and is
more cache friendly.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
4346ae3e28 memory: unify phys_map last level with intermediate levels
This lays the groundwork for storing leaf data in intermediate levels,
saving space.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
3eef53df6b memory: remove first level of l1_phys_map
L1 and the lower levels in l1_phys_map are equivalent, except that L1 has
a different size, and is always allocated.  Simplify the code by removing
L1.  This leaves us with a tree composed solely of L2 tables, but that
problem can be renamed away later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
54688b1ec1 memory: change memory registration to rebuild the memory map on each change
Instead of incrementally building the memory map, rebuild it every time.
This allows later simplification, since the code need not consider overlaying
a previous mapping.  It is also RCU friendly.

With large memory guests this can get expensive, since the operation is
O(mem size), but this will be optimized later.

As a side effect subpage and L2 leaks are fixed here.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:43 +02:00
Avi Kivity
50c1e1491e memory: support stateless memory listeners
Current memory listeners are incremental; that is, they are expected to
maintain their own state, and receive callbacks for changes to that state.

This patch adds support for stateless listeners; these work by receiving
a ->begin() callback (which tells them that new state is coming), a
sequence of ->region_add() and ->region_nop() callbacks, and then a
->commit() callback which signifies the end of the new state.  They should
ignore ->region_del() callbacks.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:42 +02:00
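
A sketch of what a stateless listener can look like; the callback struct below is a simplified stand-in whose fields follow the description above.

    #include <stddef.h>

    typedef struct MemoryRegionSectionSketch MemoryRegionSectionSketch;

    /* Simplified stand-in for the listener interface described above. */
    typedef struct MemoryListenerSketch {
        void (*begin)(struct MemoryListenerSketch *l);
        void (*commit)(struct MemoryListenerSketch *l);
        void (*region_add)(struct MemoryListenerSketch *l,
                           MemoryRegionSectionSketch *section);
        void (*region_del)(struct MemoryListenerSketch *l,
                           MemoryRegionSectionSketch *section);
        void (*region_nop)(struct MemoryListenerSketch *l,
                           MemoryRegionSectionSketch *section);
    } MemoryListenerSketch;

    static void map_begin(MemoryListenerSketch *l)
    {
        /* throw away the old state and start building a fresh map */
    }

    static void map_add(MemoryListenerSketch *l, MemoryRegionSectionSketch *s)
    {
        /* add the section to the map being built */
    }

    static void map_commit(MemoryListenerSketch *l)
    {
        /* the new state is complete; publish it */
    }

    static MemoryListenerSketch core_map_listener = {
        .begin      = map_begin,
        .commit     = map_commit,
        .region_add = map_add,
        .region_nop = map_add,    /* unchanged regions still populate the new map */
        .region_del = NULL,       /* a stateless listener ignores deletions       */
    };
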
Avi Kivity
4855d41a61 memory: split memory listener for the two address spaces
The memory and I/O address spaces do different things, so split them into
two memory listeners.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:42 +02:00
Avi Kivity
7376e5827a memory: allow MemoryListeners to observe a specific address space
Ignore any regions not belonging to a specified address space.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:42 +02:00
Avi Kivity
9363274709 memory: use a MemoryListener for core memory map updates too
This transforms memory.c into a library which can then be unit tested
easily, by feeding it inputs and listening to its outputs.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-02-29 13:44:42 +02:00
Avi Kivity
d7ec83e6b5 memory: don't pass ->readable attribute to cpu_register_physical_memory_log
It can be derived from the MemoryRegion itself (which is why it is not
used there).

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-02-29 13:44:42 +02:00
Max Filippov
0dc23828f1 exec: let cpu_watchpoint_insert accept larger watchpoints
Make cpu_watchpoint_insert accept watchpoints of any power-of-two size
up to the target page size.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2012-02-20 20:07:11 +04:00
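
A sketch of the kind of size check this implies (names assumed, not the actual cpu_watchpoint_insert code):

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE_SKETCH 4096u

    /* Accept any naturally aligned power-of-two length up to the page size. */
    static bool watchpoint_len_ok(uint64_t vaddr, uint64_t len)
    {
        if (len == 0 || len > TARGET_PAGE_SIZE_SKETCH) {
            return false;
        }
        if (len & (len - 1)) {              /* not a power of two */
            return false;
        }
        return (vaddr & (len - 1)) == 0;    /* aligned to its own length */
    }
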
Max Filippov
488d65772c exec: fix check_watchpoint exiting cpu_loop
In the case of a BP_STOP_BEFORE_ACCESS watchpoint, check_watchpoint intends to
signal an EXCP_DEBUG exception on exit from the cpu loop, but the exception
code is later overwritten by the cpu_resume_from_signal call.

Use cpu_loop_exit with BP_STOP_BEFORE_ACCESS watchpoints.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2012-02-20 20:07:11 +04:00
Max Filippov
6736415047 exec: add missing breaks to the watch_mem_write
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Meador Inge <meadori@codesourcery.com>
2012-02-20 20:07:02 +04:00
Peter Maydell
771124e1a6 exec.c: Clarify comment about tlb_flush() flush_global parameter
Clarify the comment about tlb_flush()'s flush_global parameter,
so it is clearer what it does and why it is OK that the implementation
currently ignores it.

Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-02-01 14:45:01 -06:00
Benjamin Herrenschmidt
82afa58641 virtio-pci: Fix endianness of virtio config
The virtio config area in PIO space is a bit special. The initial
header is little endian but the rest (device specific) is guest
native endian.

The PIO accessors for PCI on machines that don't have native IO ports
assume that all PIO is little endian, which works fine for everything
except the above.

A complicated way to fix it would be to split the BAR into two memory
regions with different endianness settings, but this isn't practical
to do, besides, the PIO code doesn't honor region endianness anyway
(I have a patch for that too but it isn't necessary at this stage).

So I decided to go for the quick fix instead which consists of
reverting the swap in virtio-pci in selected places, hoping that when
we eventually do a "v2" of the virtio protocols, we sort that out once
and for all using a fixed endian setting for everything.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
[agraf: keep virtio in libhw and determine endianness through a
        helper function in exec.c]
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
2012-01-21 05:17:01 +01:00
Aurelien Jarno
5c84bd904b tcg-arm: fix a typo in comments
ARM still doesn't support 16GB buffers in 32-bit mode; replace
16GB with 16MB in the comment.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-01-13 10:36:59 +00:00
Avi Kivity
11c7ef0c73 Remove IO_MEM_SHIFT
We no longer use any of the lower bits of a ram_addr, so we might as well
use them for the io table index.  This increases the number of potential
I/O handlers by a factor of 8.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
75c578dcaa Drop IO_MEM_ROMD
Unlike ->readonly, ->readable is not inherited from aliases, so we can simply
query the memory region.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
b3b00c78d8 Remove IO_MEM_SUBPAGE
Replace with a MemoryRegion flag.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
a621f38de8 Direct dispatch through MemoryRegion
Now that all mmio goes through MemoryRegions, we can convert
io_mem_opaque to be a MemoryRegion pointer, and remove the thunks
that convert from old-style CPU{Read,Write}MemoryFunc to MemoryRegionOps.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
1ec9b909ff Convert io_mem_watch to be a MemoryRegion
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
de712f9469 Convert IO_MEM_SUBPAGE_RAM to be a MemoryRegion
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
70c68e44bc Convert the subpage wrapper to be a MemoryRegion
Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
dd81124bf6 Switch cpu_register_physical_memory_log() to use MemoryRegions
Still internally using ram_addr.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
0e0df1e24d Convert IO_MEM_{RAM,ROM,UNASSIGNED,NOTDIRTY} to MemoryRegions
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED,
and IO_MEM_NOTDIRTY io handlers to MemoryRegions.  These aren't real
regions, since they are never added to the memory hierarchy, but they
allow reuse of the dispatch functionality.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:50 +02:00
Avi Kivity
d39e822265 Uninline get_page_addr_code()
Its use of IO_MEM_ROM and friends will later cause #include loops; and it
is too large to merit inlining.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
1d393fa2d1 Avoid range comparisons on io index types
The code sometimes uses range comparisons on io indexes (e.g.
index <= IO_MEM_ROM).  Avoid these as they make moving to objects harder.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
2774c6d0ae Fix wrong region_offset when overlaying a page with another
cpu_register_physical_memory_log() does not update region_offset
if a page was previously registered for the same address.  This
could cause mmio accesses to go to the wrong place, by using the
old region_offset.

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
acbbec5d43 memory: move mmio access to functions
Currently mmio access goes directly to the io_mem_{read,write} arrays.
In preparation for eliminating them, add indirection via a function.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
f1f6e3b86e exec: make phys_page_find() return a temporary
Instead of returning a PhysPageDesc pointer, return a temporary.
This lets us move away from actually storing PhysPageDescs, and
instead synthesise them when needed.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
be675c9720 memory: move endianness compensation to memory core
Instead of doing device endianness compensation in cpu_register_io_memory(),
do it in the memory core.

Signed-off-by: Avi Kivity <avi@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04 13:34:49 +02:00
Avi Kivity
8f77558f22 memory: obsolete cpu_physical_memory_[gs]et_dirty_tracking()
The getter is no longer used, so it is completely removed.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04 13:34:49 +02:00
Avi Kivity
7c63736603 Store MemoryRegion in RAMBlock
As a step in moving live migration from RAMBlocks to MemoryRegions,
store the MemoryRegion in a RAMBlock.

Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04 13:34:48 +02:00
Avi Kivity
c5705a7728 vmstate, memory: decouple vmstate from memory API
Currently creating a memory region automatically registers it for
live migration.  This differs from other state (which is enumerated
in a VMStateDescription structure) and ties the live migration code
into the memory core.

Decouple the two by introducing a separate API, vmstate_register_ram(),
for registering a RAM block for migration.  Currently the same
implementation is reused, but later it can be moved into a separate list,
and registrations can be moved to VMStateDescription blocks.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04 13:34:48 +02:00
Avi Kivity
586c6230c0 Remove cpu_get_physical_page_desc()
No longer used.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-03 19:19:28 +02:00
Avi Kivity
dcd97e33af memory: remove CPUPhysMemoryClient
No longer used.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-03 19:19:27 +02:00
Avi Kivity
7664e80c84 memory: add API for observing updates to the physical memory map
Add an API that allows a client to observe changes in the global
memory map:
 - region added (possibly with logging enabled)
 - region removed (possibly with logging enabled)
 - logging started on a region
 - logging stopped on a region
 - global logging started
 - global logging removed

This API will eventually replace cpu_register_physical_memory_client().

Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-20 14:14:07 +02:00
Avi Kivity
67d95c153b memory: move obsolete exec.c functions to a private header
This will help avoid accidental usage.

Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-19 17:28:54 +02:00
Avi Kivity
fce537d4a7 memory, xen: pass MemoryRegion to xen_ram_alloc()
Currently xen_ram_alloc() relies on ram_addr, which is going away.
Give it something else to use as a cookie.

Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-19 17:23:24 +02:00
Alex Rozenman
5ab97b7f81 phys_page_find_alloc: Use correct initial region_offset.
This fixes a common bug with the initial region_offset value.
Usually, the pages are re-assigned afterwards, so the bug
has a very small effect on regular QEMU use flows.

Signed-off-by: Alex Rozenman <Alex_Rozenman@mentor.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-12-15 10:22:40 -06:00
Andreas Färber
56384e8b1e exec.c: Fix subpage memory access to RAM MemoryRegion
Commit 95c318f5e1 (Fix segfault in mmio
subpage handling code.) prevented a segfault by making all subpage
registrations over an existing memory page perform an unassigned access.
Symptoms were writes not taking effect and reads returning zero.

Very small page sizes are not currently supported either,
so subpage memory areas cannot fully be avoided.

Therefore change the previous fix to use a new IO_MEM_SUBPAGE_RAM
instead of IO_MEM_UNASSIGNED. Suggested by Avi.

Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Cc: Avi Kivity <avi@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-12-15 09:27:23 -06:00
Dr. David Alan Gilbert
222f23f508 tcg/arm: remove fixed map code buffer restriction
On ARM, don't map the code buffer at a fixed location, and fix up the
call/goto tcg routines to let it do long jumps.

Mapping the code buffer at a fixed address could sometimes result in it being
mapped over the top of the heap with pretty random results.

Signed-off-by: Dr. David Alan Gilbert <david.gilbert@linaro.org>
Signed-off-by: Andrzej Zaborowski <andrew.zaborowski@intel.com>
2011-12-14 21:58:18 +01:00
Stefan Weil
daf767b16a w32: Disable buffering for log file
W32 does not support line buffering, but it supports unbuffered output.

Unbuffered output is better for writing to qemu.log than fully buffered
output because it also shows the latest log messages when an application
crash occurs.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-12-10 17:05:48 +00:00
Paolo Bonzini
b3c4bbe56d Make cpu_single_env thread-local
Make cpu_single_env thread-local. This fixes a regression
in handling of multi-threaded programs in linux-user mode
(bug 823902).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[Peter Maydell: rename tls_cpu_single_env to cpu_single_env]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-11-01 10:58:08 -05:00
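
A sketch of the shape of the change (declaration simplified; QEMU wraps this in portability macros for hosts without __thread support):

    /* Each host thread that runs guest code sees its own current-CPU pointer,
       so concurrent guest threads in linux-user mode no longer clobber it. */
    struct CPUStateSketch;

    static __thread struct CPUStateSketch *cpu_single_env_sketch;

    static void cpu_exec_thread(struct CPUStateSketch *env)
    {
        cpu_single_env_sketch = env;   /* visible only to this thread */
        /* ... run guest code ... */
        cpu_single_env_sketch = NULL;
    }
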
Alex Williamson
3e837b2c05 Error check find_ram_offset
Spotted via code review: we initialize offset to 0 to avoid a
compiler warning, but in the unlikely case that offset is
never set to something else, we should abort instead of returning
a value that will almost certainly cause problems.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-11-01 10:58:08 -05:00
陳韋任
8f355d6775 exec.c: Remove useless comment
phys_ram_size was removed back in QEMU 0.12, so remove the now-useless
comment.

Signed-off-by: Chen Wen-Ren <chenwj@iis.sinica.edu.tw>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-10-26 13:38:36 +01:00
Paolo Bonzini
946fb27c1d qemu-timer: move icount to cpus.c
None of this is needed by tools, and most of it can even be made static
inside cpus.c.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2011-10-21 18:14:30 +02:00
Blue Swirl
3917149d96 Move GETPC from dyngen-exec.h to exec-all.h
GETPC() can be used even from outside of helper code. Move the macro to
a more accessible location. Avoid a compile warning from redefining it in exec.c.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-10-01 09:31:43 +00:00
Stefan Weil
8b3692d136 Remove qemu_host_page_bits
It was introduced with commit 54936004fd
as host_page_bits but never used.

Signed-off-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-09-21 10:50:59 +01:00
Anthony Liguori
7267c0947d Use glib memory allocation and free functions
qemu_malloc/qemu_free no longer exist after this commit.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-20 23:01:08 -05:00
Paolo Bonzini
85d59fef9d fix QLIST usage for RAM list
Spotted while reviewing the migration thread patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2011-08-12 13:07:58 +01:00
Avi Kivity
309cb471c8 Integrate I/O memory regions into qemu
get_system_io() returns the root I/O memory region.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-08 10:15:53 -05:00
Tobias Nygren
9f4b09a4cd Use mmap to allocate execute memory
Use mmap to allocate executable memory on NetBSD as well.

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-08-07 09:57:05 +00:00
Jan Kiszka
d5ab9713d2 Avoid allocating TCG resources in non-TCG mode
Do not allocate TCG-only resources like the translation buffer when
running over KVM or XEN. Saves a "few" bytes in the qemu address space
and is also conceptually cleaner.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-05 10:57:36 -05:00
Avi Kivity
8417cebfda memory: use signed arithmetic
When trying to map an alias of a ram region, where the alias starts at
address A and we map it into address B, and A > B, we had an arithmetic
underflow.  Because we use unsigned arithmetic, the underflow converted
into a large number which failed addrrange_intersects() tests.

The concrete example which triggered this was cirrus vga mapping
the framebuffer at offsets 0xc0000-0xc7fff (relative to the start of
the framebuffer) into offset 0xa0000 (relative to the start of the system
address space).

With our favorite analogy of a windowing system, this is equivalent to
dragging a subwindow off the left edge of the screen, and failing to clip
it into its parent window which is on screen.

Fix by switching to signed arithmetic.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-08-05 10:57:36 -05:00
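
A worked example of the underflow described above (standalone, using the cirrus numbers from the commit message):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t mapped_at = 0xa0000;   /* where the alias appears (B) */
        uint64_t alias_off = 0xc0000;   /* where the alias starts (A), A > B */

        uint64_t u = mapped_at - alias_off;                    /* wraps around   */
        int64_t  s = (int64_t)mapped_at - (int64_t)alias_off;  /* stays negative */

        /* The huge wrapped value defeats range-intersection tests; the signed
           value clips correctly. */
        printf("unsigned delta 0x%" PRIx64 ", signed delta %" PRId64 "\n", u, s);
        return 0;
    }
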
Anthony Liguori
3046c98404 Merge remote-tracking branch 'agraf/xen-next' into staging 2011-07-29 09:42:12 -05:00
Avi Kivity
62152b8a01 exec.c: initialize memory map
Allocate the root memory region and initialize it.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2011-07-29 08:25:44 -05:00
Anthony PERARD
f15fbc4bd1 cpu-common: Have a ram_addr_t of uint64 with Xen.
In the Xen case, memory can be bigger than the host memory. That means a
32-bit host (and QEMU) should be able to handle a 64-bit RAM address.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-26 06:57:28 +02:00
Anthony PERARD
8ca5692df4 exec.c: Use ram_addr_t in cpu_physical_memory_rw(...).
As the variables pd and addr1 inside the function cpu_physical_memory_rw
are meant to handle a RAM address, they should be of the ram_addr_t type
instead of unsigned long.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-26 06:43:10 +02:00
Blue Swirl
b14ef7c9ab Fix unassigned memory access handling
cea5f9a28f exposed bugs in unassigned memory
access handling. Fix them by always passing CPUState to the handlers.

Reported-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-07-20 21:28:08 +00:00
Stefano Stabellini
8ab934f93b qemu_ram_ptr_length: take ram_addr_t as arguments
qemu_ram_ptr_length should take ram_addr_t as argument rather than
target_phys_addr_t because it does comparisons with RAMBlock addresses.

cpu_physical_memory_map should create a ram_addr_t address to pass to
qemu_ram_ptr_length from PhysPageDesc phys_offset.

Remove code after abort() in qemu_ram_ptr_length.

Changes in v2:

- handle 0 size in qemu_ram_ptr_length;

- rename addr1 to raddr;

- initialize raddr to ULONG_MAX.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:25 +02:00
Jan Kiszka
868bb33faa xen: Fold CONFIG_XEN_MAPCACHE into CONFIG_XEN
Xen won't be enabled if there is no backend support available for the
host. And that also means the map cache will work. So drop the separate
config switch and move the required stubs over to xen-stub.c.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:24 +02:00
Jan Kiszka
e41d7c691a xen: Clean up map cache API naming
The map cache is a Xen thing, so its API should make this clear.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2011-07-17 01:54:24 +02:00
Peter Maydell
a884da8a06 exec.c: Fix calculation of code_gen_buffer_max_size
When calculating the point at which we should not try to put another
TB into the code gen buffer, we have to allow not just for OPC_MAX_SIZE
but OPC_BUF_SIZE. This is because the target translate.c will only
stop when an instruction has put it past the OPC_MAX_SIZE limit, so
we have to include the MAX_OP_PER_INSTR margin which that final insn
might have used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-07-12 20:29:08 +00:00
Alexander Graf
1e78bcc19c exec: add endian specific phys ld/st functions
Device code sometimes needs to access physical memory and does that
through the ld./st._phys functions. However, these are the exact same
functions that the CPU uses to access memory, which means they will
be endianness swapped depending on the target CPU.

However, devices don't know about the CPU's endianness, but instead
access memory directly using their own interface to the memory bus,
so they need some way to read data with their native endianness.

This patch adds _le and _be functions to ld./st._phys.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-07-12 20:00:24 +00:00
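
A standalone sketch of the idea; the byte-level read below is a stand-in, not QEMU's cpu_physical_memory_read, and the function names are illustrative only:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint8_t fake_ram[8] = { 0x78, 0x56, 0x34, 0x12 };

    /* Stand-in for a byte-level physical-memory read. */
    static void phys_read(uint64_t addr, uint8_t *buf, int len)
    {
        memcpy(buf, fake_ram + addr, len);
    }

    /* Devices ask for an explicit byte order instead of inheriting whatever
       the target CPU happens to use. */
    static uint32_t ldl_le_phys_sketch(uint64_t addr)
    {
        uint8_t b[4];
        phys_read(addr, b, 4);
        return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
               ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }

    static uint32_t ldl_be_phys_sketch(uint64_t addr)
    {
        uint8_t b[4];
        phys_read(addr, b, 4);
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8) | (uint32_t)b[3];
    }

    int main(void)
    {
        printf("le 0x%08x, be 0x%08x\n",
               ldl_le_phys_sketch(0), ldl_be_phys_sketch(0));
        return 0;
    }
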
Anthony Liguori
bb820c03e2 Merge remote-tracking branch 'stefanha/trivial-patches' into staging 2011-06-27 11:25:23 -05:00
Blue Swirl
2b41f10e18 Remove exec-all.h include directives
Most exec-all.h include directives are now useless, remove them.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2011-06-26 18:25:35 +00:00