Devices that use address_space_rw to write large areas to memory
(as opposed to address_space_map/unmap) were broken with respect
to migration since fe680d0 (exec: Limit translation limiting in
address_space_translate to xen, 2014-05-07). Such devices include
IDE CD-ROMs.
The reason is that invalidate_and_set_dirty (called by address_space_rw
but not address_space_map/unmap) was only setting the dirty bit for
the first page in the translation.
To fix this, introduce cpu_physical_memory_set_dirty_range_nocode that
is the same as cpu_physical_memory_set_dirty_range except it does not
muck with the DIRTY_MEMORY_CODE bitmap. This function can be used if
the caller invalidates translations with tb_invalidate_phys_page_range.
There is another difference between cpu_physical_memory_set_dirty_range
and cpu_physical_memory_set_dirty_flag; the former includes a call
to xen_modified_memory. This is handled separately in
invalidate_and_set_dirty, and is not needed in other callers of
cpu_physical_memory_set_dirty_range_nocode, so leave it alone.
Just one nit: now that invalidate_and_set_dirty takes care of handling
multiple pages, there is no need for address_space_unmap to wrap it
in a loop. In fact that loop would now be O(n^2).
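Roughly speaking, the helper now walks every page covered by the written
range itself, so callers no longer need a per-page loop around it. A
minimal standalone sketch of that idea (simplified one-byte-per-page
bitmap and hypothetical names, not the actual QEMU code):

    #include <stdint.h>

    #define PAGE_BITS 12
    #define PAGE_SIZE (1u << PAGE_BITS)

    /* One byte per page for clarity; QEMU uses packed bitmaps. */
    static uint8_t dirty_pages[1024];

    /* Mark every page touched by [addr, addr + length) as dirty,
     * not just the first one. */
    static void set_dirty_range(uint64_t addr, uint64_t length)
    {
        uint64_t first = addr >> PAGE_BITS;
        uint64_t last = (addr + length - 1) >> PAGE_BITS;

        for (uint64_t page = first; page <= last; page++) {
            dirty_pages[page] = 1;
        }
    }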
Reported-by: Dave Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The old code was affected by memory gaps, which resulted in buffer pointers
pointing to addresses outside the mapped regions.
Here we are introducing the following changes:
- a new function, qemu_get_ram_block_host_ptr(), returns the host pointer
  to the RAM block; it is needed to calculate the offset of a specific
  region in host memory
- a new field, mmap_offset, is added to VhostUserMemoryRegion. It
  contains the offset at which the specific region starts in the mapped
  memory. As there is still no wider adoption of vhost-user, it was
  agreed not to bump the version number for this change
- other fields in the VhostUserMemoryRegion struct are not changed, as
  they are all needed for the user-mode application implementation
- region data is not taken from ram_list.blocks anymore; instead we use
  the region data already calculated for use in vhost-net
- multiple regions can now have the same FD, and the user application
  can call mmap() multiple times with the same FD but with different
  offsets (the user needs to take care of offset page alignment); a
  sketch of the user-space mapping follows this list
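To illustrate the intended usage (a rough sketch only; the field names
follow the vhost-user protocol description, and map_region() is a
hypothetical helper on the user-application side, not code from this
patch):

    #include <stdint.h>
    #include <sys/mman.h>

    typedef struct {
        uint64_t guest_phys_addr;
        uint64_t memory_size;
        uint64_t userspace_addr;
        uint64_t mmap_offset;   /* new: where this region starts in the mapping */
    } VhostUserMemoryRegion;

    /* Map one region from the FD that accompanies the message.  mmap()
     * requires a page-aligned file offset, so round mmap_offset down and
     * add the remainder back to the returned pointer. */
    static void *map_region(int fd, const VhostUserMemoryRegion *r,
                            uint64_t page_size)
    {
        uint64_t aligned = r->mmap_offset & ~(page_size - 1);
        uint64_t delta = r->mmap_offset - aligned;
        uint8_t *base = mmap(NULL, r->memory_size + delta,
                             PROT_READ | PROT_WRITE, MAP_SHARED,
                             fd, (off_t)aligned);

        return base == MAP_FAILED ? NULL : base + delta;
    }

The same FD can be passed to map_region() for several regions; only the
offset differs between calls.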
Signed-off-by: Damjan Marion <damarion@cisco.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
A new "share" property can be used with the "memory-file" backend to
map memory with MAP_SHARED instead of MAP_PRIVATE.
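In essence the property only selects the mapping flag; a minimal sketch,
with a hypothetical helper rather than the backend's actual code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Map a backing file either shared (writes reach the file and other
     * mappings of it) or private (copy-on-write, changes stay local). */
    static void *map_backing_file(int fd, size_t size, bool share)
    {
        int flags = share ? MAP_SHARED : MAP_PRIVATE;
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, fd, 0);

        return ptr == MAP_FAILED ? NULL : ptr;
    }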
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
And allow preallocation of file-based memory even without -mem-prealloc.
Some care is necessary because -mem-prealloc does not allow disabling
preallocation for hostmem-file.
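For context, preallocation here means faulting the backing pages in up
front instead of on first guest access; a simplified sketch of the idea
(hypothetical helper, not QEMU's actual preallocation routine):

    #include <stddef.h>
    #include <unistd.h>

    /* Touch every page once so the kernel allocates it immediately. */
    static void prealloc_touch(char *area, size_t size)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        for (size_t off = 0; off < size; off += page) {
            /* volatile keeps the compiler from optimizing the touch away */
            *(volatile char *)(area + off) = *(volatile char *)(area + off);
        }
    }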
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Right now, -mem-path will fall back to RAM-based allocation in some
cases. This should never happen with "-object memory-file"; prepare
the code by adding correct error propagation.
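The pattern is the usual Error ** propagation: instead of silently
falling back, the file-backed allocation path reports the failure to
its caller. A rough sketch (hypothetical function name;
error_setg_errno is the regular QAPI error helper):

    #include <errno.h>
    #include <sys/mman.h>
    #include "qapi/error.h"

    /* Fail loudly instead of quietly falling back to anonymous RAM. */
    static void *file_ram_alloc_sketch(int fd, size_t size, const char *path,
                                       Error **errp)
    {
        void *area = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (area == MAP_FAILED) {
            error_setg_errno(errp, errno, "unable to map backing store for %s",
                             path);
            return NULL;
        }
        return area;
    }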
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
MST: drop \n at end of error messages
Split the internal interface in exec.c to a separate function, and
push the check on mem_path up to memory_region_init_ram.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
See commit fbeadf50 (bitops: unify bitops_ffsl with the one in
host-utils.h, call it bitops_ctzl) on why ctzl should be used instead
of ffsl.
This is also needed for musl libc, which does not implement ffsl.
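For reference, the two only differ in the off-by-one and the zero case;
a small standalone example using the GCC/clang builtin (the relationship
is the point here, not QEMU's exact wrapper):

    #include <stdio.h>

    int main(void)
    {
        unsigned long word = 0x50;   /* bits 4 and 6 set */

        /* ffsl(word) would return 5, the 1-based index of the lowest set
         * bit, but ffsl is a glibc extension that musl lacks.  Counting
         * trailing zeros gives the same information 0-based, so for a
         * non-zero word ctzl(word) == ffsl(word) - 1. */
        if (word) {
            printf("lowest set bit: %d\n", __builtin_ctzl(word));   /* prints 4 */
        }
        return 0;
    }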
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ae2810c4bb patch introduced an optimization for the
ram_list.dirty_memory update. However, it can only work correctly if
hpratio is 1, as the @bitmap parameter stores 1 bit per system page
(which may vary, e.g. 4K or 64K on PPC64) while ram_list.dirty_memory
stores 1 bit per TARGET_PAGE_SIZE (which is hardcoded to 4K).
This fixes the hpratio != 1 case by falling back to the slow path.
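A simplified model of why the shortcut needs hpratio == 1 (one byte per
target page instead of a packed bitmap; illustrative only):

    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096   /* dirty-bitmap granularity, hardcoded 4K */

    /* One set bit in the host-page bitmap covers hpratio target pages, so
     * copying its words directly into ram_list.dirty_memory is only valid
     * when hpratio == 1; otherwise each bit must be expanded like this. */
    static void mark_host_page_dirty(uint8_t *target_dirty,
                                     uint64_t host_page_index,
                                     long host_page_size)
    {
        long hpratio = host_page_size / TARGET_PAGE_SIZE;   /* e.g. 64K/4K = 16 */

        for (long i = 0; i < hpratio; i++) {
            target_dirty[host_page_index * hpratio + i] = 1;
        }
    }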
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
cpu_physical_memory_set_dirty_lebitmap calls getpagesize and ffsl, which are
unavailable on MinGW. As the function is unused on MinGW, it can simply
be excluded from compilation.
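The exclusion is plain conditional compilation; schematically (the real
function's arguments and body are elided):

    #ifndef _WIN32
    /* MinGW defines _WIN32 and provides neither getpagesize() nor ffsl(),
     * and nothing on that host calls this helper, so just compile it out. */
    static inline void cpu_physical_memory_set_dirty_lebitmap( /* ... */ )
    {
        /* ... */
    }
    #endif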
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
If the bitmaps are aligned properly, use bitmap operations. If they are
not, just use the old bit-at-a-time code.
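The idea, in a simplified standalone form (hypothetical helper; QEMU's
real code works on its own bitmap layout):

    #include <stdint.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Merge a source bitmap into the dirty bitmap.  If the destination
     * range starts on a word boundary and spans whole words, OR whole
     * unsigned longs at once; otherwise fall back to one bit at a time. */
    static void merge_dirty_bits(unsigned long *dirty, const unsigned long *src,
                                 uint64_t start_bit, uint64_t nbits)
    {
        if (start_bit % BITS_PER_LONG == 0 && nbits % BITS_PER_LONG == 0) {
            uint64_t word = start_bit / BITS_PER_LONG;

            for (uint64_t i = 0; i < nbits / BITS_PER_LONG; i++) {
                dirty[word + i] |= src[i];
            }
        } else {
            for (uint64_t i = 0; i < nbits; i++) {
                if (src[i / BITS_PER_LONG] & (1ul << (i % BITS_PER_LONG))) {
                    uint64_t bit = start_bit + i;
                    dirty[bit / BITS_PER_LONG] |= 1ul << (bit % BITS_PER_LONG);
                }
            }
        }
    }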
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
We want to have all the functions that directly handle the dirty
bitmap close together. We will change them later.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
All the functions that use ram_addr_t should be here.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>