It iterates over all areas intersecting a given address range and
removes the need for manually skipping uninteresting initial areas. It
uses VMAddressSpace::FindClosestArea() to efficiently find the starting
area.
This speeds up the two iterations in unmap_address_range() and the one in
wait_if_address_range_is_wired(), and resolves a TODO in the latter hinting
at such a solution.
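A minimal sketch of the iteration pattern this enables, using a std::map as a
stand-in for the address space's area tree; the names and signatures below are
illustrative, not the real kernel API:

    #include <cstdint>
    #include <cstdio>
    #include <map>

    typedef uint64_t addr_t;

    struct Area {
        addr_t base;
        addr_t size;
        addr_t End() const { return base + size - 1; }
    };

    // Visit every area intersecting [base, base + size), starting from the
    // closest area instead of walking the area list from the beginning.
    static void
    for_each_area_in_range(const std::map<addr_t, Area>& areas, addr_t base,
        addr_t size)
    {
        const addr_t last = base + size - 1;

        // "FindClosestArea(base, less)": area with the largest base <= base.
        auto it = areas.upper_bound(base);
        if (it != areas.begin())
            --it;

        for (; it != areas.end() && it->second.base <= last; ++it) {
            if (it->second.End() < base)
                continue;  // the closest area may end before the range starts
            printf("area at %#llx (size %#llx) intersects the range\n",
                (unsigned long long)it->second.base,
                (unsigned long long)it->second.size);
        }
    }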
Change-Id: Iba1d39942db4e4b27e17706be194496f9d4279ed
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2841
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
This introduces VMAddressSpace::FindClosestArea() that can be used to
find the closest area to a given address in either direction. This is
now trivial and efficient since both kernel and user address spaces use
a binary search tree.
Using FindClosestArea(), getting multiple area infos is sped up
dramatically, as it removes the need for a linear search from the first
area to the one given in the cookie on each successive invocation.
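For illustration, a closest-match lookup on a binary search tree walks down
from the root and remembers the best candidate seen so far, giving O(log(n))
behaviour on a balanced tree. This sketch shows the technique only and is not
the actual VMAddressSpace code:

    #include <cstdint>

    typedef uint64_t addr_t;

    struct Node {
        addr_t base;   // area base address, the tree's sort key
        Node* left;
        Node* right;
    };

    // Return the node with the largest base <= address (less == true) or the
    // smallest base >= address (less == false); nullptr if no such node exists.
    static Node*
    find_closest(Node* root, addr_t address, bool less)
    {
        Node* best = nullptr;
        for (Node* node = root; node != nullptr; ) {
            if (less) {
                if (node->base <= address) {
                    best = node;        // candidate; try to get closer
                    node = node->right;
                } else
                    node = node->left;
            } else {
                if (node->base >= address) {
                    best = node;
                    node = node->left;
                } else
                    node = node->right;
            }
        }
        return best;
    }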
Change-Id: I227da87d915f6f3d3ef88bfeb6be5d4c97c3baaa
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2840
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
This reverts commit c558f9c8fe.
This reverts commit 44f24718b1.
This reverts commit a69cb33030.
This reverts commit 951182620e.
There have been multiple reports that these changes break mounting NTFS
partitions (on all systems, see #14204) and shutting down (on certain
systems, see #12405). Until they can be fixed, they are being backed out.
* in load_image_internal(), elf32_load_user_image checks whether the binary
format requires the compatibility mode.
* we then set up the flag THREAD_FLAGS_COMPAT_MODE and the address space size.
* the compatibility mode runtime_loader is hardcoded with x86/runtime_loader.
* if needed, the 64-bit flat_args structure is converted in place to its 32-bit
layout (see the sketch below).
* a 32-bit flat_args isn't handled yet (the case of a 32-bit team exec()ing a
64-bit binary).
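A hedged sketch of the in-place narrowing referenced in the flat_args bullet
above: a table of 64-bit values is rewritten as 32-bit values in a single
forward pass, which is safe because every 32-bit slot lands at or before the
64-bit slot it is read from. The helper name is invented; the real conversion
also has to rewrite the structure's header fields and fix up the string
pointers, which is omitted here:

    #include <cstdint>
    #include <cstring>

    // Narrow a table of 'count' 64-bit entries to 32-bit entries, in place.
    static void
    narrow_pointer_table_in_place(void* table, uint64_t count)
    {
        char* bytes = static_cast<char*>(table);
        for (uint64_t i = 0; i < count; i++) {
            uint64_t value;
            memcpy(&value, bytes + i * sizeof(uint64_t), sizeof(value));
            uint32_t narrow = (uint32_t)value;
            memcpy(bytes + i * sizeof(uint32_t), &narrow, sizeof(narrow));
        }
    }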
Change-Id: Ia6a066bde8d1774d85de29b48dc500e27ae9668f
* VMAddressSpace: Add randomizingEnabled property.
* VMUserAddressSpace: Randomize addresses only when randomizingEnabled
property is set.
* create_team_arg(): Check whether the team's environment contains
"DISABLE_ASLR=1", and set the team's address space property
randomizingEnabled accordingly in load_image_internal() and
exec_team().
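A minimal sketch of the environment check described in the last bullet,
assuming a setter named SetRandomizingEnabled() for the new randomizingEnabled
property; apart from the DISABLE_ASLR=1 convention itself, the names here are
illustrative:

    #include <cstring>

    // Returns true if the (flattened) environment contains DISABLE_ASLR=1.
    static bool
    environment_disables_aslr(const char* const* env, int envCount)
    {
        for (int i = 0; i < envCount; i++) {
            if (strcmp(env[i], "DISABLE_ASLR=1") == 0)
                return true;
        }
        return false;
    }

    // Hypothetical call site in load_image_internal()/exec_team():
    //   addressSpace->SetRandomizingEnabled(
    //       !environment_disables_aslr(teamArgs->env, teamArgs->env_count));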
restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
- Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not
taken into account.
- Takes a physical_address_restrictions instead of base/limit and now also
supports alignment and boundary restrictions.
* map_backing_store() and VM[User,Kernel]AddressSpace::InsertArea()/
ReserveAddressRange() now take a virtual_address_restrictions parameter. They
also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Now take
{virtual,physical}_address_restrictions parameters.
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
- Fixed potential overflows of uint32 when initializing from device node
attributes.
- Fixed bounce buffer creation TODOs: By using create_area_etc() with the
new restrictions parameters, we can now directly support physical high
address, boundary, and alignment restrictions (see the sketch below).
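A hedged sketch of how the two restriction structures might be filled for a
DMA bounce buffer like the one described in the DMAResources bullet above; the
field layouts, typedefs, and constants below are assumptions derived from the
names in this message, not the actual headers:

    #include <cstddef>
    #include <cstdint>

    // Stand-in typedefs/constants so the sketch is self-contained.
    typedef uint64_t phys_addr_t;
    typedef uint64_t phys_size_t;
    enum { B_ANY_KERNEL_ADDRESS = 1 };  // placeholder value

    struct virtual_address_restrictions {
        void* address;                  // base address hint, or NULL
        uint32_t address_specification; // e.g. B_ANY_KERNEL_ADDRESS
        size_t alignment;               // virtual alignment, 0 = none
    };

    struct physical_address_restrictions {
        phys_addr_t low_address;   // lowest acceptable physical address
        phys_addr_t high_address;  // highest acceptable physical address
        phys_size_t alignment;     // physical alignment, 0 = none
        phys_size_t boundary;      // must not cross multiples of this, 0 = none
    };

    // Bounce buffer: any kernel address virtually, but physically below
    // 4 GiB, 64 KiB aligned and not crossing a 64 KiB boundary. The result
    // would be handed to create_area_etc() (signature not reproduced here).
    static void
    fill_bounce_buffer_restrictions(
        virtual_address_restrictions& virtualRestrictions,
        physical_address_restrictions& physicalRestrictions)
    {
        virtualRestrictions = {};
        virtualRestrictions.address_specification = B_ANY_KERNEL_ADDRESS;

        physicalRestrictions = {};
        physicalRestrictions.high_address = (phys_addr_t)0x100000000ull;
        physicalRestrictions.alignment = 64 * 1024;
        physicalRestrictions.boundary = 64 * 1024;
    }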
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
* _user_map_file(), _user_unmap_memory(): Verify that the address (if given) is
page aligned.
* Reworked memory locking (wiring):
- VMArea does now have a list of wired memory ranges and supports waiting for
a range to be removed.
- vm_soft_fault():
- Added "wirePage" parameter that, if given, makes the function wire the
page and return it.
- Added "wiredRange" parameter (for calls from lock_memory_etc()) and made
sure we never unmap wired pages. This could e.g. happen when a page from a
lower cache was read-mapped and a write fault occurred. Now in such a
situation the function waits for the page to be unwired and restarts.
- All functions that manipulate areas in a way that could affect wired ranges
now either require the caller to make sure there are no wired ranges in
the way or take care of that themselves. Added a few wait_if_*_is_wired()
helper functions for that purpose (see the sketch below).
- lock_memory_etc():
- Now also works correctly when the range spans more than one area.
- Adds VMAreaWiredRanges to the affected VMAreas and retains an address
space reference (so that the address space won't be deleted as long as a
wired range exists).
- Resolved TODO: The area's caches are now locked when
increment_page_wired_count() is called.
- Resolved TODO: The race condition due to missing locking after looking up
the page mapping is now prevented. We hold the cache locks (in case the
page is already mapped) and the new vm_soft_fault() parameter allows us
to get the page wired.
- unlock_memory_etc(): Changes symmetrical to those in lock_memory_etc() and
resolved all TODOs.
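To illustrate the wired-range bookkeeping, here is a simplified stand-in for
the per-area list of wired ranges and the intersection check that the
wait_if_*_is_wired() helpers would build on; the types and names are
illustrative, not the real VMArea/VMAreaWiredRange definitions:

    #include <cstdint>
    #include <list>

    typedef uint64_t addr_t;

    struct WiredRange {
        addr_t base;
        addr_t size;
    };

    struct Area {
        addr_t base;
        addr_t size;
        std::list<WiredRange> wiredRanges;
    };

    // Returns the first wired range intersecting [base, base + size), or
    // nullptr. In the kernel, the caller would then block on a condition
    // variable that is notified when the range is unwired, and restart its
    // operation afterwards.
    static const WiredRange*
    find_intersecting_wired_range(const Area& area, addr_t base, addr_t size)
    {
        const addr_t last = base + size - 1;
        for (const WiredRange& range : area.wiredRanges) {
            addr_t rangeLast = range.base + range.size - 1;
            if (range.base <= last && rangeLast >= base)
                return &range;
        }
        return nullptr;
    }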
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36030 a95241bf-73f2-0310-859d-f6bbb57e9c96
argument. They replace the previous special-purpose allocation functions
(malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It was ignoring the VIP flag and
always allocating from the normal heap.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Reorganized the code for [un]mapping pages:
- Added new VMTranslationMap::Unmap{Area,Page[s]}(), which essentially do what
vm_unmap_page[s]() did before, just in the architecture-specific code, which
allows for specific optimizations. UnmapArea() is for the special case that
the complete area is unmapped. Particularly when the address space is being
deleted, some work can be saved. Several TODOs could be slain.
- Since they are only used within vm.cpp, vm_map_page() and vm_unmap_page[s]()
are now static and have lost their prefix (and the "preserveModified"
parameter).
* Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers
for Protect() (see the sketch below).
* X86VMTranslationMap::Protect(): Make sure not to accidentally clear the
accessed/dirty flags.
* X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually
work. It was only skipping to the next page.
* Adjusted the PPC code to at least compile.
No measurable effect on the -j8 Haiku image build time, though the kernel time
drops minimally.
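A sketch of what inline wrappers like Protect{Page,Area}() amount to, assuming
a Protect(base, end, attributes) primitive as described above; simplified
stand-in types, not the actual kernel headers:

    #include <cstdint>

    typedef uint64_t addr_t;
    typedef int32_t status_t;
    static const addr_t B_PAGE_SIZE = 4096;  // stand-in

    struct VMArea {
        addr_t fBase;
        addr_t fSize;
        addr_t Base() const { return fBase; }
        addr_t Size() const { return fSize; }
    };

    struct VMTranslationMap {
        virtual ~VMTranslationMap() {}
        virtual status_t Protect(addr_t base, addr_t end,
            uint32_t attributes) = 0;

        // The wrappers add no new logic; they only express "one page" and
        // "the whole area" in terms of Protect().
        status_t ProtectPage(VMArea* area, addr_t address, uint32_t attributes)
        {
            (void)area;
            return Protect(address, address + B_PAGE_SIZE - 1, attributes);
        }

        status_t ProtectArea(VMArea* area, uint32_t attributes)
        {
            return Protect(area->Base(), area->Base() + area->Size() - 1,
                attributes);
        }
    };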
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Pulled the physical page mapping functions out of vm_translation_map into
a new interface VMPhysicalPageMapper.
* Renamed vm_translation_map to VMTranslationMap and made it a proper C++
class. The functions in the operations vector have become methods.
* Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper
as far as possible (without actually writing new code).
* Adjusted the x86 and the PPC specifics accordingly (untested for the
latter). For the other architectures the build is, I'm afraid, seriously
broken.
The next steps will modify and extend the VMTranslationMap interface, so that
it will be possible to fix the bugs in vm_unmap_page[s]() and employ
architecture-specific optimizations.
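Schematically, the refactoring replaces a C-style operations vector with
virtual methods on a proper class; the two members below are examples only,
the real interface has many more operations:

    #include <cstdint>

    // Before: a struct of function pointers, dispatched by hand.
    struct vm_translation_map_ops_sketch {
        int (*map)(void* map, uint64_t va, uint64_t pa, uint32_t attributes);
        int (*unmap)(void* map, uint64_t start, uint64_t end);
    };

    // After: an abstract base class; each architecture derives from it and
    // the compiler handles the dispatch.
    class VMTranslationMapSketch {
    public:
        virtual ~VMTranslationMapSketch() {}
        virtual int Map(uint64_t va, uint64_t pa, uint32_t attributes) = 0;
        virtual int Unmap(uint64_t start, uint64_t end) = 0;
    };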
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added Debug{First,Next}() methods to allow easy iteration through the
address spaces in kernel debugger commands.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34978 a95241bf-73f2-0310-859d-f6bbb57e9c96
table. It is now inline and uses double-checked locking. This reduces the
contention on the lock to an insignificant level. The total -j8 Haiku image
build speedup is marginal, but the total kernel time drops 12%.
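For reference, this is the general double-checked locking pattern being
applied: an unsynchronized fast-path check first, and the lock is only taken
(and the check repeated) when the fast path fails, so the common case never
contends. A generic standalone illustration, not the kernel's hash table code:

    #include <atomic>
    #include <mutex>

    static std::atomic<int*> sValue{nullptr};
    static std::mutex sLock;

    int*
    get_value()
    {
        // Fast path: lock-free check.
        int* value = sValue.load(std::memory_order_acquire);
        if (value != nullptr)
            return value;

        // Slow path: take the lock and check again, since another thread
        // may have initialized the value in the meantime.
        std::lock_guard<std::mutex> guard(sLock);
        value = sValue.load(std::memory_order_relaxed);
        if (value == nullptr) {
            value = new int(42);
            sValue.store(value, std::memory_order_release);
        }
        return value;
    }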
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34934 a95241bf-73f2-0310-859d-f6bbb57e9c96
to clarify that they never enlarge the area.
* Reimplemented VMKernelAddressSpace. It is somewhat inspired by Bonwick's
vmem resource allocator (though we have different requirements):
- We consider the complete address space to be divided into contiguous
ranges of type free, reserved, or area, each range being represented by
a VMKernelAddressRange object.
- The range objects are managed in an AVL tree and a doubly linked list
(the latter only for faster iteration) sorted by address. This provides
O(log(n)) lookup, insertion and removal.
- For each power of two size we maintain a list of free ranges of at least
that size. Thus for the most common case of B_ANY*_ADDRESS area
allocation, we find a free range in constant time (the rest of the
processing being O(log(n))) with a rather good fit. This should also
help avoid address space fragmentation (see the sketch below).
While the new implementation should be faster, particularly with an
increasing number of areas, I couldn't measure any difference in the -j2
Haiku build. From a cursory test, the -j8 build hasn't tangibly benefited
either.
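A sketch of the power-of-two free list indexing mentioned above: a free range
of size s lives on list floor(log2(s)), and an allocation of size s starts
looking at list ceil(log2(s)), so any range found there is guaranteed to fit
without scanning. Simplified, not the actual VMKernelAddressSpace code:

    #include <cstdint>

    typedef uint64_t addr_t;

    // List a free range of 'size' bytes (size > 0) is stored on.
    static int
    free_list_index_for_range(addr_t size)
    {
        return 63 - __builtin_clzll(size);          // floor(log2(size))
    }

    // First list whose ranges are all large enough for an allocation of
    // 'size' bytes (size > 0).
    static int
    free_list_index_for_allocation(addr_t size)
    {
        int index = free_list_index_for_range(size);
        if ((size & (size - 1)) != 0)
            index++;                                // ceil(log2(size))
        return index;
    }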
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34528 a95241bf-73f2-0310-859d-f6bbb57e9c96
link to them.
* VM{Kernel,User}AddressSpace manage the respective VMArea subclass now, and
VMAddressSpace has grown factory methods {Create,Delete}Area.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34493 a95241bf-73f2-0310-859d-f6bbb57e9c96
new derived classes VM{Kernel,User}AddressSpace. Currently those are
identical, but that will change.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34492 a95241bf-73f2-0310-859d-f6bbb57e9c96
pure address space feature, so it should be handled there.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34491 a95241bf-73f2-0310-859d-f6bbb57e9c96
and size.
* Made VMArea::Set{Base,Size}() private and made VMAddressSpace a friend.
In vm.cpp the new VMAddressSpace::ResizeArea{Head,Tail}() are used
instead.
Finally all address space changes happen in VMAddressSpace only. *phew*
Now it's ready to be thoroughly butchered. :-)
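A tiny illustration of the encapsulation step: the setters become private and
only the address space, declared a friend, may move or resize an area through
its Resize helpers. The classes below are simplified stand-ins and the helper
semantics are invented for the example:

    #include <cstdint>

    typedef uint64_t addr_t;

    class VMAddressSpaceSketch;

    class VMAreaSketch {
    public:
        addr_t Base() const { return fBase; }
        addr_t Size() const { return fSize; }

    private:
        // Only the address space may change the area's placement.
        friend class VMAddressSpaceSketch;
        void SetBase(addr_t base) { fBase = base; }
        void SetSize(addr_t size) { fSize = size; }

        addr_t fBase;
        addr_t fSize;
    };

    class VMAddressSpaceSketch {
    public:
        // Cut 'delta' bytes off the head of the area.
        void ResizeAreaHead(VMAreaSketch* area, addr_t delta)
        {
            area->SetBase(area->Base() + delta);
            area->SetSize(area->Size() - delta);
        }

        // Shrink the area at its tail to 'newSize' bytes.
        void ResizeAreaTail(VMAreaSketch* area, addr_t newSize)
        {
            area->SetSize(newSize);
        }
    };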
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34467 a95241bf-73f2-0310-859d-f6bbb57e9c96
simplify migration of the area management, but as a side effect, it also
makes area deletion O(1) (instead of O(n), n == number of areas in the
address space).
* Moved more area management functionality from vm.cpp to VMAddressSpace, and
VMArea structure creation to VMArea. Made the list and list link members
themselves private.
* VMAddressSpace now tracks its amount of free space. This also replaces
the previous mechanism that did this only for the kernel address space. It
was broken anyway, since delete_area() subtracted the area size instead of
adding it.
* vm_free_unused_boot_loader_range():
- lastEnd could be set to a value < start, which could cause memory
outside of the given range to be unmapped. Haven't checked whether this
could happen in practice -- if so, it would be seriously unhealthy.
- The range between the end of the last area in the range and the end of
the range would never be freed.
- Fixed potential integer overflows when computing addresses.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34459 a95241bf-73f2-0310-859d-f6bbb57e9c96