A protection_max attribute is added to VMArea.
A file opened read-only already can't be mapped shared read-write at the
moment, but such a mapping could later be changed to read-write with
mprotect() or set_area_protection(). When creating the VMArea, the actual
maximum protection is now stored in the area, so that it can be checked
when needed.
This fixes a VM TODO.
Change-Id: I33b144c192034eeb059f1dede5dbef5af947280d
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3804
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
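A minimal standalone sketch of the check this enables (stand-in type and
flag names, not the actual Haiku code):

    #include <cstdint>

    // Stand-in protection bits; the real code uses the B_*_AREA flags.
    constexpr uint32_t kRead  = 1 << 0;
    constexpr uint32_t kWrite = 1 << 1;

    struct VMArea {
        uint32_t protection;
        uint32_t protection_max; // maximum protection allowed for this area
    };

    // Reject any mprotect()/set_area_protection() request that would grant
    // more than the maximum recorded when the area was created.
    bool SetAreaProtection(VMArea& area, uint32_t newProtection)
    {
        if ((newProtection & ~area.protection_max) != 0)
            return false; // e.g. read-only file mapped, write requested
        area.protection = newProtection;
        return true;
    }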
When copying an area with vm_copy_area(), only the new protection would
be applied, and any existing page protections on the source area were
ignored.
For areas with stricter area protection than page protection, this led
to faults when accessing the copy. In the opposite case it led to overly
relaxed protection. The only current user of vm_copy_area() is
fork_team(), which goes through all areas of the parent and copies them
to the new team. Hence page protections were ignored on all forked teams.
Remove the protection argument and instead always carry over the source
area protection and duplicate the page protections when present.
Also make sure to take the page protections into account when deciding
whether or not the copy is writable and therefore needs copy-on-write
semantics.
Change-Id: I52f295f2aaa66e31b4900b754343b3be9a19ba30
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3166
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
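A standalone sketch of the two pieces (stand-in types; this assumes the
page_protections array stores two 4-bit entries per byte, as Haiku's
per-page protections do):

    #include <cstdint>
    #include <cstdlib>
    #include <cstring>

    struct VMArea {
        uint8_t* page_protections; // may be NULL; two 4-bit entries per byte
        size_t   pageCount;
        uint32_t protection;
    };

    // Duplicate the source's page protections so per-page settings survive
    // the copy instead of being dropped as before.
    bool CopyPageProtections(const VMArea& source, VMArea& copy)
    {
        if (source.page_protections == NULL) {
            copy.page_protections = NULL;
            return true;
        }
        size_t bytes = (source.pageCount + 1) / 2;
        copy.page_protections = (uint8_t*)malloc(bytes);
        if (copy.page_protections == NULL)
            return false;
        memcpy(copy.page_protections, source.page_protections, bytes);
        return true;
    }

    // The copy only needs copy-on-write semantics if something is actually
    // writable: either the area protection or any page protection entry.
    bool IsAnyPageWritable(const VMArea& area, uint8_t writeBit)
    {
        if (area.page_protections == NULL)
            return (area.protection & writeBit) != 0;
        for (size_t i = 0; i < (area.pageCount + 1) / 2; i++) {
            // check the write bit in both the low and the high nibble
            if ((area.page_protections[i] & (writeBit | (writeBit << 4))) != 0)
                return true;
        }
        return false;
    }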
Pages in the given range are unmapped and freed without getting written
back anywhere. It can be used whenever a caller does not care about the
data in the given range anymore and wants to reduce page pressure.
Change-Id: I8bcce68fab278efef710d3714677e1d463504a56
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2843
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
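A conceptual sketch of these semantics with stand-in types (the real
change walks the cache's pages and the translation maps; everything below
is illustrative):

    #include <vector>

    struct Page { bool modified; };

    // Drop every page in the range: unmap, then free directly. Modified
    // pages are deliberately not written back; the caller has declared
    // the data disposable.
    void DiscardPages(std::vector<Page*>& range)
    {
        for (Page* page : range) {
            // the real code unmaps the page from all translation maps here
            page->modified = false; // data is dropped, never written back
            delete page;            // stand-in for the page free list
        }
        range.clear();
    }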
It iterates over all areas intersecting a given address range and
removes the need for manually skipping uninteresting initial areas. It
uses VMAddressSpace::FindClosestArea() to efficiently find the starting
area.
This speeds up the two iterations in unmap_address_range and the one in
wait_if_address_range_is_wired, and resolves a TODO in the latter hinting
at such a solution.
Change-Id: Iba1d39942db4e4b27e17706be194496f9d4279ed
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2841
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
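A sketch of the iterator shape (stand-in types; TreeNext() and the
O(log n) seeding via FindClosestArea() are assumptions about the
surrounding code):

    #include <cstdint>

    struct VMArea {
        uintptr_t Base() const { return base; }
        uintptr_t End() const { return base + size - 1; }
        VMArea* TreeNext(); // next area by base address in the tree

        uintptr_t base;
        size_t size;
    };

    VMArea* FindClosestArea(uintptr_t address, bool lessOrEqual);

    // Yields every area intersecting [rangeBase, rangeEnd], seeded with
    // the closest area instead of scanning from the first one.
    struct AreaRangeIterator {
        uintptr_t rangeBase, rangeEnd;
        VMArea* current = NULL;
        bool started = false;

        VMArea* Next()
        {
            if (!started) {
                started = true;
                current = FindClosestArea(rangeBase, true);
                if (current == NULL || current->End() < rangeBase)
                    current = FindClosestArea(rangeBase, false);
            } else
                current = current->TreeNext();
            if (current != NULL && current->Base() > rangeEnd)
                current = NULL; // past the range, stop
            return current;
        }
    };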
This introduces VMAddressSpace::FindClosestArea() that can be used to
find the closest area to a given address in either direction. This is
now trivial and efficient since both kernel and user address spaces use
a binary search tree.
Using FindClosestArea(), getting multiple area infos is sped up
dramatically, as it removes the need for a linear search from the first
area to the one given in the cookie on each successive invocation.
Change-Id: I227da87d915f6f3d3ef88bfeb6be5d4c97c3baaa
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2840
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
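A self-contained sketch of the lookup on a binary search tree keyed by
base address (stand-in node type; the real code uses the address space's
area tree):

    #include <cstdint>

    struct AreaNode {
        uintptr_t base;
        AreaNode* left;
        AreaNode* right;
    };

    // Returns the node with the largest base <= address (lessOrEqual) or
    // the smallest base > address (!lessOrEqual), in O(log n) on a
    // balanced tree, or NULL if no such node exists.
    AreaNode* FindClosestArea(AreaNode* root, uintptr_t address,
        bool lessOrEqual)
    {
        AreaNode* best = NULL;
        while (root != NULL) {
            bool onSide = lessOrEqual
                ? root->base <= address : root->base > address;
            if (onSide) {
                best = root; // candidate; try to find a closer one
                root = lessOrEqual ? root->right : root->left;
            } else
                root = lessOrEqual ? root->left : root->right;
        }
        return best;
    }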
The code in the Resize and Rebase methods was identical except for the
iterator.
Change-Id: I9f6b3c2c09af0c26778215bd627fed030c4d46f1
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2835
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
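A sketch of the deduplication pattern (stand-in types): one shared worker
takes whatever iterator the caller supplies, so Resize and Rebase differ
only in what they pass in:

    #include <cstddef>

    struct Page { unsigned long index; };

    // Shared worker used by both Resize() and Rebase(): walk pages with
    // the supplied iterator and free those that fell outside the cache
    // after the size/base change.
    template<typename PageIterator, typename FreePage>
    size_t FreeInvalidPages(PageIterator it, unsigned long firstValid,
        unsigned long lastValid, FreePage freePage)
    {
        size_t freed = 0;
        while (Page* page = it.Next()) {
            if (page->index < firstValid || page->index > lastValid) {
                freePage(page);
                freed++;
            }
        }
        return freed;
    }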
Rename MovePageRange to Adopt and group it with Resize/Rebase as it
covers the third, middle cut case.
Implement VMAnonymousCache::Adopt() to actually adopt swap pages. This
has to recreate swap blocks instead of taking them over from the source
cache as the cut offset or base offset between the caches may not be
swap block aligned. This means that adoption may fail due to memory
shortage in allocating the swap blocks.
For the middle cut case it is therefore now possible for the adoption to
fail, in which case the previous cache restore logic is applied. Since
the re-adoption of the pages from the second cache can fail for the same
reason, there is a slight chance that we can't restore and lose pages.
For now, just panic in such a case and add a TODO to free memory and
retry.
Change-Id: I9a661f00c8f03bbbea2fe6dee90371c68d7951e6
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2588
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
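A standalone sketch of why the swap blocks must be recreated (stand-in
structures; a std::map stands in for the swap block table, and its
allocation can fail just like the real swap blocks):

    #include <cstdint>
    #include <map>
    #include <new>

    typedef uint64_t swap_slot;

    struct SwapCache {
        // pageIndex -> swap slot; stands in for Haiku's swap block table
        std::map<uint64_t, swap_slot> slots;
    };

    // Move slots for [offset, offset + count) from source into target,
    // rebased to start at newOffset. Because offset and newOffset need
    // not be swap-block aligned, slots are re-inserted one by one instead
    // of taking over whole blocks; insertion may fail on memory shortage.
    bool AdoptSwapSlots(SwapCache& source, SwapCache& target,
        uint64_t offset, uint64_t count, uint64_t newOffset)
    {
        for (uint64_t i = 0; i < count; i++) {
            auto it = source.slots.find(offset + i);
            if (it == source.slots.end())
                continue;
            try {
                target.slots[newOffset + i] = it->second;
            } catch (const std::bad_alloc&) {
                return false; // adoption failed; caller restores the caches
            }
            source.slots.erase(it);
        }
        return true;
    }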
* Adds VMCache::MovePageRange() and VMCache::Rebase() to facilitate
this.
Applied on top of hrev45098 and rebased with the hrev45564 page_num_t to
off_t change included.
Change-Id: Ie61bf43696783e3376fb4144ddced3781aa092ba
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2581
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
B_OS_NAME_LENGTH is 32, char* is 8 (on x64), and this structure
has quite a lot of pointers in it, so it is not as if we really
needed to save those 24 bytes. Hitting malloc() here is not
so great, especially because we usually have B_DONT_LOCK_KERNEL_SPACE
turned on, so just inline the buffer and avoid it.
Change-Id: I5c94955324cfda08972895826b61748c3b69096a
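A sketch of the before/after (the containing structure's name is not
given above, so a stand-in is used):

    #define B_OS_NAME_LENGTH 32

    struct SomeKernelObject {
        // before: char* name;       // 8 bytes on x64, plus one malloc()
        char name[B_OS_NAME_LENGTH]; // after: inline, no allocation needed
        void* lotsOfPointers[16];    // the struct is pointer-heavy anyway
    };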
This reverts commit c558f9c8fe.
This reverts commit 44f24718b1.
This reverts commit a69cb33030.
This reverts commit 951182620e.
There have been multiple reports that these changes break mounting NTFS
partitions (on all systems, see #14204) and shutting down (on certain
systems, see #12405).
Until they can be fixed, they are being backed out.
* in load_image_internal(), elf32_load_user_image checks whether the binary
format requires the compatibility mode.
* we then set up the flag THREAD_FLAGS_COMPAT_MODE and the address space size.
* the compatibility mode runtime_loader is hardcoded with x86/runtime_loader.
* if needed, the 64-bit flat_args structure is converted in-place to its 32-bit
layout.
* a 32-bit flat_args isn't handled yet (a 32-bit team execs a 64-bit binary).
Change-Id: Ia6a066bde8d1774d85de29b48dc500e27ae9668f
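A heavily simplified sketch of the in-place 64-bit to 32-bit conversion
(hypothetical layouts; the real flat_args carries argv/envp and more).
Narrowing forward in place is safe because each 32-bit field lands at a
lower offset than the 64-bit field it is read from:

    #include <cstdint>

    // declared [1], actually count entries (classic C trailing-array idiom)
    struct flat_args64 { uint64_t count; uint64_t pointers[1]; };
    struct flat_args32 { uint32_t count; uint32_t pointers[1]; };

    void ConvertFlatArgsInPlace(void* buffer)
    {
        flat_args64* in = (flat_args64*)buffer;
        flat_args32* out = (flat_args32*)buffer;
        uint32_t count = (uint32_t)in->count;
        out->count = count; // bytes 0-3: doesn't touch in->pointers[0]
        for (uint32_t i = 0; i < count; i++) {
            // read at offset 8 + 8i, write at 4 + 4i: write stays behind
            out->pointers[i] = (uint32_t)in->pointers[i];
        }
    }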
It was limited to a uint32 and could, for example, be overflowed by the
slab MemoryManager, which uses size_t on a 64-bit system.
This aligns the signature with create_area() that already uses size_t
for the size argument.
Note that the function is currently private, so the impact should be
limited.
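A small demonstration of the failure mode (illustrative only):

    #include <cstdint>
    #include <cstdio>

    static void allocate(uint32_t size) { printf("allocating %u bytes\n", size); }

    int main()
    {
        size_t request = (1ull << 32) + 4096; // > 4 GiB, fine as size_t
        allocate((uint32_t)request); // silently truncates to 4096 bytes
        return 0;
    }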
The base VMCache class changed to the generic_ types with their
introduction in 2011 (435c43f591),
but these classes were never properly adapted. These functions should not
be called here (they panic() -- but the base class only returns B_ERROR,
so that is a difference at least).
Found by Clang's -Woverloaded-virtual.
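Reduced to a toy case, this is what the warning catches (stand-in types):
the derived declaration hides the base virtual instead of overriding it,
because the parameter types differ:

    struct Base {
        virtual int Read(long long* generic) { return -1; } // "B_ERROR"
        virtual ~Base() {}
    };

    struct Derived : Base {
        // Same name, different parameter type: hides Base::Read() rather
        // than overriding it, so calls through a Base* never reach it
        // (the real equivalent would have hit panic()).
        int Read(int* legacy) { return 0; }
    };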
* VMArea::AddWaiterIfWired(): Replace the ignoreRange argument by a
flags argument and introduce (currently only) flag
IGNORE_WRITE_WIRED_RANGES. If specified, ranges wired for writing
are ignored. Ignoring just a single specified range doesn't cut it
in vm_soft_fault(), and there aren't any other users of that feature.
* vm_soft_fault(): When having to unmap a page of a lower cache, this
page cannot be wired for writing. So we can safely ignore all
write-wired ranges, instead of just our own. We even have to do that
in case there's another thread that concurrently tries to write-wire
the same page, since otherwise we'd deadlock waiting for each other.
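A simplified sketch of the new interface (stand-in types and overlap
test; IGNORE_WRITE_WIRED_RANGES is the flag named above):

    #include <cstddef>
    #include <cstdint>
    #include <list>

    constexpr uint32_t IGNORE_WRITE_WIRED_RANGES = 0x01;

    struct WiredRange {
        uintptr_t base;
        size_t size;
        bool writable;
    };

    struct VMArea {
        std::list<WiredRange> wiredRanges;

        // Returns true (and would register a waiter) if [address, size)
        // intersects a wired range the flags don't tell us to skip.
        bool AddWaiterIfWired(uintptr_t address, size_t size, uint32_t flags)
        {
            for (const WiredRange& range : wiredRanges) {
                if ((flags & IGNORE_WRITE_WIRED_RANGES) != 0
                        && range.writable)
                    continue; // write-wired: can't be the page we unmap
                if (address < range.base + range.size
                        && range.base < address + size)
                    return true;
            }
            return false;
        }
    };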
* VMTranslationMap:
- Add DebugPrintMappingInfo(): Given a virtual address it is supposed
to print the paging structure information for that address. To be
implemented by derived classes.
- Add DebugGetReverseMappingInfo(): Given a physical address it is
supposed to find all virtual addresses mapped to it. To be
implemented by derived classes.
* X86VMTranslationMapPAE: Implement the new methods
DebugPrintMappingInfo() and DebugGetReverseMappingInfo().
* Add KDL command "mapping". It supports both virtual address lookups
and reverse lookups.
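A sketch of the interface shape (simplified signatures, which are an
assumption; the real methods take more context):

    #include <cstdint>

    typedef uint64_t phys_addr_t;

    struct VMTranslationMap {
        virtual ~VMTranslationMap() {}

        // Dump the paging-structure entries translating virtualAddress.
        virtual bool DebugPrintMappingInfo(uintptr_t virtualAddress) = 0;

        // Walk the paging structures and report every virtual address
        // mapped to physicalAddress (backs the "mapping" KDL command's
        // reverse lookup).
        virtual bool DebugGetReverseMappingInfo(
            phys_addr_t physicalAddress) = 0;
    };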
* VMAddressSpace: Add randomizingEnabled property.
* VMUserAddressSpace: Randomize addresses only when randomizingEnabled
property is set.
* create_team_arg(): Check whether the team's environment contains
"DISABLE_ASLR=1" and set the team's address space property
randomizingEnabled accordingly in load_image_internal() and
exec_team().
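A minimal sketch of the check (stand-in iteration over the flat
environment):

    #include <cstring>

    bool RandomizingEnabled(const char* const* env, int envCount)
    {
        for (int i = 0; i < envCount; i++) {
            if (strcmp(env[i], "DISABLE_ASLR=1") == 0)
                return false; // disable randomization for this team
        }
        return true;
    }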
Set the execute disable bit for any page that belongs to an area with
neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set.
In order to take advantage of the NX bit in 32-bit protected mode, PAE
must be enabled. Thus, from now on, it is also enabled when the CPU
supports the NX bit.
vm_page_fault() takes an additional argument which indicates whether the
page fault was caused by an illegal instruction fetch.
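A sketch of the page-entry logic (the NX/XD bit is bit 63 of a PAE
entry; the flag values below are illustrative stand-ins for the real
B_*_AREA constants):

    #include <cstdint>

    constexpr uint32_t B_EXECUTE_AREA        = 0x04; // illustrative value
    constexpr uint32_t B_KERNEL_EXECUTE_AREA = 0x40; // illustrative value
    constexpr uint64_t PAE_ENTRY_NOT_EXECUTABLE = 1ull << 63; // NX/XD bit

    uint64_t ApplyExecuteProtection(uint64_t entry, uint32_t areaProtection)
    {
        if ((areaProtection
                & (B_EXECUTE_AREA | B_KERNEL_EXECUTE_AREA)) == 0) {
            entry |= PAE_ENTRY_NOT_EXECUTABLE; // instruction fetch faults
        }
        return entry;
    }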
* Added the aforementioned functions.
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range
only.
The cookie is used to store the base address of the area that was just
visited. On 64-bit systems, int32 is not sufficient. It is therefore
changed to ssize_t, which retains compatibility on x86 while expanding to
a sufficient size on x86_64.
This adds a pair of functions, vm_prepare_kernel_area_debug_protection()
and vm_set_kernel_area_debug_protection(), to set a kernel area up for
page-wise protection and to actually protect individual pages,
respectively.
It was already possible to read and write protect full areas via area
protection flags and by not mapping any actual pages. For areas that
actually have mapped pages this doesn't work, however, as no fault at
which the permissions could be checked is generated on access.
These new functions use the debug helpers of the translation map to mark
individual pages as non-present without unmapping them. This allows them
to be "protected", i.e. causing a fault on read and write access. As they
aren't actually unmapped they can later be marked present again.
Note that these are debug helpers with quite a few restrictions, as
described in the comment above the function, and are only useful for some
very specific and constrained use cases.
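A usage sketch, assuming the signatures are
vm_prepare_kernel_area_debug_protection(area_id, void**) and
vm_set_kernel_area_debug_protection(void*, void*, size_t, uint32)
(kernel context, so not standalone):

    // Read/write protect one page of a prepared area, then lift the
    // protection again later via the same cookie.
    status_t ProtectOnePage(area_id area, void* pageAddress)
    {
        void* cookie;
        status_t status
            = vm_prepare_kernel_area_debug_protection(area, &cookie);
        if (status != B_OK)
            return status;

        // protection 0: page marked non-present, any access faults
        return vm_set_kernel_area_debug_protection(cookie, pageAddress,
            B_PAGE_SIZE, 0);
    }

    status_t UnprotectOnePage(void* cookie, void* pageAddress)
    {
        // mark the page present again with full kernel access
        return vm_set_kernel_area_debug_protection(cookie, pageAddress,
            B_PAGE_SIZE, B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
    }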
They can be used to mark pages as present/non-present without actually
unmapping them. Marking pages as non-present causes every access to
fault. We can use that for debugging as it allows us to "read protect"
individual kernel pages.
* Turn VMCache::consumers C list into a DoublyLinkedList.
* Use object caches for the different VMCache types and the VMCacheRefs.
The purpose is to reduce slab area fragmentation.
* Requires the introduction of a pure virtual VMCache::DeleteObject()
method, implemented in the derived classes.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43133 a95241bf-73f2-0310-859d-f6bbb57e9c96
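A sketch of the DeleteObject() pattern this requires (stand-in slab API;
the real code frees back into the matching object cache):

    #include <cstdint>
    #include <new>

    struct object_cache {}; // stand-in for the slab object cache

    static void object_cache_free(object_cache*, void* object, uint32_t)
    {
        ::operator delete(object); // stand-in for the slab free path
    }

    static object_cache sAnonymousCacheObjectCache;

    struct VMCache {
        virtual ~VMCache() {}
        // Replaces plain "delete this": each derived class knows which
        // object cache its instances came from.
        virtual void DeleteObject() = 0;
    };

    struct VMAnonymousCache : VMCache {
        void DeleteObject() override
        {
            this->~VMAnonymousCache();
            object_cache_free(&sAnonymousCacheObjectCache, this, 0);
        }
    };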
* The team and thread kernel structures have been renamed to Team and Thread
respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
private kernel headers are included that are no longer C compatible.
Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
the parameter (CID 5329).
* _MergeWithOnlyConsumer(): Removed the somewhat weird consumerLocked
parameter. The caller can unlock itself, if desired. Improves Unlock()
readability.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40084 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm_page::Init().
* Made vm_page::wired_count private and added accessor methods (sketched
below, after this commit's list).
* Added VMCache::fWiredPagesCount (the number of wired pages the cache
contains) and accessor methods.
* Made more use of vm_page::IsMapped().
* vm_copy_on_write_area(): Added a vm_page_reservation* parameter that can
be used to request special handling for wired pages. If given, the wired
pages are replaced by copies and the original pages are moved to the upper
cache.
* vm_copy_area():
- We don't need to do any wired ranges handling, if the source area is a
B_SHARED_AREA, since we don't touch the area's mappings in this case.
- We no longer wait for wired ranges of the concerned areas to disappear.
Instead we use the new vm_copy_on_write_area() feature and just let it
copy the wired pages. This fixes #6288, an issue introduced with the use
of user mutexes in libroot: When executing multiple concurrent fork()s all
but the first one would wait on the fork mutex, which (being a user mutex)
would wire a page that the vm_copy_area() of the first fork() would wait
for.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37460 a95241bf-73f2-0310-859d-f6bbb57e9c96
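The wired_count encapsulation mentioned above, as a sketch (member names
assumed):

    #include <cstdint>

    struct vm_page {
    public:
        void IncrementWiredCount() { fWiredCount++; }
        void DecrementWiredCount() { fWiredCount--; }
        int32_t WiredCount() const { return fWiredCount; }

        // Mapped means wired or present in at least one mapping.
        bool IsMapped() const
        {
            return fWiredCount > 0 || fMappingCount > 0;
        }

    private:
        int32_t fWiredCount;   // was the public wired_count
        int32_t fMappingCount; // stand-in for the real mappings list
    };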
ClearAccessedAndModified() implementations into helper methods PageUnmapped()
and UnaccessedPageUnmapped() in the base class.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37187 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm_available_not_needed_memory() version that can be called from within the
kernel debugger.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37167 a95241bf-73f2-0310-859d-f6bbb57e9c96
kernel private.
* Moved dumping code from dump_cache() to new VMCache::Dump().
* Override VMCache::Dump() in VMVnodeCache to also print the vnode.
* Removed no longer needed VMCache::GetLock().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37138 a95241bf-73f2-0310-859d-f6bbb57e9c96
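A sketch of the Dump() refactoring (simplified signature; assumed to be
a virtual taking a showPages-style flag):

    #include <cstdio>

    struct VMCache {
        virtual ~VMCache() {}

        virtual void Dump(bool showPages) const
        {
            printf("cache %p\n", (const void*)this);
            if (showPages) {
                // the real code walks and prints the page list here
            }
        }
    };

    struct VMVnodeCache : VMCache {
        void* vnode = nullptr;

        void Dump(bool showPages) const override
        {
            VMCache::Dump(showPages);
            printf("  vnode: %p\n", vnode); // the vnode-specific extra
        }
    };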
restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
- Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not
taken into account.
- Takes a physical_address_restrictions instead of base/limit and also
supports alignment and boundary restrictions, now.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/
ReserveAddressRange() take a virtual_address_restrictions parameter, now. They
also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Take
{virtual,physical}_address_restrictions parameters, now.
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
- Fixed potential overflows of uint32 when initializing from device node
attributes.
- Fixed bounce buffer creation TODOs: By using create_area_etc() with the
new restrictions parameters we can directly support physical high address,
boundary, and alignment.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
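A usage sketch of the new parameters (the exact create_area_etc()
signature and field names are recalled from the Haiku headers of that
era and should be treated as assumptions):

    // Allocate a bounce buffer that must live below 16 MB physically,
    // 64 KB aligned, and must not cross a 64 KB boundary.
    virtual_address_restrictions virtualRestrictions = {};
    virtualRestrictions.address_specification = B_ANY_KERNEL_ADDRESS;

    physical_address_restrictions physicalRestrictions = {};
    physicalRestrictions.high_address = 16 * 1024 * 1024;
    physicalRestrictions.alignment = 64 * 1024;
    physicalRestrictions.boundary = 64 * 1024;

    void* address;
    area_id area = create_area_etc(B_SYSTEM_TEAM, "dma bounce buffer",
        B_PAGE_SIZE * 16, B_CONTIGUOUS,
        B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA, 0,
        &virtualRestrictions, &physicalRestrictions, &address);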