Set the execute-disable bit for any page that belongs to an area with
neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set.
In order to take advantage of the NX bit in 32-bit protected mode, PAE must
be enabled. Thus, from now on it is also enabled when the CPU supports the
NX bit.
vm_page_fault() takes an additional argument which indicates whether the
page fault was caused by an illegal instruction fetch.
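A minimal sketch of the shape of the check (the function name and its use
are hypothetical, not the actual fault handler code): an instruction fetch
is only allowed if the hit area has one of the execute flags set.

    static inline bool
    fault_access_is_allowed(VMArea* area, bool isWrite, bool isExecute)
    {
        uint32 protection = area->protection;
        if (isExecute) {
            // pages of areas without B_EXECUTE_AREA/B_KERNEL_EXECUTE_AREA
            // were mapped with the execute-disable bit set
            return (protection
                & (B_EXECUTE_AREA | B_KERNEL_EXECUTE_AREA)) != 0;
        }
        if (isWrite)
            return (protection & (B_WRITE_AREA | B_KERNEL_WRITE_AREA)) != 0;
        return (protection & (B_READ_AREA | B_KERNEL_READ_AREA)) != 0;
    }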
* Added the aforementioned functions.
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range
only.
The cookie is used to store the base address of the area that was just
visited. On 64-bit systems, int32 is not sufficient. Therefore, changed
it to ssize_t, which retains compatibility on x86 while expanding to a
sufficient size on x86_64.
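For illustration (assuming the affected API is the area iteration via
get_next_area_info(); the snippet itself is a sketch, not part of the
change):

    #include <stdio.h>
    #include <OS.h>

    int
    main()
    {
        // the cookie is now an ssize_t, wide enough to hold an area
        // base address on x86_64 as well
        ssize_t cookie = 0;
        area_info info;
        while (get_next_area_info(B_CURRENT_TEAM, &cookie, &info) == B_OK)
            printf("area %" B_PRId32 " \"%s\" at %p\n", info.area,
                info.name, info.address);
        return 0;
    }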
This adds a pair of functions vm_prepare_kernel_area_debug_protection()
and vm_set_kernel_area_debug_protection() to set a kernel area up for
page-wise protection and to actually protect individual pages
respectively.
It was already possible to read and write protect full areas via area
protection flags and not mapping any actual pages. For areas that
actually have mapped pages this doesn't work, however, as no fault, at
which the permissions could be checked, is generated on access.
These new functions use the debug helpers of the translation map to mark
individual pages as non-present without unmapping them. This allows them
to be "protected", i.e. causing a fault on read and write access. As they
aren't actually unmapped they can later be marked present again.
Note that these are debug helpers and have quite a few restrictions, as
described in the comment above the function, and are only useful for some
very specific and constrained use cases.
They can be used to mark pages as present/non-present without actually
unmapping them. Marking pages as non-present causes every access to
fault. We can use that for debugging as it allows us to "read protect"
individual kernel pages.
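A usage sketch (signatures from the private vm.h as I recall them; treat
the details as illustrative):

    void* cookie;
    status_t error = vm_prepare_kernel_area_debug_protection(areaID,
        &cookie);
    if (error == B_OK) {
        // protect one page: it is marked non-present, so any read or
        // write access will fault
        vm_set_kernel_area_debug_protection(cookie, pageAddress,
            B_PAGE_SIZE, 0);

        // ... later: make the page fully accessible again
        vm_set_kernel_area_debug_protection(cookie, pageAddress,
            B_PAGE_SIZE, B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
    }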
* Turn VMCache::consumers C list into a DoublyLinkedList.
* Use object caches for the different VMCache types and the VMCacheRefs.
The purpose is to reduce slab area fragmentation.
* Requires the introduction of a pure virtual VMCache::DeleteObject()
method, implemented in the derived classes.
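A sketch of the DeleteObject() pattern (in the style of the tree, but
illustrative): each concrete cache type returns itself to its own object
cache instead of being delete'd directly.

    void
    VMAnonymousCache::DeleteObject()
    {
        // free this cache instance back to the slab object cache it
        // was allocated from
        object_cache_delete(gAnonymousCacheObjectCache, this);
    }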
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43133 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The team and thread kernel structures have been renamed to Team and Thread
respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
private kernel headers are included that are no longer C compatible.
Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
the parameter (CID 5329).
* _MergeWithOnlyConsumer(): Removed the somewhat weird consumerLocked
parameter. The caller can unlock itself, if desired. Improves Unlock()
readability.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40084 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm_page::Init().
* Made vm_page::wired_count private and added accessor methods (see the
sketch after this list).
* Added VMCache::fWiredPagesCount (the number of wired pages the cache
contains) and accessor methods.
* Made more use of vm_page::IsMapped().
* vm_copy_on_write_area(): Added vm_page_reservation* parameter that can be
used to request special handling for wired pages. If given, the wired pages
are replaced by copies and the original pages are moved to the upper cache.
* vm_copy_area():
- We don't need to do any wired ranges handling, if the source area is a
B_SHARED_AREA, since we don't touch the area's mappings in this case.
- We no longer wait for wired ranges of the concerned areas to disappear.
Instead we use the new vm_copy_on_write_area() feature and just let it
copy the wired pages. This fixes #6288, an issue introduced with the use
of user mutexes in libroot: When executing multiple concurrent fork()s all
but the first one would wait on the fork mutex, which (being a user mutex)
would wire a page that the vm_copy_area() of the first fork() would wait
for.
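Sketch of the wired count accessors mentioned above (illustrative, not a
verbatim copy of vm_page.h):

    struct vm_page {
        // ...
        int32 WiredCount() const    { return fWiredCount; }
        void IncrementWiredCount()  { fWiredCount++; }
        void DecrementWiredCount()  { fWiredCount--; }

    private:
        int32 fWiredCount;
            // protected by the page's cache lock
    };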
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37460 a95241bf-73f2-0310-859d-f6bbb57e9c96
ClearAccessedAndModified() implementations into helper methods PageUnmapped()
and UnaccessedPageUnmapped() in the base class.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37187 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm_available_not_needed_memory() version that can be called from within the
kernel debugger.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37167 a95241bf-73f2-0310-859d-f6bbb57e9c96
kernel private.
* Moved dumping code from dump_cache() to new VMCache::Dump().
* Override VMCache::Dump() in VMVnodeCache to also print the vnode.
* Removed no longer needed VMCache::GetLock().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37138 a95241bf-73f2-0310-859d-f6bbb57e9c96
restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
- Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not
taken into account.
- Takes a physical_address_restrictions instead of base/limit and also
supports alignment and boundary restrictions, now.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/
ReserveAddressRange() take a virtual_address_restrictions parameter, now. They
also support an alignment independent of the range size.
* create_area_etc(), vm_create_anonymous_area(): Take
{virtual,physical}_address_restrictions parameters, now (see the sketch
after this list).
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
- Fixed potential overflows of uint32 when initializing from device node
attributes.
- Fixed bounce buffer creation TODOs: By using create_area_etc() with the
new restrictions parameters we can directly support physical high address,
boundary, and alignment.
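A hedged sketch of a call with the new restriction structures (parameter
order from memory, check vm.h; values illustrative): a contiguous buffer
that must lie below 4 GB physically and start on a 64 KB boundary.

    virtual_address_restrictions virtualRestrictions = {};
    virtualRestrictions.address_specification = B_ANY_KERNEL_ADDRESS;

    physical_address_restrictions physicalRestrictions = {};
    physicalRestrictions.high_address = 0x100000000LL;
        // below 4 GB; assumes a 64 bit phys_addr_t (PAE/x86_64)
    physicalRestrictions.alignment = 64 * 1024;

    void* address;
    area_id area = create_area_etc(B_SYSTEM_TEAM, "dma bounce buffer",
        4 * B_PAGE_SIZE, B_CONTIGUOUS,
        B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA, 0,
        &virtualRestrictions, &physicalRestrictions, &address);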
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
that are wide enough for both virtual and physical addresses.
* DMABuffer, IORequest, IOScheduler,... and code using them: Use
generic_io_vec and generic_{addr,size}_t where necessary.
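The generic types referred to above have roughly this shape (the real
definitions live in the headers; reproduced for illustration):

    typedef phys_addr_t generic_addr_t;
        // wide enough for both virtual and physical addresses
    typedef phys_size_t generic_size_t;

    struct generic_io_vec {
        generic_addr_t  base;
        generic_size_t  length;
    };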
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36997 a95241bf-73f2-0310-859d-f6bbb57e9c96
where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in
vm_page.{h,cpp}.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96
map_backing_store() doesn't commit memory when this flag is given.
* Used the new flag in vm_copy_area(): We no longer commit memory for read-only
areas. This prevents read-only mapped files from suddenly requiring memory
after fork(). Might improve the situation on machines with very little RAM
a bit.
We should probably mark writable copies over-committing, since the usual
case is fork() + exec() where the child normally doesn't need more than a
few pages until calling exec(). That would significantly reduce the memory
requirement for jamming the Haiku tree.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36651 a95241bf-73f2-0310-859d-f6bbb57e9c96
implemented for any architecture yet.
* vm_set_area_memory_type(): Call VMTranslationMap::ProtectArea() to change the
memory type for the already mapped pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36574 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Don't set the VMArea's memory type in arch_vm_set_memory_type(), but let the
callers do that.
* vm_set_area_memory_type(): Does nothing, if the memory type doesn't change.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36573 a95241bf-73f2-0310-859d-f6bbb57e9c96
of consistency.
* Moved the B_OVERCOMMITTING_AREA flag from B_KERNEL_AREA_FLAGS to
B_USER_AREA_FLAGS, since we really allow it to be passed from userland.
* Most VM syscalls check the provided protection against B_USER_AREA_FLAGS
instead of B_USER_PROTECTION, now. This way they allow for
B_OVERCOMMITTING_AREA as well.
* _user_map_file(), _user_set_memory_protection(): Check the protection like
the other syscalls do and use fix_protection() instead of doing that
manually.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36572 a95241bf-73f2-0310-859d-f6bbb57e9c96
free respectively free and cached pages.
* Removed the unused vm_page_allocate_page_run_no_base().
* vm_page_allocate_page_run() (and allocate_page_run()):
- Use vm_page_reserve_pages() instead of vm_page_try_reserve_pages(), i.e.
wait until the reservation succeeds.
- It now iterates twice through the pages to find a suitable page run
(sketched below). In the first iteration it only looks for free/clear
pages; in the second iteration it also considers cached pages. This
increases the chance that the function succeeds when a lot of caching
is going on.
This reduces the amount of memory required to use the IOCache when booting
off the anyboot Live CD to around 160 MB in qemu. It also seems to work with
128 MB, but the syslog indicates that some memory allocations fail, which
is not exactly inspiring confidence.
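Hedged pseudocode of the two-pass scan (names illustrative, not the
in-tree code):

    static page_num_t
    find_page_run(vm_page* pages, page_num_t pageCount, page_num_t length)
    {
        for (int pass = 0; pass < 2; pass++) {
            // pass 0: free/clear pages only; pass 1: cached pages, too
            bool allowCached = pass == 1;
            page_num_t runStart = 0;
            page_num_t runLength = 0;

            for (page_num_t i = 0; i < pageCount; i++) {
                uint32 state = pages[i].State();
                bool usable = state == PAGE_STATE_FREE
                    || state == PAGE_STATE_CLEAR
                    || (allowCached && state == PAGE_STATE_CACHED);
                if (!usable) {
                    runStart = i + 1;
                    runLength = 0;
                    continue;
                }
                if (++runLength == length)
                    return runStart;
            }
        }
        return pageCount;   // no suitable run found
    }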
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36489 a95241bf-73f2-0310-859d-f6bbb57e9c96
swap space when the cache shrinks. Currently the implementation still leaks
swap space of busy pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36373 a95241bf-73f2-0310-859d-f6bbb57e9c96
mapped page.
* debug_{mem,strl}cpy():
- Added "team" parameter for specifying the address space the address are
to be interpreted in.
- When the standard memcpy() (with fault handler) fails, fall back to
vm_debug_copy_page_memory().
* Added debug_is_debugged_team(): Predicate returning true if the supplied
team_id refers to the same team debug_get_debugged_thread() belongs to.
* Added DebuggedThreadSetter class for scope-based debug_set_debugged_thread().
Made use of it in several debugger functions.
* print_demangled_call() (x86): Fixed unsafe memory access.
Allows KDL stack traces to work correctly again, even if the page daemon has
already unmapped the concerned pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36230 a95241bf-73f2-0310-859d-f6bbb57e9c96
return it.
* lock_memory_etc(): On error the VMAreaWiredRange object could be leaked.
* [un]lock_memory_etc(): Call VMArea::Unwire() with the cache locked and
explicitly delete the range object after unlocking the cache to avoid
potential deadlocks.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36035 a95241bf-73f2-0310-859d-f6bbb57e9c96
Since the requirement is that the area's top cache is locked, allocating
memory isn't allowed.
* lock_memory_etc(): Create the VMAreaWiredRange object explicitly before
locking the area's top cache.
Fixes#5680 (deadlocks when using the slab as malloc() backend).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36033 a95241bf-73f2-0310-859d-f6bbb57e9c96
* _user_map_file(), _user_unmap_memory(): Verify that the address (if given) is
page aligned.
* Reworked memory locking (wiring):
- VMArea now has a list of wired memory ranges and supports waiting for
a range to be removed.
- vm_soft_fault():
- Added "wirePage" parameter that, if given, makes the function wire the
page and return it.
- Added "wiredRange" parameter (for calls from lock_memory_etc()) and made
sure we never unmap wired pages. This could e.g. happen when a page from a
lower cache was read-mapped and a write fault occurred. Now in such a
situation the function waits for the page to be unwired and restarts.
- All functions that manipulate areas in a way that could affect wired ranges
now either require the caller to make sure there are no wired ranges in
the way or do that themselves. Added a few wait_if_*_is_wired() helper
functions for that purpose (sketched after this list).
- lock_memory_etc():
- Now also works correctly when the range spans more than one area.
- Adds VMAreaWiredRanges to the affected VMAreas and retains an address
space reference (so that the address space won't be deleted as long as a
wired range exists).
- Resolved TODO: The area's caches are now locked when
increment_page_wired_count() is called.
- Resolved TODO: The race condition due to missing locking after looking up
the page mapping is now prevented. We hold the cache locks (in case the
page is already mapped) and the new vm_soft_fault() parameter allows us
to get the page wired.
- unlock_memory_etc(): Changes symmetrical to those in lock_memory_etc() and
resolved all TODOs.
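Hypothetical sketch of the wait_if_*_is_wired() pattern (both method names
are illustrative): retry until no wired range intersects the range about
to be manipulated.

    static void
    wait_if_range_is_wired(VMArea* area, addr_t base, size_t size)
    {
        while (VMAreaWiredRange* range
                = area->FindIntersectingWiredRange(base, size)) {
            // blocks until this range is unwired; locks are dropped
            // while waiting, so re-check from scratch afterwards
            range->WaitForRemoval();
        }
    }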
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36030 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added vm_clear_page_mapping_accessed_flags() and
vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality
of vm_test_map_activation(), vm_clear_map_flags(), and
vm_remove_all_page_mappings(), thus saving lots of calls to translation map
methods. The backend is the new method
VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the
other non-free queues slightly:
- Active queue: Contains mapped pages that have been used recently.
- Inactive queue: Contains mapped pages that have not been used recently. Also
contains unmapped temporary pages.
- Modified queue: Contains unmapped modified pages.
- Cached queue: Contains unmapped unmodified pages (LRU sorted).
Unless we're actually low on memory and actively do paging, modified and
cached queues only contain non-temporary pages. Cached pages are considered
quasi free. They still belong to a cache, but since they are unmodified and
unmapped, they can be freed immediately. And this is what
vm_page_[try_]reserve_pages() do now when there are no more actually free
pages at hand. Essentially this means that pages storing cached file data,
unless mmap()ped, no longer are considered used and don't contribute to page
pressure. Paging will not happen as long as there are enough free + cached
pages available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works
the page queues. As long as the free pages situation is harmless, it only
iterates through the active queue and deactivates pages that have not been
used recently. When paging occurs it additionally scans the inactive queue and
frees pages that have not been used recently.
* Changed the page reservation/allocation interface:
vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and
vm_page_allocate_page() now take a vm_page_reservation structure pointer
(usage sketched after this list).
The reservation functions initialize the structure -- currently consisting
only of a count member for the number of still reserved pages.
vm_page_allocate_page() decrements the count and vm_page_unreserve_pages()
unreserves the remaining pages (if any). Advantages are that reservation/
unreservation mismatches cannot occur anymore, that vm_page_allocate_page()
can verify that the caller has indeed a reserved page left, and that there's
no unnecessary pressure on the free page pool anymore. The only disadvantage
is that the vm_page_reservation object needs to be passed around a bit.
* Reworked the page reservation implementation:
- Got rid of sSystemReservedPages and sPageDeficit. Instead
sUnreservedFreePages now actually contains the number of free pages that
have not yet been reserved (it cannot become negative anymore) and the new
sUnsatisfiedPageReservations contains the number of pages that are still
needed for reservation.
- Threads waiting for reservations do now add themselves to a waiter queue,
which is ordered by descending priority (VM priority and thread priority).
High priority waiters are served first when pages become available.
Fixes#5328.
* cache_prefetch_vnode(): Would reserve one less page than allocated later, if
the size wasn't page aligned.
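Usage of the new reservation interface in sketch form (the allocation
flags are those of the r35333 change further down this log; the priority
constant is from r35295):

    vm_page_reservation reservation;
    vm_page_reserve_pages(&reservation, 4, VM_PRIORITY_SYSTEM);
        // blocks until all 4 pages could be reserved

    for (int i = 0; i < 3; i++) {
        vm_page* page = vm_page_allocate_page(&reservation,
            PAGE_STATE_WIRED | VM_PAGE_ALLOC_CLEAR);
        // ... map/use the page ...
    }

    vm_page_unreserve_pages(&reservation);
        // returns the one still reserved page to the free pool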
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
general "flags" parameter. It encodes the target state of the page -- so
that the page isn't unnecessarily put in the wrong page queue first -- a
flag whether the page should be cleared, and one to indicate whether the
page should be marked busy.
* Added page state PAGE_STATE_CACHED. Not used yet.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35333 a95241bf-73f2-0310-859d-f6bbb57e9c96
flag. The obvious advantage is that one can still see what state a page is in
and even move it between states while being marked busy.
* Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which
in all cases has the same effect. Introduced a vm_page_is_dummy() that can
still check whether a given page is a dummy page.
* vm_page_unreserve_pages(): Before adding to the system reserve make sure
sUnreservedFreePages is non-negative. Otherwise we'd make nonexistent pages
available for allocation. steal_pages() still has the same problem and it
can't be solved that easily.
* map_page(): No longer changes the page state/mark the page unbusy. That's the
caller's responsibility.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
argument. They replace the previous special-purpose allocation functions
(malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It was ignoring the VIP flag and always
allocated on the normal heap.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96
memory and page reservation functions have a new "priority" parameter that
indicates how deep the function may tap into that reserve. The currently
existing priority levels are "user", "system", and "VIP". The idea is that
user programs should never be able to cause a state that gets the kernel into
trouble due to heavy battling for memory. The "VIP" level (not really used
yet) is intended for allocations that are required to free memory eventually
(in the page writer). More levels are conceivable in the future, like "user
real time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter by a "flags"
parameter.
* Added area creation flag CREATE_AREA_PRIORITY_VIP and slab allocator flag
CACHE_PRIORITY_VIP indicating the importance of the request.
* Changed most code to pass the right priorities/flags.
These changes already significantly improve the behavior in low memory
situations. I've tested a bit with 64 MB (virtual) RAM and, while not
particularly fast and responsive, the system remains at least usable under high
memory pressure.
As a side effect the slab allocator can now be used as a general memory
allocator.
Not done by default yet, though.
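Sketch of the reserve-tapping idea described at the top of this entry
(sFreePagesTarget and the helper are hypothetical; only the VM_PRIORITY_*
names are meant to match the change):

    static uint32
    untouchable_reserve_for(int priority)
    {
        switch (priority) {
            case VM_PRIORITY_USER:
                return sFreePagesTarget;
                    // userland must leave the full reserve alone
            case VM_PRIORITY_SYSTEM:
                return sFreePagesTarget / 2;
                    // the kernel may use part of the reserve
            case VM_PRIORITY_VIP:
            default:
                return 0;
                    // VIP requests may drain the reserve entirely
        }
    }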
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96