access to a vm_page. It is basically an atomically accessed thread ID field
in the vm_page structure, which is explicitly set by macros marking the
critical sections. As a first positive effect, this forced me to review quite
a bit of code, which turned up several issues.
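In essence, the mechanism is a compare-and-swap on a per-page thread ID
field. The following is a minimal sketch of the idea; all identifiers here
(page_access_start()/page_access_end(), the accessing_thread field) are
assumptions for illustration, not the actual kernel names, and std::atomic
merely stands in for the kernel's atomic primitives:

    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Stand-in for the kernel's panic().
    static void panic(const char* message) { std::puts(message); std::abort(); }

    struct vm_page {
        // ID of the thread currently in a page-access critical section, -1 if none.
        std::atomic<int32_t> accessing_thread{-1};
        // ... the real vm_page fields ...
    };

    // Start of a critical section: claim the page for this thread, panicking
    // if another thread is accessing it concurrently.
    inline void page_access_start(vm_page& page, int32_t threadID)
    {
        int32_t expected = -1;
        if (!page.accessing_thread.compare_exchange_strong(expected, threadID))
            panic("concurrent access to vm_page detected");
    }

    // End of the critical section: release the claim again.
    inline void page_access_end(vm_page& page, int32_t threadID)
    {
        int32_t expected = threadID;
        if (!page.accessing_thread.compare_exchange_strong(expected, -1))
            panic("page access ended by a thread that doesn't own the page");
    }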
* Added several TODOs and comments. Some harmless ones, but also a few
troublesome ones in vm.cpp regarding page unmapping.
* file_cache: PrecacheIO::Prepare()/read_into_cache(): Removed superfluous
vm_page_allocate_page() return value checks; the function can no longer fail.
* Removed the heavily contended "pages" lock. We use different policies now:
- sModifiedTemporaryPages is accessed atomically.
- sPageDeficitLock and sFreePageCondition are protected by a new mutex.
- The page queues have individual locks (mutexes).
- Renamed set_page_state_nolock() to set_page_state(). Unless the caller says
otherwise, it now locks the affected page queues itself (see the sketch
after this list). Also changed the return value to void -- we panic() anyway.
* set_page_state(): Free/clear pages are now added to the beginning of their
respective queues, as this is more cache-friendly.
* Pages with the states PAGE_STATE_WIRED or PAGE_STATE_UNUSED are no longer
in any queue. They were in the "active" queue, but there's no good reason
to have them there. Should we decide to let the page daemon work the queues
(as FreeBSD does), they would just be in the way.
* Pulled the common part of vm_page_allocate_page_run[_no_base]() into a helper
function. Also fixed a bug I introduced previously: The functions must not
call vm_page_unreserve_pages() on success, since they remove the pages from
the free/clear queue without decrementing sUnreservedFreePages.
* vm_page_set_state(): Changed return type to void. The function cannot really
fail and no-one was checking it anyway.
* vm_page_free(), vm_page_set_state(): Added an assertion: The page must not
already be free/clear (also covered in the sketch below). This is implied by
the policy that no-one is allowed to access free/clear pages without holding
the respective queue's lock, which is not the case at this point. This found
the bug fixed in r34912.
* vm_page_requeue(): Added general assertions; we panic() when requeuing of
free/clear pages is requested, for the same reason as above.
* vm_clone_area(), B_FULL_LOCK case: Don't map busy pages. The implementation is
still not correct, though.
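To make the new per-queue locking policy and the free/clear assertion above
more concrete, here is a condensed sketch. PageQueue and the function body
are simplified assumptions; the kernel uses its own mutex primitives rather
than std::mutex:

    #include <cassert>
    #include <mutex>

    enum page_state { PAGE_STATE_FREE, PAGE_STATE_CLEAR, PAGE_STATE_ACTIVE /* ... */ };

    struct vm_page { page_state state; /* ... */ };

    // Each queue now carries its own lock instead of sharing one global
    // "pages" lock.
    struct PageQueue {
        std::mutex lock;
        // ... doubly linked list of pages ...
    };

    // Unless the caller says otherwise, set_page_state() locks the affected
    // queues itself (assuming here that the two queues differ).
    void set_page_state(vm_page& page, PageQueue& oldQueue, PageQueue& newQueue)
    {
        // The new assertion: the page must not already be free/clear, since
        // free/clear pages may only be touched with their queue's lock held,
        // which the caller doesn't hold at this point.
        assert(page.state != PAGE_STATE_FREE && page.state != PAGE_STATE_CLEAR);

        // std::scoped_lock acquires both locks deadlock-free; the kernel
        // would use a fixed locking order instead.
        std::scoped_lock locks(oldQueue.lock, newQueue.lock);

        // ... unlink the page from oldQueue, update page.state, and enqueue
        // it in newQueue; free/clear pages go to the queue's head, which is
        // more cache-friendly ...
    }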
My usual -j8 Haiku build test now runs another 10% faster. The total kernel
time drops by about 18%. As hoped, the new locks see only a fraction of the
old "pages" lock's contention. Other locks lead the "most wanted list" now.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34933 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added VMCache::MovePage() and MoveAllPages() to move pages between caches.
* VMAnonymousCache:
- _MergeSwapPages(): Avoid doing anything if neither cache has swapped-out
pages.
- _MergeSwapPages() now also removes source cache pages that are shadowed by
consumer swap pages. This allows us to call _MergeSwapPages() before
_MergePagesSmallerSource(), save the swap page shadowing check there, and
get rid of the vm_page::merge_swap flag. This is an optimization based on
the assumption that usually no or only a few pages are swapped out, so we
save a lot of checks.
- Implemented _MergePagesSmallerConsumer() as an alternative to
_MergePagesSmallerSource() (see the sketch after this list). The former is
used when the source cache has more pages than the consumer cache. It
iterates over the consumer cache's pages, moves them to the source, and
finally moves all pages back to the consumer. The final move is relatively
cheap (though unfortunately we still have to update all pages'
vm_page::cache field), so that overall we save iterations of the main loop
with the more expensive checks.
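The choice between the two strategies boils down to something like the
following sketch. Only the _MergeSwapPages()/_MergePagesSmaller*() names and
MoveAllPages() come from the actual changes; the class outline, CountPages()
and _MergeWithConsumer() are assumptions for illustration:

    #include <cstddef>

    // Simplified stand-in for the real class.
    struct VMAnonymousCache {
        std::size_t CountPages() const;  // hypothetical page-count accessor
        void _MergeSwapPages(VMAnonymousCache* consumer);
        void _MergePagesSmallerSource(VMAnonymousCache* consumer);
        void _MergePagesSmallerConsumer(VMAnonymousCache* consumer);
        void _MergeWithConsumer(VMAnonymousCache* consumer);
    };

    // The gist of the merge: handle swapped-out pages first, so the page
    // loops below no longer need a swap-shadowing check, then iterate over
    // whichever cache holds fewer pages.
    void VMAnonymousCache::_MergeWithConsumer(VMAnonymousCache* consumer)
    {
        _MergeSwapPages(consumer);

        if (CountPages() <= consumer->CountPages()) {
            // Few source pages: check and move them individually.
            _MergePagesSmallerSource(consumer);
        } else {
            // Few consumer pages: move them into the source, then
            // MoveAllPages() back -- cheap apart from updating each
            // page's vm_page::cache field.
            _MergePagesSmallerConsumer(consumer);
        }
    }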
The optimizations particularly improve the common fork()+exec*() situations.
fork() uses CoW, which is implemented by putting two new empty caches between
the to-be-copied area and its cache. exec*() destroys one copy of the area
and its cache, thus causing the other new cache to be merged with the old
cache.
Since this usually happens in a very short time, the old cache does still
contain many pages and the new cache only few. Previously the many pages were
all checked and moved individually. Now we do that for the few pages instead.
A very extreme example of this situation is the Haiku image build. jam has a
huge heap (> 200 MB) and it fork()s+exec*()s for every action to be executed.
Since during the cache merging the cache is locked, any write access to a
heap page causes jam to block until the cache merging is done. Formerly that
took so long that it killed a lot of parallelism in multi-job builds. That
could be observed particularly well when lots of small actions were executed
(like the Link, XRes, Mimeset, SetType, SetVersion combos when building
executables/libraries/add-ons). Those look dramatically better now.
The overall speed improvement for a -j8 image build on my machine is only
about 15%, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34784 a95241bf-73f2-0310-859d-f6bbb57e9c96
* When DEBUG_SPINLOCK_LATENCIES is 1, the system will panic if any spinlock
is held longer than DEBUG_LATENCY microseconds (currently 200); a sketch of
the check follows below. If your system doesn't boot anymore, a new safemode
setting can disable the panic.
* Besides some problems during boot when the MTRRs are set up, 200 usecs work
fine here if all debug output is turned off (the output stuff is definitely
problematic, though I don't have a good idea of how to improve it much).
* Renamed the formerly BeOS-compatible safemode settings to look better; there
is no need to be compatible there.
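The latency check itself amounts to timestamping the acquisition and
comparing on release, roughly as in this sketch (a user-space illustration
where std::chrono stands in for the kernel's system_time(); the real
spinlock code differs):

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>

    // Stand-in for the kernel's panic().
    static void panic(const char* message) { std::puts(message); std::abort(); }

    constexpr long kDebugLatency = 200;  // microseconds, mirrors DEBUG_LATENCY

    struct spinlock {
        // ... the actual lock word ...
        std::chrono::steady_clock::time_point acquired;
    };

    inline void acquire_spinlock(spinlock& lock)
    {
        // ... spin until the lock is ours ...
        lock.acquired = std::chrono::steady_clock::now();
    }

    inline void release_spinlock(spinlock& lock)
    {
        using namespace std::chrono;
        long heldFor = duration_cast<microseconds>(
            steady_clock::now() - lock.acquired).count();
        if (heldFor > kDebugLatency)
            panic("spinlock held longer than DEBUG_LATENCY");
        // ... actually release the lock ...
    }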
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33953 a95241bf-73f2-0310-859d-f6bbb57e9c96
have a simple dedicated heap for the kernel debugger with stacked allocation
pools (deleting a pool frees all memory allocated in it). The heap should
eventually be used by all commands that need temporary storage too large for
the stack, instead of each using its own static buffer.
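Use of such stacked pools might look roughly like the toy sketch below. All
names here are hypothetical, and std::malloc/std::vector merely stand in for
the debugger-safe storage the real heap needs:

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // A pool remembers its allocations; deleting it frees all of them.
    struct AllocationPool {
        std::vector<void*> allocations;
        ~AllocationPool()
        {
            for (void* block : allocations)
                std::free(block);
        }
    };

    static std::vector<AllocationPool*> sPoolStack;

    void push_pool() { sPoolStack.push_back(new AllocationPool); }

    void pop_pool()
    {
        delete sPoolStack.back();  // releases everything allocated in the pool
        sPoolStack.pop_back();
    }

    // Allocations are accounted to the innermost (top) pool.
    void* pool_malloc(std::size_t size)
    {
        void* block = std::malloc(size);
        if (block != nullptr && !sPoolStack.empty())
            sPoolStack.back()->allocations.push_back(block);
        return block;
    }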
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30949 a95241bf-73f2-0310-859d-f6bbb57e9c96
enter/exit code. There's no real reason not to keep kernel breakpoints
enabled when in userland (unless there are breakpoints installed for the
team, of course).
* Enabled kernel breakpoints by default (check your kernel_debug_config.h if
you have overridden it!), since they don't really add any overhead
anymore.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30206 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Turned the checks for all those macros into "#if"s instead of "#ifdef"s.
* Introduced macro KDEBUG_LEVEL which serves as a master setting.
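The resulting pattern looks roughly like this sketch (the concrete macro
wiring in kernel_debug_config.h may differ):

    // A master setting from which the individual ones derive their defaults.
    #ifndef KDEBUG_LEVEL
    #   define KDEBUG_LEVEL 2
    #endif

    #ifndef KDEBUG
    #   define KDEBUG (KDEBUG_LEVEL >= 2 ? 1 : 0)
    #endif

    // Checks use "#if" rather than "#ifdef", so an explicit 0 disables the
    // feature even though the macro is defined.
    #if KDEBUG
        // ... extra checks compiled in ...
    #endif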
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28248 a95241bf-73f2-0310-859d-f6bbb57e9c96
Currently it only contains KDEBUG and the block cache debugging macros.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27816 a95241bf-73f2-0310-859d-f6bbb57e9c96