Commit Graph

25 Commits

Author SHA1 Message Date
Michael Lotz
2cdd33d4ff Add an assert to ensure the wired count doesn't wrap. 2011-11-13 22:02:36 +01:00
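
A minimal userland sketch of the kind of wrap check the commit above describes;
the structure and accessor names are placeholders, not the actual vm_page code.

    #include <cassert>
    #include <cstdint>

    // Hypothetical, much simplified stand-in for vm_page; the real structure
    // and accessor names live in the Haiku kernel. The point is only the
    // wrap check on the wired count.
    struct page {
        uint16_t wired_count = 0;

        void IncrementWiredCount()
        {
            assert(wired_count != UINT16_MAX && "wired count would wrap");
            wired_count++;
        }

        void DecrementWiredCount()
        {
            assert(wired_count != 0 && "wired count would wrap below zero");
            wired_count--;
        }
    };

    int main()
    {
        page p;
        p.IncrementWiredCount();
        p.DecrementWiredCount();
        return 0;
    }
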
Ingo Weinhold
f8154d172d mmlr (distracted) + bonefish:
* Turn VMCache::consumers C list into a DoublyLinkedList.
* Use object caches for the different VMCache types and the VMCacheRefs.
  The purpose is to reduce slab area fragmentation.
* Requires the introduction of a pure virtual VMCache::DeleteObject()
  method, implemented in the derived classes.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43133 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-11-02 21:14:11 +00:00
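
A rough sketch of the DeleteObject() pattern mentioned in the commit above: the
base class delegates its own destruction to the concrete type, which knows
which object cache its instances came from. The class names are simplified
stand-ins for the real VMCache hierarchy, and a plain counter stands in for
Haiku's slab object caches.

    #include <cstdio>

    // Placeholder for a per-type slab object cache.
    struct ObjectCachePlaceholder {
        const char* name;
        int liveObjects;
    };

    class Cache {
    public:
        virtual ~Cache() {}

        // Each derived class knows which object cache its instances came
        // from, so deletion is delegated to it instead of a global delete.
        virtual void DeleteObject() = 0;

        void ReleaseReference()
        {
            // Last reference gone: let the concrete type free itself.
            DeleteObject();
        }
    };

    class AnonymousCache : public Cache {
    public:
        AnonymousCache() { sObjectCache.liveObjects++; }

        virtual void DeleteObject()
        {
            // In the kernel this would return the object to its slab cache.
            sObjectCache.liveObjects--;
            delete this;
        }

        static ObjectCachePlaceholder sObjectCache;
    };

    ObjectCachePlaceholder AnonymousCache::sObjectCache = { "anonymous caches", 0 };

    int main()
    {
        Cache* cache = new AnonymousCache;
        printf("live objects: %d\n", AnonymousCache::sObjectCache.liveObjects);
        cache->ReleaseReference();
        printf("live objects: %d\n", AnonymousCache::sObjectCache.liveObjects);
        return 0;
    }
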
Ingo Weinhold
a0d93d1416 * VMCache::Unlock(): Renamed local variable consumerLocked to avoid shadowing
the parameter (CID 5329).
* _MergeWithOnlyConsumer(): Removed the somewhat weird consumerLocked
  parameter. The caller can do the unlocking itself, if desired. Improves Unlock()
  readability.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40084 a95241bf-73f2-0310-859d-f6bbb57e9c96
2011-01-03 00:44:40 +00:00
Ingo Weinhold
b944766870 * Moved the vm_page initialization from vm_page.cpp:vm_page_init() to the new
vm_page::Init().
* Made vm_page::wired_count private and added accessor methods.
* Added VMCache::fWiredPagesCount (the number of wired pages the cache
  contains) and accessor methods.
* Made more use of vm_page::IsMapped().
* vm_copy_on_write_area(): Added a vm_page_reservation* parameter that can be
  used to request special handling for wired pages. If given, the wired pages
  are replaced by copies and the original pages are moved to the upper cache.
* vm_copy_area():
  - We don't need to do any wired ranges handling if the source area is a
    B_SHARED_AREA, since we don't touch the area's mappings in this case.
  - We no longer wait for wired ranges of the concerned areas to disappear.
    Instead we use the new vm_copy_on_write_area() feature and just let it
    copy the wired pages. This fixes #6288, an issue introduced with the use
    of user mutexes in libroot: When executing multiple concurrent fork()s all
    but the first one would wait on the fork mutex, which (being a user mutex)
    would wire a page that the vm_copy_area() of the first fork() would wait
    for.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37460 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-07-10 15:08:13 +00:00
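
The accessor pattern described above might look roughly like this; the type and
member names are simplified assumptions rather than the actual Haiku
declarations.

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    class Page {
    public:
        bool IsWired() const        { return fWiredCount > 0; }
        uint16_t WiredCount() const { return fWiredCount; }

        void IncrementWiredCount()  { assert(fWiredCount != UINT16_MAX); fWiredCount++; }
        void DecrementWiredCount()  { assert(fWiredCount != 0); fWiredCount--; }

    private:
        uint16_t fWiredCount = 0;   // now private, touched only via accessors
    };

    class Cache {
    public:
        int32_t WiredPagesCount() const { return fWiredPagesCount; }

        // Called when a contained page becomes wired/unwired, so the cache
        // can answer "do I contain wired pages?" without scanning its pages.
        void IncrementWiredPagesCount() { fWiredPagesCount++; }
        void DecrementWiredPagesCount() { assert(fWiredPagesCount > 0); fWiredPagesCount--; }

    private:
        int32_t fWiredPagesCount = 0;
    };

    int main()
    {
        Cache cache;
        Page page;
        page.IncrementWiredCount();
        cache.IncrementWiredPagesCount();
        printf("cache has %d wired page(s)\n", cache.WiredPagesCount());
        return 0;
    }
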
Fredrik Holmqvist
e5dca54f1a r37259 fails to compile on one of my systems without this header. Checking out a fresh tree and reconfiguring didn't work, so I don't really know why.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37263 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-26 17:24:11 +00:00
Ingo Weinhold
13638944db Removed never read VMCache::scan_skip.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37195 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-21 15:10:37 +00:00
Ingo Weinhold
f8e263c184 Slipped by in r37138: Added VMCache::Dump() and removed GetLock().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37141 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-15 00:07:18 +00:00
Ingo Weinhold
435c43f591 * Introduced type generic_io_vec, which is similar to iovec, but uses types
that are wide enough for both virtual and physical addresses.
* DMABuffer, IORequest, IOScheduler,... and code using them: Use
  generic_io_vec and generic_{addr,size}_t where necessary.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36997 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-06-02 18:42:20 +00:00
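
An illustrative guess at the shape of such a type; the 64-bit widths below are
an assumption for the example, since the kernel derives
generic_addr_t/generic_size_t from the architecture.

    #include <cstdint>
    #include <cstdio>

    // Like struct iovec, but with integer types wide enough to hold either a
    // virtual or a physical address.
    typedef uint64_t generic_addr_t;
    typedef uint64_t generic_size_t;

    struct generic_io_vec {
        generic_addr_t base;    // virtual or physical address
        generic_size_t length;  // length of the run in bytes
    };

    int main()
    {
        char buffer[128];
        generic_io_vec vec = { (generic_addr_t)(uintptr_t)buffer, sizeof(buffer) };
        printf("vec: base=%llx length=%llu\n",
            (unsigned long long)vec.base, (unsigned long long)vec.length);
        return 0;
    }
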
Ingo Weinhold
efeca209a1 Made VMCache::Resize() virtual and let VMAnonymousCache override it to free
swap space when the cache shrinks. Currently the implementation still leaks
swap space of busy pages.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36373 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-04-20 14:04:18 +00:00
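
A hedged sketch of the override pattern: free the swap behind the truncated
range before delegating to the base class. The method signature and the swap
bookkeeping are placeholders, not the real VMAnonymousCache code.

    #include <cstdint>
    #include <cstdio>

    class Cache {
    public:
        virtual ~Cache() {}

        virtual void Resize(int64_t newSize)
        {
            fVirtualEnd = newSize;
        }

    protected:
        int64_t fVirtualEnd = 0;
    };

    class AnonymousCache : public Cache {
    public:
        virtual void Resize(int64_t newSize)
        {
            // Shrinking: release the swap slots that back the cut-off range.
            if (newSize < fVirtualEnd)
                FreeSwapRange(newSize, fVirtualEnd);
            Cache::Resize(newSize);
        }

    private:
        void FreeSwapRange(int64_t from, int64_t to)
        {
            printf("freeing swap for range [%lld, %lld)\n",
                (long long)from, (long long)to);
        }
    };

    int main()
    {
        AnonymousCache cache;
        cache.Resize(1 << 20);  // grow: nothing to free
        cache.Resize(1 << 16);  // shrink: swap beyond the new end gets released
        return 0;
    }
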
Ingo Weinhold
86875ad9d1 Added VMCache::DebugHasPage() and DebugLookupPage() for use in the kernel
debugger.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36228 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-04-13 17:18:57 +00:00
Ingo Weinhold
4bb4f79355 Added assert.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35497 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-02-16 16:49:52 +00:00
Ingo Weinhold
cf99b9ab60 Added VMCache::CacheRef() accessor.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35391 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-02-03 18:46:29 +00:00
Ingo Weinhold
72382fa629 * Removed the page state PAGE_STATE_BUSY and instead introduced a vm_page::busy
flag. The obvious advantage is that one can still see what state a page is in
  and even move it between states while being marked busy.
* Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which
  in all cases has the same effect. Introduced a vm_page_is_dummy() that can
  still check whether a given page is a dummy page.
* vm_page_unreserve_pages(): Before adding to the system reserve, make sure
  sUnreservedFreePages is non-negative. Otherwise we'd make nonexistent pages
  available for allocation. steal_pages() still has the same problem and it
  can't be solved that easily.
* map_page(): No longer changes the page state or marks the page unbusy. That's
  the caller's responsibility.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-29 10:00:45 +00:00
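
A toy illustration of the state/busy split: busy becomes an orthogonal flag, so
a page keeps a meaningful state, and can even change it, while it is busy. The
state values here are invented for the example, not the kernel's actual list.

    #include <cstdio>

    enum page_state {
        PAGE_STATE_ACTIVE,
        PAGE_STATE_INACTIVE,
        PAGE_STATE_MODIFIED,
        PAGE_STATE_FREE
    };

    struct page {
        page_state state = PAGE_STATE_ACTIVE;
        bool busy = false;      // replaces the former PAGE_STATE_BUSY state
    };

    int main()
    {
        page p;
        p.busy = true;                      // page is being worked on...
        p.state = PAGE_STATE_MODIFIED;      // ...but its real state stays visible
        printf("state=%d busy=%d\n", p.state, p.busy);
        p.busy = false;
        return 0;
    }
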
Ingo Weinhold
deee8524b7 * Introduced {malloc,memalign,free}_etc() which take an additional "flags"
argument. They replace the previous special-purpose allocation functions
  (malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
  particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It ignored the VIP flag and always
  allocated on the normal heap.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-27 12:45:53 +00:00
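
The single-entry-point-plus-flags idea could be sketched like this; the flag
names and their behavior below are illustrative assumptions, not Haiku's actual
definitions.

    #include <cstdio>
    #include <cstdlib>

    enum {
        ALLOC_FLAG_DONT_GROW = 1 << 0,  // stand-in for the old malloc_nogrow()
        ALLOC_FLAG_VIP       = 1 << 1   // stand-in for the old vip_io_request_malloc()
    };

    static void* malloc_etc(size_t size, unsigned int flags)
    {
        if (flags & ALLOC_FLAG_VIP) {
            // A real kernel would satisfy this from a reserved VIP heap.
            printf("VIP allocation of %zu bytes\n", size);
        }
        return malloc(size);
    }

    static void free_etc(void* address, unsigned int flags)
    {
        (void)flags;    // a real implementation routes back to the right heap
        free(address);
    }

    int main()
    {
        void* block = malloc_etc(256, ALLOC_FLAG_VIP);
        free_etc(block, ALLOC_FLAG_VIP);
        return 0;
    }
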
Ingo Weinhold
cff6e9e406 * The system now holds back a small reserve of committable memory and pages. The
memory and page reservation functions have a new "priority" parameter that
  indicates how deep the function may tap into that reserve. The currently
  existing priority levels are "user", "system", and "VIP". The idea is that
  user programs should never be able to cause a state that gets the kernel into
  trouble due to heavy battling for memory. The "VIP" level (not really used
  yet) is intended for allocations that are required to free memory eventually
  (in the page writer). More levels are conceivable in the future, like "user real
  time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter by a "flags"
  parameter.
* Added area creation flag CREATE_AREA_PRIORITY_VIP and slab allocator flag
  CACHE_PRIORITY_VIP indicating the importance of the request.
* Changed most code to pass the right priorities/flags.

These changes already significantly improve the behavior in low memory
situations. I've tested a bit with 64 MB (virtual) RAM and, while not
particularly fast and responsive, the system remains at least usable under high
memory pressure.
As a side effect, the slab allocator can now be used as a general memory allocator.
Not done by default yet, though.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-26 14:44:58 +00:00
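
A toy model of the priority idea: each priority level may dig a different depth
into the page reserve. All numbers and names below are made up for
illustration.

    #include <cstdint>
    #include <cstdio>

    enum reservation_priority {
        PRIORITY_USER,      // must leave the whole reserve untouched
        PRIORITY_SYSTEM,    // may use part of the reserve
        PRIORITY_VIP        // may drain the reserve (e.g. the page writer)
    };

    static int64_t sFreePages = 1000;

    static bool reserve_pages(int64_t count, reservation_priority priority)
    {
        // How many pages must remain untouched for this priority level.
        int64_t untouchableReserve = 0;
        switch (priority) {
            case PRIORITY_USER:     untouchableReserve = 64; break;
            case PRIORITY_SYSTEM:   untouchableReserve = 16; break;
            case PRIORITY_VIP:      untouchableReserve = 0;  break;
        }

        if (sFreePages - count < untouchableReserve)
            return false;           // would dig too deep for this caller

        sFreePages -= count;
        return true;
    }

    int main()
    {
        printf("user reserve of 950: %s\n",
            reserve_pages(950, PRIORITY_USER) ? "ok" : "denied");
        printf("VIP reserve of 950:  %s\n",
            reserve_pages(950, PRIORITY_VIP) ? "ok" : "denied");
        return 0;
    }
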
Ingo Weinhold
6379e53e2d vm_page no longer points directly to its containing cache, but rather to a
VMCacheRef object which points to the cache. This allows optimizing
VMCache::MoveAllPages(), since it no longer needs to iterate over all pages
to adjust their cache pointer. It can simply swap the cache refs of the two
caches instead.

Reduces the total -j8 Haiku image build time only marginally. The kernel time
drops almost 10%, though.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35155 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-19 08:34:14 +00:00
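
A simplified sketch of the indirection: pages reference a CacheRef, the
CacheRef references the cache, and moving all pages between two caches only
swaps and repoints the two refs. Locking and the per-cache page lists are
omitted, and the names are approximations of the real ones.

    #include <cstdio>

    class Cache;

    struct CacheRef {
        Cache* cache;
    };

    struct Page {
        CacheRef* cacheRef = nullptr;   // page -> ref -> cache
    };

    class Cache {
    public:
        explicit Cache(const char* name) : fName(name)
        {
            fCacheRef = new CacheRef{this};
        }
        ~Cache() { delete fCacheRef; }

        CacheRef* Ref() { return fCacheRef; }
        const char* Name() const { return fName; }

        // Take over every page of `source` by swapping the two CacheRef
        // objects and repointing them: no per-page update is needed.
        void MoveAllPagesFrom(Cache& source)
        {
            CacheRef* temp = fCacheRef;
            fCacheRef = source.fCacheRef;
            source.fCacheRef = temp;
            fCacheRef->cache = this;
            source.fCacheRef->cache = &source;
        }

    private:
        CacheRef*   fCacheRef;
        const char* fName;
    };

    int main()
    {
        Cache oldCache("old"), newCache("new");
        Page page;
        page.cacheRef = oldCache.Ref();

        newCache.MoveAllPagesFrom(oldCache);
        printf("page now belongs to: %s\n", page.cacheRef->cache->Name());
        return 0;
    }
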
Ingo Weinhold
3632eeedb9 * VMCache: Added a UserData attribute which can be used by the lock holder.
* Added "bool consumerLocked" parameter to VMCache::Unlock() and
  ReleaseRefAndUnlock(). Since Unlock() may cause the cache to be merged with
  a consumer cache, the flag is needed to prevent a deadlock in case the
  caller still holds a lock to the consumer. Hasn't been a problem yet, since
  that situation never occurred.
* VMCacheChainLocker: Reversed unlocking order to bottom-up. The other
  direction could cause a deadlock in case caches were merged, since the
  locking order would be reversed. The way VMCacheChainLocker was used, this
  didn't happen, though.
* fault_get_page(): While copying a page from a lower cache to the top cache,
  we now unlock all caches but the top one, so we don't unnecessarily
  kill concurrency.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35153 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-19 03:02:11 +00:00
Ingo Weinhold
77690f288e Added VMCache::SwitchFromReadLock(), which atomically unlocks a read lock and
starts locking the cache.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34936 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-07 15:32:28 +00:00
Ingo Weinhold
355dc6bef4 Inlined several VMCache methods.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34837 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-01 17:09:23 +00:00
Ingo Weinhold
eb8dc1ebfb * Removed DEBUG_PAGE_CACHE_TRANSITIONS debugging.
* Added VMCache::MovePage() and MoveAllPages() to move pages between caches.
* VMAnonymousCache:
  - _MergeSwapPages(): Avoid doing anything if neither cache has swapped-out
    pages.
  - _MergeSwapPages() does now also remove source cache pages that are
    shadowed by consumer swap pages. This allows us to call _MergeSwapPages()
    before _MergePagesSmallerSource(), save the swap page shadowing check
    there and get rid of the vm_page::merge_swap flag. This is an
    optimization based on the assumption that usually none or only few pages
    are swapped out, so we save a lot of checks.
  - Implemented _MergePagesSmallerConsumer() as an alternative to
    _MergePagesSmallerSource(). The former is used when the source cache has
    more pages than the consumer cache. It iterates over the consumer cache's
    pages, moves them to the source and finally moves all pages back to the
    consumer. The final move is relatively cheap (though unfortunately we
    still have to update all pages' vm_page::cache field), so that overall we
    save iterations of the main loop with the more expensive checks.

The optimizations particularly improve the common fork()+exec*() situations.
fork() uses CoW, which is implemented by putting two new empty caches between
the area to be copied and its cache. exec*() destroys one copy of the area,
its cache and thus causes merging of the other new cache with the old cache.
Since this usually happens in a very short time, the old cache does still
contain many pages and the new cache only a few. Previously the many pages were
all checked and moved individually. Now we do that for the few pages instead.

A very extreme example of this situation is the Haiku image build. jam has a
huge heap (> 200 MB) and it fork()s+exec*()s for every action to be executed.
Since during the cache merging the cache is locked, any write access to a
heap page causes jam to block until the cache merging is done. Formerly that
took so long that it killed a lot of parallelism in multi-job builds. That
could be observed particularly well when lots of small actions were executed
(like the Link, XRes, Mimeset, SetType, SetVersion combos when building
executables/libraries/add-ons). Those look dramatically better now.
The overall speed improvement for a -j8 image build on my machine is only
about 15%, though.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34784 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-27 16:14:13 +00:00
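
A toy model of the "iterate over the smaller side" idea from the commit above:
a cache is reduced to a list of page names, consumer pages shadow source pages,
and the per-page work scales with the smaller (consumer) cache. This is only an
analogy for the real _MergePagesSmallerConsumer() logic, not its code.

    #include <cstdio>
    #include <list>
    #include <string>

    struct ToyCache {
        std::list<std::string> pages;
    };

    static void MergeSmallerConsumer(ToyCache& source, ToyCache& consumer)
    {
        // Walk only the consumer's (few) pages; they shadow source copies.
        for (const std::string& page : consumer.pages)
            source.pages.remove(page);          // drop the shadowed source copy
        source.pages.splice(source.pages.end(), consumer.pages);

        // Finally hand everything to the consumer in one cheap list move.
        consumer.pages.splice(consumer.pages.end(), source.pages);
    }

    int main()
    {
        ToyCache source;    // e.g. the big old heap cache
        source.pages = {"p0", "p1", "p2", "p3", "p4", "p5"};
        ToyCache consumer;  // the young CoW cache with only a few pages
        consumer.pages = {"p2"};

        MergeSmallerConsumer(source, consumer);
        printf("consumer now holds %zu pages\n", consumer.pages.size());
        return 0;
    }
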
Ingo Weinhold
2e74d74f4f * Added method VMCache::TransferAreas(), which moves areas from one cache to
another. The code originates from vm_copy_on_write_area(). We now generate
  the VM cache tracing entries, though.
* count_writable_areas() -> VMCache::CountWritableAreas()
* Added debugger command "cache_stack" which is enabled when VM cache tracing
  is enabled. It prints the source caches of a given cache or area at the
  time of a specified tracing entry.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34751 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-22 22:00:35 +00:00
Ingo Weinhold
522c2f19d4 * Added a simple event-waiting mechanism to VMCache. WaitForPageEvents()
waits for certain events on a given page; NotifyPageEvents() wakes up the
  waiting threads accordingly.
* Used the new feature instead of condition variables for waiting on busy
  pages. We save publishing and unpublishing of a condition variable whenever
  a page is marked busy. There's only something to do if there's at least
  one thread waiting in the list of the respective cache. The general
  assumption is that this is only rarely the case and even if it happens,
  there should be only very few threads.
* Added an apparently missing notification in cache_io(). At least I didn't
  see the reason for it not being there.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34537 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-07 15:42:08 +00:00
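
A userland sketch of the waiter-list idea using standard C++ primitives. The
kernel uses its own blocking mechanism rather than std::condition_variable, and
the API shown here, including the helper that clears the busy flag, is an
assumption made for the example.

    #include <algorithm>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    enum { PAGE_EVENT_NOT_BUSY = 1 };

    struct Page { bool busy = false; };

    class Cache {
    public:
        void WaitForPageEvent(Page* page, int events)
        {
            std::unique_lock<std::mutex> lock(fLock);
            if ((events & PAGE_EVENT_NOT_BUSY) != 0 && !page->busy)
                return;                         // nothing to wait for
            Waiter waiter{page, events, false};
            fWaiters.push_back(&waiter);
            waiter.condition.wait(lock, [&] { return waiter.done; });
            fWaiters.erase(std::find(fWaiters.begin(), fWaiters.end(), &waiter));
        }

        // Clears the busy flag and wakes threads waiting for it on that page.
        // Cheap in the common case: an empty waiter list means nothing to do.
        void MarkPageNotBusy(Page* page)
        {
            std::lock_guard<std::mutex> lock(fLock);
            page->busy = false;
            for (Waiter* waiter : fWaiters) {
                if (waiter->page == page
                        && (waiter->events & PAGE_EVENT_NOT_BUSY) != 0) {
                    waiter->done = true;
                    waiter->condition.notify_one();
                }
            }
        }

    private:
        struct Waiter {
            Page* page;
            int events;
            bool done;
            std::condition_variable condition;
        };

        std::mutex fLock;
        std::vector<Waiter*> fWaiters;
    };

    int main()
    {
        Cache cache;
        Page page;
        page.busy = true;

        std::thread waiter([&] { cache.WaitForPageEvent(&page, PAGE_EVENT_NOT_BUSY); });
        std::thread notifier([&] { cache.MarkPageNotBusy(&page); });
        waiter.join();
        notifier.join();
        printf("page is no longer busy\n");
        return 0;
    }
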
Ingo Weinhold
6440406a59 Style changes
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34536 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-07 14:28:56 +00:00
Ingo Weinhold
be7328a9f6 Moved VMCache related definitions to <vm/VMCache.h>.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34535 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-07 14:14:21 +00:00
Ingo Weinhold
e50cf8765b * Moved the VM headers into subdirectory vm/.
* Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace.h.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96
2009-12-02 18:05:10 +00:00