actual caching in the file cache, i.e. all reads and writes go directly
to the underlying device. The implementation is not quite complete,
since the VM can still add pages to the cache when the file is mmap()ed,
which can lead to inconsistencies.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26779 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Init swap support in main().
* Added "bool swappable" parameter to
VMCacheFactory::CreateAnonymousCache(). A cache supporting swapping
is created when true. Adjusted invocations accordingly.
* The page writer now writes non-locked swappable pages (when memory is
low).
* Fixed header guard of VMAnonymousNoSwapCache.h.
* Swap support is compiled conditionally, controlled by the
ENABLE_SWAP_SUPPORT macro in src/system/kernel/vm/VMAnonymousCache.h. It is
disabled ATM. Since no swap files are added, it wouldn't have much
effect anyway.
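
To illustrate the two points above, here is a minimal, self-contained
sketch (not the actual Haiku code; the real factory takes more parameters)
of how the "bool swappable" argument and the ENABLE_SWAP_SUPPORT switch
could select between the swap-backed and the no-swap anonymous cache:

#include <new>

#define ENABLE_SWAP_SUPPORT 0    // disabled ATM, as noted above

struct VMCache { virtual ~VMCache() {} };
struct VMAnonymousNoSwapCache : VMCache {};
#if ENABLE_SWAP_SUPPORT
struct VMAnonymousCache : VMCache {};    // swap-backed variant
#endif

struct VMCacheFactory {
    // Hypothetical reduced signature for illustration only.
    static VMCache* CreateAnonymousCache(bool swappable)
    {
#if ENABLE_SWAP_SUPPORT
        if (swappable)
            return new(std::nothrow) VMAnonymousCache();
#endif
        (void)swappable;
        return new(std::nothrow) VMAnonymousNoSwapCache();
    }
};
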
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26625 a95241bf-73f2-0310-859d-f6bbb57e9c96
fs_vnode_ops::write_pages() to be called with fsReenter = true. Since
this is no longer the case, the argument has become superfluous. For
read_pages() it always was. Removed the argument from the functions
and all functions that propagated it.
* Some whitespace at the end of lines was removed.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26579 a95241bf-73f2-0310-859d-f6bbb57e9c96
introduces the following relevant changes:
* VMCache:
- Renamed vm_cache to VMCache, merged it with vm_store and made it a
C++ class with virtual methods (replacing the store operations).
Turned the different store implementations into subclasses (see the
sketch below).
- Introduced MergeStore() callback, changed semantics of Commit().
- Changed locking and referencing semantics. A reference can only be
acquired/released with the cache locked. An unreferenced cache is
deleted and a mergeable cache merged when it is unlocked. This
removes the "busy" state of a cache and simplifies the page fault
code.
* Added VMAnonymousCache, which will implement swap support (work by
Zhao Shuai). It is not integrated and used yet, though.
* Enabled the mutex/recursive lock holder asserts.
* Fixed DoublyLinkedList::Swap().
* Generalized the low memory handler to a low resource handler, and made
semaphores and reserved memory handled resources. Made
vm_try_reserve_memory() optionally wait (with a timeout), and used that
feature to reserve memory for areas.
...
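
As a rough, self-contained illustration of the VMCache reorganization
sketched above (method signatures and members are guesses for the sake of
the example, not the real API):

#include <cstdint>

class VMCache {
public:
    virtual ~VMCache() {}

    // Formerly a vm_store operation; this commit also changed its semantics.
    virtual int32_t Commit(int64_t size) = 0;

    // New callback, invoked when another cache is merged into this one.
    virtual void MergeStore(VMCache* /*source*/) {}

    // References (simplified): acquired/released only with the cache
    // locked; an unreferenced cache is deleted, or merged if mergeable,
    // when it is unlocked.
protected:
    int32_t fRefCount = 0;
};

// Each former store implementation becomes a subclass.
class AnonymousCacheSketch : public VMCache {
public:
    int32_t Commit(int64_t size) override { fCommitted = size; return 0; }
private:
    int64_t fCommitted = 0;
};
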
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26572 a95241bf-73f2-0310-859d-f6bbb57e9c96
per cache.
* Changed the strategy vm_cache_acquire_page_cache_ref() uses to ensure
that the cache isn't deleted while trying to get a reference. Instead
of the global cache pages hash table lock, it holds the global cache
list lock now. We acquire + release this lock in delete_cache() after
removing all pages and just before deleting the object.
* Some small optimizations using the property that the cache's pages are
now ordered (vm_cache_resize(), vm_page_write_modified_page_range(),
vm_page_schedule_write_page_range()).
* Replaced some code counting a cache's pages by simply using
vm_cache::page_count.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26160 a95241bf-73f2-0310-859d-f6bbb57e9c96
level 2).
* merge_cache_with_only_consumer() marked the source cache unbusy when
it was done, which caused a race condition with the page fault code.
I accidentally introduced this problem in r25716. Fixes #2326.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26068 a95241bf-73f2-0310-859d-f6bbb57e9c96
waiting for a heap grow.
* Use that nogrow version in the VM code to avoid a deadlock with the address
space lock when a grow operation would try to create an area while a malloc
happened from such a function in the VM.
* When waiting for a grow to happen, notify the waiting thread from the
grower even if it failed to allocate a new heap. Otherwise a thread would
just sit there and wait until another thread also requested growing and
that attempt succeeded (or wait forever in the worst case). See the sketch
after this list.
* Make the dedicated grow heap growable too. If the current grow heaps run
low on memory, they will instruct the grower to allocate a new grow heap.
This reduces to a minimum the likelihood of running out of memory with no
way to grow. As the growing is done asynchronously, this can still happen,
but it is highly unlikely, since the grow heap is solely used to allocate
memory in the process of creating new heap areas, and it will even try
using normal public memory if the dedicated memory has run out.
* Reduced the dedicated grow heap from 2 to 1MB. As it can now grow itself, it
doesn't need to last so long.
* Extract heap creation into its own function that does area creation and
heap attachment, and use this function for growing both normal and grow
heaps.
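
A generic illustration of the notify-on-failure point above, using std::
primitives rather than the kernel's own locking; all names are made up for
the sketch:

#include <condition_variable>
#include <cstdint>
#include <mutex>

struct GrowState {
    std::mutex lock;
    std::condition_variable grown;
    uint64_t generation = 0;    // bumped after every grow attempt
};

// Waiter: block until the grower has made an attempt (successful or not),
// then return so the caller can retry its allocation.
void wait_for_heap_grow(GrowState& state)
{
    std::unique_lock<std::mutex> locker(state.lock);
    uint64_t seen = state.generation;
    state.grown.wait(locker, [&] { return state.generation != seen; });
}

// Grower: try to create a new heap area, then notify waiters in both the
// success and the failure path.
void grower_iteration(GrowState& state, bool (*createNewHeapArea)())
{
    bool success = createNewHeapArea();
    (void)success;    // waiters retry and find out for themselves
    {
        std::lock_guard<std::mutex> locker(state.lock);
        state.generation++;
    }
    state.grown.notify_all();
}
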
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26009 a95241bf-73f2-0310-859d-f6bbb57e9c96
Instead we only allow temporary caches to be merged. This remedies the
problem that after fork() + join() there remains a superfluous cache
layer for all RAM areas.
I haven't tested it, but this might improve the jam situation
memory-wise (huge heap is committed one less time), though it might
worsen it performance-wise (lots of heap pages are moved with every
merge).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25717 a95241bf-73f2-0310-859d-f6bbb57e9c96
new function merge_cache_with_only_consumer(), which is now also used
in vm_cache_remove_area().
* Added tracing for the merge case.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25716 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Trivial adjustments of code using mutexes. Mostly removing the
mutex_init() return value check.
* Added mutex_lock_threads_locked(), which is called with the threads
spinlock being held. The spinlock is released while waiting, of
course. This function is useful in cases where the existence of the
mutex object is ensured by holding the threads spinlock.
* Changed the two instances in the VFS code where an IO context of
another team needs to be locked to use mutex_lock_threads_locked().
Before, this required a semaphore-based mutex implementation.
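
A toy model of the idea behind mutex_lock_threads_locked(), using std::
types instead of the kernel primitives. It only shows the core property:
the mutex's wait state is manipulated under the lock the caller already
holds, so that lock is dropped only while actually sleeping. That the
lock is re-held on return is an assumption of this sketch, not a statement
about the real function.

#include <condition_variable>
#include <mutex>

static std::mutex sThreadsLock;    // stands in for the threads spinlock

struct SimpleMutex {
    bool locked = false;
    std::condition_variable waiters;    // threads waiting for the mutex
};

// Call with sThreadsLock held (via threadsLocker). The caller holding the
// threads lock is what guarantees the object owning the mutex still exists.
void mutex_lock_threads_locked_sketch(SimpleMutex& mutex,
    std::unique_lock<std::mutex>& threadsLocker)
{
    mutex.waiters.wait(threadsLocker, [&] { return !mutex.locked; });
    mutex.locked = true;
}
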
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25283 a95241bf-73f2-0310-859d-f6bbb57e9c96
respective Private* base class.
* Changed sigwait() and sigsuspend() to use thread_block() instead of a
condition variable.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25100 a95241bf-73f2-0310-859d-f6bbb57e9c96
overcommitting stores:
- has_precommitted was incorrectly set to true in the constructor
- when a precommitted page was committed, vm_store::committed_size
was still changed.
- unreserving memory did not update vm_store::committed_size.
- when precommitted pages were committed, their page count instead of their
size was reserved.
* All this led to bug #1970, which should be fixed now.
* Cleanup of vm_cache.cpp, no functional change.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24742 a95241bf-73f2-0310-859d-f6bbb57e9c96
time. Releasing the cache's store reference while holding the cache lock
could reverse the usual locking order -- the VFS could potentially call
the remove_vnode() or put_vnode() FS hook, which in turn could use the
file cache, thus resulting in a deadlock. Now we release the store ref
before locking the cache.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24548 a95241bf-73f2-0310-859d-f6bbb57e9c96
resized but still had dirty pages to be written back,
vm_cache_resize() (which is called with the inode lock being held)
deadlocked with the page writer.
* Now, I reintroduced busy_writing: it'll be set by everything that
writes back pages (vm_page_write_modified(), and the page writer),
and will be checked for in vm_cache_resize() - other functions are not
affected for now, AFAICT.
* vm_cache_resize() will clear that flag, and the writer will check it
again after it has written back the page (which will fail when the page
is outside the file bounds); if the flag is cleared, it will get rid of
the page (if the file has been resized again in the meantime, writing
will succeed, and we'll keep the page around).
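
An illustrative sketch of the busy_writing handshake described above, with
the cache lock modeled as a std::mutex and all names simplified; this is
not the actual vm_cache code:

#include <mutex>

struct CacheSketch {
    std::mutex lock;
    bool busy_writing = false;    // set by page writers, cleared by resize
};

// Writer side: mark the cache, write the page outside the cache lock, then
// re-check; if resize cleared the flag and the write failed, the page lies
// beyond the new file size and can be discarded.
void write_back_page(CacheSketch& cache, bool (*writePage)(),
    void (*freePage)())
{
    {
        std::lock_guard<std::mutex> locker(cache.lock);
        cache.busy_writing = true;
    }

    bool written = writePage();    // fails if the page is past the file end

    std::lock_guard<std::mutex> locker(cache.lock);
    if (!cache.busy_writing && !written) {
        // The file was shrunk while we were writing: drop the page.
        freePage();
    }
    cache.busy_writing = false;
}

// Resize side (called with the inode lock held): just clear the flag
// instead of waiting for the writer, which avoids the deadlock.
void resize_cache(CacheSketch& cache)
{
    std::lock_guard<std::mutex> locker(cache.lock);
    cache.busy_writing = false;
}
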
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23334 a95241bf-73f2-0310-859d-f6bbb57e9c96
appear: when freeing a modified page, it wouldn't have a cache
anymore, but set_page_state_nolock() depended on it.
* To work around this, I added a vm_page_free() function, which the
caches that free modified pages have to call (but others may, too).
It will correctly maintain the sModifiedTemporaryPages counter in case
the cache has already been removed.
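
A much simplified sketch of the idea; the real vm_page_free() has a
different signature, and the exact bookkeeping of sModifiedTemporaryPages
is an assumption here:

#include <cstdint>

enum PageStateSketch { PAGE_STATE_MODIFIED, PAGE_STATE_FREE, PAGE_STATE_OTHER };

struct PageSketch {
    PageStateSketch state = PAGE_STATE_OTHER;
    void* cache = nullptr;    // may already be NULL by the time we free it
};

static int32_t sModifiedTemporaryPages = 0;

// Hypothetical reduced signature: the freeing cache tells us whether it is
// temporary, so the counter stays correct even though page->cache may
// already have been cleared (which is what broke set_page_state_nolock()).
void vm_page_free_sketch(PageSketch* page, bool cacheWasTemporary)
{
    if (page->state == PAGE_STATE_MODIFIED && cacheWasTemporary)
        sModifiedTemporaryPages--;

    page->state = PAGE_STATE_FREE;
    // ...enqueue the page on the free queue; locking omitted...
}
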
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23318 a95241bf-73f2-0310-859d-f6bbb57e9c96
to the private VM types are including vm_types.h now.
* Removed vm_page, vm_area, vm_cache, and vm_address_space typedefs; it's
cleaner this way, and the actual types are only used in C++ files now,
anyway.
* And that caused changes in many files...
* Made commpage.h self-contained.
* Minor cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22329 a95241bf-73f2-0310-859d-f6bbb57e9c96
introduced a new vm_page_write_modified_page().
* Resolved a TODO: vm_page_write_modified_pages() did not mark a
to-be-written page busy, but unlocked its cache, which could let someone
else steal that page in the meantime.
* Moved the logic when to move a page to the active or inactive queue to
a new function move_page_to_active_or_inactive_queue().
* Moved page_state_to_string() to vm_page(); it's now also used by the
"page" and "page_queue" KDL commands.
* Made the output of the "page_queue list" command more useful.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22323 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Removed the vm_cache/vm_store ref_count duality that, besides being a bit
ugly, also created the page daemon cache retrieval problem: now, only areas
(and cache consumers) retrieve a reference to the store (and therefore, the
vnode). The page daemon doesn't need to care about this at all anymore, and
the pseudo references of the vm_cache could be removed again.
* Rearranged deletion of vnodes such that its ID can be reused directly after
fs_remove_vnode() has been called.
* vm_page_allocate_page() no longer panics when it runs out of pages, but
just waits for new pages to become available using the new sFreeCondition
condition variable (see the sketch below) - to make sure this happens in an
acceptable time frame, it'll trigger a run of the low memory handlers.
* Implemented a page_thief() that steals inactive pages from caches and puts
them into the free queue. It runs as a low memory handler.
* The file cache now sets the usage count on the pages it inserts into the
cache (needs some rework though, cache_io() doesn't do it yet).
* Instead of panicking, the kernel will currently deadlock in low memory
situations, since BFS does a bit too much in bfs_release_vnode().
* Some minor cleanup.
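
A simplified illustration of the new allocation behavior, using std::
primitives instead of the kernel's condition variables; apart from the
sFreeCondition idea, the names are made up:

#include <condition_variable>
#include <deque>
#include <mutex>

struct PageAllocatorSketch {
    std::mutex lock;
    std::condition_variable freeCondition;    // stands in for sFreeCondition
    std::deque<int> freePages;                // free page numbers (toy model)
    void (*triggerLowMemoryHandlers)() = nullptr;    // e.g. wakes the page thief

    int AllocatePage()
    {
        std::unique_lock<std::mutex> locker(lock);
        while (freePages.empty()) {
            // Kick the low memory handlers so pages get freed in an
            // acceptable time frame, then sleep until one shows up.
            if (triggerLowMemoryHandlers != nullptr)
                triggerLowMemoryHandlers();
            freeCondition.wait(locker);
        }
        int page = freePages.front();
        freePages.pop_front();
        return page;
    }

    void FreePage(int page)
    {
        {
            std::lock_guard<std::mutex> locker(lock);
            freePages.push_back(page);
        }
        freeCondition.notify_one();
    }
};
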
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22315 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Fixed vm_page_allocate_page_run(): it did not take the pageState into
account, and would therefore return uninitialized memory (i.e. B_CONTIGUOUS
areas would contain garbage).
Now, whether a page has been cleared is stored in the new
vm_page::is_cleared field.
* Some cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22306 a95241bf-73f2-0310-859d-f6bbb57e9c96
freshly booted, it would already contain > 20000 pages. The size is
now initialized to half of the available pages. Ideally it would
grow/shrink dynamically, though.
* Changed the hash function to yield a better distribution.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22211 a95241bf-73f2-0310-859d-f6bbb57e9c96
a reference to a cache they did not yet reference was not correct.
They only incremented the reference count, but a vnode cache reference
also includes a vnode reference. In the case of the page daemon this
would cause vnode references to be lost (causing bug #1465).
* The page daemon used an unsafe method to access a not yet referenced
page cache: there was nothing that prevented the cache from being
deleted while the page daemon tried to get a reference. The
vm_page::cache field is now also protected by the page cache table
spinlock, which the new function vm_cache_acquire_page_cache_ref(),
used by the page daemon, acquires while trying to get the reference.
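
Conceptually, the new function works like the following sketch (simplified
types, std::mutex standing in for the spinlock, and the ref-counting
details are assumptions):

#include <cstdint>
#include <mutex>

static std::mutex sPageCacheTableLock;    // stands in for the spinlock

struct CacheRefSketch {
    int32_t ref_count = 0;
    bool deleted = false;    // toy stand-in for "deletion in progress"

    bool AcquireUnreferenced()
    {
        if (deleted)
            return false;
        ref_count++;
        return true;
    }
};

struct PageRefSketch {
    CacheRefSketch* cache = nullptr;    // protected by sPageCacheTableLock
};

// What vm_cache_acquire_page_cache_ref() does for the page daemon: the
// table lock pins page->cache while the reference is taken, so the cache
// cannot be deleted in between.
CacheRefSketch* acquire_page_cache_ref(PageRefSketch* page)
{
    std::lock_guard<std::mutex> locker(sPageCacheTableLock);
    CacheRefSketch* cache = page->cache;
    if (cache == nullptr || !cache->AcquireUnreferenced())
        return nullptr;
    return cache;
}
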
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22208 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The vm_translation_map is now correctly held in all of the vm_ mapping
functions.
* Removed the old vm_daemons.c file - there is now a new vm_daemons.cpp
which contains the beginnings of our new page daemon.
So far, it's pretty static and not much tested. What it currently does
is to rescan all pages in the system with a two-handed clock algorithm
(see the sketch below) and push pages into the modified and inactive
lists.
* These inactive pages aren't really stolen yet, even though their mappings
are removed (i.e. their next access will cause a page fault). This should
slow down Haiku a bit more, great, huh? :-)
* The page daemon currently only runs in low memory situations, though.
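
For reference, a textbook two-handed clock scan looks roughly like this
(a generic sketch, not the Haiku page daemon; the real code also unmaps
the pages so the next access faults and sets the accessed flag again):

#include <cstddef>
#include <vector>

struct ClockPage {
    bool accessed = false;    // set again on every access (via page fault)
    bool inactive = false;
};

void two_handed_clock_scan(std::vector<ClockPage>& pages, size_t handDistance,
    size_t& clockHand)
{
    size_t count = pages.size();
    for (size_t i = 0; i < count; i++) {
        size_t front = (clockHand + handDistance) % count;
        size_t back = clockHand;

        // Leading hand: clear the accessed flag.
        pages[front].accessed = false;

        // Trailing hand: if the page wasn't touched since the leading hand
        // passed it handDistance steps ago, it becomes a candidate for the
        // inactive (or modified) list.
        if (!pages[back].accessed)
            pages[back].inactive = true;

        clockHand = (clockHand + 1) % count;
    }
}
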
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22156 a95241bf-73f2-0310-859d-f6bbb57e9c96
DEBUG_CACHE_LIST) that prints an unspectacular list of pointers to all
existing caches. Feel free to extend.
* Enhanced MultiAddressSpaceLocker:
- It supports choosing between read and write lock per address space,
now.
- Added AddAreaCacheAndLock(), which adds the address spaces of all
areas that are attached to a given area's cache, locks them, and
locks the cache (see the sketch at the end of this list). It makes
sure that the area list didn't change in the meantime and optionally
also that all areas have their no_cache_change flags cleared.
* Changed vm_copy_on_write_area() to take a cache instead of an area,
requiring it to be locked and all address spaces of affected areas to
be read-locked, plus all areas' no_cache_change flags to be cleared.
Callers simply use MultiAddressSpaceLocker::AddAreaCacheAndLock() to
do that. This resolves an open TODO, that the areas' base, size, and
protection fields were accessed without their address spaces being
locked.
* vm_copy_area() now always inserts a cache for the target area. Not
doing that would cause source and target areas to be attached to the
same cache in case the target protection was read-only. This would make
them behave like cloned areas, which would lead to trouble when one of
the areas was later changed to writable.
* Fixed the !writable -> writable case in vm_set_area_protection(). It
would simply change the protection of all mapped pages for this area,
including ones from lower caches, thus causing later writes to the
area to be seen by areas that shouldn't see them. This fixes a problem
with software breakpoints in gdb. They could cause other programs to
be dropped into the debugger.
* resize_area() uses MultiAddressSpaceLocker::AddAreaCacheAndLock() now,
too, and could be compacted quite a bit.
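
A conceptual sketch of what AddAreaCacheAndLock() has to do, with
simplified types, std::mutex instead of the kernel locks, and ignoring the
read/write distinction and the no_cache_change handling; this is not the
real implementation:

#include <algorithm>
#include <mutex>
#include <vector>

struct AddressSpaceSketch { std::mutex lock; };

struct AreaSketch { AddressSpaceSketch* addressSpace; };

struct CacheWithAreas {
    std::mutex lock;
    std::vector<AreaSketch*> areas;
};

// Lock every address space that has an area attached to the cache, then
// the cache itself; retry if the area list changed in the meantime.
void add_area_cache_and_lock(CacheWithAreas& cache)
{
    while (true) {
        // Snapshot the address spaces of the cache's current areas.
        std::vector<AddressSpaceSketch*> spaces;
        {
            std::lock_guard<std::mutex> locker(cache.lock);
            for (AreaSketch* area : cache.areas)
                spaces.push_back(area->addressSpace);
        }

        // Lock them in a stable (sorted) order, then lock the cache.
        std::sort(spaces.begin(), spaces.end());
        spaces.erase(std::unique(spaces.begin(), spaces.end()), spaces.end());
        for (AddressSpaceSketch* space : spaces)
            space->lock.lock();
        cache.lock.lock();

        // Verify the area list didn't change while we weren't holding the
        // locks; if it did, drop everything and try again.
        bool changed = false;
        for (AreaSketch* area : cache.areas) {
            if (!std::binary_search(spaces.begin(), spaces.end(),
                    area->addressSpace)) {
                changed = true;
                break;
            }
        }
        if (!changed)
            return;    // all address spaces and the cache are now locked

        cache.lock.unlock();
        for (AddressSpaceSketch* space : spaces)
            space->lock.unlock();
    }
}
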
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22152 a95241bf-73f2-0310-859d-f6bbb57e9c96
if the page was already in the "modified" list before. Also, the source
page (which is either mapped directly or copied to the target page) is no
longer marked busy before its final destiny is decided (it didn't have any
effect, anyway, since we had its cache locked for the whole time, but it
now preserves the modified state). This fixes bug #1369.
* vm_cache_write_modified() now filters out temporary caches (it's
currently called on area deletion).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@21971 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Removed a few instances where the page state was set busy directly after
allocating it. This is a no-op, since a page is always busy after
allocation.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@21875 a95241bf-73f2-0310-859d-f6bbb57e9c96
offset of the page to insert is already in the cache. Revealed the bug
fixed with my previous commit.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@21817 a95241bf-73f2-0310-859d-f6bbb57e9c96
* More conditional debug code (wrt page transitions between caches).
* Replaced debugger command cache_chain by a nicer cache_tree.
* While handling a soft fault: When we temporarily unlock a cache, it
can theoretically become busy. One such occurrence is now handled
properly; two more still panic() ATM, though they should be fixed.
* When merging caches, we do now always replace a dummy page in the
upper cache, not only when the concurrent page fault is a read fault.
This prevents a page from the lower (to be discarded) cache from still
remaining mapped (causing a panic).
* When merging caches and replacing a dummy page, we were trying to
remove the dummy page from the wrong cache (causing a panic).
The Haiku kernel now seems to run shockingly stable. ATM, we have more
than two hours of uptime on a system booted and running over the network.
We didn't manage to bring it down by fully building Pe, downloading,
unzipping, and playing with various stuff. Someone should finally fix all
those app server drawing bugs, though (hint, hint! ;-)).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@21672 a95241bf-73f2-0310-859d-f6bbb57e9c96