/*
 * Copyright 2002-2009, Axel Dörfler, axeld@pinc-software.de.
 * Distributed under the terms of the MIT License.
 *
 * Copyright 2001-2002, Travis Geiselbrecht. All rights reserved.
 * Distributed under the terms of the NewOS License.
 */
#ifndef _KERNEL_VM_VM_PAGE_H
#define _KERNEL_VM_VM_PAGE_H


#include <vm/vm.h>
#include <vm/vm_types.h>


struct kernel_args;

extern int32 gMappedPagesCount;
struct vm_page_reservation {
	uint32 count;
};


#ifdef __cplusplus
extern "C" {
#endif

void vm_page_init_num_pages(struct kernel_args *args);
status_t vm_page_init(struct kernel_args *args);
status_t vm_page_init_post_area(struct kernel_args *args);
status_t vm_page_init_post_thread(struct kernel_args *args);
status_t vm_mark_page_inuse(page_num_t page);
status_t vm_mark_page_range_inuse(page_num_t startPage, page_num_t length);
void vm_page_free(struct VMCache *cache, struct vm_page *page);
void vm_page_set_state(struct vm_page *page, int state);
void vm_page_requeue(struct vm_page *page, bool tail);
// get some data about the number of pages in the system
page_num_t vm_page_num_pages(void);
page_num_t vm_page_num_free_pages(void);
page_num_t vm_page_num_available_pages(void);
page_num_t vm_page_num_unused_pages(void);
void vm_page_get_stats(system_info *info);
phys_addr_t vm_page_max_address();
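// A minimal usage sketch (illustrative only; "freePages" and "freeBytes" are
// local example names): the counters above are in pages, so multiply by
// B_PAGE_SIZE to get byte values.
//
//	page_num_t freePages = vm_page_num_free_pages();
//	uint64 freeBytes = (uint64)freePages * B_PAGE_SIZE;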
status_t vm_page_write_modified_page_range(struct VMCache *cache,
	uint32 firstPage, uint32 endPage);
status_t vm_page_write_modified_pages(struct VMCache *cache);
void vm_page_schedule_write_page(struct vm_page *page);
void vm_page_schedule_write_page_range(struct VMCache *cache,
	uint32 firstPage, uint32 endPage);
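// Usage sketch (illustrative; assumes "cache", "firstPage", and "endPage"
// come from the caller, and that endPage is exclusive as the parameter names
// suggest): either write modified pages back synchronously, or only schedule
// an asynchronous write-back of a range.
//
//	status_t status = vm_page_write_modified_pages(cache);
//	// or, asynchronously, just for the pages in [firstPage, endPage):
//	vm_page_schedule_write_page_range(cache, firstPage, endPage);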
void vm_page_unreserve_pages(vm_page_reservation* reservation);
void vm_page_reserve_pages(vm_page_reservation* reservation, uint32 count,
	int priority);
bool vm_page_try_reserve_pages(vm_page_reservation* reservation, uint32 count,
	int priority);
struct vm_page *vm_page_allocate_page(vm_page_reservation* reservation,
	uint32 flags);
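// Typical reservation/allocation pattern (illustrative sketch; "count" is a
// caller-provided page count, the PAGE_STATE_*/VM_PAGE_ALLOC_*/VM_PRIORITY_*
// constants are assumed to be defined elsewhere, and error handling is
// omitted): vm_page_reserve_pages() initializes the reservation, each
// vm_page_allocate_page() call consumes one reserved page, and
// vm_page_unreserve_pages() returns whatever is left over.
//
//	vm_page_reservation reservation;
//	vm_page_reserve_pages(&reservation, count, VM_PRIORITY_USER);
//	for (uint32 i = 0; i < count; i++) {
//		vm_page* page = vm_page_allocate_page(&reservation,
//			PAGE_STATE_ACTIVE | VM_PAGE_ALLOC_CLEAR);
//		// insert the page into a cache, map it, etc.
//	}
//	vm_page_unreserve_pages(&reservation);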
struct vm_page *vm_page_allocate_page_run(uint32 flags, page_num_t length,
	const physical_address_restrictions* restrictions, int priority);
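// Sketch of a physically contiguous allocation (illustrative; the
// physical_address_restrictions field name and the flag constants are
// assumptions, since they are defined elsewhere):
//
//	physical_address_restrictions restrictions = {};
//	restrictions.high_address = 16 * 1024 * 1024;
//		// e.g. keep the whole run below 16 MB
//	vm_page* firstPage = vm_page_allocate_page_run(
//		PAGE_STATE_WIRED | VM_PAGE_ALLOC_CLEAR, 4, &restrictions,
//		VM_PRIORITY_SYSTEM);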
struct vm_page *vm_page_at_index(int32 index);
struct vm_page *vm_lookup_page(page_num_t pageNumber);
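// E.g. mapping a physical address back to its page descriptor (sketch;
// "physicalAddress" is assumed to be a page-aligned phys_addr_t):
//
//	vm_page* page = vm_lookup_page(physicalAddress / B_PAGE_SIZE);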
bool vm_page_is_dummy(struct vm_page *page);


#ifdef __cplusplus
}
#endif

#endif	/* _KERNEL_VM_VM_PAGE_H */