the heap debug panics. If turned off, syslog output is generated instead.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35481 a95241bf-73f2-0310-859d-f6bbb57e9c96
well as the thread allocating it. It can, for example, be used to verify that
an object or buffer is as large as expected.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35480 a95241bf-73f2-0310-859d-f6bbb57e9c96
keeping all returned heap memory in the 0xdeadbeef state (including the
first sizeof(void *) bytes otherwise used for the free list). While this wastes
a lot of memory, it allows you to rely on 0xdeadbeef always being present, as
no future allocation will reuse the freed memory block.
* Also added heap_debug_malloc_with_guard_page(), which is intended to allocate
a memory block aligned so that the start of invalid memory past the allocation
lies in an unmapped guard page. However, the kernel backend that would
guarantee this is not yet implemented, so right now this works only by chance,
if no other area happens to be allocated exactly past the created one. With a
very specific suspicion you can put that one allocation you get to good use
though. It causes a crash when accessing memory past the allocation size, so
you actually get a backtrace from where the access happened instead of only
after freeing/wall checking.
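To illustrate the guard page idea, here is a userland sketch using POSIX
mmap()/mprotect() only -- not the actual heap_debug implementation, and the
helper name is made up. The allocation is placed so that it ends exactly at an
unmapped page, turning any overrun into an immediate crash at the offending
instruction:

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void*
guarded_alloc(size_t size)
{
	size_t pageSize = (size_t)sysconf(_SC_PAGESIZE);
	size_t dataPages = (size + pageSize - 1) / pageSize;
	size_t totalSize = (dataPages + 1) * pageSize;

	void* base = mmap(NULL, totalSize, PROT_READ | PROT_WRITE,
		MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return NULL;

	// Revoke access to the last page -- the guard page.
	mprotect((char*)base + dataPages * pageSize, pageSize, PROT_NONE);

	// Hand out a pointer such that the allocation ends right at the guard.
	return (char*)base + dataPages * pageSize - size;
}

int
main()
{
	char* buffer = (char*)guarded_alloc(100);
	buffer[99] = 'x';	// fine
	buffer[100] = 'x';	// crashes here, with a useful backtrace
	return 0;
}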
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35478 a95241bf-73f2-0310-859d-f6bbb57e9c96
it has been unmapped. This way modified pages could end up in the "cached"
queue without having been written back. That would be a good explanation for
#5374 (partially wrong file contents) -- as soon as such a page was freed,
the invalid on-disk contents would become visible.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35477 a95241bf-73f2-0310-859d-f6bbb57e9c96
so the fallback implementations of UnmapPages() and UnmapArea() need to do
that. Not relevant for x86.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35476 a95241bf-73f2-0310-859d-f6bbb57e9c96
though).
* Added/improved some KDL commands to make the slab easier to work with from
KDL.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35466 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed the "busy" stuff to "busy_reading", and added a "busy_writing"
concept.
* This now allows reading a block (and other blocks) while blocks are being
written back. This should speed up all operations that need to write back
blocks, like unzipping or compiling.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35464 a95241bf-73f2-0310-859d-f6bbb57e9c96
driver_events (i.e. there is now only a single list to walk).
* Also, the DriverWatcher is now maintained using the driver_events.
This fixes bug #5005.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35463 a95241bf-73f2-0310-859d-f6bbb57e9c96
call with the true parameter. Fixes a panic at boot when using the HPET timers.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35453 a95241bf-73f2-0310-859d-f6bbb57e9c96
which happened on some systems (mine included).
Should close ticket #5341.
Thanks!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35441 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Moving some functions around, removing and adding others for the public API.
I've written a blog post at haiku-os.org to serve as documentation for this,
introducing the API and the other helpful bits.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35431 a95241bf-73f2-0310-859d-f6bbb57e9c96
the contiguous page allocation function and unlocks a bin locker a bit earlier.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35424 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Make the contiguous page allocation capable of aligning the allocation
and make it more clever by checking up front if there's a chance of getting
enough pages at all, by giving up earlier if the page count can't be fit
anymore, and in the alignment case by only checking the pages which have a
valid alignment.
* If the alignment requirement is > B_PAGE_SIZE we now use page allocation
directly, because the bins aren't necessarily aligned on their size past
B_PAGE_SIZE anymore.
* When doing aligned bin allocation, calculate the aligned size up front and
choose the right heap for the allocation.
* Also when doing aligned bin allocations we not only need to round up the size
but also ensure that the bin we choose is aligned at all.
* Moved adding leak check info into its own function.
Fixes various misalignment problems when working with alignments > B_PAGE_SIZE
or when using alignments < allocation size. Also the directly aligned page
allocations now only use up as many pages as actually required instead of
allocating based on the size rounded up to the alignment (see the sketch
below).
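The two alignment rules in isolation, as a standalone sketch (not the actual
heap code; the names and the free-page bitmap representation are made up for
illustration):

#include <stddef.h>
#include <sys/types.h>

static inline size_t
align_up(size_t value, size_t alignment)
{
	// alignment is assumed to be a power of two
	return (value + alignment - 1) & ~(alignment - 1);
}

// Scan a free-page bitmap for `count` contiguous free pages whose start index
// satisfies `alignment` (in pages); gives up as soon as the remaining pages
// cannot fit the run anymore.
static ssize_t
find_aligned_run(const bool* pageFree, size_t pageCount, size_t count,
	size_t alignment)
{
	for (size_t start = 0; start + count <= pageCount; start += alignment) {
		size_t i = 0;
		while (i < count && pageFree[start + i])
			i++;
		if (i == count)
			return (ssize_t)start;
	}
	return -1;
}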
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35422 a95241bf-73f2-0310-859d-f6bbb57e9c96
consider that when filling in the text and data ranges of the image info.
This fixes #5361 and #5351, caused by libtracker.so not finding its own
image.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35407 a95241bf-73f2-0310-859d-f6bbb57e9c96
going on. I only wanted to have it in the repository in case we decide at a
later point that it is a good idea after all.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35395 a95241bf-73f2-0310-859d-f6bbb57e9c96
its own source file now that the page daemon source file is gone.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35394 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added vm_clear_page_mapping_accessed_flags() and
vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality
of vm_test_map_activation(), vm_clear_map_flags(), and
vm_remove_all_page_mappings(), thus saving lots of calls to translation map
methods. The backend is the new method
VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the
other non-free queues slightly:
- Active queue: Contains mapped pages that have been used recently.
- Inactive queue: Contains mapped pages that have not been used recently. Also
contains unmapped temporary pages.
- Modified queue: Contains unmapped modified pages.
- Cached queue: Contains unmapped unmodified pages (LRU sorted).
Unless we're actually low on memory and actively do paging, modified and
cached queues only contain non-temporary pages. Cached pages are considered
quasi free. They still belong to a cache, but since they are unmodified and
unmapped, they can be freed immediately. And this is what
vm_page_[try_]reserve_pages() do now when there are no more actually free
pages at hand. Essentially this means that pages storing cached file data,
unless mmap()ed, are no longer considered used and don't contribute to page
pressure. Paging will not happen as long as there are enough free + cached
pages
available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works
the page queues. As long as the free pages situation is harmless, it only
iterates through the active queue and deactivates pages that have not been
used recently. When paging occurs it additionally scans the inactive queue and
frees pages that have not been used recently.
* Changed the page reservation/allocation interface:
vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and
vm_page_allocate_page() now take a vm_page_reservation structure pointer.
The reservation functions initialize the structure -- currently consisting
only of a count member for the number of still reserved pages.
vm_page_allocate_page() decrements the count and vm_page_unreserve_pages()
unreserves the remaining pages (if any). Advantages are that reservation/
unreservation mismatches cannot occur anymore, that vm_page_allocate_page()
can verify that the caller has indeed a reserved page left, and that there's
no unnecessary pressure on the free page pool anymore. The only disadvantage
is that the vm_page_reservation object needs to be passed around a bit (a
minimal sketch of the pattern follows this list).
* Reworked the page reservation implementation:
- Got rid of sSystemReservedPages and sPageDeficit. Instead
sUnreservedFreePages now actually contains the number of free pages that
have not yet been reserved (it cannot become negative anymore) and the new
sUnsatisfiedPageReservations contains the number of pages that are still
needed for reservation.
- Threads waiting for reservations do now add themselves to a waiter queue,
which is ordered by descending priority (VM priority and thread priority).
High priority waiters are served first when pages become available.
Fixes #5328.
* cache_prefetch_vnode(): Would reserve one page less than it later allocated,
if the size wasn't page aligned.
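A standalone sketch of the reservation pattern described above (simplified:
an atomic counter stands in for the free/clear queues, and the names only
loosely follow the kernel interface):

#include <atomic>
#include <cassert>
#include <cstdint>

static std::atomic<int64_t> sUnreservedFreePages(1024);

struct page_reservation {
	uint32_t count = 0;	// number of still reserved pages
};

bool
try_reserve_pages(page_reservation& reservation, uint32_t count)
{
	int64_t free = sUnreservedFreePages.load();
	while (free >= (int64_t)count) {
		if (sUnreservedFreePages.compare_exchange_weak(free, free - count)) {
			reservation.count = count;
			return true;
		}
	}
	return false;
}

void
allocate_page(page_reservation& reservation)
{
	// The allocator can verify the caller really has a reserved page left.
	assert(reservation.count > 0);
	reservation.count--;
	// ... would pull an actual page off the free/clear queues here ...
}

void
unreserve_pages(page_reservation& reservation)
{
	// Returns whatever is left, so mismatches cannot accumulate.
	sUnreservedFreePages += reservation.count;
	reservation.count = 0;
}

A caller reserves up front, allocates any number of pages up to the reserved
count, and finally unreserves the remainder.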
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
that does not have a transaction.
* This should fix #5340.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35390 a95241bf-73f2-0310-859d-f6bbb57e9c96
releasing our reference to it. So return immediately after having done that.
Previously the _MaybeNotifyProfilerThread() that innocently lurked at the end
of the method would be invoked, playing with the dead beef.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35357 a95241bf-73f2-0310-859d-f6bbb57e9c96
general "flags" parameter. It encodes the target state of the page -- so
that the page isn't unnecessarily put in the wrong page queue first -- a
flag whether the page should be cleared, and one to indicate whether the
page should be marked busy.
* Added page state PAGE_STATE_CACHED. Not used yet.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35333 a95241bf-73f2-0310-859d-f6bbb57e9c96
flag. The obvious advantage is that one can still see what state a page is in
and even move it between states while being marked busy.
* Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which
in all cases has the same effect. Introduced a vm_page_is_dummy() that can
still check whether a given page is a dummy page.
* vm_page_unreserve_pages(): Before adding to the system reserve make sure
sUnreservedFreePages is non-negative. Otherwise we'd make nonexistent pages
available for allocation. steal_pages() still has the same problem and it
can't be solved that easily.
* map_page(): No longer changes the page state or marks the page unbusy. That's
the
caller's responsibility.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
the wired count of manually mapped pages not to be decremented in
delete_area(), leading to a "page still has mappings" panic when the slab
allocator's memory manager deleted areas.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35329 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added flags to avoid notifying the busy condition variable unnecessarily.
* get_writable_cached_block(): Unlock the cache while memcpy()ing/memset()ing
the block's data. The idea is to reduce lock contention. Less effective
than I hoped, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35328 a95241bf-73f2-0310-859d-f6bbb57e9c96
functional change (other than avoiding no-ops like subtracting 0).
* vm_page_try_reserve_pages(): Moved the kernel tracing calls from the top of
the function to the points where the reservation already succeeded.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35322 a95241bf-73f2-0310-859d-f6bbb57e9c96
argument. They replace the previous special-purpose allocation functions
(malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It was ignoring the VIP flag and always
allocating on the normal heap.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96
on the fly, clearing and writing it each time, we now use an iovec with 32
identical entries pointing to a clear page that we prepare once at
initialization. This speeds up clear_image in low memory situations
dramatically.
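The trick in isolation, as a generic POSIX illustration (the driver of course
writes through the kernel's I/O path, not writev() on a file):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

enum { kPageSize = 4096, kVecCount = 32 };

int
main()
{
	// Prepared once at initialization in the real code.
	void* clearPage = calloc(1, kPageSize);

	struct iovec vecs[kVecCount];
	for (int i = 0; i < kVecCount; i++) {
		vecs[i].iov_base = clearPage;
		vecs[i].iov_len = kPageSize;
	}

	int fd = open("/tmp/zeroed", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return 1;

	// One vectored write clears 32 pages' worth of data while reusing the
	// same source page -- no large temporary buffer needed.
	writev(fd, vecs, kVecCount);

	close(fd);
	free(clearPage);
	return 0;
}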
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35304 a95241bf-73f2-0310-859d-f6bbb57e9c96
beginning whether to bypass the cache really doesn't help when
reading/writing a huge amount of data, since a low memory situation is likely
to occur at some point during the operation. This should fix the main issue
of #3768.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35299 a95241bf-73f2-0310-859d-f6bbb57e9c96
memory and page reservation functions have a new "priority" parameter that
indicates how deep the function may tap into that reserve. The currently
existing priority levels are "user", "system", and "VIP". The idea is that
user programs should never be able to cause a state that gets the kernel into
trouble due to heavy battling for memory. The "VIP" level (not really used
yet) is intended for allocations that are required to free memory eventually
(in the page writer). More levels are conceivable in the future, like "user
real time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter by a "flags"
parameter.
* Added area creation flag CREATE_AREA_PRIORITY_VIP and slab allocator flag
CACHE_PRIORITY_VIP indicating the importance of the request.
* Changed most code to pass the right priorities/flags.
These changes already significantly improve the behavior in low memory
situations. I've tested a bit with 64 MB (virtual) RAM and, while not
particularly fast and responsive, the system remains at least usable under high
memory pressure.
As a side effect the slab allocator can now be used as a general memory
allocator.
Not done by default yet, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96
Those use malloc(), which obviously doesn't work before the heap is
initialized.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35288 a95241bf-73f2-0310-859d-f6bbb57e9c96
table, we only enter the slab. This also saves us the link object per object.
* Removed the now useless {Prepare,Unprepare}Object() methods.
* SmallObjectCache: Unlock the cache while calling into the MemoryManager. We
need to do that to avoid an indirect violation of the CACHE_DONT_* policy.
* Simplified lower_boundary().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35285 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added support to do larger raw allocations (up to one large chunk (128 pages))
in the slab areas. For an even larger allocation an area is created (haven't
seen that happen yet, though).
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Copy and paste bug: The meta chunks of the to-be-freed area
would be added to the free lists instead of being removed from them. This
would corrupt the lists and also lead to all kinds of misuse of meta chunks.
object caches:
* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object
caches, but the block allocator sets it on all power of two size caches.
* object_cache_reserve_internal(): Detect recursion and don't wait in such a
case. The function could deadlock itself, since
HashedObjectCache::CreateSlab() does allocate memory, thus potentially
reentering.
* object_cache_low_memory():
- I missed some returns when reworking that one in r35254, so the function
might stop early and also leave the cache in maintenance mode, which would
cause it to be ignored by the object cache resizer and low memory handler from
that point on.
- Since ReturnSlab() potentially unlocks, the conditions weren't quite correct
and too many slabs could be freed.
- Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() does potentially
unlock the cache, the situation might have changed and there might not be an
empty slab available, but a partial one. The function would crash.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed debugger command "cache_info" to "slab_cache" to avoid confusion with
the VMCache commands.
* ObjectCache::usage was not maintained anymore since I introduced the
MemoryManager. object_cache_get_usage() would thus always return 0 and the
block cache would not be considered cached memory. This was only of
informational relevance, though.
slab allocator misc.:
* Disable the object depots of block allocator caches for object sizes > 2 KB.
Allocations of those sizes aren't so common that the object depots yield any
benefit.
* The slab allocator is now fully self-sufficient. It allocates its bootstrap
memory from the MemoryManager, and the hash tables for HashedObjectCaches use
the block allocator instead of the heap, now.
* Added option to use the slab allocator for malloc() and friends
(USE_SLAB_ALLOCATOR_FOR_MALLOC). Currently disabled. Works in principle and
has virtually no lock contention. Handling for low memory situations is still
missing, though.
* Improved the output of some debugger commands.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96
VMCacheRef object, since that can fail, in which case the subsequently called
Delete() would use uninitialized pointers.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35279 a95241bf-73f2-0310-859d-f6bbb57e9c96
to add it back to its partial list or it would be leaked.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35266 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Does now keep one or two empty areas around, so that even in case of
CACHE_DONT_LOCK_KERNEL_SPACE memory can be provided as long as pages are
available. The object cache maintainer thread is used to asynchronously
allocate/delete the free areas.
* Added new debugger commands "slab_meta_chunk[s]" and improved the existing
ones.
* Moved Area::chunks to MetaChunk.
* Removed unused _AllocationArea() "chunkSize" parameter.
* Fixed serious bug in _FreeChunk(): Empty meta chunks were not removed from
the partial chunk lists and could thus be used twice.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35264 a95241bf-73f2-0310-859d-f6bbb57e9c96
adding the cache to the maintenance queue. Not so important but more correct.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35255 a95241bf-73f2-0310-859d-f6bbb57e9c96
low resource handler functions. Particularly fixed the race conditions
between those and delete_object_cache().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35254 a95241bf-73f2-0310-859d-f6bbb57e9c96
a certain chunk size, the areas are split into meta chunks (which are as
large as a large chunk), each of which can be used independently for chunks
of a certain size. This reduces the vulnerability to fragmentation, so that we
need fewer areas overall.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35250 a95241bf-73f2-0310-859d-f6bbb57e9c96
CACHE_DONT_LOCK_KERNEL_SPACE. If the former is given, the slab memory manager
does not wait when reserving memory or pages. The latter prevents area
operations. The new flags add a bit of flexibility. E.g. when allocating page
mapping objects for userland areas CACHE_DONT_WAIT_FOR_MEMORY is sufficient,
i.e. the allocation will succeed as long as pages are available.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35246 a95241bf-73f2-0310-859d-f6bbb57e9c96
allocate a page mapping. In that case we do at least have to mark the page
not busy again. Furthermore we enforce the minimum page mappings object cache
reserve, so we'll have more luck on the next fault.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35241 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implemented a more elaborated raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks which form the basic units of allocation. We
have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But also in pure freeing functions it
might eventually become useful to have those flags, since they could end up
deleting an area, which might not be allowable in all situations. We should
introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache a to-be-freed allocation belongs
to, and thus don't need the boundary tags anymore (sketched below).
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.
other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the
semantics is the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.
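Why block-aligned areas make freeing cheap, as a simplified standalone model
(the constants and structures here are assumptions for illustration, not the
actual MemoryManager layout):

#include <cstddef>
#include <cstdint>
#include <unordered_map>

const size_t kAreaSize = 8 * 1024 * 1024;	// 8 MB areas, 8 MB aligned
const size_t kChunkSize = 4096;				// one of the small/medium/large sizes

struct ObjectCache;	// opaque here

struct Chunk {
	ObjectCache*	cache;	// object cache owning allocations in this chunk
};

struct Area {
	Chunk	chunks[kAreaSize / kChunkSize];
	// page mapping/metadata bookkeeping omitted
};

// area base address -> area (the real code uses its own hash table)
static std::unordered_map<uintptr_t, Area*> sAreas;

static ObjectCache*
cache_for_address(void* address)
{
	// Because areas are size-aligned, masking the pointer yields the only
	// possible area base -- one hash lookup, no boundary tags.
	uintptr_t base = (uintptr_t)address & ~(uintptr_t)(kAreaSize - 1);
	auto it = sAreas.find(base);
	if (it == sAreas.end())
		return nullptr;	// not a slab allocation

	size_t chunkIndex = ((uintptr_t)address - base) / kChunkSize;
	return it->second->chunks[chunkIndex].cache;
}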
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
the CACHE_DONT_SLEEP flag to work for real, since otherwise the thread could
block on the mutex held by a thread allocating memory. We use two condition
variables to prevent multiple threads from allocating slabs at the same time.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35206 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Changed the semantics of object_cache_reserve_internal(). Now it makes sure
the given number of objects are free. As a side effect this also changes
the semantics of object_cache_reserve() similarly, though I have trouble
seeing the purpose of the function in the first place.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35204 a95241bf-73f2-0310-859d-f6bbb57e9c96
one for each per CPU store):
* The depot is now protected by a R/W lock combined with a spinlock. It is
required to either hold read lock + spinlock or just the write lock.
* When accessing the per CPU stores we only need to acquire the read lock
and disable interrupts. When switching magazines with the depot we
additionally get the spinlock.
* When allocating a new magazine we completely unlock.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35200 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The threads other than the main thread are now killed earlier (in the new
team_shutdown_team()), before removing the team from the team hash and from
its process group. This fixes #5296.
* Use a condition variable instead of a semaphore to wait for the non-main
threads to die. We notify the condition right after a thread has left the
team. The semaphore was released by the undertaker.
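The waiting pattern, sketched with userland std::thread primitives (not the
kernel's thread or condition variable API):

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

static std::mutex sLock;
static std::condition_variable sThreadGone;
static int sOtherThreads = 0;

static void
worker()
{
	// ... do work ...
	std::lock_guard<std::mutex> locker(sLock);
	sOtherThreads--;
	sThreadGone.notify_all();	// notify right when leaving the "team"
}

int
main()
{
	std::vector<std::thread> threads;
	{
		std::lock_guard<std::mutex> locker(sLock);
		for (int i = 0; i < 4; i++) {
			threads.emplace_back(worker);
			sOtherThreads++;
		}
	}

	// Wait until every other thread has left, then clean up.
	std::unique_lock<std::mutex> locker(sLock);
	sThreadGone.wait(locker, [] { return sOtherThreads == 0; });
	locker.unlock();

	for (std::thread& thread : threads)
		thread.join();
	return 0;
}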
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35196 a95241bf-73f2-0310-859d-f6bbb57e9c96
function is used only in one place and the missing locking would be harmless
if it weren't for the per translation map physical page mapper. It is used to
map the page table for the lookup. Concurrent access could corrupt its data
structures, or, just as bad, the unlocked Query() could remap the page table
used by a concurrent Map() or Unmap(), which would then manipulate the
wrong page table.
Potentially messing up kernel memory, this bug could obviously cause all
kinds of kernel crashes and weird behavior. E.g. ticket #5138 is a likely
candidate, as are triple faults.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35195 a95241bf-73f2-0310-859d-f6bbb57e9c96
idea, since that would potentially add the object back to the object store
or lead to infinite recursion. When the object cache is destroyed it most
likely led to infinite loops, because the object would alternately be
removed from and added back to the object store.
* delete_object_cache(): Lock after destroying the object store, so we don't
deadlock.
* Use the object store on SMP machines. It seems to work, though I only
tested with the network stack and that seems to have problems of its own.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35182 a95241bf-73f2-0310-859d-f6bbb57e9c96
* heap_index_for(): Could return an invalid index, if a set of heaps hadn't
been created for each CPU.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35179 a95241bf-73f2-0310-859d-f6bbb57e9c96
things a bit.
* Some style cleanup.
* The object depot does now have a cookie that will be passed to the return
hook.
* Fixed object_cache_return_object_wrapper() using the new cookie.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35174 a95241bf-73f2-0310-859d-f6bbb57e9c96
When initializing driver settings, make sure to set the parameter count to 0,
because these settings have not been parsed yet. This allows us to safely free
the settings. Freeing the settings is triggered in load_driver_settings() if
we encounter settings which have been originally loaded by the boot_loader,
which might be stale. I think the bug would trigger for settings which had been
loaded by the boot_loader but had never been parsed.
With this change, I can use the userlandfs on all my machines.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35166 a95241bf-73f2-0310-859d-f6bbb57e9c96
VMCacheRef object which points to the cache. This allows optimizing
VMCache::MoveAllPages(), since it no longer needs to iterate over all pages
to adjust their cache pointer. It can simply swap the cache refs of the two
caches instead.
Reduces the total -j8 Haiku image build time only marginally. The kernel time
drops almost 10%, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35155 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added "bool consumerLocked" parameter to VMCache::Unlock() and
ReleaseRefAndUnlock(). Since Unlock() may cause the cache to be merged with
a consumer cache, the flag is needed to prevent a deadlock in case the
caller still holds a lock to the consumer. Hasn't been a problem yet, since
that situation never occurred.
* VMCacheChainLocker: Reversed unlocking order to bottom-up. The other
direction could cause a deadlock in case caches would be merged, since the
locking order would be reversed. The way VMCacheChainLocker was used this
didn't happen, though.
* fault_get_page(): While copying a page from a lower cache to the top cache,
we do now unlock all caches but the top one, so we don't unnecessarily
kill concurrency.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35153 a95241bf-73f2-0310-859d-f6bbb57e9c96
Also added an empty stub for _thread_do_exit_notification() when compiling for GCC2.
* Removed the check testing if the thread is already dead.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35142 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Moved the "tmp" directory out of /var, and to /boot/common/cache/.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35104 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Implementations of pthread_getschedparam and pthread_setschedparam that I
have had for a while.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35098 a95241bf-73f2-0310-859d-f6bbb57e9c96
Update the filesystem name in find_directory(), as our FAT filesystem is not
named "dos".
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35093 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Reorganized the code for [un]mapping pages:
- Added new VMTranslationMap::Unmap{Area,Page[s]}() which essentially do what
vm_unmap_page[s]() did before, just in the architecture specific code, which
allows for specific optimizations. UnmapArea() is for the special case that
the complete area is unmapped. Particularly in case the address space is
deleted, some work can be saved. Several TODOs could be slain.
- Since they are only used within vm.cpp, vm_map_page() and vm_unmap_page[s]()
are now static and have lost their prefix (and the "preserveModified"
parameter).
* Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers
for Protect().
* X86VMTranslationMap::Protect(): Make sure not to accidentally clear the
accessed/dirty flags.
* X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually
work. It was only skipping to the next page.
* Adjusted the PPC code to at least compile.
No measurable effect for the -j8 Haiku image build time, though the kernel time
drops minimally.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96
* relocated Trash dirs to volume roots by modifying find_directory() to report the trash location as volume/Trash.
* FSUtils no longer creates /home/Desktop on every volume.
* TrashWatcher now keeps icons in sync on all volumes.
* Simplified FSGetDeskDir since it no longer has to worry about getting the desk directory on any volume other than the root.
* Relocated trash context menu logic to BContainerWindow so it can also be used at the volume roots.
* DesktopPoseView now creates a virtual Trash pose representing the trash contents as before.
* Corrected typo: Model::WriteAttrKillForegin() -> Model::WriteAttrKillForeign().
Closes ticket #5245.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35085 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Pulled the physical page mapping functions out of vm_translation_map into
a new interface VMPhysicalPageMapper.
* Renamed vm_translation_map to VMTranslationMap and made it a proper C++
class. The functions in the operations vector have become methods.
* Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper
as far as possible (without actually writing new code).
* Adjusted the x86 and the PPC specifics accordingly (untested for the
latter). For the other architectures the build is, I'm afraid, seriously
broken.
The next steps will modify and extend the VMTranslationMap interface, so that
it will be possible to fix the bugs in vm_unmap_page[s]() and employ
architecture specific optimizations.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
situations:
* When mapping the page the page table entry should not have been marked
"present" before, i.e. it would not have been cached anyway.
* When the page table entry's accessed flag wasn't set, the entry hadn't been
cached either.
Speeds up the -j8 Haiku image build only minimally, but the total kernel time
drops about 9%.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35062 a95241bf-73f2-0310-859d-f6bbb57e9c96
* ioapic_init(): map_physical_memory() was called for already mapped
addresses. This worked fine, but only because the x86 page mapping code
didn't mind.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35059 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Removed the page_{table,directory}_entry structures. The bit fields are
nice in principle, but modifying individual flags this way is inherently
non-atomic and we need atomicity in some situations.
* Use atomic operations in protect_tmap(), clear_flags_tmap(), and others.
* Aligned the query_tmap_interrupt() semantics with that of query_tmap().
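The difference in a nutshell, using std::atomic only for illustration (the
kernel operates on raw page table entries with its own atomic_and()/
atomic_or(); the bit values below are the usual x86 ones):

#include <atomic>
#include <cstdint>

// A few x86-style PTE bits.
const uint32_t kPresent  = 1 << 0;
const uint32_t kWritable = 1 << 1;
const uint32_t kAccessed = 1 << 5;
const uint32_t kDirty    = 1 << 6;

using page_table_entry = std::atomic<uint32_t>;

inline void
pte_set_flags(page_table_entry& entry, uint32_t flags)
{
	// Atomic "or" -- no lost updates, unlike a bit-field read-modify-write.
	entry.fetch_or(flags, std::memory_order_relaxed);
}

inline uint32_t
pte_clear_flags(page_table_entry& entry, uint32_t flags)
{
	// Returns the previous value, so the caller can see whether
	// accessed/dirty were set before clearing them.
	return entry.fetch_and(~flags, std::memory_order_relaxed);
}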
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35058 a95241bf-73f2-0310-859d-f6bbb57e9c96
Entering the boot loader menu has been changed/simplified while reducing the
boot time by 0.75 seconds.
Now it is enough to hold one of shift/Esc/F8/F12/Space. Thanks!
I've also updated the boot loader documentation to reflect the change, but I only mentioned holding shift.
I know that changing the documentation directly is not preferred anymore, but I wanted to make sure this
patch is complete.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35050 a95241bf-73f2-0310-859d-f6bbb57e9c96
preserve the dirty flag of the mapping without having to potentially move the
page to the modified queue. This lifts the (ignored) requirement that the
pages to be unmapped must not be busy.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35023 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Use atomic_{and,or}() instead of atomic_set(), as there are no built-ins
for the latter.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35021 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Fix the boot net stack's IPService to base the payload size calculation
on the IP-indicated packet size instead of the Ethernet payload.
Thanks a lot! Fixes ticket #5234.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35016 a95241bf-73f2-0310-859d-f6bbb57e9c96
This makes appending the pages to the active queue more efficient and we
don't need the vm_page::is_cleared bit anymore.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35011 a95241bf-73f2-0310-859d-f6bbb57e9c96
short and quite hot, so mutexes just cause more overhead due to frequent
rescheduling than waiting for the spinlocks does. The free and clear queues
are additionally protected by a R/W lock, which is mostly read-locked, save
for rare cases like allocating page runs.
The total -j8 Haiku image build speedup is marginal. The kernel time drops
about 8%, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35004 a95241bf-73f2-0310-859d-f6bbb57e9c96
that causes a locking order reversal (condition variable lock <-> system
profiler lock) and thus a potential deadlock. Instead we use the thread
blocking API directly. Fixes #5229.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35003 a95241bf-73f2-0310-859d-f6bbb57e9c96
We can only access the page if it is not busy. Fixes #5228.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34980 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Various smaller fixes.
* Used add_debugger_command_etc() and added more verbose usage message.
* Added option "-m" which iterates through all address spaces and finds out
which virtual pages are mapped to the page.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34979 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added Debug{First,Next}() methods to allow easy iteration through the
address spaces in kernel debugger commands.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34978 a95241bf-73f2-0310-859d-f6bbb57e9c96
device path + child partition name. When a "raw" device is unpublished the node
removal notification triggers the partition and child partitions to be
unpublished/removed. Since in that case the "raw" node is already unpublished
trying to resolve it in devfs_unpublish_partition() again to unpublish the child
partitions would fail, leaving the child partition nodes behind. When a new raw
device would then become available, publishing its partitions would fail
because of these left-behind nodes, causing bug #4587. Seeing that this code is
more compact and straightforward anyway, I don't quite see why it was changed
in the
first place.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34967 a95241bf-73f2-0310-859d-f6bbb57e9c96
LockAllSourceCaches()) and moved it to the beginning of the file.
* Removed sMappingLock and adjusted the locking policy for mapping/unmapping
pages: Since holding the lock of the affected pages' caches is already
required, that does now protect the page's mappings, too. The area mappings
are protected by the translation map lock, which we always acquire anyway
when mapping, unmapping, or looking up mappings.
The change results in a -j8 Haiku image build speedup of almost 10%. The
total kernel time drops almost 30%.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34965 a95241bf-73f2-0310-859d-f6bbb57e9c96
Apparently (at least when running in VMware >=2) the boot loader can still
map the same physical page more than once -- in the ACPI or HPET code I
suppose -- which would lead to this situation.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34954 a95241bf-73f2-0310-859d-f6bbb57e9c96
following function might otherwise not be shown correctly in ELF tools.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34952 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Moved hash computations out of the critical sections.
* Replaced the LRU entry queue by an array of entry "generations", each
containing a sparse array of entries of that generation. Whenever a
generation is full, we clear the oldest generation and continue with that
one. The main advantage of this algorithm is that the entry cache's mutex
could be replaced by an r/w lock that most of the time only has to be
read-locked in Lookup(). This dramatically decreases contention on that lock
(a simplified model follows below).
The total -j8 Haiku image build speedup is marginal, but the kernel time
drops about 7% (now being smaller than the real time).
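A simplified, standalone model of the generation idea (the parameters and the
string-keyed table are assumptions; the real entry cache stores dir/name pairs
and uses the kernel's r/w lock):

#include <array>
#include <shared_mutex>
#include <string>
#include <unordered_map>

const int kGenerationCount = 8;
const int kGenerationSize = 256;

struct EntryCacheModel {
	std::shared_mutex lock;
	std::unordered_map<std::string, int> index;	// name -> generation
	std::array<std::array<std::string, kGenerationSize>, kGenerationCount>
		generations;
	int current = 0;
	int nextSlot = 0;

	bool Lookup(const std::string& name)
	{
		// The common operation needs only the shared (read) lock.
		std::shared_lock<std::shared_mutex> locker(lock);
		return index.count(name) != 0;
	}

	void Add(const std::string& name)
	{
		std::unique_lock<std::shared_mutex> locker(lock);
		if (nextSlot == kGenerationSize) {
			// Current generation full: recycle the oldest one wholesale,
			// instead of doing per-lookup LRU bookkeeping.
			current = (current + 1) % kGenerationCount;
			for (std::string& old : generations[current]) {
				auto it = index.find(old);
				if (it != index.end() && it->second == current)
					index.erase(it);
				old.clear();
			}
			nextSlot = 0;
		}
		generations[current][nextSlot++] = name;
		index[name] = current;
	}
};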
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34950 a95241bf-73f2-0310-859d-f6bbb57e9c96
don't need it. That prevents us from ending up with the page being mapped
multiple times (under VMware at least) and thus fixes #5208.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34948 a95241bf-73f2-0310-859d-f6bbb57e9c96
allocate all pages the given range intersects with. When not page aligned
it could fail to allocate the last page.
* mmu_free():
- Adjusted semantics to be compatible with mmu_map_physical_memory().
- The validity check was broken, because page number and addresses were
mixed, and because KERNEL_BASE + kMaxKernelSize doesn't mark the end of
the allocated virtual ranges.
- The final check against sNextVirtualAddress was broken.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34947 a95241bf-73f2-0310-859d-f6bbb57e9c96
protected by the global vnodes lock. The contention mostly moves to other
locks, though. The total -j8 Haiku image build time is only reduced
minimally.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34938 a95241bf-73f2-0310-859d-f6bbb57e9c96
contention about two orders of magnitude. Most of it seems to be taken over
by other locks, though. Yields only small improvements for the -j8 Haiku
image build.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34937 a95241bf-73f2-0310-859d-f6bbb57e9c96
table. It is now inline and uses double-checked locking. This reduces the
contention on the lock to insignificant. Total -j8 Haiku image build speedup
is marginal, but the total kernel time drops 12%.
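The pattern in isolation (only the double-checked locking itself is taken from
the change; the lazily created lookup table below is made up for illustration):

#include <atomic>
#include <mutex>
#include <string>
#include <unordered_map>

using Table = std::unordered_map<std::string, int>;

static std::atomic<Table*> sTable(nullptr);
static std::mutex sTableLock;

static Table*
get_table()
{
	// Fast path: a single acquiring load, no lock taken once initialized.
	Table* table = sTable.load(std::memory_order_acquire);
	if (table != nullptr)
		return table;

	// Slow path: lock and re-check, so only one thread creates the table.
	std::lock_guard<std::mutex> locker(sTableLock);
	table = sTable.load(std::memory_order_relaxed);
	if (table == nullptr) {
		table = new Table;
		sTable.store(table, std::memory_order_release);
	}
	return table;
}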
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34934 a95241bf-73f2-0310-859d-f6bbb57e9c96
access to a vm_page. It is basically an atomically accessed thread ID field
in the vm_page structure, which is explicitly set by macros marking the
critical sections. As a first positive effect I had to review quite a bit of
code and found several issues.
* Added several TODOs and comments. Some harmless ones, but also a few
troublesome ones in vm.cpp regarding page unmapping.
* file_cache: PrecacheIO::Prepare()/read_into_cache: Removed superfluous
vm_page_allocate_page() return value checks. It cannot fail anymore.
* Removed the heavily contended "pages" lock. We use different policies now:
- sModifiedTemporaryPages is accessed atomically.
- sPageDeficitLock and sFreePageCondition are protected by a new mutex.
- The page queues have individual locks (mutexes).
- Renamed set_page_state_nolock() to set_page_state(). Unless the caller says
otherwise, it now locks the affected page queues itself. Also changed
the return value to void -- we panic() anyway.
* set_page_state(): Add free/clear pages to the beginning of their respective
queues as this is more cache-friendly.
* Pages with the states PAGE_STATE_WIRED or PAGE_STATE_UNUSED are no longer
in any queue. They were in the "active" queue, but there's no good reason
to have them there. In case we decide to let the page daemon work the queues
(like FreeBSD) they would just be in the way.
* Pulled the common part of vm_page_allocate_page_run[_no_base]() into a helper
function. Also fixed a bug I introduced previously: The functions must not
call vm_page_unreserve_pages() on success, since they remove the pages from the
free/clear queue without decrementing sUnreservedFreePages.
* vm_page_set_state(): Changed return type to void. The function cannot really
fail and no-one was checking it anyway.
* vm_page_free(), vm_page_set_state(): Added assertion: The page must not be
free/clear before. This is implied by the policy that no-one is allowed to
access free/clear pages without holding the respective queue's lock, which is
not the case at this point. This found the bug fixed in r34912.
* vm_page_requeue(): Added general assertions. panic() when requeuing of
free/clear pages is requested. Same reason as above.
* vm_clone_area(), B_FULL_LOCK case: Don't map busy pages. The implementation is
still not correct, though.
My usual -j8 Haiku build test runs another 10% faster, now. The total kernel
time drops about 18%. As hoped the new locks have only a fraction of the old
"pages" lock contention. Other locks lead the "most wanted list" now.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34933 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Fixed coding style issues pointed out by Axel.
* Fixed potential buffer overflow and fault in default-client-up code path
(OF counts terminating zero char, too).
* Added an intermediate fallback to parsing the boot path.
* Added himself to the copyright holders.
Thanks a lot! Fixes ticket #5189.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34918 a95241bf-73f2-0310-859d-f6bbb57e9c96
mapping is actually present. This would have resulted in page 0 being freed
over and over again, if we hadn't also incorrectly tried to look up the page
by the virtual instead of the physical address. So we were actually freeing
random pages. Fortunately the virtual addresses are kernel addresses, so that
the affected pages lay beyond 2 GB and probably weren't used at this point
yet.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34912 a95241bf-73f2-0310-859d-f6bbb57e9c96
Since the former is no longer guarded by any lock, there's a race condition
with vm_page_unreserve_pages() which would cause us to wait longer than
necessary.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34898 a95241bf-73f2-0310-859d-f6bbb57e9c96
sHeapBase will probably not point to memory in the heap area. Use
sFreeHeapBase instead.
* When reserving the heap area range fails, set sHeapBase to NULL, so we'll
later know about the fact.
* hoardSbrk(): When resizing the area fails, we'll now try to allocate a new
one, if the former failure was not due to an "out of memory" situation.
E.g. if the heap range reservation failed or, if we just have exhausted the
range, another area could be in the way. Also, when mmap()ing over
malloc()ed memory, the heap area could be split in two, with the first part
retaining the old area ID, thus preventing resizing as well. Fixed #5168.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34897 a95241bf-73f2-0310-859d-f6bbb57e9c96
sUnreservedFreePages which tracks the difference between free/clear and
reserved pages. Access to it uses atomic operations, which allows the three
page (un)reservation functions to avoid locking in most cases, thus reducing
contention on the "pages" lock.
In the -j8 Haiku image build that decreases the contention of the "pages"
lock to about one third of the previous value. As a positive side effect the
VMCache lock contention drops about the same factor. The total build speedup
is about 20%, the total kernel time drops about 20%.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34888 a95241bf-73f2-0310-859d-f6bbb57e9c96
improve the reliability as long as our slab implementation is a PITA.
* Removed an assertion that will no longer work (due to the DoublyLinkedList
changes).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34877 a95241bf-73f2-0310-859d-f6bbb57e9c96
the next/previous pointers. There might be more, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34875 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Renamed page_queue to VMPageQueue and made it a proper C++ class. Use
DoublyLinkedList instead of own list code.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34874 a95241bf-73f2-0310-859d-f6bbb57e9c96
playing with the unused list manually. This also clears the vnode's unused
flag, which wasn't done before and would thus cause corruption of the
unused list a bit later.
* fs_unmount():
- Fixed an iteration bug I introduced previously. The iterator would be
advanced twice per iteration, leading to NULL pointer dereferencing
when the vnode count was odd and skipping the checks for every other
vnode.
- All vnodes are going to be freed, so vnode_to_be_freed() has to be invoked
for every one of them. The code wasn't adjusted correctly when
introducing the hot vnodes handling.
* Adjusted/improved some comments.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34871 a95241bf-73f2-0310-859d-f6bbb57e9c96
available in the boot loader.
* Simplified parse_ip_address() and use style conforming identifiers.
* Some cleanup.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34870 a95241bf-73f2-0310-859d-f6bbb57e9c96
used for it that is.
* The main cause for the heavy contention of the unused vnodes mutex was that
relatively few vnodes are actually used for a longer time. Mainly those are
the volume roots, mmap()ed files, and the files opened by programs. A good
deal of nodes -- particularly directories -- are just referenced for a very
short time, e.g. to resolve a path to a contained entry. This caused those
nodes to be added to and removed from the unused vnodes list very
frequently, thus resulting in a high contention of the mutex guarding it.
To address the problem I've introduced an approximation of a set of "hot"
vnodes, i.e. vnodes that have recently been marked unused. They are stored
in an array that, by means of an r/w lock and atomic operations, can most
of the time be accessed concurrently. Whenever it gets full, it is flushed
to the actual unused vnodes list (a rough sketch of the idea follows below).
* dec_vnode_ref_count(): No longer checks the unused vnode count every time.
The newly called vnode_unused() does so only from time to time and indicates
when the caller is expected to free some of the unused vnodes. As a side effect
this also fixes a bug I previously introduced: The unused vnode to be freed
was marked busy without being locked first.
The -j8 Haiku image test build shows that the changes reduce the contention
of the unused vnode list mutex to virtually zero without introducing any
significant contention of the new r/w lock. The VMCache lock contention also
seems to be decreased somewhat, which is probably not that surprising
considering that the page writer acquires/releases vnode references with the
cache lock held. The "pages" lock takes over even more contention, now
causing more than 100000 waits per second.
The total build time reduction is about 4.5%. Kernel time drops more than
10%.
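A rough standalone sketch of the hot-array idea using std::shared_mutex (the
real code uses the kernel's r/w lock and vnode structures; the capacity and
names are made up):

#include <algorithm>
#include <atomic>
#include <list>
#include <mutex>
#include <shared_mutex>

struct Vnode;

const int kHotCapacity = 32;

static std::shared_mutex sHotLock;
static Vnode* sHotVnodes[kHotCapacity];
static std::atomic<int> sHotCount(0);

static std::mutex sUnusedLock;
static std::list<Vnode*> sUnusedVnodes;

void
vnode_unused(Vnode* vnode)
{
	{
		// Common case: read lock plus one atomic increment, no contended
		// mutation of the global unused list.
		std::shared_lock<std::shared_mutex> readLocker(sHotLock);
		int index = sHotCount.fetch_add(1);
		if (index < kHotCapacity) {
			sHotVnodes[index] = vnode;
			return;
		}
	}

	// The array ran full -- flush it to the real unused list.
	std::unique_lock<std::shared_mutex> writeLocker(sHotLock);
	int count = std::min(sHotCount.load(), kHotCapacity);
	{
		std::lock_guard<std::mutex> listLocker(sUnusedLock);
		for (int i = 0; i < count; i++)
			sUnusedVnodes.push_back(sHotVnodes[i]);
		sUnusedVnodes.push_back(vnode);
	}
	sHotCount.store(0);
}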
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34866 a95241bf-73f2-0310-859d-f6bbb57e9c96
its own header/source files.
* Changed vnode's bit fields to a single, atomically changeable int32 using
flags instead. Added respective accessor methods.
* Added a per-vnode mutex-like lock, which uses 2 bits of the structure and
32 global "buckets" which are used for waiter lists for the vnode locks.
* Reorganized the VFS locking a bit:
Renamed sVnodeMutex to sVnodeLock and made it an r/w lock. In most situations
it is now only read-locked to reduce its contention. The per-vnode locks guard
the fields of the vnode structure and the newly introduced sUnusedVnodesLock
has taken over the job to guard the unused vnodes list.
The main intent of the changes was to reduce the contention of the sVnodeMutex,
which was partially successful. In my standard -j8 Haiku image build test the
new sUnusedVnodesLock took over about a fourth of the former sVnodeMutex
contention, but the sVnodeLock and the vnode locks have virtually no contention
to speak of, now. A lot of contention migrated to the unrelated "pages" mutex
(another bottleneck). The overall build time dropped about 10 %.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34865 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Skip mappings to non-physical memory in the PPC MMU code. Gets the
PPC kernel booting a little further.
Thanks! Fixes ticket #5193.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34863 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Fix a typo in the comments: unintialized -> uninitialized.
Thanks a lot!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34857 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Fix duplicate assignment which is probably a merging artifact.
This patch was also a requirement for a working PPC KDL prompt. I didn't
apply the patches in order... Thanks a lot!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34856 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Fix a warning in VM tracing output, which prevented the compilation since
warnings are treated as errors.
Thanks!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34855 a95241bf-73f2-0310-859d-f6bbb57e9c96
* The kernel's _start entry function expects now a second argument, the
current CPU index. The PPC boot loader didn't initialize GPR4, passing
its second argument, the kernel entry address, as CPU index, causing
smp_cpu_rendezvous() to loop forever. This fix gets the PPC boot to a
kernel debug prompt. The CPU index is currently fixed to 0.
Thanks a lot!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34854 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Choosing Reboot from the menu will now reboot the system instead of
returning to the OpenFirmware prompt. Places, where returning to the
prompt was desirable have been adapted to maintain their current behavior.
Thanks!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34852 a95241bf-73f2-0310-859d-f6bbb57e9c96
* If retrieving an IP address from the non-standard /chosen/dhcp-response
fails, try to parse it from /options/default-client-ip instead.
Thanks!
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34850 a95241bf-73f2-0310-859d-f6bbb57e9c96
in VMCache, but, if anything, that makes a -j8 build marginally slower. I
guess busy vnodes are encountered so rarely that the additional overhead for
a more intelligent algorithm isn't really worth it. Reduced the wait time,
though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34845 a95241bf-73f2-0310-859d-f6bbb57e9c96
duplicating firmware names in firmware_get() of the freebsd compat layer.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34844 a95241bf-73f2-0310-859d-f6bbb57e9c96
stores the value right-shifted by 12 bits, now, since those bits are not
relevant. This saves some bits and also resolves a TODO.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34842 a95241bf-73f2-0310-859d-f6bbb57e9c96
destruction and VMVnodeCache::AcquireUnreferencedStoreRef(). Solved by
adding a flag to VMVnodeCache and letting AcquireUnreferencedStoreRef()
fail, if set.
* Added TODO regarding replacing the snooze() waiting for busy vnodes.
* get_vnode(): Unlock sVnodeMutex while calling the put_vnode() hook on
error.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34841 a95241bf-73f2-0310-859d-f6bbb57e9c96
it do that? This fixes the kernel build, and probably GCC4, too.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34840 a95241bf-73f2-0310-859d-f6bbb57e9c96
sure that the kernel's frame buffer console points to the right data.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34835 a95241bf-73f2-0310-859d-f6bbb57e9c96
* When interrupts are disabled, it is still safe to capture the kernel stack
trace. The respective TODO preceded the introduction of the "kernelOnly"
flag.
* Actually made "kernelOnly" work. The wrong flag was passed to
arch_debug_get_stack_trace() in case it was false, so we never captured
user stack traces.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34832 a95241bf-73f2-0310-859d-f6bbb57e9c96
directory iteration code, a mutex to protect the iteration cookie and one
to protect the cookie list have been introduced.
Overall this reduces the contention of the rootfs lock significantly. The
Haiku image -j8 build gets only marginally faster though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34831 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Changed the rw_lock_{read,write}_unlock() return values to void. They
returned a value != B_OK only in case of user error and no-one checked them
anyway.
* Optimized rw_lock_read_[un]lock(). They are inline now and as long as
there's no contending write locker, they will only perform an atomic_add().
* Changed the semantics of nested locking after acquiring a write lock: Read
and write locks are counted separately, so read locks no longer implicitly
become write locks. This does e.g. make degrading a write lock to a read
lock by way of read_lock + write_unlock (as used in the VM) actually work.
These changes speed up the -j8 Haiku image build on my machine by a few
percent, but more interestingly they reduce the total kernel time by 25 %.
Apparently we get more contention on other locks, now.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34830 a95241bf-73f2-0310-859d-f6bbb57e9c96
the function shall nevertheless return the length of the string that would
be written, if the buffer were large enough.
Added a touch of C++ while doing that. :-)
* Fixed the instances in boot loader, kernel, and kernel modules where the
wrong semantics were expected. The majority of uses actually.
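A small userland example of the corrected semantics (standard C99 snprintf()
behaves the same way):

#include <stdio.h>
#include <stdlib.h>

int
main()
{
	// No writing, just measuring: with a NULL/zero-sized buffer the return
	// value is still the length the full string would have had.
	int length = snprintf(NULL, 0, "%s:%d", "vnode", 42);

	char* buffer = (char*)malloc(length + 1);
	snprintf(buffer, length + 1, "%s:%d", "vnode", 42);
	printf("'%s' (%d chars)\n", buffer, length);
	free(buffer);
	return 0;
}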
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34826 a95241bf-73f2-0310-859d-f6bbb57e9c96
debug heap implementation.
* Added libroot_debug.so to the DevelopmentMin optional package. Since it has
the same soname as the standard libroot, it can simply be specified in
LD_PRELOAD to run a program with that version.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34788 a95241bf-73f2-0310-859d-f6bbb57e9c96
automatically and a pre-loaded library will not be loaded again, when it's
also a dependency.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34786 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added VMCache::MovePage() and MoveAllPages() to move pages between caches.
* VMAnonymousCache:
- _MergeSwapPages(): Avoid doing anything, if neither cache has swapped out
pages.
- _MergeSwapPages() does now also remove source cache pages that are
shadowed by consumer swap pages. This allows us to call _MergeSwapPages()
before _MergePagesSmallerSource(), save the swap page shadowing check
there and get rid of the vm_page::merge_swap flag. This is an
optimization based on the assumption that usually none or only few pages
are swapped out, so we save a lot of checks.
- Implemented _MergePagesSmallerConsumer() as an alternative to
_MergePagesSmallerSource(). The former is used when the source cache has
more pages than the consumer cache. It iterates over the consumer cache's
pages, moves them to the source and finally moves all pages back to the
consumer. The final move is relatively cheap (though unfortunately we
still have to update all pages' vm_page::cache field), so that overall we
save iterations of the main loop with the more expensive checks.
The optimizations particularly improve the common fork()+exec*() situations.
fork() uses CoW, which is implemented by putting two new empty caches between
the to-be-copied area and its cache. exec*() destroys one copy of the area and
its cache, and thus causes merging of the other new cache with the old cache.
Since this usually happens in a very short time, the old cache still contains
many pages and the new cache only a few. Previously the many pages were all
checked and moved individually. Now we do that for the few pages instead (a
simplified sketch follows below).
A very extreme example of this situation is the Haiku image build. jam has a
huge heap (> 200 MB) and it fork()s+exec*()s for every action to be executed.
Since during the cache merging the cache is locked, any write access to a
heap page causes jam to block until the cache merging is done. Formerly that
took so long that it killed a lot of parallelism in multi-job builds. That
could be observed particularly well when lots of small actions were executed
(like the Link, XRes, Mimeset, SetType, SetVersion combos when building
executables/libraries/add-ons). Those look dramatically better now.
The overall speed improvement for a -j8 image build on my machine is only
about 15%, though.
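The "iterate over the smaller side" idea as a simplified model
(std::unordered_map stands in for the caches' page lists; the real code also
has to deal with swap pages and busy pages):

#include <cstdint>
#include <unordered_map>

struct Page;
using PageMap = std::unordered_map<uint64_t, Page*>;	// offset -> page

void
merge_caches(PageMap& source, PageMap& consumer)
{
	if (source.size() > consumer.size()) {
		// Few consumer pages: move them into the source first (consumer
		// pages shadow source pages at the same offset), then hand the
		// complete page set back to the consumer in one go.
		for (auto& entry : consumer)
			source[entry.first] = entry.second;
		consumer.swap(source);
	} else {
		// Few source pages: move only those not shadowed by the consumer.
		for (auto& entry : source)
			consumer.insert(entry);
	}
	source.clear();
}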
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34784 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Pulled the code moving the pages out of Merge() into a separate method.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34778 a95241bf-73f2-0310-859d-f6bbb57e9c96
another. The code originates from vm_copy_on_write_area(). We now generate
the VM cache tracing entries, though.
* count_writable_areas() -> VMCache::CountWritableAreas()
* Added debugger command "cache_stack" which is enabled when VM cache tracing
is enabled. It prints the source caches of a given cache or area at the
time of a specified tracing entry.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34751 a95241bf-73f2-0310-859d-f6bbb57e9c96
- Replaced the "userOnly" parameter by a "flags" parameter, that allows to
specify kernel and userland stack traces individually.
- x86, m68k: Don't always skip the first frame as that prevents the caller
from being able to record its own address.
* capture_tracing_stack_trace(): Replaced the "userOnly" parameter by
"kernelOnly", since one is probably always interested in the kernel stack
trace, but might not want the userland stack trace.
* Added stack trace support for VM cache kernel tracing.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34742 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Lock the notification service and check HasListeners(), so we don't prepare
an event message needlessly.
* The on-stack buffer for the event message was too small for I/O operation
related events. Now a larger buffer belonging to the roster object is used.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34737 a95241bf-73f2-0310-859d-f6bbb57e9c96
* Added Lock()/Unlock() for explicit locking by a service user.
* Added NotifyLocked() and made Notify() inline.
* Added HasListeners() so one can check whether there is a listener at all
before preparing the event message.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34736 a95241bf-73f2-0310-859d-f6bbb57e9c96
be run again or generated/build/BuildConfig needs to be adjusted manually.
* Removed bochs debug hack.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34721 a95241bf-73f2-0310-859d-f6bbb57e9c96
Fixes #5152.
* _get_port_message_info_etc(): Check whether the port still exists and is not
closed and empty in the loop. Though actually it shouldn't be necessary
(same in the other functions), since Wait() would return an error, if the
port was closed/deleted. Well, paranoia... :-)
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34713 a95241bf-73f2-0310-859d-f6bbb57e9c96