kernel/slab: Add missing support for CACHE_DONT_LOCK_KERNEL_SPACE.

This codepath is only hit for especially large malloc'd areas that
do not fit into an object cache, which is a rather rare case to
have in tandem with DONT_LOCK_KERNEL_SPACE, but we need to be
prepared for it in this code anyway.

Previously this codepath would probably have caused a hang due to
an attempted double lock of the kernel address space. Unfortunately
it appears our rw_locks do not actually detect that at present,
so I should likely investigate that next...

Change-Id: I3c9059d3e3939271beeeff221ebc921bc07ddf00
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4438
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Author: Augustin Cavalier, 2021-09-09 10:31:18 -04:00 (committed by waddlesplash)
Parent: 530f89aa6d
Commit: 2f001d82b6
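
The commit message notes that our rw_locks do not currently detect an
attempted double write lock by the same thread. As a purely hypothetical
sketch (this is not Haiku's actual rw_lock code; the type, field, and
function names are invented for illustration), such detection usually
amounts to remembering the write holder's thread id and asserting against
it on acquisition:

#include <assert.h>
#include <stdint.h>

/* Hypothetical lock type; "holder" records the thread currently holding
 * the lock for writing, or -1 if there is no write holder. */
struct example_rw_lock {
	int32_t holder;
	/* ... wait queue, reader count, etc. ... */
};

static void
example_rw_write_lock(struct example_rw_lock* lock, int32_t currentThread)
{
	/* A thread write-locking a lock it already holds would block on
	 * itself forever, so catch the mistake here instead of hanging. */
	assert(lock->holder != currentThread && "double write lock");

	/* ... block until no readers or writer remain, then take ownership ... */
	lock->holder = currentThread;
}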


@@ -705,7 +705,14 @@ MemoryManager::FreeRawOrReturnCache(void* pages, uint32 flags)
 	readLocker.Unlock();
 
 	if (area == NULL) {
-		// Probably a large allocation. Look up the VM area.
+		// Probably a large allocation.
+		if ((flags & CACHE_DONT_LOCK_KERNEL_SPACE) != 0) {
+			// We cannot delete areas without locking the kernel address space,
+			// so defer the free until we can do that.
+			deferred_free(pages);
+			return NULL;
+		}
+
 		VMAddressSpace* addressSpace = VMAddressSpace::Kernel();
 		addressSpace->ReadLock();
 		VMArea* area = addressSpace->LookupArea((addr_t)pages);
@@ -739,6 +746,12 @@ MemoryManager::FreeRawOrReturnCache(void* pages, uint32 flags)
 	size_t size = reference - (addr_t)pages + 1;
 	ASSERT((size % SLAB_CHUNK_SIZE_SMALL) == 0);
 
+	// Verify we can actually lock the kernel space before going further.
+	if ((flags & CACHE_DONT_LOCK_KERNEL_SPACE) != 0) {
+		deferred_free(pages);
+		return NULL;
+	}
+
 	// unmap the chunks
 	_UnmapChunk(area->vmArea, (addr_t)pages, size, flags);
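
For context, the deferred_free() calls added above queue the block so it
can be released later, from a context where taking the kernel address space
lock is allowed. A minimal sketch of that general pattern, with invented
names and a fixed-size queue purely for illustration (this is not the
actual deferred_free() implementation):

#include <stddef.h>

/* Fixed-size queue of blocks whose release has been postponed. */
#define MAX_DEFERRED 64

static void* sDeferredBlocks[MAX_DEFERRED];
static size_t sDeferredCount = 0;

/* Called from contexts that must not lock the kernel address space:
 * remember the block instead of freeing it immediately. */
static void
example_defer_free(void* block)
{
	if (sDeferredCount < MAX_DEFERRED)
		sDeferredBlocks[sDeferredCount++] = block;
}

/* Called later from a safe context (e.g. a maintenance thread) that may
 * lock the address space and really release the memory. */
static void
example_flush_deferred(void (*reallyFree)(void* block))
{
	while (sDeferredCount > 0)
		reallyFree(sDeferredBlocks[--sDeferredCount]);
}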