haiku/build/config_headers/kernel_debug_config.h

#ifndef KERNEL_DEBUG_CONFIG_H
#define KERNEL_DEBUG_CONFIG_H
// Master switch:
// 0: Disables all debug code that hasn't been enabled otherwise.
// 1: Enables some lightweight debug code.
// 2: Enables more debug code. Will impact performance.
#define KDEBUG_LEVEL 2
#define KDEBUG_LEVEL_2 (KDEBUG_LEVEL >= 2)
#define KDEBUG_LEVEL_1 (KDEBUG_LEVEL >= 1)
#define KDEBUG_LEVEL_0 (KDEBUG_LEVEL >= 0)
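// Illustrative sketch, not part of the original header (the flag and function
// names below are hypothetical): the level macros expand to integer constant
// expressions, so a feature flag defined in terms of them can be tested both
// by the preprocessor and in ordinary C code.
#if 0
#define MY_EXTRA_CHECKS	KDEBUG_LEVEL_2	// hypothetical feature flag

#if MY_EXTRA_CHECKS
	// compiled only when KDEBUG_LEVEL >= 2
#endif

static void
my_example(void)
{
	if (MY_EXTRA_CHECKS) {
		// the compiler drops this branch entirely when the flag evaluates to 0
	}
}
#endif	// 0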
// general kernel debugging
// Enables kernel ASSERT()s and various checks; locking primitives aren't
// benaphore-style.
#define KDEBUG KDEBUG_LEVEL_2
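// Illustrative sketch, a simplified assumption about how KDEBUG is consumed
// (the macro name is hypothetical; the kernel's real ASSERT() lives in its
// debug headers): assertion-style checks typically compile away unless KDEBUG
// is set.
#if 0
#if KDEBUG
#	define MY_ASSERT(expression) \
		do { \
			if (!(expression)) \
				panic("ASSERT FAILED: %s", #expression); \
		} while (false)
#else
#	define MY_ASSERT(expression) do {} while (false)
#endif
#endif	// 0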
// Size of the heap used by the kernel debugger.
#define KDEBUG_HEAP (64 * 1024)
// Set to 0 to disable support for kernel breakpoints.
#define KERNEL_BREAKPOINTS 1
// Enables the debug syslog feature (accessing the previous syslog in the boot
// loader) by default. Can be overridden in the boot loader menu.
#define KDEBUG_ENABLE_DEBUG_SYSLOG KDEBUG_LEVEL_1
// block/file cache
// Enables debugger commands.
#define DEBUG_BLOCK_CACHE KDEBUG_LEVEL_1
// Enables checks that non-dirty blocks really aren't changed. Seriously
// degrades performance when the block cache is used heavily.
#define BLOCK_CACHE_DEBUG_CHANGED KDEBUG_LEVEL_2
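// Illustrative sketch (hypothetical structure and helper, not the actual
// block cache code): verifying that a non-dirty block really is unchanged
// means keeping a pristine snapshot of its clean contents and comparing it,
// which is what makes this option so expensive.
#if 0
#include <stdbool.h>
#include <string.h>

struct my_cached_block {
	void*	data;		// current block contents
	void*	snapshot;	// copy taken while the block was known to be clean
	bool	is_dirty;
};

static bool
my_block_unchanged(const struct my_cached_block* block, size_t blockSize)
{
	if (block->is_dirty)
		return true;	// dirty blocks are allowed to differ
	return memcmp(block->data, block->snapshot, blockSize) == 0;
}
#endif	// 0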
// Enables a global list of file maps and related debugger commands.
#define DEBUG_FILE_MAP KDEBUG_LEVEL_1
// heap / slab
// Initialize newly allocated memory with something non-zero.
#define PARANOID_KERNEL_MALLOC KDEBUG_LEVEL_2
// Check for double free, and fill freed memory with 0xdeadbeef.
#define PARANOID_KERNEL_FREE KDEBUG_LEVEL_2
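// Illustrative sketch (hypothetical helper; the fixed-width types from Haiku's
// SupportDefs.h are assumed): "fill freed memory with 0xdeadbeef" simply means
// stamping a recognizable pattern over the block, so a later use-after-free
// stands out immediately in the kernel debugger.
#if 0
static void
my_fill_freed_block(void* block, size_t size)
{
	uint32* pointer = (uint32*)block;
	for (size_t i = 0; i < size / sizeof(uint32); i++)
		pointer[i] = 0xdeadbeef;
}
#endif	// 0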
// Validate sanity of the heap after each operation (slow!).
#define PARANOID_HEAP_VALIDATION 0
// Store size, thread and team info at the end of each allocation block.
// Enables the "allocations*" debugger commands.
#define KERNEL_HEAP_LEAK_CHECK 0
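// Illustrative sketch (hypothetical layout; thread_id/team_id are the usual
// Haiku typedefs): leak checking appends a small record to every allocation so
// the "allocations*" commands can later report the size and the allocating
// thread/team.
#if 0
struct my_allocation_track_info {
	size_t		size;	// requested allocation size
	thread_id	thread;	// thread that made the allocation
	team_id		team;	// team that thread belonged to
};
// stored at: (uint8*)allocation + allocationSize - sizeof(my_allocation_track_info)
#endif	// 0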
// Enables the "allocations*" debugger commands for the slab.
#define SLAB_ALLOCATION_TRACKING 0
// interrupts
// Adds statistics and an unhandled counter per interrupt. Enables the "ints"
// debugger command.
#define DEBUG_INTERRUPTS KDEBUG_LEVEL_1
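// Illustrative sketch (hypothetical structure): the per-interrupt statistics
// are plain counters bumped in the dispatch path and printed by the "ints"
// debugger command.
#if 0
struct my_interrupt_stats {
	int64	handled_count;		// a handler claimed the interrupt
	int64	unhandled_count;	// no handler claimed it
};
#endif	// 0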
// semaphores
// Enables tracking of the last threads that acquired/released a semaphore.
#define DEBUG_SEM_LAST_ACQUIRER KDEBUG_LEVEL_1
// SMP
// Enables spinlock caller debugging. When acquiring a spinlock twice on a
// non-SMP machine, this will give a clue as to who locked it the first time.
// Furthermore (also on SMP machines) the "spinlock" debugger command will be
// available.
#define DEBUG_SPINLOCKS KDEBUG_LEVEL_2
#define DEBUG_SPINLOCK_LATENCIES 0
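// Illustrative sketch (hypothetical fields and function; the real spinlock
// type and acquire path differ): caller debugging comes down to remembering
// who took the lock, so a double acquire on a non-SMP machine can name the
// first owner, and the "spinlock" command has something to print.
#if 0
struct my_debug_spinlock {
	int32	lock;
	void*	last_caller;	// return address recorded on acquire
};

static void
my_acquire_spinlock(struct my_debug_spinlock* lock)
{
	while (atomic_test_and_set(&lock->lock, 1, 0) != 0)
		;	// spin (a non-SMP debug build would panic and report last_caller)
	lock->last_caller = __builtin_return_address(0);
}
#endif	// 0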
// VM
// Enables the vm_page::queue field, i.e. tracks which queue the page should
// be in.
#define DEBUG_PAGE_QUEUE 0
// Enables the vm_page::access_count field, which is used to detect invalid
// concurrent access to the page.
#define DEBUG_PAGE_ACCESS KDEBUG_LEVEL_2
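// Illustrative sketch (hypothetical macro names; the atomic_*() functions
// declared in Haiku's SupportDefs.h are assumed): the idea is that code
// touching a vm_page brackets the access with macros that atomically mark the
// page, so a second, concurrent accessor is detected and panic()s.
#if 0
#define MY_PAGE_ACCESS_START(page) \
	do { \
		if (atomic_test_and_set(&(page)->access_count, 1, 0) != 0) \
			panic("concurrent access to vm_page %p", (page)); \
	} while (false)

#define MY_PAGE_ACCESS_END(page) \
	atomic_set(&(page)->access_count, 0)
#endif	// 0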
// Enables a global list of all vm_cache structures.
#define DEBUG_CACHE_LIST KDEBUG_LEVEL_1
// Enables swap support.
#define ENABLE_SWAP_SUPPORT 1
// Use the selected allocator as generic memory allocator (malloc()/free()).

#define USE_DEBUG_HEAP_FOR_MALLOC 0
	// Heap implementation with additional debugging facilities.

#define USE_GUARDED_HEAP_FOR_MALLOC 0
	// Heap implementation that allocates memory so that the end of the
	// allocation always coincides with a page end and is followed by a guard
	// page which is marked non-present. Out of bounds accesses (both reads and
	// writes) therefore cause a crash (unhandled page fault). Note that this
	// allocator is neither speed nor space efficient, indeed it wastes huge
	// amounts of pages and address space so it is quite easy to hit limits.
	// (See the illustrative sketch below.)

#define USE_SLAB_ALLOCATOR_FOR_MALLOC 1
	// Heap implementation based on the slab allocator (for production use).
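// Illustrative sketch (hypothetical helper; B_PAGE_SIZE as defined in Haiku's
// OS.h is assumed): the guarded heap places each allocation so that it ends
// exactly at a page boundary, with the following page left unmapped.
#if 0
static void*
my_guarded_allocation_address(void* pages, size_t pageCount, size_t size)
{
	// `pages` spans pageCount mapped pages followed by one unmapped guard
	// page; return an address so the allocation ends right at the guard page.
	uint8* guardPage = (uint8*)pages + pageCount * B_PAGE_SIZE;
	return guardPage - size;
		// any access at or past guardPage faults immediately
}
#endif	// 0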
// Replace the object cache with the guarded heap to force debug features. Also
// requires the use of the guarded heap for malloc.
#define USE_GUARDED_HEAP_FOR_OBJECT_CACHE 0
// Enables additional sanity checks in the slab allocator's memory manager.
#define DEBUG_SLAB_MEMORY_MANAGER_PARANOID_CHECKS 0
// Disables memory re-use in the guarded heap (freed memory is never reused and
// stays invalid, causing every access to crash). Note that this is an order of
// magnitude more space-inefficient than the guarded heap itself. Fully booting
// may not work at all due to address space waste.
#define DEBUG_GUARDED_HEAP_DISABLE_MEMORY_REUSE 0
// When set, limits the amount of available RAM (in MB).
//#define LIMIT_AVAILABLE_MEMORY 256
// Enables tracking of page allocations.
#define VM_PAGE_ALLOCATION_TRACKING 0
#endif // KERNEL_DEBUG_CONFIG_H