haiku/build/config_headers/kernel_debug_config.h
Ingo Weinhold (b4e5e49823)

MemoryManager:
* Added support for larger raw allocations (up to one large chunk, i.e. 128
  pages) in the slab areas. Even larger allocations get a dedicated area
  (haven't seen that happen yet, though). See the sketch below this list.
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Fixed a copy-and-paste bug: the meta chunks of the area to be
  freed would be added to the free lists instead of being removed from them.
  This would corrupt the lists and also lead to all kinds of misuse of meta
  chunks.
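
The size dispatch described in the first item boils down to the following
shape. This is only an illustrative sketch, not the actual MemoryManager code;
the helper functions, the 4 KiB page size, and the chunk constant are
assumptions:

/* Illustrative sketch only -- helpers and constants are assumptions, not the
 * real MemoryManager interface. */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE        4096
#define CHUNK_SIZE_LARGE (128 * PAGE_SIZE)	/* "one large chunk" */

/* hypothetical back ends */
static void* allocate_chunk(size_t size)
{
	printf("raw allocation from a slab area: %zu bytes\n", size);
	return NULL;
}

static void* allocate_dedicated_area(size_t size)
{
	printf("dedicated area created: %zu bytes\n", size);
	return NULL;
}

static void* allocate_raw(size_t size)
{
	/* up to one large chunk: serve the request from the slab areas */
	if (size <= CHUNK_SIZE_LARGE)
		return allocate_chunk(size);

	/* anything larger gets its own area */
	return allocate_dedicated_area(size);
}

int main(void)
{
	allocate_raw(64 * PAGE_SIZE);	/* stays within the slab areas */
	allocate_raw(256 * PAGE_SIZE);	/* larger than one large chunk */
	return 0;
}

Keeping everything up to one large chunk inside the existing slab areas avoids
creating and deleting a kernel area for every big raw allocation.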

object caches:
* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object
  caches, but the block allocator sets it on all power-of-two sized caches.
* object_cache_reserve_internal(): Detect recursion and don't wait in that
  case. The function could otherwise deadlock against itself, since
  HashedObjectCache::CreateSlab() allocates memory and can thus reenter it.
* object_cache_low_memory():
  - I missed some returns when reworking the function in r35254, so it could
    stop early and also leave the cache in maintenance mode, which would cause
    it to be ignored by the object cache resizer and the low memory handler
    from that point on.
  - Since ReturnSlab() can temporarily unlock the cache, the conditions weren't
    quite correct and too many slabs could be freed.
  - Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() can unlock the
  cache, the situation may have changed by the time it returns: there might be
  no empty slab available, only a partial one. The function would crash in
  that case. See the sketch after this list.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed debugger command "cache_info" to "slab_cache" to avoid confusion with
  the VMCache commands.
* ObjectCache::usage was no longer maintained after I introduced the
  MemoryManager. object_cache_get_usage() would thus always return 0 and the
  block cache would not be counted as cached memory. This was only of
  informational relevance, though.
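
Two of the object cache fixes above share a pattern: a helper that may drop
the cache lock forces its caller to re-check the cache state afterwards, and a
reserve path that can be reentered from its own memory allocation must notice
that and not wait on itself. A minimal sketch of that pattern, using
hypothetical types and pthread locking rather than the real slab internals:

/* Sketch only -- hypothetical types and helpers, not the slab allocator. */
#include <pthread.h>
#include <stdbool.h>

struct object_cache {
	pthread_mutex_t lock;
	bool            resizing;		/* a reserve is in progress */
	pthread_t       resizer;		/* the thread doing it */
	int             empty_slabs;	/* completely free slabs */
	int             partial_slabs;	/* partially used slabs */
};

/* Make sure at least one free object exists. Called with the lock held;
 * temporarily drops it while allocating a new slab. */
static bool cache_reserve(struct object_cache* cache)
{
	if (cache->resizing && pthread_equal(cache->resizer, pthread_self())) {
		/* Reentered from our own slab allocation -- waiting here would
		 * deadlock the thread against itself, so bail out instead. */
		return false;
	}

	cache->resizing = true;
	cache->resizer = pthread_self();

	pthread_mutex_unlock(&cache->lock);
	/* ... allocate a new slab; that allocation may call back into this
	 * cache, which is what the recursion check above guards against ... */
	pthread_mutex_lock(&cache->lock);

	cache->resizing = false;
	cache->empty_slabs++;
	return true;
}

bool cache_alloc(struct object_cache* cache)
{
	pthread_mutex_lock(&cache->lock);

	if (cache->empty_slabs == 0 && cache->partial_slabs == 0)
		cache_reserve(cache);

	/* The lock may have been dropped above, so the state can have changed:
	 * take whichever slab is available instead of assuming an empty one. */
	bool found = cache->partial_slabs > 0 || cache->empty_slabs > 0;

	pthread_mutex_unlock(&cache->lock);
	return found;
}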

slab allocator misc.:
* Disabled the object depots of block allocator caches for object sizes > 2 KB.
  Allocations of those sizes aren't common enough for the depots to yield any
  benefit.
* The slab allocator is now fully self-sufficient: it allocates its bootstrap
  memory from the MemoryManager, and the hash tables of HashedObjectCaches now
  use the block allocator instead of the heap.
* Added an option to use the slab allocator for malloc() and friends
  (USE_SLAB_ALLOCATOR_FOR_MALLOC); currently disabled. It works in principle
  and has virtually no lock contention, but handling for low memory situations
  is still missing. See the sketch below this list.
* Improved the output of some debugger commands.
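
USE_SLAB_ALLOCATOR_FOR_MALLOC is a plain compile-time switch (see the config
header below). A hedged sketch of how such a gate can route malloc() to one
back end or the other; the back-end functions here are stand-ins, not the
actual kernel routines:

/* Sketch only -- the back ends are stubs standing in for whatever the build
 * actually provides. */
#include <stddef.h>

#define USE_SLAB_ALLOCATOR_FOR_MALLOC 0		/* as in kernel_debug_config.h */

static void* slab_backend_alloc(size_t size) { (void)size; return NULL; }
static void* heap_backend_alloc(size_t size) { (void)size; return NULL; }

void* malloc(size_t size)
{
#if USE_SLAB_ALLOCATOR_FOR_MALLOC
	/* route everything through the slab allocator's block caches */
	return slab_backend_alloc(size);
#else
	/* default: the existing kernel heap */
	return heap_backend_alloc(size);
#endif
}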


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96
2010-01-25 13:46:58 +00:00


#ifndef KERNEL_DEBUG_CONFIG_H
#define KERNEL_DEBUG_CONFIG_H

// Master switch:
// 0: Disables all debug code that hasn't been enabled otherwise.
// 1: Enables some lightweight debug code.
// 2: Enables more debug code. Will impact performance.
#define KDEBUG_LEVEL 2

#define KDEBUG_LEVEL_2 (KDEBUG_LEVEL >= 2)
#define KDEBUG_LEVEL_1 (KDEBUG_LEVEL >= 1)
#define KDEBUG_LEVEL_0 (KDEBUG_LEVEL >= 0)


// general kernel debugging

// Enables kernel ASSERT()s and various checks, locking primitives aren't
// benaphore-style.
#define KDEBUG KDEBUG_LEVEL_2

// Size of the heap used by the kernel debugger.
#define KDEBUG_HEAP (64 * 1024)

// Set to 0 to disable support for kernel breakpoints.
#define KERNEL_BREAKPOINTS 1


// block/file cache

// Enables debugger commands.
#define DEBUG_BLOCK_CACHE KDEBUG_LEVEL_1

// Enables checks that non-dirty blocks really aren't changed. Seriously
// degrades performance when the block cache is used heavily.
#define BLOCK_CACHE_DEBUG_CHANGED KDEBUG_LEVEL_2

// Enables a global list of file maps and related debugger commands.
#define DEBUG_FILE_MAP KDEBUG_LEVEL_1


// heap

// Initialize newly allocated memory with something non zero.
#define PARANOID_KERNEL_MALLOC KDEBUG_LEVEL_2

// Check for double free, and fill freed memory with 0xdeadbeef.
#define PARANOID_KERNEL_FREE KDEBUG_LEVEL_2

// Validate sanity of the heap after each operation (slow!).
#define PARANOID_HEAP_VALIDATION 0

// Store size, thread and team info at the end of each allocation block.
// Enables the "allocations*" debugger commands.
#define KERNEL_HEAP_LEAK_CHECK 0


// interrupts

// Adds statistics and unhandled counter per interrupts. Enables the "ints"
// debugger command.
#define DEBUG_INTERRUPTS KDEBUG_LEVEL_1


// semaphores

// Enables tracking of the last threads that acquired/released a semaphore.
#define DEBUG_SEM_LAST_ACQUIRER KDEBUG_LEVEL_1


// SMP

// Enables spinlock caller debugging. When acquiring a spinlock twice on a
// non-SMP machine, this will give a clue who locked it the first time.
// Furthermore (also on SMP machines) the "spinlock" debugger command will be
// available.
#define DEBUG_SPINLOCKS KDEBUG_LEVEL_2

#define DEBUG_SPINLOCK_LATENCIES 0


// VM

// Enables the vm_page::queue field, i.e. it is tracked which queue the page
// should be in.
#define DEBUG_PAGE_QUEUE 0

// Enables the vm_page::access_count field, which is used to detect invalid
// concurrent access to the page.
#define DEBUG_PAGE_ACCESS KDEBUG_LEVEL_2

// Enables a global list of all vm_cache structures.
#define DEBUG_CACHE_LIST KDEBUG_LEVEL_1

// Enables swap support.
#define ENABLE_SWAP_SUPPORT 1

// Use the slab allocator as generic memory allocator (malloc()/free()).
#define USE_SLAB_ALLOCATOR_FOR_MALLOC 0

// When set limits the amount of available RAM (in MB).
//#define LIMIT_AVAILABLE_MEMORY 256


#endif // KERNEL_DEBUG_CONFIG_H
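
All of the feature switches above bottom out in KDEBUG_LEVEL: with the master
switch at 2, both KDEBUG_LEVEL_2 and KDEBUG_LEVEL_1 expand to 1, so e.g.
DEBUG_PAGE_ACCESS and DEBUG_BLOCK_CACHE are enabled; lowering the level to 1
turns off every KDEBUG_LEVEL_2-based switch without touching the rest. A usage
sketch (the struct and function are invented for illustration, only the macros
come from this header):

/* Sketch only -- invented struct and function, real macros. */
#include "kernel_debug_config.h"

struct vm_page_sketch {
	int queue;
#if DEBUG_PAGE_ACCESS
	int access_count;	/* only exists when the check is compiled in */
#endif
};

void page_accessed(struct vm_page_sketch* page)
{
#if DEBUG_PAGE_ACCESS
	/* DEBUG_PAGE_ACCESS is KDEBUG_LEVEL_2, i.e. (KDEBUG_LEVEL >= 2): with
	 * the default level of 2 this block is compiled in; with level 1 or 0
	 * it disappears entirely. */
	page->access_count++;
#endif

	page->queue = 1;	/* placeholder for the real bookkeeping */
}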