Locks/Reference Counting:

vm_address_space:
  sem: R/W lock for area creation/deletion and any other address space
    changes
    guarded fields: areas, area_hint, state
    (area_hint is currently written in vm_area_lookup() without the write
    lock held - a data race; see sketch 1 below)
  ref_count: ensures validity of the object beyond the team's lifetime;
    retrieved via the global sAddressSpaceTable's pointer (which is guarded
    by sAddressSpaceHashSem)
  vm_address_space_walk_next() is unsafe! (and obsolete; it was only used
    by the former page scanner)

  Problems:
    resize_area() does not lock any address spaces yet, but it needs to
    lock the address spaces of all clones.

vm_area:
  ref_count: ensures validity
    retrieved via the global sAreaHash's pointer (which is guarded by
      sAreaHashLock); see sketch 2 below
    vs. vm_area_lookup(), which iterates over the address space's area list
      instead of the hash - therefore it checks ref_count against NULL
      (ugly)
  variable fields:
    size, protection: essentially unguarded! (can be changed by
      resize_area() and set_area_protection())
    mappings: guarded by the global sMappingLock (currently a spinlock)
    address_space_next: guarded by vm_address_space::sem
    hash_next: guarded by sAreaHashLock
    cache: guarded by vm_area_get_locked_cache()/sAreaCacheLock
    cache_next|prev: guarded by cache_ref::lock

vm_cache_ref:
  ref_count: ensures validity
    vm_cache_remove_consumer() does scary things with the ref_count
    fault_acquire_locked_source() tries to get a reference through the
      vm_cache (see sketch 3 below)
  cache, areas: guarded by lock

vm_cache:
  all fields: guarded by ref::lock
  BUT: ref may change, so it is generally unsafe to go from a cache to its
  ref without already holding the ref's lock - which happens, by design,
  when following vm_cache::source and vm_cache::consumers!

vm_page:
  hash_next: guarded by the sPageCacheTableLock spinlock
  queue_prev|next: guarded by sPageLock (see sketch 4 below)
  cache_prev|next, cache, cache_offset: guarded by vm_cache_ref::lock
  mappings: guarded by the global sMappingLock (currently a spinlock)
  state: within vm_page only used with sPageLock held; all other uses
    require the lock of the cache the page is in
  wired_count, usage_count: not guarded? TBD
  busy_reading, busy_writing: used for dummy pages only

vm_translation_map:
  TBD
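
Sketch 1: the area_hint write. A reader/writer lock only protects the hint
if every write happens under the write lock; writing it from the
read-locked lookup path is a data race. One possible fix, shown below as a
reduced stand-alone C++ sketch (std::shared_mutex and std::atomic stand in
for vm_address_space::sem and the raw hint pointer; Area/AddressSpace are
illustrative types, not Haiku's), is to make the hint an atomic pointer:
since the hint is only an optimization, a relaxed atomic store under the
read lock is then harmless.

// Reduced stand-ins for vm_address_space/vm_area - not Haiku's real types.
#include <atomic>
#include <cstdint>
#include <shared_mutex>

struct Area {
    Area*     address_space_next;  // guarded by the address space's lock
    uintptr_t base;
    size_t    size;
};

struct AddressSpace {
    std::shared_mutex  sem;        // stands in for vm_address_space::sem
    Area*              areas = nullptr;        // guarded by sem
    std::atomic<Area*> area_hint { nullptr };  // atomic: may be stored to
                                               // while only read-locked
};

// Lookup takes only the read lock; the hint update is an atomic store, so
// it no longer constitutes a data race with concurrent lookups.
Area* lookup_area(AddressSpace& space, uintptr_t address)
{
    std::shared_lock<std::shared_mutex> locker(space.sem);

    Area* hint = space.area_hint.load(std::memory_order_relaxed);
    if (hint != nullptr && address - hint->base < hint->size)
        return hint;

    for (Area* area = space.areas; area != nullptr;
            area = area->address_space_next) {
        if (address - area->base < area->size) {
            space.area_hint.store(area, std::memory_order_relaxed);
            return area;
        }
    }
    return nullptr;
}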
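
Sketch 2: the ref_count lookup pattern used for vm_area. The reference
must be taken while the hash lock is still held; it is the reference, not
the lock, that keeps the object alive afterwards. A minimal sketch with
standard-library stand-ins (std::mutex for sAreaHashLock,
std::unordered_map for sAreaHash; get_area()/put_area() are illustrative
names, not Haiku's API):

#include <atomic>
#include <cstdint>
#include <mutex>
#include <unordered_map>

struct Area {
    int32_t          id;
    std::atomic<int> ref_count { 1 };
};

static std::mutex sAreaHashLock;                      // guards sAreaHash
static std::unordered_map<int32_t, Area*> sAreaHash;  // guarded by above

// Look up an area and take a reference while still holding the hash lock;
// only the reference keeps the area valid once the lock is dropped.
Area* get_area(int32_t id)
{
    std::lock_guard<std::mutex> locker(sAreaHashLock);
    auto it = sAreaHash.find(id);
    if (it == sAreaHash.end())
        return nullptr;
    it->second->ref_count.fetch_add(1);  // ref taken under the hash lock
    return it->second;
}

// Drop a reference; the last one frees the area. The area must have been
// removed from the hash (under sAreaHashLock) before the count can ever
// reach zero, or get_area() could hand out a dangling pointer.
void put_area(Area* area)
{
    if (area->ref_count.fetch_sub(1) == 1)
        delete area;
}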
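
Sketch 3: going from a cache to its ref. Because the ref pointer may be
re-pointed at any time, the usual safe transition is a lock-and-revalidate
loop: read the pointer, lock the candidate ref, then check that the
pointer still matches. The reduced sketch below shows only that pattern;
it does not claim to reproduce fault_acquire_locked_source(), and a real
implementation additionally needs a guarantee (e.g. a reference or
deferred deletion) that a stale ref is not freed while being locked.

#include <mutex>

struct CacheRef;

struct Cache {
    CacheRef* ref;  // may be re-pointed; only stable under the ref's lock
};

struct CacheRef {
    std::mutex lock;   // guards the cache's fields, including cache->ref
    Cache*     cache;
};

// Lock-and-revalidate: lock the ref we saw, then verify the cache still
// belongs to it. If not, another thread re-pointed cache->ref in the
// meantime; drop the lock and retry.
CacheRef* lock_cache_ref(Cache* cache)
{
    while (true) {
        CacheRef* ref = cache->ref;  // racy read, validated below
        ref->lock.lock();
        if (cache->ref == ref)
            return ref;              // still current: return it locked
        ref->lock.unlock();          // stale: try again
    }
}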
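
Sketch 4: page queue manipulation under sPageLock. queue_prev|next and
(within vm_page) state are only consistent while the lock is held, so a
queue migration and the accompanying state change belong in one critical
section. A stand-alone sketch with a std::atomic_flag spinlock standing in
for sPageLock (Page/PageQueue/move_page() are illustrative, not Haiku's
real types):

#include <atomic>

struct Page {
    Page* queue_prev = nullptr;  // guarded by sPageLock
    Page* queue_next = nullptr;  // guarded by sPageLock
    int   state = 0;             // here: only touched with sPageLock held
};

struct PageQueue {
    Page* head = nullptr;
    Page* tail = nullptr;
};

static std::atomic_flag sPageLock = ATOMIC_FLAG_INIT;  // spinlock stand-in

static void page_lock()
{
    while (sPageLock.test_and_set(std::memory_order_acquire)) {}
}

static void page_unlock()
{
    sPageLock.clear(std::memory_order_release);
}

// Move a page from one queue to another and update its state in a single
// critical section, so no other CPU ever sees a half-linked page or a
// state that disagrees with the queue the page is on.
void move_page(PageQueue& from, PageQueue& to, Page* page, int newState)
{
    page_lock();

    // unlink from the old queue
    if (page->queue_prev != nullptr)
        page->queue_prev->queue_next = page->queue_next;
    else
        from.head = page->queue_next;
    if (page->queue_next != nullptr)
        page->queue_next->queue_prev = page->queue_prev;
    else
        from.tail = page->queue_prev;

    // append to the new queue
    page->queue_prev = to.tail;
    page->queue_next = nullptr;
    if (to.tail != nullptr)
        to.tail->queue_next = page;
    else
        to.head = page;
    to.tail = page;

    page->state = newState;

    page_unlock();
}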