Block the very last page of kernel address space. The problem here is that none
of the VM functions handling areas are overflow safe. If an area is created that
spans the last page, many places will run into an integer overflow. This mostly
concerns the area allocation path in find_and_insert_area_slot() and also
vm_create_anonymous_area(), where the loop mapping pages for B_FULL_LOCK areas
overflows and runs more times than it should, leading to #2550.
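
To make the failure mode concrete, here is a minimal, self-contained sketch
(not the actual Haiku loop) of how the address arithmetic can wrap once an area
reaches the last page, assuming a 32-bit addr_t and a 4 KiB page size:

#include <stdint.h>
#include <stdio.h>

#define B_PAGE_SIZE 4096
typedef uint32_t addr_t;	/* assume a 32-bit address space for the demo */

int
main(void)
{
	addr_t base = 0xfffff000;			/* last page of the address space */
	addr_t size = B_PAGE_SIZE;			/* the area extends to the very top */
	addr_t lastByte = base + (size - 1);	/* 0xffffffff, no overflow yet */

	uint32_t pagesMapped = 0;
	for (addr_t address = base; address <= lastByte; address += B_PAGE_SIZE) {
		/* lastByte is the maximum addr_t value, so this condition can never
		   become false: the increment wraps address from 0xfffff000 to 0 and
		   the loop keeps "mapping" pages that are not part of the area. */
		if (++pagesMapped > 8) {
			printf("still looping at %#x after %u pages\n",
				(unsigned)address, (unsigned)pagesMapped);
			break;
		}
	}
	return 0;
}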

This could be seen as a workaround; the real fix would be to make everything
overflow safe. However, this also concerns users of such an area, who could
easily have forgotten to check for overflows as well, so I am a bit uneasy about
handing out areas that could lead to such hard-to-debug problems. Since this is
really an edge case and this single step saves quite a bit of extra checking,
I'd actually be OK with keeping it that way.
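
For comparison, here is a sketch of what an overflow-safe variant could look
like; range_is_valid() and map_pages() are illustrative names invented for this
example, not functions from the Haiku source. The idea is to validate the range
once and then iterate by page count rather than comparing against an end
address that may wrap:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define B_PAGE_SIZE 4096
typedef uint32_t addr_t;	/* again assuming a 32-bit address space */

/* Hypothetical helper: rejects ranges whose last byte would wrap around. */
static bool
range_is_valid(addr_t base, addr_t size)
{
	return size != 0 && base <= ~(addr_t)0 - (size - 1);
}

/* Hypothetical mapping loop: counts pages instead of comparing addresses,
   so no address computation can overflow. Returns the pages "mapped". */
static uint32_t
map_pages(addr_t base, addr_t size)
{
	if (!range_is_valid(base, size))
		return 0;

	uint32_t mapped = 0;
	addr_t address = base;
	for (addr_t i = 0; i < size / B_PAGE_SIZE; i++, address += B_PAGE_SIZE) {
		(void)address;	/* a real version would map the page at "address" */
		mapped++;
	}
	return mapped;
}

int
main(void)
{
	printf("%u\n", (unsigned)map_pages(0xfffff000, B_PAGE_SIZE));		/* 1 */
	printf("%u\n", (unsigned)map_pages(0xfffff000, 2 * B_PAGE_SIZE));	/* 0 */
	return 0;
}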


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33032 a95241bf-73f2-0310-859d-f6bbb57e9c96
Michael Lotz 2009-09-10 02:03:31 +00:00
parent 3794518c2d
commit 9aff7f1593


@@ -4388,6 +4388,9 @@ vm_init(kernel_args* args)
			B_ALREADY_WIRED, B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
	}
	void* lastPage = (void*)ROUNDDOWN(~(addr_t)0, B_PAGE_SIZE);
	vm_block_address_range("overflow protection", lastPage, B_PAGE_SIZE - 1);
#if DEBUG_CACHE_LIST
	create_area("cache info table", (void**)&sCacheInfoTable,
		B_ANY_KERNEL_ADDRESS,