c650846d9e
Since we used a hash table with a fixed size (1024), collisions were inevitable, meaning that while insertions would always be fast, lookups and deletions would take linear time to search the bucket's linked list for the area in question. For recently created areas this would be fast; for less recently created areas it would get slower and slower. A particularly pathological case was the "mmap/24-1" test from the Open POSIX Testsuite, which creates millions of areas until it hits ENOMEM and then simply exits, at which point the kernel's team-deletion routines would run for minutes; exactly how long, I don't know, as I rebooted before it finished.

This change fixes that problem, among others, at the cost of increased area creation time, by using an AVL tree instead of a hash table. For comparison, mmap'ing 2 million areas with the "24-1" test before this change took around 0m2.706s of real time, while afterwards it takes about 0m3.118s, an increase of around 15% (1.152x). On the other hand, the total test runtime for 2 million areas went from around 2m11.050s to 0m4.035s, a decrease of around 97% (0.031x); in other words, with this new code the test is *32 times faster.*

Area insertion is no longer O(1) but O(log n), so creation time may grow with the number of areas present on the system; but if it takes only around 3 seconds to create 2 million areas, or about 1.56 us per area vs. 1.35 us before, I don't think that's worth worrying about. My nonscientific "compile HaikuDepot with 2 cores in a VM" benchmark seems to be within the realm of noise anyway, with most results both before and after this change coming in at around 47s real time.

Change-Id: I230e17de4f80304d082152af83db8bd5abe7b831
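As a rough illustration of the complexity argument above, here is a minimal user-space sketch (an editorial addition, not the actual kernel code): it models the old fixed-size chained hash with 1024 buckets and uses std::map as a stand-in for the kernel's AVL tree, with the area count scaled down from the test's 2 million to keep the run short. The teardown loop looks up every id once, which is roughly the work that deleting every area implies.

    // Sketch only: fixed-size chained hash vs. balanced tree for area lookup.
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <forward_list>
    #include <map>
    #include <vector>

    int main()
    {
        const size_t kBucketCount = 1024;    // the old hash's fixed size
        const uint32_t kAreaCount = 200000;  // scaled down from 2 million

        std::vector<std::forward_list<uint32_t>> hash(kBucketCount);
        std::map<uint32_t, uint32_t> tree;
        for (uint32_t id = 0; id < kAreaCount; id++) {
            hash[id % kBucketCount].push_front(id);
            tree[id] = id;
        }

        // Simulate teardown: look up every area once, as deleting it would.
        // Each hash lookup scans a chain of ~(kAreaCount / kBucketCount)
        // entries, so the whole pass costs O(n^2 / buckets).
        auto start = std::chrono::steady_clock::now();
        size_t found = 0;
        for (uint32_t id = 0; id < kAreaCount; id++) {
            for (uint32_t value : hash[id % kBucketCount]) {
                if (value == id) {
                    found++;
                    break;
                }
            }
        }
        auto hashTime = std::chrono::steady_clock::now() - start;

        // Each tree lookup is O(log n), so the whole pass is O(n log n).
        start = std::chrono::steady_clock::now();
        for (uint32_t id = 0; id < kAreaCount; id++)
            found += (tree.find(id) != tree.end()) ? 1 : 0;
        auto treeTime = std::chrono::steady_clock::now() - start;

        using ms = std::chrono::milliseconds;
        printf("%zu lookups: hash %lld ms, tree %lld ms\n",
            (size_t)kAreaCount,
            (long long)std::chrono::duration_cast<ms>(hashTime).count(),
            (long long)std::chrono::duration_cast<ms>(treeTime).count());
        return (found == 2 * (size_t)kAreaCount) ? 0 : 1;
    }

Per lookup, the hash's cost grows linearly with the area count while the tree's grows only logarithmically, which is why the old code collapsed precisely on teardown of millions of areas even though single insertions stayed cheap.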