Merge branch 'master' into dev
commit 6f181194f6
@@ -26,17 +26,25 @@ without code changes, for example, on Unix you can use it as:
 Notable aspects of the design include:
-- __small and consistent__: the library is less than 6k LOC using simple and
+- __small and consistent__: the library is about 8k LOC using simple and
   consistent data structures. This makes it very suitable
   to integrate and adapt in other projects. For runtime systems it
   provides hooks for a monotonic _heartbeat_ and deferred freeing (for
   bounded worst-case times with reference counting).
-- __free list sharding__: the big idea: instead of one big free list (per size class) we have
-  many smaller lists per memory "page" which both reduces fragmentation
-  and increases locality --
+- __free list sharding__: instead of one big free list (per size class) we have
+  many smaller lists per "mimalloc page" which reduces fragmentation and
+  increases locality --
   things that are allocated close in time get allocated close in memory.
-  (A memory "page" in _mimalloc_ contains blocks of one size class and is
-  usually 64KiB on a 64-bit system).
+  (A mimalloc page contains blocks of one size class and is usually 64KiB on a 64-bit system).
+- __free list multi-sharding__: the big idea! Not only do we shard the free list
+  per mimalloc page, but for each page we have multiple free lists. In particular, there
+  is one list for thread-local `free` operations, and another one for concurrent `free`
+  operations. Free-ing from another thread can now be a single CAS without needing
+  sophisticated coordination between threads. Since there will be
+  thousands of separate free lists, contention is naturally distributed over the heap,
+  and the chance of contending on a single location will be low -- this is quite
+  similar to randomized algorithms like skip lists where adding
+  a random oracle removes the need for a more complex algorithm.
 - __eager page reset__: when a "page" becomes empty (with increased chance
   due to free list sharding) the memory is marked to the OS as unused ("reset" or "purged")
   reducing (real) memory pressure and fragmentation, especially in long running
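To make the sharding idea in the hunk above concrete, here is a minimal sketch of a per-page pair of free lists: an ordinary list for thread-local frees and an atomic list that other threads push onto with a single compare-and-swap. The struct and function names are illustrative assumptions, not mimalloc's actual internals.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical per-page layout: each "page" holds blocks of one size class
   and carries its own free lists (free list sharding). */
typedef struct block_s {
  struct block_s* next;
} block_t;

typedef struct page_s {
  size_t            block_size;   /* size class of this page              */
  block_t*          free;         /* local free list: owning thread only  */
  _Atomic(block_t*) thread_free;  /* remote free list: other threads      */
} page_t;

/* Free from the owning thread: no atomics needed. */
static void page_local_free(page_t* page, block_t* b) {
  b->next = page->free;
  page->free = b;
}

/* Free from another thread: a single CAS pushes onto the atomic list
   (multi-sharding), so no locks or handshakes with the owner are needed. */
static void page_remote_free(page_t* page, block_t* b) {
  block_t* head = atomic_load_explicit(&page->thread_free, memory_order_relaxed);
  do {
    b->next = head;
  } while (!atomic_compare_exchange_weak_explicit(
               &page->thread_free, &head, b,
               memory_order_release, memory_order_relaxed));
}

/* When the owner runs out of local blocks it can claim the entire remote
   list with one atomic exchange and keep allocating from it. */
static block_t* page_collect_remote(page_t* page) {
  return atomic_exchange_explicit(&page->thread_free, NULL, memory_order_acquire);
}
```

Because every page has its own `thread_free` list, contention on any single location stays low, which is the point the multi-sharding bullet makes.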
@@ -51,7 +59,7 @@ Notable aspects of the design include:
   times (_wcat_), bounded space overhead (~0.2% meta-data, with at most 12.5% waste in allocation sizes),
   and has no internal points of contention using only atomic operations.
 - __fast__: In our benchmarks (see [below](#performance)),
-  _mimalloc_ always outperforms all other leading allocators (_jemalloc_, _tcmalloc_, _Hoard_, etc),
+  _mimalloc_ outperforms all other leading allocators (_jemalloc_, _tcmalloc_, _Hoard_, etc),
   and usually uses less memory (up to 25% more in the worst case). A nice property
   is that it does consistently well over a wide range of benchmarks.
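The "eager page reset" bullet in the first hunk amounts to telling the OS that an empty page's memory may be reclaimed while the virtual range stays mapped for reuse. Below is a hedged sketch of how such a reset could be done on Unix-like systems with `madvise`; mimalloc's real implementation is platform-specific and configurable, so this is only an illustration.

```c
#include <sys/mman.h>
#include <stddef.h>

/* Illustrative only: hint that the physical pages behind [start, start+size)
   are no longer needed, reducing real memory use without unmapping. */
static int reset_page_memory(void* start, size_t size) {
#if defined(MADV_FREE)
  /* MADV_FREE lets the kernel reclaim the pages lazily, which is cheaper. */
  if (madvise(start, size, MADV_FREE) == 0) return 0;
#endif
  /* Fallback: MADV_DONTNEED drops the pages immediately; the next access
     is served from fresh zero-filled pages. */
  return madvise(start, size, MADV_DONTNEED);
}
```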
@@ -105,13 +105,14 @@ $(document).ready(function(){initNavTree('index.html','');});
 <div class="textblock"><p>This is the API documentation of the <a href="https://github.com/microsoft/mimalloc">mimalloc</a> allocator (pronounced "me-malloc") – a general purpose allocator with excellent <a href="bench.html">performance</a> characteristics. Initially developed by Daan Leijen for the run-time systems of the <a href="https://github.com/koka-lang/koka">Koka</a> and <a href="https://github.com/leanprover/lean">Lean</a> languages.</p>
 <p>It is a drop-in replacement for <code>malloc</code> and can be used in other programs without code changes, for example, on Unix you can use it as: </p><div class="fragment"><div class="line">> LD_PRELOAD=/usr/bin/libmimalloc.so myprogram</div></div><!-- fragment --><p>Notable aspects of the design include:</p>
 <ul>
-<li><b>small and consistent</b>: the library is less than 6k LOC using simple and consistent data structures. This makes it very suitable to integrate and adapt in other projects. For runtime systems it provides hooks for a monotonic <em>heartbeat</em> and deferred freeing (for bounded worst-case times with reference counting).</li>
-<li><b>free list sharding</b>: the big idea: instead of one big free list (per size class) we have many smaller lists per memory "page" which both reduces fragmentation and increases locality – things that are allocated close in time get allocated close in memory. (A memory "page" in <em>mimalloc</em> contains blocks of one size class and is usually 64KiB on a 64-bit system).</li>
+<li><b>small and consistent</b>: the library is about 8k LOC using simple and consistent data structures. This makes it very suitable to integrate and adapt in other projects. For runtime systems it provides hooks for a monotonic <em>heartbeat</em> and deferred freeing (for bounded worst-case times with reference counting).</li>
+<li><b>free list sharding</b>: instead of one big free list (per size class) we have many smaller lists per "mimalloc page" which reduces fragmentation and increases locality – things that are allocated close in time get allocated close in memory. (A mimalloc page contains blocks of one size class and is usually 64KiB on a 64-bit system).</li>
+<li><b>free list multi-sharding</b>: the big idea! Not only do we shard the free list per mimalloc page, but for each page we have multiple free lists. In particular, there is one list for thread-local <code>free</code> operations, and another one for concurrent <code>free</code> operations. Free-ing from another thread can now be a single CAS without needing sophisticated coordination between threads. Since there will be thousands of separate free lists, contention is naturally distributed over the heap, and the chance of contending on a single location will be low – this is quite similar to randomized algorithms like skip lists where adding a random oracle removes the need for a more complex algorithm.</li>
 <li><b>eager page reset</b>: when a "page" becomes empty (with increased chance due to free list sharding) the memory is marked to the OS as unused ("reset" or "purged") reducing (real) memory pressure and fragmentation, especially in long running programs.</li>
 <li><b>secure</b>: <em>mimalloc</em> can be built in secure mode, adding guard pages, randomized allocation, encrypted free lists, etc. to protect against various heap vulnerabilities. The performance penalty is only around 3% on average over our benchmarks.</li>
 <li><b>first-class heaps</b>: efficiently create and use multiple heaps to allocate across different regions. A heap can be destroyed at once instead of deallocating each object separately.</li>
 <li><b>bounded</b>: it does not suffer from <em>blowup</em> [1], has bounded worst-case allocation times (<em>wcat</em>), bounded space overhead (~0.2% meta-data, with at most 12.5% waste in allocation sizes), and has no internal points of contention using only atomic operations.</li>
-<li><b>fast</b>: In our benchmarks (see <a href="#performance">below</a>), <em>mimalloc</em> always outperforms all other leading allocators (<em>jemalloc</em>, <em>tcmalloc</em>, <em>Hoard</em>, etc), and usually uses less memory (up to 25% more in the worst case). A nice property is that it does consistently well over a wide range of benchmarks.</li>
+<li><b>fast</b>: In our benchmarks (see <a href="#performance">below</a>), <em>mimalloc</em> outperforms all other leading allocators (<em>jemalloc</em>, <em>tcmalloc</em>, <em>Hoard</em>, etc), and usually uses less memory (up to 25% more in the worst case). A nice property is that it does consistently well over a wide range of benchmarks.</li>
 </ul>
 <p>You can read more on the design of <em>mimalloc</em> in the <a href="https://www.microsoft.com/en-us/research/publication/mimalloc-free-list-sharding-in-action">technical report</a> which also has detailed benchmark results.</p>
 <p>Further information:</p>
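As a usage note for the "first-class heaps" bullet: mimalloc exposes heaps in its public API (in `mimalloc.h`) via `mi_heap_new`, `mi_heap_malloc`, and `mi_heap_destroy`. A small example of allocating into a private heap and releasing everything in one call; error handling is kept minimal for brevity.

```c
#include <mimalloc.h>
#include <stdio.h>

int main(void) {
  /* Create a private heap; the allocations below go into it. */
  mi_heap_t* heap = mi_heap_new();
  if (heap == NULL) return 1;

  for (int i = 0; i < 100; i++) {
    int* p = (int*)mi_heap_malloc(heap, 32 * sizeof(int));
    if (p == NULL) break;
    p[0] = i;  /* use the allocation */
  }

  /* Destroy the heap: every object allocated from it is freed at once,
     without calling free() on each one. (mi_heap_delete instead migrates
     still-live objects to the default heap.) */
  mi_heap_destroy(heap);

  printf("done\n");
  return 0;
}
```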