Commit Graph

33 Commits

Author SHA1 Message Date
haad a23681588b Add kmem_asprintf routine which allocates a string from the kmem pool
according to a format string. The allocated size is the string length + 1 char for the
terminating zero.

Ok: ad@.
2010-02-11 23:13:46 +00:00
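A minimal usage sketch for kmem_asprintf() above (not from the commit itself; the helper, its unit argument and the headers are assumptions): since the allocation is the formatted length plus one byte for the terminating zero, the matching kmem_free() size is strlen() + 1.

    #include <sys/types.h>
    #include <sys/kmem.h>
    #include <lib/libkern/libkern.h>        /* strlen() */

    static void
    asprintf_example(int unit)
    {
        char *name;

        /* Format into a string allocated from the kmem pool. */
        name = kmem_asprintf("example%d", unit);

        /* ... use name ... */

        /* Free with the size the allocator used: string length + NUL. */
        kmem_free(name, strlen(name) + 1);
    }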
skrll e975ed8767 1 CTASSERT(foo) is enough for anyone. 2010-01-31 11:54:32 +00:00
uebayasi 80d41370e7 Use CTASSERT() for constant only assertions. 2010-01-04 16:01:42 +00:00
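The point of the entry above, sketched (the headers and the local redefinition of KMEM_QUANTUM_SIZE are illustrative only): CTASSERT() rejects a false constant-only condition at compile time, where a KASSERT() of the same condition would only fire at run time, and only on the paths that execute it.

    #include <sys/param.h>                  /* ALIGNBYTES */

    #define KMEM_QUANTUM_SIZE (ALIGNBYTES + 1)  /* see the 2006-07-03 entry below */

    /* Constant-only condition: checked by the compiler, no run-time cost. */
    CTASSERT((KMEM_QUANTUM_SIZE & (KMEM_QUANTUM_SIZE - 1)) == 0);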
yamt 5873138145 constify 2009-10-12 23:36:02 +00:00
yamt 28bf72b353 fix KMEM_SIZE vs KMEM_GUARD 2009-10-12 23:35:09 +00:00
jnemeth fa6c059bce add KASSERT(p != NULL); to kmem_free() 2009-06-03 22:54:51 +00:00
ad f51a17bccf kernel memory guard for DEBUG kernels, proposed on tech-kern.
See kmem_alloc(9) for details.
2009-03-29 10:51:53 +00:00
yamt feff5384df use %zu for size_t 2009-02-18 13:04:59 +00:00
ad 81525af92d Fix min/max confusion that causes a problem with DEBUG on some
architectures. Independently spotted by yamt@. /brick ad
2009-02-17 21:54:30 +00:00
enami 60b229d2eb Use same expression to decide to use pool cache or not in both
kmem_alloc/free.
2009-02-06 22:58:49 +00:00
ad c26577a1b0 Apply kmem patch posted to tech-kern.
- Add another level of caches, for max quantum cache size -> PAGE_SIZE.
- Add debug code to verify that kmem_free() is given the correct size.
2009-02-01 18:51:07 +00:00
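The second bullet enforces a contract callers already had: kmem_free() must be passed the same size that was given to kmem_alloc(). A small sketch of what a DEBUG kernel can now catch (the helper and len are made up):

    #include <sys/types.h>
    #include <sys/kmem.h>

    static void
    size_contract_example(size_t len)
    {
        void *p = kmem_alloc(len, KM_SLEEP);

        kmem_free(p, len);          /* ok: matches the allocated size */
        /* kmem_free(p, len + 16);     wrong size -- flagged on DEBUG kernels */
    }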
ad c1ef49a66f Back out VMEM_ADDR_NULL change. It's too invasive. 2008-12-15 11:42:34 +00:00
ad b8c27c5dfc Check for VMEM_ADDR_NULL, not NULL. 2008-12-15 11:33:13 +00:00
ad f9b17a5200 Define VMEM_ADDR_NULL as UINTPTR_MAX, otherwise a vmem that can allocate
a block starting at zero will not work.

XXX pool_cache uses NULL to signify failed allocation.
XXX how did the percpu allocator work before?
2008-12-15 11:29:49 +00:00
yamt 67ab67abb3 if DEBUG, over-allocate 1 byte to detect overrun. 2008-02-09 12:56:20 +00:00
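The general redzone technique behind this entry, sketched (the pattern value and helpers are assumptions, not the actual diff): over-allocate one guard byte, stamp it with a known value, and verify it when the block is freed; a caller that wrote past its requested size destroys the stamp.

    #include <sys/types.h>
    #include <sys/kmem.h>
    #include <lib/libkern/libkern.h>        /* KASSERT() */

    #define REDZONE_PATTERN 0xd0            /* illustrative value */

    static void *
    redzone_alloc(size_t size)
    {
        uint8_t *p = kmem_alloc(size + 1, KM_SLEEP);

        p[size] = REDZONE_PATTERN;          /* guard byte just past the user area */
        return p;
    }

    static void
    redzone_free(void *v, size_t size)
    {
        uint8_t *p = v;

        KASSERT(p[size] == REDZONE_PATTERN);    /* did the caller overrun? */
        kmem_free(p, size + 1);
    }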
yamt ac257c9a5b sprinkle more kmem_poison_check. 2007-12-28 13:49:25 +00:00
ad d18c6ca4de Merge from vmlocking:
- pool_cache changes.
- Debugger/procfs locking fixes.
- Other minor changes.
2007-11-07 00:23:13 +00:00
ad 88ab7da936 Merge some of the less invasive changes from the vmlocking branch:
- kthread, callout, devsw API changes
- select()/poll() improvements
- miscellaneous MT safety improvements
2007-07-09 20:51:58 +00:00
hubertf 3bfc0c42ee Remove duplicate #include's
From: Slava Semushin <php-coder@altlinux.ru>
2007-03-26 22:52:44 +00:00
yamt 6d6b100a95 kmem_backend_alloc: fix a null dereference. 2007-03-02 12:30:53 +00:00
ad b07ec3fc38 Merge newlock2 to head. 2007-02-09 21:55:00 +00:00
yamt f6217feae5 kmem_alloc: fix a null dereference reported by Chuck Silvers. 2007-02-05 11:53:46 +00:00
yamt 1a7bc55dcc remove some __unused from function parameters. 2006-11-01 10:17:58 +00:00
christos 4d595fd7b1 - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
2006-10-12 01:30:41 +00:00
yamt b153af038b don't include sys/lock.h as it is no longer necessary. 2006-08-28 13:41:04 +00:00
martin 5581630d1f Add <sys/lock.h> include for <sys/callback.h> 2006-08-21 09:06:06 +00:00
yamt 4e59653466 move kmem_kva_reclaim_callback out of #ifdef DEBUG.
fixes compilation problem in the case of !DEBUG.
pointed out by Kurt Schreiner.
2006-08-20 13:08:11 +00:00
yamt 0406a06106 implement kva reclamation for kmem_alloc quantum cache. 2006-08-20 09:45:59 +00:00
yamt fc12b34a0a kmem_init: use vmem quantum cache. XXX needs tune. 2006-08-20 09:44:06 +00:00
yamt d9530c47ba add DEBUG code to detect modifications on free memory. 2006-07-08 06:01:53 +00:00
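A rough sketch of the mechanism (the in-tree routines are kmem_poison_fill()/kmem_poison_check(), as in the 2007-12-28 entry above; the pattern and helpers here are simplified): freed memory is filled with a known pattern, and the pattern is verified before the memory is handed out again, so a write to "free" memory is detected.

    #include <sys/types.h>
    #include <lib/libkern/libkern.h>        /* memset(), KASSERT() */

    #define POISON_PATTERN 0xa5             /* illustrative value */

    static void
    poison_fill(void *v, size_t size)
    {
        memset(v, POISON_PATTERN, size);    /* stamp the block when it is freed */
    }

    static void
    poison_check(const void *v, size_t size)
    {
        const uint8_t *p = v;
        size_t i;

        /* Any byte that no longer matches was modified while free. */
        for (i = 0; i < size; i++)
            KASSERT(p[i] == POISON_PATTERN);
    }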
yamt d145ea66dc change KMEM_QUANTUM_SIZE from sizeof(void *) to (ALIGNBYTES + 1).
the latter is larger on e.g. sparc.

noted by Christos Zoulas.
http://mail-index.NetBSD.org/port-sparc/2006/07/02/0001.html
2006-07-03 09:18:35 +00:00
yamt 8308eb1f7a implement kmem_zalloc. 2006-06-25 08:10:04 +00:00
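Usage sketch (struct example and the helper are hypothetical): kmem_zalloc() behaves like kmem_alloc() but returns zero-filled memory, so callers can drop the explicit memset().

    #include <sys/kmem.h>

    struct example {
        int     e_count;
        void   *e_data;
    };

    static struct example *
    example_create(void)
    {
        /* Same flags as kmem_alloc(); the returned memory is zeroed. */
        return kmem_zalloc(sizeof(struct example), KM_SLEEP);
    }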
yamt bc4977819f 1. implement solaris-like vmem. (still primitive, though)
2. implement solaris-like kmem_alloc/free api, using #1.
   (note: this implementation is backed by kernel_map, thus can't be
   used from interrupt context.)
2006-06-25 08:00:01 +00:00
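A minimal usage sketch of the API introduced here (the helper is made up): the caller remembers the size and hands it back to kmem_free(), and, per the note above, the allocator is backed by kernel_map and therefore must not be called from interrupt context.

    #include <sys/types.h>
    #include <sys/kmem.h>
    #include <lib/libkern/libkern.h>        /* memcpy() */

    static void *
    buf_dup(const void *src, size_t len)
    {
        void *p;

        p = kmem_alloc(len, KM_SLEEP);      /* KM_NOSLEEP would need a NULL check */
        memcpy(p, src, len);
        return p;                           /* freed later with kmem_free(p, len) */
    }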