Add support for Kernel Memory Sanitizer (kMSan). It detects uninitialized
memory used by the kernel at run time, and just like kASan and kCSan, it
is an excellent feature. It has already detected 38 uninitialized variables
in the kernel during my testing, which I have since discreetly fixed.
We use two shadows:
- "shad", to track uninitialized memory with a bit granularity (1:1).
Each bit set to 1 in the shad corresponds to one uninitialized bit of
real kernel memory.
- "orig", to track the origin of the memory with a 4-byte granularity
(1:1). Each uint32_t cell in the orig indicates the origin of the
associated uint32_t of real kernel memory.
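As an illustration of the 1:1 layout, a shadow address can be computed by
adding a constant offset to the kernel address. A minimal sketch, assuming
hypothetical KMSAN_SHAD_OFFSET/KMSAN_ORIG_OFFSET constants (the real
machine-dependent layout is not shown here):

	#include <stdint.h>

	/* hypothetical MD constants, for illustration only */
	#define KMSAN_SHAD_OFFSET	0x20000000000UL
	#define KMSAN_ORIG_OFFSET	0x40000000000UL

	static inline uint8_t *
	kmsan_shad_addr(const void *addr)
	{
		return (uint8_t *)((uintptr_t)addr + KMSAN_SHAD_OFFSET);
	}

	static inline uint32_t *
	kmsan_orig_addr(const void *addr)
	{
		/* orig cells are 4 bytes wide, so align down to 4 */
		return (uint32_t *)((((uintptr_t)addr) & ~(uintptr_t)3)
		    + KMSAN_ORIG_OFFSET);
	}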
The memory consumption of these shadows is significant, so at least 4GB of
RAM is recommended to run kMSan.
The compiler inserts calls to specific __msan_* functions on each memory
access, to manage both the shad and the orig and detect uninitialized
memory accesses that change the execution flow (like an "if" on an
uninitialized variable).
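Conceptually, the transformation looks as follows; a hedged sketch, where
the names and signatures are illustrative rather than the exact LLVM ABI:

	#include <stdint.h>

	extern void __msan_warning(uint32_t orig);

	int
	branch_on(int x, uint32_t x_shad, uint32_t x_orig)
	{
		/* inserted by the compiler before the branch: */
		if (x_shad != 0)		/* any uninit bit in x? */
			__msan_warning(x_orig);	/* report, with origin */
		/* original code: */
		if (x)
			return 1;
		return 0;
	}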
We mark as uninit several types of memory buffers (stack, pools, kmem,
malloc, uvm_km), and check each buffer passed to copyout, copyoutstr,
bwrite, if_transmit_lock and DMA operations, to detect uninitialized memory
that leaves the system. This allows us to detect kernel info leaks in a way
that is more efficient and also more user-friendly than KLEAK.
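For instance, the check at the copyout boundary amounts to verifying that
the shad of the whole source buffer is clean before the data leaves the
kernel. A hedged sketch building on the kmsan_shad_addr() sketch above;
kmsan_check_buf and kmsan_report_leak are illustrative names, not the
actual API:

	/* illustrative only: report any uninit bit escaping to userland */
	static void
	kmsan_check_buf(const void *buf, size_t len)
	{
		const uint8_t *shad = kmsan_shad_addr(buf);
		size_t i;

		for (i = 0; i < len; i++) {
			if (shad[i] != 0)
				kmsan_report_leak(buf, i);
		}
	}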
Unlike kASan, kMSan requires comprehensive coverage, i.e. we cannot
tolerate even a single non-instrumented function, because this could cause
false positives. kMSan cannot instrument ASM functions, so I converted
most of them to __asm__ inlines, which kMSan is able to instrument. Those
that remain receive special treatment.
Also unlike kASan, kMSan uses a TLS, so we must context-switch this
TLS during interrupts. We use different contexts depending on the interrupt
level.
The orig tracks precisely the origin of a buffer. We use a special encoding
for the orig values, and pack together in each uint32_t cell of the orig:
- a code designating the type of memory (Stack, Pool, etc), and
- a compressed pointer, which points either (1) to a string containing
the name of the variable associated with the cell, or (2) to an area
in the kernel .text section which we resolve to a symbol name + offset.
This encoding avoids consuming extra memory to associate information
with each cell, and produces precise output that can tell, for example,
the name of an uninitialized variable on the stack, the function in
which it was pushed on the stack, and the function where we
accessed this uninitialized variable.
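A hedged sketch of such an encoding; the field widths and type codes below
are made up for illustration and do not match the actual packing:

	#include <stdint.h>

	#define ORIG_TYPE_BITS	2		/* Stack, Pool, ... */
	#define ORIG_PTR_BITS	(32 - ORIG_TYPE_BITS)
	#define ORIG_PTR_MASK	((1U << ORIG_PTR_BITS) - 1)

	static inline uint32_t
	orig_encode(uint32_t type, uintptr_t ptr)
	{
		/* kernel pointers share their high bits, so storing
		 * the low ORIG_PTR_BITS is enough to recover them */
		return (type << ORIG_PTR_BITS) |
		    ((uint32_t)ptr & ORIG_PTR_MASK);
	}

	static inline uint32_t
	orig_type(uint32_t cell)
	{
		return (cell >> ORIG_PTR_BITS);
	}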
kMSan is available with LLVM, but not with GCC.
The code is organized similarly to kASan and kCSan, so architectures
other than amd64 can be supported.

/* $NetBSD: subr_kmem.c,v 1.77 2019/11/14 16:23:52 maxv Exp $ */

/*
 * Copyright (c) 2009-2015 The NetBSD Foundation, Inc.
 * All rights reserved.
 *
 * This code is derived from software contributed to The NetBSD Foundation
 * by Andrew Doran and Maxime Villard.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
 * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
 * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * Copyright (c)2006 YAMAMOTO Takashi,
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

/*
 * Allocator of kernel wired memory. This allocator has some debug features
 * enabled with "option DIAGNOSTIC" and "option DEBUG".
 */

/*
 * KMEM_SIZE: detect alloc/free size mismatch bugs.
 * Prefix each allocation with a fixed-size, aligned header and record
 * the exact user-requested allocation size in it. When freeing, compare
 * it with kmem_free's "size" argument.
 *
 * This option is enabled on DIAGNOSTIC.
 *
 *  |CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|CHUNK|
 *  +-----+-----+-----+-----+-----+-----+-----+-----+-----+---+-+
 *  |/////|     |     |     |     |     |     |     |     |   |U|
 *  |/HSZ/|     |     |     |     |     |     |     |     |   |U|
 *  |/////|     |     |     |     |     |     |     |     |   |U|
 *  +-----+-----+-----+-----+-----+-----+-----+-----+-----+---+-+
 *  |Size |    Buffer usable by the caller (requested size)   |Unused\
 */

#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: subr_kmem.c,v 1.77 2019/11/14 16:23:52 maxv Exp $");

#ifdef _KERNEL_OPT
#include "opt_kmem.h"
#endif

#include <sys/param.h>
#include <sys/callback.h>
#include <sys/kmem.h>
#include <sys/pool.h>
#include <sys/debug.h>
#include <sys/lockdebug.h>
#include <sys/cpu.h>
#include <sys/asan.h>
#include <sys/msan.h>

#include <uvm/uvm_extern.h>
#include <uvm/uvm_map.h>

#include <lib/libkern/libkern.h>
struct kmem_cache_info {
	size_t kc_size;
	const char *kc_name;
};

static const struct kmem_cache_info kmem_cache_sizes[] = {
	{ 8, "kmem-8" },
	{ 16, "kmem-16" },
	{ 24, "kmem-24" },
	{ 32, "kmem-32" },
	{ 40, "kmem-40" },
	{ 48, "kmem-48" },
	{ 56, "kmem-56" },
	{ 64, "kmem-64" },
	{ 80, "kmem-80" },
	{ 96, "kmem-96" },
	{ 112, "kmem-112" },
	{ 128, "kmem-128" },
	{ 160, "kmem-160" },
	{ 192, "kmem-192" },
	{ 224, "kmem-224" },
	{ 256, "kmem-256" },
	{ 320, "kmem-320" },
	{ 384, "kmem-384" },
	{ 448, "kmem-448" },
	{ 512, "kmem-512" },
	{ 768, "kmem-768" },
	{ 1024, "kmem-1024" },
	{ 0, NULL }
};

static const struct kmem_cache_info kmem_cache_big_sizes[] = {
	{ 2048, "kmem-2048" },
	{ 4096, "kmem-4096" },
	{ 8192, "kmem-8192" },
	{ 16384, "kmem-16384" },
	{ 0, NULL }
};

/*
 * KMEM_ALIGN is the smallest guaranteed alignment and also the
 * smallest allocatable quantum.
 * Every cache size >= CACHE_LINE_SIZE gets CACHE_LINE_SIZE alignment.
 */
#define KMEM_ALIGN 8
#define KMEM_SHIFT 3
#define KMEM_MAXSIZE 1024
#define KMEM_CACHE_COUNT (KMEM_MAXSIZE >> KMEM_SHIFT)

static pool_cache_t kmem_cache[KMEM_CACHE_COUNT] __cacheline_aligned;
static size_t kmem_cache_maxidx __read_mostly;

#define KMEM_BIG_ALIGN 2048
#define KMEM_BIG_SHIFT 11
#define KMEM_BIG_MAXSIZE 16384
#define KMEM_CACHE_BIG_COUNT (KMEM_BIG_MAXSIZE >> KMEM_BIG_SHIFT)
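/*
 * An allocation whose total size sz (rounded request plus header) fits is
 * served from kmem_cache[(sz - 1) >> KMEM_SHIFT], and otherwise from
 * kmem_cache_big[(sz - 1) >> KMEM_BIG_SHIFT]. For example, sz = 40 gives
 * index (40 - 1) >> 3 = 4, the "kmem-40" cache.
 */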

static pool_cache_t kmem_cache_big[KMEM_CACHE_BIG_COUNT] __cacheline_aligned;
static size_t kmem_cache_big_maxidx __read_mostly;

#if defined(DIAGNOSTIC) && defined(_HARDKERNEL)
#define KMEM_SIZE
#endif

#if defined(DEBUG) && defined(_HARDKERNEL)
static void *kmem_freecheck;
#endif

#if defined(KMEM_SIZE)
struct kmem_header {
	size_t size;
} __aligned(KMEM_ALIGN);
#define SIZE_SIZE sizeof(struct kmem_header)

static void kmem_size_set(void *, size_t);
static void kmem_size_check(void *, size_t);
#else
#define SIZE_SIZE 0
#define kmem_size_set(p, sz) /* nothing */
#define kmem_size_check(p, sz) /* nothing */
#endif

CTASSERT(KM_SLEEP == PR_WAITOK);
CTASSERT(KM_NOSLEEP == PR_NOWAIT);

/*
 * kmem_intr_alloc: allocate wired memory.
 */
void *
kmem_intr_alloc(size_t requested_size, km_flag_t kmflags)
{
#ifdef KASAN
	const size_t origsize = requested_size;
#endif
	size_t allocsz, index;
	size_t size;
	pool_cache_t pc;
	uint8_t *p;

	KASSERT(requested_size > 0);

	KASSERT((kmflags & KM_SLEEP) || (kmflags & KM_NOSLEEP));
	KASSERT(!(kmflags & KM_SLEEP) || !(kmflags & KM_NOSLEEP));

	kasan_add_redzone(&requested_size);
	size = kmem_roundup_size(requested_size);
	allocsz = size + SIZE_SIZE;

	if ((index = ((allocsz - 1) >> KMEM_SHIFT))
	    < kmem_cache_maxidx) {
		pc = kmem_cache[index];
	} else if ((index = ((allocsz - 1) >> KMEM_BIG_SHIFT))
	    < kmem_cache_big_maxidx) {
		pc = kmem_cache_big[index];
	} else {
		int ret = uvm_km_kmem_alloc(kmem_va_arena,
		    (vsize_t)round_page(size),
		    ((kmflags & KM_SLEEP) ? VM_SLEEP : VM_NOSLEEP)
		    | VM_INSTANTFIT, (vmem_addr_t *)&p);
		if (ret) {
			return NULL;
		}
		FREECHECK_OUT(&kmem_freecheck, p);
		return p;
	}

	p = pool_cache_get(pc, kmflags);

	if (__predict_true(p != NULL)) {
		FREECHECK_OUT(&kmem_freecheck, p);
		kmem_size_set(p, requested_size);
		p += SIZE_SIZE;
		kasan_mark(p, origsize, size, KASAN_KMEM_REDZONE);
		return p;
	}
	return p;
}

/*
 * kmem_intr_zalloc: allocate zeroed wired memory.
 */
void *
kmem_intr_zalloc(size_t size, km_flag_t kmflags)
{
	void *p;

	p = kmem_intr_alloc(size, kmflags);
	if (p != NULL) {
		memset(p, 0, size);
	}
	return p;
}

/*
 * kmem_intr_free: free wired memory allocated by kmem_alloc.
 */
void
kmem_intr_free(void *p, size_t requested_size)
{
	size_t allocsz, index;
	size_t size;
	pool_cache_t pc;

	KASSERT(p != NULL);
	KASSERT(requested_size > 0);

	kasan_add_redzone(&requested_size);
	size = kmem_roundup_size(requested_size);
	allocsz = size + SIZE_SIZE;

	if ((index = ((allocsz - 1) >> KMEM_SHIFT))
	    < kmem_cache_maxidx) {
		pc = kmem_cache[index];
	} else if ((index = ((allocsz - 1) >> KMEM_BIG_SHIFT))
	    < kmem_cache_big_maxidx) {
		pc = kmem_cache_big[index];
	} else {
		FREECHECK_IN(&kmem_freecheck, p);
		uvm_km_kmem_free(kmem_va_arena, (vaddr_t)p,
		    round_page(size));
		return;
	}

	kasan_mark(p, size, size, 0);

	p = (uint8_t *)p - SIZE_SIZE;
	kmem_size_check(p, requested_size);
	FREECHECK_IN(&kmem_freecheck, p);
	LOCKDEBUG_MEM_CHECK(p, size);

	pool_cache_put(pc, p);
}

/* -------------------------------- Kmem API -------------------------------- */

/*
 * kmem_alloc: allocate wired memory.
 * => must not be called from interrupt context.
 */
void *
kmem_alloc(size_t size, km_flag_t kmflags)
{
	void *v;

	KASSERTMSG((!cpu_intr_p() && !cpu_softintr_p()),
	    "kmem(9) should not be used from the interrupt context");
	v = kmem_intr_alloc(size, kmflags);
	if (__predict_true(v != NULL)) {
		kmsan_mark(v, size, KMSAN_STATE_UNINIT);
		kmsan_orig(v, size, KMSAN_TYPE_KMEM, __RET_ADDR);
	}
	KASSERT(v || (kmflags & KM_NOSLEEP) != 0);
	return v;
}

/*
 * kmem_zalloc: allocate zeroed wired memory.
 * => must not be called from interrupt context.
 */
void *
kmem_zalloc(size_t size, km_flag_t kmflags)
{
	void *v;

	KASSERTMSG((!cpu_intr_p() && !cpu_softintr_p()),
	    "kmem(9) should not be used from the interrupt context");
	v = kmem_intr_zalloc(size, kmflags);
	KASSERT(v || (kmflags & KM_NOSLEEP) != 0);
	return v;
}

/*
 * kmem_free: free wired memory allocated by kmem_alloc.
 * => must not be called from interrupt context.
 */
void
kmem_free(void *p, size_t size)
{
	KASSERT(!cpu_intr_p());
	KASSERT(!cpu_softintr_p());
	kmem_intr_free(p, size);
	kmsan_mark(p, size, KMSAN_STATE_INITED);
}

static size_t
kmem_create_caches(const struct kmem_cache_info *array,
    pool_cache_t alloc_table[], size_t maxsize, int shift, int ipl)
{
	size_t maxidx = 0;
	size_t table_unit = (1 << shift);
	size_t size = table_unit;
	int i;

	for (i = 0; array[i].kc_size != 0 ; i++) {
		const char *name = array[i].kc_name;
		size_t cache_size = array[i].kc_size;
		struct pool_allocator *pa;
		int flags = 0;
		pool_cache_t pc;
		size_t align;

		if ((cache_size & (CACHE_LINE_SIZE - 1)) == 0)
			align = CACHE_LINE_SIZE;
		else if ((cache_size & (PAGE_SIZE - 1)) == 0)
			align = PAGE_SIZE;
		else
			align = KMEM_ALIGN;

		if (cache_size < CACHE_LINE_SIZE)
			flags |= PR_NOTOUCH;

		/* check if we reached the requested size */
		if (cache_size > maxsize || cache_size > PAGE_SIZE) {
			break;
		}
		if ((cache_size >> shift) > maxidx) {
			maxidx = cache_size >> shift;
		}

		pa = &pool_allocator_kmem;
		pc = pool_cache_init(cache_size, align, 0, flags,
		    name, pa, ipl, NULL, NULL, NULL);

		while (size <= cache_size) {
			alloc_table[(size - 1) >> shift] = pc;
			size += table_unit;
		}
	}
	return maxidx;
}

void
kmem_init(void)
{
	kmem_cache_maxidx = kmem_create_caches(kmem_cache_sizes,
	    kmem_cache, KMEM_MAXSIZE, KMEM_SHIFT, IPL_VM);
	kmem_cache_big_maxidx = kmem_create_caches(kmem_cache_big_sizes,
	    kmem_cache_big, PAGE_SIZE, KMEM_BIG_SHIFT, IPL_VM);
}

size_t
kmem_roundup_size(size_t size)
{
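	/* e.g. with KMEM_ALIGN = 8: 13 -> (13 + 7) & ~7 = 16 */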
	return (size + (KMEM_ALIGN - 1)) & ~(KMEM_ALIGN - 1);
}

/*
 * Used to dynamically allocate a string with kmem according to a format.
 */
char *
kmem_asprintf(const char *fmt, ...)
{
	int size __diagused, len;
	va_list va;
	char *str;

	va_start(va, fmt);
	len = vsnprintf(NULL, 0, fmt, va);
	va_end(va);

	str = kmem_alloc(len + 1, KM_SLEEP);

	va_start(va, fmt);
	size = vsnprintf(str, len + 1, fmt, va);
	va_end(va);

	KASSERT(size == len);

	return str;
}

char *
kmem_strdupsize(const char *str, size_t *lenp, km_flag_t flags)
{
	size_t len = strlen(str) + 1;
	char *ptr = kmem_alloc(len, flags);
	if (ptr == NULL)
		return NULL;

	if (lenp)
		*lenp = len;
	memcpy(ptr, str, len);
	return ptr;
}

char *
kmem_strndup(const char *str, size_t maxlen, km_flag_t flags)
{
	KASSERT(str != NULL);
	KASSERT(maxlen != 0);

	size_t len = strnlen(str, maxlen);
	char *ptr = kmem_alloc(len + 1, flags);
	if (ptr == NULL)
		return NULL;

	memcpy(ptr, str, len);
	ptr[len] = '\0';

	return ptr;
}

void
kmem_strfree(char *str)
{
	if (str == NULL)
		return;

	kmem_free(str, strlen(str) + 1);
}

/* --------------------------- DEBUG / DIAGNOSTIC --------------------------- */

#if defined(KMEM_SIZE)
static void
kmem_size_set(void *p, size_t sz)
{
	struct kmem_header *hd;
	hd = (struct kmem_header *)p;
	hd->size = sz;
}

static void
kmem_size_check(void *p, size_t sz)
{
	struct kmem_header *hd;
	size_t hsz;

	hd = (struct kmem_header *)p;
	hsz = hd->size;

	if (hsz != sz) {
		panic("kmem_free(%p, %zu) != allocated size %zu",
		    (const uint8_t *)p + SIZE_SIZE, sz, hsz);
	}

	hd->size = -1;
}
#endif /* defined(KMEM_SIZE) */
|