author      Dmitry Vyukov <dvyukov@google.com>              2018-02-06 15:36:30 -0800
committer   Linus Torvalds <torvalds@linux-foundation.org>  2018-02-06 18:32:43 -0800
commit      6860f6340c0918cddcd3c9fcf8c36401c8184268 (patch)
tree        59b652f61341d35394d745a3010bb437a54d6421 /mm/mempool.c
parent      ee3ce779b58c31acacdfab0ad6c86d428ba2c2e3 (diff)
kasan: detect invalid frees for large mempool objects
Detect frees of pointers into the middle of mempool objects.
I tried to add a one-off test, but it turned out to be very tricky, so I
reverted it.  First, mempool does not call kasan_poison_kfree() unless
the allocation function fails.  I stubbed an allocation function to fail
on the second and subsequent allocations.  But then mempool stopped
calling kasan_poison_kfree() altogether, because it only does so when
the allocation function is mempool_kmalloc().  We could make mempool
support this special failing test allocation function, but the test also
can't live with the kasan tests, because those are built as a module.
Link: http://lkml.kernel.org/r/bf7a7d035d7a5ed62d2dd0e3d2e8a4fcdf456aa7.1514378558.git.dvyukov@google.com
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
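
As an illustration of the one-off test described above, here is a minimal,
hypothetical module sketch.  It is not part of this patch: the function
names, pool size and element size are invented, and (as the message
explains) the element only travels through kasan_poison_kfree() when it
is returned to the reserved pool, that is, after the backing kmalloc has
failed, which is exactly what makes a deterministic in-tree test awkward.

/*
 * Hypothetical demo, not from this patch: free a pointer into the middle
 * of a kmalloc-backed mempool element, the case KASAN now reports.
 * kasan_poison_element() is only reached when the element is handed back
 * to the reserved pool (pool->curr_nr < pool->min_nr), which cannot be
 * forced here without the stubbed allocation function described above.
 */
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/module.h>

static int __init mempool_middle_free_demo_init(void)
{
	mempool_t *pool;
	void *elem;

	/* kmalloc-backed pool, so kasan_poison_element() covers its elements */
	pool = mempool_create_kmalloc_pool(4, 128);
	if (!pool)
		return -ENOMEM;

	elem = mempool_alloc(pool, GFP_KERNEL);
	if (elem)
		/* invalid free: the pointer is offset into the object */
		mempool_free((char *)elem + 1, pool);

	mempool_destroy(pool);
	return 0;
}
module_init(mempool_middle_free_demo_init);
MODULE_LICENSE("GPL");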
Diffstat (limited to 'mm/mempool.c')
-rw-r--r--  mm/mempool.c | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/mempool.c b/mm/mempool.c
index 7d8c5a0010a2..5c9dce34719b 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -103,10 +103,10 @@ static inline void poison_element(mempool_t *pool, void *element)
 }
 #endif /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */
 
-static void kasan_poison_element(mempool_t *pool, void *element)
+static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 {
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
-		kasan_poison_kfree(element);
+		kasan_poison_kfree(element, _RET_IP_);
 	if (pool->alloc == mempool_alloc_pages)
 		kasan_free_pages(element, (unsigned long)pool->pool_data);
 }
@@ -119,7 +119,7 @@ static void kasan_unpoison_element(mempool_t *pool, void *element, gfp_t flags)
 		kasan_alloc_pages(element, (unsigned long)pool->pool_data);
 }
 
-static void add_element(mempool_t *pool, void *element)
+static __always_inline void add_element(mempool_t *pool, void *element)
 {
 	BUG_ON(pool->curr_nr >= pool->min_nr);
 	poison_element(pool, element);
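
Beyond passing _RET_IP_, the patch also forces the two helpers to be
__always_inline, presumably so that _RET_IP_ resolves to the caller of
mempool_free() (or of the pool setup paths) rather than to a call site
inside mm/mempool.c, which gives the KASAN report a useful free location.
A stand-alone sketch of that compiler behaviour follows; it is user-space
C with invented names, not kernel code.

/*
 * Once a helper is forced inline, __builtin_return_address(0) inside it
 * identifies the caller of the function the helper was inlined into,
 * the same effect _RET_IP_ relies on in the patch above.
 */
#include <stdio.h>

static inline __attribute__((always_inline)) void record_free_site(void)
{
	/* After inlining into outer(), this is the address in main() that outer() returns to. */
	printf("free called from %p\n", __builtin_return_address(0));
}

static __attribute__((noinline)) void outer(void)
{
	record_free_site();	/* stands in for the KASAN helper inlined into mempool_free() */
}

int main(void)
{
	outer();		/* the printed address points back to this call site */
	return 0;
}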