author     Vlastimil Babka <vbabka@suse.cz>  2020-12-14 19:04:29 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-12-15 12:13:37 -0800
commit  0c06dd75514327be4b1c22b109341ff7dfeeff98 (patch)
tree    32c989ea85c0afdb67510641c8d1dc40031327b0 /mm/slab.c
parent  a47fc51d8e1e9ce0f2d8fd9e5197649f00bac4ca (diff)
mm, slab, slub: clear the slab_cache field when freeing page
The page allocator expects that page->mapping is NULL for a page being
freed.  SLAB and SLUB use the slab_cache field, which is in a union with
mapping, but before freeing the page the field is referenced under the
"mapping" name when it is set to NULL.

It's IMHO more correct (albeit functionally the same) to use the
slab_cache name, as that's the field we use in SL*B, and to document in a
comment why we clear it (we don't clear fields such as s_mem or freelist,
as the page allocator doesn't care about those).  While using the
'mapping' name would automagically keep the code correct if the unions in
struct page changed, such changes should be done consciously and the
needed adjustments evaluated - the comment should help with that.

Link: https://lkml.kernel.org/r/20201210160020.21562-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
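[Editor's note: for illustration only, here is a minimal, self-contained C
sketch of the union aliasing the commit message relies on.  The struct and
the field names other than mapping and slab_cache are invented for this
example and do not match the real layout in <linux/mm_types.h>.]

/*
 * Illustrative sketch only -- a simplified stand-in for struct page,
 * showing why the slab_cache and mapping names refer to the same
 * storage.  All other field names here are made up for the example.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct kmem_cache;              /* opaque for this sketch */
struct address_space;           /* opaque for this sketch */

struct fake_page {
	unsigned long flags;
	union {
		struct {                        /* page cache / anon pages */
			void *lru_next, *lru_prev;
			struct address_space *mapping;
		};
		struct {                        /* slab pages */
			void *slab_next, *slab_prev;
			struct kmem_cache *slab_cache;
		};
	};
};

int main(void)
{
	struct fake_page page = { 0 };

	/* Both names label the same bytes in the union... */
	assert(offsetof(struct fake_page, mapping) ==
	       offsetof(struct fake_page, slab_cache));

	/*
	 * ...so clearing slab_cache also leaves mapping NULL, which is
	 * what the page allocator expects for a page being freed.
	 */
	page.slab_cache = NULL;
	assert(page.mapping == NULL);

	printf("mapping and slab_cache share offset %zu\n",
	       offsetof(struct fake_page, mapping));
	return 0;
}

[Built with a C11 compiler, both assertions hold, mirroring why the new
page->slab_cache = NULL line below still satisfies the page allocator's
expectation that page->mapping is NULL.]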
Diffstat (limited to 'mm/slab.c')
-rw-r--r--  mm/slab.c  3
1 file changed, 2 insertions, 1 deletion
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..2e67a513b0c9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1399,7 +1399,8 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
__ClearPageSlabPfmemalloc(page);
__ClearPageSlab(page);
page_mapcount_reset(page);
- page->mapping = NULL;
+ /* In union with page->mapping where page allocator expects NULL */
+ page->slab_cache = NULL;
if (current->reclaim_state)
current->reclaim_state->reclaimed_slab += 1 << order;