From 4c6759967826b87f56c73e0f1deb7b76379ccd23 Mon Sep 17 00:00:00 2001 From: Vlastimil Babka Date: Mon, 27 Feb 2023 17:00:14 -0800 Subject: mm/mremap: fix dup_anon_vma() in vma_merge() case 4 In case 4, we are shrinking 'prev' (PPPP in the comment) and expanding 'mid' (NNNN). So we need to make sure 'mid' clones the anon_vma from 'prev', if it doesn't have any. After commit 0503ea8f5ba7 ("mm/mmap: remove __vma_adjust()") we can fail to do that due to wrong parameters for dup_anon_vma(). The call is a no-op because res == next, adjust == mid and mid == next. Fix it. Link: https://lkml.kernel.org/r/ad91d62b-37eb-4b73-707a-3c45c9e16256@suse.cz Fixes: 0503ea8f5ba7 ("mm/mmap: remove __vma_adjust()") Signed-off-by: Vlastimil Babka Reviewed-by: Liam R. Howlett Signed-off-by: Andrew Morton --- mm/mmap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/mmap.c b/mm/mmap.c index 20f21f0949dd..740b54be3ed4 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -973,7 +973,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm, vma_end = addr; adjust = mid; adj_next = -(vma->vm_end - addr); - err = dup_anon_vma(res, adjust); + err = dup_anon_vma(adjust, prev); } else { vma = next; /* case 3 */ vma_start = addr; -- cgit v1.2.3 From 3f98c9a62c338bbe06a215c9491e6166ea39bf82 Mon Sep 17 00:00:00 2001 From: "andrew.yang" Date: Wed, 22 Feb 2023 14:42:20 +0800 Subject: mm/damon/paddr: fix missing folio_put() damon_get_folio() would always increase folio _refcount and folio_isolate_lru() would increase folio _refcount if the folio's lru flag is set. If an unevictable folio isolated successfully, there will be two more _refcount. The one from folio_isolate_lru() will be decreased in folio_puback_lru(), but the other one from damon_get_folio() will be left behind. This causes a pin page. Whatever the case, the _refcount from damon_get_folio() should be decreased. Link: https://lkml.kernel.org/r/20230222064223.6735-1-andrew.yang@mediatek.com Fixes: 57223ac29584 ("mm/damon/paddr: support the pageout scheme") Signed-off-by: andrew.yang Reviewed-by: SeongJae Park Cc: [5.16.x] Signed-off-by: Andrew Morton --- mm/damon/paddr.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c index 607bb69e526c..6c655d9b5639 100644 --- a/mm/damon/paddr.c +++ b/mm/damon/paddr.c @@ -250,12 +250,11 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) folio_put(folio); continue; } - if (folio_test_unevictable(folio)) { + if (folio_test_unevictable(folio)) folio_putback_lru(folio); - } else { + else list_add(&folio->lru, &folio_list); - folio_put(folio); - } + folio_put(folio); } applied = reclaim_pages(&folio_list); cond_resched(); -- cgit v1.2.3 From 1c0a0af511effbd75beffa6c5e54fd64d3563860 Mon Sep 17 00:00:00 2001 From: Mikhail Zaslonko Date: Tue, 21 Feb 2023 14:16:17 +0100 Subject: lib/zlib: DFLTCC deflate does not write all available bits for Z_NO_FLUSH DFLTCC deflate with Z_NO_FLUSH might generate a corrupted stream when the output buffer is not large enough to fit all the deflate output at once. The problem takes place on closing the deflate block since flush_pending() might leave some output bits not written. Similar problem for software deflate with Z_BLOCK flush option (not supported by kernel zlib deflate) has been fixed a while ago in userspace zlib but the fix never got to the kernel. 
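Concretely, before this change flush_pending() only drained the byte-aligned part of the output; this is a condensed view of the old code (the hunk below is authoritative, only the relevant lines are shown):

    static inline void flush_pending(z_streamp strm)
    {
        deflate_state *s = (deflate_state *) strm->state;
        unsigned len = s->pending;  /* whole bytes already queued; any bits
                                     * still sitting in the bit buffer are
                                     * not accounted for here */

        if (len > strm->avail_out) len = strm->avail_out;
        if (len == 0) return;
        /* ... copy len bytes to strm->next_out ... */
    }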
Now flush_pending() flushes the bit buffer before copying out the byte buffer, in order to really flush as much as possible. Currently there are no users of DFLTCC deflate with Z_NO_FLUSH option in the kernel so the problem remained hidden for a while. This commit is based on the old zlib commit: https://github.com/madler/zlib/commit/0b828b4 Link: https://lkml.kernel.org/r/20230221131617.3369978-2-zaslonko@linux.ibm.com Signed-off-by: Mikhail Zaslonko Acked-by: Ilya Leoshkevich Cc: Heiko Carstens Cc: Vasily Gorbik Signed-off-by: Andrew Morton --- lib/zlib_deflate/defutil.h | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/lib/zlib_deflate/defutil.h b/lib/zlib_deflate/defutil.h index 385333b22ec6..4ea40f5a279f 100644 --- a/lib/zlib_deflate/defutil.h +++ b/lib/zlib_deflate/defutil.h @@ -420,9 +420,11 @@ static inline void flush_pending( z_streamp strm ) { + unsigned len; deflate_state *s = (deflate_state *) strm->state; - unsigned len = s->pending; + bi_flush(s); + len = s->pending; if (len > strm->avail_out) len = strm->avail_out; if (len == 0) return; -- cgit v1.2.3 From 6da6b1d4a7df8c35770186b53ef65d388398e139 Mon Sep 17 00:00:00 2001 From: Naoya Horiguchi Date: Tue, 21 Feb 2023 17:59:05 +0900 Subject: mm/hwpoison: convert TTU_IGNORE_HWPOISON to TTU_HWPOISON After a memory error happens on a clean folio, a process unexpectedly receives SIGBUS when it accesses the error page. This SIGBUS killing is pointless and simply degrades the level of RAS of the system, because the clean folio can be dropped without any data lost on memory error handling as we do for a clean pagecache. When memory_failure() is called on a clean folio, try_to_unmap() is called twice (one from split_huge_page() and one from hwpoison_user_mappings()). The root cause of the issue is that pte conversion to hwpoisoned entry is now done in the first call of try_to_unmap() because PageHWPoison is already set at this point, while it's actually expected to be done in the second call. This behavior disturbs the error handling operation like removing pagecache, which results in the malfunction described above. So convert TTU_IGNORE_HWPOISON into TTU_HWPOISON and set TTU_HWPOISON only when we really intend to convert pte to hwpoison entry. This can prevent other callers of try_to_unmap() from accidentally converting to hwpoison entries. 
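Condensed, the polarity change in try_to_unmap_one() looks like this (the hunks below are authoritative):

    /* before: poisoning was the default, TTU_IGNORE_HWPOISON opted out */
    if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON))
            pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));

    /* after: a plain unmap is the default, TTU_HWPOISON opts in */
    if (PageHWPoison(subpage) && (flags & TTU_HWPOISON))
            pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));

With this, only hwpoison_user_mappings(), which now passes TTU_HWPOISON explicitly (and clears it again for the swap-cache and clean-page cases), installs hwpoison entries; the earlier try_to_unmap() call from split_huge_page() no longer does.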
Link: https://lkml.kernel.org/r/20230221085905.1465385-1-naoya.horiguchi@linux.dev Fixes: a42634a6c07d ("readahead: Use a folio in read_pages()") Signed-off-by: Naoya Horiguchi Cc: David Hildenbrand Cc: Hugh Dickins Cc: Matthew Wilcox Cc: Miaohe Lin Cc: Minchan Kim Cc: Vlastimil Babka Cc: Signed-off-by: Andrew Morton --- include/linux/rmap.h | 2 +- mm/memory-failure.c | 8 ++++---- mm/rmap.c | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index a4570da03e58..b87d01660412 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -94,7 +94,7 @@ enum ttu_flags { TTU_SPLIT_HUGE_PMD = 0x4, /* split huge PMD if any */ TTU_IGNORE_MLOCK = 0x8, /* ignore mlock */ TTU_SYNC = 0x10, /* avoid racy checks with PVMW_SYNC */ - TTU_IGNORE_HWPOISON = 0x20, /* corrupted page is recoverable */ + TTU_HWPOISON = 0x20, /* do convert pte to hwpoison entry */ TTU_BATCH_FLUSH = 0x40, /* Batch TLB flushes where possible * and caller guarantees they will * do a final flush if necessary */ diff --git a/mm/memory-failure.c b/mm/memory-failure.c index a1ede7bdce95..fae9baf3be16 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1069,7 +1069,7 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p) * cache and swap cache(ie. page is freshly swapped in). So it could be * referenced concurrently by 2 types of PTEs: * normal PTEs and swap PTEs. We try to handle them consistently by calling - * try_to_unmap(TTU_IGNORE_HWPOISON) to convert the normal PTEs to swap PTEs, + * try_to_unmap(!TTU_HWPOISON) to convert the normal PTEs to swap PTEs, * and then * - clear dirty bit to prevent IO * - remove from LRU @@ -1486,7 +1486,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn, int flags, struct page *hpage) { struct folio *folio = page_folio(hpage); - enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC; + enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON; struct address_space *mapping; LIST_HEAD(tokill); bool unmap_success; @@ -1516,7 +1516,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn, if (PageSwapCache(p)) { pr_err("%#lx: keeping poisoned page in swap cache\n", pfn); - ttu |= TTU_IGNORE_HWPOISON; + ttu &= ~TTU_HWPOISON; } /* @@ -1531,7 +1531,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn, if (page_mkclean(hpage)) { SetPageDirty(hpage); } else { - ttu |= TTU_IGNORE_HWPOISON; + ttu &= ~TTU_HWPOISON; pr_info("%#lx: corrupted page was clean: dropped without side effects\n", pfn); } diff --git a/mm/rmap.c b/mm/rmap.c index 15ae24585fc4..8632e02661ac 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1602,7 +1602,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, /* Update high watermark before we lower rss */ update_hiwater_rss(mm); - if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) { + if (PageHWPoison(subpage) && (flags & TTU_HWPOISON)) { pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); if (folio_test_hugetlb(folio)) { hugetlb_count_sub(folio_nr_pages(folio), mm); -- cgit v1.2.3 From d49b497060387545ca10210590adef6edb296583 Mon Sep 17 00:00:00 2001 From: Konrad Dybcio Date: Fri, 17 Feb 2023 21:35:16 +0100 Subject: mailmap: map Georgi Djakov's old Linaro address to his current one Georgi's old email is still picked up by the likes of get_maintainer.pl and it keeps bouncing every time one submits an interconnect patch. Map it to his current @kernel.org one. 
Link: https://lkml.kernel.org/r/20230217203516.826424-1-konrad.dybcio@linaro.org Signed-off-by: Konrad Dybcio Acked-by: Georgi Djakov Cc: Arnd Bergmann Cc: Baolin Wang Cc: Colin Ian King Cc: Kirill Tkhai Cc: Qais Yousef Cc: Vasily Averin Signed-off-by: Andrew Morton --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index a872c9683958..fb65947d671d 100644 --- a/.mailmap +++ b/.mailmap @@ -150,6 +150,7 @@ Gao Xiang Gao Xiang Gao Xiang Gao Xiang +Georgi Djakov Gerald Schaefer Gerald Schaefer Gerald Schaefer -- cgit v1.2.3 From 60eed1e3d45045623e46944ebc7c42c30a4350f0 Mon Sep 17 00:00:00 2001 From: Heming Zhao via Ocfs2-devel Date: Fri, 17 Feb 2023 08:37:17 +0800 Subject: ocfs2: fix defrag path triggering jbd2 ASSERT code path: ocfs2_ioctl_move_extents ocfs2_move_extents ocfs2_defrag_extent __ocfs2_move_extent + ocfs2_journal_access_di + ocfs2_split_extent //sub-paths call jbd2_journal_restart + ocfs2_journal_dirty //crash by jbs2 ASSERT crash stacks: PID: 11297 TASK: ffff974a676dcd00 CPU: 67 COMMAND: "defragfs.ocfs2" #0 [ffffb25d8dad3900] machine_kexec at ffffffff8386fe01 #1 [ffffb25d8dad3958] __crash_kexec at ffffffff8395959d #2 [ffffb25d8dad3a20] crash_kexec at ffffffff8395a45d #3 [ffffb25d8dad3a38] oops_end at ffffffff83836d3f #4 [ffffb25d8dad3a58] do_trap at ffffffff83833205 #5 [ffffb25d8dad3aa0] do_invalid_op at ffffffff83833aa6 #6 [ffffb25d8dad3ac0] invalid_op at ffffffff84200d18 [exception RIP: jbd2_journal_dirty_metadata+0x2ba] RIP: ffffffffc09ca54a RSP: ffffb25d8dad3b70 RFLAGS: 00010207 RAX: 0000000000000000 RBX: ffff9706eedc5248 RCX: 0000000000000000 RDX: 0000000000000001 RSI: ffff97337029ea28 RDI: ffff9706eedc5250 RBP: ffff9703c3520200 R8: 000000000f46b0b2 R9: 0000000000000000 R10: 0000000000000001 R11: 00000001000000fe R12: ffff97337029ea28 R13: 0000000000000000 R14: ffff9703de59bf60 R15: ffff9706eedc5250 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 #7 [ffffb25d8dad3ba8] ocfs2_journal_dirty at ffffffffc137fb95 [ocfs2] #8 [ffffb25d8dad3be8] __ocfs2_move_extent at ffffffffc139a950 [ocfs2] #9 [ffffb25d8dad3c80] ocfs2_defrag_extent at ffffffffc139b2d2 [ocfs2] Analysis This bug has the same root cause of 'commit 7f27ec978b0e ("ocfs2: call ocfs2_journal_access_di() before ocfs2_journal_dirty() in ocfs2_write_end_nolock()")'. For this bug, jbd2_journal_restart() is called by ocfs2_split_extent() during defragmenting. How to fix For ocfs2_split_extent() can handle journal operations totally by itself. Caller doesn't need to call journal access/dirty pair, and caller only needs to call journal start/stop pair. The fix method is to remove journal access/dirty from __ocfs2_move_extent(). 
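In other words, the intended division of responsibility in the defrag path is roughly the following (a sketch, assuming the usual ocfs2_start_trans()/ocfs2_commit_trans() helpers; only the split call is taken verbatim from the code below):

    handle = ocfs2_start_trans(osb, credits);       /* journal start: caller */

    ret = ocfs2_split_extent(handle, &context->et, path, index,
                             &replace_rec, context->meta_ac,
                             &context->dealloc);    /* does its own access/dirty
                                                     * and may call
                                                     * jbd2_journal_restart() */

    ocfs2_commit_trans(osb, handle);                /* journal stop: caller */

Any ocfs2_journal_access_di()/ocfs2_journal_dirty() pair the caller wraps around the split can therefore end up referring to a handle that was already restarted, which is what trips the jbd2 assertion above.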
The discussion for this patch: https://oss.oracle.com/pipermail/ocfs2-devel/2023-February/000647.html Link: https://lkml.kernel.org/r/20230217003717.32469-1-heming.zhao@suse.com Signed-off-by: Heming Zhao Reviewed-by: Joseph Qi Cc: Mark Fasheh Cc: Joel Becker Cc: Junxiao Bi Cc: Changwei Ge Cc: Gang He Cc: Jun Piao Cc: Signed-off-by: Andrew Morton --- fs/ocfs2/move_extents.c | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/fs/ocfs2/move_extents.c b/fs/ocfs2/move_extents.c index 192cad0662d8..6251748c695b 100644 --- a/fs/ocfs2/move_extents.c +++ b/fs/ocfs2/move_extents.c @@ -105,14 +105,6 @@ static int __ocfs2_move_extent(handle_t *handle, */ replace_rec.e_flags = ext_flags & ~OCFS2_EXT_REFCOUNTED; - ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), - context->et.et_root_bh, - OCFS2_JOURNAL_ACCESS_WRITE); - if (ret) { - mlog_errno(ret); - goto out; - } - ret = ocfs2_split_extent(handle, &context->et, path, index, &replace_rec, context->meta_ac, &context->dealloc); @@ -121,8 +113,6 @@ static int __ocfs2_move_extent(handle_t *handle, goto out; } - ocfs2_journal_dirty(handle, context->et.et_root_bh); - context->new_phys_cpos = new_p_cpos; /* -- cgit v1.2.3 From 236b9254f8d1edc273ad88b420aa85fbd84f492d Mon Sep 17 00:00:00 2001 From: Heming Zhao via Ocfs2-devel Date: Mon, 20 Feb 2023 13:05:26 +0800 Subject: ocfs2: fix non-auto defrag path not working issue This fixes three issues on move extents ioctl without auto defrag: a) In ocfs2_find_victim_alloc_group(), we have to convert bits to block first in case of global bitmap. b) In ocfs2_probe_alloc_group(), when finding enough bits in block group bitmap, we have to back off move_len to start pos as well, otherwise it may corrupt filesystem. c) In ocfs2_ioctl_move_extents(), set me_threshold both for non-auto and auto defrag paths. Otherwise it will set move_max_hop to 0 and finally cause unexpectedly ENOSPC error. Currently there are no tools triggering the above issues since defragfs.ocfs2 enables auto defrag by default. Tested with manually changing defragfs.ocfs2 to run non auto defrag path. Link: https://lkml.kernel.org/r/20230220050526.22020-1-heming.zhao@suse.com Signed-off-by: Heming Zhao Reviewed-by: Joseph Qi Cc: Mark Fasheh Cc: Joel Becker Cc: Junxiao Bi Cc: Changwei Ge Cc: Gang He Cc: Jun Piao Cc: Signed-off-by: Andrew Morton --- fs/ocfs2/move_extents.c | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/fs/ocfs2/move_extents.c b/fs/ocfs2/move_extents.c index 6251748c695b..b1e32ec4a9d4 100644 --- a/fs/ocfs2/move_extents.c +++ b/fs/ocfs2/move_extents.c @@ -434,7 +434,7 @@ static int ocfs2_find_victim_alloc_group(struct inode *inode, bg = (struct ocfs2_group_desc *)gd_bh->b_data; if (vict_blkno < (le64_to_cpu(bg->bg_blkno) + - le16_to_cpu(bg->bg_bits))) { + (le16_to_cpu(bg->bg_bits) << bits_per_unit))) { *ret_bh = gd_bh; *vict_bit = (vict_blkno - blkno) >> @@ -549,6 +549,7 @@ static void ocfs2_probe_alloc_group(struct inode *inode, struct buffer_head *bh, last_free_bits++; if (last_free_bits == move_len) { + i -= move_len; *goal_bit = i; *phys_cpos = base_cpos + i; break; @@ -1020,18 +1021,19 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp) context->range = ⦥ + /* + * ok, the default theshold for the defragmentation + * is 1M, since our maximum clustersize was 1M also. + * any thought? 
+ */ + if (!range.me_threshold) + range.me_threshold = 1024 * 1024; + + if (range.me_threshold > i_size_read(inode)) + range.me_threshold = i_size_read(inode); + if (range.me_flags & OCFS2_MOVE_EXT_FL_AUTO_DEFRAG) { context->auto_defrag = 1; - /* - * ok, the default theshold for the defragmentation - * is 1M, since our maximum clustersize was 1M also. - * any thought? - */ - if (!range.me_threshold) - range.me_threshold = 1024 * 1024; - - if (range.me_threshold > i_size_read(inode)) - range.me_threshold = i_size_read(inode); if (range.me_flags & OCFS2_MOVE_EXT_FL_PART_DEFRAG) context->partial = 1; -- cgit v1.2.3 From 51287dcb00cc715c27bf6a6b4dbd431621c5b65a Mon Sep 17 00:00:00 2001 From: Marco Elver Date: Fri, 24 Feb 2023 09:59:39 +0100 Subject: kasan: emit different calls for instrumentable memintrinsics Clang 15 provides an option to prefix memcpy/memset/memmove calls with __asan_/__hwasan_ in instrumented functions: https://reviews.llvm.org/D122724 GCC will add support in future: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108777 Use it to regain KASAN instrumentation of memcpy/memset/memmove on architectures that require noinstr to be really free from instrumented mem*() functions (all GENERIC_ENTRY architectures). Link: https://lkml.kernel.org/r/20230224085942.1791837-1-elver@google.com Fixes: 69d4c0d32186 ("entry, kasan, x86: Disallow overriding mem*() functions") Signed-off-by: Marco Elver Acked-by: Peter Zijlstra (Intel) Reviewed-by: Andrey Konovalov Tested-by: Linux Kernel Functional Testing Tested-by: Naresh Kamboju Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Borislav Petkov (AMD) Cc: Dave Hansen Cc: Dmitry Vyukov Cc: Ingo Molnar Cc: Jakub Jelinek Cc: kasan-dev@googlegroups.com Cc: Kees Cook Cc: Linux Kernel Functional Testing Cc: Nathan Chancellor # build only Cc: Nick Desaulniers Cc: Nicolas Schier Cc: Thomas Gleixner Cc: Vincenzo Frascino Signed-off-by: Andrew Morton --- mm/kasan/kasan.h | 4 ++++ mm/kasan/shadow.c | 11 +++++++++++ scripts/Makefile.kasan | 8 ++++++++ 3 files changed, 23 insertions(+) diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index 9377b0789edc..a61eeee3095a 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -666,4 +666,8 @@ void __hwasan_storeN_noabort(unsigned long addr, size_t size); void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size); +void *__hwasan_memset(void *addr, int c, size_t len); +void *__hwasan_memmove(void *dest, const void *src, size_t len); +void *__hwasan_memcpy(void *dest, const void *src, size_t len); + #endif /* __MM_KASAN_KASAN_H */ diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c index 3703983a8e55..90186122b21b 100644 --- a/mm/kasan/shadow.c +++ b/mm/kasan/shadow.c @@ -107,6 +107,17 @@ void *__asan_memcpy(void *dest, const void *src, size_t len) } EXPORT_SYMBOL(__asan_memcpy); +#ifdef CONFIG_KASAN_SW_TAGS +void *__hwasan_memset(void *addr, int c, size_t len) __alias(__asan_memset); +EXPORT_SYMBOL(__hwasan_memset); +#ifdef __HAVE_ARCH_MEMMOVE +void *__hwasan_memmove(void *dest, const void *src, size_t len) __alias(__asan_memmove); +EXPORT_SYMBOL(__hwasan_memmove); +#endif +void *__hwasan_memcpy(void *dest, const void *src, size_t len) __alias(__asan_memcpy); +EXPORT_SYMBOL(__hwasan_memcpy); +#endif + void kasan_poison(const void *addr, size_t size, u8 value, bool init) { void *shadow_start, *shadow_end; diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan index b9e94c5e7097..fa9f836f8039 100644 --- a/scripts/Makefile.kasan +++ b/scripts/Makefile.kasan @@ -38,6 +38,11 @@ endif CFLAGS_KASAN += 
$(call cc-param,asan-stack=$(stack_enable)) +# Instrument memcpy/memset/memmove calls by using instrumented __asan_mem*() +# instead. With compilers that don't support this option, compiler-inserted +# memintrinsics won't be checked by KASAN on GENERIC_ENTRY architectures. +CFLAGS_KASAN += $(call cc-param,asan-kernel-mem-intrinsic-prefix=1) + endif # CONFIG_KASAN_GENERIC ifdef CONFIG_KASAN_SW_TAGS @@ -54,6 +59,9 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress \ $(call cc-param,hwasan-inline-all-checks=0) \ $(instrumentation_flags) +# Instrument memcpy/memset/memmove calls by using instrumented __hwasan_mem*(). +CFLAGS_KASAN += $(call cc-param,hwasan-kernel-mem-intrinsic-prefix=1) + endif # CONFIG_KASAN_SW_TAGS export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE -- cgit v1.2.3 From 36be5cba99f6f9984a9a9f0454f95a38f4184d3e Mon Sep 17 00:00:00 2001 From: Marco Elver Date: Fri, 24 Feb 2023 09:59:40 +0100 Subject: kasan: treat meminstrinsic as builtins in uninstrumented files Where the compiler instruments meminstrinsics by generating calls to __asan/__hwasan_ prefixed functions, let the compiler consider memintrinsics as builtin again. To do so, never override memset/memmove/memcpy if the compiler does the correct instrumentation - even on !GENERIC_ENTRY architectures. [elver@google.com: powerpc: don't rename memintrinsics if compiler adds prefixes] Link: https://lore.kernel.org/all/20230224085942.1791837-1-elver@google.com/ [1] Link: https://lkml.kernel.org/r/20230227094726.3833247-1-elver@google.com Link: https://lkml.kernel.org/r/20230224085942.1791837-2-elver@google.com Fixes: 69d4c0d32186 ("entry, kasan, x86: Disallow overriding mem*() functions") Signed-off-by: Marco Elver Reviewed-by: Andrey Konovalov Tested-by: Linux Kernel Functional Testing Tested-by: Naresh Kamboju Acked-by: Michael Ellerman (powerpc) Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Borislav Petkov (AMD) Cc: Dave Hansen Cc: Dmitry Vyukov Cc: Ingo Molnar Cc: Jakub Jelinek Cc: Kees Cook Cc: Nathan Chancellor Cc: Nick Desaulniers Cc: Nicolas Schier Cc: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Vincenzo Frascino Signed-off-by: Andrew Morton --- lib/Kconfig.kasan | 9 +++++++++ mm/kasan/shadow.c | 5 ++++- scripts/Makefile.kasan | 9 +++++++++ 3 files changed, 22 insertions(+), 1 deletion(-) diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan index be6ee6020290..fdca89c05745 100644 --- a/lib/Kconfig.kasan +++ b/lib/Kconfig.kasan @@ -49,6 +49,15 @@ menuconfig KASAN if KASAN +config CC_HAS_KASAN_MEMINTRINSIC_PREFIX + def_bool (CC_IS_CLANG && $(cc-option,-fsanitize=kernel-address -mllvm -asan-kernel-mem-intrinsic-prefix=1)) || \ + (CC_IS_GCC && $(cc-option,-fsanitize=kernel-address --param asan-kernel-mem-intrinsic-prefix=1)) + # Don't define it if we don't need it: compilation of the test uses + # this variable to decide how the compiler should treat builtins. + depends on !KASAN_HW_TAGS + help + The compiler is able to prefix memintrinsics with __asan or __hwasan. + choice prompt "KASAN mode" default KASAN_GENERIC diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c index 90186122b21b..c8b86f3273b5 100644 --- a/mm/kasan/shadow.c +++ b/mm/kasan/shadow.c @@ -38,11 +38,14 @@ bool __kasan_check_write(const volatile void *p, unsigned int size) } EXPORT_SYMBOL(__kasan_check_write); -#ifndef CONFIG_GENERIC_ENTRY +#if !defined(CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX) && !defined(CONFIG_GENERIC_ENTRY) /* * CONFIG_GENERIC_ENTRY relies on compiler emitted mem*() calls to not be * instrumented. 
KASAN enabled toolchains should emit __asan_mem*() functions * for the sites they want to instrument. + * + * If we have a compiler that can instrument meminstrinsics, never override + * these, so that non-instrumented files can safely consider them as builtins. */ #undef memset void *memset(void *addr, int c, size_t len) diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan index fa9f836f8039..c186110ffa20 100644 --- a/scripts/Makefile.kasan +++ b/scripts/Makefile.kasan @@ -1,5 +1,14 @@ # SPDX-License-Identifier: GPL-2.0 + +ifdef CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX +# Safe for compiler to generate meminstrinsic calls in uninstrumented files. +CFLAGS_KASAN_NOSANITIZE := +else +# Don't let compiler generate memintrinsic calls in uninstrumented files +# because they are instrumented. CFLAGS_KASAN_NOSANITIZE := -fno-builtin +endif + KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET) cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1))) -- cgit v1.2.3 From 85f195b12d8b769fa14ef37aa8a5e6bd07458122 Mon Sep 17 00:00:00 2001 From: Marco Elver Date: Fri, 24 Feb 2023 09:59:41 +0100 Subject: kasan: test: fix test for new meminstrinsic instrumentation The tests for memset/memmove have been failing since they haven't been instrumented in 69d4c0d32186. Fix the test to recognize when memintrinsics aren't instrumented, and skip test cases accordingly. We also need to conditionally pass -fno-builtin to the test, otherwise the instrumentation pass won't recognize memintrinsics and end up not instrumenting them either. Link: https://lkml.kernel.org/r/20230224085942.1791837-3-elver@google.com Fixes: 69d4c0d32186 ("entry, kasan, x86: Disallow overriding mem*() functions") Reported-by: Linux Kernel Functional Testing Signed-off-by: Marco Elver Reviewed-by: Andrey Konovalov Tested-by: Linux Kernel Functional Testing Tested-by: Naresh Kamboju Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Borislav Petkov (AMD) Cc: Dave Hansen Cc: Dmitry Vyukov Cc: Ingo Molnar Cc: Jakub Jelinek Cc: Kees Cook Cc: Nathan Chancellor Cc: Nick Desaulniers Cc: Nicolas Schier Cc: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Vincenzo Frascino Signed-off-by: Andrew Morton --- mm/kasan/Makefile | 9 ++++++++- mm/kasan/kasan_test.c | 29 +++++++++++++++++++++++++++++ 2 files changed, 37 insertions(+), 1 deletion(-) diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile index d4837bff3b60..7634dd2a6128 100644 --- a/mm/kasan/Makefile +++ b/mm/kasan/Makefile @@ -35,7 +35,14 @@ CFLAGS_shadow.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_hw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) CFLAGS_sw_tags.o := $(CC_FLAGS_KASAN_RUNTIME) -CFLAGS_KASAN_TEST := $(CFLAGS_KASAN) -fno-builtin $(call cc-disable-warning, vla) +CFLAGS_KASAN_TEST := $(CFLAGS_KASAN) $(call cc-disable-warning, vla) +ifndef CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX +# If compiler instruments memintrinsics by prefixing them with __asan/__hwasan, +# we need to treat them normally (as builtins), otherwise the compiler won't +# recognize them as instrumentable. If it doesn't instrument them, we need to +# pass -fno-builtin, so the compiler doesn't inline them. 
+CFLAGS_KASAN_TEST += -fno-builtin +endif CFLAGS_kasan_test.o := $(CFLAGS_KASAN_TEST) CFLAGS_kasan_test_module.o := $(CFLAGS_KASAN_TEST) diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c index 74cd80c12b25..627eaf1ee1db 100644 --- a/mm/kasan/kasan_test.c +++ b/mm/kasan/kasan_test.c @@ -165,6 +165,15 @@ static void kasan_test_exit(struct kunit *test) kunit_skip((test), "Test requires " #config "=n"); \ } while (0) +#define KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test) do { \ + if (IS_ENABLED(CONFIG_KASAN_HW_TAGS)) \ + break; /* No compiler instrumentation. */ \ + if (IS_ENABLED(CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX)) \ + break; /* Should always be instrumented! */ \ + if (IS_ENABLED(CONFIG_GENERIC_ENTRY)) \ + kunit_skip((test), "Test requires checked mem*()"); \ +} while (0) + static void kmalloc_oob_right(struct kunit *test) { char *ptr; @@ -454,6 +463,8 @@ static void kmalloc_oob_16(struct kunit *test) u64 words[2]; } *ptr1, *ptr2; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + /* This test is specifically crafted for the generic mode. */ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); @@ -476,6 +487,8 @@ static void kmalloc_uaf_16(struct kunit *test) u64 words[2]; } *ptr1, *ptr2; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr1 = kmalloc(sizeof(*ptr1), GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1); @@ -498,6 +511,8 @@ static void kmalloc_oob_memset_2(struct kunit *test) char *ptr; size_t size = 128 - KASAN_GRANULE_SIZE; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr = kmalloc(size, GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); @@ -511,6 +526,8 @@ static void kmalloc_oob_memset_4(struct kunit *test) char *ptr; size_t size = 128 - KASAN_GRANULE_SIZE; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr = kmalloc(size, GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); @@ -524,6 +541,8 @@ static void kmalloc_oob_memset_8(struct kunit *test) char *ptr; size_t size = 128 - KASAN_GRANULE_SIZE; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr = kmalloc(size, GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); @@ -537,6 +556,8 @@ static void kmalloc_oob_memset_16(struct kunit *test) char *ptr; size_t size = 128 - KASAN_GRANULE_SIZE; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr = kmalloc(size, GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); @@ -550,6 +571,8 @@ static void kmalloc_oob_in_memset(struct kunit *test) char *ptr; size_t size = 128 - KASAN_GRANULE_SIZE; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr = kmalloc(size, GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); @@ -566,6 +589,8 @@ static void kmalloc_memmove_negative_size(struct kunit *test) size_t size = 64; size_t invalid_size = -2; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + /* * Hardware tag-based mode doesn't check memmove for negative size. * As a result, this test introduces a side-effect memory corruption, @@ -590,6 +615,8 @@ static void kmalloc_memmove_invalid_size(struct kunit *test) size_t size = 64; size_t invalid_size = size; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + ptr = kmalloc(size, GFP_KERNEL); KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); @@ -618,6 +645,8 @@ static void kmalloc_uaf_memset(struct kunit *test) char *ptr; size_t size = 33; + KASAN_TEST_NEEDS_CHECKED_MEMINTRINSICS(test); + /* * Only generic KASAN uses quarantine, which is required to avoid a * kernel memory corruption this test causes. 
-- cgit v1.2.3 From 4ec4190be4cf9cc3e0ccaf5f155a5f9066d18950 Mon Sep 17 00:00:00 2001 From: Marco Elver Date: Fri, 24 Feb 2023 09:59:42 +0100 Subject: kasan, x86: don't rename memintrinsics in uninstrumented files Now that memcpy/memset/memmove are no longer overridden by KASAN, we can just use the normal symbol names in uninstrumented files. Drop the preprocessor redefinitions. Link: https://lkml.kernel.org/r/20230224085942.1791837-4-elver@google.com Fixes: 69d4c0d32186 ("entry, kasan, x86: Disallow overriding mem*() functions") Signed-off-by: Marco Elver Reviewed-by: Andrey Konovalov Cc: Alexander Potapenko Cc: Andrey Ryabinin Cc: Borislav Petkov (AMD) Cc: Dave Hansen Cc: Dmitry Vyukov Cc: Ingo Molnar Cc: Jakub Jelinek Cc: Kees Cook Cc: Linux Kernel Functional Testing Cc: Naresh Kamboju Cc: Nathan Chancellor Cc: Nick Desaulniers Cc: Nicolas Schier Cc: Peter Zijlstra (Intel) Cc: Thomas Gleixner Cc: Vincenzo Frascino Signed-off-by: Andrew Morton --- arch/x86/include/asm/string_64.h | 19 ------------------- 1 file changed, 19 deletions(-) diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h index 888731ccf1f6..c1e14cee0722 100644 --- a/arch/x86/include/asm/string_64.h +++ b/arch/x86/include/asm/string_64.h @@ -85,25 +85,6 @@ char *strcpy(char *dest, const char *src); char *strcat(char *dest, const char *src); int strcmp(const char *cs, const char *ct); -#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)) -/* - * For files that not instrumented (e.g. mm/slub.c) we - * should use not instrumented version of mem* functions. - */ - -#undef memcpy -#define memcpy(dst, src, len) __memcpy(dst, src, len) -#undef memmove -#define memmove(dst, src, len) __memmove(dst, src, len) -#undef memset -#define memset(s, c, n) __memset(s, c, n) - -#ifndef __NO_FORTIFY -#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */ -#endif - -#endif - #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE #define __HAVE_ARCH_MEMCPY_FLUSHCACHE 1 void __memcpy_flushcache(void *dst, const void *src, size_t cnt); -- cgit v1.2.3 From 359d62559f578dc1ac86fd2d198f1455e9d8ce04 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Thu, 23 Feb 2023 20:26:18 -0800 Subject: lib: parser: update documentation for match_NUMBER functions commit 67222c4ba8af ("lib: parser: optimize match_NUMBER apis to use local array") removed -ENOMEM as a possible return value, so update the comments accordingly. Link: https://lkml.kernel.org/r/20230224042618.9092-1-ebiggers@kernel.org Fixes: 67222c4ba8af ("lib: parser: optimize match_NUMBER apis to use local array") Signed-off-by: Eric Biggers Cc: Li Lingfeng Cc: Tejun Heo Cc: Yu Kuai Signed-off-by: Andrew Morton --- lib/parser.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/lib/parser.c b/lib/parser.c index 2b5e2b480253..f4eafb9d74e6 100644 --- a/lib/parser.c +++ b/lib/parser.c @@ -133,7 +133,7 @@ EXPORT_SYMBOL(match_token); * as a number in that base. * * Return: On success, sets @result to the integer represented by the - * string and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * string and returns 0. Returns -EINVAL or -ERANGE on failure. */ static int match_number(substring_t *s, int *result, int base) { @@ -165,7 +165,7 @@ static int match_number(substring_t *s, int *result, int base) * as a number in that base. * * Return: On success, sets @result to the integer represented by the - * string and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * string and returns 0. 
Returns -EINVAL or -ERANGE on failure. */ static int match_u64int(substring_t *s, u64 *result, int base) { @@ -189,7 +189,7 @@ static int match_u64int(substring_t *s, u64 *result, int base) * Description: Attempts to parse the &substring_t @s as a decimal integer. * * Return: On success, sets @result to the integer represented by the string - * and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * and returns 0. Returns -EINVAL or -ERANGE on failure. */ int match_int(substring_t *s, int *result) { @@ -205,7 +205,7 @@ EXPORT_SYMBOL(match_int); * Description: Attempts to parse the &substring_t @s as a decimal integer. * * Return: On success, sets @result to the integer represented by the string - * and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * and returns 0. Returns -EINVAL or -ERANGE on failure. */ int match_uint(substring_t *s, unsigned int *result) { @@ -228,7 +228,7 @@ EXPORT_SYMBOL(match_uint); * integer. * * Return: On success, sets @result to the integer represented by the string - * and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * and returns 0. Returns -EINVAL or -ERANGE on failure. */ int match_u64(substring_t *s, u64 *result) { @@ -244,7 +244,7 @@ EXPORT_SYMBOL(match_u64); * Description: Attempts to parse the &substring_t @s as an octal integer. * * Return: On success, sets @result to the integer represented by the string - * and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * and returns 0. Returns -EINVAL or -ERANGE on failure. */ int match_octal(substring_t *s, int *result) { @@ -260,7 +260,7 @@ EXPORT_SYMBOL(match_octal); * Description: Attempts to parse the &substring_t @s as a hexadecimal integer. * * Return: On success, sets @result to the integer represented by the string - * and returns 0. Returns -ENOMEM, -EINVAL, or -ERANGE on failure. + * and returns 0. Returns -EINVAL or -ERANGE on failure. */ int match_hex(substring_t *s, int *result) { -- cgit v1.2.3 From b905039e428d639adeebb719b76f98865ea38d4d Mon Sep 17 00:00:00 2001 From: "Guilherme G. Piccoli" Date: Sun, 26 Feb 2023 13:08:38 -0300 Subject: panic: fix the panic_print NMI backtrace setting Commit 8d470a45d1a6 ("panic: add option to dump all CPUs backtraces in panic_print") introduced a setting for the "panic_print" kernel parameter to allow users to request a NMI backtrace on panic. Problem is that the panic_print handling happens after the secondary CPUs are already disabled, hence this option ended-up being kind of a no-op - kernel skips the NMI trace in idling CPUs, which is the case of offline CPUs. Fix it by checking the NMI backtrace bit in the panic_print prior to the CPU disabling function. Link: https://lkml.kernel.org/r/20230226160838.414257-1-gpiccoli@igalia.com Fixes: 8d470a45d1a6 ("panic: add option to dump all CPUs backtraces in panic_print") Signed-off-by: Guilherme G. 
Piccoli Cc: Cc: Baoquan He Cc: Dave Young Cc: Feng Tang Cc: HATAYAMA Daisuke Cc: Hidehiro Kawai Cc: Kees Cook Cc: Michael Kelley Cc: Petr Mladek Cc: Vivek Goyal Signed-off-by: Andrew Morton --- kernel/panic.c | 44 ++++++++++++++++++++++++++------------------ 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/kernel/panic.c b/kernel/panic.c index 487f5b03bf83..5cfea8302d23 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -212,9 +212,6 @@ static void panic_print_sys_info(bool console_flush) return; } - if (panic_print & PANIC_PRINT_ALL_CPU_BT) - trigger_all_cpu_backtrace(); - if (panic_print & PANIC_PRINT_TASK_INFO) show_state(); @@ -244,6 +241,30 @@ void check_panic_on_warn(const char *origin) origin, limit); } +/* + * Helper that triggers the NMI backtrace (if set in panic_print) + * and then performs the secondary CPUs shutdown - we cannot have + * the NMI backtrace after the CPUs are off! + */ +static void panic_other_cpus_shutdown(bool crash_kexec) +{ + if (panic_print & PANIC_PRINT_ALL_CPU_BT) + trigger_all_cpu_backtrace(); + + /* + * Note that smp_send_stop() is the usual SMP shutdown function, + * which unfortunately may not be hardened to work in a panic + * situation. If we want to do crash dump after notifier calls + * and kmsg_dump, we will need architecture dependent extra + * bits in addition to stopping other CPUs, hence we rely on + * crash_smp_send_stop() for that. + */ + if (!crash_kexec) + smp_send_stop(); + else + crash_smp_send_stop(); +} + /** * panic - halt the system * @fmt: The text string to print @@ -334,23 +355,10 @@ void panic(const char *fmt, ...) * * Bypass the panic_cpu check and call __crash_kexec directly. */ - if (!_crash_kexec_post_notifiers) { + if (!_crash_kexec_post_notifiers) __crash_kexec(NULL); - /* - * Note smp_send_stop is the usual smp shutdown function, which - * unfortunately means it may not be hardened to work in a - * panic situation. - */ - smp_send_stop(); - } else { - /* - * If we want to do crash dump after notifier calls and - * kmsg_dump, we will need architecture dependent extra - * works in addition to stopping other CPUs. - */ - crash_smp_send_stop(); - } + panic_other_cpus_shutdown(_crash_kexec_post_notifiers); /* * Run any panic handlers, including those that might need to -- cgit v1.2.3 From 07db5e247ab5858439b14dd7cc1fe538b9efcf32 Mon Sep 17 00:00:00 2001 From: Dongliang Mu Date: Sun, 26 Feb 2023 20:49:47 +0800 Subject: fs: hfsplus: fix UAF issue in hfsplus_put_super The current hfsplus_put_super first calls hfs_btree_close on sbi->ext_tree, then invokes iput on sbi->hidden_dir, resulting in an use-after-free issue in hfsplus_release_folio. As shown in hfsplus_fill_super, the error handling code also calls iput before hfs_btree_close. To fix this error, we move all iput calls before hfsplus_btree_close. Note that this patch is tested on Syzbot. 
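The ordering matters because dropping the last inode reference can still reach the b-trees; condensed from the description above and the hunk below:

    /* old order in hfsplus_put_super(): */
    hfs_btree_close(sbi->ext_tree);     /* extent tree torn down ... */
    iput(sbi->hidden_dir);              /* ... but evicting this inode's folios
                                         * goes through hfsplus_release_folio(),
                                         * which still uses the closed tree
                                         * -> use-after-free */

    /* new order: drop the inodes first, then close the b-trees */
    iput(sbi->alloc_file);
    iput(sbi->hidden_dir);
    hfs_btree_close(sbi->attr_tree);
    hfs_btree_close(sbi->cat_tree);
    hfs_btree_close(sbi->ext_tree);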
Link: https://lkml.kernel.org/r/20230226124948.3175736-1-mudongliangabcd@gmail.com Reported-by: syzbot+57e3e98f7e3b80f64d56@syzkaller.appspotmail.com Tested-by: Dongliang Mu Signed-off-by: Dongliang Mu Cc: Bart Van Assche Cc: Jens Axboe Cc: Muchun Song Cc: Roman Gushchin Cc: "Theodore Ts'o" Cc: Signed-off-by: Andrew Morton --- fs/hfsplus/super.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c index 122ed89ebf9f..1986b4f18a90 100644 --- a/fs/hfsplus/super.c +++ b/fs/hfsplus/super.c @@ -295,11 +295,11 @@ static void hfsplus_put_super(struct super_block *sb) hfsplus_sync_fs(sb, 1); } + iput(sbi->alloc_file); + iput(sbi->hidden_dir); hfs_btree_close(sbi->attr_tree); hfs_btree_close(sbi->cat_tree); hfs_btree_close(sbi->ext_tree); - iput(sbi->alloc_file); - iput(sbi->hidden_dir); kfree(sbi->s_vhdr_buf); kfree(sbi->s_backup_vhdr_buf); unload_nls(sbi->nls); -- cgit v1.2.3 From 3e35102666f873a135d31a726ac1ec8af4905206 Mon Sep 17 00:00:00 2001 From: Andrew Morton Date: Sun, 26 Feb 2023 12:31:11 -0800 Subject: fs/cramfs/inode.c: initialize file_ra_state file_ra_state_init() assumes that the file_ra_state has been zeroed out. Fixes a KMSAN used-unintialized issue (at least). Fixes: cf948cbc35e80 ("cramfs: read_mapping_page() is synchronous") Reported-by: syzbot Link: https://lkml.kernel.org/r/0000000000008f74e905f56df987@google.com Cc: Matthew Wilcox Cc: Nicolas Pitre Cc: Signed-off-by: Andrew Morton --- fs/cramfs/inode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c index e3d168911dbe..006ef68d7ff6 100644 --- a/fs/cramfs/inode.c +++ b/fs/cramfs/inode.c @@ -183,7 +183,7 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset, unsigned int len) { struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping; - struct file_ra_state ra; + struct file_ra_state ra = {}; struct page *pages[BLKS_PER_BUF]; unsigned i, blocknr, buffer; unsigned long devsize; -- cgit v1.2.3 From 6a41de1eb2168c78079cc5e6bbbec6d1490c54ed Mon Sep 17 00:00:00 2001 From: Konrad Dybcio Date: Tue, 28 Feb 2023 16:33:35 +0100 Subject: mailmap: map Vikash Garodia's old address to his current one Vikash's old email is still picked up by the likes of get_maintainer.pl and keeps bouncing. Map it to his current one. Link: https://lkml.kernel.org/r/20230228153335.907164-3-konrad.dybcio@linaro.org Signed-off-by: Konrad Dybcio Cc: Vikash Garodia Signed-off-by: Andrew Morton --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index fb65947d671d..3366d9b63080 100644 --- a/.mailmap +++ b/.mailmap @@ -442,6 +442,7 @@ Vasily Averin Vasily Averin Vasily Averin Valentin Schneider +Vikash Garodia Vinod Koul Vinod Koul Vinod Koul -- cgit v1.2.3 From ecf1d926661bec4080a79c0ac9dbfe02b31702cf Mon Sep 17 00:00:00 2001 From: Konrad Dybcio Date: Tue, 28 Feb 2023 16:33:34 +0100 Subject: mailmap: map Dikshita Agarwal's old address to his current one Dikshita's old email is still picked up by the likes of get_maintainer.pl and keeps bouncing. Map it to his current one. 
Link: https://lkml.kernel.org/r/20230228153335.907164-2-konrad.dybcio@linaro.org Signed-off-by: Konrad Dybcio Cc: Dikshita Agarwal Signed-off-by: Andrew Morton --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 3366d9b63080..5367faaf7831 100644 --- a/.mailmap +++ b/.mailmap @@ -121,6 +121,7 @@ Dengcheng Zhu Dengcheng Zhu Dengcheng Zhu +Dikshita Agarwal Dmitry Baryshkov Dmitry Baryshkov <[dbaryshkov@gmail.com]> Dmitry Baryshkov -- cgit v1.2.3