author		Rik van Riel <riel@surriel.com>	2018-07-16 15:03:32 -0400
committer	Ingo Molnar <mingo@kernel.org>	2018-07-17 09:35:31 +0200
commit		2ff6ddf19c0ec40633bd14d8fe28a289816bd98d (patch)
tree		e608a4aa5331e3fcd5a1b00a6c65de41b6563eb5 /arch/x86/include/asm/tlbflush.h
parent		c1a2f7f0c06454387c2cd7b93ff1491c715a8c69 (diff)
x86/mm/tlb: Leave lazy TLB mode at page table free time
Andy discovered that speculative memory accesses while in lazy
TLB mode can crash a system: the CPU may speculatively
dereference memory contents that used to be a valid page
table, but have since been reused for something else and now
point into la-la land.
This problem can be prevented in two ways. The first is to
always send a TLB shootdown IPI to CPUs in lazy TLB mode, while
the second is to only send the TLB shootdown at page table
freeing time.
The second approach should result in fewer IPIs, since
operations like mprotect and madvise are very common with some
workloads, but do not involve page table freeing. Also, on
munmap, batching of page table freeing covers much larger
ranges of virtual memory than the batching of unmapped user
pages.
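As a rough sketch of the second approach (an illustration, not
part of this patch, which only adds the header declarations
below): the generic page-table-freeing path could invoke the
new arch hook just before a batch of page tables is returned to
the allocator. The tlb_table_flush() call site shown here is an
assumption; only tlb_flush_remove_tables() itself is declared
by this change.

	/*
	 * Hypothetical call site: flush lazy-TLB CPUs before the
	 * batched page table pages are actually freed, so no CPU
	 * can speculatively walk through them afterwards.
	 */
	static void tlb_table_flush(struct mmu_gather *tlb)
	{
		struct mmu_table_batch **batch = &tlb->batch;

		tlb_flush_remove_tables(tlb->mm);

		if (*batch) {
			call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
			*batch = NULL;
		}
	}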
Tested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: efault@gmx.de
Cc: kernel-team@fb.com
Cc: luto@kernel.org
Link: http://lkml.kernel.org/r/20180716190337.26133-3-riel@surriel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/include/asm/tlbflush.h')
 arch/x86/include/asm/tlbflush.h | 5 +++++
 1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 6690cd3fc8b1..3aa3204b5dc0 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -554,4 +554,9 @@ extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 	native_flush_tlb_others(mask, info)
 #endif
 
+extern void tlb_flush_remove_tables(struct mm_struct *mm);
+extern void tlb_flush_remove_tables_local(void *arg);
+
+#define HAVE_TLB_FLUSH_REMOVE_TABLES
+
 #endif /* _ASM_X86_TLBFLUSH_H */
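The header diff above only declares the two hooks. For context,
here is a sketch of what the x86 side might look like, based on
the commit description; the bodies below are illustrative
assumptions, not part of this diff, and would live in
arch/x86/mm/tlb.c.

	/*
	 * Illustrative sketch: runs on each targeted CPU.  If the CPU
	 * is in lazy TLB mode for this mm, switch it to init_mm so it
	 * can no longer speculatively walk the soon-to-be-freed page
	 * tables.
	 */
	void tlb_flush_remove_tables_local(void *arg)
	{
		struct mm_struct *mm = arg;

		if (this_cpu_read(cpu_tlbstate.loaded_mm) == mm &&
		    this_cpu_read(cpu_tlbstate.is_lazy))
			switch_mm_irqs_off(NULL, &init_mm, NULL);
	}

	/*
	 * Illustrative sketch: called at page table freeing time; IPI
	 * every other CPU that has this mm in its cpumask before the
	 * page table pages go back to the allocator.
	 */
	void tlb_flush_remove_tables(struct mm_struct *mm)
	{
		int cpu = get_cpu();

		if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
			smp_call_function_many(mm_cpumask(mm),
					       tlb_flush_remove_tables_local,
					       (void *)mm, 1);
		put_cpu();
	}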