author     Peter Zijlstra <peterz@infradead.org>    2018-09-04 17:04:07 +0200
committer  Ingo Molnar <mingo@kernel.org>           2019-04-03 10:32:54 +0200
commit     6137fed0823247e32306bde2b48cac627c24f894 (patch)
tree       73fe1d4bac3022a68e12f0798b1f21d9b4af6d5f /arch/c6x/Kconfig
parent     7bb8709d6ad3ceeb5010a98b0d7eb11db8836da1 (diff)
arch/tlb: Clean up simple architectures
For the architectures that do not implement their own tlb_flush() but
do already use the generic mmu_gather, there are two options:
1) the platform has an efficient flush_tlb_range() and
asm-generic/tlb.h doesn't need any overrides at all.
2) the platform lacks an efficient flush_tlb_range() and
we select MMU_GATHER_NO_RANGE to minimize full invalidates.
Convert all 'simple' architectures to one of these two forms.
alpha: has no range invalidate -> 2
arc: already used flush_tlb_range() -> 1
c6x: has no range invalidate -> 2
hexagon: has an efficient flush_tlb_range() -> 1
(flush_tlb_mm() is in fact a full range invalidate,
so no need to shoot down everything)
m68k: has inefficient flush_tlb_range() -> 2
microblaze: has no flush_tlb_range() -> 2
mips: has efficient flush_tlb_range() -> 1
(even though it currently seems to use flush_tlb_mm())
nds32: already uses flush_tlb_range() -> 1
nios2: has inefficient flush_tlb_range() -> 2
(no limit on range iteration)
openrisc: has inefficient flush_tlb_range() -> 2
(no limit on range iteration)
parisc: already uses flush_tlb_range() -> 1
sparc32: already uses flush_tlb_range() -> 1
unicore32: has inefficient flush_tlb_range() -> 2
(no limit on range iteration)
xtensa: has efficient flush_tlb_range() -> 1
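To make the two forms above concrete, the following is a minimal, self-contained C sketch of the flush-time behaviour each form amounts to. It is not the actual asm-generic/tlb.h code: the struct fields and helpers are simplified stand-ins, and flush_tlb_range() is shown taking an mm rather than a vma for brevity.

/*
 * Simplified stand-ins; NOT the real asm-generic/tlb.h interfaces.
 */
#include <stdbool.h>
#include <stdio.h>

struct mm;                        /* stand-in for struct mm_struct */

struct mmu_gather {
	struct mm *mm;
	unsigned long start, end; /* virtual range gathered so far */
	bool fullmm;              /* tearing down the entire mm?   */
};

static void flush_tlb_mm(struct mm *mm)
{
	printf("full mm invalidate\n");
}

/* The real flush_tlb_range() takes a vma; simplified here. */
static void flush_tlb_range(struct mm *mm, unsigned long start,
			    unsigned long end)
{
	printf("range invalidate [%#lx, %#lx)\n", start, end);
}

/* Form 2 (MMU_GATHER_NO_RANGE): no efficient range flush, so any
 * non-empty gather falls back to a full mm invalidate. */
static void tlb_flush_form2(struct mmu_gather *tlb)
{
	if (tlb->end)
		flush_tlb_mm(tlb->mm);
}

/* Form 1: the architecture's flush_tlb_range() is efficient, so only
 * the gathered range is invalidated unless the whole mm goes away. */
static void tlb_flush_form1(struct mmu_gather *tlb)
{
	if (tlb->fullmm)
		flush_tlb_mm(tlb->mm);
	else if (tlb->end)
		flush_tlb_range(tlb->mm, tlb->start, tlb->end);
}

int main(void)
{
	struct mmu_gather tlb = {
		.mm = 0, .start = 0x1000, .end = 0x3000, .fullmm = false,
	};

	tlb_flush_form1(&tlb); /* -> range invalidate [0x1000, 0x3000) */
	tlb_flush_form2(&tlb); /* -> full mm invalidate                */
	return 0;
}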
Note this also fixes a bug in the existing code for a number of
platforms. Those platforms that did:
tlb_end_vma() -> if (!full_mm) flush_tlb_*()
tlb_flush -> if (full_mm) flush_tlb_mm()
missed the case of shift_arg_pages(), which doesn't have @fullmm set,
nor calls into tlb_*vma(), but still frees page-tables and thus needs
an invalidate. The new code handles this by detecting a non-empty
range, and either issuing the matching range invalidate or a full
invalidate, depending on the capabilities.
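The difference can be illustrated with a small self-contained sketch. The struct, fields and helper names below are hypothetical stand-ins that only model the flush decision described above; they are not real kernel interfaces.

#include <stdbool.h>
#include <stdio.h>

struct gather_state {
	bool fullmm;              /* whole-mm teardown?                     */
	bool ran_tlb_end_vma;     /* did the unmap path call tlb_end_vma()? */
	unsigned long start, end; /* range gathered while freeing tables    */
};

/* Old per-arch pattern: invalidate only from tlb_end_vma() (when not a
 * full-mm teardown) or from tlb_flush() (when it is). */
static bool old_pattern_flushes(const struct gather_state *s)
{
	return (s->ran_tlb_end_vma && !s->fullmm) || s->fullmm;
}

/* New generic pattern: a non-empty gathered range always triggers an
 * invalidate, either ranged or full depending on the architecture. */
static bool new_pattern_flushes(const struct gather_state *s)
{
	return s->end != 0;
}

int main(void)
{
	/* shift_arg_pages(): not fullmm, never goes through tlb_*vma(),
	 * but page tables were freed so the gathered range is non-empty. */
	struct gather_state shift_arg_pages_case = {
		.fullmm = false, .ran_tlb_end_vma = false,
		.start = 0x1000, .end = 0x2000,
	};

	printf("old pattern flushes: %d\n",
	       old_pattern_flushes(&shift_arg_pages_case)); /* 0: the bug */
	printf("new pattern flushes: %d\n",
	       new_pattern_flushes(&shift_arg_pages_case)); /* 1: fixed   */
	return 0;
}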
No change in behavior intended.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/c6x/Kconfig')
-rw-r--r--   arch/c6x/Kconfig   1
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/c6x/Kconfig b/arch/c6x/Kconfig
index e5cd3c5f8399..3bb75e674161 100644
--- a/arch/c6x/Kconfig
+++ b/arch/c6x/Kconfig
@@ -20,6 +20,7 @@ config C6X
 	select GENERIC_CLOCKEVENTS
 	select MODULES_USE_ELF_RELA
 	select ARCH_NO_COHERENT_DMA_MMAP
+	select MMU_GATHER_NO_RANGE if MMU
 
 config MMU
 	def_bool n