author     Peter Zijlstra <peterz@infradead.org>    2018-08-31 14:46:08 +0200
committer  Ingo Molnar <mingo@kernel.org>           2019-04-03 10:32:40 +0200
commit     ed6a79352cad00e9a49d6e438be40e45107207bf
tree       a1ed733ba7eacb57d93e6bb825a24b63769a11c9 /mm/mmu_gather.c
parent     dea2434c23c102b3e7d320849ec1cfeb432edb60
asm-generic/tlb, arch: Provide CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
Move the mmu_gather::page_size handling into the generic code instead of
the PowerPC-specific bits.
No change in behavior intended.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
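
For orientation, a minimal sketch of what the new symbol is assumed to guard on the
generic side: an architecture that removes pages of more than one size selects
CONFIG_HAVE_MMU_GATHER_PAGE_SIZE, and only then does the generic struct mmu_gather
carry a page_size field. Only that field and the config symbol come from this patch;
the surrounding members are illustrative.

/*
 * Hedged sketch, not part of the patch: assumed shape of the generic
 * mmu_gather once page_size tracking is made conditional.
 */
struct mmu_gather {
	struct mm_struct	*mm;
	/* ... batching and range-tracking state elided ... */
#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
	unsigned int		page_size;	/* size of the pages queued so far */
#endif
	unsigned long		start;
	unsigned long		end;
};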
Diffstat (limited to 'mm/mmu_gather.c')
 mm/mmu_gather.c | 5 +++++
 1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index f2f03c655807..14dfc97155e4 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -58,7 +58,9 @@ void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	tlb->page_size = 0;
+#endif
 
 	__tlb_reset_range(tlb);
 }
@@ -121,7 +123,10 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 	struct mmu_gather_batch *batch;
 
 	VM_BUG_ON(!tlb->end);
+
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	VM_WARN_ON(tlb->page_size != page_size);
+#endif
 
 	batch = tlb->active;
 	/*
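
The warning guarded in __tlb_remove_page_size() only fires when a caller queues a page
whose size differs from the size the gather has already recorded. As a hedged reminder
of how that argument gets there (recalled from asm-generic/tlb.h of this era, not taken
from this patch), the callers look roughly like:

/*
 * Hedged sketch of the asm-generic/tlb.h helpers as recalled, not part of
 * this patch: tlb_remove_page() is the PAGE_SIZE special case of
 * tlb_remove_page_size(), which forwards the caller's page size into
 * __tlb_remove_page_size(), where the VM_WARN_ON() above sits.
 */
static inline void tlb_remove_page_size(struct mmu_gather *tlb,
					struct page *page, int page_size)
{
	if (__tlb_remove_page_size(tlb, page, page_size))
		tlb_flush_mmu(tlb);	/* batch full: flush the TLB and free the pages */
}

static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
{
	tlb_remove_page_size(tlb, page, PAGE_SIZE);
}

With the option unset, the generic gather path compiles out both the page_size field and
the warning, so the page_size argument is effectively unused there.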