From 1cf35d47712dd5dc4d62c6ce984f04ac6eab0408 Mon Sep 17 00:00:00 2001
From: Linus Torvalds
Date: Fri, 25 Apr 2014 16:05:40 -0700
Subject: mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts

The mmu-gather operation 'tlb_flush_mmu()' has done two things: the
actual tlb flush operation, and the batched freeing of the pages that
the TLB entries pointed at.

This splits the operation into separate phases, so that the forced
batched flushing done by zap_pte_range() can now do the actual TLB
flush while still holding the page table lock, but delay the batched
freeing of all the pages to after the lock has been dropped.

This in turn allows us to avoid a race condition between
set_page_dirty() (as called by zap_pte_range() when it finds a dirty
shared memory pte) and page_mkclean(): because we now flush all the
dirty page data from the TLBs while holding the pte lock,
page_mkclean() will be held up walking the (recently cleaned) page
tables until after the TLB entries have been flushed from all CPUs.

Reported-by: Benjamin Herrenschmidt
Tested-by: Dave Hansen
Acked-by: Hugh Dickins
Cc: Peter Zijlstra
Cc: Russell King - ARM Linux
Cc: Tony Luck
Signed-off-by: Linus Torvalds
---
 arch/sh/include/asm/tlb.h | 8 ++++++++
 1 file changed, 8 insertions(+)

(limited to 'arch/sh')

diff --git a/arch/sh/include/asm/tlb.h b/arch/sh/include/asm/tlb.h
index 362192ed12fe..62f80d2a9df9 100644
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -86,6 +86,14 @@ tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 	}
 }
 
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+{
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+}
+
 static inline void tlb_flush_mmu(struct mmu_gather *tlb)
 {
 }
-- 
cgit v1.2.3
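
For reference, a minimal sketch of the two-phase calling pattern this
split enables in zap_pte_range(). This paraphrases the mm/memory.c side
of the change (not shown in the arch/sh view above); the force_flush
flag and the locking calls around it are simplified, not verbatim
kernel code:

	/*
	 * Sketch only: inside zap_pte_range(), after the pte scan has
	 * batched up pages and called set_page_dirty() on dirty ptes.
	 */
	if (force_flush) {
		/*
		 * Phase 1: flush stale TLB entries while the pte lock
		 * is still held, so page_mkclean() on another CPU
		 * cannot walk the (cleaned) page tables while some TLB
		 * still holds a dirty entry.
		 */
		tlb_flush_mmu_tlbonly(tlb);
	}

	pte_unmap_unlock(start_pte, ptl);

	if (force_flush) {
		/*
		 * Phase 2: with the lock dropped, do the batched
		 * freeing of the pages the flushed TLB entries
		 * pointed at.
		 */
		tlb_flush_mmu_free(tlb);
	}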