author     Alex Shi <alex.shi@intel.com>       2012-06-28 09:02:19 +0800
committer  H. Peter Anvin <hpa@zytor.com>      2012-06-27 19:29:10 -0700
commit     c4211f42d3e66875298a5e26a75109878c80f15b (patch)
tree       5f4db23b52be8eb74f95c35621373df790eacdd2 /include/asm-generic/tlb.h
parent     d8dfe60d6dcad5989c4558b753b98d657e2813c0 (diff)
x86/tlb: add tlb_flushall_shift for specific CPU
Testing shows that different CPU types (micro-architectures and NUMA modes) have different balance points between a full TLB flush and multiple INVLPG instructions, and there are also cases where the TLB flush change does not help at all. This patch adds an interface that lets x86 vendor developers set a different shift for each CPU type. For example, on machines at hand, the balance point is 16 entries on Romley-EP, 8 entries on Bloomfield NHM-EP, and 256 on an IVB mobile CPU, while on a model 15 Core2 Xeon, using INVLPG does not help at all. For untested machines, use a conservative setting, the same as for NHM CPUs.

Signed-off-by: Alex Shi <alex.shi@intel.com>
Link: http://lkml.kernel.org/r/1340845344-27557-5-git-send-email-alex.shi@intel.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
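To illustrate the heuristic this tunable enables, here is a hedged userspace sketch, not the kernel's actual implementation: should_flush_all(), the entry count, and the sample numbers are assumptions modeled loosely on the range-flush logic the series adds under arch/x86/mm/.

/*
 * Hedged sketch: decide between one full TLB flush and a per-page
 * INVLPG loop using a per-CPU "flush-all shift". All names and
 * constants here are illustrative, not the kernel's real code.
 */
#include <stdio.h>

#define PAGE_SHIFT 12	/* 4 KiB pages */

/* A shift of -1 means "INVLPG never wins; always flush everything". */
static int should_flush_all(long tlb_entries, int flushall_shift,
			    unsigned long start, unsigned long end)
{
	if (flushall_shift == -1)
		return 1;
	/* Flush everything once the range exceeds entries >> shift pages. */
	return ((end - start) >> PAGE_SHIFT) > (tlb_entries >> flushall_shift);
}

int main(void)
{
	unsigned long start = 0x400000;
	unsigned long small = start + (16UL << PAGE_SHIFT);	/* 16 pages */
	unsigned long big   = start + (64UL << PAGE_SHIFT);	/* 64 pages */

	/* 512 TLB entries, shift 4: switch-over at 512 >> 4 = 32 pages */
	printf("16 pages -> %s\n",
	       should_flush_all(512, 4, start, small) ? "flush all" : "invlpg loop");
	printf("64 pages -> %s\n",
	       should_flush_all(512, 4, start, big) ? "flush all" : "invlpg loop");
	return 0;
}

A smaller shift raises the switch-over point (entries >> shift shrinks more slowly), so vendors can bias a CPU toward INVLPG or toward full flushes per micro-architecture.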
Diffstat (limited to 'include/asm-generic/tlb.h')
-rw-r--r--  include/asm-generic/tlb.h  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index f96a5b58a975..75e888b3cfd2 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -113,7 +113,8 @@ static inline int tlb_fast_mode(struct mmu_gather *tlb)
 void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm);
 void tlb_flush_mmu(struct mmu_gather *tlb);
-void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end);
+void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start,
+			unsigned long end);
 int __tlb_remove_page(struct mmu_gather *tlb, struct page *page);
 /* tlb_remove_page
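For context, here is a minimal sketch of how callers typically drive the mmu_gather API whose prototypes appear in this hunk. unmap_one_page() is a hypothetical stand-in for real page-table walking; the flush-on-full-batch pattern mirrors tlb_remove_page().

/*
 * Hedged sketch, not kernel code: the usual mmu_gather calling
 * pattern implied by the prototypes above.
 */
static void example_unmap_range(struct mm_struct *mm,
				unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	unsigned long addr;

	/* false: tearing down a sub-range, not the whole address space */
	tlb_gather_mmu(&tlb, mm, false);

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		struct page *page = unmap_one_page(mm, addr);	/* hypothetical */

		/* __tlb_remove_page() returns 0 once the batch is full */
		if (page && !__tlb_remove_page(&tlb, page))
			tlb_flush_mmu(&tlb);	/* flush TLBs, free batched pages */
	}

	/* final flush for [start, end) and release of the gather state */
	tlb_finish_mmu(&tlb, start, end);
}

Passing the start/end range into tlb_finish_mmu(), as reflowed in this patch's signature, is what lets the architecture decide between flushing the whole TLB and invalidating just the affected pages.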