author    | Heiko Carstens <heiko.carstens@de.ibm.com>      | 2013-03-04 14:14:11 +0100
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2013-03-20 12:58:53 -0700
commit    | 2932ef21c24f5f248b869a92c1604e531750df17 (patch)
tree      | 87a77705d6cec9c24c15efa0eecacc4662bf80e4 /arch
parent    | 68e0bbe8b7781877de7dc96d620a4ce6af8807f9 (diff)
download  | linux-stable-2932ef21c24f5f248b869a92c1604e531750df17.tar.gz
          | linux-stable-2932ef21c24f5f248b869a92c1604e531750df17.tar.bz2
          | linux-stable-2932ef21c24f5f248b869a92c1604e531750df17.zip
s390/mm: fix flush_tlb_kernel_range()
commit f6a70a07079518280022286a1dceb797d12e1edf upstream.
Our flush_tlb_kernel_range() implementation calls __tlb_flush_mm() with
&init_mm as its argument. __tlb_flush_mm(), however, only flushes TLBs
for the passed-in mm if that mm's mm_cpumask is not empty.
The mm_cpumask of init_mm never has any bits set, which in turn means
that our flush_tlb_kernel_range() implementation doesn't work at all.
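For illustration, the broken code path has roughly the following shape; this is a simplified sketch pieced together from the description above and the diff below, not verbatim kernel source:

    /* Sketch: flush_tlb_kernel_range() delegates to __tlb_flush_mm(&init_mm). */
    static inline void __tlb_flush_mm(struct mm_struct *mm)
    {
            /* init_mm never has bits set in its mm_cpumask, so for
             * &init_mm this early return always fires and no TLB
             * entries are ever flushed. */
            if (unlikely(cpumask_empty(mm_cpumask(mm))))
                    return;
            /* ... perform the actual flush (per mm via IDTE or a full flush) ... */
    }

    static inline void flush_tlb_kernel_range(unsigned long start,
                                              unsigned long end)
    {
            __tlb_flush_mm(&init_mm);       /* silently does nothing */
    }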
This can be easily verified with a vmalloc/vfree loop which allocates
a page, writes to it and then frees the page again. A crash will follow
almost instantly.
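A reproducer along those lines could look like the following; this is a hypothetical test module written only for illustration (the function name and loop count are made up), not part of the patch:

    #include <linux/module.h>
    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /* Allocate a page with vmalloc(), write to it, free it again.
     * With flush_tlb_kernel_range() being a no-op, stale kernel TLB
     * entries survive the vfree() and the loop crashes almost
     * instantly. */
    static int __init tlbflush_test_init(void)
    {
            int i;

            for (i = 0; i < 100000; i++) {
                    char *p = vmalloc(PAGE_SIZE);

                    if (!p)
                            return -ENOMEM;
                    *p = 0x55;      /* touch the freshly mapped page */
                    vfree(p);       /* unmap again */
            }
            return 0;
    }
    module_init(tlbflush_test_init);
    MODULE_LICENSE("GPL");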
To fix this, remove the cpumask_empty() check from __tlb_flush_mm(),
since there shouldn't be too many mms with an empty mm_cpumask, besides
the init_mm of course.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/s390/include/asm/tlbflush.h | 2 |
1 file changed, 0 insertions, 2 deletions
diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
index b7a4f2eb0057..d7862adc434f 100644
--- a/arch/s390/include/asm/tlbflush.h
+++ b/arch/s390/include/asm/tlbflush.h
@@ -73,8 +73,6 @@ static inline void __tlb_flush_idte(unsigned long asce)
 static inline void __tlb_flush_mm(struct mm_struct * mm)
 {
-	if (unlikely(cpumask_empty(mm_cpumask(mm))))
-		return;
 	/*
 	 * If the machine has IDTE we prefer to do a per mm flush
 	 * on all cpus instead of doing a local flush if the mm