author    Christian Borntraeger <borntraeger@de.ibm.com>  2017-04-09 22:09:38 +0200
committer Ben Hutchings <ben@decadent.org.uk>  2017-07-18 18:40:34 +0100
commit    f0e51b233eae4a86e143136e770e2934dac0ca18
tree      ab3c63fe2f69652fcb1f0b7d644dd52d21784c3d
parent    ac27a8bff23b37444c3489829dba7f0e11bbad1f
s390/mm: fix CMMA vs KSM vs others
commit a8f60d1fadf7b8b54449fcc9d6b15248917478ba upstream.

On heavy paging with KSM I see guest data corruption. It turns out that KSM will add pages to its tree for which the mapping returns true for pte_unused (or which might become as such later). KSM will unmap such pages and reinstantiate them with different attributes (e.g. write-protected or special, e.g. in replace_page or write_protect_page). This uncovered a bug in our pagetable handling: we must remove the unused flag as soon as an entry becomes present again.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'arch')
-rw-r--r--  arch/s390/include/asm/pgtable.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 8904e1282562..589f9c65416a 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -868,6 +868,8 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 {
 	pgste_t pgste;
 
+	if (pte_present(entry))
+		pte_val(entry) &= ~_PAGE_UNUSED;
 	if (mm_has_pgste(mm)) {
 		pgste = pgste_get_lock(ptep);
 		pgste_val(pgste) &= ~_PGSTE_GPS_ZERO;