author     Heiko Carstens <hca@linux.ibm.com>    2023-10-24 10:15:19 +0200
committer  Vasily Gorbik <gor@linux.ibm.com>     2023-10-25 15:08:29 +0200
commit     44d93045247661acbd50b1629e62f415f2747577 (patch)
tree       834dccf7549d4ad42e22b4a512b902e4f4ee5243 /arch/s390/mm
parent     e3f4170ccf20ebe2985b8e166184dd2d54220b60 (diff)
s390/cmma: fix detection of DAT pages
If the cmma no-dat feature is available, the kernel page tables are walked
to identify and mark all pages which are used for address translation (all
region, segment, and page tables). In a subsequent loop, all other pages are
marked as "no-dat" pages with the ESSA instruction.
This information is visible to the hypervisor, which can use it to optimize
the purging of guest TLB entries. The initial loop, however, is incorrect:
only the first three of the four pages which belong to a segment or region
table are marked as being used for DAT. The last page is incorrectly marked
as no-dat.
This can result in incorrect guest TLB flushes.
Fix this by simply marking all four pages.
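For context, a region or segment table as allocated by the s390 kernel is 16KB
(2048 entries of 8 bytes each), so it spans four 4KB pages and the marking
loop has to cover indices 0..3. A minimal sketch of the corrected step,
modelled on the mark_kernel_pud() hunk in the diff below (the enclosing
page-table walk and declarations are elided):

	/* A 16KB region/segment table occupies four consecutive 4KB pages;
	 * flag all of them as DAT pages, not just the first three. */
	if (!pud_folded(*pud)) {
		page = phys_to_page(pud_val(*pud));
		for (i = 0; i < 4; i++)
			set_bit(PG_arch_1, &page[i].flags);
	}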
Cc: <stable@vger.kernel.org>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Diffstat (limited to 'arch/s390/mm')
-rw-r--r--  arch/s390/mm/page-states.c | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c
index c3db3c0e75c0..20c0b160efee 100644
--- a/arch/s390/mm/page-states.c
+++ b/arch/s390/mm/page-states.c
@@ -121,7 +121,7 @@ static void mark_kernel_pud(p4d_t *p4d, unsigned long addr, unsigned long end)
 			continue;
 		if (!pud_folded(*pud)) {
 			page = phys_to_page(pud_val(*pud));
-			for (i = 0; i < 3; i++)
+			for (i = 0; i < 4; i++)
 				set_bit(PG_arch_1, &page[i].flags);
 		}
 		mark_kernel_pmd(pud, addr, next);
@@ -142,7 +142,7 @@ static void mark_kernel_p4d(pgd_t *pgd, unsigned long addr, unsigned long end)
 			continue;
 		if (!p4d_folded(*p4d)) {
 			page = phys_to_page(p4d_val(*p4d));
-			for (i = 0; i < 3; i++)
+			for (i = 0; i < 4; i++)
 				set_bit(PG_arch_1, &page[i].flags);
 		}
 		mark_kernel_pud(p4d, addr, next);
@@ -171,7 +171,7 @@ static void mark_kernel_pgd(void)
 			continue;
 		if (!pgd_folded(*pgd)) {
 			page = phys_to_page(pgd_val(*pgd));
-			for (i = 0; i < 3; i++)
+			for (i = 0; i < 4; i++)
 				set_bit(PG_arch_1, &page[i].flags);
 		}
 		mark_kernel_p4d(pgd, addr, next);