author    Rick Edgecombe <rick.p.edgecombe@intel.com>    2023-06-12 17:10:43 -0700
committer Rick Edgecombe <rick.p.edgecombe@intel.com>    2023-07-11 14:12:19 -0700
commit    e5136e876581ba5b63220378e25fec9dcec7bad1 (patch)
tree      2a57cbd94d820e586de9735f602bdff2af5c2bd7 /mm
parent    0266e7c53647fbc18be2d0da98d5c9e92922d866 (diff)
mm: Warn on shadow stack memory in wrong vma
The x86 Control-flow Enforcement Technology (CET) feature includes a new type of memory called shadow stack. This shadow stack memory has some unusual properties, which require some core mm changes to function properly.

One sharp edge is that PTEs that are both Write=0 and Dirty=1 are treated as shadow stack by the CPU, but this combination used to be created by the kernel on x86. Previous patches have changed the kernel to avoid creating these PTEs unless they are for shadow stack memory. In case any missed corners of the kernel are still creating PTEs like this for non-shadow stack memory, and to catch any re-introductions of the logic, warn if any shadow stack PTEs (Write=0, Dirty=1) are found in non-shadow stack VMAs when they are being zapped. This won't catch transient cases but should have decent coverage.

In order to check if a PTE is shadow stack in core mm code, add two arch breakouts, arch_check_zapped_pte/pmd(). This allows the shadow stack specific code to be kept in arch/x86.

Only do the check if shadow stack is supported by the CPU and configured, because in rare cases older CPUs may write Dirty=1 to a Write=0 PTE. This check is handled in pte_shstk()/pmd_shstk().

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: John Allen <john.allen@amd.com>
Tested-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/all/20230613001108.3040476-18-rick.p.edgecombe%40intel.com
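The pte_shstk()/pmd_shstk() helpers named above live on the arch/x86 side, so they are not visible in this mm-limited diff. As a rough sketch of the idea, assuming the existing x86 paging helpers and flags (cpu_feature_enabled(), pte_flags()/pmd_flags(), X86_FEATURE_SHSTK, _PAGE_RW/_PAGE_DIRTY/_PAGE_PSE), a shadow stack entry is one that is Write=0 with Dirty=1, and the test only fires when the CPU actually supports shadow stacks:

/*
 * Sketch only: roughly how the x86 helpers could identify a shadow
 * stack entry. Not part of this mm-limited diff; the helper and flag
 * names above are assumed from existing x86 paging code.
 */
static inline bool pte_shstk(pte_t pte)
{
	/*
	 * Shadow stack PTE: Write=0 (RW clear) and Dirty=1, and only
	 * meaningful when the CPU supports shadow stacks.
	 */
	return cpu_feature_enabled(X86_FEATURE_SHSTK) &&
	       (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
}

static inline bool pmd_shstk(pmd_t pmd)
{
	/* Same test for a huge (PSE) mapping. */
	return cpu_feature_enabled(X86_FEATURE_SHSTK) &&
	       (pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY | _PAGE_PSE)) ==
	       (_PAGE_DIRTY | _PAGE_PSE);
}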
Diffstat (limited to 'mm')
 mm/huge_memory.c | 1 +
 mm/memory.c      | 1 +
 2 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 23c2aa612926..554f6f82d225 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1681,6 +1681,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
*/
orig_pmd = pmdp_huge_get_and_clear_full(vma, addr, pmd,
tlb->fullmm);
+ arch_check_zapped_pmd(vma, orig_pmd);
tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
if (vma_is_special_huge(vma)) {
if (arch_needs_pgtable_deposit())
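The arch_check_zapped_pmd() call added above (and arch_check_zapped_pte() in the memory.c hunk below) is implemented under arch/x86, outside this diffstat. A sketch of what the x86 side of the warning might look like, assuming the pte_shstk()/pmd_shstk() helpers sketched earlier plus the VM_SHADOW_STACK vma flag and VM_WARN_ON_ONCE():

/*
 * Sketch only: a plausible x86 implementation of the hooks, not part
 * of this mm diff.
 */
void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte)
{
	/*
	 * Hardware without shadow stack support could (rarely) set
	 * Dirty=1 on a Write=0 PTE, so this only indicates a software
	 * bug when the CPU supports shadow stack; pte_shstk() covers
	 * that gating.
	 */
	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
			pte_shstk(pte));
}

void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd)
{
	/* Same check for huge PMDs. */
	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
			pmd_shstk(pmd));
}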
diff --git a/mm/memory.c b/mm/memory.c
index f093c73512c5..36289f33327e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1430,6 +1430,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
continue;
ptent = ptep_get_and_clear_full(mm, addr, pte,
tlb->fullmm);
+ arch_check_zapped_pte(vma, ptent);
tlb_remove_tlb_entry(tlb, pte, addr);
zap_install_uffd_wp_if_needed(vma, addr, pte, details,
ptent);
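Architectures other than x86 still need definitions for these hooks so the core mm call sites above compile everywhere. A plausible shape for the generic fallbacks, following the common override pattern in include/linux/pgtable.h (the __HAVE_ARCH_CHECK_ZAPPED_* guard names are assumptions, not shown in this diff):

/*
 * Sketch only: generic no-op fallbacks so non-x86 architectures keep
 * building; an architecture that wants the check defines the guard
 * and supplies its own implementation, as x86 does for shadow stack.
 */
#ifndef __HAVE_ARCH_CHECK_ZAPPED_PTE
static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
					 pte_t pte)
{
}
#endif

#ifndef __HAVE_ARCH_CHECK_ZAPPED_PMD
static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
					 pmd_t pmd)
{
}
#endif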