author    Thomas Hellstrom <thellstrom@vmware.com>  2020-03-04 12:45:26 +0100
committer Borislav Petkov <bp@suse.de>              2020-03-17 11:48:31 +0100
commit    6db73f17c5f155dbcfd5e48e621c706270b84df0 (patch)
tree      934dd7d64e9164d0e30edbd6bd31963b5232ed70 /arch/x86/include/asm/inat.h
parent    6a9feaa8774f3b8210dfe40626a75ca047e4ecae (diff)
x86: Don't let pgprot_modify() change the page encryption bit
When SEV or SME is enabled and active, vm_get_page_prot() typically returns with the encryption bit set. This means that users of pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up with a value of vma->vm_page_prot that is not consistent with the intended protection of the PTEs.

This is also important for fault handlers that rely on the VMA vm_page_prot to set the page protection. Fix this by not allowing pgprot_modify() to change the encryption bit, similar to how it's done for PAT bits.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20200304114527.3636-2-thomas_os@shipmail.org
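The sketch below is a standalone, userspace illustration of the idea the commit message describes: a pgprot_modify()-style helper that carries a fixed set of "change-protected" bits over from the old protection value instead of taking them from the new one. The bit positions and mask names here (PAGE_BIT_ENC, PAGE_CHG_MASK, etc.) are made up for the example and are not the real kernel definitions; the actual patch works with the kernel's _PAGE_ENC and _PAGE_CHG_MASK in the x86 pgtable headers.

/*
 * Minimal sketch, not kernel code: model how preserving an
 * "encryption" bit across pgprot_modify() would behave.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pgprotval_t;

#define PAGE_BIT_RW   (1ULL << 1)   /* illustrative read/write bit   */
#define PAGE_BIT_PAT  (1ULL << 7)   /* illustrative PAT bit          */
#define PAGE_BIT_ENC  (1ULL << 51)  /* illustrative encryption bit   */

/* Bits that must be carried over from the old protection value. */
#define PAGE_CHG_MASK (PAGE_BIT_PAT | PAGE_BIT_ENC)

static pgprotval_t pgprot_modify(pgprotval_t oldprot, pgprotval_t newprot)
{
	pgprotval_t preserve = oldprot & PAGE_CHG_MASK;   /* keep PAT + enc */
	pgprotval_t addbits  = newprot & ~PAGE_CHG_MASK;  /* everything else */

	return preserve | addbits;
}

int main(void)
{
	/* Old protection has the encryption bit set (SEV/SME active). */
	pgprotval_t oldp = PAGE_BIT_RW | PAGE_BIT_ENC;
	/* New protection was computed without the encryption bit. */
	pgprotval_t newp = PAGE_BIT_RW;

	pgprotval_t result = pgprot_modify(oldp, newp);

	/* The encryption bit survives even though 'newp' did not carry it. */
	printf("enc bit preserved: %s\n",
	       (result & PAGE_BIT_ENC) ? "yes" : "no");
	return 0;
}

The design point is the same one the commit message makes for PAT: callers of pgprot_modify() are only expected to change the access-protection bits, so attributes such as caching type or memory encryption must be treated as part of the preserved mask rather than overwritten by the new value.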
Diffstat (limited to 'arch/x86/include/asm/inat.h')
0 files changed, 0 insertions, 0 deletions