| author | Sean Christopherson <seanjc@google.com> | 2021-06-22 10:56:56 -0700 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2021-06-24 18:00:37 -0400 |
| commit | 2640b0865395b6a31f76d6eca9937dec3e876ca3 (patch) | |
| tree | b84991f4281fbc3aa08223ca32684bde87012e1d | |
| parent | 00a669780ffa8c4b5f3e37346b5bf45508dd15bb (diff) | |
KVM: x86/mmu: WARN and zap SP when sync'ing if MMU role mismatches
When synchronizing a shadow page, WARN and zap the page if its mmu role
isn't compatible with the current MMU context, where "compatible" is an
exact match sans the bits that have no meaning in the overall MMU context
or will be explicitly overwritten during the sync. Many of the helpers
used by sync_page() are specific to the current context: updating an SMM
vs. non-SMM shadow page would use the wrong memslots, updating L1 vs. L2
PTEs might work but would be extremely bizarre, and so on and so forth.
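
As a rough illustration of the check described above, here is a minimal
sketch of __kvm_sync_page() with the role comparison. The types and helpers
(union kvm_mmu_page_role, struct kvm_mmu_page, kvm_mmu_prepare_zap_page(),
the ->sync_page() hook) are real KVM x86 MMU names, but the specific set of
ignored bits and the surrounding plumbing are assumptions drawn from this
commit message, not the literal patch:

```c
/* arch/x86/kvm/mmu/mmu.c -- sketch only, not the actual diff. */
static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
			    struct list_head *invalid_list)
{
	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;

	/*
	 * Bits that either have no meaning in the overall MMU context or are
	 * explicitly overwritten during the sync; mask them out of the role
	 * comparison.  (Exactly which bits qualify is an assumption here.)
	 */
	const union kvm_mmu_page_role sync_role_ign = {
		.level = 0xf,
		.access = 0x7,
		.quadrant = 0x3,
	};

	/*
	 * WARN and zap the shadow page if its role isn't compatible with the
	 * current MMU context, e.g. syncing an SMM page against non-SMM
	 * memslots would be wrong.  Also zap if the sync itself fails.
	 */
	if (WARN_ON_ONCE((sp->role.word ^ mmu_role.word) & ~sync_role_ign.word) ||
	    !vcpu->arch.mmu->sync_page(vcpu, sp)) {
		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
		return false;
	}

	return true;
}
```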
Drop the guard with respect to 8-byte vs. 4-byte PTEs in
__kvm_sync_page(); it became unnecessary when kvm_mmu_get_page() stopped
trying to sync shadow pages irrespective of the current MMU context.
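
For context, the dropped guard sat at the top of __kvm_sync_page() and, per
the description above, compared the shadow page's guest PTE size against the
current paging mode, roughly as follows (is_pae() and gpte_is_8_bytes are
real names, but this is an approximation, not a quote of the removed code):

```c
	/*
	 * Former guard, roughly: bail if the shadow page's guest PTE size
	 * doesn't match the current paging mode.  Now redundant, as
	 * gpte_is_8_bytes is part of the MMU role and is covered by the
	 * role comparison above.
	 */
	if (sp->role.gpte_is_8_bytes != !!is_pae(vcpu))
		return false;
```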
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622175739.3610207-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>