| | | |
|---|---|---|
| author | Sean Christopherson <seanjc@google.com> | 2021-02-25 12:47:42 -0800 |
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2021-03-15 04:43:48 -0400 |
| commit | e7b7bdea77f3277fe49f714c983d0f38f7cb0d86 | |
| tree | d9423f780b1077a459ee6e808920c4f7f1cdd434 | /arch/x86/include/asm |
| parent | d6b87f256591cf6be78825db6a09a5218666e539 | |
KVM: x86/mmu: Move logic for setting SPTE masks for EPT into the MMU proper
Let the MMU deal with the SPTE masks to avoid splitting the logic and
knowledge across the MMU and VMX.
The SPTE masks that are used for EPT are very, very tightly coupled to
the MMU implementation. The use of available bits, the existence of A/D
types, the fact that shadow_x_mask even exists, and so on and so forth
are all baked into the MMU implementation. Cross-referencing the params
to the masks is also a nightmare, as pretty much every param is a u64.
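As an illustration of that problem (not part of this patch), a caller such as VMX ends up with something along the lines of the snippet below. The prototype is the one removed in the diff further down; the VMX_EPT_* arguments and the enable_ept_ad_bits checks are only indicative of the kind of values a caller passes, not a verbatim copy of the old call site.

```c
/*
 * Illustrative only: eight positional u64 arguments, all of them "some
 * mask", which is exactly the cross-referencing nightmare described
 * above.  The argument values here are an assumption, not the exact
 * pre-patch VMX call.
 */
kvm_mmu_set_mask_ptes(VMX_EPT_READABLE_MASK,			/* user_mask */
		      enable_ept_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull, /* accessed_mask */
		      enable_ept_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull,  /* dirty_mask */
		      0ull,					/* nx_mask */
		      VMX_EPT_EXECUTABLE_MASK,			/* x_mask */
		      VMX_EPT_READABLE_MASK,			/* p_mask */
		      VMX_EPT_RWX_MASK,				/* acc_track_mask */
		      0ull);					/* me_mask */
```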
A future patch will make the location of the MMU_WRITABLE and
HOST_WRITABLE bits MMU-specific, to free up bit 11 for an MMU_PRESENT bit.
Making that change with the current kvm_mmu_set_mask_ptes() would be an
absolute mess.
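For context, once those bits are MMU-internal, the MMU can choose different positions for EPT versus legacy shadow paging. A hedged sketch of that kind of layout is below; the exact bit numbers are an assumption based on what later landed in arch/x86/kvm/mmu/spte.h, not something this patch adds.

```c
/*
 * Assumed bit layout, for illustration only.  EPT can park the writable
 * tracking in high software-available bits, while legacy paging uses
 * bits 9 and 10, which is what frees bit 11 for an MMU_PRESENT flag.
 */
#define DEFAULT_SPTE_HOST_WRITABLE	BIT_ULL(9)
#define DEFAULT_SPTE_MMU_WRITABLE	BIT_ULL(10)
#define SPTE_MMU_PRESENT_MASK		BIT_ULL(11)

#define EPT_SPTE_HOST_WRITABLE		BIT_ULL(57)
#define EPT_SPTE_MMU_WRITABLE		BIT_ULL(58)
```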
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-18-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/include/asm')
-rw-r--r-- | arch/x86/include/asm/kvm_host.h | 3 |
1 file changed, 0 insertions, 3 deletions
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index bbe925694d55..d05542645b5c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1412,9 +1412,6 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_mmu_create(struct kvm_vcpu *vcpu);
 void kvm_mmu_init_vm(struct kvm *kvm);
 void kvm_mmu_uninit_vm(struct kvm *kvm);
-void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
-		u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask,
-		u64 acc_track_mask, u64 me_mask);
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
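The MMU-side replacement lives outside arch/x86/include/asm, so it is not visible in this diffstat. A plausible sketch of the helper the MMU exports instead, taking capability booleans rather than raw u64 masks, is shown below; the name, signature, and body are an assumption based on the rest of the series rather than a quote of the patch.

```c
/*
 * Sketch of an MMU-owned setter (assumed shape, see above).  VMX only
 * reports what the hardware supports; the MMU decides which SPTE masks
 * that translates to.
 */
void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
{
	shadow_user_mask	= VMX_EPT_READABLE_MASK;
	shadow_accessed_mask	= has_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull;
	shadow_dirty_mask	= has_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull;
	shadow_nx_mask		= 0ull;
	shadow_x_mask		= VMX_EPT_EXECUTABLE_MASK;
	shadow_present_mask	= has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
	shadow_me_mask		= 0ull;
}
```

With a helper of that shape, VMX's setup path would only need something along the lines of kvm_mmu_set_ept_masks(enable_ept_ad_bits, cpu_has_vmx_ept_execute_only()), again assuming those are the capability checks the caller passes.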