KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only if PML is enabled
Check that PML is actually enabled before setting the mask to force a
SPTE to be write-protected.  The bits used for the !AD_ENABLED case are
in the upper half of the SPTE.  With 64-bit paging and EPT, these bits
are ignored, but with 32-bit PAE paging they are reserved.  Setting them
for L2 SPTEs without checking PML breaks NPT on 32-bit KVM.
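For context, the write-protect tag lives in the software-available SPTE bits 52:53. A simplified excerpt of the relevant mask definitions, as they appeared in arch/x86/kvm/mmu/spte.h around the time of this commit (comment paraphrased; verify the values against the exact tree):

/*
 * Software-available SPTE bits 52:53.  Ignored by hardware with EPT and
 * 64-bit paging, but reserved with 32-bit PAE paging, which is why the
 * WP-only tag must not be set blindly for NPT on 32-bit KVM.
 */
#define SPTE_SPECIAL_MASK		(3ULL << 52)
#define SPTE_AD_ENABLED_MASK		(0ULL << 52)
#define SPTE_AD_DISABLED_MASK		(1ULL << 52)
#define SPTE_AD_WRPROT_ONLY_MASK	(2ULL << 52)
#define SPTE_MMIO_MASK			(3ULL << 52)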

Fixes: 1f4e5fc ("KVM: x86: fix nested guest live migration with PML")
Cc: [email protected]
Signed-off-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
sean-jc authored and bonzini committed Feb 26, 2021
1 parent 919f4eb commit 44ac595
Showing 1 changed file with 8 additions and 8 deletions.
arch/x86/kvm/mmu/mmu_internal.h
@@ -81,15 +81,15 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
 {
 	/*
-	 * When using the EPT page-modification log, the GPAs in the log
-	 * would come from L2 rather than L1.  Therefore, we need to rely
-	 * on write protection to record dirty pages.  This also bypasses
-	 * PML, since writes now result in a vmexit.  Note, this helper will
-	 * tag SPTEs as needing write-protection even if PML is disabled or
-	 * unsupported, but that's ok because the tag is consumed if and only
-	 * if PML is enabled.  Omit the PML check to save a few uops.
+	 * When using the EPT page-modification log, the GPAs in the CPU dirty
+	 * log would come from L2 rather than L1.  Therefore, we need to rely
+	 * on write protection to record dirty pages, which bypasses PML, since
+	 * writes now result in a vmexit.  Note, the check on CPU dirty logging
+	 * being enabled is mandatory as the bits used to denote WP-only SPTEs
+	 * are reserved for NPT w/ PAE (32-bit KVM).
 	 */
-	return vcpu->arch.mmu == &vcpu->arch.guest_mmu;
+	return vcpu->arch.mmu == &vcpu->arch.guest_mmu &&
+	       kvm_x86_ops.cpu_dirty_log_size;
 }
 
 bool is_nx_huge_page_enabled(void);
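For reference, the helper's verdict is consumed when a new SPTE is built. Below is a hedged sketch of the consuming logic, modeled on make_spte() in arch/x86/kvm/mmu/spte.c from the same era (names taken from that tree; the exact code may differ):

	/* Sketch: tag the SPTE per the vCPU's dirty-logging strategy. */
	if (sp->role.ad_disabled)
		spte |= SPTE_AD_DISABLED_MASK;
	else if (kvm_vcpu_ad_need_write_protect(vcpu))
		spte |= SPTE_AD_WRPROT_ONLY_MASK;

With the fix, L2 SPTEs are tagged with SPTE_AD_WRPROT_ONLY_MASK only when kvm_x86_ops.cpu_dirty_log_size is non-zero, i.e. when PML is actually in use, so NPT with 32-bit PAE paging never sees the reserved bits set.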
