Message ID | 20220921173546.2674386-11-dmatlack@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: x86/mmu: Make tdp_mmu read-only and clean up TDP MMU fault handler |
On Wed, Sep 21, 2022 at 10:35:46AM -0700, David Matlack <dmatlack@google.com> wrote:
> Rename __direct_map() to direct_map() since the leading underscores are
> unnecessary. This also makes the page fault handler names more
> consistent: kvm_tdp_mmu_page_fault() calls kvm_tdp_mmu_map() and
> direct_page_fault() calls direct_map().
>
> Opportunistically make some trivial cleanups to comments that had to be
> modified anyway since they mentioned __direct_map(). Specifically, use
> "()" when referring to functions, and include kvm_tdp_mmu_map() among
> the various callers of disallowed_hugepage_adjust().
>
> No functional change intended.
>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
>  arch/x86/kvm/mmu/mmu_internal.h |  2 +-
>  2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 4ad70fa371df..a0b4bc3c9202 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3079,11 +3079,11 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
>  	    is_shadow_present_pte(spte) &&
>  	    !is_large_pte(spte)) {
>  		/*
> -		 * A small SPTE exists for this pfn, but FNAME(fetch)
> -		 * and __direct_map would like to create a large PTE
> -		 * instead: just force them to go down another level,
> -		 * patching back for them into pfn the next 9 bits of
> -		 * the address.
> +		 * A small SPTE exists for this pfn, but FNAME(fetch),
> +		 * direct_map(), or kvm_tdp_mmu_map() would like to create a
> +		 * large PTE instead: just force them to go down another level,
> +		 * patching back for them into pfn the next 9 bits of the
> +		 * address.
>  		 */
>  		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
>  				KVM_PAGES_PER_HPAGE(cur_level - 1);
> @@ -3092,7 +3092,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
>  	}
>  }
>
> -static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> +static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_shadow_walk_iterator it;
>  	struct kvm_mmu_page *sp;
> @@ -4265,7 +4265,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	if (r)
>  		goto out_unlock;
>
> -	r = __direct_map(vcpu, fault);
> +	r = direct_map(vcpu, fault);
>
>  out_unlock:
>  	write_unlock(&vcpu->kvm->mmu_lock);
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 1e91f24bd865..b8c116ec1a89 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -198,7 +198,7 @@ struct kvm_page_fault {
>
>  	/*
>  	 * Maximum page size that can be created for this fault; input to
> -	 * FNAME(fetch), __direct_map and kvm_tdp_mmu_map.
> +	 * FNAME(fetch), direct_map() and kvm_tdp_mmu_map().
>  	 */
>  	u8 max_level;
>
> --
> 2.37.3.998.g577e59143f-goog
>

Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
```diff
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4ad70fa371df..a0b4bc3c9202 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3079,11 +3079,11 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
 		/*
-		 * A small SPTE exists for this pfn, but FNAME(fetch)
-		 * and __direct_map would like to create a large PTE
-		 * instead: just force them to go down another level,
-		 * patching back for them into pfn the next 9 bits of
-		 * the address.
+		 * A small SPTE exists for this pfn, but FNAME(fetch),
+		 * direct_map(), or kvm_tdp_mmu_map() would like to create a
+		 * large PTE instead: just force them to go down another level,
+		 * patching back for them into pfn the next 9 bits of the
+		 * address.
 		 */
 		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
 				KVM_PAGES_PER_HPAGE(cur_level - 1);
@@ -3092,7 +3092,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	}
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
@@ -4265,7 +4265,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		goto out_unlock;
 
-	r = __direct_map(vcpu, fault);
+	r = direct_map(vcpu, fault);
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1e91f24bd865..b8c116ec1a89 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -198,7 +198,7 @@ struct kvm_page_fault {
 
 	/*
 	 * Maximum page size that can be created for this fault; input to
-	 * FNAME(fetch), __direct_map and kvm_tdp_mmu_map.
+	 * FNAME(fetch), direct_map() and kvm_tdp_mmu_map().
 	 */
 	u8 max_level;
```
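To make the "patching back for them into pfn the next 9 bits" comment concrete, here is a minimal, self-contained sketch of the page_mask arithmetic. This is not KVM code: pages_per_hpage() stands in for the kernel's KVM_PAGES_PER_HPAGE() macro (simplified to x86's 512-entry page tables), and the gfn value is hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Pages covered by a huge page at a given level: 512^(level - 1) on x86. */
static uint64_t pages_per_hpage(int level)
{
	return 1ULL << ((level - 1) * 9);
}

int main(void)
{
	/*
	 * Suppose the fault wanted a 1GiB mapping (level 3), but a small
	 * SPTE already exists, so the mapping must drop to level 2.
	 */
	int cur_level = 3;
	uint64_t gfn = 0x12345678;	/* hypothetical guest frame number */

	/* The pfn so far is only aligned to the level-3 (1GiB) region. */
	uint64_t pfn = gfn & ~(pages_per_hpage(cur_level) - 1);

	/*
	 * page_mask selects exactly the next 9 bits of the address: the
	 * index of the level-2 entry within the level-3 region.
	 */
	uint64_t page_mask = pages_per_hpage(cur_level) -
			     pages_per_hpage(cur_level - 1);

	/* Patch those 9 bits back into the pfn before retrying at level 2. */
	pfn |= gfn & page_mask;

	printf("page_mask = %#llx, refined pfn = %#llx\n",
	       (unsigned long long)page_mask, (unsigned long long)pfn);
	return 0;
}
```

With these values the sketch prints page_mask = 0x3fe00 and refined pfn = 0x12345600: page_mask covers bits 9 through 17 of the gfn, the nine bits that the next-lower level of the page table resolves.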
Rename __direct_map() to direct_map() since the leading underscores are
unnecessary. This also makes the page fault handler names more
consistent: kvm_tdp_mmu_page_fault() calls kvm_tdp_mmu_map() and
direct_page_fault() calls direct_map().

Opportunistically make some trivial cleanups to comments that had to be
modified anyway since they mentioned __direct_map(). Specifically, use
"()" when referring to functions, and include kvm_tdp_mmu_map() among
the various callers of disallowed_hugepage_adjust().

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 2 files changed, 8 insertions(+), 8 deletions(-)