Message ID | 20220909095006.65440-2-metikaya@amazon.co.uk (mailing list archive) |
---|---|
State | New, archived |
Series | [1/2] KVM: x86/xen: Remove redundant NULL check |
On Fri, Sep 09, 2022, Metin Kaya wrote:
> It simplifies validation of gfn_to_hva_cache to make it less error prone

Avoid pronouns as they're ambiguous, e.g. does "it" mean the helper or the
patch?  Obviously not a big deal in this case, but avoiding pronouns is a
good habit to get into.

> per the discussion at
> https://lore.kernel.org/all/4e29402770a7a254a1ea8ca8165af641ed0832ed.camel@infradead.org.

Please write a proper changelog.  Providing a link to the discussion is
wonderful, but it's a supplement to a good changelog, not a substitute.

> Signed-off-by: Metin Kaya <metikaya@amazon.co.uk>
> ---
>  arch/x86/kvm/x86.c | 26 ++++++++++++++------------
>  1 file changed, 14 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 43a6a7efc6ec..07d368dc69ad 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3425,11 +3425,22 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
>
> +static inline bool kvm_gfn_to_hva_cache_valid(struct kvm *kvm,

Don't add inline to local static functions, let the compiler make those
decisions.

> +					      struct gfn_to_hva_cache *ghc,
> +					      gpa_t gpa)
> +{
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +
> +	return !unlikely(slots->generation != ghc->generation ||

I don't think the unlikely should be here, it's the context in which the
helper is used that determines whether or not the outcome is unlikely.

> +			 gpa != ghc->gpa ||
> +			 kvm_is_error_hva(ghc->hva) ||
> +			 !ghc->memslot);

Might be worth opportunistically reordering the checks to avoid grabbing
memslots when something else is invalid, e.g.

	return gpa != ghc->gpa ||
	       kvm_is_error_hva(ghc->hva) ||
	       !ghc->memslot ||
	       kvm_memslots(kvm)->generation != ghc->generation;

> +}
> +
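The reviewer's objection to `unlikely()` inside the helper is that a branch-prediction hint encodes the *caller's* expectation, so it belongs at the call site; a shared helper can't know whether a stale cache is rare or common for any given caller. A minimal userspace sketch of that split, using hypothetical simplified stand-in types (the real `struct gfn_to_hva_cache` and memslot machinery are far more involved):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's branch-prediction hints. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical, simplified stand-in for struct gfn_to_hva_cache. */
struct hva_cache {
	uint64_t generation;
	uint64_t gpa;
	unsigned long hva;	/* 0 models an error HVA here */
	bool has_memslot;
};

/*
 * The helper itself stays hint-free: whether a stale cache is "unlikely"
 * depends on the context in which the helper is used.
 */
static bool cache_valid(const struct hva_cache *c, uint64_t cur_generation,
			uint64_t gpa)
{
	return c->generation == cur_generation &&
	       c->gpa == gpa &&
	       c->hva != 0 &&
	       c->has_memslot;
}

/* A caller that expects a hot cache applies the hint itself. */
static int use_cache(const struct hva_cache *c, uint64_t cur_generation,
		     uint64_t gpa)
{
	if (unlikely(!cache_valid(c, cur_generation, gpa)))
		return -1;	/* slow path: caller would refresh the cache */
	return 0;		/* fast path */
}
```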
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 43a6a7efc6ec..07d368dc69ad 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3425,11 +3425,22 @@ void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
 
+static inline bool kvm_gfn_to_hva_cache_valid(struct kvm *kvm,
+					      struct gfn_to_hva_cache *ghc,
+					      gpa_t gpa)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+
+	return !unlikely(slots->generation != ghc->generation ||
+			 gpa != ghc->gpa ||
+			 kvm_is_error_hva(ghc->hva) ||
+			 !ghc->memslot);
+}
+
 static void record_steal_time(struct kvm_vcpu *vcpu)
 {
 	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
 	struct kvm_steal_time __user *st;
-	struct kvm_memslots *slots;
 	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
 	u64 steal;
 	u32 version;
@@ -3445,11 +3456,7 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
 	if (WARN_ON_ONCE(current->mm != vcpu->kvm->mm))
 		return;
 
-	slots = kvm_memslots(vcpu->kvm);
-
-	if (unlikely(slots->generation != ghc->generation ||
-		     gpa != ghc->gpa ||
-		     kvm_is_error_hva(ghc->hva) || !ghc->memslot)) {
+	if (!kvm_gfn_to_hva_cache_valid(vcpu->kvm, ghc, gpa)) {
 		/* We rely on the fact that it fits in a single page. */
 		BUILD_BUG_ON((sizeof(*st) - 1) & KVM_STEAL_VALID_BITS);
@@ -4729,7 +4736,6 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 {
 	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
 	struct kvm_steal_time __user *st;
-	struct kvm_memslots *slots;
 	static const u8 preempted = KVM_VCPU_PREEMPTED;
 	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
 
@@ -4756,11 +4762,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 	if (unlikely(current->mm != vcpu->kvm->mm))
 		return;
 
-	slots = kvm_memslots(vcpu->kvm);
-
-	if (unlikely(slots->generation != ghc->generation ||
-		     gpa != ghc->gpa ||
-		     kvm_is_error_hva(ghc->hva) || !ghc->memslot))
+	if (!kvm_gfn_to_hva_cache_valid(vcpu->kvm, ghc, gpa))
 		return;
 
 	st = (struct kvm_steal_time __user *)ghc->hva;
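The reordering suggested in the review leans on C's short-circuit evaluation of `||`: with the cheap field comparisons first, the comparatively costlier memslots-generation lookup never runs when an earlier check already fails. A hypothetical userspace sketch of that ordering, with a counter standing in for the cost of `kvm_memslots()`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in; the real KVM types carry far more state. */
struct cache {
	uint64_t generation;
	uint64_t gpa;
	unsigned long hva;	/* 0 models an error HVA here */
	bool has_memslot;
};

static int memslots_lookups;	/* counts how often the "expensive" step runs */

static uint64_t current_generation(void)
{
	memslots_lookups++;	/* models the cost of grabbing kvm_memslots() */
	return 42;
}

/*
 * Cheap checks first: when gpa mismatches (or the HVA/memslot is bad),
 * short-circuiting skips current_generation() entirely.
 */
static bool cache_invalid(const struct cache *c, uint64_t gpa)
{
	return gpa != c->gpa ||
	       c->hva == 0 ||
	       !c->has_memslot ||
	       current_generation() != c->generation;
}
```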