Message ID | 20220321224358.1305530-6-bgardon@google.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86/MMU: Optimize disabling dirty logging
On Mon, Mar 21, 2022, Ben Gardon wrote:
> Factor out the implementation of reset_tdp_shadow_zero_bits_mask to a
> helper function which does not require a vCPU pointer. The only element
> of the struct kvm_mmu context used by the function is the shadow root
> level, so pass that in too instead of the mmu context.
>
> No functional change intended.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3b8da8b0745e..6f98111f8f8b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4487,16 +4487,14 @@ static inline bool boot_cpu_is_amd(void)
>   * possible, however, kvm currently does not do execution-protection.
>   */
>  static void

Strongly prefer the newline here gets dropped (see below).

> -reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
> +build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check,

Kind of a nit, but KVM uses "calc" for this sort of thing. There are no other
instances of "build_" to describe this behavior.

Am I alone in thinking that shadow_zero_check is an awful, awful name? E.g. the
EPT memtype case has legal non-zero values. Anyone object to opportunistically
renaming the function and the local shadow_zero_check to "rsvd_bits" to shorten
line lengths and move KVM one step closer to consistent naming?

> +                                int shadow_root_level)
>  {
> -        struct rsvd_bits_validate *shadow_zero_check;
>          int i;
>
> -        shadow_zero_check = &context->shadow_zero_check;
> -
>          if (boot_cpu_is_amd())
>                  __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
> -                                        context->shadow_root_level, false,
> +                                        shadow_root_level, false,
>                                          boot_cpu_has(X86_FEATURE_GBPAGES),
>                                          false, true);
>          else
> @@ -4507,12 +4505,19 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
>          if (!shadow_me_mask)
>                  return;
>
> -        for (i = context->shadow_root_level; --i >= 0;) {
> +        for (i = shadow_root_level; --i >= 0;) {
>                  shadow_zero_check->rsvd_bits_mask[0][i] &= ~shadow_me_mask;
>                  shadow_zero_check->rsvd_bits_mask[1][i] &= ~shadow_me_mask;
>          }
>  }
>
> +static void
> +reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)

One line! Aside from being against the One True Style[*], there is zero reason
for a newline here.

And I vote to drop the "mask", because (a) it's not a singular mask and (b) it's
not even a mask in all cases.

And while I'm on a naming consistency rant, s/context/mmu. I.e. end up with:

  static void calc_tdp_shadow_rsvd_bits(struct rsvd_bits_validate *rsvd_bits,
                                        int shadow_root_level)

  static void reset_tdp_shadow_rsvd_bits(struct kvm_mmu *mmu)

[*] https://lore.kernel.org/mm-commits/CAHk-=wjS-Jg7sGMwUPpDsjv392nDOOs0CtUtVkp=S6Q7JzFJRw@mail.gmail.com

> +{
> +        build_tdp_shadow_zero_bits_mask(&context->shadow_zero_check,
> +                                        context->shadow_root_level);
> +}
> +
>  /*
>   * as the comments in reset_shadow_zero_bits_mask() except it
>   * is the shadow page table for intel nested guest.
> --
> 2.35.1.894.gb6a874cedc-goog
>
On Tue, Apr 12, 2022 at 8:46 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Mar 21, 2022, Ben Gardon wrote:
> > Factor out the implementation of reset_tdp_shadow_zero_bits_mask to a
> > helper function which does not require a vCPU pointer. The only element
> > of the struct kvm_mmu context used by the function is the shadow root
> > level, so pass that in too instead of the mmu context.
> >
> > [...]
> >
> > -reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
> > +build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check,
>
> Kind of a nit, but KVM uses "calc" for this sort of thing. There are no other
> instances of "build_" to describe this behavior.
>
> Am I alone in thinking that shadow_zero_check is an awful, awful name? E.g. the
> EPT memtype case has legal non-zero values. Anyone object to opportunistically
> renaming the function and the local shadow_zero_check to "rsvd_bits" to shorten
> line lengths and move KVM one step closer to consistent naming?

That makes sense to me. I'm happy to add a commit to this series to
standardize on rsvd_bits.

[...]
On Thu, Apr 21, 2022 at 11:50 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Tue, Apr 12, 2022 at 8:46 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > [...]
> >
> > Am I alone in thinking that shadow_zero_check is an awful, awful name? E.g. the
> > EPT memtype case has legal non-zero values. Anyone object to opportunistically
> > renaming the function and the local shadow_zero_check to "rsvd_bits" to shorten
> > line lengths and move KVM one step closer to consistent naming?
>
> That makes sense to me. I'm happy to add a commit to this series to
> standardize on rsvd_bits.

Actually rsvd_bits is already a function name, so I'm going to standardize on
spte_rsvd_bits, if that works for everyone.

[...]
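For reference, a rough sketch of how the two helpers might read after combining
Sean's calc_/reset_ suggestion with the spte_rsvd_bits name. The names below are
assumptions drawn from this thread, not the final committed code:

  /*
   * Hypothetical naming only; the calculation body is unchanged from
   * build_tdp_shadow_zero_bits_mask() in the patch below, with the local
   * renamed from shadow_zero_check to rsvd_bits.
   */
  static void calc_tdp_spte_rsvd_bits(struct rsvd_bits_validate *rsvd_bits,
                                      int shadow_root_level)
  {
          /* ... same body as build_tdp_shadow_zero_bits_mask() ... */
  }

  static void reset_tdp_spte_rsvd_bits(struct kvm_mmu *mmu)
  {
          calc_tdp_spte_rsvd_bits(&mmu->shadow_zero_check,
                                  mmu->shadow_root_level);
  }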
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3b8da8b0745e..6f98111f8f8b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4487,16 +4487,14 @@ static inline bool boot_cpu_is_amd(void)
  * possible, however, kvm currently does not do execution-protection.
  */
 static void
-reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
+build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check,
+                                int shadow_root_level)
 {
-        struct rsvd_bits_validate *shadow_zero_check;
         int i;
 
-        shadow_zero_check = &context->shadow_zero_check;
-
         if (boot_cpu_is_amd())
                 __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(),
-                                        context->shadow_root_level, false,
+                                        shadow_root_level, false,
                                         boot_cpu_has(X86_FEATURE_GBPAGES),
                                         false, true);
         else
@@ -4507,12 +4505,19 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
         if (!shadow_me_mask)
                 return;
 
-        for (i = context->shadow_root_level; --i >= 0;) {
+        for (i = shadow_root_level; --i >= 0;) {
                 shadow_zero_check->rsvd_bits_mask[0][i] &= ~shadow_me_mask;
                 shadow_zero_check->rsvd_bits_mask[1][i] &= ~shadow_me_mask;
         }
 }
 
+static void
+reset_tdp_shadow_zero_bits_mask(struct kvm_mmu *context)
+{
+        build_tdp_shadow_zero_bits_mask(&context->shadow_zero_check,
+                                        context->shadow_root_level);
+}
+
 /*
  * as the comments in reset_shadow_zero_bits_mask() except it
  * is the shadow page table for intel nested guest.
Factor out the implementation of reset_tdp_shadow_zero_bits_mask to a
helper function which does not require a vCPU pointer. The only element
of the struct kvm_mmu context used by the function is the shadow root
level, so pass that in too instead of the mmu context.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
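As an illustration of why the vCPU-free helper matters for the rest of the
series, a hypothetical caller (the function name and context below are invented
for illustration and are not part of this patch) can now compute the
reserved-bit masks with nothing more than the TDP root level on hand:

  /*
   * Purely illustrative: with build_tdp_shadow_zero_bits_mask() taking only a
   * root level, a path that has no vCPU and no struct kvm_mmu can fill a local
   * rsvd_bits_validate and use it for SPTE checks.
   */
  static void example_calc_rsvd_bits_without_vcpu(int tdp_root_level)
  {
          struct rsvd_bits_validate rsvd_bits;

          build_tdp_shadow_zero_bits_mask(&rsvd_bits, tdp_root_level);

          /* rsvd_bits now holds the reserved-bit masks, independent of any vCPU. */
  }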