Message ID | 20181013145406.4911-3-Tianyu.Lan@microsoft.com (mailing list archive) |
---|---|
State | New, archived |
Series | x86/KVM/Hyper-v: Add HV ept tlb range flush hypercall support in KVM |
> On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
>
> From: Lan Tianyu <Tianyu.Lan@microsoft.com>
>
> This patch is to add wrapper functions for tlb_remote_flush_with_range
> callback.
>
> Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> ---
> Change since V3:
>        Remove code of updating "tlbs_dirty"
> Change since V2:
>        Fix comment in the kvm_flush_remote_tlbs_with_range()
> ---
> arch/x86/kvm/mmu.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 40 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index c73d9f650de7..ff656d85903a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
> static union kvm_mmu_page_role
> kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
>
> +
> +static inline bool kvm_available_flush_tlb_with_range(void)
> +{
> +	return kvm_x86_ops->tlb_remote_flush_with_range;
> +}

Seems that kvm_available_flush_tlb_with_range() is not used in this patch…

> +
> +static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
> +		struct kvm_tlb_range *range)
> +{
> +	int ret = -ENOTSUPP;
> +
> +	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
> +		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
> +
> +	if (ret)
> +		kvm_flush_remote_tlbs(kvm);
> +}
> +
> +static void kvm_flush_remote_tlbs_with_list(struct kvm *kvm,
> +		struct list_head *flush_list)
> +{
> +	struct kvm_tlb_range range;
> +
> +	range.flush_list = flush_list;
> +
> +	kvm_flush_remote_tlbs_with_range(kvm, &range);
> +}
> +
> +static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
> +		u64 start_gfn, u64 pages)
> +{
> +	struct kvm_tlb_range range;
> +
> +	range.start_gfn = start_gfn;
> +	range.pages = pages;
> +	range.flush_list = NULL;
> +
> +	kvm_flush_remote_tlbs_with_range(kvm, &range);
> +}
> +
> void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
> {
> 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
> --
> 2.14.4
>
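For readers without the rest of the series at hand: the wrappers above fill in a struct kvm_tlb_range that an earlier patch in the series introduces. Based purely on the fields used here (start_gfn, pages, flush_list), the descriptor presumably looks roughly like the sketch below; the exact definition, field order, and any additional members are assumptions, not taken from this mail.

/* Hypothetical sketch of the range descriptor consumed by
 * kvm_x86_ops->tlb_remote_flush_with_range(); the real definition comes
 * from an earlier patch in this series and may differ.
 */
struct kvm_tlb_range {
	u64 start_gfn;                /* first guest frame number to flush */
	u64 pages;                    /* number of pages starting at start_gfn */
	struct list_head *flush_list; /* alternatively, a list of pages to flush */
};
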
On Sun, 14 Oct 2018, Liran Alon wrote:
> > On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
> >
> > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> >
> > This patch is to add wrapper functions for tlb_remote_flush_with_range
> > callback.
> >
> > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > ---
> > Change since V3:
> >        Remove code of updating "tlbs_dirty"
> > Change since V2:
> >        Fix comment in the kvm_flush_remote_tlbs_with_range()
> > ---
> > arch/x86/kvm/mmu.c | 40 ++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 40 insertions(+)
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index c73d9f650de7..ff656d85903a 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
> > static union kvm_mmu_page_role
> > kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
> >
> > +
> > +static inline bool kvm_available_flush_tlb_with_range(void)
> > +{
> > +	return kvm_x86_ops->tlb_remote_flush_with_range;
> > +}
>
> Seems that kvm_available_flush_tlb_with_range() is not used in this patch…

What's wrong with that?

It provides the implementation and later patches make use of it. It's a
sensible way to split patches into small, self contained entities.

Thanks,

	tglx
> On 14 Oct 2018, at 11:16, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Sun, 14 Oct 2018, Liran Alon wrote:
>>> On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
>>>
>>> +
>>> +static inline bool kvm_available_flush_tlb_with_range(void)
>>> +{
>>> +	return kvm_x86_ops->tlb_remote_flush_with_range;
>>> +}
>>
>> Seems that kvm_available_flush_tlb_with_range() is not used in this patch…
>
> What's wrong with that?
>
> It provides the implementation and later patches make use of it. It's a
> sensible way to split patches into small, self contained entities.
>
> Thanks,
>
> tglx
>

I guess it’s a matter of taste, but I prefer not to add dead code in patches,
so that each commit compiles cleanly without warnings about declared but
unused functions. I would prefer to just add this utility function in the
patch that actually uses it.

-Liran
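As background for the warning concern raised here (a general compiler sketch, not output from building this series): with GCC or Clang under -Wall, an unused function with plain static linkage is normally reported by -Wunused-function, while an unused static inline one, like kvm_available_flush_tlb_with_range() above, typically is not. The file and function names below are invented for illustration.

/* unused-demo.c - illustrative only, not from the series */

/* Expected to trigger -Wunused-function ("defined but not used") under -Wall
 * if nothing in this translation unit calls it.
 */
static int unused_helper(void)
{
	return 0;
}

/* Typically produces no warning even when unused, which is why unused
 * "static inline" helpers tend to slip through patch-by-patch builds.
 */
static inline int unused_inline_helper(void)
{
	return 1;
}

int main(void)
{
	return 0;
}
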
On Sun, Oct 14, 2018 at 10:16:56AM +0200, Thomas Gleixner wrote:
> On Sun, 14 Oct 2018, Liran Alon wrote:
> > > On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
> > >
> > > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > >
> > > This patch is to add wrapper functions for tlb_remote_flush_with_range
> > > callback.
> > >
> > > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > > ---
> > > Change since V3:
> > >        Remove code of updating "tlbs_dirty"
> > > Change since V2:
> > >        Fix comment in the kvm_flush_remote_tlbs_with_range()
> > > ---
> > > arch/x86/kvm/mmu.c | 40 ++++++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 40 insertions(+)
> > >
> > > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > > index c73d9f650de7..ff656d85903a 100644
> > > --- a/arch/x86/kvm/mmu.c
> > > +++ b/arch/x86/kvm/mmu.c
> > > @@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
> > > static union kvm_mmu_page_role
> > > kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
> > >
> > > +
> > > +static inline bool kvm_available_flush_tlb_with_range(void)
> > > +{
> > > +	return kvm_x86_ops->tlb_remote_flush_with_range;
> > > +}
> >
> > Seems that kvm_available_flush_tlb_with_range() is not used in this patch…
>
> What's wrong with that?
>
> It provides the implementation and later patches make use of it. It's a
> sensible way to split patches into small, self contained entities.

From what I can see of the patches that follow _this_ patch in the
series, this function remains unused.  So, not only is it not used
in this patch, it's not used in this series.

I think the real question that needs asking is - what is the plan for
this function, and when will it be used?
On Sun, Oct 14, 2018 at 10:27:34AM +0100, Russell King - ARM Linux wrote:
> On Sun, Oct 14, 2018 at 10:16:56AM +0200, Thomas Gleixner wrote:
> > On Sun, 14 Oct 2018, Liran Alon wrote:
> > > > On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
> > > >
> > > > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > > >
> > > > This patch is to add wrapper functions for tlb_remote_flush_with_range
> > > > callback.
> > > >
> > > > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > > > ---
> > > > Change since V3:
> > > >        Remove code of updating "tlbs_dirty"
> > > > Change since V2:
> > > >        Fix comment in the kvm_flush_remote_tlbs_with_range()
> > > > ---
> > > > arch/x86/kvm/mmu.c | 40 ++++++++++++++++++++++++++++++++++++++++
> > > > 1 file changed, 40 insertions(+)
> > > >
> > > > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > > > index c73d9f650de7..ff656d85903a 100644
> > > > --- a/arch/x86/kvm/mmu.c
> > > > +++ b/arch/x86/kvm/mmu.c
> > > > @@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
> > > > static union kvm_mmu_page_role
> > > > kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
> > > >
> > > > +
> > > > +static inline bool kvm_available_flush_tlb_with_range(void)
> > > > +{
> > > > +	return kvm_x86_ops->tlb_remote_flush_with_range;
> > > > +}
> > >
> > > Seems that kvm_available_flush_tlb_with_range() is not used in this patch…
> >
> > What's wrong with that?
> >
> > It provides the implementation and later patches make use of it. It's a
> > sensible way to split patches into small, self contained entities.
>
> From what I can see of the patches that follow _this_ patch in the
> series, this function remains unused.  So, not only is it not used
> in this patch, it's not used in this series.

Note - I seem to have only received patches 1 through 4, so this is
based on the patches I've received.
Hi Liran & Thomas:

Thanks for your review.

On Sun, Oct 14, 2018 at 5:20 PM Liran Alon <liran.alon@oracle.com> wrote:
>
>
>
> > On 14 Oct 2018, at 11:16, Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > On Sun, 14 Oct 2018, Liran Alon wrote:
> >>> On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
> >>>
> >>> +
> >>> +static inline bool kvm_available_flush_tlb_with_range(void)
> >>> +{
> >>> +	return kvm_x86_ops->tlb_remote_flush_with_range;
> >>> +}
> >>
> >> Seems that kvm_available_flush_tlb_with_range() is not used in this patch…
> >
> > What's wrong with that?
> >
> > It provides the implementation and later patches make use of it. It's a
> > sensible way to split patches into small, self contained entities.
> >
> > Thanks,
> >
> > tglx
> >
>
> I guess it’s a matter of taste, but I prefer not to add dead code in patches,
> so that each commit compiles cleanly without warnings about declared but
> unused functions. I would prefer to just add this utility function in the
> patch that actually uses it.
>
> -Liran
>

Normally, I also prefer to put a function definition into the patch that uses
it. But the following patch "KVM: Replace old tlb flush function with new one
to flush a specified range" and the other patches that use the new functions
touch a lot of places. That would not be friendly for review, so I split them
into pieces.

--
Best regards
Tianyu Lan
Hi Russell:

Thanks for your review.

On Sun, Oct 14, 2018 at 5:36 PM Russell King - ARM Linux
<linux@armlinux.org.uk> wrote:
>
> On Sun, Oct 14, 2018 at 10:27:34AM +0100, Russell King - ARM Linux wrote:
> > On Sun, Oct 14, 2018 at 10:16:56AM +0200, Thomas Gleixner wrote:
> > > On Sun, 14 Oct 2018, Liran Alon wrote:
> > > > > On 13 Oct 2018, at 17:53, lantianyu1986@gmail.com wrote:
> > > > >
> > > > > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > > > >
> > > > > This patch is to add wrapper functions for tlb_remote_flush_with_range
> > > > > callback.
> > > > >
> > > > > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > > > > ---
> > > > > Change since V3:
> > > > >        Remove code of updating "tlbs_dirty"
> > > > > Change since V2:
> > > > >        Fix comment in the kvm_flush_remote_tlbs_with_range()
> > > > > ---
> > > > > arch/x86/kvm/mmu.c | 40 ++++++++++++++++++++++++++++++++++++++++
> > > > > 1 file changed, 40 insertions(+)
> > > > >
> > > > > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > > > > index c73d9f650de7..ff656d85903a 100644
> > > > > --- a/arch/x86/kvm/mmu.c
> > > > > +++ b/arch/x86/kvm/mmu.c
> > > > > @@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
> > > > > static union kvm_mmu_page_role
> > > > > kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
> > > > >
> > > > > +
> > > > > +static inline bool kvm_available_flush_tlb_with_range(void)
> > > > > +{
> > > > > +	return kvm_x86_ops->tlb_remote_flush_with_range;
> > > > > +}
> > > >
> > > > Seems that kvm_available_flush_tlb_with_range() is not used in this patch…
> > >
> > > What's wrong with that?
> > >
> > > It provides the implementation and later patches make use of it. It's a
> > > sensible way to split patches into small, self contained entities.
> >
> > From what I can see of the patches that follow _this_ patch in the
> > series, this function remains unused.  So, not only is it not used
> > in this patch, it's not used in this series.
>
> Note - I seem to have only received patches 1 through 4, so this is
> based on the patches I've received.
>

Sorry for the confusion. I got the Cc list from the get_maintainer.pl script.
The next patch "[PATCH V4 3/15] KVM: Replace old tlb flush function with new
one to flush a specified range" calls the new function.
https://lkml.org/lkml/2018/10/13/254
The patch "[PATCH V4 4/15] KVM: Make kvm_set_spte_hva() return int" changes
code under the ARM directory. Please have a look. Thanks.
On Sun, Oct 14, 2018 at 09:21:23PM +0800, Tianyu Lan wrote:
> Sorry for the confusion. I got the Cc list from the get_maintainer.pl script.
Unfortunately you seem to have made a mistake. My email address is
'linux@armlinux.org.uk' not 'linux@armlinux.org'. There is no
'linux@armlinux.org' in MAINTAINERS, yet your emails appear to be
copied to this address.
Consequently, if it was your intention to copy me with the entire
series, that didn't happen.
On 14/10/2018 10:16, Thomas Gleixner wrote:
>>> +static inline bool kvm_available_flush_tlb_with_range(void)
>>> +{
>>> +	return kvm_x86_ops->tlb_remote_flush_with_range;
>>> +}
>> Seems that kvm_available_flush_tlb_with_range() is not used in this patch…
> What's wrong with that?
>
> It provides the implementation and later patches make use of it. It's a
> sensible way to split patches into small, self contained entities.

That's true; on the other hand, I do have a concern with this patch: the
series is not bisectable at all, because all the new code is dead until the
very last patch. Uses of the new feature should come _after_ the
implementation.

I don't have any big problem with what Liran pointed out (and I can live with
the unused static functions that would warn with -Wunused, too), but the above
should be fixed in v5, basically by moving patches 12-15 to the beginning of
the series.

Paolo
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c73d9f650de7..ff656d85903a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
+
+static inline bool kvm_available_flush_tlb_with_range(void)
+{
+	return kvm_x86_ops->tlb_remote_flush_with_range;
+}
+
+static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+		struct kvm_tlb_range *range)
+{
+	int ret = -ENOTSUPP;
+
+	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
+		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
+}
+
+static void kvm_flush_remote_tlbs_with_list(struct kvm *kvm,
+		struct list_head *flush_list)
+{
+	struct kvm_tlb_range range;
+
+	range.flush_list = flush_list;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
+static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
+		u64 start_gfn, u64 pages)
+{
+	struct kvm_tlb_range range;
+
+	range.start_gfn = start_gfn;
+	range.pages = pages;
+	range.flush_list = NULL;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
 {
 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
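To illustrate how a later patch in the series is expected to use these wrappers, here is a minimal, hypothetical sketch of narrowing a flush from a whole-VM kvm_flush_remote_tlbs() call to the GFN range of one memslot. The function kvm_mmu_flush_memslot_example() and its call site are invented for illustration; the wrapper itself already falls back to a full flush when the range hypercall is unavailable or fails.

/* Hypothetical caller, not part of this patch: where code previously issued
 * a full kvm_flush_remote_tlbs(kvm) after touching one memslot, a later
 * patch can flush only the affected guest frame range instead.
 */
static void kvm_mmu_flush_memslot_example(struct kvm *kvm,
					   struct kvm_memory_slot *memslot)
{
	/* ... write-protect or otherwise modify SPTEs covering the slot ... */

	kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
					   memslot->npages);
}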