Message ID | 20230602170147.1541355-4-coltonlewis@google.com (mailing list archive)
---|---
State | New, archived
Series | Relax break-before-make use with FEAT_BBM
On Fri, 02 Jun 2023 18:01:47 +0100, Colton Lewis <coltonlewis@google.com> wrote:
>
> Skip the break phase of break-before-make when the CPU has FEAT_BBM
> level 2. This allows skipping some expensive invalidation and
> serialization and should result in significant performance
> improvements when changing block size.
>
> The ARM manual section D5.10.1 specifically states under heading
> "Support levels for changing block size" that FEAT_BBM Level 2 support
> means changing block size does not break coherency, ordering
> guarantees, or uniprocessor semantics.

I'd like to have that sort of reference in the code itself (spelling
out the revision of the ARM ARM this is taken from, as this section
is in D8.14.2 in DDI0487J.a). I'd also like it to point out that this
only applies when the *output addresses* are the same.

> Because a compare-and-exchange operation was used in the break phase
> to serialize access to the PTE, an analogous compare-and-exchange is
> introduced in the make phase to ensure serialization remains even if
> the break phase is skipped, and proper handling is introduced to
> account for this function now having a way to fail.
>
> Considering the possibility that the new pte has different permissions
> than the old pte, the minimum necessary TLB invalidations are used.
>
> Signed-off-by: Colton Lewis <coltonlewis@google.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 58 +++++++++++++++++++++++++++++++-----
>  1 file changed, 51 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 8acab89080af9..6778e3df697f7 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -643,6 +643,11 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
>  	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
>  }
>
> +static bool stage2_has_bbm_level2(void)
> +{
> +	return cpus_have_const_cap(ARM64_HAS_STAGE2_BBM2);

By the time we look at unmapping things from S2, the capabilities
should be finalised, so this should read cpus_have_final_cap()
instead.

> +}
> +
>  #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
>
>  static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
> @@ -730,7 +735,7 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
>   * @ctx: context of the visited pte.
>   * @mmu: stage-2 mmu
>   *
> - * Returns: true if the pte was successfully broken.
> + * Returns: true if the pte was successfully broken or there is no need.

No need of what? Why? The rationale should be captured in the comments
below.

>   *
>   * If the removed pte was valid, performs the necessary serialization and TLB
>   * invalidation for the old value. For counted ptes, drops the reference count
> @@ -750,6 +755,10 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
>  		return false;
>  	}
>
> +	/* There is no need to break the pte. */
> +	if (stage2_has_bbm_level2())
> +		return true;
> +
>  	if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
>  		return false;
>
> @@ -771,16 +780,45 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
>  	return true;
>  }
>
> -static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
> +static bool stage2_pte_perms_equal(kvm_pte_t p1, kvm_pte_t p2)
> +{
> +	u64 perms1 = p1 & KVM_PGTABLE_PROT_RWX;
> +	u64 perms2 = p2 & KVM_PGTABLE_PROT_RWX;

Huh? The KVM_PGTABLE_PROT_* constants are part of an *enum*, and do
*not* represent the bit layout of the PTE.
How did you test this code?

> +
> +	return perms1 == perms2;
> +}
> +
> +/**
> + * stage2_try_make_pte() - Attempts to install a new pte.
> + *
> + * @ctx: context of the visited pte.
> + * @new: new pte to install
> + *
> + * Returns: true if the pte was successfully installed
> + *
> + * If the old pte had different permissions, perform appropriate TLB
> + * invalidation for the old value. For counted ptes, drops the
> + * reference count on the containing table page.
> + */
> +static bool stage2_try_make_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, kvm_pte_t new)
>  {
>  	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
>
> -	WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> +	if (!stage2_has_bbm_level2())
> +		WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> +
> +	if (!stage2_try_set_pte(ctx, new))
> +		return false;
> +
> +	if (kvm_pte_table(ctx->old, ctx->level))
> +		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> +	else if (kvm_pte_valid(ctx->old) && !stage2_pte_perms_equal(ctx->old, new))
> +		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, mmu, ctx->addr, ctx->level);

Why a non-shareable invalidation? Nothing in this code captures the
rationale for it. What if the permission change was a *restriction* of
the permission? It should absolutely be global, and not local.

>
>  	if (stage2_pte_is_counted(new))
>  		mm_ops->get_page(ctx->ptep);
>
> -	smp_store_release(ctx->ptep, new);
> +	return true;
>  }
>
>  static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
> @@ -879,7 +917,8 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
>  	    stage2_pte_executable(new))
>  		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
>
> -	stage2_make_pte(ctx, new);
> +	if (!stage2_try_make_pte(ctx, data->mmu, new))
> +		return -EAGAIN;

So we don't have forward-progress guarantees anymore? I'm not sure
this is a change I'm overly fond of.

Thanks,

	M.
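[For reference, a minimal sketch of what Marc is asking for: comparing
the permission bits at their actual positions in the stage-2 descriptor
(S2AP at bits 7:6, XN at bit 54), reusing the KVM_PTE_LEAF_ATTR_* masks
that pgtable.c already defines for the descriptor layout. The combined
KVM_PTE_LEAF_ATTR_S2_PERMS mask is a name made up here, not something
the file defines:

/*
 * Sketch only: compare stage-2 permissions using the descriptor bit
 * layout rather than the KVM_PGTABLE_PROT_* software enum, whose
 * values do not correspond to PTE bits.
 */
#define KVM_PTE_LEAF_ATTR_S2_PERMS	(KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
					 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
					 KVM_PTE_LEAF_ATTR_HI_S2_XN)

static bool stage2_pte_perms_equal(kvm_pte_t p1, kvm_pte_t p2)
{
	return (p1 & KVM_PTE_LEAF_ATTR_S2_PERMS) ==
	       (p2 & KVM_PTE_LEAF_ATTR_S2_PERMS);
}
]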
On Sun, Jun 04, 2023 at 09:23:39AM +0100, Marc Zyngier wrote:
> On Fri, 02 Jun 2023 18:01:47 +0100, Colton Lewis <coltonlewis@google.com> wrote:
> > +static bool stage2_try_make_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, kvm_pte_t new)
> >  {
> >  	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> >
> > -	WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> > +	if (!stage2_has_bbm_level2())
> > +		WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> > +
> > +	if (!stage2_try_set_pte(ctx, new))
> > +		return false;
> > +
> > +	if (kvm_pte_table(ctx->old, ctx->level))
> > +		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> > +	else if (kvm_pte_valid(ctx->old) && !stage2_pte_perms_equal(ctx->old, new))
> > +		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, mmu, ctx->addr, ctx->level);
>
> Why a non-shareable invalidation? Nothing in this code captures the
> rationale for it. What if the permission change was a *restriction* of
> the permission? It should absolutely be global, and not local.

IIRC, Colton was testing largely with permission relaxation, and had
forward progress issues because the stale TLB entry was never
invalidated in response to a permission fault.

Nonetheless, I very much agree with your suggestion. Non-Shareable
invalidations should only be applied after exhausting all other
invalidation requirements for a particular manipulation to the
stage-2 tables.

> >
> >  	if (stage2_pte_is_counted(new))
> >  		mm_ops->get_page(ctx->ptep);
> >
> > -	smp_store_release(ctx->ptep, new);
> > +	return true;
> >  }
> >
> >  static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
> > @@ -879,7 +917,8 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
> >  	    stage2_pte_executable(new))
> >  		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
> >
> > -	stage2_make_pte(ctx, new);
> > +	if (!stage2_try_make_pte(ctx, data->mmu, new))
> > +		return -EAGAIN;
>
> So we don't have forward-progress guarantees anymore? I'm not sure
> this is a change I'm overly fond of.

I'll take the blame for the clunky wording here, though I do not
believe there are any real changes to our forward progress guarantees
relative to the existing code.

Previously, we did the CAS on the break side of things to have a fault
handler 'take ownership' of a PTE. The CAS now needs to move onto the
make end when doing a BBM=2 style manipulation.

Would you rather see something explicitly keyed on the BBM capability
here? Then we could use a helper that implies unconditional success
for BBM!=2 systems.

--
Thanks,
Oliver
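[A sketch of the alternative Oliver floats here, keyed explicitly on
the capability so that !BBM2 systems keep the unconditional store (and
its forward-progress semantics) while only BBM2 systems take the CAS
on the make side. The helper name is hypothetical, and the TLB
invalidation and refcounting are elided:

static bool stage2_try_install_pte(const struct kvm_pgtable_visit_ctx *ctx,
				   kvm_pte_t new)
{
	if (!stage2_has_bbm_level2()) {
		/*
		 * !BBM2: the break phase already took ownership of the
		 * PTE via CAS, so installation cannot fail.
		 */
		WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
		smp_store_release(ctx->ptep, new);
		return true;
	}

	/* BBM2: the break phase was skipped, so the CAS happens here. */
	return stage2_try_set_pte(ctx, new);
}
]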
On Mon, Jun 05, 2023 at 02:36:00PM -0700, Oliver Upton wrote:
> On Sun, Jun 04, 2023 at 09:23:39AM +0100, Marc Zyngier wrote:
> > On Fri, 02 Jun 2023 18:01:47 +0100, Colton Lewis <coltonlewis@google.com> wrote:
> > > +static bool stage2_try_make_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, kvm_pte_t new)
> > >  {
> > >  	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> > >
> > > -	WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> > > +	if (!stage2_has_bbm_level2())
> > > +		WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> > > +
> > > +	if (!stage2_try_set_pte(ctx, new))
> > > +		return false;
> > > +
> > > +	if (kvm_pte_table(ctx->old, ctx->level))
> > > +		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> > > +	else if (kvm_pte_valid(ctx->old) && !stage2_pte_perms_equal(ctx->old, new))
> > > +		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, mmu, ctx->addr, ctx->level);
> >
> > Why a non-shareable invalidation? Nothing in this code captures the
> > rationale for it. What if the permission change was a *restriction* of
> > the permission? It should absolutely be global, and not local.
>
> IIRC, Colton was testing largely with permission relaxation, and had
> forward progress issues because the stale TLB entry was never
> invalidated in response to a permission fault.

Would the series at:

https://lore.kernel.org/r/5d8e1f752051173d2d1b5c3e14b54eb3506ed3ef.1684892404.git-series.apopple@nvidia.com

help with that?

Will
Hey Will,

On Thu, Jun 08, 2023 at 06:21:13PM +0100, Will Deacon wrote:
> > IIRC, Colton was testing largely with permission relaxation, and had
> > forward progress issues because the stale TLB entry was never
> > invalidated in response to a permission fault.
>
> Would the series at:
>
> https://lore.kernel.org/r/5d8e1f752051173d2d1b5c3e14b54eb3506ed3ef.1684892404.git-series.apopple@nvidia.com
>
> help with that?

Heh, that's a rather interesting patch :) I don't think it is directly
related to the problem Colton encounters, though the symptoms are
similar.

This crops up when KVM uses a stricter permission set than the primary
MMU, like lazy X for deferred I$ maintenance and write-protection for
dirty logging. KVM policy led to the stale TLB entry, so KVM is the
one that needs to initiate the invalidation.
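[Putting Marc's and Oliver's points together: a permission restriction
must be observed by all CPUs immediately and therefore needs a
broadcast invalidation, while only a pure relaxation could in
principle be handled with a local invalidation, since other CPUs would
at worst take a spurious permission fault and retry. A sketch of that
distinction, where stage2_pte_relaxes_perms() is a hypothetical
helper:

static void stage2_inval_perm_change(struct kvm_s2_mmu *mmu,
				     const struct kvm_pgtable_visit_ctx *ctx,
				     kvm_pte_t new)
{
	if (stage2_pte_relaxes_perms(ctx->old, new))
		/*
		 * Relaxation only: a stale, stricter TLB entry on
		 * another CPU causes a spurious permission fault at
		 * worst, so a local (non-shareable) invalidation can
		 * suffice.
		 */
		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, mmu,
			     ctx->addr, ctx->level);
	else
		/*
		 * Restriction: the tighter permissions must take
		 * effect everywhere, so broadcast the invalidation.
		 */
		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
			     ctx->addr, ctx->level);
}
]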
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 8acab89080af9..6778e3df697f7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -643,6 +643,11 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
 	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
 }
 
+static bool stage2_has_bbm_level2(void)
+{
+	return cpus_have_const_cap(ARM64_HAS_STAGE2_BBM2);
+}
+
 #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
 
 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
@@ -730,7 +735,7 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
  * @ctx: context of the visited pte.
  * @mmu: stage-2 mmu
  *
- * Returns: true if the pte was successfully broken.
+ * Returns: true if the pte was successfully broken or there is no need.
  *
  * If the removed pte was valid, performs the necessary serialization and TLB
  * invalidation for the old value. For counted ptes, drops the reference count
@@ -750,6 +755,10 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 		return false;
 	}
 
+	/* There is no need to break the pte. */
+	if (stage2_has_bbm_level2())
+		return true;
+
 	if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
 		return false;
 
@@ -771,16 +780,45 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	return true;
 }
 
-static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
+static bool stage2_pte_perms_equal(kvm_pte_t p1, kvm_pte_t p2)
+{
+	u64 perms1 = p1 & KVM_PGTABLE_PROT_RWX;
+	u64 perms2 = p2 & KVM_PGTABLE_PROT_RWX;
+
+	return perms1 == perms2;
+}
+
+/**
+ * stage2_try_make_pte() - Attempts to install a new pte.
+ *
+ * @ctx: context of the visited pte.
+ * @new: new pte to install
+ *
+ * Returns: true if the pte was successfully installed
+ *
+ * If the old pte had different permissions, perform appropriate TLB
+ * invalidation for the old value. For counted ptes, drops the
+ * reference count on the containing table page.
+ */
+static bool stage2_try_make_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, kvm_pte_t new)
 {
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 
-	WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
+	if (!stage2_has_bbm_level2())
+		WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
+
+	if (!stage2_try_set_pte(ctx, new))
+		return false;
+
+	if (kvm_pte_table(ctx->old, ctx->level))
+		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+	else if (kvm_pte_valid(ctx->old) && !stage2_pte_perms_equal(ctx->old, new))
+		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, mmu, ctx->addr, ctx->level);
 
 	if (stage2_pte_is_counted(new))
 		mm_ops->get_page(ctx->ptep);
 
-	smp_store_release(ctx->ptep, new);
+	return true;
 }
 
 static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
@@ -879,7 +917,8 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	    stage2_pte_executable(new))
 		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
 
-	stage2_make_pte(ctx, new);
+	if (!stage2_try_make_pte(ctx, data->mmu, new))
+		return -EAGAIN;
 
 	return 0;
 }
@@ -934,7 +973,9 @@ static int stage2_map_walk_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	 * will be mapped lazily.
 	 */
 	new = kvm_init_table_pte(childp, mm_ops);
-	stage2_make_pte(ctx, new);
+
+	if (!stage2_try_make_pte(ctx, data->mmu, new))
+		return -EAGAIN;
 
 	return 0;
 }
@@ -1385,7 +1426,10 @@ static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * writes the PTE using smp_store_release().
 	 */
 	new = kvm_init_table_pte(childp, mm_ops);
-	stage2_make_pte(ctx, new);
+
+	if (!stage2_try_make_pte(ctx, mmu, new))
+		return -EAGAIN;
+
 	dsb(ishst);
 	return 0;
 }
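[The ARM64_HAS_STAGE2_BBM2 capability checked above is introduced by
an earlier patch in this series. For reference, a hypothetical
cpufeature table entry matching ID_AA64MMFR2_EL1.BBM >= 2 might look
like this; field names follow the arm64 cpufeature conventions of this
kernel era, and the actual entry in the series may differ:

/*
 * Hypothetical arm64_features[] entry: set when every CPU reports
 * FEAT_BBM level 2 or better in ID_AA64MMFR2_EL1.BBM.
 */
{
	.desc = "Stage-2 break-before-make level 2",
	.capability = ARM64_HAS_STAGE2_BBM2,
	.type = ARM64_CPUCAP_SYSTEM_FEATURE,
	.matches = has_cpuid_feature,
	.sys_reg = SYS_ID_AA64MMFR2_EL1,
	.sign = FTR_UNSIGNED,
	.field_pos = ID_AA64MMFR2_EL1_BBM_SHIFT,
	.field_width = 4,
	.min_field_value = 2,
},
]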
Skip the break phase of break-before-make when the CPU has FEAT_BBM
level 2. This allows skipping some expensive invalidation and
serialization and should result in significant performance
improvements when changing block size.

The ARM manual section D5.10.1 specifically states under heading
"Support levels for changing block size" that FEAT_BBM Level 2 support
means changing block size does not break coherency, ordering
guarantees, or uniprocessor semantics.

Because a compare-and-exchange operation was used in the break phase
to serialize access to the PTE, an analogous compare-and-exchange is
introduced in the make phase to ensure serialization remains even if
the break phase is skipped, and proper handling is introduced to
account for this function now having a way to fail.

Considering the possibility that the new pte has different permissions
than the old pte, the minimum necessary TLB invalidations are used.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 58 +++++++++++++++++++++++++++++++-----
 1 file changed, 51 insertions(+), 7 deletions(-)

--
2.41.0.rc0.172.g3f132b7071-goog