Message ID | 20240124204909.105952-12-oliver.upton@linux.dev (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: arm64: Improvements to GICv3 LPI injection |
On Wed, 24 Jan 2024 20:49:05 +0000, Oliver Upton <oliver.upton@linux.dev> wrote:
> 
> Reusing translation cache entries within a read-side critical section is
> fundamentally incompatible with an rculist. As such, we need to allocate
> a new entry to replace an eviction and free the removed entry
> afterwards.
> 
> Take this as an opportunity to remove the eager allocation of
> translation cache entries altogether in favor of a lazy allocation model
> on cache miss.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/kvm/vgic/vgic-init.c |  3 --
>  arch/arm64/kvm/vgic/vgic-its.c  | 86 ++++++++++++++-------------------
>  include/kvm/arm_vgic.h          |  1 +
>  3 files changed, 38 insertions(+), 52 deletions(-)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
> index e25672d6e846..660d5ce3b610 100644
> --- a/arch/arm64/kvm/vgic/vgic-init.c
> +++ b/arch/arm64/kvm/vgic/vgic-init.c
> @@ -305,9 +305,6 @@ int vgic_init(struct kvm *kvm)
>  		}
>  	}
>  
> -	if (vgic_has_its(kvm))
> -		vgic_lpi_translation_cache_init(kvm);
> -
>  	/*
>  	 * If we have GICv4.1 enabled, unconditionnaly request enable the
>  	 * v4 support so that we get HW-accelerated vSGIs. Otherwise, only
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index 8c026a530018..aec82d9a1b3c 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -608,12 +608,20 @@ static struct vgic_irq *vgic_its_check_cache(struct kvm *kvm, phys_addr_t db,
>  	return irq;
>  }
>  
> +/* Default is 16 cached LPIs per vcpu */
> +#define LPI_DEFAULT_PCPU_CACHE_SIZE	16
> +
> +static unsigned int vgic_its_max_cache_size(struct kvm *kvm)
> +{
> +	return atomic_read(&kvm->online_vcpus) * LPI_DEFAULT_PCPU_CACHE_SIZE;
> +}
> +
>  static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
>  				       u32 devid, u32 eventid,
>  				       struct vgic_irq *irq)
>  {
> +	struct vgic_translation_cache_entry *new, *victim;
>  	struct vgic_dist *dist = &kvm->arch.vgic;
> -	struct vgic_translation_cache_entry *cte;
>  	unsigned long flags;
>  	phys_addr_t db;
>  
> @@ -621,10 +629,11 @@ static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
>  	if (irq->hw)
>  		return;
>  
> -	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
> +	new = victim = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT);
> +	if (!new)
> +		return;
>  
> -	if (unlikely(list_empty(&dist->lpi_translation_cache)))
> -		goto out;
> +	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
>  
>  	/*
>  	 * We could have raced with another CPU caching the same
> @@ -635,17 +644,15 @@ static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
>  	if (__vgic_its_check_cache(dist, db, devid, eventid))
>  		goto out;
>  
> -	/* Always reuse the last entry (LRU policy) */
> -	cte = list_last_entry(&dist->lpi_translation_cache,
> -			      typeof(*cte), entry);
> -
> -	/*
> -	 * Caching the translation implies having an extra reference
> -	 * to the interrupt, so drop the potential reference on what
> -	 * was in the cache, and increment it on the new interrupt.
> -	 */
> -	if (cte->irq)
> -		vgic_put_irq(kvm, cte->irq);
> +	if (dist->lpi_cache_count >= vgic_its_max_cache_size(kvm)) {
> +		/* Always reuse the last entry (LRU policy) */
> +		victim = list_last_entry(&dist->lpi_translation_cache,
> +					 typeof(*cte), entry);
> +		list_del(&victim->entry);
> +		dist->lpi_cache_count--;
> +	} else {
> +		victim = NULL;
> +	}
>  
>  	/*
>  	 * The irq refcount is guaranteed to be nonzero while holding the
> @@ -654,16 +661,26 @@ static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
>  	lockdep_assert_held(&its->its_lock);
>  	vgic_get_irq_kref(irq);
>  
> -	cte->db = db;
> -	cte->devid = devid;
> -	cte->eventid = eventid;
> -	cte->irq = irq;
> +	new->db = db;
> +	new->devid = devid;
> +	new->eventid = eventid;
> +	new->irq = irq;
>  
>  	/* Move the new translation to the head of the list */
> -	list_move(&cte->entry, &dist->lpi_translation_cache);
> +	list_add(&new->entry, &dist->lpi_translation_cache);
>  
>  out:
>  	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
> +
> +	/*
> +	 * Caching the translation implies having an extra reference
> +	 * to the interrupt, so drop the potential reference on what
> +	 * was in the cache, and increment it on the new interrupt.
> +	 */
> +	if (victim && victim->irq)
> +		vgic_put_irq(kvm, victim->irq);

The games you play with 'victim' are a bit odd. I'd rather have it
initialised to NULL, and be trusted to have a valid irq if non-NULL.

Is there something special I'm missing?

Thanks,

	M.
On Thu, Jan 25, 2024 at 10:19:46AM +0000, Marc Zyngier wrote:
> On Wed, 24 Jan 2024 20:49:05 +0000, Oliver Upton <oliver.upton@linux.dev> wrote:
> > +
> > +	/*
> > +	 * Caching the translation implies having an extra reference
> > +	 * to the interrupt, so drop the potential reference on what
> > +	 * was in the cache, and increment it on the new interrupt.
> > +	 */
> > +	if (victim && victim->irq)
> > +		vgic_put_irq(kvm, victim->irq);
> 
> The games you play with 'victim' are a bit odd. I'd rather have it
> initialised to NULL, and be trusted to have a valid irq if non-NULL.
> 
> Is there something special I'm missing?

I pulled some shenanigans to use the same cleanup path to free the new
cache entry in the case of a race. At that point the new cache entry is
initialized to 0 and doesn't have a valid pointer to an irq.

I thought this was a fun trick, but in retrospect it just makes it hard
to follow. I'll just explicitly free the new entry in the case of a
detected race and do away with the weirdness.
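A minimal sketch of the rework described above, in which 'victim' starts out NULL and a detected race explicitly frees the new entry. It assumes the helpers and fields this patch introduces (__vgic_its_check_cache(), vgic_its_max_cache_size(), lpi_cache_count); the doorbell computation and the cache-count increment are not visible in the quoted hunks and are filled in by assumption, so treat it as an illustration rather than the posted follow-up:

```c
static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
				       u32 devid, u32 eventid,
				       struct vgic_irq *irq)
{
	struct vgic_translation_cache_entry *new, *victim = NULL;
	struct vgic_dist *dist = &kvm->arch.vgic;
	unsigned long flags;
	phys_addr_t db;

	/* Do not cache a directly injected interrupt */
	if (irq->hw)
		return;

	new = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT);
	if (!new)
		return;

	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);

	/* Assumed from context; not shown in the quoted hunks. */
	db = its->vgic_its_base + GITS_TRANSLATER;
	if (__vgic_its_check_cache(dist, db, devid, eventid)) {
		/* Lost the race: drop the unused allocation and bail out. */
		raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
		kfree(new);
		return;
	}

	if (dist->lpi_cache_count >= vgic_its_max_cache_size(kvm)) {
		/* Evict the LRU entry; its reference is dropped after unlock. */
		victim = list_last_entry(&dist->lpi_translation_cache,
					 typeof(*victim), entry);
		list_del(&victim->entry);
		dist->lpi_cache_count--;
	}

	/*
	 * The irq refcount is guaranteed to be nonzero while holding the
	 * its_lock, as the ITE (and the reference it holds) cannot be freed.
	 */
	lockdep_assert_held(&its->its_lock);
	vgic_get_irq_kref(irq);

	new->db = db;
	new->devid = devid;
	new->eventid = eventid;
	new->irq = irq;
	list_add(&new->entry, &dist->lpi_translation_cache);
	dist->lpi_cache_count++;	/* assumed; counterpart of the decrement above */

	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);

	/* An evicted entry held a reference on its irq; release it now. */
	if (victim) {
		vgic_put_irq(kvm, victim->irq);
		kfree(victim);
	}
}
```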
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index e25672d6e846..660d5ce3b610 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -305,9 +305,6 @@ int vgic_init(struct kvm *kvm)
 		}
 	}
 
-	if (vgic_has_its(kvm))
-		vgic_lpi_translation_cache_init(kvm);
-
 	/*
 	 * If we have GICv4.1 enabled, unconditionnaly request enable the
 	 * v4 support so that we get HW-accelerated vSGIs. Otherwise, only
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index 8c026a530018..aec82d9a1b3c 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -608,12 +608,20 @@ static struct vgic_irq *vgic_its_check_cache(struct kvm *kvm, phys_addr_t db,
 	return irq;
 }
 
+/* Default is 16 cached LPIs per vcpu */
+#define LPI_DEFAULT_PCPU_CACHE_SIZE	16
+
+static unsigned int vgic_its_max_cache_size(struct kvm *kvm)
+{
+	return atomic_read(&kvm->online_vcpus) * LPI_DEFAULT_PCPU_CACHE_SIZE;
+}
+
 static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
 				       u32 devid, u32 eventid,
 				       struct vgic_irq *irq)
 {
+	struct vgic_translation_cache_entry *new, *victim;
 	struct vgic_dist *dist = &kvm->arch.vgic;
-	struct vgic_translation_cache_entry *cte;
 	unsigned long flags;
 	phys_addr_t db;
 
@@ -621,10 +629,11 @@ static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
 	if (irq->hw)
 		return;
 
-	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
+	new = victim = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT);
+	if (!new)
+		return;
 
-	if (unlikely(list_empty(&dist->lpi_translation_cache)))
-		goto out;
+	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
 
 	/*
 	 * We could have raced with another CPU caching the same
@@ -635,17 +644,15 @@ static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
 	if (__vgic_its_check_cache(dist, db, devid, eventid))
 		goto out;
 
-	/* Always reuse the last entry (LRU policy) */
-	cte = list_last_entry(&dist->lpi_translation_cache,
-			      typeof(*cte), entry);
-
-	/*
-	 * Caching the translation implies having an extra reference
-	 * to the interrupt, so drop the potential reference on what
-	 * was in the cache, and increment it on the new interrupt.
-	 */
-	if (cte->irq)
-		vgic_put_irq(kvm, cte->irq);
+	if (dist->lpi_cache_count >= vgic_its_max_cache_size(kvm)) {
+		/* Always reuse the last entry (LRU policy) */
+		victim = list_last_entry(&dist->lpi_translation_cache,
+					 typeof(*cte), entry);
+		list_del(&victim->entry);
+		dist->lpi_cache_count--;
+	} else {
+		victim = NULL;
+	}
 
 	/*
 	 * The irq refcount is guaranteed to be nonzero while holding the
@@ -654,16 +661,26 @@ static void vgic_its_cache_translation(struct kvm *kvm, struct vgic_its *its,
 	lockdep_assert_held(&its->its_lock);
 	vgic_get_irq_kref(irq);
 
-	cte->db = db;
-	cte->devid = devid;
-	cte->eventid = eventid;
-	cte->irq = irq;
+	new->db = db;
+	new->devid = devid;
+	new->eventid = eventid;
+	new->irq = irq;
 
 	/* Move the new translation to the head of the list */
-	list_move(&cte->entry, &dist->lpi_translation_cache);
+	list_add(&new->entry, &dist->lpi_translation_cache);
 
 out:
 	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
+
+	/*
+	 * Caching the translation implies having an extra reference
+	 * to the interrupt, so drop the potential reference on what
+	 * was in the cache, and increment it on the new interrupt.
+	 */
+	if (victim && victim->irq)
+		vgic_put_irq(kvm, victim->irq);
+
+	kfree(victim);
 }
 
 void vgic_its_invalidate_cache(struct kvm *kvm)
@@ -1905,33 +1922,6 @@ static int vgic_register_its_iodev(struct kvm *kvm, struct vgic_its *its,
 	return ret;
 }
 
-/* Default is 16 cached LPIs per vcpu */
-#define LPI_DEFAULT_PCPU_CACHE_SIZE	16
-
-void vgic_lpi_translation_cache_init(struct kvm *kvm)
-{
-	struct vgic_dist *dist = &kvm->arch.vgic;
-	unsigned int sz;
-	int i;
-
-	if (!list_empty(&dist->lpi_translation_cache))
-		return;
-
-	sz = atomic_read(&kvm->online_vcpus) * LPI_DEFAULT_PCPU_CACHE_SIZE;
-
-	for (i = 0; i < sz; i++) {
-		struct vgic_translation_cache_entry *cte;
-
-		/* An allocation failure is not fatal */
-		cte = kzalloc(sizeof(*cte), GFP_KERNEL_ACCOUNT);
-		if (WARN_ON(!cte))
-			break;
-
-		INIT_LIST_HEAD(&cte->entry);
-		list_add(&cte->entry, &dist->lpi_translation_cache);
-	}
-}
-
 void vgic_lpi_translation_cache_destroy(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
@@ -1978,8 +1968,6 @@ static int vgic_its_create(struct kvm_device *dev, u32 type)
 			kfree(its);
 			return ret;
 		}
-
-		vgic_lpi_translation_cache_init(dev->kvm);
 	}
 
 	mutex_init(&its->its_lock);
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index a6f6c1583662..70490a2a300d 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -282,6 +282,7 @@ struct vgic_dist {
 
 	/* LPI translation cache */
 	struct list_head	lpi_translation_cache;
+	unsigned int		lpi_cache_count;
 
 	/* used by vgic-debug */
 	struct vgic_state_iter *iter;
Reusing translation cache entries within a read-side critical section is
fundamentally incompatible with an rculist. As such, we need to allocate
a new entry to replace an eviction and free the removed entry
afterwards.

Take this as an opportunity to remove the eager allocation of
translation cache entries altogether in favor of a lazy allocation model
on cache miss.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/kvm/vgic/vgic-init.c |  3 --
 arch/arm64/kvm/vgic/vgic-its.c  | 86 ++++++++++++++-------------------
 include/kvm/arm_vgic.h          |  1 +
 3 files changed, 38 insertions(+), 52 deletions(-)
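To make the first paragraph concrete: once readers traverse the list under RCU rather than the writer's lock, a cached entry can never be rewritten in place, because a reader may be looking at it at that very moment. The writer must instead unlink the old node, publish a freshly allocated one, and free the old node only after a grace period. A generic sketch of that pattern (names are illustrative and not code from this series):

```c
#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical cache node used for illustration only. */
struct xlat_entry {
	struct list_head entry;
	u32 devid, eventid;
	struct rcu_head rcu;
};

/* Reader: lockless traversal; entries it observes must never change under it. */
static bool xlat_cached(struct list_head *cache, u32 devid, u32 eventid)
{
	struct xlat_entry *e;
	bool hit = false;

	rcu_read_lock();
	list_for_each_entry_rcu(e, cache, entry) {
		if (e->devid == devid && e->eventid == eventid) {
			hit = true;
			break;
		}
	}
	rcu_read_unlock();
	return hit;
}

/* Writer: called with the cache's spinlock held to serialize against other writers. */
static void xlat_replace(struct list_head *cache, struct xlat_entry *old,
			 struct xlat_entry *new)
{
	list_del_rcu(&old->entry);	/* old stays valid for readers already on it */
	list_add_rcu(&new->entry, cache);
	kfree_rcu(old, rcu);		/* freed only after a grace period */
}
```

Allocating cache entries lazily, as this patch does, is what makes the "allocate new, evict old, free after the fact" update possible in the first place: there is no longer a fixed pool of entries to be recycled in place.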