Message ID | 20240124204909.105952-5-oliver.upton@linux.dev (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: arm64: Improvements to GICv3 LPI injection |
On Wed, 24 Jan 2024 20:48:58 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Start iterating the LPI xarray in anticipation of removing the LPI
> linked-list.
>
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/kvm/vgic/vgic-its.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index f152d670113f..a2d95a279798 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -332,6 +332,7 @@ static int update_lpi_config(struct kvm *kvm, struct vgic_irq *irq,
>  int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
>  {
>  	struct vgic_dist *dist = &kvm->arch.vgic;
> +	XA_STATE(xas, &dist->lpi_xa, 0);

Why 0? LPIs start at 8192 (aka GIC_LPI_OFFSET), so it'd probably make
sense to use that.

>  	struct vgic_irq *irq;
>  	unsigned long flags;
>  	u32 *intids;
> @@ -350,7 +351,9 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
>  		return -ENOMEM;
>
>  	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
> -	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
> +	rcu_read_lock();
> +
> +	xas_for_each(&xas, irq, U32_MAX) {

Similar thing: we advertise 16 bits of ID space (described as
INTERRUPT_ID_BITS_ITS), so capping at that level would make it more
understandable.

>  		if (i == irq_count)
>  			break;
>  		/* We don't need to "get" the IRQ, as we hold the list lock. */
> @@ -358,6 +361,8 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
>  			continue;
>  		intids[i++] = irq->intid;
>  	}
> +
> +	rcu_read_unlock();
>  	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
>
>  	*intid_ptr = intids;

Thanks,

	M.
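For reference, the two bounds Marc points at correspond to existing kernel constants; the values below are the ones quoted in this thread (8192, and 16 bits of ID space), shown as a sketch rather than a copy of the headers that define them.

	/*
	 * Reference sketch of the bounds discussed above (values as quoted
	 * in the review; the defining headers are not reproduced here).
	 */
	#define GIC_LPI_OFFSET		8192	/* first valid LPI INTID */
	#define INTERRUPT_ID_BITS_ITS	16	/* advertised ITS ID space, in bits */
	/* => valid LPI INTIDs fall in [8192, (1 << 16) - 1], i.e. [8192, 65535] */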
On Thu, Jan 25, 2024 at 09:15:30AM +0000, Marc Zyngier wrote:
> On Wed, 24 Jan 2024 20:48:58 +0000,
> Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Start iterating the LPI xarray in anticipation of removing the LPI
> > linked-list.
> >
> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> >  arch/arm64/kvm/vgic/vgic-its.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> > index f152d670113f..a2d95a279798 100644
> > --- a/arch/arm64/kvm/vgic/vgic-its.c
> > +++ b/arch/arm64/kvm/vgic/vgic-its.c
> > @@ -332,6 +332,7 @@ static int update_lpi_config(struct kvm *kvm, struct vgic_irq *irq,
> >  int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
> >  {
> >  	struct vgic_dist *dist = &kvm->arch.vgic;
> > +	XA_STATE(xas, &dist->lpi_xa, 0);
>
> Why 0? LPIs start at 8192 (aka GIC_LPI_OFFSET), so it'd probably make
> sense to use that.

Just being lazy!

> >  	struct vgic_irq *irq;
> >  	unsigned long flags;
> >  	u32 *intids;
> > @@ -350,7 +351,9 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
> >  		return -ENOMEM;
> >
> >  	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
> > -	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
> > +	rcu_read_lock();
> > +
> > +	xas_for_each(&xas, irq, U32_MAX) {
>
> Similar thing: we advertise 16 bits of ID space (described as
> INTERRUPT_ID_BITS_ITS), so capping at that level would make it more
> understandable.

See above. But completely agree, this is much more readable when it
matches the actual ID space.
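To make the two suggestions concrete, here is a rough sketch (not the merged patch) of how the walk could look with both changes applied; the per-IRQ filtering in the middle of the loop is elided since it is unchanged.

	/*
	 * Sketch only: start the walk at the first valid LPI and cap it at
	 * the advertised ITS ID space instead of U32_MAX.
	 */
	XA_STATE(xas, &dist->lpi_xa, GIC_LPI_OFFSET);

	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
	rcu_read_lock();

	xas_for_each(&xas, irq, (1U << INTERRUPT_ID_BITS_ITS) - 1) {
		if (i == irq_count)
			break;
		/* ... existing per-IRQ filtering unchanged ... */
		intids[i++] = irq->intid;
	}

	rcu_read_unlock();
	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);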
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index f152d670113f..a2d95a279798 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -332,6 +332,7 @@ static int update_lpi_config(struct kvm *kvm, struct vgic_irq *irq,
 int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
+	XA_STATE(xas, &dist->lpi_xa, 0);
 	struct vgic_irq *irq;
 	unsigned long flags;
 	u32 *intids;
@@ -350,7 +351,9 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
 		return -ENOMEM;
 
 	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
-	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
+	rcu_read_lock();
+
+	xas_for_each(&xas, irq, U32_MAX) {
 		if (i == irq_count)
 			break;
 		/* We don't need to "get" the IRQ, as we hold the list lock. */
@@ -358,6 +361,8 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr)
 			continue;
 		intids[i++] = irq->intid;
 	}
+
+	rcu_read_unlock();
 	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
 
 	*intid_ptr = intids;
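For context, a hedged sketch of how a caller might consume vgic_copy_lpi_list(): the function fills in a heap-allocated snapshot of LPI INTIDs (or fails with -ENOMEM), and the caller is expected to free the array when done. The loop body below is illustrative only, not taken from the patch.

	u32 *intids;
	int nr_lpis, i;

	/* Snapshot the current set of LPI INTIDs for this vcpu (or all, if vcpu is NULL). */
	nr_lpis = vgic_copy_lpi_list(kvm, vcpu, &intids);
	if (nr_lpis < 0)
		return nr_lpis;

	for (i = 0; i < nr_lpis; i++) {
		/* ... act on intids[i] ... */
	}

	/* The snapshot is allocated by vgic_copy_lpi_list(); the caller frees it. */
	kfree(intids);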
Start iterating the LPI xarray in anticipation of removing the LPI
linked-list.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/kvm/vgic/vgic-its.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)