Message ID: 20190725153543.24386-1-maz@kernel.org
Series: KVM: arm/arm64: vgic: ITS translation cache
On Thu, 25 Jul 2019 16:35:33 +0100 Marc Zyngier <maz@kernel.org> wrote:

Hi,

> Andre did run some benchmarks of his own, with some rather positive
> results[2]. So I'm putting this out for people with real workloads to
> try out and report what they see.

And I gave this series a try, on top of 5.3-rc2, with Robin's patch [3] to
fix a VFIO IRQ breakage. The results were very similar, though at least one
performance number was slightly worse than with this series on top of v5.2.
Nevertheless, there is still the big improvement compared to the baseline
without this series, so (for the whole series):

Tested-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.

[3] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/669468.html
On Thu, 25 Jul 2019 16:35:33 +0100 Marc Zyngier <maz@kernel.org> wrote:

> This is what this small series proposes. It implements a very basic
> LRU cache of pre-translated LPIs, which gets used to implement
> kvm_arch_set_irq_inatomic.

FWIW, I've now queued this for 5.4, with Eric's RBs and Andre's TBs.

Thanks,

M.
From: Marc Zyngier <marc.zyngier@arm.com>

It recently became apparent[1] that our LPI injection path is not as
efficient as it could be when injecting interrupts coming from a VFIO
assigned device.

Although the proposed patch wasn't 100% correct, it outlined at least
two issues:

(1) Injecting an LPI from VFIO always results in a context switch to a
    worker thread: no good

(2) We have no way of amortising the cost of translating a DID+EID pair
    to an LPI number

The reason for (1) is that we may sleep when translating an LPI, so we
do need a process context. A way to fix that is to implement a small
LPI translation cache that could be looked up from an atomic
context. It would also solve (2).

This is what this small series proposes. It implements a very basic
LRU cache of pre-translated LPIs, which gets used to implement
kvm_arch_set_irq_inatomic. The size of the cache is currently
hard-coded at 16 times the number of vcpus, a number I have picked
under the influence of Ali Saidi. If that's not enough for you, blame
me, though.

Does it work? Well, it doesn't crash, and is thus perfect. More
seriously, I don't really have a way to benchmark it directly, so my
observations are only indirect:

On a TX2 system, I run a 4 vcpu VM with an Ethernet interface passed
to it directly. From the host, I inject interrupts using debugfs. In
parallel, I look at the number of context switches and the number of
interrupts on the host. Without this series, I get the same number for
both IRQs and context switches (about half a million of each per second
is pretty easy to reach). With this series, the number of context
switches drops to something pretty small (in the low 2k), while the
number of interrupts stays the same.

Yes, this is a pretty rubbish benchmark, what did you expect? ;-)

Andre did run some benchmarks of his own, with some rather positive
results[2]. So I'm putting this out for people with real workloads to
try out and report what they see.
[1] https://lore.kernel.org/lkml/1552833373-19828-1-git-send-email-yuzenghui@huawei.com/
[2] https://www.spinics.net/lists/arm-kernel/msg742655.html

* From v2:

  - Added invalidation on turning the ITS off
  - Added invalidation on MAPC with V=0
  - Added Rb's from Eric

* From v1:

  - Fixed race on allocation, where the same LPI could be cached multiple times
  - Now invalidate the cache on vgic teardown, avoiding memory leaks
  - Change patch split slightly, general reshuffling
  - Small cleanups here and there
  - Rebased on 5.2-rc4

Marc Zyngier (10):
  KVM: arm/arm64: vgic: Add LPI translation cache definition
  KVM: arm/arm64: vgic: Add __vgic_put_lpi_locked primitive
  KVM: arm/arm64: vgic-its: Add MSI-LPI translation cache invalidation
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on specific commands
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on disabling LPIs
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on ITS disable
  KVM: arm/arm64: vgic-its: Invalidate MSI-LPI translation cache on vgic teardown
  KVM: arm/arm64: vgic-its: Cache successful MSI->LPI translation
  KVM: arm/arm64: vgic-its: Check the LPI translation cache on MSI injection
  KVM: arm/arm64: vgic-irqfd: Implement kvm_arch_set_irq_inatomic

 include/kvm/arm_vgic.h           |   3 +
 virt/kvm/arm/vgic/vgic-init.c    |   5 +
 virt/kvm/arm/vgic/vgic-irqfd.c   |  36 +++++-
 virt/kvm/arm/vgic/vgic-its.c     | 207 +++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic-mmio-v3.c |   4 +-
 virt/kvm/arm/vgic/vgic.c         |  26 ++--
 virt/kvm/arm/vgic/vgic.h         |   5 +
 7 files changed, 270 insertions(+), 16 deletions(-)