From patchwork Sun Mar 17 14:36:13 2019
X-Patchwork-Submitter: Zenghui Yu
X-Patchwork-Id: 10856373
From: Zenghui Yu
Subject: [RFC PATCH] KVM: arm/arm64: Enable direct irqfd MSI injection
Date: Sun, 17 Mar 2019 14:36:13 +0000
Message-ID: <1552833373-19828-1-git-send-email-yuzenghui@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Cc: julien.thierry@arm.com, rkrcmar@redhat.com, kvm@vger.kernel.org,
    mst@redhat.com, suzuki.poulose@arm.com, linux-kernel@vger.kernel.org,
    james.morse@arm.com, wanghaibin.wang@huawei.com, Zenghui Yu,
    pbonzini@redhat.com, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org

Currently, irqfd on arm/arm64 still uses the deferred workqueue mechanism
to inject interrupts into the guest, which is likely to cause busy context
switches to and from the kworker thread. This overhead serves no real
purpose (at least in my view) and degrades interrupt performance.

Implement kvm_arch_set_irq_inatomic() for arm/arm64 to support direct
irqfd MSI injection, so that we can get rid of this extra latency. As a
result, irqfd MSI intensive scenarios (e.g., DPDK with high packet
processing workloads) will benefit from it.

Signed-off-by: Zenghui Yu
---
It seems that only MSI follows the irqfd path; did I miss something?

This patch is still under test and is sent out for early feedback. If I
have misunderstood anything, please correct me and let me know. Thanks!

---
 virt/kvm/arm/vgic/trace.h      | 22 ++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic-irqfd.c | 21 +++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/virt/kvm/arm/vgic/trace.h b/virt/kvm/arm/vgic/trace.h
index 55fed77..bc1f4db 100644
--- a/virt/kvm/arm/vgic/trace.h
+++ b/virt/kvm/arm/vgic/trace.h
@@ -27,6 +27,28 @@
 		  __entry->vcpu_id, __entry->irq, __entry->level)
 );
 
+TRACE_EVENT(kvm_arch_set_irq_inatomic,
+	TP_PROTO(u32 gsi, u32 type, int level, int irq_source_id),
+	TP_ARGS(gsi, type, level, irq_source_id),
+
+	TP_STRUCT__entry(
+		__field(	u32,	gsi		)
+		__field(	u32,	type		)
+		__field(	int,	level		)
+		__field(	int,	irq_source_id	)
+	),
+
+	TP_fast_assign(
+		__entry->gsi		= gsi;
+		__entry->type		= type;
+		__entry->level		= level;
+		__entry->irq_source_id	= irq_source_id;
+	),
+
+	TP_printk("gsi %u type %u level %d source %d", __entry->gsi,
+		  __entry->type, __entry->level, __entry->irq_source_id)
+);
+
 #endif /* _TRACE_VGIC_H */
 
 #undef TRACE_INCLUDE_PATH
diff --git a/virt/kvm/arm/vgic/vgic-irqfd.c b/virt/kvm/arm/vgic/vgic-irqfd.c
index 99e026d..4cfc3f4 100644
--- a/virt/kvm/arm/vgic/vgic-irqfd.c
+++ b/virt/kvm/arm/vgic/vgic-irqfd.c
@@ -19,6 +19,7 @@
 #include <trace/events/kvm.h>
 #include <kvm/arm_vgic.h>
 #include "vgic.h"
+#include "trace.h"
 
 /**
  * vgic_irqfd_set_irq: inject the IRQ corresponding to the
@@ -105,6 +106,26 @@ int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
 	return vgic_its_inject_msi(kvm, &msi);
 }
 
+/**
+ * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
+ *
+ * Currently only direct MSI injection is supported.
+ */
+int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
+			      struct kvm *kvm, int irq_source_id, int level,
+			      bool line_status)
+{
+	int ret;
+
+	trace_kvm_arch_set_irq_inatomic(e->gsi, e->type, level, irq_source_id);
+
+	if (unlikely(e->type != KVM_IRQ_ROUTING_MSI))
+		return -EWOULDBLOCK;
+
+	ret = kvm_set_msi(e, kvm, irq_source_id, level, line_status);
+	return ret;
+}
+
 int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)
 {
 	struct kvm_irq_routing_entry *entries;
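
[Editor's note, not part of the patch: the generic irqfd wakeup handler in
virt/kvm/eventfd.c already tries the atomic fast path first and only falls
back to the injection workqueue when the architecture callback returns
-EWOULDBLOCK, which is why implementing kvm_arch_set_irq_inatomic() is
enough to skip the kworker round trip. A simplified sketch of that caller,
paraphrased from memory (locking via irq_srcu and the irq_entry seqcount,
plus EPOLLHUP handling, are omitted and details may differ):

	static int irqfd_wakeup(wait_queue_entry_t *wait, unsigned mode,
				int sync, void *key)
	{
		struct kvm_kernel_irqfd *irqfd =
			container_of(wait, struct kvm_kernel_irqfd, wait);
		struct kvm_kernel_irq_routing_entry irq = irqfd->irq_entry;
		struct kvm *kvm = irqfd->kvm;

		if (key_to_poll(key) & EPOLLIN) {
			/* Try to inject directly from atomic context ... */
			if (kvm_arch_set_irq_inatomic(&irq, kvm,
						      KVM_USERSPACE_IRQ_SOURCE_ID,
						      1, false) == -EWOULDBLOCK)
				/* ... and defer to the kworker only on failure. */
				schedule_work(&irqfd->inject);
		}

		return 0;
	}
]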