From patchwork Sat Nov 28 14:18:57 2020
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 11937487
From: Shenming Lu <lushenming@huawei.com>
To: Marc Zyngier, Thomas Gleixner, Jason Cooper, James Morse,
 Julien Thierry, Suzuki K Poulose, Catalin Marinas, Will Deacon,
 Eric Auger, Christoffer Dall
Subject: [PATCH v2 2/2] KVM: arm64: Delay the execution of the polling on
 the GICR_VPENDBASER.Dirty bit
Date: Sat, 28 Nov 2020 22:18:57 +0800
Message-ID: <20201128141857.983-3-lushenming@huawei.com>
In-Reply-To: <20201128141857.983-1-lushenming@huawei.com>
References: <20201128141857.983-1-lushenming@huawei.com>
Cc: yuzenghui@huawei.com, wanghaibin.wang@huawei.com, lushenming@huawei.com

To further reduce the impact of waiting for the VPT analysis, delay the
polling of the GICR_VPENDBASER.Dirty bit until kvm_vgic_flush_hwstate()
(i.e. once the vPE has been made resident), so that the GIC and the CPU
can work in parallel on the entry path.

Signed-off-by: Shenming Lu <lushenming@huawei.com>
---
 arch/arm64/kvm/vgic/vgic-v4.c      | 16 ++++++++++++++++
 arch/arm64/kvm/vgic/vgic.c         |  3 +++
 drivers/irqchip/irq-gic-v3-its.c   | 16 ++++++++++++----
 drivers/irqchip/irq-gic-v4.c       | 11 +++++++++++
 include/kvm/arm_vgic.h             |  3 +++
 include/linux/irqchip/arm-gic-v4.h |  4 ++++
 6 files changed, 49 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index b5fa73c9fd35..b0da74809187 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -353,6 +353,22 @@ int vgic_v4_load(struct kvm_vcpu *vcpu)
 	return err;
 }
 
+void vgic_v4_commit(struct kvm_vcpu *vcpu)
+{
+	struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
+
+	/*
+	 * No need to wait for the vPE to be ready across a shallow guest
+	 * exit, as only a vcpu_put will invalidate it.
+	 */
+	if (vpe->vpe_ready)
+		return;
+
+	its_commit_vpe(vpe);
+
+	vpe->vpe_ready = true;
+}
+
 static struct vgic_its *vgic_get_its(struct kvm *kvm,
 				     struct kvm_kernel_irq_routing_entry *irq_entry)
 {
diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index c3643b7f101b..1c597c9885fa 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -915,6 +915,9 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 
 	if (can_access_vgic_from_kernel())
 		vgic_restore_state(vcpu);
+
+	if (vgic_supports_direct_msis(vcpu->kvm))
+		vgic_v4_commit(vcpu);
 }
 
 void kvm_vgic_load(struct kvm_vcpu *vcpu)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 22f427135c6b..f30aba14933e 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3842,8 +3842,6 @@ static void its_vpe_schedule(struct its_vpe *vpe)
 	val |= vpe->idai ? GICR_VPENDBASER_IDAI : 0;
 	val |= GICR_VPENDBASER_Valid;
 	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
-
-	its_wait_vpt_parse_complete();
 }
 
 static void its_vpe_deschedule(struct its_vpe *vpe)
@@ -3855,6 +3853,8 @@ static void its_vpe_deschedule(struct its_vpe *vpe)
 
 	vpe->idai = !!(val & GICR_VPENDBASER_IDAI);
 	vpe->pending_last = !!(val & GICR_VPENDBASER_PendingLast);
+
+	vpe->vpe_ready = false;
 }
 
 static void its_vpe_invall(struct its_vpe *vpe)
@@ -3891,6 +3891,10 @@ static int its_vpe_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
 		its_vpe_deschedule(vpe);
 		return 0;
 
+	case COMMIT_VPE:
+		its_wait_vpt_parse_complete();
+		return 0;
+
 	case INVALL_VPE:
 		its_vpe_invall(vpe);
 		return 0;
@@ -4052,8 +4056,6 @@ static void its_vpe_4_1_schedule(struct its_vpe *vpe,
 	val |= FIELD_PREP(GICR_VPENDBASER_4_1_VPEID, vpe->vpe_id);
 
 	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
-
-	its_wait_vpt_parse_complete();
 }
 
 static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
@@ -4091,6 +4093,8 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
 					    GICR_VPENDBASER_PendingLast);
 		vpe->pending_last = true;
 	}
+
+	vpe->vpe_ready = false;
 }
 
 static void its_vpe_4_1_invall(struct its_vpe *vpe)
@@ -4128,6 +4132,10 @@ static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
 		its_vpe_4_1_deschedule(vpe, info);
 		return 0;
 
+	case COMMIT_VPE:
+		its_wait_vpt_parse_complete();
+		return 0;
+
 	case INVALL_VPE:
 		its_vpe_4_1_invall(vpe);
 		return 0;
diff --git a/drivers/irqchip/irq-gic-v4.c b/drivers/irqchip/irq-gic-v4.c
index 0c18714ae13e..6cea71a4e68b 100644
--- a/drivers/irqchip/irq-gic-v4.c
+++ b/drivers/irqchip/irq-gic-v4.c
@@ -258,6 +258,17 @@ int its_make_vpe_resident(struct its_vpe *vpe, bool g0en, bool g1en)
 	return ret;
 }
 
+int its_commit_vpe(struct its_vpe *vpe)
+{
+	struct its_cmd_info info = {
+		.cmd_type = COMMIT_VPE,
+	};
+
+	WARN_ON(preemptible());
+
+	return its_send_vpe_cmd(vpe, &info);
+}
+
 int its_invall_vpe(struct its_vpe *vpe)
 {
 	struct its_cmd_info info = {
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index a8d8fdcd3723..f2170df6cf7c 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -401,7 +401,10 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int irq,
 int kvm_vgic_v4_unset_forwarding(struct kvm *kvm, int irq,
 				 struct kvm_kernel_irq_routing_entry *irq_entry);
 
+void vgic_v4_commit(struct kvm_vcpu *vcpu);
+
 int vgic_v4_load(struct kvm_vcpu *vcpu);
+
 int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db);
 
 #endif /* __KVM_ARM_VGIC_H */
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index 6976b8331b60..936d88e482a9 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -75,6 +75,8 @@ struct its_vpe {
 	u16			vpe_id;
 	/* Pending VLPIs on schedule out? */
 	bool			pending_last;
+	/* VPT parse complete */
+	bool			vpe_ready;
 };
 
 /*
@@ -104,6 +106,7 @@ enum its_vcpu_info_cmd_type {
 	PROP_UPDATE_AND_INV_VLPI,
 	SCHEDULE_VPE,
 	DESCHEDULE_VPE,
+	COMMIT_VPE,
 	INVALL_VPE,
 	PROP_UPDATE_VSGI,
 };
@@ -129,6 +132,7 @@ int its_alloc_vcpu_irqs(struct its_vm *vm);
 void its_free_vcpu_irqs(struct its_vm *vm);
 int its_make_vpe_resident(struct its_vpe *vpe, bool g0en, bool g1en);
 int its_make_vpe_non_resident(struct its_vpe *vpe, bool db);
+int its_commit_vpe(struct its_vpe *vpe);
 int its_invall_vpe(struct its_vpe *vpe);
 int its_map_vlpi(int irq, struct its_vlpi_map *map);
 int its_get_vlpi(int irq, struct its_vlpi_map *map);
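
For context, here is a minimal, self-contained C sketch of the entry-path
ordering this patch creates. It is illustrative only and not part of the
patch: every function below is a stand-in, and only the names mentioned in
the comments (vgic_v4_load(), kvm_vgic_flush_hwstate(), vgic_v4_commit(),
its_wait_vpt_parse_complete()) refer to real symbols from the diff above.

/*
 * Illustrative sketch only: models how the entry path is ordered after
 * this change, using stand-ins for the real KVM/ITS code.
 */
#include <stdbool.h>
#include <stdio.h>

static bool vpt_dirty;	/* stand-in for GICR_VPENDBASER.Dirty */
static bool vpe_ready;	/* stand-in for its_vpe::vpe_ready    */

static void make_vpe_resident(void)
{
	/*
	 * vgic_v4_load(): make the vPE resident; the GIC starts parsing
	 * the VPT in the background, and nothing polls Dirty here anymore.
	 */
	vpt_dirty = true;
	printf("GIC: VPT parse started\n");
}

static void do_other_entry_work(void)
{
	/*
	 * CPU-side entry work (e.g. vgic state restore) that now overlaps
	 * with the GIC's VPT parse.
	 */
	printf("CPU: restoring vgic/guest state\n");
}

static void commit_vpe(void)
{
	/*
	 * vgic_v4_commit() -> COMMIT_VPE -> its_wait_vpt_parse_complete():
	 * poll Dirty only now, and only once per vcpu_load().
	 */
	if (vpe_ready)
		return;

	while (vpt_dirty)
		vpt_dirty = false;	/* the real code polls with a timeout */

	vpe_ready = true;
	printf("GIC: VPT parse complete\n");
}

int main(void)
{
	make_vpe_resident();	/* vcpu_load() path */
	do_other_entry_work();	/* runs while the GIC parses the VPT */
	commit_vpe();		/* kvm_vgic_flush_hwstate(), right before guest entry */
	return 0;
}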