From patchwork Sun Apr 23 17:08:28 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9694927
From: Christoffer Dall
To: Paolo Bonzini, Radim Krčmář
Cc: Marc Zyngier, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Shih-Wei Li, Christoffer Dall
Subject: [PULL 18/79] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ
Date: Sun, 23 Apr 2017 19:08:28 +0200
Message-Id: <20170423170929.27334-19-cdall@linaro.org>
In-Reply-To: <20170423170929.27334-1-cdall@linaro.org>
References: <20170423170929.27334-1-cdall@linaro.org>

From: Shih-Wei Li

We do not need to flush the vgic state on every world switch unless
there is a pending IRQ queued on the vgic's AP list. We can thus reduce
the overhead by not grabbing the spinlock and not making the extra
function call to vgic_flush_lr_state.

Note: list_empty is a single atomic read (it uses READ_ONCE) and can
therefore check whether a list is empty without taking the spinlock
that protects the list.

Reviewed-by: Marc Zyngier
Signed-off-by: Shih-Wei Li
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 2ac0def..1043291 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -637,12 +637,17 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
 	vgic_process_maintenance_interrupt(vcpu);
 	vgic_fold_lr_state(vcpu);
 	vgic_prune_ap_list(vcpu);
+
+	/* Make sure we can fast-path in flush_hwstate */
+	vgic_cpu->used_lrs = 0;
 }
 
 /* Flush our emulation state into the GIC hardware before entering the guest. */
@@ -651,6 +656,18 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
+	/*
+	 * If there are no virtual interrupts active or pending for this
+	 * VCPU, then there is no work to do and we can bail out without
+	 * taking any lock.  There is a potential race with someone injecting
+	 * interrupts to the VCPU, but it is a benign race as the VCPU will
+	 * either observe the new interrupt before or after doing this check,
+	 * and introducing additional synchronization mechanism doesn't change
+	 * this.
+	 */
+	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+		return;
+
 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
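
[Editor's note] For readers unfamiliar with the lockless fast-path idiom this
patch relies on, here is a minimal userspace C sketch of the same pattern:
check list emptiness with a single atomic-style read before paying for the
lock, tolerating the benign race the comment above describes. This is an
illustration, not kernel code; the names cpu_state, flush_state, and
list_empty_lockless are hypothetical and do not appear in the kernel sources.

/*
 * Sketch of the "check-empty before locking" fast path, assuming a
 * circular doubly-linked list like the kernel's struct list_head
 * (empty when head->next points back at the head itself).
 */
#include <pthread.h>

struct node {
	struct node *next, *prev;
};

struct cpu_state {
	struct node ap_list_head;	/* hypothetical stand-in for the vgic AP list */
	pthread_mutex_t ap_list_lock;
};

/*
 * Rough userspace equivalent of list_empty(): a single volatile load of
 * head->next, so no lock is needed just to test for emptiness.
 */
static int list_empty_lockless(struct node *head)
{
	return *(struct node * volatile *)&head->next == head;
}

static void flush_state(struct cpu_state *s)
{
	/*
	 * Benign race: a concurrent producer either links its entry before
	 * this check (we take the slow path now) or after it (the entry is
	 * picked up on the next call). Either ordering is correct, so extra
	 * synchronization here would buy nothing.
	 */
	if (list_empty_lockless(&s->ap_list_head))
		return;			/* fast path: nothing to flush */

	pthread_mutex_lock(&s->ap_list_lock);
	/* ... walk and flush the list under the lock ... */
	pthread_mutex_unlock(&s->ap_list_lock);
}

The design point is the same one the commit message makes: the unlocked read
cannot corrupt the list, it can only steer the caller onto one side or the
other of a race that both sides handle correctly.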