From patchwork Thu Dec 7 10:54:12 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10098427

From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Marc Zyngier, Andre Przywara, Eric Auger,
 linux-arm-kernel@lists.infradead.org, Christoffer Dall
Subject: [PATCH v7 2/8] KVM: arm/arm64: Factor out functionality to get vgic mmio requester_vcpu
Date: Thu, 7 Dec 2017 11:54:12 +0100
Message-Id: <20171207105418.22428-3-christoffer.dall@linaro.org>
In-Reply-To: <20171207105418.22428-1-christoffer.dall@linaro.org>
References: <20171207105418.22428-1-christoffer.dall@linaro.org>

We are about to distinguish between userspace accesses and mmio traps
for a number of the mmio handlers.  When the requester vcpu is NULL, it
means we are handling a userspace access.

Factor out the functionality to get the requester vcpu into its own
function, mostly so that we have a common place to document the
semantics of the return value.

Also take the chance to move the functionality outside of holding a
spinlock and instead explicitly disable and enable preemption.  This
supports PREEMPT_RT kernels as well.
Acked-by: Marc Zyngier
Reviewed-by: Andre Przywara
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic-mmio.c | 44 +++++++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index deb51ee16a3d..fdad95f62fa3 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -122,6 +122,27 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	return value;
 }
 
+/*
+ * This function will return the VCPU that performed the MMIO access and
+ * trapped from within the VM, and will return NULL if this is a userspace
+ * access.
+ *
+ * We can disable preemption locally around accessing the per-CPU variable,
+ * and use the resolved vcpu pointer after enabling preemption again, because
+ * even if the current thread is migrated to another CPU, reading the per-CPU
+ * value later will give us the same value as we update the per-CPU variable
+ * in the preempt notifier handlers.
+ */
+static struct kvm_vcpu *vgic_get_mmio_requester_vcpu(void)
+{
+	struct kvm_vcpu *vcpu;
+
+	preempt_disable();
+	vcpu = kvm_arm_get_running_vcpu();
+	preempt_enable();
+	return vcpu;
+}
+
 void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
 			      gpa_t addr, unsigned int len,
 			      unsigned long val)
@@ -184,24 +205,10 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
 static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
-	struct kvm_vcpu *requester_vcpu;
 	unsigned long flags;
-	spin_lock_irqsave(&irq->irq_lock, flags);
+	struct kvm_vcpu *requester_vcpu = vgic_get_mmio_requester_vcpu();
 
-	/*
-	 * The vcpu parameter here can mean multiple things depending on how
-	 * this function is called; when handling a trap from the kernel it
-	 * depends on the GIC version, and these functions are also called as
-	 * part of save/restore from userspace.
-	 *
-	 * Therefore, we have to figure out the requester in a reliable way.
-	 *
-	 * When accessing VGIC state from user space, the requester_vcpu is
-	 * NULL, which is fine, because we guarantee that no VCPUs are running
-	 * when accessing VGIC state from user space so irq->vcpu->cpu is
-	 * always -1.
-	 */
-	requester_vcpu = kvm_arm_get_running_vcpu();
+	spin_lock_irqsave(&irq->irq_lock, flags);
 
 	/*
 	 * If this virtual IRQ was written into a list register, we
@@ -213,6 +220,11 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 	 * vgic_change_active_prepare) and still has to sync back this IRQ,
 	 * so we release and re-acquire the spin_lock to let the other thread
 	 * sync back the IRQ.
+	 *
+	 * When accessing VGIC state from user space, requester_vcpu is
+	 * NULL, which is fine, because we guarantee that no VCPUs are running
+	 * when accessing VGIC state from user space so irq->vcpu->cpu is
+	 * always -1.
 	 */
 	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
 	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */