From patchwork Wed Jan 31 09:34:59 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10193761
From: Christoffer Dall
To: Paolo Bonzini, Radim Krčmář
Cc: Marc Zyngier, Christoffer Dall, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Subject: [PULL 20/28] KVM: arm/arm64: Preserve Exec permission across R/W permission faults
Date: Wed, 31 Jan 2018 10:34:59 +0100
Message-Id: <20180131093507.22219-21-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180131093507.22219-1-christoffer.dall@linaro.org>
References: <20180131093507.22219-1-christoffer.dall@linaro.org>

From: Marc Zyngier

So far, we lose the Exec property whenever we take permission faults,
as we always reconstruct the PTE/PMD from scratch. This can be
counterproductive, as we can end up with the following fault sequence:

	X -> RO -> ROX -> RW -> RWX

Instead, we can look up the existing PTE/PMD and clear the XN bit in the
new entry if it was already cleared in the old one, leading to a much
nicer fault sequence:

	X -> ROX -> RWX

Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_mmu.h   | 10 ++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 27 +++++++++++++++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 4d7a54cbb3ab..aab64fe52146 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -107,6 +107,11 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
 	return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
 }
 
+static inline bool kvm_s2pte_exec(pte_t *pte)
+{
+	return !(pte_val(*pte) & L_PTE_XN);
+}
+
 static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
 {
 	pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
@@ -117,6 +122,11 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
 }
 
+static inline bool kvm_s2pmd_exec(pmd_t *pmd)
+{
+	return !(pmd_val(*pmd) & PMD_SECT_XN);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 1e1b20cb348f..126abefffe7f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -203,6 +203,11 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
 	return (pte_val(*pte) & PTE_S2_RDWR) == PTE_S2_RDONLY;
 }
 
+static inline bool kvm_s2pte_exec(pte_t *pte)
+{
+	return !(pte_val(*pte) & PTE_S2_XN);
+}
+
 static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
 {
 	kvm_set_s2pte_readonly((pte_t *)pmd);
@@ -213,6 +218,11 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
 	return kvm_s2pte_readonly((pte_t *)pmd);
 }
 
+static inline bool kvm_s2pmd_exec(pmd_t *pmd)
+{
+	return !(pmd_val(*pmd) & PMD_S2_XN);
+}
+
 static inline bool kvm_page_empty(void *ptr)
 {
 	struct page *ptr_page = virt_to_page(ptr);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index f956efbd933d..b83b5a8442bb 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -926,6 +926,25 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	return 0;
 }
 
+static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
+{
+	pmd_t *pmdp;
+	pte_t *ptep;
+
+	pmdp = stage2_get_pmd(kvm, NULL, addr);
+	if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
+		return false;
+
+	if (pmd_thp_or_huge(*pmdp))
+		return kvm_s2pmd_exec(pmdp);
+
+	ptep = pte_offset_kernel(pmdp, addr);
+	if (!ptep || pte_none(*ptep) || !pte_present(*ptep))
+		return false;
+
+	return kvm_s2pte_exec(ptep);
+}
+
 static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 			  phys_addr_t addr, const pte_t *new_pte,
 			  unsigned long flags)
@@ -1407,6 +1426,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		if (exec_fault) {
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
 			invalidate_icache_guest_page(vcpu, pfn, PMD_SIZE);
+		} else if (fault_status == FSC_PERM) {
+			/* Preserve execute if XN was already cleared */
+			if (stage2_is_exec(kvm, fault_ipa))
+				new_pmd = kvm_s2pmd_mkexec(new_pmd);
 		}
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
@@ -1425,6 +1448,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		if (exec_fault) {
 			new_pte = kvm_s2pte_mkexec(new_pte);
 			invalidate_icache_guest_page(vcpu, pfn, PAGE_SIZE);
+		} else if (fault_status == FSC_PERM) {
+			/* Preserve execute if XN was already cleared */
+			if (stage2_is_exec(kvm, fault_ipa))
+				new_pte = kvm_s2pte_mkexec(new_pte);
 		}
 
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
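
For readers who want to see the effect of the change in isolation, here is a
minimal userspace model of the two fault sequences from the commit message.
It is an illustrative sketch only, not kernel code: the function names
(fault_old, fault_new, show) and the bit layout (S2_WRITE, S2_XN) are invented
for the example, standing in for the real stage-2 permission and
execute-never bits.

/*
 * Toy model (not kernel code): why rebuilding a stage-2 entry from
 * scratch on a permission fault loses the Exec property, and how
 * copying the cleared-XN state from the old entry avoids the extra
 * exec fault. All names and bit values are made up for this sketch.
 */
#include <stdbool.h>
#include <stdio.h>

#define S2_WRITE (1u << 0)	/* stand-in for the stage-2 write permission */
#define S2_XN    (1u << 1)	/* stand-in for the execute-never bit */

/* Old behaviour: the entry only becomes executable on an exec fault. */
static unsigned int fault_old(unsigned int old, bool exec_fault,
			      unsigned int perms)
{
	unsigned int new = perms | S2_XN;	/* rebuilt from scratch */

	(void)old;	/* the existing entry is ignored entirely */
	if (exec_fault)
		new &= ~S2_XN;
	return new;
}

/* New behaviour: also clear XN if it was already clear in the old entry. */
static unsigned int fault_new(unsigned int old, bool exec_fault,
			      unsigned int perms)
{
	unsigned int new = perms | S2_XN;

	if (exec_fault || !(old & S2_XN))	/* preserve Exec across R/W faults */
		new &= ~S2_XN;
	return new;
}

static void show(const char *tag, unsigned int pte)
{
	printf("%s: %s%s\n", tag,
	       (pte & S2_WRITE) ? "RW" : "RO",
	       (pte & S2_XN) ? "" : "X");
}

int main(void)
{
	/* An executable, read-only entry left behind by a prior exec fault. */
	unsigned int pte = 0;	/* RO with XN clear, i.e. "ROX" */

	/* The guest now writes to the page, taking a permission fault. */
	show("old", fault_old(pte, false, S2_WRITE));	/* RW:  Exec lost   */
	show("new", fault_new(pte, false, S2_WRITE));	/* RWX: Exec kept   */
	return 0;
}

Compiled and run, the old path prints an entry that has lost Exec after the
write fault (RW), forcing a later exec fault, while the new path keeps it
(RWX), matching the X -> ROX -> RWX sequence described above.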