From patchwork Mon Sep 22 22:28:42 2014
X-Patchwork-Submitter: Mitchel Humpherys
X-Patchwork-Id: 4951181
From: Mitchel Humpherys
To: Will Deacon
Cc: Olav Haugan, iommu@lists.linux-foundation.org, Joerg Roedel, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 2/2] iommu/arm-smmu: add support for access-protected mappings
References: <1410984969-2340-1-git-send-email-mitchelh@codeaurora.org> <1410984969-2340-3-git-send-email-mitchelh@codeaurora.org> <20140919220535.GM20773@arm.com>
Date: Mon, 22 Sep 2014 15:28:42 -0700
In-Reply-To: <20140919220535.GM20773@arm.com> (Will Deacon's message of "Fri, 19 Sep 2014 23:05:36 +0100")

On Fri, Sep 19 2014 at 03:05:36 PM, Will Deacon wrote:
> On Wed, Sep 17, 2014 at 09:16:09PM +0100, Mitchel Humpherys wrote:
>> ARM SMMUs support memory access control via some bits in the translation
>> table descriptor memory attributes. Currently we assume all translations
>> are "unprivileged". Add support for privileged mappings, controlled by
>> the IOMMU_PRIV prot flag.
>>
>> Also sneak in a whitespace change for consistency with nearby code.
>>
>> Signed-off-by: Mitchel Humpherys
>> ---
>>  drivers/iommu/arm-smmu.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
>> index ca18d6d42a..93999ec22c 100644
>> --- a/drivers/iommu/arm-smmu.c
>> +++ b/drivers/iommu/arm-smmu.c
>> @@ -1256,10 +1256,11 @@ static int arm_smmu_alloc_init_pte(struct arm_smmu_device *smmu, pmd_t *pmd,
>>  	}
>>
>>  	if (stage == 1) {
>> -		pteval |= ARM_SMMU_PTE_AP_UNPRIV | ARM_SMMU_PTE_nG;
>> +		pteval |= ARM_SMMU_PTE_nG;
>> +		if (!(prot & IOMMU_PRIV))
>> +			pteval |= ARM_SMMU_PTE_AP_UNPRIV;
>
> I think this actually makes more sense if we invert the logic, i.e. have
> IOMMU_USER as a flag which sets the UNPRIV bit in the pte.

I'm fine either way, but the common case seems to be unprivileged
mappings (at least in our system). We have one user of this flag out of
a dozen or so users.

> I don't have the spec to hand, but I guess you can't enforce this at
> stage-2? If so, do we also need a new IOMMU capability so people don't try
> to use this for stage-2 only SMMUs?

Hmm, actually we do have S2AP, although it doesn't make a distinction
between accesses from EL0 and EL1. But maybe it would make sense for
`IOMMU_PRIV' to mean `no access from EL0 or EL1' for stage 2 mappings?
Something like:

-- >8 --
Subject: iommu/arm-smmu: add support for access-protected mappings

ARM SMMUs support memory access control via some bits in the translation
table descriptor memory attributes. Currently we assume all translations
are "unprivileged". Add support for privileged mappings, controlled by
the IOMMU_PRIV prot flag.

Also sneak in a whitespace change for consistency with nearby code.
Signed-off-by: Mitchel Humpherys
---
 drivers/iommu/arm-smmu.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index ca18d6d42a..4f85b64f74 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1256,18 +1256,19 @@ static int arm_smmu_alloc_init_pte(struct arm_smmu_device *smmu, pmd_t *pmd,
 	}
 
 	if (stage == 1) {
-		pteval |= ARM_SMMU_PTE_AP_UNPRIV | ARM_SMMU_PTE_nG;
+		pteval |= ARM_SMMU_PTE_nG;
+		if (!(prot & IOMMU_PRIV))
+			pteval |= ARM_SMMU_PTE_AP_UNPRIV;
 		if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
 			pteval |= ARM_SMMU_PTE_AP_RDONLY;
-
 		if (prot & IOMMU_CACHE)
 			pteval |= (MAIR_ATTR_IDX_CACHE <<
 				   ARM_SMMU_PTE_ATTRINDX_SHIFT);
 	} else {
 		pteval |= ARM_SMMU_PTE_HAP_FAULT;
-		if (prot & IOMMU_READ)
+		if (prot & IOMMU_READ && !(prot & IOMMU_PRIV))
 			pteval |= ARM_SMMU_PTE_HAP_READ;
-		if (prot & IOMMU_WRITE)
+		if (prot & IOMMU_WRITE && !(prot & IOMMU_PRIV))
 			pteval |= ARM_SMMU_PTE_HAP_WRITE;
 		if (prot & IOMMU_CACHE)
 			pteval |= ARM_SMMU_PTE_MEMATTR_OIWB;