From patchwork Tue Oct 25 00:17:17 2022
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13018311
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra
Cc: x86@kernel.org, Kostya Serebryany, Andrey Ryabinin, Andrey Konovalov,
	Alexander Potapenko, Taras Madan, Dmitry Vyukov, H. J. Lu, Andi Kleen,
	Rick Edgecombe, Bharata B Rao, Jacob Pan, Ashok Raj,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov"
Shutemov" Subject: [PATCHv11 11/16] x86/mm, iommu/sva: Make LAM and SVA mutually exclusive Date: Tue, 25 Oct 2022 03:17:17 +0300 Message-Id: <20221025001722.17466-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221025001722.17466-1-kirill.shutemov@linux.intel.com> References: <20221025001722.17466-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666657066; a=rsa-sha256; cv=none; b=AHhIUW5q0bSaw1DlDNccSnLxV25RFZtiC8/bHzCdel0hZWT4IOb6Ow+lEe7xqolA3mPwjE V7udzgLG3A7h/6NxVgre3w7Vnjrqyz+5N96CLe8DKJSi/xd/0LPep0VWjJbNC6C+Xj/YCv iP6NjW4HsOpN87QG/Wy/RXgGjFIVKmo= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=nzqDOT3d; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf08.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=kirill.shutemov@linux.intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666657066; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=SqZpGW07FSaZUpk18lzAxyl4dAWwqbU3UNN1fG+SyLQ=; b=Want9RLjGdYOkmZMIMPrZB4dJ2RcgXDAMtGbhBQ+nTRJwoaJzwGn1sl94+IXEG/Ovdf9a6 k7KPs1HzgTPEd/xagkNXZDJUV9yR89ocMo+F+r9ajoewT2w6mbdYPPC1HMdrpFs88pYhFn 9CJNccJbjoutFXuQJeDGTpmUsdNPpPw= X-Rspamd-Queue-Id: 13114160003 Authentication-Results: imf08.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=nzqDOT3d; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf08.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=kirill.shutemov@linux.intel.com X-Rspamd-Server: rspam12 X-Rspam-User: X-Stat-Signature: fm9seyeainsd965s8g599b58wmkyzo6n X-HE-Tag: 1666657065-263283 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: IOMMU and SVA-capable devices know nothing about LAM and only expect canonical addresses. An attempt to pass down tagged pointer will lead to address translation failure. By default do not allow to enable both LAM and use SVA in the same process. The new ARCH_FORCE_TAGGED_SVA arch_prctl() overrides the limitation. By using the arch_prctl() userspace takes responsibility to never pass tagged address to the device. Signed-off-by: Kirill A. 
Reviewed-by: Ashok Raj
Reviewed-by: Jacob Pan
---
 arch/x86/include/asm/mmu.h         |  6 ++++--
 arch/x86/include/asm/mmu_context.h |  6 ++++++
 arch/x86/include/uapi/asm/prctl.h  |  1 +
 arch/x86/kernel/process_64.c       | 12 ++++++++++++
 drivers/iommu/iommu-sva-lib.c      | 12 ++++++++++++
 include/linux/mmu_context.h        |  7 +++++++
 6 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 2fdb390040b5..1215b0a714c9 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -9,9 +9,11 @@
 #include
 
 /* Uprobes on this MM assume 32-bit code */
-#define MM_CONTEXT_UPROBE_IA32	BIT(0)
+#define MM_CONTEXT_UPROBE_IA32		BIT(0)
 /* vsyscall page is accessible on this MM */
-#define MM_CONTEXT_HAS_VSYSCALL	BIT(1)
+#define MM_CONTEXT_HAS_VSYSCALL		BIT(1)
+/* Allow LAM and SVA coexisting */
+#define MM_CONTEXT_FORCE_TAGGED_SVA	BIT(2)
 
 /*
  * x86 has arch-specific MMU state beyond what lives in mm_struct.
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 84277547ad28..7bb572d3f612 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -114,6 +114,12 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
 	mm->context.untag_mask = -1UL;
 }
 
+#define arch_pgtable_dma_compat arch_pgtable_dma_compat
+static inline bool arch_pgtable_dma_compat(struct mm_struct *mm)
+{
+	return !mm_lam_cr3_mask(mm) ||
+		(mm->context.flags & MM_CONTEXT_FORCE_TAGGED_SVA);
+}
 #else
 
 static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index a31e27b95b19..eb290d89cb32 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -23,5 +23,6 @@
 #define ARCH_GET_UNTAG_MASK		0x4001
 #define ARCH_ENABLE_TAGGED_ADDR		0x4002
 #define ARCH_GET_MAX_TAG_BITS		0x4003
+#define ARCH_FORCE_TAGGED_SVA		0x4004
 
 #endif /* _ASM_X86_PRCTL_H */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 9952e9f517ec..3c9a4d923d6d 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -783,6 +783,12 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
 		goto out;
 	}
 
+	if (mm_valid_pasid(mm) &&
+	    !(mm->context.flags & MM_CONTEXT_FORCE_TAGGED_SVA)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	if (!nr_bits) {
 		ret = -EINVAL;
 		goto out;
@@ -893,6 +899,12 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
 				    (unsigned long __user *)arg2);
 	case ARCH_ENABLE_TAGGED_ADDR:
 		return prctl_enable_tagged_addr(task->mm, arg2);
+	case ARCH_FORCE_TAGGED_SVA:
+		if (mmap_write_lock_killable(task->mm))
+			return -EINTR;
+		task->mm->context.flags |= MM_CONTEXT_FORCE_TAGGED_SVA;
+		mmap_write_unlock(task->mm);
+		return 0;
 	case ARCH_GET_MAX_TAG_BITS:
 		if (!cpu_feature_enabled(X86_FEATURE_LAM))
 			return put_user(0, (unsigned long __user *)arg2);
diff --git a/drivers/iommu/iommu-sva-lib.c b/drivers/iommu/iommu-sva-lib.c
index 27be6b81e0b5..8934c32e923c 100644
--- a/drivers/iommu/iommu-sva-lib.c
+++ b/drivers/iommu/iommu-sva-lib.c
@@ -2,6 +2,8 @@
 /*
  * Helpers for IOMMU drivers implementing SVA
  */
+#include
+#include
 #include
 #include
 
@@ -31,6 +33,15 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
 	    min == 0 || max < min)
 		return -EINVAL;
 
+	/* Serialize against address tagging enabling */
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
+
+	if (!arch_pgtable_dma_compat(mm)) {
+		mmap_write_unlock(mm);
+		return -EBUSY;
+	}
+
 	mutex_lock(&iommu_sva_lock);
 	/* Is a PASID already associated with this mm? */
 	if (mm_valid_pasid(mm)) {
@@ -46,6 +57,7 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
 	mm_pasid_set(mm, pasid);
 out:
 	mutex_unlock(&iommu_sva_lock);
+	mmap_write_unlock(mm);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(iommu_sva_alloc_pasid);
diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index 14b9c1fa05c4..f2b7a3f04099 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -35,4 +35,11 @@ static inline unsigned long mm_untag_mask(struct mm_struct *mm)
 }
 #endif
 
+#ifndef arch_pgtable_dma_compat
+static inline bool arch_pgtable_dma_compat(struct mm_struct *mm)
+{
+	return true;
+}
+#endif
+
 #endif
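
Not part of the patch, for illustration only: a minimal userspace sketch of
the intended flow. The arch_prctl() invocation via syscall(2) and the request
for 6 tag bits (LAM_U57, per the rest of this series) are assumptions; the
actual SVA binding happens through a device driver and is omitted here.

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_ENABLE_TAGGED_ADDR	0x4002	/* from asm/prctl.h */
#define ARCH_FORCE_TAGGED_SVA	0x4004	/* added by this patch */

int main(void)
{
	/*
	 * Opt in to combining LAM with SVA. Userspace thereby promises
	 * to never hand tagged pointers to the device.
	 */
	if (syscall(SYS_arch_prctl, ARCH_FORCE_TAGGED_SVA, 0)) {
		perror("ARCH_FORCE_TAGGED_SVA");
		return 1;
	}

	/*
	 * Enabling LAM now succeeds even if a PASID is already bound to
	 * this mm; without the prctl above it would fail with -EBUSY,
	 * and likewise iommu_sva_alloc_pasid() would refuse a PASID for
	 * an mm that already has LAM enabled.
	 */
	if (syscall(SYS_arch_prctl, ARCH_ENABLE_TAGGED_ADDR, 6)) {
		perror("ARCH_ENABLE_TAGGED_ADDR");
		return 1;
	}

	return 0;
}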