From patchwork Fri Apr 4 13:14:09 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
X-Patchwork-Id: 14038456
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: hpa@zytor.com, hch@infradead.org, nick.desaulniers+lkml@gmail.com,
	kuan-ying.lee@canonical.com, masahiroy@kernel.org, samuel.holland@sifive.com,
	mingo@redhat.com, corbet@lwn.net, ryabinin.a.a@gmail.com,
	guoweikang.kernel@gmail.com, jpoimboe@kernel.org, ardb@kernel.org,
	vincenzo.frascino@arm.com, glider@google.com,
	kirill.shutemov@linux.intel.com, apopple@nvidia.com, samitolvanen@google.com,
	maciej.wieczor-retman@intel.com, kaleshsingh@google.com, jgross@suse.com,
	andreyknvl@gmail.com, scott@os.amperecomputing.com, tony.luck@intel.com,
	dvyukov@google.com, pasha.tatashin@soleen.com, ziy@nvidia.com,
	broonie@kernel.org, gatlin.newhouse@gmail.com, jackmanb@google.com,
	wangkefeng.wang@huawei.com, thiago.bauermann@linaro.org, tglx@linutronix.de,
	kees@kernel.org, akpm@linux-foundation.org, jason.andryuk@amd.com,
	snovitoll@gmail.com, xin@zytor.com, jan.kiszka@siemens.com, bp@alien8.de,
	rppt@kernel.org, peterz@infradead.org, pankaj.gupta@amd.com, thuth@redhat.com,
	andriy.shevchenko@linux.intel.com, joel.granados@kernel.org,
	kbingham@kernel.org, nicolas@fjasle.eu, mark.rutland@arm.com,
	surenb@google.com, catalin.marinas@arm.com, morbo@google.com,
	justinstitt@google.com, ubizjak@gmail.com, jhubbard@nvidia.com,
	urezki@gmail.com, dave.hansen@linux.intel.com, bhe@redhat.com,
	luto@kernel.org, baohua@kernel.org, nathan@kernel.org, will@kernel.org,
	brgerst@gmail.com
Cc: llvm@lists.linux.dev, linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kbuild@vger.kernel.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, x86@kernel.org
Subject: [PATCH v3 05/14] x86: Reset tag for virtual to physical address conversions
Date: Fri, 4 Apr 2025 15:14:09 +0200
Message-ID:
X-Mailer: git-send-email 2.49.0
In-Reply-To:
References:
Precedence: bulk
X-Mailing-List: linux-kbuild@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0

Any place where pointer arithmetic is used to convert a virtual address
into a physical one can raise errors if the virtual address is tagged.

Reset the pointer's tag by sign extending the tag bits in macros that do
pointer arithmetic in address conversions. There will be no change in
compiled code with KASAN disabled since the compiler will optimize the
__tag_reset() out.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
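A note on the mechanism, not part of the change itself: __tag_reset() is
expected to sign-extend the address from the bit just below the tag, so the
tag bits are overwritten with copies of that bit and a tagged kernel pointer
becomes canonical again before any address arithmetic. Below is a minimal
user-space sketch of that idea; TAG_SHIFT, TAG_MASK and the 6-bit tag in
bits 62:57 are assumptions of the sketch, not values taken from this series.

/*
 * Minimal sketch of the tag-reset idea: sign-extend from the bit just
 * below the tag so that bits 63:TAG_SHIFT become copies of it and a
 * tagged kernel pointer turns back into a canonical one. TAG_SHIFT and
 * TAG_MASK are assumptions of this example.
 */
#include <assert.h>
#include <stdint.h>

#define TAG_SHIFT 57				/* assumed lowest tag bit */
#define TAG_MASK  (0x3fULL << TAG_SHIFT)	/* assumed 6-bit tag, bits 62:57 */

static inline uint64_t tag_reset(uint64_t addr)
{
	/* Relies on arithmetic right shift of signed values (gcc/clang). */
	return (uint64_t)((int64_t)(addr << (64 - TAG_SHIFT)) >> (64 - TAG_SHIFT));
}

int main(void)
{
	uint64_t clean  = 0xffffffff12345678ULL;
	uint64_t tagged = (clean & ~TAG_MASK) | (0x2aULL << TAG_SHIFT);

	/* tagged == 0xd5ffffff12345678; resetting the tag restores the original. */
	assert(tag_reset(tagged) == clean);
	assert(tag_reset(clean) == clean);	/* untagged addresses are unchanged */
	return 0;
}

With no tag bits set the operation is the identity, which is why there is no
change in generated code when KASAN is disabled.
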
 arch/x86/include/asm/page.h    | 17 +++++++++++++----
 arch/x86/include/asm/page_64.h |  2 +-
 arch/x86/mm/physaddr.c         |  1 +
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9265f2fca99a..e37f63b50728 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -7,6 +7,7 @@
 #ifdef __KERNEL__
 
 #include
+#include
 
 #ifdef CONFIG_X86_64
 #include
@@ -41,7 +42,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 #define __pa(x)		__phys_addr((unsigned long)(x))
 #endif
 
-#define __pa_nodebug(x)	__phys_addr_nodebug((unsigned long)(x))
+#define __pa_nodebug(x)	__phys_addr_nodebug((unsigned long)(__tag_reset(x)))
 /* __pa_symbol should be used for C visible symbols.
    This seems to be the official gcc blessed way to do such arithmetic. */
 /*
@@ -65,9 +66,17 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
  * virt_to_page(kaddr) returns a valid pointer if and only if
  * virt_addr_valid(kaddr) returns true.
  */
-#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define page_to_virt(x) ({									\
+	__typeof__(x) __page = x;								\
+	void *__addr = __va(page_to_pfn((__typeof__(x))__tag_reset(__page)) << PAGE_SHIFT);	\
+	(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));			\
+})
+#endif
+#define virt_to_page(kaddr)	pfn_to_page(__pa((void *)__tag_reset(kaddr)) >> PAGE_SHIFT)
 extern bool __virt_addr_valid(unsigned long kaddr);
-#define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long) (kaddr))
+#define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long)(__tag_reset(kaddr)))
 
 static __always_inline void *pfn_to_kaddr(unsigned long pfn)
 {
@@ -81,7 +90,7 @@ static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
 
 static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
 {
-	return __canonical_address(vaddr, vaddr_bits) == vaddr;
+	return __canonical_address(vaddr, vaddr_bits) == __tag_reset(vaddr);
 }
 
 #endif	/* __ASSEMBLER__ */
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index d3aab6f4e59a..44975a8ae665 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -33,7 +33,7 @@ static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
 extern unsigned long __phys_addr(unsigned long);
 extern unsigned long __phys_addr_symbol(unsigned long);
 #else
-#define __phys_addr(x)		__phys_addr_nodebug(x)
+#define __phys_addr(x)		__phys_addr_nodebug(__tag_reset(x))
 #define __phys_addr_symbol(x) \
 	((unsigned long)(x) - __START_KERNEL_map + phys_base)
 #endif
diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c
index fc3f3d3e2ef2..7f2b11308245 100644
--- a/arch/x86/mm/physaddr.c
+++ b/arch/x86/mm/physaddr.c
@@ -14,6 +14,7 @@
 #ifdef CONFIG_DEBUG_VIRTUAL
 unsigned long __phys_addr(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;
 
 	/* use the carry flag to determine if x was < __START_KERNEL_map */
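
The trickiest part of the page.h hunk is the virt_to_page()/page_to_virt()
pair: the tag has to be dropped before the pfn arithmetic and then re-applied
from page_kasan_tag() so that a tagged pointer survives the round trip. The
following self-contained user-space model illustrates that round trip; every
name in it (fake_page, linear_base, tag_set(), the bit layout) is invented
for the sketch and is not the kernel's API.

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define TAG_SHIFT  57				/* assumed lowest tag bit */
#define TAG_MASK   (0x3fULL << TAG_SHIFT)	/* assumed 6-bit tag, bits 62:57 */

struct fake_page { uint8_t kasan_tag; };	/* stand-in for struct page */

static struct fake_page pages[16];
/* made-up linear-map base for the model */
static const uint64_t linear_base = 0xffffffff40000000ULL;

/* model of __tag_reset(): sign-extend from just below the tag bits */
static uint64_t tag_reset(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << (64 - TAG_SHIFT)) >> (64 - TAG_SHIFT));
}

/* model of __tag_set(): place a tag into the (cleared) tag bits */
static uint64_t tag_set(uint64_t addr, uint8_t tag)
{
	return (addr & ~TAG_MASK) | ((uint64_t)(tag & 0x3f) << TAG_SHIFT);
}

/* model of virt_to_page(): strip the tag before the address arithmetic */
static struct fake_page *virt_to_page(uint64_t kaddr)
{
	return &pages[(tag_reset(kaddr) - linear_base) >> PAGE_SHIFT];
}

/* model of page_to_virt(): compute the untagged address, then re-apply
 * the tag remembered for this page */
static uint64_t page_to_virt(struct fake_page *page)
{
	uint64_t addr = linear_base + ((uint64_t)(page - pages) << PAGE_SHIFT);
	return tag_set(addr, page->kasan_tag);
}

int main(void)
{
	pages[3].kasan_tag = 0x2a;
	uint64_t tagged = tag_set(linear_base + (3ULL << PAGE_SHIFT), 0x2a);

	assert(virt_to_page(tagged) == &pages[3]);	/* tag dropped before arithmetic */
	assert(page_to_virt(&pages[3]) == tagged);	/* tag re-applied on the way back */
	return 0;
}

The final assertion is the property the real macros have to preserve:
converting a tagged address to a page and back yields the same tagged address.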