From patchwork Fri Oct 28 00:18:12 2016
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9400667
From: Laura Abbott
To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
Cc: Laura Abbott, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org
Subject: [RFC][PATCH] arm64: Add support for CONFIG_DEBUG_VIRTUAL
Date: Thu, 27 Oct 2016 17:18:12 -0700
Message-Id: <1477613892-26076-1-git-send-email-labbott@redhat.com>

x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks on
virt_to_phys calls. The goal is to catch users who are calling
virt_to_phys on non-linear addresses immediately. As features such as
CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
important.
Add checks to catch bad virt_to_phys usage.

Signed-off-by: Laura Abbott
---
This has been on my TODO list for a while. It caught a few bugs with
CONFIG_VMAP_STACK on x86, so when that eventually gets added for arm64
it will be useful to have. This already caught one driver calling __pa
on an ioremapped address.

RFC for a couple of reasons:
1) This is basically a direct port of the x86 approach.
2) I needed some #ifndef __ASSEMBLY__ guards, which I don't like to
   throw around.
3) I'm not quite sure about the bounds check for the VIRTUAL_BUG_ON
   with KASLR, specifically the _end check.
4) Is it worth actually keeping this as DEBUG_VIRTUAL vs. folding it
   into another option?
---
 arch/arm64/include/asm/memory.h | 11 ++++++++++-
 arch/arm64/mm/Makefile          |  2 +-
 arch/arm64/mm/physaddr.c        | 17 +++++++++++++++++
 lib/Kconfig.debug               |  2 +-
 mm/cma.c                        |  2 +-
 5 files changed, 30 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/mm/physaddr.c

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index ba62df8..9805adc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -106,11 +106,19 @@
  * private definitions which should NOT be used outside memory.h
  * files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x) ({						\
+#define __virt_to_phys_nodebug(x) ({					\
 	phys_addr_t __x = (phys_addr_t)(x);				\
 	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
 				 (__x - kimage_voffset); })

+#ifdef CONFIG_DEBUG_VIRTUAL
+#ifndef __ASSEMBLY__
+extern unsigned long __virt_to_phys(unsigned long x);
+#endif
+#else
+#define __virt_to_phys(x) __virt_to_phys_nodebug(x)
+#endif
+
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))

@@ -202,6 +210,7 @@ static inline void *phys_to_virt(phys_addr_t x)
  * Drivers should NOT use these either.
 */
 #define __pa(x)		__virt_to_phys((unsigned long)(x))
+#define __pa_nodebug(x)	__virt_to_phys_nodebug((unsigned long)(x))
 #define __va(x)		((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_to_pfn(x)	__phys_to_pfn(__virt_to_phys(x))

diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 54bb209..bcea84e 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -5,6 +5,6 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
 obj-$(CONFIG_NUMA)		+= numa.o
-
+obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n

diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
new file mode 100644
index 0000000..6c271e2
--- /dev/null
+++ b/arch/arm64/mm/physaddr.c
@@ -0,0 +1,17 @@
+#include
+
+#include
+
+unsigned long __virt_to_phys(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	if (__x & BIT(VA_BITS - 1)) {
+		/* The bit check ensures this is the right range */
+		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
+	} else {
+		VIRTUAL_BUG_ON(x < kimage_vaddr || x > (unsigned long)_end);
+		return (__x - kimage_voffset);
+	}
+}
+EXPORT_SYMBOL(__virt_to_phys);

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 33bc56c..e5634bb 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -604,7 +604,7 @@ config DEBUG_VM_PGFLAGS

 config DEBUG_VIRTUAL
 	bool "Debug VM translations"
-	depends on DEBUG_KERNEL && X86
+	depends on DEBUG_KERNEL && (X86 || ARM64)
 	help
 	  Enable some costly sanity checks in virtual to page code. This can
 	  catch mistakes with virt_to_page() and friends.
diff --git a/mm/cma.c b/mm/cma.c
index 384c2cb..2345803 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -235,7 +235,7 @@ int __init cma_declare_contiguous(phys_addr_t base,
 	phys_addr_t highmem_start;
 	int ret = 0;

-#ifdef CONFIG_X86
+#if defined(CONFIG_X86) || defined(CONFIG_ARM64)
 	/*
	 * high_memory isn't direct mapped memory so retrieving its physical
	 * address isn't appropriate. But it would be useful to check the