From patchwork Wed Nov 2 21:00:54 2016
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9409885
From: Laura Abbott <labbott@redhat.com>
To: Mark Rutland, Ard Biesheuvel, Will Deacon, Catalin Marinas
Subject: [PATCHv2 6/6] arm64: Add support for CONFIG_DEBUG_VIRTUAL
Date: Wed, 2 Nov 2016 15:00:54 -0600
Message-Id: <20161102210054.16621-7-labbott@redhat.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161102210054.16621-1-labbott@redhat.com>
References: <20161102210054.16621-1-labbott@redhat.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Ingo Molnar, "H. Peter Anvin", Joonsoo Kim,
	Thomas Gleixner, Laura Abbott <labbott@redhat.com>, Andrew Morton,
	linux-arm-kernel@lists.infradead.org, Marek Szyprowski

x86 has an option CONFIG_DEBUG_VIRTUAL to do additional checks on
virt_to_phys calls. The goal is to catch users who call virt_to_phys
on non-linear addresses immediately. As features such as
CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
important.

Add checks to catch bad virt_to_phys usage.

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 arch/arm64/Kconfig              |  1 +
 arch/arm64/include/asm/memory.h | 12 +++++++++++-
 arch/arm64/mm/Makefile          |  2 ++
 arch/arm64/mm/physaddr.c        | 34 ++++++++++++++++++++++++++++++++++
 4 files changed, 48 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/mm/physaddr.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 969ef88..83b95bc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -6,6 +6,7 @@ config ARM64
 	select ACPI_MCFG if ACPI
 	select ACPI_SPCR_TABLE if ACPI
 	select ARCH_CLOCKSOURCE_DATA
+	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
 	select ARCH_HAS_ELF_RANDOMIZE
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d773e2c..eac3dbb 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -167,11 +167,19 @@ extern u64 kimage_voffset;
  * private definitions which should NOT be used outside memory.h
  * files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x) ({						\
+#define __virt_to_phys_nodebug(x) ({					\
 	phys_addr_t __x = (phys_addr_t)(x);				\
 	__x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET :	\
 				 (__x - kimage_voffset); })
 
+#ifdef CONFIG_DEBUG_VIRTUAL
+extern unsigned long __virt_to_phys(unsigned long x);
+extern unsigned long __phys_addr_symbol(unsigned long x);
+#else
+#define __virt_to_phys(x)	__virt_to_phys_nodebug(x)
+#define __phys_addr_symbol	__pa
+#endif
+
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
@@ -202,6 +210,8 @@ static inline void *phys_to_virt(phys_addr_t x)
  * Drivers should NOT use these either.
  */
 #define __pa(x)		__virt_to_phys((unsigned long)(x))
+#define __pa_symbol(x)		__phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
+#define __pa_nodebug(x)	__virt_to_phys_nodebug((unsigned long)(x))
 #define __va(x)		((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 #define virt_to_pfn(x)	__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 54bb209..377f4ab 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -5,6 +5,8 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP)	+= dump.o
 obj-$(CONFIG_NUMA)		+= numa.o
+CFLAGS_physaddr.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
+obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
new file mode 100644
index 0000000..874c782
--- /dev/null
+++ b/arch/arm64/mm/physaddr.c
@@ -0,0 +1,34 @@
+#include <linux/mmdebug.h>
+
+#include <asm/memory.h>
+
+unsigned long __virt_to_phys(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	if (__x & BIT(VA_BITS - 1)) {
+		/*
+		 * The linear kernel range starts in the middle of the virtual
+		 * address space. Testing the top bit for the start of the
+		 * region is a sufficient check.
+		 */
+		return (__x & ~PAGE_OFFSET) + PHYS_OFFSET;
+	} else {
+		VIRTUAL_BUG_ON(x < kimage_vaddr || x >= (unsigned long)_end);
+		return (__x - kimage_voffset);
+	}
+}
+EXPORT_SYMBOL(__virt_to_phys);
+
+unsigned long __phys_addr_symbol(unsigned long x)
+{
+	phys_addr_t __x = (phys_addr_t)x;
+
+	/*
+	 * This is intentionally different from the check above to be a
+	 * tighter check for symbols.
+	 */
+	VIRTUAL_BUG_ON(x < kimage_vaddr + TEXT_OFFSET || x > (unsigned long)_end);
+	return (__x - kimage_voffset);
+}
+EXPORT_SYMBOL(__phys_addr_symbol);
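
For illustration only, and not part of the patch: a minimal sketch of the
misuse these checks catch, assuming an arm64 kernel with this series applied
and CONFIG_DEBUG_VIRTUAL=y. A vmalloc() buffer lives outside the linear map
(bit VA_BITS - 1 clear, but not inside the kernel image), so the
VIRTUAL_BUG_ON() in __virt_to_phys() fires the moment the bad translation is
attempted, instead of silently returning a bogus physical address. The module
and its names are hypothetical:

#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static int __init bad_v2p_init(void)
{
	void *p = vmalloc(PAGE_SIZE);	/* non-linear mapping */
	phys_addr_t pa;

	if (!p)
		return -ENOMEM;

	/*
	 * WRONG: virt_to_phys() is only defined for linear-map addresses.
	 * With CONFIG_DEBUG_VIRTUAL this BUGs here; without it, pa is
	 * silently garbage.
	 */
	pa = virt_to_phys(p);
	pr_info("bogus pa: %pa\n", &pa);

	vfree(p);
	return 0;
}
module_init(bad_v2p_init);
MODULE_LICENSE("GPL");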
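
In the same spirit, a sketch (again not part of the patch, with
empty_zero_page standing in for any kernel-image symbol) of the split between
the interfaces: __pa()/virt_to_phys() stay valid for linear-map addresses,
while image symbols can go through the new __pa_symbol(), whose
__phys_addr_symbol() applies the tighter kimage_vaddr + TEXT_OFFSET .. _end
window. The RELOC_HIDE() in __pa_symbol() keeps the compiler from drawing
object-bounds conclusions from arithmetic on the symbol address, mirroring
the x86 definition.

#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <asm/pgtable.h>

static void pa_examples(void)
{
	/* Image symbol: the tighter __phys_addr_symbol() window applies. */
	phys_addr_t zero_pa = __pa_symbol(empty_zero_page);

	/* Linear-map address: plain __pa()/virt_to_phys() remain valid. */
	void *buf = kmalloc(64, GFP_KERNEL);
	phys_addr_t buf_pa = buf ? __pa(buf) : 0;

	pr_info("zero page at %pa, buf at %pa\n", &zero_pa, &buf_pa);
	kfree(buf);
}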