From patchwork Mon Jul 29 16:21:06 2019
From: Steve Capper <steve.capper@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH V4 00/11] 52-bit kernel + user VAs
Date: Mon, 29 Jul 2019 17:21:06 +0100
Message-Id: <20190729162117.832-1-steve.capper@arm.com>
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, catalin.marinas@arm.com,
    bhsharma@redhat.com, Steve Capper <steve.capper@arm.com>, maz@kernel.org,
    will@kernel.org

This patch series adds support for 52-bit kernel VAs using some of the
machinery already introduced by the 52-bit userspace VA code in 5.0.

As 52-bit virtual address support is an optional hardware feature,
software support for 52-bit kernel VAs needs to be determined at early
boot time. If hardware support is not available, the kernel falls back
to 48-bit VAs.

A significant proportion of this series focuses on "de-constifying"
VA_BITS related constants. To allow a KASAN shadow region whose size
changes at boot time, KASAN_SHADOW_END must be fixed for both 48-bit
and 52-bit VAs, with the shadow start address "growing" downwards
instead (a small C sketch of this arrangement follows the changelog
below). It is also highly desirable to keep the function addresses in
the kernel .text identical across VA sizes. Both of these requirements
mean the kernel address space halves have to be flipped such that the
direct linear map occupies the lower addresses.

In V4 of this series, an extra documentation patch is added to explain
both the memory layout and the implementation of 52-bit support. Also
added is a guard region after VMEMMAP to avoid ambiguity with IS_ERR
style pointers. Finally, the bitmask optimisations for VMEMMAP and
PAGE_OFFSET are replaced with addition/subtraction in a new first
patch for the series.

In V3 of this series, the 52-bit user/48-bit kernel option is removed,
leaving a single 52-bit VA option instead. The offset_ttbr1 conditional
logic has been re-worked to read a system register directly rather than
rely on the alternatives framework (I couldn't actually see a hot path
calling offset_ttbr1, and some parts of early boot use offset_ttbr1
before the alternatives framework has been applied). Also, some
spurious de-constifying changes have been removed.

In V2 of this series (apologies for the long delay from V1), the major
change is that PAGE_OFFSET is retained as a constant, which allows for
much faster virt_to_page computations. This is achieved by expanding
the VMEMMAP region to accommodate a disjoint 52-bit/48-bit direct
linear map. This has worked well in my testing, but I would appreciate
any feedback on this if it needs changing. To aid git bisect, this
logic is broken down into a few smaller patches.
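
As promised above, here is a minimal user-space C sketch (not the
kernel code from this series) of the fixed-end / growing-start KASAN
arrangement. KASAN_SHADOW_SCALE_SHIFT follows the generic KASAN
convention of one shadow byte per eight bytes of memory; the
KASAN_SHADOW_OFFSET value is an arbitrary placeholder for illustration
only, whereas the series generates the real values with
Documentation/arm64/kasan-offsets.sh:

#include <stdio.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* generic KASAN: 1 shadow byte per 8 bytes */

/* Placeholder for illustration; the series derives real values via kasan-offsets.sh */
#define KASAN_SHADOW_OFFSET	0xdffffff800000000UL

/* The shadow end is pinned, independent of the VA size chosen at boot... */
#define KASAN_SHADOW_END \
	((1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)

/* ...so the shadow start simply "grows" downwards as VA_BITS increases */
static uint64_t kasan_shadow_start(unsigned int va_bits)
{
	return KASAN_SHADOW_END - (1UL << (va_bits - KASAN_SHADOW_SCALE_SHIFT));
}

int main(void)
{
	printf("48-bit VAs: shadow %016llx - %016llx\n",
	       (unsigned long long)kasan_shadow_start(48),
	       (unsigned long long)KASAN_SHADOW_END);
	printf("52-bit VAs: shadow %016llx - %016llx\n",
	       (unsigned long long)kasan_shadow_start(52),
	       (unsigned long long)KASAN_SHADOW_END);
	return 0;
}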
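
The V2 point about keeping PAGE_OFFSET constant can be sketched in the
same spirit. This is an illustration of the arithmetic rather than the
arch code: with PAGE_OFFSET and VMEMMAP_START both compile-time
constants, and the VMEMMAP window sized for the full 52-bit linear map,
a virt_to_page()-style lookup stays a subtract/shift/add with no
runtime VA size anywhere in the path. The struct page size and the
exact VMEMMAP placement below are assumptions made for the example
(the series additionally places a guard region after VMEMMAP so its
pointers cannot be confused with IS_ERR values):

#include <stdint.h>

#define VA_BITS_MAX	52			/* linear map sized for the maximum */
#define PAGE_SHIFT	12
#define STRUCT_PAGE_SIZE 64			/* assumed sizeof(struct page) for the example */

/* Flipped layout: the linear map starts at the bottom of the kernel VA space */
#define PAGE_OFFSET	(-(1UL << VA_BITS_MAX))	/* 0xfff0000000000000 */

/* vmemmap covers the whole 52-bit linear map even on 48-bit-only hardware */
#define VMEMMAP_SIZE	((1UL << (VA_BITS_MAX - PAGE_SHIFT)) * STRUCT_PAGE_SIZE)
#define VMEMMAP_START	(0UL - VMEMMAP_SIZE)	/* illustrative placement near the top */

struct page { char pad[STRUCT_PAGE_SIZE]; };

/* Every operand is a constant, so this folds down to subtract/shift/add */
static inline struct page *virt_to_page_sketch(uint64_t va)
{
	return (struct page *)VMEMMAP_START + ((va - PAGE_OFFSET) >> PAGE_SHIFT);
}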
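
Finally, the offset_ttbr1 re-work mentioned for V3 is implemented as an
assembly macro in arch/arm64/include/asm/assembler.h; the decision it
makes can be rendered in C roughly as below. Treat the VARange field
position in ID_AA64MMFR2_EL1 and the offset formula as my reading of
the architecture and of pgtable-hwdef.h rather than a quote of the
patch; the point is only that a 48-bit-only CPU needs its TTBR1 pointed
partway into page tables that were sized for 52 bits:

#include <stdint.h>

/* 64K pages, 52-bit VAs: PGDIR_SHIFT is 42, so the pgd spans 2^(52-42) entries */
#define PGDIR_SHIFT	42

/*
 * Distance from the start of a 52-bit-sized pgd to the entries that
 * cover the 48-bit region (my reading of pgtable-hwdef.h for 64K pages).
 */
#define TTBR1_BADDR_4852_OFFSET \
	(((1UL << (52 - PGDIR_SHIFT)) - (1UL << (48 - PGDIR_SHIFT))) * 8)

/* C rendering of the offset_ttbr1 decision (the series does this in asm) */
static inline uint64_t offset_ttbr1_sketch(uint64_t ttbr, uint64_t id_aa64mmfr2_el1)
{
	unsigned int varange = (id_aa64mmfr2_el1 >> 16) & 0xf;	/* VARange field */

	if (varange == 0)	/* hardware can only do 48-bit VAs */
		ttbr += TTBR1_BADDR_4852_OFFSET;
	return ttbr;
}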
Steve Capper (11):
  arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and
    VMEMMAP_START
  arm64: mm: Flip kernel VA space
  arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  arm64: mm: Introduce VA_BITS_MIN
  arm64: mm: Introduce VA_BITS_ACTUAL
  arm64: mm: Logic to make offset_ttbr1 conditional
  arm64: mm: Separate out vmemmap
  arm64: mm: Modify calculation of VMEMMAP_SIZE
  arm64: mm: Introduce 52-bit Kernel VAs
  docs: arm64: Add layout and 52-bit info to memory document

 Documentation/arm64/kasan-offsets.sh   |  27 ++++
 Documentation/arm64/memory.rst         | 177 ++++++++++++++++++++++---
 arch/arm64/Kconfig                     |  36 ++++-
 arch/arm64/Makefile                    |   8 --
 arch/arm64/include/asm/assembler.h     |  17 ++-
 arch/arm64/include/asm/efi.h           |   4 +-
 arch/arm64/include/asm/kasan.h         |  11 +-
 arch/arm64/include/asm/memory.h        |  56 +++++---
 arch/arm64/include/asm/mmu_context.h   |   4 +-
 arch/arm64/include/asm/pgtable-hwdef.h |   2 +-
 arch/arm64/include/asm/pgtable.h       |   6 +-
 arch/arm64/include/asm/processor.h     |   2 +-
 arch/arm64/kernel/head.S               |  13 +-
 arch/arm64/kernel/hibernate-asm.S      |   8 +-
 arch/arm64/kernel/hibernate.c          |   2 +-
 arch/arm64/kernel/kaslr.c              |   6 +-
 arch/arm64/kvm/va_layout.c             |  14 +-
 arch/arm64/mm/dump.c                   |  24 +++-
 arch/arm64/mm/fault.c                  |   4 +-
 arch/arm64/mm/init.c                   |  29 ++--
 arch/arm64/mm/kasan_init.c             |  11 +-
 arch/arm64/mm/mmu.c                    |   7 +-
 arch/arm64/mm/proc.S                   |   9 +-
 23 files changed, 361 insertions(+), 116 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh