From patchwork Tue Apr 28 19:44:43 2020
X-Patchwork-Id: 11515431
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: Catalin Marinas, Russell King, Geert Uytterhoeven, Will Deacon,
    linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/7] mm: Document x86 uses a linked list of pgds
Date: Tue, 28 Apr 2020 12:44:43 -0700
Message-Id: <20200428194449.22615-2-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

x86 uses page->lru of the pages used for pgds, but that's not
immediately obvious to anyone looking to make changes.  Add a
struct list_head to the union so it's clearly in use for pgds.
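
To make the existing convention concrete, here is a minimal sketch of
how x86 already chains pgd pages through page->lru (patterned on
arch/x86/mm/pgtable.c; simplified, and not part of this patch):

	/* page->lru of each pgd page doubles as the pgd_list link */
	static LIST_HEAD(pgd_list);

	static void pgd_list_add(pgd_t *pgd)
	{
		struct page *page = virt_to_page(pgd);

		list_add(&page->lru, &pgd_list);
	}

With this patch, the same storage can be named page->pgd_list, which
documents the usage at the point of declaration.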
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Ira Weiny
---
 include/linux/mm_types.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4aba6c0c2ba8..9bb34e2cd5a5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -142,8 +142,13 @@ struct page {
 			struct list_head deferred_list;
 		};
 		struct {	/* Page table pages */
-			unsigned long _pt_pad_1;	/* compound_head */
-			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+			union {
+				struct list_head pgd_list;	/* x86 */
+				struct {
+					unsigned long _pt_pad_1;
+					pgtable_t pmd_huge_pte;
+				};
+			};
 			unsigned long _pt_pad_2;	/* mapping */
 			union {
 				struct mm_struct *pt_mm; /* x86 pgds only */

From patchwork Tue Apr 28 19:44:44 2020
X-Patchwork-Id: 11515401
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH 2/7] mm: Move pt_mm within struct page
Date: Tue, 28 Apr 2020 12:44:44 -0700
Message-Id: <20200428194449.22615-3-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

Instead of a per-arch word within struct page, use a formerly reserved
word.  This word is shared with page->mapping, so it must be cleared
before the page is freed, since it is checked in free_pages().
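
As an illustration of the aliasing (a simplified sketch, not the real
struct page definition), pt_mm now occupies the word that other users
of struct page see as page->mapping:

	struct page_sketch {		/* illustrative only */
		unsigned long flags;
		union {
			struct address_space *mapping;	/* file/anon pages */
			struct mm_struct *pt_mm;	/* page table pages */
		};
	};

Because the page allocator treats a non-NULL mapping as a sign of a
bad page, pt_mm must read as NULL by the time the page is returned.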

Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/x86/mm/pgtable.c    | 1 +
 include/linux/mm_types.h | 7 ++-----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 7bd2c3a52297..f5f46737aea0 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -95,6 +95,7 @@ static inline void pgd_list_del(pgd_t *pgd)
 	struct page *page = virt_to_page(pgd);
 
 	list_del(&page->lru);
+	page->pt_mm = NULL;
 }
 
 #define UNSHARED_PTRS_PER_PGD \
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 9bb34e2cd5a5..7efa12f4626f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -149,11 +149,8 @@ struct page {
 					pgtable_t pmd_huge_pte;
 				};
 			};
-			unsigned long _pt_pad_2;	/* mapping */
-			union {
-				struct mm_struct *pt_mm; /* x86 pgds only */
-				atomic_t pt_frag_refcount; /* powerpc */
-			};
+			struct mm_struct *pt_mm;
+			atomic_t pt_frag_refcount;	/* powerpc */
 #if ALLOC_SPLIT_PTLOCKS
 			spinlock_t *ptl;
 #else

From patchwork Tue Apr 28 19:44:45 2020
X-Patchwork-Id: 11515425
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH 3/7] arm: Thread mm_struct throughout page table allocation
Date: Tue, 28 Apr 2020 12:44:45 -0700
Message-Id: <20200428194449.22615-4-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

An upcoming patch will pass mm_struct to the page table constructor.
Make sure ARM has the appropriate mm_struct at the point it needs to
call the constructor.

Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/arm/mm/mmu.c | 64 +++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ec8d0008bfa1..e5275bfbe695 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -690,7 +690,9 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 
 #define vectors_base()	(vectors_high() ? 0xffff0000 : 0)
 
-static void __init *early_alloc(unsigned long sz)
+typedef void *(arm_pt_alloc_t)(unsigned long size, struct mm_struct *);
+
+static void __init *early_alloc(unsigned long sz, struct mm_struct *mm)
 {
 	void *ptr = memblock_alloc(sz, sz);
 
@@ -701,7 +703,7 @@ static void __init *early_alloc(unsigned long sz)
 	return ptr;
 }
 
-static void *__init late_alloc(unsigned long sz)
+static void *__init late_alloc(unsigned long sz, struct mm_struct *mm)
 {
 	void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL,
 					     get_order(sz));
@@ -710,31 +712,30 @@ static void *__init late_alloc(unsigned long sz)
 	return ptr;
 }
 
-static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
-				unsigned long prot,
-				void *(*alloc)(unsigned long sz))
+static pte_t * __init arm_pte_alloc(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long addr, unsigned long prot,
+				arm_pt_alloc_t alloc)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
+		pte_t *pte = alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE, mm);
 		__pmd_populate(pmd, __pa(pte), prot);
 	}
 	BUG_ON(pmd_bad(*pmd));
 	return pte_offset_kernel(pmd, addr);
 }
 
-static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,
-				      unsigned long prot)
+static pte_t * __init early_pte_alloc(struct mm_struct *mm, pmd_t *pmd,
+				      unsigned long addr, unsigned long prot)
 {
-	return arm_pte_alloc(pmd, addr, prot, early_alloc);
+	return arm_pte_alloc(mm, pmd, addr, prot, early_alloc);
 }
 
-static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
-				  unsigned long end, unsigned long pfn,
-				  const struct mem_type *type,
-				  void *(*alloc)(unsigned long sz),
-				  bool ng)
+static void __init alloc_init_pte(struct mm_struct *mm, pmd_t *pmd,
+				  unsigned long addr, unsigned long end,
+				  unsigned long pfn, const struct mem_type *type,
+				  arm_pt_alloc_t alloc, bool ng)
 {
-	pte_t *pte = arm_pte_alloc(pmd, addr, type->prot_l1, alloc);
+	pte_t *pte = arm_pte_alloc(mm, pmd, addr, type->prot_l1, alloc);
 	do {
 		set_pte_ext(pte, pfn_pte(pfn, __pgprot(type->prot_pte)),
 			    ng ? PTE_EXT_NG : 0);
@@ -769,10 +770,10 @@ static void __init __map_init_section(pmd_t *pmd, unsigned long addr,
 	flush_pmd_entry(p);
 }
 
-static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
-				      unsigned long end, phys_addr_t phys,
-				      const struct mem_type *type,
-				      void *(*alloc)(unsigned long sz), bool ng)
+static void __init alloc_init_pmd(struct mm_struct *mm, pud_t *pud,
+				      unsigned long addr, unsigned long end,
+				      phys_addr_t phys, const struct mem_type *type,
+				      arm_pt_alloc_t alloc, bool ng)
 {
 	pmd_t *pmd = pmd_offset(pud, addr);
 	unsigned long next;
@@ -792,7 +793,7 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 		    ((addr | next | phys) & ~SECTION_MASK) == 0) {
 			__map_init_section(pmd, addr, next, phys, type, ng);
 		} else {
-			alloc_init_pte(pmd, addr, next,
+			alloc_init_pte(mm, pmd, addr, next,
 				       __phys_to_pfn(phys), type, alloc, ng);
 		}
 
@@ -801,17 +802,17 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 	} while (pmd++, addr = next, addr != end);
 }
 
-static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
-				  unsigned long end, phys_addr_t phys,
-				  const struct mem_type *type,
-				  void *(*alloc)(unsigned long sz), bool ng)
+static void __init alloc_init_pud(struct mm_struct *mm, pgd_t *pgd,
+				  unsigned long addr, unsigned long end,
+				  phys_addr_t phys, const struct mem_type *type,
+				  arm_pt_alloc_t alloc, bool ng)
 {
 	pud_t *pud = pud_offset(pgd, addr);
 	unsigned long next;
 
 	do {
 		next = pud_addr_end(addr, end);
-		alloc_init_pmd(pud, addr, next, phys, type, alloc, ng);
+		alloc_init_pmd(mm, pud, addr, next, phys, type, alloc, ng);
 		phys += next - addr;
 	} while (pud++, addr = next, addr != end);
 }
@@ -879,8 +880,7 @@ static void __init create_36bit_mapping(struct mm_struct *mm,
 #endif	/* !CONFIG_ARM_LPAE */
 
 static void __init __create_mapping(struct mm_struct *mm, struct map_desc *md,
-				    void *(*alloc)(unsigned long sz),
-				    bool ng)
+				    arm_pt_alloc_t alloc, bool ng)
 {
 	unsigned long addr, length, end;
 	phys_addr_t phys;
@@ -914,7 +914,7 @@ static void __init __create_mapping(struct mm_struct *mm, struct map_desc *md,
 	do {
 		unsigned long next = pgd_addr_end(addr, end);
 
-		alloc_init_pud(pgd, addr, next, phys, type, alloc, ng);
+		alloc_init_pud(mm, pgd, addr, next, phys, type, alloc, ng);
 
 		phys += next - addr;
 		addr = next;
@@ -1316,7 +1316,7 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
 	/*
 	 * Allocate the vector page early.
 	 */
-	vectors = early_alloc(PAGE_SIZE * 2);
+	vectors = early_alloc(PAGE_SIZE * 2, &init_mm);
 
 	early_trap_init(vectors);
 
@@ -1413,11 +1413,11 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
 static void __init kmap_init(void)
 {
 #ifdef CONFIG_HIGHMEM
-	pkmap_page_table = early_pte_alloc(pmd_off_k(PKMAP_BASE),
+	pkmap_page_table = early_pte_alloc(&init_mm, pmd_off_k(PKMAP_BASE),
 		PKMAP_BASE, _PAGE_KERNEL_TABLE);
 #endif
 
-	early_pte_alloc(pmd_off_k(FIXADDR_START), FIXADDR_START,
+	early_pte_alloc(&init_mm, pmd_off_k(FIXADDR_START), FIXADDR_START,
 			_PAGE_KERNEL_TABLE);
 }
 
@@ -1630,7 +1630,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 	top_pmd = pmd_off_k(0xffff0000);
 
 	/* allocate the zero page. */
-	zero_page = early_alloc(PAGE_SIZE);
+	zero_page = early_alloc(PAGE_SIZE, &init_mm);
 
 	bootmem_init();

From patchwork Tue Apr 28 19:44:46 2020
X-Patchwork-Id: 11515419
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH 4/7] arm64: Thread mm_struct throughout page table allocation
Date: Tue, 28 Apr 2020 12:44:46 -0700
Message-Id: <20200428194449.22615-5-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

An upcoming patch will pass mm_struct to the page table constructor.
Make sure arm64 has the appropriate mm_struct at the point it needs to
call the constructor.
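
The shape of the change in this and the previous patch, reduced to a
sketch (the names walk_one_level and pt_alloc_t are illustrative, not
taken from the tree): the allocator callback gains an mm parameter,
and each level of the page-table walk forwards the mm it was handed,
so the eventual constructor can record it.

	typedef phys_addr_t (pt_alloc_t)(int shift, struct mm_struct *);

	static void walk_one_level(struct mm_struct *mm, unsigned long addr,
				   pt_alloc_t pgtable_alloc)
	{
		/* the callback now knows which mm the new table belongs to */
		phys_addr_t table = pgtable_alloc(PAGE_SHIFT, mm);

		/* ... install the table and recurse, passing mm along ... */
	}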

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mark Rutland
---
 arch/arm64/mm/mmu.c | 89 ++++++++++++++++++++++-----------------------
 1 file changed, 43 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a374e4f51a62..69ecc83c3be0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -88,7 +88,9 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 }
 EXPORT_SYMBOL(phys_mem_access_prot);
 
-static phys_addr_t __init early_pgtable_alloc(int shift)
+typedef phys_addr_t (arm_pt_alloc_t)(int size, struct mm_struct *);
+
+static phys_addr_t __init early_pgtable_alloc(int shift, struct mm_struct *mm)
 {
 	phys_addr_t phys;
 	void *ptr;
@@ -162,11 +164,9 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 	pte_clear_fixmap();
 }
 
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int),
-				int flags)
+static void alloc_init_cont_pte(struct mm_struct *mm, pmd_t *pmdp,
+		unsigned long addr, unsigned long end, phys_addr_t phys,
+		pgprot_t prot, arm_pt_alloc_t pgtable_alloc, int flags)
 {
 	unsigned long next;
 	pmd_t pmd = READ_ONCE(*pmdp);
@@ -175,7 +175,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	if (pmd_none(pmd)) {
 		phys_addr_t pte_phys;
 		BUG_ON(!pgtable_alloc);
-		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		pte_phys = pgtable_alloc(PAGE_SHIFT, mm);
 		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
 		pmd = READ_ONCE(*pmdp);
 	}
@@ -197,9 +197,9 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	} while (addr = next, addr != end);
 }
 
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot,
-		     phys_addr_t (*pgtable_alloc)(int), int flags)
+static void init_pmd(struct mm_struct *mm, pud_t *pudp, unsigned long addr,
+		unsigned long end, phys_addr_t phys, pgprot_t prot,
+		arm_pt_alloc_t pgtable_alloc, int flags)
 {
 	unsigned long next;
 	pmd_t *pmdp;
@@ -222,7 +222,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
 						      READ_ONCE(pmd_val(*pmdp))));
 		} else {
-			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+			alloc_init_cont_pte(mm, pmdp, addr, next, phys, prot,
 					    pgtable_alloc, flags);
 
 			BUG_ON(pmd_val(old_pmd) != 0 &&
@@ -234,10 +234,9 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 	pmd_clear_fixmap();
 }
 
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int), int flags)
+static void alloc_init_cont_pmd(struct mm_struct *mm, pud_t *pudp,
+		unsigned long addr, unsigned long end, phys_addr_t phys,
+		pgprot_t prot, arm_pt_alloc_t pgtable_alloc, int flags)
 {
 	unsigned long next;
 	pud_t pud = READ_ONCE(*pudp);
@@ -249,7 +248,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 	if (pud_none(pud)) {
 		phys_addr_t pmd_phys;
 		BUG_ON(!pgtable_alloc);
-		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		pmd_phys = pgtable_alloc(PMD_SHIFT, mm);
 		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
 		pud = READ_ONCE(*pudp);
 	}
@@ -265,7 +264,8 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+		init_pmd(mm, pudp, addr, next, phys, __prot, pgtable_alloc,
+			 flags);
 
 		phys += next - addr;
 	} while (addr = next, addr != end);
@@ -283,10 +283,9 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next,
 	return true;
 }
 
-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(int),
-			   int flags)
+static void alloc_init_pud(struct mm_struct *mm, pgd_t *pgdp,
+		unsigned long addr, unsigned long end, phys_addr_t phys,
+		pgprot_t prot, arm_pt_alloc_t pgtable_alloc, int flags)
 {
 	unsigned long next;
 	pud_t *pudp;
@@ -295,7 +294,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	if (pgd_none(pgd)) {
 		phys_addr_t pud_phys;
 		BUG_ON(!pgtable_alloc);
-		pud_phys = pgtable_alloc(PUD_SHIFT);
+		pud_phys = pgtable_alloc(PUD_SHIFT, mm);
 		__pgd_populate(pgdp, pud_phys, PUD_TYPE_TABLE);
 		pgd = READ_ONCE(*pgdp);
 	}
@@ -321,7 +320,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
 						      READ_ONCE(pud_val(*pudp))));
 		} else {
-			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+			alloc_init_cont_pmd(mm, pudp, addr, next, phys, prot,
 					    pgtable_alloc, flags);
 
 			BUG_ON(pud_val(old_pud) != 0 &&
@@ -333,11 +332,9 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	pud_clear_fixmap();
 }
 
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
-				 unsigned long virt, phys_addr_t size,
-				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(int),
-				 int flags)
+static void __create_pgd_mapping(struct mm_struct *mm, pgd_t *pgdir,
+		phys_addr_t phys, unsigned long virt, phys_addr_t size,
+		pgprot_t prot, arm_pt_alloc_t pgtable_alloc, int flags)
 {
 	unsigned long addr, end, next;
 	pgd_t *pgdp = pgd_offset_raw(pgdir, virt);
@@ -355,13 +352,13 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 	do {
 		next = pgd_addr_end(addr, end);
-		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
+		alloc_init_pud(mm, pgdp, addr, next, phys, prot, pgtable_alloc,
 			       flags);
 		phys += next - addr;
 	} while (pgdp++, addr = next, addr != end);
 }
 
-static phys_addr_t __pgd_pgtable_alloc(int shift)
+static phys_addr_t __pgd_pgtable_alloc(int shift, struct mm_struct *mm)
 {
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
 	BUG_ON(!ptr);
@@ -371,9 +368,9 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 	return __pa(ptr);
 }
 
-static phys_addr_t pgd_pgtable_alloc(int shift)
+static phys_addr_t pgd_pgtable_alloc(int shift, struct mm_struct *mm)
 {
-	phys_addr_t pa = __pgd_pgtable_alloc(shift);
+	phys_addr_t pa = __pgd_pgtable_alloc(shift, mm);
 
 	/*
 	 * Call proper page table ctor in case later we need to
@@ -404,8 +401,8 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 			&phys, virt);
 		return;
 	}
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
-			     NO_CONT_MAPPINGS);
+	__create_pgd_mapping(&init_mm, init_mm.pgd, phys, virt, size, prot,
+			     NULL, NO_CONT_MAPPINGS);
 }
 
 void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
@@ -419,7 +416,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 	if (page_mappings_only)
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
-	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
+	__create_pgd_mapping(mm, mm->pgd, phys, virt, size, prot,
 			     pgd_pgtable_alloc, flags);
 }
 
@@ -432,8 +429,8 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 		return;
 	}
 
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
-			     NO_CONT_MAPPINGS);
+	__create_pgd_mapping(&init_mm, init_mm.pgd, phys, virt, size, prot,
+			     NULL, NO_CONT_MAPPINGS);
 
 	/* flush the TLBs after updating live kernel mappings */
 	flush_tlb_kernel_range(virt, virt + size);
@@ -442,8 +439,8 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
 				  phys_addr_t end, pgprot_t prot, int flags)
 {
-	__create_pgd_mapping(pgdp, start, __phys_to_virt(start), end - start,
-			     prot, early_pgtable_alloc, flags);
+	__create_pgd_mapping(&init_mm, pgdp, start, __phys_to_virt(start),
+			     end - start, prot, early_pgtable_alloc, flags);
 }
 
 void __init mark_linear_text_alias_ro(void)
@@ -547,8 +544,8 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(pa_start));
 	BUG_ON(!PAGE_ALIGNED(size));
 
-	__create_pgd_mapping(pgdp, pa_start, (unsigned long)va_start, size, prot,
-			     early_pgtable_alloc, flags);
+	__create_pgd_mapping(&init_mm, pgdp, pa_start, (unsigned long)va_start,
+			     size, prot, early_pgtable_alloc, flags);
 
 	if (!(vm_flags & VM_NO_GUARD))
 		size += PAGE_SIZE;
@@ -591,8 +588,8 @@ static int __init map_entry_trampoline(void)
 
 	/* Map only the text into the trampoline page table */
 	memset(tramp_pg_dir, 0, PGD_SIZE);
-	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
-			     prot, __pgd_pgtable_alloc, 0);
+	__create_pgd_mapping(&init_mm, tramp_pg_dir, pa_start, TRAMP_VALIAS,
+			     PAGE_SIZE, prot, __pgd_pgtable_alloc, 0);
 
 	/* Map both the text and data into the kernel page table */
 	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
@@ -1381,9 +1378,9 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	if (rodata_full || debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
-	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
-			     size, params->pgprot, __pgd_pgtable_alloc,
-			     flags);
+	__create_pgd_mapping(&init_mm, swapper_pg_dir, start,
+			     __phys_to_virt(start), size, params->pgprot,
+			     __pgd_pgtable_alloc, flags);
 
 	memblock_clear_nomap(start, size);

From patchwork Tue Apr 28 19:44:47 2020
X-Patchwork-Id: 11515421
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH 5/7] m68k: Thread mm_struct throughout page table allocation
Date: Tue, 28 Apr 2020 12:44:47 -0700
Message-Id: <20200428194449.22615-6-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

An upcoming patch will pass mm_struct to the page table constructor.
Make sure m68k has the appropriate mm_struct at the point it needs to
call the constructor.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
---
 arch/m68k/include/asm/motorola_pgalloc.h | 10 +++++-----
 arch/m68k/mm/motorola.c                  |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/include/asm/motorola_pgalloc.h b/arch/m68k/include/asm/motorola_pgalloc.h
index c66e42917912..dbac0c597397 100644
--- a/arch/m68k/include/asm/motorola_pgalloc.h
+++ b/arch/m68k/include/asm/motorola_pgalloc.h
@@ -15,12 +15,12 @@ enum m68k_table_types {
 };
 
 extern void init_pointer_table(void *table, int type);
-extern void *get_pointer_table(int type);
+extern void *get_pointer_table(int type, struct mm_struct *mm);
 extern int free_pointer_table(void *table, int type);
 
 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	return get_pointer_table(TABLE_PTE);
+	return get_pointer_table(TABLE_PTE, mm);
 }
 
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
@@ -30,7 +30,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	return get_pointer_table(TABLE_PTE);
+	return get_pointer_table(TABLE_PTE, mm);
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
@@ -47,7 +47,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
 
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
-	return get_pointer_table(TABLE_PMD);
+	return get_pointer_table(TABLE_PMD, mm);
 }
 
 static inline int pmd_free(struct mm_struct *mm, pmd_t *pmd)
@@ -69,7 +69,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	return get_pointer_table(TABLE_PGD);
+	return get_pointer_table(TABLE_PGD, mm);
 }
 
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index fc16190ec2d6..7743480be0cf 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -113,7 +113,7 @@ void __init init_pointer_table(void *table, int type)
 	return;
 }
 
-void *get_pointer_table(int type)
+void *get_pointer_table(int type, struct mm_struct *mm)
 {
 	ptable_desc *dp = ptable_list[type].next;
 	unsigned int mask = list_empty(&ptable_list[type]) ? 0 : PD_MARKBITS(dp);

From patchwork Tue Apr 28 19:44:48 2020
X-Patchwork-Id: 11515429
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH 6/7] mm: Set pt_mm in PTE constructor
Date: Tue, 28 Apr 2020 12:44:48 -0700
Message-Id: <20200428194449.22615-7-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

By setting pt_mm for pages in use as page tables, we can help with
debugging and lay the foundation for handling hardware errors in page
tables more gracefully.  It also opens up the possibility for adding
more sanity checks in the future.
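
For example, once every PTE page records its mm, a debugging check
like the following becomes possible (hypothetical; this series only
lays the groundwork and does not add it):

	static inline void check_pte_page(struct mm_struct *mm,
					  struct page *page)
	{
		/* a pte page should belong to the mm we are operating on */
		VM_BUG_ON_PAGE(page->pt_mm != mm, page);
	}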

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
---
 arch/arc/include/asm/pgalloc.h      | 2 +-
 arch/arm/mm/mmu.c                   | 2 +-
 arch/arm64/mm/mmu.c                 | 2 +-
 arch/m68k/include/asm/mcf_pgalloc.h | 2 +-
 arch/m68k/mm/motorola.c             | 2 +-
 arch/openrisc/include/asm/pgalloc.h | 2 +-
 arch/powerpc/mm/pgtable-frag.c      | 2 +-
 arch/s390/mm/pgalloc.c              | 2 +-
 arch/sparc/mm/init_64.c             | 2 +-
 arch/sparc/mm/srmmu.c               | 2 +-
 arch/xtensa/include/asm/pgalloc.h   | 2 +-
 include/asm-generic/pgalloc.h       | 2 +-
 include/linux/mm.h                  | 5 ++++-
 13 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index b747f2ec2928..5f6b1f3bc2a2 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -108,7 +108,7 @@ pte_alloc_one(struct mm_struct *mm)
 		return 0;
 	memzero((void *)pte_pg, PTRS_PER_PTE * sizeof(pte_t));
 	page = virt_to_page(pte_pg);
-	if (!pgtable_pte_page_ctor(page)) {
+	if (!pgtable_pte_page_ctor(page, mm)) {
 		__free_page(page);
 		return 0;
 	}
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e5275bfbe695..9c16c45570ba 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -707,7 +707,7 @@ static void *__init late_alloc(unsigned long sz, struct mm_struct *mm)
 {
 	void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL,
 					     get_order(sz));
-	if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr)))
+	if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr), mm))
 		BUG();
 	return ptr;
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 69ecc83c3be0..c706bed1e496 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -381,7 +381,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift, struct mm_struct *mm)
 	 * folded, and if so pgtable_pmd_page_ctor() becomes nop.
 	 */
 	if (shift == PAGE_SHIFT)
-		BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa)));
+		BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa), mm));
 	else if (shift == PMD_SHIFT)
 		BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa)));
 
diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
index bc1228e00518..369a3523e834 100644
--- a/arch/m68k/include/asm/mcf_pgalloc.h
+++ b/arch/m68k/include/asm/mcf_pgalloc.h
@@ -50,7 +50,7 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 	if (!page)
 		return NULL;
 
-	if (!pgtable_pte_page_ctor(page)) {
+	if (!pgtable_pte_page_ctor(page, mm)) {
 		__free_page(page);
 		return NULL;
 	}
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 7743480be0cf..6bb7c9f348ad 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -137,7 +137,7 @@ void *get_pointer_table(int type, struct mm_struct *mm)
 		 * m68k doesn't have SPLIT_PTE_PTLOCKS for not having
 		 * SMP.
 		 */
-		pgtable_pte_page_ctor(virt_to_page(page));
+		pgtable_pte_page_ctor(virt_to_page(page), mm);
 	}
 
 	mmu_page_ctor(page);
diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h
index da12a4c38c4b..1a80dfc928b5 100644
--- a/arch/openrisc/include/asm/pgalloc.h
+++ b/arch/openrisc/include/asm/pgalloc.h
@@ -75,7 +75,7 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm)
 	if (!pte)
 		return NULL;
 	clear_page(page_address(pte));
-	if (!pgtable_pte_page_ctor(pte)) {
+	if (!pgtable_pte_page_ctor(pte, mm)) {
 		__free_page(pte);
 		return NULL;
 	}
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
index ee4bd6d38602..59a8c85e01ac 100644
--- a/arch/powerpc/mm/pgtable-frag.c
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -61,7 +61,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
 		if (!page)
 			return NULL;
-		if (!pgtable_pte_page_ctor(page)) {
+		if (!pgtable_pte_page_ctor(page, mm)) {
 			__free_page(page);
 			return NULL;
 		}
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 498c98a312f4..0363828749e2 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -208,7 +208,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	page = alloc_page(GFP_KERNEL);
 	if (!page)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
+	if (!pgtable_pte_page_ctor(page, mm)) {
 		__free_page(page);
 		return NULL;
 	}
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 1cf0d666dea3..d2cc80828415 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2928,7 +2928,7 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
 	if (!page)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
+	if (!pgtable_pte_page_ctor(page, mm)) {
 		free_unref_page(page);
 		return NULL;
 	}
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index b7c94de70cca..019ff2019b55 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -382,7 +382,7 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 	if ((pte = (unsigned long)pte_alloc_one_kernel(mm)) == 0)
 		return NULL;
 	page = pfn_to_page(__nocache_pa(pte) >> PAGE_SHIFT);
-	if (!pgtable_pte_page_ctor(page)) {
+	if (!pgtable_pte_page_ctor(page, mm)) {
 		__free_page(page);
 		return NULL;
 	}
diff --git a/arch/xtensa/include/asm/pgalloc.h b/arch/xtensa/include/asm/pgalloc.h
index 1d38f0e755ba..43cc05255832 100644
--- a/arch/xtensa/include/asm/pgalloc.h
+++ b/arch/xtensa/include/asm/pgalloc.h
@@ -55,7 +55,7 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 	if (!pte)
 		return NULL;
 	page = virt_to_page(pte);
-	if (!pgtable_pte_page_ctor(page)) {
+	if (!pgtable_pte_page_ctor(page, mm)) {
 		__free_page(page);
 		return NULL;
 	}
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 73f7421413cb..24c2d6e194fb 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -63,7 +63,7 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
 	pte = alloc_page(gfp);
 	if (!pte)
 		return NULL;
-	if (!pgtable_pte_page_ctor(pte)) {
+	if (!pgtable_pte_page_ctor(pte, mm)) {
 		__free_page(pte);
 		return NULL;
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a323422d783..2a98eebeba91 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2157,11 +2157,13 @@ static inline void pgtable_init(void)
 	pgtable_cache_init();
 }
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
+static inline
+bool pgtable_pte_page_ctor(struct page *page, struct mm_struct *mm)
 {
 	if (!ptlock_init(page))
 		return false;
 	__SetPageTable(page);
+	page->pt_mm = mm;
 	inc_zone_page_state(page, NR_PAGETABLE);
 	return true;
 }
@@ -2170,6 +2172,7 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 {
 	ptlock_free(page);
 	__ClearPageTable(page);
+	page->pt_mm = NULL;
 	dec_zone_page_state(page, NR_PAGETABLE);
 }
 

From patchwork Tue Apr 28 19:44:49 2020
X-Patchwork-Id: 11515415
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Subject: [PATCH 7/7] mm: Set pt_mm in PMD constructor
Date: Tue, 28 Apr 2020 12:44:49 -0700
Message-Id: <20200428194449.22615-8-willy@infradead.org>
In-Reply-To: <20200428194449.22615-1-willy@infradead.org>

By setting pt_mm for pages in use as page tables, we can help with
debugging and lay the foundation for handling hardware errors in page
tables more gracefully.  It also opens up the possibility for adding
more sanity checks in the future.

Also set and clear the PageTable bit so that we know these pages are
page tables.

Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/arm64/include/asm/pgalloc.h   |  2 +-
 arch/arm64/mm/mmu.c                |  2 +-
 arch/powerpc/mm/book3s64/pgtable.c |  2 +-
 arch/s390/include/asm/pgalloc.h    |  2 +-
 arch/x86/include/asm/pgalloc.h     |  2 +-
 arch/x86/mm/pgtable.c              |  2 +-
 include/linux/mm.h                 | 13 +++++++++++--
 7 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 172d76fa0245..920da9c5786c 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -30,7 +30,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 	page = alloc_page(gfp);
 	if (!page)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
+	if (!pgtable_pmd_page_ctor(page, mm)) {
 		__free_page(page);
 		return NULL;
 	}
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c706bed1e496..b7bdde1990be 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -383,7 +383,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift, struct mm_struct *mm)
 	if (shift == PAGE_SHIFT)
 		BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa), mm));
 	else if (shift == PMD_SHIFT)
-		BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa)));
+		BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa), mm));
 
 	return pa;
 }
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index e0bb69c616e4..9fda5287c197 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -297,7 +297,7 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 	page = alloc_page(gfp);
 	if (!page)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
+	if (!pgtable_pmd_page_ctor(page, mm)) {
 		__free_pages(page, 0);
 		return NULL;
 	}
diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 74a352f8c0d1..bebad4e5d42a 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
 	if (!table)
 		return NULL;
 	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
-	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
+	if (!pgtable_pmd_page_ctor(virt_to_page(table), mm)) {
 		crst_table_free(mm, table);
 		return NULL;
 	}
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index 29aa7859bdee..33514f0a9e79 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -96,7 +96,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 	page = alloc_pages(gfp, 0);
 	if (!page)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
+	if (!pgtable_pmd_page_ctor(page, mm)) {
 		__free_pages(page, 0);
 		return NULL;
 	}
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f5f46737aea0..8f4255662c5a 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -229,7 +229,7 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 		pmd_t *pmd = (pmd_t *)__get_free_page(gfp);
 		if (!pmd)
 			failed = true;
-		if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) {
+		if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd), mm)) {
 			free_page((unsigned long)pmd);
 			pmd = NULL;
 			failed = true;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2a98eebeba91..e2924d900fc5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2216,11 +2216,14 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_to_page(pmd));
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline
+bool pgtable_pmd_page_ctor(struct page *page, struct mm_struct *mm)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	page->pmd_huge_pte = NULL;
 #endif
+	__SetPageTable(page);
+	page->pt_mm = mm;
 	return ptlock_init(page);
 }
 
@@ -2229,6 +2232,8 @@ static inline void pgtable_pmd_page_dtor(struct page *page)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
 #endif
+	__ClearPageTable(page);
+	page->pt_mm = NULL;
 	ptlock_free(page);
 }
 
@@ -2241,7 +2246,11 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page) { return true; }
+static inline
+bool pgtable_pmd_page_ctor(struct page *page, struct mm_struct *mm)
+{
+	return true;
+}
 static inline void pgtable_pmd_page_dtor(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
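
Taken together, the call pattern every architecture now follows for a
pmd page looks like this sketch (modelled on the x86 pmd_alloc_one()
hunk above; the function name is illustrative, and error handling and
gfp details vary by architecture):

	static pmd_t *pmd_alloc_one_sketch(struct mm_struct *mm)
	{
		struct page *page = alloc_pages(GFP_PGTABLE_USER, 0);

		if (!page)
			return NULL;
		/* the ctor now sets PageTable and records page->pt_mm = mm */
		if (!pgtable_pmd_page_ctor(page, mm)) {
			__free_pages(page, 0);
			return NULL;
		}
		return (pmd_t *)page_address(page);
	}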