From patchwork Tue Apr 14 15:34:46 2020
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Arnd Bergmann, Benjamin Herrenschmidt, Brian Cain, Catalin Marinas,
 Christophe Leroy, Fenghua Yu, Geert Uytterhoeven, Guan Xuetao,
 James Morse, Jonas Bonn, Julien Thierry, Ley Foon Tan, Marc Zyngier,
 Michael Ellerman, Paul Mackerras, Rich Felker, Russell King,
 Stafford Horne, Stefan Kristiansson, Suzuki K Poulose, Tony Luck,
 Will Deacon, Yoshinori Sato, kvmarm@lists.cs.columbia.edu,
 kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
 linux-sh@vger.kernel.org, nios2-dev@lists.rocketboards.org,
 openrisc@lists.librecores.org, uclinux-h8-devel@lists.sourceforge.jp,
 Mike Rapoport
Subject: [PATCH v4 05/14] ia64: add support for folded p4d page tables
Date: Tue, 14 Apr 2020 18:34:46 +0300
Message-Id: <20200414153455.21744-6-rppt@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200414153455.21744-1-rppt@kernel.org>
References: <20200414153455.21744-1-rppt@kernel.org>

From: Mike Rapoport

Implement the primitives necessary for folding the 4th (p4d) level, add
walks of the p4d level where appropriate, remove the usage of
__ARCH_USE_5LEVEL_HACK, and replace 5level-fixup.h with pgtable-nop4d.h.

Signed-off-by: Mike Rapoport
---
 arch/ia64/include/asm/pgalloc.h |  4 ++--
 arch/ia64/include/asm/pgtable.h | 17 ++++++++---------
 arch/ia64/mm/fault.c            |  7 ++++++-
 arch/ia64/mm/hugetlbpage.c      | 18 ++++++++++++------
 arch/ia64/mm/init.c             | 28 ++++++++++++++++++++++++----
 5 files changed, 52 insertions(+), 22 deletions(-)
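A note for reviewers, not part of the patch itself: with pgtable-nop4d.h
in place, p4d_offset() reduces to a cast of the pgd entry and the
p4d_none()/p4d_bad() checks constant-fold to 0, so the added walk step
compiles away on ia64. Below is a minimal sketch of the lookup pattern
this conversion moves callers to; the function name walk_sketch is made
up for illustration and appears nowhere in the patch, and the sketch
assumes kernel context (linux/mm.h pulls in the page table helpers).

#include <linux/mm.h>	/* pgd_offset(), p4d_offset(), pud_offset(), ... */

static pte_t *walk_sketch(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd = pgd_offset(mm, addr);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd) || pgd_bad(*pgd))
		return NULL;

	/* Folded level: pgtable-nop4d.h makes this just (p4d_t *)pgd. */
	p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d) || p4d_bad(*p4d))
		return NULL;

	pud = pud_offset(p4d, addr);
	if (pud_none(*pud) || pud_bad(*pud))
		return NULL;

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd) || pmd_bad(*pmd))
		return NULL;

	return pte_offset_kernel(pmd, addr);
}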
diff --git a/arch/ia64/include/asm/pgalloc.h b/arch/ia64/include/asm/pgalloc.h
index f4c491044882..2a3050345099 100644
--- a/arch/ia64/include/asm/pgalloc.h
+++ b/arch/ia64/include/asm/pgalloc.h
@@ -36,9 +36,9 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 #if CONFIG_PGTABLE_LEVELS == 4
 static inline void
-pgd_populate(struct mm_struct *mm, pgd_t * pgd_entry, pud_t * pud)
+p4d_populate(struct mm_struct *mm, p4d_t * p4d_entry, pud_t * pud)
 {
-	pgd_val(*pgd_entry) = __pa(pud);
+	p4d_val(*p4d_entry) = __pa(pud);
 }
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 0e7b645b76c6..787b0a91d255 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -283,12 +283,12 @@ extern unsigned long VMALLOC_END;
 #define pud_page(pud)		virt_to_page((pud_val(pud) + PAGE_OFFSET))
 
 #if CONFIG_PGTABLE_LEVELS == 4
-#define pgd_none(pgd)		(!pgd_val(pgd))
-#define pgd_bad(pgd)		(!ia64_phys_addr_valid(pgd_val(pgd)))
-#define pgd_present(pgd)	(pgd_val(pgd) != 0UL)
-#define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0UL)
-#define pgd_page_vaddr(pgd)	((unsigned long) __va(pgd_val(pgd) & _PFN_MASK))
-#define pgd_page(pgd)		virt_to_page((pgd_val(pgd) + PAGE_OFFSET))
+#define p4d_none(p4d)		(!p4d_val(p4d))
+#define p4d_bad(p4d)		(!ia64_phys_addr_valid(p4d_val(p4d)))
+#define p4d_present(p4d)	(p4d_val(p4d) != 0UL)
+#define p4d_clear(p4dp)		(p4d_val(*(p4dp)) = 0UL)
+#define p4d_page_vaddr(p4d)	((unsigned long) __va(p4d_val(p4d) & _PFN_MASK))
+#define p4d_page(p4d)		virt_to_page((p4d_val(p4d) + PAGE_OFFSET))
 #endif
 
 /*
@@ -386,7 +386,7 @@ pgd_offset (const struct mm_struct *mm, unsigned long address)
 #if CONFIG_PGTABLE_LEVELS == 4
 /* Find an entry in the second-level page table.. */
 #define pud_offset(dir,addr) \
-	((pud_t *) pgd_page_vaddr(*(dir)) + (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))
+	((pud_t *) p4d_page_vaddr(*(dir)) + (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))
 #endif
 
 /* Find an entry in the third-level page table.. */
@@ -580,10 +580,9 @@ extern struct page *zero_page_memmap_ptr;
 
 #if CONFIG_PGTABLE_LEVELS == 3
-#define __ARCH_USE_5LEVEL_HACK
 #include <asm-generic/pgtable-nopud.h>
 #endif
-#include <asm-generic/5level-fixup.h>
+#include <asm-generic/pgtable-nop4d.h>
 #include <asm/mm-arch-hooks.h>
 
 #endif /* _ASM_IA64_PGTABLE_H */
diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index 30d0c1fca99e..12242aa0dad1 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -29,6 +29,7 @@ static int
 mapped_kernel_page_is_present (unsigned long address)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *ptep, pte;
@@ -37,7 +38,11 @@ mapped_kernel_page_is_present (unsigned long address)
 	if (pgd_none(*pgd) || pgd_bad(*pgd))
 		return 0;
 
-	pud = pud_offset(pgd, address);
+	p4d = p4d_offset(pgd, address);
+	if (p4d_none(*p4d) || p4d_bad(*p4d))
+		return 0;
+
+	pud = pud_offset(p4d, address);
 	if (pud_none(*pud) || pud_bad(*pud))
 		return 0;
 
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index d16e419fd712..32352a73df0c 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -30,12 +30,14 @@ huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz)
 {
 	unsigned long taddr = htlbpage_to_page(addr);
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte = NULL;
 
 	pgd = pgd_offset(mm, taddr);
-	pud = pud_alloc(mm, pgd, taddr);
+	p4d = p4d_offset(pgd, taddr);
+	pud = pud_alloc(mm, p4d, taddr);
 	if (pud) {
 		pmd = pmd_alloc(mm, pud, taddr);
 		if (pmd)
@@ -49,17 +51,21 @@ huge_pte_offset (struct mm_struct *mm, unsigned long addr, unsigned long sz)
 {
 	unsigned long taddr = htlbpage_to_page(addr);
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte = NULL;
 
 	pgd = pgd_offset(mm, taddr);
 	if (pgd_present(*pgd)) {
-		pud = pud_offset(pgd, taddr);
-		if (pud_present(*pud)) {
-			pmd = pmd_offset(pud, taddr);
-			if (pmd_present(*pmd))
-				pte = pte_offset_map(pmd, taddr);
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_present(*p4d)) {
+			pud = pud_offset(p4d, taddr);
+			if (pud_present(*pud)) {
+				pmd = pmd_offset(pud, taddr);
+				if (pmd_present(*pmd))
+					pte = pte_offset_map(pmd, taddr);
+			}
 		}
 	}
 
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index d637b4ea3147..ca760f6cb18f 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -208,6 +208,7 @@ static struct page * __init
 put_kernel_page (struct page *page, unsigned long address, pgprot_t pgprot)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -215,7 +216,10 @@ put_kernel_page (struct page *page, unsigned long address, pgprot_t pgprot)
 	pgd = pgd_offset_k(address);	/* note: this is NOT pgd_offset()! */
 
 	{
-		pud = pud_alloc(&init_mm, pgd, address);
+		p4d = p4d_alloc(&init_mm, pgd, address);
+		if (!p4d)
+			goto out;
+		pud = pud_alloc(&init_mm, p4d, address);
 		if (!pud)
 			goto out;
 		pmd = pmd_alloc(&init_mm, pud, address);
@@ -382,6 +386,7 @@ int vmemmap_find_next_valid_pfn(int node, int i)
 
 	do {
 		pgd_t *pgd;
+		p4d_t *p4d;
 		pud_t *pud;
 		pmd_t *pmd;
 		pte_t *pte;
@@ -392,7 +397,13 @@ int vmemmap_find_next_valid_pfn(int node, int i)
 			continue;
 		}
 
-		pud = pud_offset(pgd, end_address);
+		p4d = p4d_offset(pgd, end_address);
+		if (p4d_none(*p4d)) {
+			end_address += P4D_SIZE;
+			continue;
+		}
+
+		pud = pud_offset(p4d, end_address);
 		if (pud_none(*pud)) {
 			end_address += PUD_SIZE;
 			continue;
 		}
@@ -430,6 +441,7 @@ int __init create_mem_map_page_table(u64 start, u64 end, void *arg)
 	struct page *map_start, *map_end;
 	int node;
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -444,12 +456,20 @@ int __init create_mem_map_page_table(u64 start, u64 end, void *arg)
 	for (address = start_page; address < end_page; address += PAGE_SIZE) {
 		pgd = pgd_offset_k(address);
 		if (pgd_none(*pgd)) {
+			p4d = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node);
+			if (!p4d)
+				goto err_alloc;
+			pgd_populate(&init_mm, pgd, p4d);
+		}
+		p4d = p4d_offset(pgd, address);
+
+		if (p4d_none(*p4d)) {
 			pud = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node);
 			if (!pud)
 				goto err_alloc;
-			pgd_populate(&init_mm, pgd, pud);
+			p4d_populate(&init_mm, p4d, pud);
 		}
-		pud = pud_offset(pgd, address);
+		pud = pud_offset(p4d, address);
 
 		if (pud_none(*pud)) {
 			pmd = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node);