From patchwork Thu Jul 1 01:48:06 2021
X-Patchwork-Submitter: Andrew Morton <akpm@linux-foundation.org>
X-Patchwork-Id: 12353097
Date: Wed, 30 Jun 2021 18:48:06 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, benh@kernel.crashing.org,
 christophe.leroy@csgroup.eu, linux-mm@kvack.org, mike.kravetz@oracle.com,
 mm-commits@vger.kernel.org, mpe@ellerman.id.au, npiggin@gmail.com,
 paulus@samba.org, rppt@kernel.org, torvalds@linux-foundation.org,
 uladzislau.rezki@sony.com
Subject: [patch 019/192] mm/vmalloc: enable mapping of huge pages at pte level in vmap
Message-ID: <20210701014806.AhSMhR7Z2%akpm@linux-foundation.org>
In-Reply-To: <20210630184624.9ca1937310b0dd5ce66b30e7@linux-foundation.org>

From: Christophe Leroy <christophe.leroy@csgroup.eu>
Subject: mm/vmalloc: enable mapping of huge pages at pte level in vmap

On some architectures, such as powerpc, there are huge pages that are
mapped at pte level.

Enable this in vmap.

For that, architectures can provide arch_vmap_pte_range_map_size(),
which returns the size of the pages to map at pte level.

Link: https://lkml.kernel.org/r/fb3ccc73377832ac6708181ec419128a2f98ce36.1620795204.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/vmalloc.h |    8 ++++++++
 mm/vmalloc.c            |   21 ++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)

--- a/include/linux/vmalloc.h~mm-vmalloc-enable-mapping-of-huge-pages-at-pte-level-in-vmap
+++ a/include/linux/vmalloc.h
@@ -104,6 +104,14 @@ static inline bool arch_vmap_pmd_support
 }
 #endif
 
+#ifndef arch_vmap_pte_range_map_size
+static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, unsigned long end,
+							 u64 pfn, unsigned int max_page_shift)
+{
+	return PAGE_SIZE;
+}
+#endif
+
 /*
  * Highlevel APIs for driver use
  */
--- a/mm/vmalloc.c~mm-vmalloc-enable-mapping-of-huge-pages-at-pte-level-in-vmap
+++ a/mm/vmalloc.c
@@ -36,6 +36,7 @@
 #include <linux/overflow.h>
 #include <linux/pgtable.h>
 #include <linux/uaccess.h>
+#include <linux/hugetlb.h>
 #include <asm/tlbflush.h>
 #include <asm/shmparam.h>
 
@@ -83,10 +84,11 @@ static void free_work(struct work_struct
 /*** Page table manipulation functions ***/
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
 	u64 pfn;
+	unsigned long size = PAGE_SIZE;
 
 	pfn = phys_addr >> PAGE_SHIFT;
 	pte = pte_alloc_kernel_track(pmd, addr, mask);
@@ -94,9 +96,22 @@ static int vmap_pte_range(pmd_t *pmd, un
 		return -ENOMEM;
 	do {
 		BUG_ON(!pte_none(*pte));
+
+#ifdef CONFIG_HUGETLB_PAGE
+		size = arch_vmap_pte_range_map_size(addr, end, pfn, max_page_shift);
+		if (size != PAGE_SIZE) {
+			pte_t entry = pfn_pte(pfn, prot);
+
+			entry = pte_mkhuge(entry);
+			entry = arch_make_huge_pte(entry, ilog2(size), 0);
+			set_huge_pte_at(&init_mm, addr, pte, entry);
+			pfn += PFN_DOWN(size);
+			continue;
+		}
+#endif
 		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
 		pfn++;
-	} while (pte++, addr += PAGE_SIZE, addr != end);
+	} while (pte += PFN_DOWN(size), addr += size, addr != end);
 	*mask |= PGTBL_PTE_MODIFIED;
 	return 0;
 }
@@ -145,7 +160,7 @@ static int vmap_pmd_range(pud_t *pud, un
 			continue;
 		}
 
-		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
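
For illustration, a minimal sketch of what an architecture's override of
the new hook might look like. This is not part of the patch: the 512K
size, the max_page_shift comparison and the alignment checks are
assumptions about the contract, namely that the returned size must
exactly tile the remaining [addr, end) range and be usable for a huge
mapping at the current position.

	/*
	 * Hypothetical override, defined in the arch's asm/vmalloc.h.
	 * Grant a 512K pte-level huge mapping when the caller allows it
	 * and both sides of the translation line up; otherwise fall
	 * back to a normal small page.
	 */
	#include <linux/align.h>	/* IS_ALIGNED() */
	#include <linux/log2.h>		/* ilog2() */
	#include <linux/pfn.h>		/* PFN_PHYS() */
	#include <linux/sizes.h>	/* SZ_512K */

	#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
	static inline unsigned long
	arch_vmap_pte_range_map_size(unsigned long addr, unsigned long end,
				     u64 pfn, unsigned int max_page_shift)
	{
		/* The caller capped the page size below 512K. */
		if (max_page_shift < ilog2(SZ_512K))
			return PAGE_SIZE;
		/* Not enough of the range left for a full 512K entry. */
		if (end - addr < SZ_512K)
			return PAGE_SIZE;
		/* Virtual and physical addresses must both be 512K aligned. */
		if (!IS_ALIGNED(addr, SZ_512K) || !IS_ALIGNED(PFN_PHYS(pfn), SZ_512K))
			return PAGE_SIZE;

		return SZ_512K;
	}

Defining the macro name to itself is what makes the #ifndef default in
include/linux/vmalloc.h stand aside, since linux/vmalloc.h pulls in the
arch header before that guard.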
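
A worked example of the reworked loop stepping, assuming a 4K PAGE_SIZE:
when the hook returns SZ_512K, PFN_DOWN(size) is 512K >> 12 = 128, so a
single iteration installs one huge entry and then advances pte by 128
slots, addr by 512K and pfn by 128. When it returns PAGE_SIZE,
PFN_DOWN(size) is 1 and the loop degenerates to the old
"pte++, addr += PAGE_SIZE" behaviour. Because the termination test is
"addr != end" rather than "addr < end", correctness depends on the hook
never returning a size that overshoots the range, which the
"end - addr" check in the sketch above is meant to guarantee.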