From patchwork Sat Dec 5 06:57:14 2020
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron,
    Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
Date: Sat, 5 Dec 2020 16:57:14 +1000
Message-Id: <20201205065725.1286370-2-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>

vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
Whether or not a vmap is huge depends on the architecture details,
alignments, boot options, etc., which the caller cannot be expected
to know. Therefore HUGE_VMAP is a regression for vmalloc_to_page.

This change teaches vmalloc_to_page about larger pages, and returns
the struct page that corresponds to the offset within the large page.
This makes the API agnostic to mapping implementation details.

[*] As explained by commit 029c54b09599 ("mm/vmalloc.c: huge-vmap:
fail gracefully on unexpected huge vmap mappings")

Signed-off-by: Nicholas Piggin
---
 mm/vmalloc.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6ae491a8b210..f85124e88bdb 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -34,7 +34,7 @@
 #include <linux/bitops.h>
 #include <linux/rbtree_augmented.h>
 #include <linux/overflow.h>
-
+#include <linux/pgtable.h>
 #include <asm/io.h>
 #include <asm/tlbflush.h>
 #include <asm/shmparam.h>
@@ -343,7 +343,9 @@ int is_vmalloc_or_module_addr(const void *x)
 }
 
 /*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the struct page it maps. Huge vmap mappings will
+ * return the tail page that corresponds to the base page address, which
+ * matches small vmap mappings.
  */
 struct page *vmalloc_to_page(const void *vmalloc_addr)
 {
@@ -363,25 +365,33 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 
 	if (pgd_none(*pgd))
 		return NULL;
+	if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+		return NULL; /* XXX: no allowance for huge pgd */
+	if (WARN_ON_ONCE(pgd_bad(*pgd)))
+		return NULL;
+
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d))
 		return NULL;
-	pud = pud_offset(p4d, addr);
+	if (p4d_leaf(*p4d))
+		return p4d_page(*p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT);
+	if (WARN_ON_ONCE(p4d_bad(*p4d)))
+		return NULL;
 
-	/*
-	 * Don't dereference bad PUD or PMD (below) entries. This will also
-	 * identify huge mappings, which we may encounter on architectures
-	 * that define CONFIG_HAVE_ARCH_HUGE_VMAP=y. Such regions will be
-	 * identified as vmalloc addresses by is_vmalloc_addr(), but are
-	 * not [unambiguously] associated with a struct page, so there is
-	 * no correct value to return for them.
-	 */
-	WARN_ON_ONCE(pud_bad(*pud));
-	if (pud_none(*pud) || pud_bad(*pud))
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return NULL;
+	if (pud_leaf(*pud))
+		return pud_page(*pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	if (WARN_ON_ONCE(pud_bad(*pud)))
 		return NULL;
+
 	pmd = pmd_offset(pud, addr);
-	WARN_ON_ONCE(pmd_bad(*pmd));
-	if (pmd_none(*pmd) || pmd_bad(*pmd))
+	if (pmd_none(*pmd))
+		return NULL;
+	if (pmd_leaf(*pmd))
+		return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	if (WARN_ON_ONCE(pmd_bad(*pmd)))
 		return NULL;
 
 	ptep = pte_offset_map(pmd, addr);
@@ -389,6 +399,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 	if (pte_present(pte))
 		page = pte_page(pte);
 	pte_unmap(ptep);
+
 	return page;
 }
 EXPORT_SYMBOL(vmalloc_to_page);
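Illustration (not part of the patch): each new leaf case computes the tail
page by offsetting from the leaf's head page. A minimal sketch of the
PMD-leaf arithmetic, assuming a 2MB PMD (x86-64) and an example address
chosen for illustration; "pmd" is assumed to point at the leaf entry:

	/*
	 * Suppose a huge vmap put a 2MB PMD leaf at map = 0xffffc90000200000
	 * and the caller asks about vaddr = map + 0x5000.
	 */
	unsigned long vaddr = 0xffffc90000205000UL;
	unsigned long offset = (vaddr & ~PMD_MASK) >> PAGE_SHIFT; /* 0x5000 >> 12 = 5 */

	/*
	 * pmd_page() gives the struct page of the first 4kB page of the 2MB
	 * physical region; adding the offset yields the tail page backing
	 * vaddr, the same page a small-page mapping of that memory returns.
	 */
	struct page *page = pmd_page(*pmd) + offset;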
From patchwork Sat Dec 5 06:57:15 2020
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron,
    Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered
Date: Sat, 5 Dec 2020 16:57:15 +1000
Message-Id: <20201205065725.1286370-3-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>

apply_to_pte_range might mistake a large pte for bad, or treat it as a
page table, resulting in a crash or corruption. Add a test to warn and
return error if large entries are found.

Signed-off-by: Nicholas Piggin
---
 mm/memory.c | 66 +++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 49 insertions(+), 17 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e502..3d0f0bc5d573 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2429,13 +2429,21 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	}
 	do {
 		next = pmd_addr_end(addr, end);
-		if (create || !pmd_none_or_clear_bad(pmd)) {
-			err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
-						 create, mask);
-			if (err)
-				break;
+		if (pmd_none(*pmd) && !create)
+			continue;
+		if (WARN_ON_ONCE(pmd_leaf(*pmd)))
+			return -EINVAL;
+		if (!pmd_none(*pmd) && WARN_ON_ONCE(pmd_bad(*pmd))) {
+			if (!create)
+				continue;
+			pmd_clear_bad(pmd);
 		}
+		err = apply_to_pte_range(mm, pmd, addr, next,
+					 fn, data, create, mask);
+		if (err)
+			break;
 	} while (pmd++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2457,13 +2465,21 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
 	}
 	do {
 		next = pud_addr_end(addr, end);
-		if (create || !pud_none_or_clear_bad(pud)) {
-			err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
-						 create, mask);
-			if (err)
-				break;
+		if (pud_none(*pud) && !create)
+			continue;
+		if (WARN_ON_ONCE(pud_leaf(*pud)))
+			return -EINVAL;
+		if (!pud_none(*pud) && WARN_ON_ONCE(pud_bad(*pud))) {
+			if (!create)
+				continue;
+			pud_clear_bad(pud);
 		}
+		err = apply_to_pmd_range(mm, pud, addr, next,
+					 fn, data, create, mask);
+		if (err)
+			break;
 	} while (pud++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2485,13 +2501,21 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 	}
 	do {
 		next = p4d_addr_end(addr, end);
-		if (create || !p4d_none_or_clear_bad(p4d)) {
-			err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
-						 create, mask);
-			if (err)
-				break;
+		if (p4d_none(*p4d) && !create)
+			continue;
+		if (WARN_ON_ONCE(p4d_leaf(*p4d)))
+			return -EINVAL;
+		if (!p4d_none(*p4d) && WARN_ON_ONCE(p4d_bad(*p4d))) {
+			if (!create)
+				continue;
+			p4d_clear_bad(p4d);
 		}
+		err = apply_to_pud_range(mm, p4d, addr, next,
+					 fn, data, create, mask);
+		if (err)
+			break;
 	} while (p4d++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2511,9 +2535,17 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	pgd = pgd_offset(mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (!create && pgd_none_or_clear_bad(pgd))
+		if (pgd_none(*pgd) && !create)
 			continue;
-		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create, &mask);
+		if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+			return -EINVAL;
+		if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
+			if (!create)
+				continue;
+			pgd_clear_bad(pgd);
+		}
+		err = apply_to_p4d_range(mm, pgd, addr, next,
+					 fn, data, create, &mask);
 		if (err)
 			break;
 	} while (pgd++, addr = next, addr != end);
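Illustration (not part of the patch): a hypothetical apply_to_page_range()
user, to show what the new checks protect. If the region were covered by a
huge leaf entry, the old pmd_none_or_clear_bad() path could clear the live
leaf or descend into it as if it were a page table; the walk now warns and
fails with -EINVAL instead. The helper names here are made up for the sketch:

	#include <linux/mm.h>

	/* Called once per small pte; can never see a huge leaf entry. */
	static int set_page_prot(pte_t *pte, unsigned long addr, void *data)
	{
		pgprot_t prot = *(pgprot_t *)data;

		set_pte_at(&init_mm, addr, pte, pfn_pte(pte_pfn(*pte), prot));
		return 0;
	}

	static int change_region_prot(unsigned long addr, unsigned long size,
				      pgprot_t prot)
	{
		/* Returns -EINVAL (and warns) if a large entry is found. */
		return apply_to_page_range(&init_mm, addr, size,
					   set_page_prot, &prot);
	}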
From patchwork Sat Dec 5 06:57:16 2020
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron,
    Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range
Date: Sat, 5 Dec 2020 16:57:16 +1000
Message-Id: <20201205065725.1286370-4-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>

The vmalloc mapper operates on a struct page * array rather than a
linear physical address; rename it to make this distinction clear.

Signed-off-by: Nicholas Piggin
---
 mm/vmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f85124e88bdb..42326dbffaf0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -189,7 +189,7 @@ void unmap_kernel_range_noflush(unsigned long start, unsigned long size)
 		arch_sync_kernel_mappings(start, end);
 }
 
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -217,7 +217,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int vmap_pmd_range(pud_t *pud, unsigned long addr,
+static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -229,13 +229,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_pte_range(pmd, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
+static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -247,13 +247,13 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_pmd_range(pud, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
+static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -265,7 +265,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_pud_range(p4d, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (p4d++, addr = next, addr != end);
 	return 0;
@@ -306,7 +306,7 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
 		next = pgd_addr_end(addr, end);
 		if (pgd_bad(*pgd))
 			mask |= PGTBL_PGD_MODIFIED;
-		err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
 		if (err)
 			return err;
 	} while (pgd++, addr = next, addr != end);
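For contrast, the two mapper families that this and the next patch leave
behind, with their distinguishing parameters (signatures taken from the
patches themselves):

	/* vmalloc side: maps an array of struct page pointers. */
	static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
			unsigned long end, pgprot_t prot, struct page **pages,
			int *nr, pgtbl_mod_mask *mask);

	/*
	 * ioremap side (renamed in the next patch): maps a linear
	 * physical address range.
	 */
	static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
			unsigned long end, phys_addr_t phys_addr,
			pgprot_t prot, pgtbl_mod_mask *mask);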
From patchwork Sat Dec 5 06:57:17 2020
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron,
    Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range
Date: Sat, 5 Dec 2020 16:57:17 +1000
Message-Id: <20201205065725.1286370-5-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>

This will be used as a generic kernel virtual mapping function, so
rename it in preparation.

Signed-off-by: Nicholas Piggin
---
 mm/ioremap.c | 64 +++++++++++++++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 31 deletions(-)

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 5fa1ab41d152..3f4d36f9745a 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -61,9 +61,9 @@ static inline int ioremap_pud_enabled(void) { return 0; }
 static inline int ioremap_pmd_enabled(void) { return 0; }
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
 	u64 pfn;
@@ -81,9 +81,8 @@ static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_pmd_enabled())
 		return 0;
@@ -103,9 +102,9 @@ static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
 	return pmd_set_huge(pmd, phys_addr, prot);
 }
 
-static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -116,20 +115,19 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 	do {
 		next = pmd_addr_end(addr, end);
 
-		if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_PMD_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_pud_enabled())
 		return 0;
@@ -149,9 +147,9 @@ static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
 	return pud_set_huge(pud, phys_addr, prot);
 }
 
-static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -162,20 +160,19 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
 	do {
 		next = pud_addr_end(addr, end);
 
-		if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_PUD_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_p4d_enabled())
 		return 0;
@@ -195,9 +192,9 @@ static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
 	return p4d_set_huge(p4d, phys_addr, prot);
 }
 
-static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot,
+			pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -208,19 +205,19 @@ static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
 	do {
 		next = p4d_addr_end(addr, end);
 
-		if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_P4D_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-int ioremap_page_range(unsigned long addr,
-		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+static int vmap_range(unsigned long addr, unsigned long end,
+			phys_addr_t phys_addr, pgprot_t prot)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -235,8 +232,7 @@ int ioremap_page_range(unsigned long addr,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot,
-					&mask);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -249,6 +245,12 @@ int ioremap_page_range(unsigned long addr,
 	return err;
 }
 
+int ioremap_page_range(unsigned long addr,
+		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+{
+	return vmap_range(addr, end, phys_addr, prot);
+}
+
 #ifdef CONFIG_GENERIC_IOREMAP
 void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
 {
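Illustration (not part of the patch): how a generic ioremap implementation
drives ioremap_page_range(), and hence now vmap_range(). A rough sketch
modelled on the CONFIG_GENERIC_IOREMAP path; error handling and alignment
of phys_addr/size are elided, and the function name is made up:

	#include <linux/io.h>
	#include <linux/vmalloc.h>

	void __iomem *ioremap_sketch(phys_addr_t phys_addr, size_t size,
				     pgprot_t prot)
	{
		struct vm_struct *area;
		unsigned long vaddr;

		/* Reserve a kernel virtual range for the mapping. */
		area = get_vm_area_caller(size, VM_IOREMAP,
					  __builtin_return_address(0));
		if (!area)
			return NULL;
		vaddr = (unsigned long)area->addr;

		/* Populate the page tables; may use huge mappings. */
		if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
			free_vm_area(area);
			return NULL;
		}

		return (void __iomem *)vaddr;
	}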
From patchwork Sat Dec 5 06:57:18 2020
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron,
    Christoph Hellwig, Christophe Leroy, Rick Edgecombe, Catalin Marinas,
    Will Deacon, linux-arm-kernel@lists.infradead.org, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH v9 05/12] mm: HUGE_VMAP arch support cleanup
Date: Sat, 5 Dec 2020 16:57:18 +1000
Message-Id: <20201205065725.1286370-6-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>

This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings
for, to one where the arch is queried for each call. This removes code
and indirection, and allows constant-folding of dead code for
unsupported levels.

This also adds a prot argument to the arch query. This is unused
currently but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).

Cc: linuxppc-dev@lists.ozlabs.org
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
Acked-by: Catalin Marinas [arm64]
Signed-off-by: Nicholas Piggin
---
 arch/arm64/include/asm/vmalloc.h         |  8 +++
 arch/arm64/mm/mmu.c                      | 10 +--
 arch/powerpc/include/asm/vmalloc.h       |  8 +++
 arch/powerpc/mm/book3s64/radix_pgtable.c |  8 +--
 arch/x86/include/asm/vmalloc.h           |  7 ++
 arch/x86/mm/ioremap.c                    | 10 +--
 include/linux/io.h                       |  9 ---
 include/linux/vmalloc.h                  |  6 ++
 init/main.c                              |  1 -
 mm/ioremap.c                             | 88 +++++++++---------------
 10 files changed, 77 insertions(+), 78 deletions(-)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 2ca708ab9b20..597b40405319 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
 #ifndef _ASM_ARM64_VMALLOC_H
 #define _ASM_ARM64_VMALLOC_H
 
+#include <asm/page.h>
+
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
 #endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ca692a815731..1b60079c1cef 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1315,12 +1315,12 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	return dt_virt;
 }
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 	/*
 	 * Only 4k granule supports level 1 block mappings.
@@ -1330,9 +1330,9 @@ int __init arch_ioremap_pud_supported(void)
 		   !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
-	/* See arch_ioremap_pud_supported() */
+	/* See arch_vmap_pud_supported() */
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
index b992dfaaa161..105abb73f075 100644
--- a/arch/powerpc/include/asm/vmalloc.h
+++ b/arch/powerpc/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
 #ifndef _ASM_POWERPC_VMALLOC_H
 #define _ASM_POWERPC_VMALLOC_H
 
+#include <asm/page.h>
+
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
 #endif /* _ASM_POWERPC_VMALLOC_H */
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 3adcf730f478..ab426fc0cd4b 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1121,13 +1121,13 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
 	set_pte_at(mm, addr, ptep, pte);
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 	/* HPT does not cope with large pages in the vmalloc area */
 	return radix_enabled();
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
 	return radix_enabled();
 }
@@ -1221,7 +1221,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 	return 1;
 }
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
diff --git a/arch/x86/include/asm/vmalloc.h b/arch/x86/include/asm/vmalloc.h
index 29837740b520..094ea2b565f3 100644
--- a/arch/x86/include/asm/vmalloc.h
+++ b/arch/x86/include/asm/vmalloc.h
@@ -1,6 +1,13 @@
 #ifndef _ASM_X86_VMALLOC_H
 #define _ASM_X86_VMALLOC_H
 
+#include <asm/page.h>
 #include <asm/pgtable_areas.h>
 
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
 #endif /* _ASM_X86_VMALLOC_H */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 9e5ccc56f8e0..762b5ff4edad 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -481,21 +481,21 @@ void iounmap(volatile void __iomem *addr)
 }
 EXPORT_SYMBOL(iounmap);
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 #ifdef CONFIG_X86_64
 	return boot_cpu_has(X86_FEATURE_GBPAGES);
 #else
-	return 0;
+	return false;
 #endif
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
 	return boot_cpu_has(X86_FEATURE_PSE);
 }
diff --git a/include/linux/io.h b/include/linux/io.h
index 8394c56babc2..f1effd4d7a3c 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -31,15 +31,6 @@ static inline int ioremap_page_range(unsigned long addr, unsigned long end,
 }
 #endif
 
-#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-void __init ioremap_huge_init(void);
-int arch_ioremap_p4d_supported(void);
-int arch_ioremap_pud_supported(void);
-int arch_ioremap_pmd_supported(void);
-#else
-static inline void ioremap_huge_init(void) { }
-#endif
-
 /*
  * Managed iomap interface
  */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 938eaf9517e2..b3218ba0904d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -85,6 +85,12 @@ struct vmap_area {
 	};
 };
 
+#ifndef CONFIG_HAVE_ARCH_HUGE_VMAP
+static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }
+#endif
+
 /*
  *	Highlevel APIs for driver use
  */
diff --git a/init/main.c b/init/main.c
index 20baced721ad..5bd2f4f41d30 100644
--- a/init/main.c
+++ b/init/main.c
@@ -833,7 +833,6 @@ static void __init mm_init(void)
 	pgtable_init();
 	debug_objects_mem_init();
 	vmalloc_init();
-	ioremap_huge_init();
 	/* Should be run before the first non-init thread is created */
 	init_espfix_bsp();
 	/* Should be run after espfix64 is set up. */
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 3f4d36f9745a..c67f91164401 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -16,49 +16,16 @@
 #include "pgalloc-track.h"
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-static int __read_mostly ioremap_p4d_capable;
-static int __read_mostly ioremap_pud_capable;
-static int __read_mostly ioremap_pmd_capable;
-static int __read_mostly ioremap_huge_disabled;
+static unsigned int __ro_after_init iomap_max_page_shift = P4D_SHIFT;
 
 static int __init set_nohugeiomap(char *str)
 {
-	ioremap_huge_disabled = 1;
+	iomap_max_page_shift = PAGE_SHIFT;
 	return 0;
 }
 early_param("nohugeiomap", set_nohugeiomap);
-
-void __init ioremap_huge_init(void)
-{
-	if (!ioremap_huge_disabled) {
-		if (arch_ioremap_p4d_supported())
-			ioremap_p4d_capable = 1;
-		if (arch_ioremap_pud_supported())
-			ioremap_pud_capable = 1;
-		if (arch_ioremap_pmd_supported())
-			ioremap_pmd_capable = 1;
-	}
-}
-
-static inline int ioremap_p4d_enabled(void)
-{
-	return ioremap_p4d_capable;
-}
-
-static inline int ioremap_pud_enabled(void)
-{
-	return ioremap_pud_capable;
-}
-
-static inline int ioremap_pmd_enabled(void)
-{
-	return ioremap_pmd_capable;
-}
-
-#else	/* !CONFIG_HAVE_ARCH_HUGE_VMAP */
-static inline int ioremap_p4d_enabled(void) { return 0; }
-static inline int ioremap_pud_enabled(void) { return 0; }
-static inline int ioremap_pmd_enabled(void) { return 0; }
+#else /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+static const unsigned int iomap_max_page_shift = PAGE_SHIFT;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
@@ -82,9 +49,13 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 }
 
 static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
-	if (!ioremap_pmd_enabled())
+	if (max_page_shift < PMD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pmd_supported(prot))
 		return 0;
 
 	if ((end - addr) != PMD_SIZE)
@@ -104,7 +75,7 @@ static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -115,7 +86,7 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_PMD_MODIFIED;
 			continue;
 		}
@@ -127,9 +98,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 }
 
 static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
-	if (!ioremap_pud_enabled())
+	if (max_page_shift < PUD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pud_supported(prot))
 		return 0;
 
 	if ((end - addr) != PUD_SIZE)
@@ -149,7 +124,7 @@ static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
 
 static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -160,21 +135,25 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_PUD_MODIFIED;
 			continue;
 		}
 
-		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
 static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
-	if (!ioremap_p4d_enabled())
+	if (max_page_shift < P4D_SHIFT)
+		return 0;
+
+	if (!arch_vmap_p4d_supported(prot))
 		return 0;
 
 	if ((end - addr) != P4D_SIZE)
@@ -194,7 +173,7 @@ static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
 
 static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			pgtbl_mod_mask *mask)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -205,19 +184,20 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_P4D_MODIFIED;
 			continue;
 		}
 
-		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
 static int vmap_range(unsigned long addr, unsigned long end,
-			phys_addr_t phys_addr, pgprot_t prot)
+			phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -232,7 +212,7 @@ static int vmap_range(unsigned long addr, unsigned long end,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -248,7 +228,7 @@ static int vmap_range(unsigned long addr, unsigned long end,
 int ioremap_page_range(unsigned long addr,
 		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
-	return vmap_range(addr, end, phys_addr, prot);
+	return vmap_range(addr, end, phys_addr, prot, iomap_max_page_shift);
 }
 
 #ifdef CONFIG_GENERIC_IOREMAP
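The prot argument is unused in this series, but an illustration of the kind
of policy it enables (not from any posted patch; the flag choice here is a
hypothetical sketch): a platform whose huge TLB entries cannot be
uncacheable could refuse such mappings per call rather than at boot:

	/* Hypothetical powerpc-style implementation using the prot argument. */
	bool arch_vmap_pmd_supported(pgprot_t prot)
	{
		/* Assumed policy: no huge pages for non-cacheable mappings. */
		if (pgprot_val(prot) & _PAGE_NO_CACHE)
			return false;

		return radix_enabled();
	}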
2020 06:58:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3EAA222D2A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CC2196B0071; Sat, 5 Dec 2020 01:58:14 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C73496B0072; Sat, 5 Dec 2020 01:58:14 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B89B16B0073; Sat, 5 Dec 2020 01:58:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0046.hostedemail.com [216.40.44.46]) by kanga.kvack.org (Postfix) with ESMTP id 99E726B0071 for ; Sat, 5 Dec 2020 01:58:14 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 680A01EE6 for ; Sat, 5 Dec 2020 06:58:14 +0000 (UTC) X-FDA: 77558324508.22.oil67_0417ded273cb Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id 4321418038E60 for ; Sat, 5 Dec 2020 06:58:14 +0000 (UTC) X-HE-Tag: oil67_0417ded273cb X-Filterd-Recvd-Size: 5605 Received: from mail-pl1-f196.google.com (mail-pl1-f196.google.com [209.85.214.196]) by imf49.hostedemail.com (Postfix) with ESMTP for ; Sat, 5 Dec 2020 06:58:13 +0000 (UTC) Received: by mail-pl1-f196.google.com with SMTP id v3so4365305plz.13 for ; Fri, 04 Dec 2020 22:58:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HiUfAFVtYQfdJFI+mGar/loPm26LbyhdUt8nmYytWQQ=; b=DxmZPA3yAfjPLXLMP04XS6JXFdsWLGTS01CQhH8sB/zsK9XTcQVwrq4kbXx7YdojRK yGn64NeViNWYWiFixEYhcMj2muh0L0+uifTZxha4vibt+CbN5dDPstR/oH88Rdx2TXe1 DPmtfYXy/yZqSoldqCGJe1d5yEgHX5VJZUvx34k38zBxcsWESi3TXgbQCCzU5FCHtmfX uvPDEqFMF5e6fcGCiv+1x0NHOXC6ULqNghY5MidMVepomjFC29csup82rx8jeus4x4iP AlnMWPJveZxBzAUtYmMKBg+6/Zy8SqNYCYqCD2AW793yFHcIhGjC4ULukG3zt76F5D01 hmvg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HiUfAFVtYQfdJFI+mGar/loPm26LbyhdUt8nmYytWQQ=; b=B7IeRDm+Vp2yXjVtdaGnO50AHyQ1lN8zdRHT9fEjgCAwfCpZ6o+1DOMVeRGFhUf7NP SBkgkc3uXPTv60oXvfkE6hdmjSNM9Iglv/n9rsJR+6vc1Emt1Q+8f+dnTkN/3tUkENSq F4/3lZg1lv+C8n+gV0+lDWQkrLygJfe12PBbGycET/ByLv8XnksaLCUkSlInDqbfkeOE VXq7I1kIWAxm2lYbGnEtSyqPeS4777k8XvRFcYK5KQx+igJ6y76pATxilmcK/4dMroez G8VlHZ/QlCTXaLNzhBjwmfbEFzPhQ6uftvDY1drride5K/D8WEBhnglA0pnV0QzW+T/F 6Mlg== X-Gm-Message-State: AOAM533in40kz884+GriNF0mellYNlSUy7hSOenXVkfoYCl//h1DSjsH 5c15kymRT+Fby4bVfJvnAvJ1uwp2MI7CcA== X-Google-Smtp-Source: ABdhPJyvHIRuemZhXud4eIE3YEqh2cLPgp8Qkm3Pre2khvi9AhP4Lk22USojlqZWMBSIeZDSMx7xxA== X-Received: by 2002:a17:90b:253:: with SMTP id fz19mr7455121pjb.195.1607151492973; Fri, 04 Dec 2020 22:58:12 -0800 (PST) Received: from bobo.ozlabs.ibm.com ([1.129.145.238]) by smtp.gmail.com with ESMTPSA id a14sm1110848pfl.141.2020.12.04.22.58.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 04 Dec 2020 22:58:12 -0800 (PST) From: Nicholas Piggin To: linux-mm@kvack.org, Andrew Morton Cc: Nicholas Piggin , linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li , Jonathan Cameron 
, Christoph Hellwig, Christophe Leroy, Rick Edgecombe, Michael Ellerman
Subject: [PATCH v9 06/12] powerpc: inline huge vmap supported functions
Date: Sat, 5 Dec 2020 16:57:19 +1000
Message-Id: <20201205065725.1286370-7-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

This allows unsupported levels to be constant-folded away, and p4d_free_pud_page() can then be removed because nothing references it any longer.

Cc: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/vmalloc.h | 19 ++++++++++++++++--- arch/powerpc/mm/book3s64/radix_pgtable.c | 21 --------------------- 2 files changed, 16 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h index 105abb73f075..3f0c153befb0 100644 --- a/arch/powerpc/include/asm/vmalloc.h +++ b/arch/powerpc/include/asm/vmalloc.h @@ -1,12 +1,25 @@ #ifndef _ASM_POWERPC_VMALLOC_H #define _ASM_POWERPC_VMALLOC_H +#include #include #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP -bool arch_vmap_p4d_supported(pgprot_t prot); -bool arch_vmap_pud_supported(pgprot_t prot); -bool arch_vmap_pmd_supported(pgprot_t prot); +static inline bool arch_vmap_p4d_supported(pgprot_t prot) +{ + return false; +} + +static inline bool arch_vmap_pud_supported(pgprot_t prot) +{ + /* HPT does not cope with large pages in the vmalloc area */ + return radix_enabled(); +} + +static inline bool arch_vmap_pmd_supported(pgprot_t prot) +{ + return radix_enabled(); +} #endif #endif /* _ASM_POWERPC_VMALLOC_H */
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index ab426fc0cd4b..de6b558dc187 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -1121,22 +1121,6 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma, set_pte_at(mm, addr, ptep, pte); } -bool arch_vmap_pud_supported(pgprot_t prot) -{ - /* HPT does not cope with large pages in the vmalloc area */ - return radix_enabled(); -} - -bool arch_vmap_pmd_supported(pgprot_t prot) -{ - return radix_enabled(); -} - -int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) -{ - return 0; -} - int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot) { pte_t *ptep = (pte_t *)pud; @@ -1220,8 +1204,3 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) return 1; } - -bool arch_vmap_p4d_supported(pgprot_t prot) -{ - return false; -}
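The mechanism at work here: once arch_vmap_p4d_supported() is a static inline returning a compile-time constant, the optimizer deletes every branch guarded by it, so the out-of-line p4d helpers lose their last references. A stand-alone, user-space sketch of that effect (hypothetical code, not part of the patch; names mirror the kernel's for illustration only):

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the powerpc stub above: a compile-time constant false. */
static inline bool arch_vmap_p4d_supported(void)
{
	return false;
}

static int vmap_try_huge_p4d(unsigned long size)
{
	if (!arch_vmap_p4d_supported())
		return 0;	/* always taken: the whole function folds to "return 0" */
	/* ...a huge p4d mapping would be installed here... */
	return 1;
}

int main(void)
{
	/* At -O2 this call compiles down to a constant 0. */
	printf("huge p4d mapping used: %d\n", vmap_try_huge_p4d(1UL << 30));
	return 0;
}

With the old out-of-line definitions the compiler could not see the constant, so the call and the dead path behind it survived in every caller.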
From patchwork Sat Dec 5 06:57:20 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11952957
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc:
Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy, Rick Edgecombe, Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v9 07/12] arm64: inline huge vmap supported functions
Date: Sat, 5 Dec 2020 16:57:20 +1000
Message-Id: <20201205065725.1286370-8-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

This allows unsupported levels to be constant-folded away, and p4d_free_pud_page() can then be removed because nothing references it any longer.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Catalin Marinas
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm/vmalloc.h | 23 ++++++++++++++++++++--- arch/arm64/mm/mmu.c | 26 -------------------------- 2 files changed, 20 insertions(+), 29 deletions(-)
diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h index 597b40405319..fc9a12d6cc1a 100644 --- a/arch/arm64/include/asm/vmalloc.h +++ b/arch/arm64/include/asm/vmalloc.h @@ -4,9 +4,26 @@ #include #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP -bool arch_vmap_p4d_supported(pgprot_t prot); -bool arch_vmap_pud_supported(pgprot_t prot); -bool arch_vmap_pmd_supported(pgprot_t prot); +static inline bool arch_vmap_p4d_supported(pgprot_t prot) +{ + return false; +} + +static inline bool arch_vmap_pud_supported(pgprot_t prot) +{ + /* + * Only 4k granule supports level 1 block mappings. + * SW table walks can't handle removal of intermediate entries. + */ + return IS_ENABLED(CONFIG_ARM64_4K_PAGES) && + !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); +} + +static inline bool arch_vmap_pmd_supported(pgprot_t prot) +{ + /* See arch_vmap_pud_supported() */ + return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); +} #endif #endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 1b60079c1cef..0af5b5cfb9c6 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -1315,27 +1315,6 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot) return dt_virt; } -bool arch_vmap_p4d_supported(pgprot_t prot) -{ - return false; -} - -bool arch_vmap_pud_supported(pgprot_t prot) -{ - /* - * Only 4k granule supports level 1 block mappings. - * SW table walks can't handle removal of intermediate entries.
- */ - return IS_ENABLED(CONFIG_ARM64_4K_PAGES) && - !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); -} - -bool arch_vmap_pmd_supported(pgprot_t prot) -{ - /* See arch_vmap_pud_supported() */ - return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS); -} - int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot) { pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot)); @@ -1427,11 +1406,6 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr) return 1; } -int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) -{ - return 0; /* Don't attempt a block mapping */ -} - #ifdef CONFIG_MEMORY_HOTPLUG static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size) {
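The arm64 variants fold the same way because IS_ENABLED() expands to a literal 1 or 0 at preprocessing time. A simplified user-space sketch of that trick (the real macro lives in include/linux/kconfig.h and also handles =m options; this cut-down version assumes CONFIG_* symbols are either defined to 1 or left undefined, as in a kconfig-generated autoconf.h):

#include <stdbool.h>
#include <stdio.h>

#define CONFIG_ARM64_4K_PAGES 1	/* pretend kconfig enabled this */
/* CONFIG_PTDUMP_DEBUGFS deliberately left undefined */

#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option) __is_defined(option)

static inline bool arch_vmap_pud_supported(void)
{
	/* expands to (1 && !0): a compile-time constant */
	return IS_ENABLED(CONFIG_ARM64_4K_PAGES) &&
	       !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
}

int main(void)
{
	printf("pud supported: %d\n", arch_vmap_pud_supported());
	return 0;
}

Because the result is a constant expression, the unsupported branches in the generic vmap code disappear entirely once these helpers are visible inline.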
From patchwork Sat Dec 5 06:57:21 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11952959
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy, Rick Edgecombe, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH v9 08/12] x86: inline huge vmap supported functions
Date: Sat, 5 Dec 2020 16:57:21 +1000
Message-Id: <20201205065725.1286370-9-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

This allows unsupported levels to be constant-folded away, and p4d_free_pud_page() can then be removed because nothing references it any longer.

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: "H.
Peter Anvin" Signed-off-by: Nicholas Piggin --- arch/x86/include/asm/vmalloc.h | 22 +++++++++++++++++++--- arch/x86/mm/ioremap.c | 19 ------------------- arch/x86/mm/pgtable.c | 13 ------------- 3 files changed, 19 insertions(+), 35 deletions(-) diff --git a/arch/x86/include/asm/vmalloc.h b/arch/x86/include/asm/vmalloc.h index 094ea2b565f3..e714b00fc0ca 100644 --- a/arch/x86/include/asm/vmalloc.h +++ b/arch/x86/include/asm/vmalloc.h @@ -1,13 +1,29 @@ #ifndef _ASM_X86_VMALLOC_H #define _ASM_X86_VMALLOC_H +#include #include #include #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP -bool arch_vmap_p4d_supported(pgprot_t prot); -bool arch_vmap_pud_supported(pgprot_t prot); -bool arch_vmap_pmd_supported(pgprot_t prot); +static inline bool arch_vmap_p4d_supported(pgprot_t prot) +{ + return false; +} + +static inline bool arch_vmap_pud_supported(pgprot_t prot) +{ +#ifdef CONFIG_X86_64 + return boot_cpu_has(X86_FEATURE_GBPAGES); +#else + return false; +#endif +} + +static inline bool arch_vmap_pmd_supported(pgprot_t prot) +{ + return boot_cpu_has(X86_FEATURE_PSE); +} #endif #endif /* _ASM_X86_VMALLOC_H */ diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 762b5ff4edad..12c686c65ea9 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -481,25 +481,6 @@ void iounmap(volatile void __iomem *addr) } EXPORT_SYMBOL(iounmap); -bool arch_vmap_p4d_supported(pgprot_t prot) -{ - return false; -} - -bool arch_vmap_pud_supported(pgprot_t prot) -{ -#ifdef CONFIG_X86_64 - return boot_cpu_has(X86_FEATURE_GBPAGES); -#else - return false; -#endif -} - -bool arch_vmap_pmd_supported(pgprot_t prot) -{ - return boot_cpu_has(X86_FEATURE_PSE); -} - /* * Convert a physical pointer to a virtual kernel pointer for /dev/mem * access diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index dfd82f51ba66..801c418ee97d 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -780,14 +780,6 @@ int pmd_clear_huge(pmd_t *pmd) return 0; } -/* - * Until we support 512GB pages, skip them in the vmap area. - */ -int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) -{ - return 0; -} - #ifdef CONFIG_X86_64 /** * pud_free_pmd_page - Clear pud entry and free pmd page. @@ -859,11 +851,6 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) #else /* !CONFIG_X86_64 */ -int pud_free_pmd_page(pud_t *pud, unsigned long addr) -{ - return pud_none(*pud); -} - /* * Disable free page handling on x86-PAE. This assures that ioremap() * does not update sync'd pmd entries. See vmalloc_sync_one(). 
From patchwork Sat Dec 5 06:57:22 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11952961
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
Date: Sat, 5 Dec 2020 16:57:22 +1000
Message-Id: <20201205065725.1286370-10-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

This is a generic kernel virtual memory mapper, not specific to ioremap.

Signed-off-by: Nicholas Piggin
---
include/linux/vmalloc.h | 3 + mm/ioremap.c | 197 ---------------------------------------- mm/vmalloc.c | 196 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 199 insertions(+), 197 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index b3218ba0904d..a5ae791dc1e0 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -180,6 +180,9 @@ extern struct vm_struct *remove_vm_area(const void *addr); extern struct vm_struct *find_vm_area(const void *addr); #ifdef CONFIG_MMU +int vmap_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift); extern int map_kernel_range_noflush(unsigned long start, unsigned long size, pgprot_t prot, struct page **pages); int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot,
diff --git a/mm/ioremap.c b/mm/ioremap.c index c67f91164401..d1dcc7e744ac 100644 --- a/mm/ioremap.c +++ b/mm/ioremap.c @@ -28,203 +28,6 @@ early_param("nohugeiomap", set_nohugeiomap); static const unsigned int iomap_max_page_shift = PAGE_SHIFT; #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ -static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - pgtbl_mod_mask *mask) -{ - pte_t *pte; - u64 pfn; - - pfn = phys_addr >> PAGE_SHIFT; - pte = pte_alloc_kernel_track(pmd, addr, mask); - if (!pte) - return -ENOMEM; - do { - BUG_ON(!pte_none(*pte)); - set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot)); - pfn++; - } while (pte++, addr += PAGE_SIZE, addr != end); - *mask |= PGTBL_PTE_MODIFIED; - return 0; -} - -static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - if (max_page_shift < PMD_SHIFT) - return 0; - - if (!arch_vmap_pmd_supported(prot)) - return 0; - - if ((end -
addr) != PMD_SIZE) - return 0; - - if (!IS_ALIGNED(addr, PMD_SIZE)) - return 0; - - if (!IS_ALIGNED(phys_addr, PMD_SIZE)) - return 0; - - if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) - return 0; - - return pmd_set_huge(pmd, phys_addr, prot); -} - -static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift, pgtbl_mod_mask *mask) -{ - pmd_t *pmd; - unsigned long next; - - pmd = pmd_alloc_track(&init_mm, pud, addr, mask); - if (!pmd) - return -ENOMEM; - do { - next = pmd_addr_end(addr, end); - - if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) { - *mask |= PGTBL_PMD_MODIFIED; - continue; - } - - if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask)) - return -ENOMEM; - } while (pmd++, phys_addr += (next - addr), addr = next, addr != end); - return 0; -} - -static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - if (max_page_shift < PUD_SHIFT) - return 0; - - if (!arch_vmap_pud_supported(prot)) - return 0; - - if ((end - addr) != PUD_SIZE) - return 0; - - if (!IS_ALIGNED(addr, PUD_SIZE)) - return 0; - - if (!IS_ALIGNED(phys_addr, PUD_SIZE)) - return 0; - - if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) - return 0; - - return pud_set_huge(pud, phys_addr, prot); -} - -static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift, pgtbl_mod_mask *mask) -{ - pud_t *pud; - unsigned long next; - - pud = pud_alloc_track(&init_mm, p4d, addr, mask); - if (!pud) - return -ENOMEM; - do { - next = pud_addr_end(addr, end); - - if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) { - *mask |= PGTBL_PUD_MODIFIED; - continue; - } - - if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask)) - return -ENOMEM; - } while (pud++, phys_addr += (next - addr), addr = next, addr != end); - return 0; -} - -static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - if (max_page_shift < P4D_SHIFT) - return 0; - - if (!arch_vmap_p4d_supported(prot)) - return 0; - - if ((end - addr) != P4D_SIZE) - return 0; - - if (!IS_ALIGNED(addr, P4D_SIZE)) - return 0; - - if (!IS_ALIGNED(phys_addr, P4D_SIZE)) - return 0; - - if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) - return 0; - - return p4d_set_huge(p4d, phys_addr, prot); -} - -static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift, pgtbl_mod_mask *mask) -{ - p4d_t *p4d; - unsigned long next; - - p4d = p4d_alloc_track(&init_mm, pgd, addr, mask); - if (!p4d) - return -ENOMEM; - do { - next = p4d_addr_end(addr, end); - - if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) { - *mask |= PGTBL_P4D_MODIFIED; - continue; - } - - if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask)) - return -ENOMEM; - } while (p4d++, phys_addr += (next - addr), addr = next, addr != end); - return 0; -} - -static int vmap_range(unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int max_page_shift) -{ - pgd_t *pgd; - unsigned long start; - unsigned long next; - int err; - pgtbl_mod_mask mask = 0; - - might_sleep(); - BUG_ON(addr >= end); - - start = addr; - pgd = 
pgd_offset_k(addr); - do { - next = pgd_addr_end(addr, end); - err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask); - if (err) - break; - } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); - - flush_cache_vmap(start, end); - - if (mask & ARCH_PAGE_TABLE_SYNC_MASK) - arch_sync_kernel_mappings(start, end); - - return err; -} - int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot) { diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 42326dbffaf0..2f236aeeac24 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -68,6 +68,202 @@ static void free_work(struct work_struct *w) } /*** Page table manipulation functions ***/ +static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + pgtbl_mod_mask *mask) +{ + pte_t *pte; + u64 pfn; + + pfn = phys_addr >> PAGE_SHIFT; + pte = pte_alloc_kernel_track(pmd, addr, mask); + if (!pte) + return -ENOMEM; + do { + BUG_ON(!pte_none(*pte)); + set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot)); + pfn++; + } while (pte++, addr += PAGE_SIZE, addr != end); + *mask |= PGTBL_PTE_MODIFIED; + return 0; +} + +static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + if (max_page_shift < PMD_SHIFT) + return 0; + + if (!arch_vmap_pmd_supported(prot)) + return 0; + + if ((end - addr) != PMD_SIZE) + return 0; + + if (!IS_ALIGNED(addr, PMD_SIZE)) + return 0; + + if (!IS_ALIGNED(phys_addr, PMD_SIZE)) + return 0; + + if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) + return 0; + + return pmd_set_huge(pmd, phys_addr, prot); +} + +static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift, pgtbl_mod_mask *mask) +{ + pmd_t *pmd; + unsigned long next; + + pmd = pmd_alloc_track(&init_mm, pud, addr, mask); + if (!pmd) + return -ENOMEM; + do { + next = pmd_addr_end(addr, end); + + if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) { + *mask |= PGTBL_PMD_MODIFIED; + continue; + } + + if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask)) + return -ENOMEM; + } while (pmd++, phys_addr += (next - addr), addr = next, addr != end); + return 0; +} + +static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + if (max_page_shift < PUD_SHIFT) + return 0; + + if (!arch_vmap_pud_supported(prot)) + return 0; + + if ((end - addr) != PUD_SIZE) + return 0; + + if (!IS_ALIGNED(addr, PUD_SIZE)) + return 0; + + if (!IS_ALIGNED(phys_addr, PUD_SIZE)) + return 0; + + if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) + return 0; + + return pud_set_huge(pud, phys_addr, prot); +} + +static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift, pgtbl_mod_mask *mask) +{ + pud_t *pud; + unsigned long next; + + pud = pud_alloc_track(&init_mm, p4d, addr, mask); + if (!pud) + return -ENOMEM; + do { + next = pud_addr_end(addr, end); + + if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) { + *mask |= PGTBL_PUD_MODIFIED; + continue; + } + + if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask)) + return -ENOMEM; + } while (pud++, phys_addr += (next - addr), addr = next, addr != end); + return 0; +} + +static int 
vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + if (max_page_shift < P4D_SHIFT) + return 0; + + if (!arch_vmap_p4d_supported(prot)) + return 0; + + if ((end - addr) != P4D_SIZE) + return 0; + + if (!IS_ALIGNED(addr, P4D_SIZE)) + return 0; + + if (!IS_ALIGNED(phys_addr, P4D_SIZE)) + return 0; + + if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) + return 0; + + return p4d_set_huge(p4d, phys_addr, prot); +} + +static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift, pgtbl_mod_mask *mask) +{ + p4d_t *p4d; + unsigned long next; + + p4d = p4d_alloc_track(&init_mm, pgd, addr, mask); + if (!p4d) + return -ENOMEM; + do { + next = p4d_addr_end(addr, end); + + if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) { + *mask |= PGTBL_P4D_MODIFIED; + continue; + } + + if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask)) + return -ENOMEM; + } while (p4d++, phys_addr += (next - addr), addr = next, addr != end); + return 0; +} + +int vmap_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + pgd_t *pgd; + unsigned long start; + unsigned long next; + int err; + pgtbl_mod_mask mask = 0; + + might_sleep(); + BUG_ON(addr >= end); + + start = addr; + pgd = pgd_offset_k(addr); + do { + next = pgd_addr_end(addr, end); + err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask); + if (err) + break; + } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); + + flush_cache_vmap(start, end); + + if (mask & ARCH_PAGE_TABLE_SYNC_MASK) + arch_sync_kernel_mappings(start, end); + + return err; +} static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, pgtbl_mod_mask *mask)
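Each vmap_try_huge_*() level above applies the same gate before installing a large entry: the caller must permit the size, the remaining range must cover exactly one huge entry, and both the virtual and physical addresses must be aligned to it. A user-space sketch of that test for the PMD level (illustrative only; the PMD_SHIFT value assumes x86-64 with 4K base pages):

#include <stdbool.h>
#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static bool can_map_huge_pmd(unsigned long addr, unsigned long end,
			     unsigned long phys_addr, unsigned int max_page_shift)
{
	if (max_page_shift < PMD_SHIFT)
		return false;	/* caller forbade mappings this large */
	if (end - addr != PMD_SIZE)
		return false;	/* must cover exactly one PMD entry */
	if (!IS_ALIGNED(addr, PMD_SIZE) || !IS_ALIGNED(phys_addr, PMD_SIZE))
		return false;	/* virtual and physical must both be aligned */
	return true;
}

int main(void)
{
	/* aligned 2MB range over aligned physical memory: eligible */
	printf("%d\n", can_map_huge_pmd(0x200000UL, 0x400000UL, 0x40000000UL, PMD_SHIFT));
	/* same range but physical address off by 4K: falls back to PTEs */
	printf("%d\n", can_map_huge_pmd(0x200000UL, 0x400000UL, 0x40001000UL, PMD_SHIFT));
	return 0;
}

The real functions additionally clear any existing lower-level table (for example via pmd_free_pte_page()) before overwriting the entry.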
From patchwork Sat Dec 5 06:57:23 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11952963
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 10/12] mm/vmalloc: add vmap_range_noflush variant
Date: Sat, 5 Dec 2020 16:57:23 +1000
Message-Id: <20201205065725.1286370-11-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

As a side-effect, the order of the flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches the other callers in this file.

Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 2f236aeeac24..ee9c3bee67f5 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -235,7 +235,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, return 0; } -int vmap_range(unsigned long addr, unsigned long end, +static int vmap_range_noflush(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift) { @@ -257,14 +257,24 @@ int vmap_range(unsigned long addr, unsigned long end, break; } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); - flush_cache_vmap(start, end); - if (mask & ARCH_PAGE_TABLE_SYNC_MASK) arch_sync_kernel_mappings(start, end); return err; } +int vmap_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int max_page_shift) +{ + int err; + + err = vmap_range_noflush(addr, end, phys_addr, prot, max_page_shift); + flush_cache_vmap(addr, end); + + return err; +} + static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, pgtbl_mod_mask *mask) {
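The split follows a common kernel pattern: the _noflush core only installs page-table entries, and the cache flush moves into a thin wrapper, so an internal caller can map several sub-ranges and issue a single flush over the whole span. A sketch of the shape (illustrative, not the kernel implementation):

#include <stdio.h>

static void flush_cache_vmap_range(unsigned long start, unsigned long end)
{
	printf("flush [%#lx, %#lx)\n", start, end);
}

static int map_range_noflush(unsigned long addr, unsigned long end)
{
	/* ...install page-table entries only, no cache maintenance... */
	return 0;
}

static int map_range(unsigned long addr, unsigned long end)
{
	int err = map_range_noflush(addr, end);

	/* flush unconditionally; entries may be partially installed on error */
	flush_cache_vmap_range(addr, end);
	return err;
}

int main(void)
{
	/* batching caller: two pieces mapped, one flush at the end */
	map_range_noflush(0x1000, 0x2000);
	map_range_noflush(0x2000, 0x3000);
	flush_cache_vmap_range(0x1000, 0x3000);

	/* ordinary caller keeps the old single-call behaviour */
	return map_range(0x4000, 0x5000);
}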
From patchwork Sat Dec 5 06:57:24 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11952965
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 11/12] mm/vmalloc: Hugepage vmalloc mappings
Date: Sat, 5 Dec 2020 16:57:24 +1000
Message-Id: <20201205065725.1286370-12-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

Support huge page vmalloc mappings. The config option HAVE_ARCH_HUGE_VMALLOC enables support on architectures that define HAVE_ARCH_HUGE_VMAP and support PMD-sized vmap mappings. vmalloc will attempt to allocate PMD-sized pages when the allocation is PMD-sized or larger, and falls back to small pages if that is unsuccessful. Architectures must ensure that any arch-specific vmalloc allocations that require PAGE_SIZE mappings (e.g., module allocations vs. strict module rwx) use the VM_NOHUGE flag to inhibit larger mappings. When hugepage vmalloc mappings are enabled in the next patch, this reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
This can result in more internal fragmentation and memory overhead for a given allocation, so a boot option, nohugevmalloc, is added to disable huge mappings at boot.

Signed-off-by: Nicholas Piggin
---
arch/Kconfig | 10 +++ include/linux/vmalloc.h | 18 ++++ mm/page_alloc.c | 5 +- mm/vmalloc.c | 191 ++++++++++++++++++++++++++++++---------- 4 files changed, 178 insertions(+), 46 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig index 56b6ccc0e32d..d8f056fc27b4 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -662,6 +662,16 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD config HAVE_ARCH_HUGE_VMAP bool +config HAVE_ARCH_HUGE_VMALLOC + depends on HAVE_ARCH_HUGE_VMAP + bool + help + Archs that select this would be capable of PMD-sized vmaps (i.e., + arch_vmap_pmd_supported() returns true), and they must make no + assumptions that vmalloc memory is mapped with PAGE_SIZE ptes. The + VM_NOHUGE flag can be used to prohibit arch-specific allocations from + using hugepages to help with this (e.g., modules may require it). + config ARCH_WANT_HUGE_PMD_SHARE bool
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index a5ae791dc1e0..db018b531745 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -25,6 +25,7 @@ struct notifier_block; /* in notifier.h */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ #define VM_MAP_PUT_PAGES 0x00000100 /* put pages and free array in vfree */ +#define VM_NOHUGE 0x00000200 /* force PAGE_SIZE pte mapping */ /* * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC. @@ -59,6 +60,7 @@ struct vm_struct { unsigned long size; unsigned long flags; struct page **pages; + unsigned int page_order; unsigned int nr_pages; phys_addr_t phys_addr; const void *caller; @@ -196,6 +198,18 @@ static inline void set_vm_flush_reset_perms(void *addr) if (vm) vm->flags |= VM_FLUSH_RESET_PERMS; } + +static inline bool is_vm_area_hugepages(const void *addr) +{ + /* + * This may not 100% tell if the area is mapped with > PAGE_SIZE + * page table entries, if for some reason the architecture indicates + * larger sizes are available but decides not to use them, nothing + * prevents that. This only indicates the size of the physical page + * allocated in the vmalloc layer.
+ */ + return (find_vm_area(addr)->page_order > 0); +} #else static inline int map_kernel_range_noflush(unsigned long start, unsigned long size, @@ -212,6 +226,10 @@ unmap_kernel_range_noflush(unsigned long addr, unsigned long size) static inline void set_vm_flush_reset_perms(void *addr) { } +static inline bool is_vm_area_hugepages(const void *addr) +{ + return false; +} #endif /* for /dev/kmem */ diff --git a/mm/page_alloc.c b/mm/page_alloc.c index eaa227a479e4..d907da0ad349 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -70,6 +70,7 @@ #include #include #include +#include #include #include @@ -8171,6 +8172,7 @@ void *__init alloc_large_system_hash(const char *tablename, void *table = NULL; gfp_t gfp_flags; bool virt; + bool huge; /* allow the kernel cmdline to have a say */ if (!numentries) { @@ -8238,6 +8240,7 @@ void *__init alloc_large_system_hash(const char *tablename, } else if (get_order(size) >= MAX_ORDER || hashdist) { table = __vmalloc(size, gfp_flags); virt = true; + huge = is_vm_area_hugepages(table); } else { /* * If bucketsize is not a power-of-two, we may free @@ -8254,7 +8257,7 @@ void *__init alloc_large_system_hash(const char *tablename, pr_info("%s hash table entries: %ld (order: %d, %lu bytes, %s)\n", tablename, 1UL << log2qty, ilog2(size) - PAGE_SHIFT, size, - virt ? "vmalloc" : "linear"); + virt ? (huge ? "vmalloc hugepage" : "vmalloc") : "linear"); if (_hash_shift) *_hash_shift = log2qty; diff --git a/mm/vmalloc.c b/mm/vmalloc.c index ee9c3bee67f5..3800380b474f 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -42,6 +42,19 @@ #include "internal.h" #include "pgalloc-track.h" +#ifdef CONFIG_HAVE_ARCH_HUGE_VMALLOC +static bool __ro_after_init vmap_allow_huge = true; + +static int __init set_nohugevmalloc(char *str) +{ + vmap_allow_huge = false; + return 0; +} +early_param("nohugevmalloc", set_nohugevmalloc); +#else /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */ +static const bool vmap_allow_huge = false; +#endif /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */ + bool is_vmalloc_addr(const void *x) { unsigned long addr = (unsigned long)x; @@ -477,31 +490,12 @@ static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr, return 0; } -/** - * map_kernel_range_noflush - map kernel VM area with the specified pages - * @addr: start of the VM area to map - * @size: size of the VM area to map - * @prot: page protection flags to use - * @pages: pages to map - * - * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should - * have been allocated using get_vm_area() and its friends. - * - * NOTE: - * This function does NOT do any cache flushing. The caller is responsible for - * calling flush_cache_vmap() on to-be-mapped areas before calling this - * function. - * - * RETURNS: - * 0 on success, -errno on failure. 
- */ -int map_kernel_range_noflush(unsigned long addr, unsigned long size, - pgprot_t prot, struct page **pages) +static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages) { unsigned long start = addr; - unsigned long end = addr + size; - unsigned long next; pgd_t *pgd; + unsigned long next; int err = 0; int nr = 0; pgtbl_mod_mask mask = 0; @@ -523,6 +517,65 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size, return 0; } +static int vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + unsigned int i, nr = (end - addr) >> PAGE_SHIFT; + + WARN_ON(page_shift < PAGE_SHIFT); + + if (page_shift == PAGE_SHIFT) + return vmap_small_pages_range_noflush(addr, end, prot, pages); + + for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) { + int err; + + err = vmap_range_noflush(addr, addr + (1UL << page_shift), + __pa(page_address(pages[i])), prot, + page_shift); + if (err) + return err; + + addr += 1UL << page_shift; + } + + return 0; +} + +static int vmap_pages_range(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + int err; + + err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift); + flush_cache_vmap(addr, end); + return err; +} + +/** + * map_kernel_range_noflush - map kernel VM area with the specified pages + * @addr: start of the VM area to map + * @size: size of the VM area to map + * @prot: page protection flags to use + * @pages: pages to map + * + * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should + * have been allocated using get_vm_area() and its friends. + * + * NOTE: + * This function does NOT do any cache flushing. The caller is responsible for + * calling flush_cache_vmap() on to-be-mapped areas before calling this + * function. + * + * RETURNS: + * 0 on success, -errno on failure. + */ +int map_kernel_range_noflush(unsigned long addr, unsigned long size, + pgprot_t prot, struct page **pages) +{ + return vmap_pages_range_noflush(addr, addr + size, prot, pages, PAGE_SHIFT); +} + int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot, struct page **pages) { @@ -2400,6 +2453,7 @@ static inline void set_area_direct_map(const struct vm_struct *area, { int i; + /* HUGE_VMALLOC passes small pages to set_direct_map */ for (i = 0; i < area->nr_pages; i++) if (page_address(area->pages[i])) set_direct_map(area->pages[i]); @@ -2433,11 +2487,12 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages) * map. Find the start and end range of the direct mappings to make sure * the vm_unmap_aliases() flush includes the direct map. 
*/ - for (i = 0; i < area->nr_pages; i++) { + for (i = 0; i < area->nr_pages; i += 1U << area->page_order) { unsigned long addr = (unsigned long)page_address(area->pages[i]); if (addr) { + unsigned long page_size = PAGE_SIZE << area->page_order; start = min(addr, start); - end = max(addr + PAGE_SIZE, end); + end = max(addr + page_size, end); flush_dmap = 1; } } @@ -2480,11 +2535,11 @@ static void __vunmap(const void *addr, int deallocate_pages) if (deallocate_pages) { int i; - for (i = 0; i < area->nr_pages; i++) { + for (i = 0; i < area->nr_pages; i += 1U << area->page_order) { struct page *page = area->pages[i]; BUG_ON(!page); - __free_pages(page, 0); + __free_pages(page, area->page_order); } atomic_long_sub(area->nr_pages, &nr_vmalloc_pages); @@ -2674,12 +2729,17 @@ EXPORT_SYMBOL_GPL(vmap_pfn); #endif /* CONFIG_VMAP_PFN */ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, - pgprot_t prot, int node) + pgprot_t prot, unsigned int page_shift, + int node) { const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO; - unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT; - unsigned int array_size = nr_pages * sizeof(struct page *), i; + unsigned int page_order = page_shift - PAGE_SHIFT; + unsigned long addr = (unsigned long)area->addr; + unsigned long size = get_vm_area_size(area); + unsigned int nr_small_pages = size >> PAGE_SHIFT; + unsigned int array_size = nr_small_pages * sizeof(struct page *); struct page **pages; + unsigned int i; gfp_mask |= __GFP_NOWARN; if (!(gfp_mask & (GFP_DMA | GFP_DMA32))) @@ -2700,30 +2760,35 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, } area->pages = pages; - area->nr_pages = nr_pages; + area->nr_pages = nr_small_pages; + area->page_order = page_order; - for (i = 0; i < area->nr_pages; i++) { + /* + * Careful, we allocate and map page_order pages, but tracking is done + * per PAGE_SIZE page so as to keep the vm_struct APIs independent of + * the physical/mapped size. 
+ */ + for (i = 0; i < area->nr_pages; i += 1U << page_order) { struct page *page; + int p; - if (node == NUMA_NO_NODE) - page = alloc_page(gfp_mask); - else - page = alloc_pages_node(node, gfp_mask, 0); - + page = alloc_pages_node(node, gfp_mask, page_order); if (unlikely(!page)) { /* Successfully allocated i pages, free them in __vfree() */ area->nr_pages = i; atomic_long_add(area->nr_pages, &nr_vmalloc_pages); goto fail; } - area->pages[i] = page; + + for (p = 0; p < (1U << page_order); p++) + area->pages[i + p] = page + p; + if (gfpflags_allow_blocking(gfp_mask)) cond_resched(); } atomic_long_add(area->nr_pages, &nr_vmalloc_pages); - if (map_kernel_range((unsigned long)area->addr, get_vm_area_size(area), - prot, pages) < 0) + if (vmap_pages_range(addr, addr + size, prot, pages, page_shift) < 0) goto fail; return area->addr; @@ -2731,7 +2796,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, fail: warn_alloc(gfp_mask, NULL, "vmalloc: allocation failure, allocated %ld of %ld bytes", - (area->nr_pages*PAGE_SIZE), area->size); + (area->nr_pages*PAGE_SIZE), size); __vfree(area->addr); return NULL; } @@ -2762,19 +2827,44 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, struct vm_struct *area; void *addr; unsigned long real_size = size; + unsigned long real_align = align; + unsigned int shift = PAGE_SHIFT; - size = PAGE_ALIGN(size); if (!size || (size >> PAGE_SHIFT) > totalram_pages()) goto fail; - area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED | + if (vmap_allow_huge && !(vm_flags & VM_NOHUGE) && + arch_vmap_pmd_supported(prot) && + (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))) { + unsigned long size_per_node; + + /* + * Try huge pages. Only try for PAGE_KERNEL allocations, + * others like modules don't yet expect huge pages in + * their allocations due to apply_to_page_range not + * supporting them. 
@@ -2762,19 +2827,44 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	struct vm_struct *area;
 	void *addr;
 	unsigned long real_size = size;
+	unsigned long real_align = align;
+	unsigned int shift = PAGE_SHIFT;
 
-	size = PAGE_ALIGN(size);
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages())
 		goto fail;
 
-	area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED |
+	if (vmap_allow_huge && !(vm_flags & VM_NOHUGE) &&
+			arch_vmap_pmd_supported(prot) &&
+			(pgprot_val(prot) == pgprot_val(PAGE_KERNEL))) {
+		unsigned long size_per_node;
+
+		/*
+		 * Try huge pages. Only try for PAGE_KERNEL allocations,
+		 * others like modules don't yet expect huge pages in
+		 * their allocations due to apply_to_page_range not
+		 * supporting them.
+		 */
+
+		size_per_node = size;
+		if (node == NUMA_NO_NODE)
+			size_per_node /= num_online_nodes();
+		if (size_per_node >= PMD_SIZE) {
+			shift = PMD_SHIFT;
+			align = max(real_align, 1UL << shift);
+			size = ALIGN(real_size, 1UL << shift);
+		}
+	}
+
+again:
+	size = PAGE_ALIGN(size);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
 				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
-	addr = __vmalloc_area_node(area, gfp_mask, prot, node);
+	addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
 	if (!addr)
-		return NULL;
+		goto fail;
 
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -2788,8 +2878,19 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	return addr;
 
 fail:
-	warn_alloc(gfp_mask, NULL,
+	if (shift > PAGE_SHIFT) {
+		free_vm_area(area);
+		shift = PAGE_SHIFT;
+		align = real_align;
+		size = real_size;
+		goto again;
+	}
+
+	if (!area) {
+		/* Warn for area allocation, page allocations already warn */
+		warn_alloc(gfp_mask, NULL,
 			"vmalloc: allocation failure: %lu bytes", real_size);
+	}
 	return NULL;
 }
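Two things in this hunk deserve spelling out: the PMD_SIZE heuristic (only attempt huge pages when each node's share of the allocation can fill at least one) and the fail: path, which retries the entire allocation with small pages before giving up. A condensed user-space model of that control flow (constants illustrative; the real code also bumps align and size to the huge boundary, which is noted but not modeled here):

	#include <stdio.h>
	#include <stdbool.h>

	#define PAGE_SHIFT	12
	#define PMD_SHIFT	21
	#define PMD_SIZE	(1UL << PMD_SHIFT)

	/* Stand-in for the real allocator: pretend the huge attempt fails. */
	static bool try_alloc(unsigned long size, unsigned int shift)
	{
		return shift == PAGE_SHIFT;	/* force the fallback path once */
	}

	int main(void)
	{
		unsigned long real_size = 4UL << 20;	/* 4MB request */
		unsigned long size = real_size;
		unsigned int shift = PAGE_SHIFT;
		int nodes = 2;				/* num_online_nodes() */

		if (size / nodes >= PMD_SIZE)	/* each node gets >= 1 huge page */
			shift = PMD_SHIFT;
	again:
		if (!try_alloc(size, shift)) {
			if (shift > PAGE_SHIFT) {
				/* Huge attempt failed: retry with small pages. */
				shift = PAGE_SHIFT;
				size = real_size;
				goto again;
			}
			return 1;
		}
		printf("allocated %lu bytes with shift %u\n", size, shift);
		return 0;
	}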
From patchwork Sat Dec 5 06:57:25 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11952967
From: Nicholas Piggin
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy, Rick Edgecombe
Subject: [PATCH v9 12/12] powerpc/64s/radix: Enable huge vmalloc mappings
Date: Sat, 5 Dec 2020 16:57:25 +1000
Message-Id: <20201205065725.1286370-13-npiggin@gmail.com>
In-Reply-To: <20201205065725.1286370-1-npiggin@gmail.com>
References: <20201205065725.1286370-1-npiggin@gmail.com>

Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
---
 Documentation/admin-guide/kernel-parameters.txt |  2 ++
 arch/powerpc/Kconfig                            |  1 +
 arch/powerpc/kernel/module.c                    | 13 +++++++++++--
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 44fde25bb221..3538c750c583 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3220,6 +3220,8 @@
 	nohugeiomap	[KNL,X86,PPC,ARM64] Disable kernel huge I/O mappings.
 
+	nohugevmalloc	[PPC] Disable kernel huge vmalloc mappings.
+
 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
 			Equivalent to smt=1.
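The switch this parameter flips is vmap_allow_huge, which the __vmalloc_node_range hunk earlier tests before attempting a PMD mapping. The usual wiring for a boot parameter like this is a one-line early_param handler; a minimal sketch of that pattern (assumed shape, not quoted from the series):

	/* Sketch: default on, disabled by passing nohugevmalloc on the
	 * kernel command line. */
	static bool __ro_after_init vmap_allow_huge = true;

	static int __init set_nohugevmalloc(char *str)
	{
		vmap_allow_huge = false;
		return 0;
	}
	early_param("nohugevmalloc", set_nohugevmalloc);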
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e9f13fe08492..ae10381dd324 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -178,6 +178,7 @@ config PPC
 	select GENERIC_TIME_VSYSCALL
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
+	select HAVE_ARCH_HUGE_VMALLOC		if HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
 	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index a211b0253cdb..bc2695eeeb4c 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -92,8 +92,17 @@ void *module_alloc(unsigned long size)
 {
 	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
 
-	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
+	/*
+	 * Don't do huge page allocations for modules yet until more testing
+	 * is done. STRICT_MODULE_RWX may require extra work to support this
+	 * too.
+	 */
+
+	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
+				    GFP_KERNEL,
+				    PAGE_KERNEL_EXEC,
+				    VM_NOHUGE | VM_FLUSH_RESET_PERMS,
+				    NUMA_NO_NODE,
 				    __builtin_return_address(0));
 }
 #endif
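The net effect for callers: ordinary vmalloc() users get PMD-backed mappings transparently when the allocation is eligible, and only code that later manipulates individual 4K PTEs needs to opt out, exactly as module_alloc does above. A hypothetical opted-out caller, for illustration only (the helper name is invented):

	#include <linux/vmalloc.h>

	/* Hypothetical caller that must stay small-page mapped, e.g.
	 * because it changes protections on individual 4K pages later. */
	static void *alloc_small_mapped(unsigned long size)
	{
		return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
					    GFP_KERNEL, PAGE_KERNEL,
					    VM_NOHUGE, NUMA_NO_NODE,
					    __builtin_return_address(0));
	}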