From patchwork Fri Aug 21 04:44:20 2020
X-Patchwork-Id: 11728109
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 1/8] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
Date: Fri, 21 Aug 2020 14:44:20 +1000
Message-Id: <20200821044427.736424-2-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
Whether or not a vmap is huge depends on architecture details,
alignments, boot options, etc., which the caller cannot be expected to
know. Therefore HUGE_VMAP is a regression for vmalloc_to_page.

This change teaches vmalloc_to_page about larger pages, and returns the
struct page that corresponds to the offset within the large page. This
makes the API agnostic to mapping implementation details.

[*] As explained by commit 029c54b095995 ("mm/vmalloc.c: huge-vmap:
    fail gracefully on unexpected huge vmap mappings")

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/vmalloc.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b482d240f9a2..49f225b0f855 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -38,6 +38,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -343,7 +344,9 @@ int is_vmalloc_or_module_addr(const void *x)
 }
 
 /*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the struct page it maps. Huge vmap mappings will
+ * return the tail page that corresponds to the base page address, which
+ * matches small vmap mappings.
  */
 struct page *vmalloc_to_page(const void *vmalloc_addr)
 {
@@ -363,25 +366,33 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 
         if (pgd_none(*pgd))
                 return NULL;
+        if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+                return NULL; /* XXX: no allowance for huge pgd */
+        if (WARN_ON_ONCE(pgd_bad(*pgd)))
+                return NULL;
+
         p4d = p4d_offset(pgd, addr);
         if (p4d_none(*p4d))
                 return NULL;
-        pud = pud_offset(p4d, addr);
+        if (p4d_leaf(*p4d))
+                return p4d_page(*p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT);
+        if (WARN_ON_ONCE(p4d_bad(*p4d)))
+                return NULL;
 
-        /*
-         * Don't dereference bad PUD or PMD (below) entries. This will also
-         * identify huge mappings, which we may encounter on architectures
-         * that define CONFIG_HAVE_ARCH_HUGE_VMAP=y. Such regions will be
-         * identified as vmalloc addresses by is_vmalloc_addr(), but are
-         * not [unambiguously] associated with a struct page, so there is
-         * no correct value to return for them.
-         */
-        WARN_ON_ONCE(pud_bad(*pud));
-        if (pud_none(*pud) || pud_bad(*pud))
+        pud = pud_offset(p4d, addr);
+        if (pud_none(*pud))
+                return NULL;
+        if (pud_leaf(*pud))
+                return pud_page(*pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+        if (WARN_ON_ONCE(pud_bad(*pud)))
                 return NULL;
+
         pmd = pmd_offset(pud, addr);
-        WARN_ON_ONCE(pmd_bad(*pmd));
-        if (pmd_none(*pmd) || pmd_bad(*pmd))
+        if (pmd_none(*pmd))
+                return NULL;
+        if (pmd_leaf(*pmd))
+                return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+        if (WARN_ON_ONCE(pmd_bad(*pmd)))
                 return NULL;
 
         ptep = pte_offset_map(pmd, addr);
@@ -389,6 +400,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
         if (pte_present(pte))
                 page = pte_page(pte);
         pte_unmap(ptep);
+
         return page;
 }
 EXPORT_SYMBOL(vmalloc_to_page);
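The interesting step in the patch above is the subpage arithmetic in the
new leaf cases. Below is a minimal standalone sketch of what the
pmd_leaf() branch computes; the helper name is hypothetical and this is
an illustration, not code from the patch, assuming x86-64 geometry
(PMD_SHIFT = 21, PAGE_SHIFT = 12, so a PMD leaf maps 2MB):

        /* Hypothetical illustration, not from the patch: pmd_page(*pmd)
         * is the first struct page of the huge mapping; the low
         * virtual-address bits select the 4kB subpage within it. */
        static struct page *pmd_leaf_subpage(pmd_t *pmd, unsigned long addr)
        {
                unsigned long off = addr & ~PMD_MASK;   /* byte offset inside the 2MB page */

                return pmd_page(*pmd) + (off >> PAGE_SHIFT);    /* 4kB subpage index */
        }

For example, an offset of 0x12345 into the huge page gives subpage index
0x12, so the caller receives the same struct page that a 4kB mapping of
that address would yield; that is what makes the API agnostic to the
mapping size.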
From patchwork Fri Aug 21 04:44:21 2020
X-Patchwork-Id: 11728111
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 2/8] mm: apply_to_pte_range warn and fail if a large pte is encountered
Date: Fri, 21 Aug 2020 14:44:21 +1000
Message-Id: <20200821044427.736424-3-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reported-by: kernel test robot
---
 mm/memory.c | 60 +++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 44 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f95edbb77326..19986af291e0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2261,13 +2261,20 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	}
 	do {
 		next = pmd_addr_end(addr, end);
-		if (create || !pmd_none_or_clear_bad(pmd)) {
-			err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
-						 create);
-			if (err)
-				break;
+		if (pmd_none(*pmd) && !create)
+			continue;
+		if (WARN_ON_ONCE(pmd_leaf(*pmd)))
+			return -EINVAL;
+		if (WARN_ON_ONCE(pmd_bad(*pmd))) {
+			if (!create)
+				continue;
+			pmd_clear_bad(pmd);
 		}
+		err = apply_to_pte_range(mm, pmd, addr, next, fn, data, create);
+		if (err)
+			break;
 	} while (pmd++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2288,13 +2295,20 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
 	}
 	do {
 		next = pud_addr_end(addr, end);
-		if (create || !pud_none_or_clear_bad(pud)) {
-			err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
-						 create);
-			if (err)
-				break;
+		if (pud_none(*pud) && !create)
+			continue;
+		if (WARN_ON_ONCE(pud_leaf(*pud)))
+			return -EINVAL;
+		if (WARN_ON_ONCE(pud_bad(*pud))) {
+			if (!create)
+				continue;
+			pud_clear_bad(pud);
 		}
+		err = apply_to_pmd_range(mm, pud, addr, next, fn, data, create);
+		if (err)
+			break;
 	} while (pud++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2315,13 +2329,20 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 	}
 	do {
 		next = p4d_addr_end(addr, end);
-		if (create || !p4d_none_or_clear_bad(p4d)) {
-			err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
-						 create);
-			if (err)
-				break;
+		if (p4d_none(*p4d) && !create)
+			continue;
+		if (WARN_ON_ONCE(p4d_leaf(*p4d)))
+			return -EINVAL;
+		if (WARN_ON_ONCE(p4d_bad(*p4d))) {
+			if (!create)
+				continue;
+			p4d_clear_bad(p4d);
 		}
+		err = apply_to_pud_range(mm, p4d, addr, next, fn, data, create);
+		if (err)
+			break;
 	} while (p4d++, addr = next, addr != end);
+
 	return err;
 }
 
@@ -2340,8 +2361,15 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	pgd = pgd_offset(mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (!create && pgd_none_or_clear_bad(pgd))
+		if (pgd_none(*pgd) && !create)
 			continue;
+		if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+			return -EINVAL;
+		if (WARN_ON_ONCE(pgd_bad(*pgd))) {
+			if (!create)
+				continue;
+			pgd_clear_bad(pgd);
+		}
 		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create);
 		if (err)
 			break;
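The patch applies the same transformation at every level: the old
pXd_none_or_clear_bad() logic is split apart so that a huge (leaf)
entry is reported rather than misinterpreted. Condensed below from the
pmd hunk above (the pud/p4d/pgd hunks are analogous); this restates the
diff, it is not new code:

        if (pmd_none(*pmd) && !create)
                continue;                       /* nothing mapped, nothing to apply */
        if (WARN_ON_ONCE(pmd_leaf(*pmd)))
                return -EINVAL;                 /* huge entry: cannot descend to ptes */
        if (WARN_ON_ONCE(pmd_bad(*pmd))) {
                if (!create)
                        continue;
                pmd_clear_bad(pmd);             /* clear corrupt entries only when creating */
        }
        err = apply_to_pte_range(mm, pmd, addr, next, fn, data, create);

Without the explicit leaf test, a large pte could look "bad" and be
cleared, or be walked as though it were a page table, either of which
could corrupt a live huge mapping.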
From patchwork Fri Aug 21 04:44:22 2020
X-Patchwork-Id: 11728113
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 3/8] mm/vmalloc: rename vmap_*_range vmap_pages_*_range
Date: Fri, 21 Aug 2020 14:44:22 +1000
Message-Id: <20200821044427.736424-4-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

The vmalloc mapper operates on a struct page * array rather than a
linear physical address; rename it to make this distinction clear.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/vmalloc.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 49f225b0f855..3a1e45fd1626 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -190,9 +190,8 @@ void unmap_kernel_range_noflush(unsigned long start, unsigned long size)
 		arch_sync_kernel_mappings(start, end);
 }
 
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
-		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
-		pgtbl_mod_mask *mask)
+static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, int *nr, pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
 
@@ -218,9 +217,8 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int vmap_pmd_range(pud_t *pud, unsigned long addr,
-		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
-		pgtbl_mod_mask *mask)
+static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, int *nr, pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -230,15 +228,14 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_pte_range(pmd, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
-		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
-		pgtbl_mod_mask *mask)
+static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, int *nr, pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -248,15 +245,14 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_pmd_range(pud, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
-		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
-		pgtbl_mod_mask *mask)
+static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, int *nr, pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -266,7 +262,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_pud_range(p4d, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (p4d++, addr = next, addr != end);
 	return 0;
@@ -307,7 +303,7 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
 		next = pgd_addr_end(addr, end);
 		if (pgd_bad(*pgd))
 			mask |= PGTBL_PGD_MODIFIED;
-		err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
 		if (err)
 			return err;
 	} while (pgd++, addr = next, addr != end);
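After the rename, the two mapper families are distinguishable by
signature alone. Both prototypes below already exist in the tree and
appear elsewhere in this series; they are shown side by side only for
contrast, not as new API:

        /* vmalloc-style: maps an array of (possibly discontiguous) struct pages */
        int map_kernel_range_noflush(unsigned long start, unsigned long size,
                                     pgprot_t prot, struct page **pages);

        /* ioremap-style: maps a linear physical address range */
        int ioremap_page_range(unsigned long addr, unsigned long end,
                               phys_addr_t phys_addr, pgprot_t prot);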
From patchwork Fri Aug 21 04:44:23 2020
X-Patchwork-Id: 11728115
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 4/8] lib/ioremap: rename ioremap_*_range to vmap_*_range
Date: Fri, 21 Aug 2020 14:44:23 +1000
Message-Id: <20200821044427.736424-5-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

This will be moved to mm/ and used as a generic kernel virtual mapping
function, so rename it in preparation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/ioremap.c | 55 ++++++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 32 deletions(-)

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 5fa1ab41d152..6016ae3227ad 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -61,9 +61,8 @@ static inline int ioremap_pud_enabled(void) { return 0; }
 static inline int ioremap_pmd_enabled(void) { return 0; }
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
 	u64 pfn;
@@ -81,9 +80,8 @@ static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr,
-		pgprot_t prot)
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_pmd_enabled())
 		return 0;
@@ -103,9 +101,8 @@ static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
 	return pmd_set_huge(pmd, phys_addr, prot);
 }
 
-static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -116,20 +113,19 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 	do {
 		next = pmd_addr_end(addr, end);
 
-		if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_PMD_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr,
-		pgprot_t prot)
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_pud_enabled())
 		return 0;
@@ -149,9 +145,8 @@ static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
 	return pud_set_huge(pud, phys_addr, prot);
 }
 
-static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -162,20 +157,19 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
 	do {
 		next = pud_addr_end(addr, end);
 
-		if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_PUD_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr,
-		pgprot_t prot)
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot)
 {
 	if (!ioremap_p4d_enabled())
 		return 0;
@@ -195,9 +189,8 @@ static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
 	return p4d_set_huge(p4d, phys_addr, prot);
 }
 
-static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		pgtbl_mod_mask *mask)
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -208,19 +201,18 @@ static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
 	do {
 		next = p4d_addr_end(addr, end);
 
-		if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
 			*mask |= PGTBL_P4D_MODIFIED;
 			continue;
 		}
 
-		if (ioremap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
 			return -ENOMEM;
 	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-int ioremap_page_range(unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -235,8 +227,7 @@ int ioremap_page_range(unsigned long addr,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot,
-					&mask);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -272,7 +263,7 @@ void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
 		return NULL;
 	vaddr = (unsigned long)area->addr;
 
-	if (ioremap_page_range(vaddr, vaddr + size, addr, __pgprot(prot))) {
+	if (vmap_page_range(vaddr, vaddr + size, addr, __pgprot(prot))) {
 		free_vm_area(area);
 		return NULL;
 	}
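The vmap_try_huge_*() helpers renamed above gate every huge-mapping
attempt on the same checklist. Condensed below from vmap_try_huge_pmd()
as it stands after this patch (a restatement of the existing code, not
new code); when any test fails, the walker falls back to mapping the
range with 4kB ptes one level down:

        if (!ioremap_pmd_enabled())             /* huge pages enabled at this level? */
                return 0;
        if ((end - addr) != PMD_SIZE)           /* exactly one PMD worth of range */
                return 0;
        if (!IS_ALIGNED(addr, PMD_SIZE))        /* virtual alignment */
                return 0;
        if (!IS_ALIGNED(phys_addr, PMD_SIZE))   /* physical alignment */
                return 0;
        if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
                return 0;                       /* cannot displace an existing pte table */
        return pmd_set_huge(pmd, phys_addr, prot);

Returning 0 here is not an error: it only tells the caller that a huge
mapping was not installed, so the pte-level path runs instead.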
From patchwork Fri Aug 21 04:44:24 2020
X-Patchwork-Id: 11728117
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 5/8] mm: HUGE_VMAP arch support cleanup
Date: Fri, 21 Aug 2020 14:44:24 +1000
Message-Id: <20200821044427.736424-6-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings
for, to one where the arch is queried for each call. This removes code
and indirection, and allows constant-folding of dead code for
unsupported levels.

This also adds a prot argument to the arch query. This is currently
unused, but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/arm64/mm/mmu.c                      | 12 +--
 arch/powerpc/mm/book3s64/radix_pgtable.c | 10 ++-
 arch/x86/mm/ioremap.c                    | 12 +--
 include/linux/io.h                       |  9 ---
 include/linux/vmalloc.h                  | 10 +++
 init/main.c                              |  1 -
 mm/ioremap.c                             | 96 +++++++++++-------------
 7 files changed, 73 insertions(+), 77 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 75df62fea1b6..bbb3ccf6a7ce 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1304,12 +1304,13 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	return dt_virt;
 }
 
-int __init arch_ioremap_p4d_supported(void)
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 	/*
 	 * Only 4k granule supports level 1 block mappings.
@@ -1319,11 +1320,12 @@ int __init arch_ioremap_pud_supported(void)
 	       !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
-	/* See arch_ioremap_pud_supported() */
+	/* See arch_vmap_pud_supported() */
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
+#endif
 
 int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
 {
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index ae823bba29f2..7d3a620c5adf 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1182,13 +1182,14 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
 	set_pte_at(mm, addr, ptep, pte);
 }
 
-int __init arch_ioremap_pud_supported(void)
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 	/* HPT does not cope with large pages in the vmalloc area */
 	return radix_enabled();
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
 	return radix_enabled();
 }
@@ -1197,6 +1198,7 @@ int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
 {
 	return 0;
 }
+#endif
 
 int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
 {
@@ -1282,7 +1284,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 	return 1;
 }
 
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 84d85dbd1dad..5b8b495ab4ed 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -481,24 +481,26 @@ void iounmap(volatile void __iomem *addr)
 }
 EXPORT_SYMBOL(iounmap);
 
-int __init arch_ioremap_p4d_supported(void)
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot)
 {
-	return 0;
+	return false;
 }
 
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
 {
 #ifdef CONFIG_X86_64
 	return boot_cpu_has(X86_FEATURE_GBPAGES);
 #else
-	return 0;
+	return false;
 #endif
 }
 
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
 {
 	return boot_cpu_has(X86_FEATURE_PSE);
 }
+#endif
 
 /*
  * Convert a physical pointer to a virtual kernel pointer for /dev/mem
diff --git a/include/linux/io.h b/include/linux/io.h
index 8394c56babc2..f1effd4d7a3c 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -31,15 +31,6 @@ static inline int ioremap_page_range(unsigned long addr, unsigned long end,
 }
 #endif
 
-#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-void __init ioremap_huge_init(void);
-int arch_ioremap_p4d_supported(void);
-int arch_ioremap_pud_supported(void);
-int arch_ioremap_pmd_supported(void);
-#else
-static inline void ioremap_huge_init(void) { }
-#endif
-
 /*
  * Managed iomap interface
  */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0221f852a7e1..787d77ad7536 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -84,6 +84,16 @@ struct vmap_area {
 	};
 };
 
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#else
+static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }
+#endif
+
 /*
  * Highlevel APIs for driver use
  */
diff --git a/init/main.c b/init/main.c
index ae78fb68d231..1c89aa127b8f 100644
--- a/init/main.c
+++ b/init/main.c
@@ -820,7 +820,6 @@ static void __init mm_init(void)
 	pgtable_init();
 	debug_objects_mem_init();
 	vmalloc_init();
-	ioremap_huge_init();
 	/* Should be run before the first non-init thread is created */
 	init_espfix_bsp();
 	/* Should be run after espfix64 is set up. */
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 6016ae3227ad..b0032dbadaf7 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -16,49 +16,16 @@
 #include "pgalloc-track.h"
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-static int __read_mostly ioremap_p4d_capable;
-static int __read_mostly ioremap_pud_capable;
-static int __read_mostly ioremap_pmd_capable;
-static int __read_mostly ioremap_huge_disabled;
+static bool __ro_after_init iomap_allow_huge = true;
 
 static int __init set_nohugeiomap(char *str)
 {
-	ioremap_huge_disabled = 1;
+	iomap_allow_huge = false;
 	return 0;
 }
 early_param("nohugeiomap", set_nohugeiomap);
-
-void __init ioremap_huge_init(void)
-{
-	if (!ioremap_huge_disabled) {
-		if (arch_ioremap_p4d_supported())
-			ioremap_p4d_capable = 1;
-		if (arch_ioremap_pud_supported())
-			ioremap_pud_capable = 1;
-		if (arch_ioremap_pmd_supported())
-			ioremap_pmd_capable = 1;
-	}
-}
-
-static inline int ioremap_p4d_enabled(void)
-{
-	return ioremap_p4d_capable;
-}
-
-static inline int ioremap_pud_enabled(void)
-{
-	return ioremap_pud_capable;
-}
-
-static inline int ioremap_pmd_enabled(void)
-{
-	return ioremap_pmd_capable;
-}
-
-#else	/* !CONFIG_HAVE_ARCH_HUGE_VMAP */
-static inline int ioremap_p4d_enabled(void) { return 0; }
-static inline int ioremap_pud_enabled(void) { return 0; }
-static inline int ioremap_pmd_enabled(void) { return 0; }
+#else /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+static const bool iomap_allow_huge = false;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
@@ -81,9 +48,12 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 }
 
 static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot)
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
 {
-	if (!ioremap_pmd_enabled())
+	if (max_page_shift < PMD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pmd_supported(prot))
 		return 0;
 
 	if ((end - addr) != PMD_SIZE)
@@ -102,7 +72,8 @@ static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
 }
 
 static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
+		pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -113,7 +84,7 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	do {
 		next = pmd_addr_end(addr, end);
 
-		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_PMD_MODIFIED;
 			continue;
 		}
@@ -125,9 +96,12 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 }
 
 static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot)
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
 {
-	if (!ioremap_pud_enabled())
+	if (max_page_shift < PUD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pud_supported(prot))
 		return 0;
 
 	if ((end - addr) != PUD_SIZE)
@@ -146,7 +120,8 @@ static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
 }
 
 static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
+		pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -157,21 +132,24 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 		next = pud_addr_end(addr, end);
 
-		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_PUD_MODIFIED;
 			continue;
 		}
 
-		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
 static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot)
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
 {
-	if (!ioremap_p4d_enabled())
+	if (max_page_shift < P4D_SHIFT)
+		return 0;
+
+	if (!arch_vmap_p4d_supported(prot))
 		return 0;
 
 	if ((end - addr) != P4D_SIZE)
@@ -190,7 +168,8 @@ static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
 }
 
 static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
+		pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -201,18 +180,19 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 	do {
 		next = p4d_addr_end(addr, end);
 
-		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
 			*mask |= PGTBL_P4D_MODIFIED;
 			continue;
 		}
 
-		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
 			return -ENOMEM;
 	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
 	return 0;
 }
 
-int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+static int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+		unsigned int max_page_shift)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -227,7 +207,7 @@ int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_a
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -240,6 +220,16 @@ int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_a
 	return err;
 }
 
+int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+{
+	unsigned int max_page_shift = PAGE_SHIFT;
+
+	if (iomap_allow_huge)
+		max_page_shift = P4D_SHIFT;
+
+	return vmap_range(addr, end, phys_addr, prot, max_page_shift);
+}
+
 #ifdef CONFIG_GENERIC_IOREMAP
 void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
 {
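Why the new query style allows constant folding: with
CONFIG_HAVE_ARCH_HUGE_VMAP unset, the stubs added to
include/linux/vmalloc.h are compile-time false, so the compiler can
discard the huge-mapping paths entirely. A sketch of what the compiler
effectively sees inside vmap_try_huge_pmd() in that configuration (an
illustration of the folding, not code from the patch):

        static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }

        /* ...so inside vmap_try_huge_pmd(): */
        if (!arch_vmap_pmd_supported(prot))     /* always taken here */
                return 0;                       /* whole function folds to 'return 0' */

The old boot-time scheme could not be folded this way, because the
arch_ioremap_*_supported() results were cached in runtime variables by
ioremap_huge_init().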
From patchwork Fri Aug 21 04:44:25 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11728119
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 6/8] mm: Move vmap_range from lib/ioremap.c to mm/vmalloc.c
Date: Fri, 21 Aug 2020 14:44:25 +1000
Message-Id: <20200821044427.736424-7-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

This is a generic kernel virtual memory mapper, not specific to ioremap.
Signed-off-by: Nicholas Piggin
---
 include/linux/vmalloc.h |   2 +
 mm/ioremap.c            | 192 ----------------------------------------
 mm/vmalloc.c            | 191 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 193 insertions(+), 192 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 787d77ad7536..e3590e93bfff 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -181,6 +181,8 @@ extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
 #ifdef CONFIG_MMU
+extern int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+		unsigned int max_page_shift);
 extern int map_kernel_range_noflush(unsigned long start, unsigned long size,
 				    pgprot_t prot, struct page **pages);
 int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot,
diff --git a/mm/ioremap.c b/mm/ioremap.c
index b0032dbadaf7..cdda0e022740 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -28,198 +28,6 @@ early_param("nohugeiomap", set_nohugeiomap);
 static const bool iomap_allow_huge = false;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
-{
-	pte_t *pte;
-	u64 pfn;
-
-	pfn = phys_addr >> PAGE_SHIFT;
-	pte = pte_alloc_kernel_track(pmd, addr, mask);
-	if (!pte)
-		return -ENOMEM;
-	do {
-		BUG_ON(!pte_none(*pte));
-		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
-		pfn++;
-	} while (pte++, addr += PAGE_SIZE, addr != end);
-	*mask |= PGTBL_PTE_MODIFIED;
-	return 0;
-}
-
-static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
-{
-	if (max_page_shift < PMD_SHIFT)
-		return 0;
-
-	if (!arch_vmap_pmd_supported(prot))
-		return 0;
-
-	if ((end - addr) != PMD_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(addr, PMD_SIZE))
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
-		return 0;
-
-	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
-		return 0;
-
-	return pmd_set_huge(pmd, phys_addr, prot);
-}
-
-static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
-		pgtbl_mod_mask *mask)
-{
-	pmd_t *pmd;
-	unsigned long next;
-
-	pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
-	if (!pmd)
-		return -ENOMEM;
-	do {
-		next = pmd_addr_end(addr, end);
-
-		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
-			*mask |= PGTBL_PMD_MODIFIED;
-			continue;
-		}
-
-		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
-			return -ENOMEM;
-	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
-{
-	if (max_page_shift < PUD_SHIFT)
-		return 0;
-
-	if (!arch_vmap_pud_supported(prot))
-		return 0;
-
-	if ((end - addr) != PUD_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(addr, PUD_SIZE))
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
-		return 0;
-
-	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
-		return 0;
-
-	return pud_set_huge(pud, phys_addr, prot);
-}
-
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
-		pgtbl_mod_mask *mask)
-{
-	pud_t *pud;
-	unsigned long next;
-
-	pud = pud_alloc_track(&init_mm, p4d, addr, mask);
-	if (!pud)
-		return -ENOMEM;
-	do {
-		next = pud_addr_end(addr, end);
-
-		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
-			*mask |= PGTBL_PUD_MODIFIED;
-			continue;
-		}
-
-		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
-			return -ENOMEM;
-	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
-{
-	if (max_page_shift < P4D_SHIFT)
-		return 0;
-
-	if (!arch_vmap_p4d_supported(prot))
-		return 0;
-
-	if ((end - addr) != P4D_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(addr, P4D_SIZE))
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
-		return 0;
-
-	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
-		return 0;
-
-	return p4d_set_huge(p4d, phys_addr, prot);
-}
-
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
-		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
-		pgtbl_mod_mask *mask)
-{
-	p4d_t *p4d;
-	unsigned long next;
-
-	p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
-	if (!p4d)
-		return -ENOMEM;
-	do {
-		next = p4d_addr_end(addr, end);
-
-		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
-			*mask |= PGTBL_P4D_MODIFIED;
-			continue;
-		}
-
-		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
-			return -ENOMEM;
-	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		unsigned int max_page_shift)
-{
-	pgd_t *pgd;
-	unsigned long start;
-	unsigned long next;
-	int err;
-	pgtbl_mod_mask mask = 0;
-
-	might_sleep();
-	BUG_ON(addr >= end);
-
-	start = addr;
-	pgd = pgd_offset_k(addr);
-	do {
-		next = pgd_addr_end(addr, end);
-		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
-		if (err)
-			break;
-	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
-
-	flush_cache_vmap(start, end);
-
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
-
-	return err;
-}
-
 int ioremap_page_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
 	unsigned int max_page_shift = PAGE_SHIFT;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3a1e45fd1626..129f10545bb1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -71,6 +71,197 @@ static void free_work(struct work_struct *w)
 }
 
 /*** Page table manipulation functions ***/
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, pgtbl_mod_mask *mask)
+{
+	pte_t *pte;
+	u64 pfn;
+
+	pfn = phys_addr >> PAGE_SHIFT;
+	pte = pte_alloc_kernel_track(pmd, addr, mask);
+	if (!pte)
+		return -ENOMEM;
+	do {
+		BUG_ON(!pte_none(*pte));
+		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
+		pfn++;
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+	*mask |= PGTBL_PTE_MODIFIED;
+	return 0;
+}
+
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
+{
+	if (max_page_shift < PMD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pmd_supported(prot))
+		return 0;
+
+	if ((end - addr) != PMD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(addr, PMD_SIZE))
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
+		return 0;
+
+	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
+		return 0;
+
+	return pmd_set_huge(pmd, phys_addr, prot);
+}
+
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
+		pgtbl_mod_mask *mask)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
+	if (!pmd)
+		return -ENOMEM;
+	do {
+		next = pmd_addr_end(addr, end);
+
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
+			*mask |= PGTBL_PMD_MODIFIED;
+			continue;
+		}
+
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+			return -ENOMEM;
+	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
+{
+	if (max_page_shift < PUD_SHIFT)
+		return 0;
+
+	if (!arch_vmap_pud_supported(prot))
+		return 0;
+
+	if ((end - addr) != PUD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(addr, PUD_SIZE))
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
+		return 0;
+
+	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
+		return 0;
+
+	return pud_set_huge(pud, phys_addr, prot);
+}
+
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
+		pgtbl_mod_mask *mask)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_alloc_track(&init_mm, p4d, addr, mask);
+	if (!pud)
+		return -ENOMEM;
+	do {
+		next = pud_addr_end(addr, end);
+
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
+			*mask |= PGTBL_PUD_MODIFIED;
+			continue;
+		}
+
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
+			return -ENOMEM;
+	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift)
+{
+	if (max_page_shift < P4D_SHIFT)
+		return 0;
+
+	if (!arch_vmap_p4d_supported(prot))
+		return 0;
+
+	if ((end - addr) != P4D_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(addr, P4D_SIZE))
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
+		return 0;
+
+	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
+		return 0;
+
+	return p4d_set_huge(p4d, phys_addr, prot);
+}
+
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+		phys_addr_t phys_addr, pgprot_t prot, unsigned int max_page_shift,
+		pgtbl_mod_mask *mask)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
+	if (!p4d)
+		return -ENOMEM;
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
+			*mask |= PGTBL_P4D_MODIFIED;
+			continue;
+		}
+
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
+			return -ENOMEM;
+	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+		unsigned int max_page_shift)
+{
+	pgd_t *pgd;
+	unsigned long start;
+	unsigned long next;
+	int err;
+	pgtbl_mod_mask mask = 0;
+
+	might_sleep();
+	BUG_ON(addr >= end);
+
+	start = addr;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
+		if (err)
+			break;
+	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
+
+	flush_cache_vmap(start, end);
+
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(start, end);
+
+	return err;
+}
 
 static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			     pgtbl_mod_mask *mask)
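With vmap_range() now exported from mm/vmalloc.c, an architecture's ioremap path can map a region with large pages directly. A hedged sketch of a hypothetical caller (example_ioremap_huge() is illustrative and not part of this series; get_vm_area(), free_vm_area(), VM_IOREMAP, and PMD_SHIFT are existing kernel interfaces):

/* Hypothetical example: map a PMD-aligned MMIO region, allowing the
 * mapper to use entries up to PMD size where the checks permit it. */
static void __iomem *example_ioremap_huge(phys_addr_t phys, unsigned long size)
{
	struct vm_struct *area;
	unsigned long vaddr;

	area = get_vm_area(size, VM_IOREMAP);
	if (!area)
		return NULL;

	vaddr = (unsigned long)area->addr;
	if (vmap_range(vaddr, vaddr + size, phys, PAGE_KERNEL, PMD_SHIFT)) {
		free_vm_area(area);
		return NULL;
	}

	return (void __iomem *)area->addr;
}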
From patchwork Fri Aug 21 04:44:26 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11728121
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 7/8] mm/vmalloc: add vmap_range_noflush variant
Date: Fri, 21 Aug 2020 14:44:26 +1000
Message-Id: <20200821044427.736424-8-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

As a side-effect, the order of the flush_cache_vmap() and arch_sync_kernel_mappings() calls is switched, but that now matches the other callers in this file.
Signed-off-by: Nicholas Piggin
---
 mm/vmalloc.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 129f10545bb1..4e5cb7c7f780 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -234,8 +234,8 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 	return 0;
 }
 
-int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
-		unsigned int max_page_shift)
+static int vmap_range_noflush(unsigned long addr, unsigned long end, phys_addr_t phys_addr,
+		pgprot_t prot, unsigned int max_page_shift)
 {
 	pgd_t *pgd;
 	unsigned long start;
@@ -255,14 +255,23 @@ int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgp
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
 
-	flush_cache_vmap(start, end);
-
 	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
 		arch_sync_kernel_mappings(start, end);
 
 	return err;
 }
 
+int vmap_range(unsigned long addr, unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+		unsigned int max_page_shift)
+{
+	int err;
+
+	err = vmap_range_noflush(addr, end, phys_addr, prot, max_page_shift);
+	flush_cache_vmap(addr, end);
+
+	return err;
+}
+
 static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			     pgtbl_mod_mask *mask)
 {
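The split lets a caller populate several subranges of page tables and issue a single cache flush at the end, which is the pattern the hugepage vmalloc patch below relies on. A sketch of that batching (example_map_chunks() is hypothetical, and assumes kernel context inside mm/vmalloc.c where the static vmap_range_noflush() is visible):

/* Illustration only: map several physically discontiguous chunks of
 * 1 << page_shift bytes each at consecutive virtual addresses, then
 * flush the whole virtual range once instead of once per chunk. */
static int example_map_chunks(unsigned long addr, phys_addr_t *chunks,
			      unsigned int nr, unsigned int page_shift)
{
	unsigned long start = addr;
	unsigned int i;
	int err;

	for (i = 0; i < nr; i++) {
		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
					 chunks[i], PAGE_KERNEL, page_shift);
		if (err)
			return err;
		addr += 1UL << page_shift;
	}

	flush_cache_vmap(start, addr);	/* one flush for everything */
	return 0;
}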
From patchwork Fri Aug 21 04:44:27 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 11728123
From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li, Jonathan Cameron
Subject: [PATCH v5 8/8] mm/vmalloc: Hugepage vmalloc mappings
Date: Fri, 21 Aug 2020 14:44:27 +1000
Message-Id: <20200821044427.736424-9-npiggin@gmail.com>
In-Reply-To: <20200821044427.736424-1-npiggin@gmail.com>
References: <20200821044427.736424-1-npiggin@gmail.com>

On platforms that define HAVE_ARCH_HUGE_VMAP and support PMD vmaps, vmalloc will attempt to allocate PMD-sized pages first, before falling back to small pages.

Allocations that use something other than PAGE_KERNEL protections are not permitted to use huge pages yet, because not all callers expect this (e.g., module allocations vs strict module rwx).

This reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.

This can result in more internal fragmentation and memory overhead for a given allocation, so a boot option, nohugevmalloc, is added to disable it.

Signed-off-by: Nicholas Piggin
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 .../admin-guide/kernel-parameters.txt |   2 +
 arch/Kconfig                          |   4 +
 arch/powerpc/Kconfig                  |   1 +
 include/linux/vmalloc.h               |   1 +
 mm/page_alloc.c                       |   4 +-
 mm/vmalloc.c                          | 188 +++++++++++++-----
 6 files changed, 152 insertions(+), 48 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index bdc1f33fd3d1..6f0b41289a90 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3190,6 +3190,8 @@
 	nohugeiomap	[KNL,X86,PPC] Disable kernel huge I/O mappings.
 
+	nohugevmalloc	[PPC] Disable kernel huge vmalloc mappings.
+
 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
 			Equivalent to smt=1.
diff --git a/arch/Kconfig b/arch/Kconfig
index af14a567b493..b2b89d629317 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -616,6 +616,10 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 config HAVE_ARCH_HUGE_VMAP
 	bool
 
+config HAVE_ARCH_HUGE_VMALLOC
+	depends on HAVE_ARCH_HUGE_VMAP
+	bool
+
 config ARCH_WANT_HUGE_PMD_SHARE
 	bool
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 95dfd8ef3d4b..044e5a94967a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -175,6 +175,7 @@ config PPC
 	select GENERIC_TIME_VSYSCALL
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
+	select HAVE_ARCH_HUGE_VMALLOC		if HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
 	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index e3590e93bfff..8f25dbaca0a1 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -58,6 +58,7 @@ struct vm_struct {
 	unsigned long		size;
 	unsigned long		flags;
 	struct page		**pages;
+	unsigned int		page_order;
 	unsigned int		nr_pages;
 	phys_addr_t		phys_addr;
 	const void		*caller;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e2bab486fea..d785e5335529 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8102,6 +8102,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 	void *table = NULL;
 	gfp_t gfp_flags;
 	bool virt;
+	bool huge;
 
 	/* allow the kernel cmdline to have a say */
 	if (!numentries) {
@@ -8169,6 +8170,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 		} else if (get_order(size) >= MAX_ORDER || hashdist) {
 			table = __vmalloc(size, gfp_flags);
 			virt = true;
+			huge = (find_vm_area(table)->page_order > 0);
 		} else {
 			/*
 			 * If bucketsize is not a power-of-two, we may free
@@ -8185,7 +8187,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 		pr_info("%s hash table entries: %ld (order: %d, %lu bytes, %s)\n",
 			tablename, 1UL << log2qty, ilog2(size) - PAGE_SHIFT, size,
-			virt ? "vmalloc" : "linear");
+			virt ? (huge ? "vmalloc hugepage" : "vmalloc") : "linear");
 
 	if (_hash_shift)
 		*_hash_shift = log2qty;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4e5cb7c7f780..564d7497e551 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -45,6 +45,19 @@
 #include "internal.h"
 #include "pgalloc-track.h"
 
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMALLOC
+static bool __ro_after_init vmap_allow_huge = true;
+
+static int __init set_nohugevmalloc(char *str)
+{
+	vmap_allow_huge = false;
+	return 0;
+}
+early_param("nohugevmalloc", set_nohugevmalloc);
+#else /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */
+static const bool vmap_allow_huge = false;
+#endif	/* CONFIG_HAVE_ARCH_HUGE_VMALLOC */
+
 bool is_vmalloc_addr(const void *x)
 {
 	unsigned long addr = (unsigned long)x;
@@ -468,31 +481,12 @@ static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long en
 	return 0;
 }
 
-/**
- * map_kernel_range_noflush - map kernel VM area with the specified pages
- * @addr: start of the VM area to map
- * @size: size of the VM area to map
- * @prot: page protection flags to use
- * @pages: pages to map
- *
- * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should
- * have been allocated using get_vm_area() and its friends.
- *
- * NOTE:
- * This function does NOT do any cache flushing. The caller is responsible for
- * calling flush_cache_vmap() on to-be-mapped areas before calling this
- * function.
- *
- * RETURNS:
- * 0 on success, -errno on failure.
- */
-int map_kernel_range_noflush(unsigned long addr, unsigned long size,
-			     pgprot_t prot, struct page **pages)
+static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages)
 {
 	unsigned long start = addr;
-	unsigned long end = addr + size;
-	unsigned long next;
 	pgd_t *pgd;
+	unsigned long next;
 	int err = 0;
 	int nr = 0;
 	pgtbl_mod_mask mask = 0;
@@ -514,6 +508,65 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
 	return 0;
 }
 
+static int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+	WARN_ON(page_shift < PAGE_SHIFT);
+
+	if (page_shift == PAGE_SHIFT) {
+		return vmap_small_pages_range_noflush(addr, end, prot, pages);
+	} else {
+		unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
+
+		for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+			int err;
+
+			err = vmap_range_noflush(addr, addr + (1UL << page_shift),
+					__pa(page_address(pages[i])), prot, page_shift);
+			if (err)
+				return err;
+
+			addr += 1UL << page_shift;
+		}
+
+		return 0;
+	}
+}
+
+static int vmap_pages_range(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+	int err;
+
+	err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
+	flush_cache_vmap(addr, end);
+	return err;
+}
+
+/**
+ * map_kernel_range_noflush - map kernel VM area with the specified pages
+ * @addr: start of the VM area to map
+ * @size: size of the VM area to map
+ * @prot: page protection flags to use
+ * @pages: pages to map
+ *
+ * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should
+ * have been allocated using get_vm_area() and its friends.
+ *
+ * NOTE:
+ * This function does NOT do any cache flushing. The caller is responsible for
+ * calling flush_cache_vmap() on to-be-mapped areas before calling this
+ * function.
+ *
+ * RETURNS:
+ * 0 on success, -errno on failure.
+ */
+int map_kernel_range_noflush(unsigned long addr, unsigned long size,
+			     pgprot_t prot, struct page **pages)
+{
+	return vmap_pages_range_noflush(addr, addr + size, prot, pages, PAGE_SHIFT);
+}
+
 int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot,
 		struct page **pages)
 {
@@ -2274,9 +2327,11 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!size))
 		return NULL;
 
-	if (flags & VM_IOREMAP)
-		align = 1ul << clamp_t(int, get_count_order_long(size),
-				       PAGE_SHIFT, IOREMAP_MAX_ORDER);
+	if (flags & VM_IOREMAP) {
+		align = max(align,
+				1ul << clamp_t(int, get_count_order_long(size),
+					PAGE_SHIFT, IOREMAP_MAX_ORDER));
+	}
 
 	area = kzalloc_node(sizeof(*area), gfp_mask & GFP_RECLAIM_MASK, node);
 	if (unlikely(!area))
@@ -2391,6 +2446,7 @@ static inline void set_area_direct_map(const struct vm_struct *area,
 {
 	int i;
 
+	/* HUGE_VMALLOC passes small pages to set_direct_map */
 	for (i = 0; i < area->nr_pages; i++)
 		if (page_address(area->pages[i]))
 			set_direct_map(area->pages[i]);
@@ -2424,11 +2480,12 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
 	 * map. Find the start and end range of the direct mappings to make sure
 	 * the vm_unmap_aliases() flush includes the direct map.
 	 */
-	for (i = 0; i < area->nr_pages; i++) {
+	for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
 		unsigned long addr = (unsigned long)page_address(area->pages[i]);
 		if (addr) {
+			unsigned long page_size = PAGE_SIZE << area->page_order;
 			start = min(addr, start);
-			end = max(addr + PAGE_SIZE, end);
+			end = max(addr + page_size, end);
 			flush_dmap = 1;
 		}
 	}
@@ -2471,11 +2528,11 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	if (deallocate_pages) {
 		int i;
 
-		for (i = 0; i < area->nr_pages; i++) {
+		for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
 			struct page *page = area->pages[i];
 
 			BUG_ON(!page);
-			__free_pages(page, 0);
+			__free_pages(page, area->page_order);
 		}
 		atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
@@ -2614,9 +2671,12 @@ void *vmap(struct page **pages, unsigned int count,
 EXPORT_SYMBOL(vmap);
 
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
-				 pgprot_t prot, int node)
+				 pgprot_t prot, unsigned int page_shift, int node)
 {
 	struct page **pages;
+	unsigned long addr = (unsigned long)area->addr;
+	unsigned long size = get_vm_area_size(area);
+	unsigned int page_order = page_shift - PAGE_SHIFT;
 	unsigned int nr_pages, array_size, i;
 	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
 	const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
@@ -2624,7 +2684,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 			0 : __GFP_HIGHMEM;
 
-	nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
+	nr_pages = size >> PAGE_SHIFT;
 	array_size = (nr_pages * sizeof(struct page *));
 
 	/* Please note that the recursion is strictly bounded. */
@@ -2643,29 +2703,29 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 
 	area->pages = pages;
 	area->nr_pages = nr_pages;
+	area->page_order = page_order;
 
-	for (i = 0; i < area->nr_pages; i++) {
+	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
 		struct page *page;
+		int p;
 
-		if (node == NUMA_NO_NODE)
-			page = alloc_page(alloc_mask|highmem_mask);
-		else
-			page = alloc_pages_node(node, alloc_mask|highmem_mask, 0);
-
+		page = alloc_pages_node(node, alloc_mask|highmem_mask, page_order);
 		if (unlikely(!page)) {
 			/* Successfully allocated i pages, free them in __vunmap() */
 			area->nr_pages = i;
 			atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 			goto fail;
 		}
-		area->pages[i] = page;
+
+		for (p = 0; p < (1U << page_order); p++)
+			area->pages[i + p] = page + p;
+
 		if (gfpflags_allow_blocking(gfp_mask))
 			cond_resched();
 	}
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
-	if (map_kernel_range((unsigned long)area->addr, get_vm_area_size(area),
-			prot, pages) < 0)
+	if (vmap_pages_range(addr, addr + size, prot, pages, page_shift) < 0)
 		goto fail;
 
 	return area->addr;
@@ -2673,7 +2733,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 fail:
 	warn_alloc(gfp_mask, NULL,
 			  "vmalloc: allocation failure, allocated %ld of %ld bytes",
-			  (area->nr_pages*PAGE_SIZE), area->size);
+			  (area->nr_pages*PAGE_SIZE), size);
 	__vfree(area->addr);
 	return NULL;
 }
@@ -2704,19 +2764,42 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	struct vm_struct *area;
 	void *addr;
 	unsigned long real_size = size;
+	unsigned long real_align = align;
+	unsigned int shift = PAGE_SHIFT;
 
-	size = PAGE_ALIGN(size);
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages())
 		goto fail;
 
-	area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED |
+	if (vmap_allow_huge && (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))) {
+		unsigned long size_per_node;
+
+		/*
+		 * Try huge pages. Only try for PAGE_KERNEL allocations,
+		 * others like modules don't yet expect huge pages in
+		 * their allocations due to apply_to_page_range not
+		 * supporting them.
+		 */
+
+		size_per_node = size;
+		if (node == NUMA_NO_NODE)
+			size_per_node /= num_online_nodes();
+		if (size_per_node >= PMD_SIZE) {
+			shift = PMD_SHIFT;
+			align = max(real_align, 1UL << shift);
+			size = ALIGN(real_size, 1UL << shift);
+		}
+	}
+
+again:
+	size = PAGE_ALIGN(size);
+	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
 				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
 
-	addr = __vmalloc_area_node(area, gfp_mask, prot, node);
+	addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
 	if (!addr)
-		return NULL;
+		goto fail;
 
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -2730,8 +2813,19 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	return addr;
 
 fail:
-	warn_alloc(gfp_mask, NULL,
+	if (shift > PAGE_SHIFT) {
+		free_vm_area(area);
+		shift = PAGE_SHIFT;
+		align = real_align;
+		size = real_size;
+		goto again;
+	}
+
+	if (!area) {
+		/* Warn for area allocation, page allocations already warn */
+		warn_alloc(gfp_mask, NULL,
 			"vmalloc: allocation failure: %lu bytes", real_size);
+	}
 	return NULL;
 }
 
@@ -3730,7 +3824,7 @@ static int s_show(struct seq_file *m, void *p)
 		seq_printf(m, " %pS", v->caller);
 
 	if (v->nr_pages)
-		seq_printf(m, " pages=%d", v->nr_pages);
+		seq_printf(m, " pages=%d order=%d", v->nr_pages, v->page_order);
 
 	if (v->phys_addr)
 		seq_printf(m, " phys=%pa", &v->phys_addr);
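The policy the __vmalloc_node_range() hunk implements is small enough to summarize: attempt PMD-sized pages when each node's share of the request is at least one huge page, pad the size accordingly, and fall back to base pages on failure. A self-contained userspace sketch (pick_shift() and the constant values are hypothetical stand-ins; a PMD_SHIFT of 21 is an assumption):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21	/* assumption: 2MB PMD pages */
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* Mirrors the shift selection above; dividing by the node count models
 * the node == NUMA_NO_NODE case, where the allocation is interleaved
 * across online nodes. */
static unsigned int pick_shift(unsigned long size, int nr_nodes)
{
	unsigned long size_per_node = size / nr_nodes;

	return (size_per_node >= PMD_SIZE) ? PMD_SHIFT : PAGE_SHIFT;
}

int main(void)
{
	unsigned long req = 5UL << 20;	/* 5MB request on 2 nodes */
	unsigned int shift = pick_shift(req, 2);

	/* 2.5MB per node >= 2MB, so the huge path is tried first and the
	 * size is padded to a PMD multiple (6MB here). A real caller
	 * retries with shift = PAGE_SHIFT on failure: the "goto again"
	 * path in the patch. */
	printf("shift=%u, padded size=%lu\n", shift,
	       ALIGN_UP(req, 1UL << shift));
	return 0;
}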