From patchwork Fri Nov 6 00:51:45 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11885601
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v3 4/6] mm/thp: add THP allocation helper
Date: Thu, 5 Nov 2020 16:51:45 -0800
Message-ID: <20201106005147.20113-5-rcampbell@nvidia.com>
In-Reply-To: <20201106005147.20113-1-rcampbell@nvidia.com>
References: <20201106005147.20113-1-rcampbell@nvidia.com>

Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 include/linux/gfp.h | 10 ++++++++++
 mm/huge_memory.c    | 14 ++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c603237e006c..242398c4b556 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -564,6 +564,16 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
 #define alloc_page_vma(gfp_mask, vma, addr)			\
 	alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false)
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern struct page *alloc_transhugepage(struct vm_area_struct *vma,
+					unsigned long addr);
+#else
+static inline struct page *alloc_transhugepage(struct vm_area_struct *vma,
+					       unsigned long addr)
+{
+	return NULL;
+}
+#endif
 
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a073e66d0ee2..c2c1d3e7c35f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -765,6 +765,20 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	return __do_huge_pmd_anonymous_page(vmf, page, gfp);
 }
 
+struct page *alloc_transhugepage(struct vm_area_struct *vma,
+				 unsigned long haddr)
+{
+	gfp_t gfp;
+	struct page *page;
+
+	gfp = alloc_hugepage_direct_gfpmask(vma);
+	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
+	if (page)
+		prep_transhuge_page(page);
+	return page;
+}
+EXPORT_SYMBOL_GPL(alloc_transhugepage);
+
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
 		pgtable_t pgtable)