From patchwork Wed Sep 2 18:06:28 2020
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 11751507
From: Zi Yan <zi.yan@sent.com>
To: linux-mm@kvack.org, Roman Gushchin
Cc: Rik van Riel, "Kirill A. Shutemov", Matthew Wilcox, Shakeel Butt,
    Yang Shi, David Nellans, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 16/16] mm: thp: use cma reservation for pud thp allocation.
Date: Wed, 2 Sep 2020 14:06:28 -0400
Message-Id: <20200902180628.4052244-17-zi.yan@sent.com>
In-Reply-To: <20200902180628.4052244-1-zi.yan@sent.com>
References: <20200902180628.4052244-1-zi.yan@sent.com>
Reply-To: Zi Yan

From: Zi Yan

Share the hugepage_cma reservation with hugetlb for PUD THP allocation.
The reserved CMA regions can still be used for movable page allocations.

During a 1GB page split, all subpages are cleared from the CMA bitmap,
since they are no longer 1GB pages and will be freed via the normal path
instead of cma_release().
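[Editorial note: the lifecycle this patch establishes can be sketched as
below. This is an illustrative fragment, not part of the patch;
hugepage_cma[], HPAGE_PUD_NR, and HPAGE_PUD_ORDER come from earlier
patches in this series, and the three calls live in different paths in
practice (fault, free, and split).]

	/* Allocation: a PUD-sized (1GB) THP is carved out of the CMA pool
	 * shared with hugetlb; the pool's bitmap marks the range as in use. */
	struct page *thp = cma_alloc(hugepage_cma[nid], HPAGE_PUD_NR,
				     HPAGE_PUD_ORDER, true);

	/* Freed while still huge: cma_release() returns the pages and clears
	 * the CMA bitmap for the whole 1GB range in one step. */
	cma_release(hugepage_cma[page_to_nid(thp)], thp, HPAGE_PUD_NR);

	/* Split before free: the bitmap must be cleared at split time instead,
	 * because the subpages are later freed through the normal page
	 * allocator and never pass through cma_release(). */
	cma_clear_bitmap_if_in_range(hugepage_cma[page_to_nid(thp)], thp,
				     HPAGE_PUD_NR);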
Signed-off-by: Zi Yan
---
 include/linux/cma.h     |  3 +++
 include/linux/huge_mm.h | 10 ++++++++++
 mm/cma.c                | 31 +++++++++++++++++++++++++++++++
 mm/huge_memory.c        | 30 ++++++++++++++++++++++++++++++
 mm/mempolicy.c          | 12 +++++++++---
 mm/page_alloc.c         |  3 ++-
 6 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index abcf7ab712f9..b765d19e4052 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -46,6 +46,9 @@ extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align
 			      bool no_warn);
 extern bool cma_release(struct cma *cma, const struct page *pages,
 			unsigned int count);
+extern bool cma_clear_bitmap_if_in_range(struct cma *cma, const struct page *pages,
+					 unsigned int count);
+
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
 
 extern void cma_reserve(int min_order, unsigned long requested_size,

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3bf8d8a09f08..5a45877055bb 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -24,6 +24,8 @@ extern struct page *follow_trans_huge_pud(struct vm_area_struct *vma,
 					  unsigned long addr,
 					  pud_t *pud,
 					  unsigned int flags);
+extern struct page *alloc_thp_pud_page(int nid);
+extern bool free_thp_pud_page(struct page *page, int order);
 #else
 static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 {
@@ -43,6 +45,14 @@ struct page *follow_trans_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+static inline struct page *alloc_thp_pud_page(int nid)
+{
+	return NULL;
+}
+static inline bool free_thp_pud_page(struct page *page, int order)
+{
+	return false;
+}
 #endif
 
 extern vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);

diff --git a/mm/cma.c b/mm/cma.c
index aa3a17d8a191..3f721b8f7ccd 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -532,6 +532,37 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 	return true;
 }
 
+/**
+ * cma_clear_bitmap_if_in_range() - clear the CMA bitmap for the given pages
+ * @cma: Contiguous memory region in which the allocation was performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function clears the bitmap for memory allocated by cma_alloc().
+ * It returns false when the provided pages do not belong to the contiguous
+ * area, and true otherwise.
+ */
+bool cma_clear_bitmap_if_in_range(struct cma *cma, const struct page *pages,
+				  unsigned int count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pfn = page_to_pfn(pages);
+
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	if (pfn + count > cma->base_pfn + cma->count)
+		return false;
+
+	cma_clear_bitmap(cma, pfn, count);
+
+	return true;
+}
+
 int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 {
 	int i;

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e1440a13da63..2020b843fd97 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -33,6 +33,7 @@
 #include <…>
 #include <…>
 #include <…>
+#include <linux/cma.h>
 #include <…>
 #include <…>
 
@@ -64,6 +65,10 @@ static struct shrinker deferred_split_shrinker;
 static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 
+#ifdef CONFIG_CMA
+extern struct cma *hugepage_cma[MAX_NUMNODES];
+#endif
+
 bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
 	/* The addr is used to check if the vma size fits */
@@ -2526,6 +2531,20 @@ static void __split_huge_pud_page(struct page *page, struct list_head *list,
 	/* no file-back page support yet */
 	VM_BUG_ON(!PageAnon(page));
 
+	/*
+	 * The subpages of a split 1GB page are freed through the normal page
+	 * allocator rather than cma_release(), so drop them from the CMA
+	 * bitmap now. Keep the call outside VM_BUG_ON(): the condition is
+	 * not evaluated when CONFIG_DEBUG_VM is off, and the bitmap would
+	 * never be cleared.
+	 */
+	if (IS_ENABLED(CONFIG_CMA)) {
+		struct cma *cma = hugepage_cma[page_to_nid(head)];
+		bool cleared = cma_clear_bitmap_if_in_range(cma, head,
+							    thp_nr_pages(head));
+
+		VM_BUG_ON(!cleared);
+	}
+
 	for (i = HPAGE_PUD_NR - HPAGE_PMD_NR; i >= 1; i -= HPAGE_PMD_NR) {
 		__split_huge_pud_page_tail(head, i, lruvec, list);
 	}
@@ -3753,3 +3772,21 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
 }
 #endif
+
+struct page *alloc_thp_pud_page(int nid)
+{
+	struct page *page = NULL;
+#ifdef CONFIG_CMA
+	page = cma_alloc(hugepage_cma[nid], HPAGE_PUD_NR, HPAGE_PUD_ORDER, true);
+#endif
+	return page;
+}
+
+bool free_thp_pud_page(struct page *page, int order)
+{
+	bool ret = false;
+#ifdef CONFIG_CMA
+	ret = cma_release(hugepage_cma[page_to_nid(page)], page, 1 << order);
+#endif
+	return ret;
+}

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ … @@
 		if (order > MAX_ORDER) {
-			page = alloc_contig_pages(1UL << order, …
+			…
@@ … @@
 		if (order > MAX_ORDER) {
-			page = alloc_contig_pages(1UL << order, …
+			…

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ … @@
 	if (order >= MAX_ORDER) {
 		destroy_compound_gigantic_page(page, order);
-		free_contig_range(page_to_pfn(page), 1 << order);
+		if (!free_thp_pud_page(page, order))
+			free_contig_range(page_to_pfn(page), 1 << order);
 	} else {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 		local_irq_save(flags);
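[Editorial note on the order checks in the last two hunks: a PUD THP is far
larger than anything the buddy allocator can hand out, which is why these
gigantic-page branches exist. A minimal illustration, assuming x86_64 with
4KB base pages and v5.9-era constants; the values are quoted for
illustration, not taken from this series:]

	/* Size arithmetic behind the "order > MAX_ORDER" branches. */
	#define PAGE_SHIFT	12				/* 4KB base page     */
	#define PUD_SHIFT	30				/* 1GB per PUD entry */
	#define HPAGE_PUD_ORDER	(PUD_SHIFT - PAGE_SHIFT)	/* = 18              */
	#define HPAGE_PUD_NR	(1UL << HPAGE_PUD_ORDER)	/* = 262144 pages    */

	/* The buddy allocator only serves orders 0..MAX_ORDER-1 (MAX_ORDER is
	 * 11 here, i.e. at most 4MB of contiguous memory), so an order-18
	 * request must come from CMA via cma_alloc() or, failing that, from
	 * alloc_contig_pages(). */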