From patchwork Mon Nov 2 14:39:21 2020
X-Patchwork-Submitter: Chris Goldsworthy
X-Patchwork-Id: 11874051
From: Chris Goldsworthy
To: Andrew Morton, Minchan Kim, Nitin Gupta, Sergey Senozhatsky
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Heesub Shin,
    Kyungmin Park, Vinayak Menon, Chris Goldsworthy
Subject: [PATCH 1/2] cma: redirect page allocation to CMA
Date: Mon, 2 Nov 2020 06:39:21 -0800
Message-Id: <3f99bcf028ea80a0de785f99f60a1c3a7bd89a9e.1604282969.git.cgoldswo@codeaurora.org>
X-Mailer: git-send-email 2.7.4

From: Heesub Shin

CMA pages are designed to be used as a fallback for movable allocations
and cannot be used for non-movable allocations.  If CMA pages are
utilized poorly, non-movable allocations may end up getting starved if
all regular movable pages are allocated and the only pages left are
CMA.  Always using CMA pages first creates unacceptable performance
problems.  As a midway alternative, use CMA pages for certain userspace
allocations.  These userspace pages can be migrated or dropped quickly,
which gives decent utilization.
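
The gist of the series can be summarized by the sketch below.  This is
illustrative only, not the code added in the diff: the function name is
made up for the example, and in the actual patch the gating is split
across rmqueue(), __rmqueue_cma() and current_alloc_flags().  The
helpers it calls are the existing buddy-allocator ones.

  /*
   * Illustrative sketch (hypothetical function name): movable
   * allocations that opt in with the new __GFP_CMA flag are tried
   * against CMA pageblocks first; all other allocations never dip
   * into CMA.
   */
  static struct page *sketch_rmqueue(struct zone *zone, unsigned int order,
				     int migratetype, gfp_t gfp_mask)
  {
	struct page *page = NULL;

	if (IS_ENABLED(CONFIG_CMA) && (gfp_mask & __GFP_CMA) &&
	    migratetype == MIGRATE_MOVABLE)
		page = __rmqueue_cma_fallback(zone, order);	/* CMA first */

	if (!page)
		page = __rmqueue_smallest(zone, order, migratetype);

	return page;
  }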
Signed-off-by: Kyungmin Park
Signed-off-by: Heesub Shin
Signed-off-by: Vinayak Menon
[cgoldswo@codeaurora.org: Place in bugfixes]
Signed-off-by: Chris Goldsworthy
Reported-by: kernel test robot
---
 include/linux/gfp.h     | 15 +++++++++
 include/linux/highmem.h |  4 ++-
 include/linux/mmzone.h  |  4 +++
 mm/page_alloc.c         | 83 +++++++++++++++++++++++++++++++------------------
 4 files changed, 74 insertions(+), 32 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c603237..e80b7d2 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -39,11 +39,21 @@ struct vm_area_struct;
 #define ___GFP_HARDWALL		0x100000u
 #define ___GFP_THISNODE		0x200000u
 #define ___GFP_ACCOUNT		0x400000u
+#ifdef CONFIG_CMA
+#define ___GFP_CMA		0x800000u
+#else
+#define ___GFP_CMA		0
+#endif
 #ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_CMA
+#define ___GFP_NOLOCKDEP	0x1000000u
+#else
 #define ___GFP_NOLOCKDEP	0x800000u
+#endif
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
+
 /* If the above are modified, __GFP_BITS_SHIFT may need updating */

 /*
@@ -57,6 +67,7 @@ struct vm_area_struct;
 #define __GFP_HIGHMEM	((__force gfp_t)___GFP_HIGHMEM)
 #define __GFP_DMA32	((__force gfp_t)___GFP_DMA32)
 #define __GFP_MOVABLE	((__force gfp_t)___GFP_MOVABLE)  /* ZONE_MOVABLE allowed */
+#define __GFP_CMA	((__force gfp_t)___GFP_CMA)
 #define GFP_ZONEMASK	(__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)

 /**
@@ -224,7 +235,11 @@ struct vm_area_struct;
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

 /* Room for N __GFP_FOO bits */
+#ifdef CONFIG_CMA
+#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
+#else
 #define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#endif
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))

 /**
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 14e6202..35f052b 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -274,7 +274,9 @@ static inline struct page *
 alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 				   unsigned long vaddr)
 {
-	return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr);
+	return __alloc_zeroed_user_highpage(
+			__GFP_MOVABLE|__GFP_CMA, vma,
+			vaddr);
 }

 static inline void clear_highpage(struct page *page)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fb3bf69..3f913be 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -471,6 +471,10 @@ struct zone {
 	struct pglist_data	*zone_pgdat;
 	struct per_cpu_pageset __percpu *pageset;

+#ifdef CONFIG_CMA
+	bool			cma_alloc;
+#endif
+
 #ifndef CONFIG_SPARSEMEM
 	/*
 	 * Flags for a pageblock_nr_pages block. See pageblock-flags.h.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d772206..f938de7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2860,35 +2860,34 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 {
 	struct page *page;

-#ifdef CONFIG_CMA
-	/*
-	 * Balance movable allocations between regular and CMA areas by
-	 * allocating from CMA when over half of the zone's free memory
-	 * is in the CMA area.
-	 */
-	if (alloc_flags & ALLOC_CMA &&
-	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-		page = __rmqueue_cma_fallback(zone, order);
-		if (page)
-			return page;
-	}
-#endif
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
-	if (unlikely(!page)) {
-		if (alloc_flags & ALLOC_CMA)
-			page = __rmqueue_cma_fallback(zone, order);
-
-		if (!page && __rmqueue_fallback(zone, order, migratetype,
-								alloc_flags))
-			goto retry;
-	}
+	if (unlikely(!page) && __rmqueue_fallback(zone, order, migratetype,
+						  alloc_flags))
+		goto retry;

 	trace_mm_page_alloc_zone_locked(page, order, migratetype);
 	return page;
 }

+static struct page *__rmqueue_cma(struct zone *zone, unsigned int order,
+				  int migratetype,
+				  unsigned int alloc_flags)
+{
+	struct page *page = 0;
+
+#ifdef CONFIG_CMA
+	if (migratetype == MIGRATE_MOVABLE && !zone->cma_alloc)
+		page = __rmqueue_cma_fallback(zone, order);
+	else
+#endif
+		page = __rmqueue_smallest(zone, order, migratetype);
+
+	trace_mm_page_alloc_zone_locked(page, order, MIGRATE_CMA);
+	return page;
+}
+
 /*
  * Obtain a specified number of elements from the buddy allocator, all under
  * a single hold of the lock, for efficiency.  Add them to the supplied list.
@@ -2896,14 +2895,20 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
  */
 static int rmqueue_bulk(struct zone *zone, unsigned int order,
 			unsigned long count, struct list_head *list,
-			int migratetype, unsigned int alloc_flags)
+			int migratetype, unsigned int alloc_flags, int cma)
 {
 	int i, alloced = 0;

 	spin_lock(&zone->lock);
 	for (i = 0; i < count; ++i) {
-		struct page *page = __rmqueue(zone, order, migratetype,
-								alloc_flags);
+		struct page *page;
+
+		if (cma)
+			page = __rmqueue_cma(zone, order, migratetype,
+					     alloc_flags);
+		else
+			page = __rmqueue(zone, order, migratetype, alloc_flags);
+
 		if (unlikely(page == NULL))
 			break;

@@ -3388,7 +3393,8 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 			unsigned int alloc_flags,
 			struct per_cpu_pages *pcp,
-			struct list_head *list)
+			struct list_head *list,
+			gfp_t gfp_flags)
 {
 	struct page *page;

@@ -3396,7 +3402,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 		if (list_empty(list)) {
 			pcp->count += rmqueue_bulk(zone, 0,
 					pcp->batch, list,
-					migratetype, alloc_flags);
+					migratetype, alloc_flags,
+					gfp_flags && __GFP_CMA);
 			if (unlikely(list_empty(list)))
 				return NULL;
 		}
@@ -3422,7 +3429,8 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	local_irq_save(flags);
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	list = &pcp->lists[migratetype];
-	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
+	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list,
+				 gfp_flags);
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
 		zone_statistics(preferred_zone, zone);
@@ -3448,7 +3456,7 @@ struct page *rmqueue(struct zone *preferred_zone,
 	 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
 	 * we need to skip it when CMA area isn't allowed.
 	 */
-	if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
+	if (!IS_ENABLED(CONFIG_CMA) || gfp_flags & __GFP_CMA ||
 			migratetype != MIGRATE_MOVABLE) {
 		page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
 				migratetype, alloc_flags);
@@ -3476,8 +3484,14 @@ struct page *rmqueue(struct zone *preferred_zone,
 		if (page)
 			trace_mm_page_alloc_zone_locked(page, order, migratetype);
 	}
-	if (!page)
-		page = __rmqueue(zone, order, migratetype, alloc_flags);
+	if (!page) {
+		if (gfp_flags & __GFP_CMA)
+			page = __rmqueue_cma(zone, order, migratetype,
+					     alloc_flags);
+		else
+			page = __rmqueue(zone, order, migratetype,
+					 alloc_flags);
+	}
 	} while (page && check_new_pages(page, order));
 	spin_unlock(&zone->lock);
 	if (!page)
@@ -3790,7 +3804,8 @@ static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
 	unsigned int pflags = current->flags;

 	if (!(pflags & PF_MEMALLOC_NOCMA) &&
-			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE &&
+			gfp_mask & __GFP_CMA)
 		alloc_flags |= ALLOC_CMA;

 #endif
@@ -8529,6 +8544,9 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	if (ret)
 		return ret;

+#ifdef CONFIG_CMA
+	cc.zone->cma_alloc = 1;
+#endif
 	/*
 	 * In case of -EBUSY, we'd like to know which page causes problem.
 	 * So, just fall through. test_pages_isolated() has a tracepoint
@@ -8610,6 +8628,9 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 done:
 	undo_isolate_page_range(pfn_max_align_down(start),
 				pfn_max_align_up(end), migratetype);
+#ifdef CONFIG_CMA
+	cc.zone->cma_alloc = 0;
+#endif
 	return ret;
 }
 EXPORT_SYMBOL(alloc_contig_range);

From patchwork Mon Nov 2 14:39:22 2020
X-Patchwork-Submitter: Chris Goldsworthy
X-Patchwork-Id: 11874053
From: Chris Goldsworthy
To: Andrew Morton, Minchan Kim, Nitin Gupta, Sergey Senozhatsky
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vinayak Menon,
    Chris Goldsworthy
Subject: [PATCH 2/2] zram: allow zram to allocate CMA pages
Date: Mon, 2 Nov 2020 06:39:22 -0800
Message-Id: <4c77bb100706b714213ff840d827a48e40ac9177.1604282969.git.cgoldswo@codeaurora.org>
X-Mailer: git-send-email 2.7.4

From: Vinayak Menon

Though zram pages are movable, they are currently not allowed to enter
MIGRATE_CMA pageblocks, even though zram is not seen to pin pages for
long enough to cause a problem for CMA.  Moreover, allowing zram to pick
CMA pages can be helpful in cases observed where zram order-0
allocations fail while there are plenty of free CMA pages, resulting in
kswapd or direct reclaim not making enough progress.

Signed-off-by: Vinayak Menon
Signed-off-by: Chris Goldsworthy
---
 drivers/block/zram/zram_drv.c | 5 +++--
 mm/zsmalloc.c                 | 4 ++--
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9a1e6ee..4b6b16d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1380,13 +1380,14 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 				__GFP_KSWAPD_RECLAIM |
 				__GFP_NOWARN |
 				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+				__GFP_MOVABLE |
+				__GFP_CMA);
 	if (!handle) {
 		zcomp_stream_put(zram->comp);
 		atomic64_inc(&zram->stats.writestall);
 		handle = zs_malloc(zram->mem_pool, comp_len,
 				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
+				__GFP_MOVABLE | __GFP_CMA);
 		if (handle)
 			goto compress_again;
 		return -ENOMEM;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b03bee2..16ba318 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -351,7 +351,7 @@ static void destroy_cache(struct zs_pool *pool)
 static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
 {
 	return (unsigned long)kmem_cache_alloc(pool->handle_cachep,
-			gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+			gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE|__GFP_CMA));
 }

 static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
@@ -362,7 +362,7 @@ static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
 static struct zspage *cache_alloc_zspage(struct zs_pool *pool, gfp_t flags)
 {
 	return kmem_cache_alloc(pool->zspage_cachep,
-			flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+			flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE|__GFP_CMA));
 }

 static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
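
Taken together with patch 1, the end-to-end gating for a zram allocation
can be summarized by the sketch below.  This is illustrative only: the
helper name is hypothetical, and the real checks live in
current_alloc_flags() and rmqueue() from the previous patch.

  /*
   * Illustrative sketch (hypothetical helper): a zram/zsmalloc page
   * allocation may be served from MIGRATE_CMA pageblocks only when it
   * is movable, explicitly tagged with __GFP_CMA, and the task has not
   * opted out of CMA via PF_MEMALLOC_NOCMA.
   */
  static bool may_use_cma(gfp_t gfp_mask)
  {
	if (!IS_ENABLED(CONFIG_CMA))
		return false;
	if (current->flags & PF_MEMALLOC_NOCMA)
		return false;
	return (gfp_mask & __GFP_CMA) &&
	       gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE;
  }

With this patch, zram's zs_malloc() calls pass __GFP_CMA alongside
__GFP_MOVABLE and so satisfy the condition above, while zsmalloc keeps
masking the flag off for its own kmem_cache allocations of handles and
zspage structs.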