From patchwork Fri Apr 7 10:42:11 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 13204652
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [RFC PATCH] mm: introduce defer free for cma
Date: Fri, 7 Apr 2023 18:42:11 +0800
Message-ID: <1680864131-4675-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Contiguous page blocks are expensive for the system to assemble. Introduce a
defer-free mechanism that buffers released CMA ranges so that later
allocations can be served from the buffer instead of going through
alloc_contig_range() again. A shrinker ensures the buffered page blocks are
handed back to the buddy allocator when the system comes under memory
pressure.
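The idea can be illustrated with a small, self-contained userspace sketch
(plain C, not the kernel code itself; the names deferred_range, defer_release
and defer_drain and the pfn/count values below are illustrative only):
released ranges are parked on a list instead of being freed right away, and a
drain step models the shrinker handing pages back under memory pressure.

/*
 * Minimal sketch of the defer-free idea, under the assumptions stated above.
 */
#include <stdio.h>
#include <stdlib.h>

struct deferred_range {
	unsigned long pfn;		/* first page of the contiguous block */
	unsigned long count;		/* number of pages in the block */
	struct deferred_range *next;
};

static struct deferred_range *defer_list;	/* parked, not yet freed */
static unsigned long defer_count;		/* total pages parked */

/* Release path: park the range instead of freeing it immediately. */
static void defer_release(unsigned long pfn, unsigned long count)
{
	struct deferred_range *r = malloc(sizeof(*r));

	if (!r) {
		/* fall back to an immediate free if bookkeeping fails */
		printf("free %lu pages at pfn %lu immediately\n", count, pfn);
		return;
	}
	r->pfn = pfn;
	r->count = count;
	r->next = defer_list;
	defer_list = r;
	defer_count += count;
}

/* Shrinker-like drain: free whole parked ranges until ~nr_to_scan pages are back. */
static unsigned long defer_drain(unsigned long nr_to_scan)
{
	unsigned long freed = 0;

	while (defer_list && freed < nr_to_scan) {
		struct deferred_range *r = defer_list;

		defer_list = r->next;
		printf("free %lu pages at pfn %lu under pressure\n",
		       r->count, r->pfn);
		freed += r->count;
		defer_count -= r->count;
		free(r);
	}
	return freed;
}

int main(void)
{
	defer_release(1024, 64);	/* a 64-page block is released */
	defer_release(4096, 128);	/* a 128-page block is released */
	printf("parked: %lu pages\n", defer_count);

	/* memory pressure: ask for 100 pages back */
	printf("reclaimed: %lu pages\n", defer_drain(100));
	printf("parked after reclaim: %lu pages\n", defer_count);
	return 0;
}

A later cma_alloc() in the same range can then be satisfied straight from the
parked list, which is what the cma_defer_area_fetch() path below does.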
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 mm/cma.c | 166 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 mm/cma.h |  11 +++++
 2 files changed, 174 insertions(+), 3 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 4a978e0..ad67ae5 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -153,6 +153,20 @@ static int __init cma_init_reserved_areas(void)
 }
 core_initcall(cma_init_reserved_areas);
 
+static unsigned long cma_free_get(struct cma *cma)
+{
+	unsigned long used;
+	unsigned long val;
+
+	spin_lock_irq(&cma->lock);
+	/* pages counter is smaller than sizeof(int) */
+	used = bitmap_weight(cma->bitmap, (int)cma_bitmap_maxno(cma));
+	val = cma->count - (u64)used << cma->order_per_bit;
+	spin_unlock_irq(&cma->lock);
+
+	return val;
+}
+
 void __init cma_reserve_pages_on_error(struct cma *cma)
 {
 	cma->reserve_pages_on_error = true;
@@ -411,6 +425,46 @@ static void cma_debug_show_areas(struct cma *cma)
 static inline void cma_debug_show_areas(struct cma *cma) { }
 #endif
 
+static int cma_defer_area_fetch(struct cma *cma, unsigned long pfn,
+			unsigned long count)
+{
+	struct cma_defer_free_area *area;
+	unsigned long new_pfn;
+	int ret = -1;
+
+	if (!atomic64_read(&cma->defer_count))
+		return ret;
+	if (count <= atomic64_read(&cma->defer_count)) {
+		spin_lock_irq(&cma->lock);
+		list_for_each_entry(area, &cma->defer_free, list) {
+			/*area found for given pfn and count*/
+			if (pfn >= area->pfn && count <= area->count) {
+				list_del(&area->list);
+				/*set bits for allocated pfn*/
+				bitmap_set(cma->bitmap, pfn - cma->base_pfn, count);
+				kfree(area);
+				atomic64_sub(count, &cma->defer_count);
+				/*release the rest pfn to cma*/
+				if (!list_empty(&cma->defer_free) && (pfn == area->pfn)) {
+					new_pfn = pfn + count;
+					cma_release(cma, pfn_to_page(new_pfn), area->count - count);
+				}
+				ret = 0;
+				spin_unlock_irq(&cma->lock);
+				return ret;
+			}
+		}
+	}
+	/*no area found, release all to buddy*/
+	list_for_each_entry(area, &cma->defer_free, list) {
+		list_del(&area->list);
+		free_contig_range(area->pfn, area->count);
+		cma_clear_bitmap(cma, area->pfn, area->count);
+		kfree(area);
+	}
+	spin_unlock_irq(&cma->lock);
+	return ret;
+}
 /**
  * cma_alloc() - allocate pages from contiguous area
  * @cma: Contiguous memory region for which the allocation is performed.
@@ -469,9 +523,11 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		spin_unlock_irq(&cma->lock);
 
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
+
 		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
-				GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+		/*search for defer area first*/
+		ret = cma_defer_area_fetch(cma, pfn, count) ? alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
+				GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0)) : 0;
 		mutex_unlock(&cma_mutex);
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
@@ -556,6 +612,8 @@ bool cma_release(struct cma *cma, const struct page *pages,
 		 unsigned long count)
 {
 	unsigned long pfn;
+	unsigned long flags;
+	struct cma_defer_free_area *defer_area;
 
 	if (!cma_pages_valid(cma, pages, count))
 		return false;
@@ -566,7 +624,21 @@ bool cma_release(struct cma *cma, const struct page *pages,
 
 	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
 
-	free_contig_range(pfn, count);
+	if (cma->batch) {
+		defer_area = kmalloc(sizeof(struct cma_defer_free_area), GFP_KERNEL);
+		if (defer_area) {
+			defer_area->pfn = pfn;
+			defer_area->count = count;
+			spin_lock_irqsave(&cma->lock, flags);
+			list_add(&defer_area->list, &cma->defer_free);
+			atomic64_add(count, &cma->defer_count);
+			spin_unlock_irqrestore(&cma->lock, flags);
+			cma_clear_bitmap(cma, pfn, count);
+			return true;
+		}
+	}
+	else
+		free_contig_range(pfn, count);
 	cma_clear_bitmap(cma, pfn, count);
 	trace_cma_release(cma->name, pfn, pages, count);
 
@@ -586,3 +658,91 @@ int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 
 	return 0;
 }
+
+static unsigned long cma_defer_free_count(struct shrinker *shrinker,
+				       struct shrink_control *sc)
+{
+	struct cma *cma = container_of(shrinker, struct cma, shrinker);
+	unsigned long val;
+
+	val = atomic64_read(&cma->defer_count);
+	return val;
+}
+
+static unsigned long cma_defer_free_scan(struct shrinker *shrinker,
+				      struct shrink_control *sc)
+{
+	struct cma *cma = container_of(shrinker, struct cma, shrinker);
+	unsigned long to_scan;
+	struct cma_defer_free_area *area;
+	unsigned long new_pfn;
+	unsigned long defer_count;
+
+	if (sc->nr_to_scan < cma->batch)
+		return 0;
+
+	to_scan = cma->batch - sc->nr_to_scan;
+	defer_count = atomic64_read(&cma->defer_count);
+	spin_lock_irq(&cma->lock);
+
+	/*free all node*/
+	if (to_scan >= defer_count) {
+		list_for_each_entry(area, &cma->defer_free, list) {
+			list_del(&area->list);
+			free_contig_range(area->pfn, area->count);
+			cma_clear_bitmap(cma, area->pfn, area->count);
+			kfree(area);
+		}
+		atomic64_set(&cma->defer_count, 0);
+		return defer_count;
+	}
+
+	list_for_each_entry(area, &cma->defer_free, list) {
+		if (to_scan <= area->count) {
+			list_del(&area->list);
+			free_contig_range(area->pfn, area->count);
+			cma_clear_bitmap(cma, area->pfn, area->count);
+			kfree(area);
+			atomic64_sub(to_scan, &cma->defer_count);
+			/*release the rest pfn to cma*/
+			if (!list_empty(&cma->defer_free)) {
+				new_pfn = area->pfn + to_scan;
+				cma_release(cma, pfn_to_page(new_pfn), area->count - to_scan);
+			}
+			break;
+		}
+		else {
+			list_del(&area->list);
+			free_contig_range(area->pfn, area->count);
+			cma_clear_bitmap(cma, area->pfn, area->count);
+			kfree(area);
+			to_scan = to_scan - atomic64_read(&cma->defer_count);
+			/*release the rest pfn to cma*/
+			if (!list_empty(&cma->defer_free)) {
+				new_pfn = area->pfn + to_scan;
+				cma_release(cma, pfn_to_page(new_pfn), area->count - to_scan);
+			}
+			continue;
+		}
+	}
+	spin_unlock_irq(&cma->lock);
+	return 0;
+}
+
+static struct shrinker cma_shrinker = {
+	.count_objects = cma_defer_free_count,
+	.scan_objects = cma_defer_free_scan,
+	.seeks = 0,
+};
+static int __init cma_init(void)
+{
+	int ret = -1;
+	ret = prealloc_shrinker(&cma_shrinker, "cma-shadow");
+	if (ret)
+		goto err;
+	register_shrinker_prepared(&cma_shrinker);
+	ret = 0;
+err:
+	return ret;
+}
+module_init(cma_init);
diff --git a/mm/cma.h b/mm/cma.h
index 88a0595..e1e3e2f 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -4,6 +4,7 @@
 
 #include <linux/debugfs.h>
 #include <linux/kobject.h>
+#include <linux/shrinker.h>
 
 struct cma_kobject {
 	struct kobject kobj;
@@ -31,6 +32,16 @@ struct cma {
 	struct cma_kobject *cma_kobj;
 #endif
 	bool reserve_pages_on_error;
+	struct list_head defer_free;
+	atomic64_t defer_count;
+	unsigned long batch;
+	struct shrinker shrinker;
+};
+
+struct cma_defer_free_area {
+	unsigned long pfn;
+	unsigned long count;
+	struct list_head list;
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];