From patchwork Mon Feb 10 01:56:06 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ge Yang
X-Patchwork-Id: 13967179
From: yangge1116@126.com
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, 21cnbao@gmail.com,
    david@redhat.com, baolin.wang@linux.alibaba.com, aisheng.dong@nxp.com,
    liuzixing@hygon.cn, yangge
Subject: [PATCH V2] mm/cma: using per-CMA locks to improve concurrent allocation performance
Date: Mon, 10 Feb 2025 09:56:06 +0800
Message-Id: <1739152566-744-1-git-send-email-yangge1116@126.com>
X-Mailer: git-send-email 2.7.4
From: yangge

For different CMAs, concurrent allocation of CMA memory ideally should not
require synchronization using locks.
Currently, a global cma_mutex lock is employed to synchronize all CMA
allocations, which can impact the performance of concurrent allocations
across different CMAs.

To test the performance impact, follow these steps:
1. Boot the kernel with the command line argument hugetlb_cma=30G to
   allocate a 30GB CMA area specifically for huge page allocations.
   (Note: on my machine, which has 3 nodes, each node is initialized
   with 10G of CMA.)
2. Use the dd command with parameters if=/dev/zero of=/dev/shm/file
   bs=1G count=30 to fully utilize the CMA area by writing zeroes to a
   file in /dev/shm.
3. Open three terminals and execute the following commands
   simultaneously. (Note: each of these commands attempts to allocate
   10GB [2621440 * 4KB pages] of CMA memory.)
   On Terminal 1: time echo 2621440 > /sys/kernel/debug/cma/hugetlb1/alloc
   On Terminal 2: time echo 2621440 > /sys/kernel/debug/cma/hugetlb2/alloc
   On Terminal 3: time echo 2621440 > /sys/kernel/debug/cma/hugetlb3/alloc

We attempt to allocate pages through the CMA debug interface and use the
time command to measure the duration of each allocation.

Performance comparison:
             Without this patch    With this patch
Terminal1           ~7s                 ~7s
Terminal2          ~14s                 ~8s
Terminal3          ~21s                 ~7s

To solve the problem above, we can use per-CMA locks to improve
concurrent allocation performance. This allows each CMA to be managed
independently, removing the need for a global lock and thus improving
scalability and performance.

Signed-off-by: yangge
Reviewed-by: Barry Song
Acked-by: David Hildenbrand
Reviewed-by: Oscar Salvador
---
V2:
- update code and message as suggested by Barry.
 mm/cma.c | 7 ++++---
 mm/cma.h | 1 +
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 34a4df2..a0d4d2f 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -34,7 +34,6 @@
 
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned int cma_area_count;
-static DEFINE_MUTEX(cma_mutex);
 
 static int __init __cma_declare_contiguous_nid(phys_addr_t base,
 			phys_addr_t size, phys_addr_t limit,
@@ -175,6 +174,8 @@ static void __init cma_activate_area(struct cma *cma)
 
 	spin_lock_init(&cma->lock);
 
+	mutex_init(&cma->alloc_mutex);
+
 #ifdef CONFIG_CMA_DEBUGFS
 	INIT_HLIST_HEAD(&cma->mem_head);
 	spin_lock_init(&cma->mem_head_lock);
@@ -813,9 +814,9 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
 		spin_unlock_irq(&cma->lock);
 
 		pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
-		mutex_lock(&cma_mutex);
+		mutex_lock(&cma->alloc_mutex);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, gfp);
-		mutex_unlock(&cma_mutex);
+		mutex_unlock(&cma->alloc_mutex);
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
 			break;
diff --git a/mm/cma.h b/mm/cma.h
index df7fc62..41a3ab0 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -39,6 +39,7 @@ struct cma {
 	unsigned long available_count;
 	unsigned int order_per_bit; /* Order of pages represented by one bit */
 	spinlock_t lock;
+	struct mutex alloc_mutex;
 #ifdef CONFIG_CMA_DEBUGFS
 	struct hlist_head mem_head;
 	spinlock_t mem_head_lock;