From patchwork Tue Jun 25 13:52:11 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13711222
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Andrew Morton,
Subject: [PATCH net-next v9 08/13] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Tue, 25 Jun 2024 21:52:11 +0800
Message-ID: <20240625135216.47007-9-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240625135216.47007-1-linyunsheng@huawei.com>
References: <20240625135216.47007-1-linyunsheng@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

There are more new APIs calling __page_frag_cache_refill() in this
patchset, which may prevent the compiler from inlining
__page_frag_cache_refill() into __page_frag_alloc_va_align(). Losing
that inlining seems to cause some noticeable performance degradation
on an arm64 system with 64K PAGE_SIZE after adding the new APIs that
call __page_frag_cache_refill(). There seems to be about a 24-byte
binary size increase for __page_frag_cache_refill() and
__page_frag_alloc_va_align() on an arm64 system with 64K PAGE_SIZE.
By doing the gdb disassembling, it seems we can have a more than
100-byte decrease in binary size by using __alloc_pages() to replace
alloc_pages_node(), as it avoids some unnecessary checking for nid
being NUMA_NO_NODE, especially when page_frag is still part of the
mm system.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4fd421d4f22c..58facd2b59f7 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -62,11 +62,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		if (unlikely(!page)) {
 			memset(nc, 0, sizeof(*nc));
 			return NULL;
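[Editor's illustration, not part of the patch] For readers unfamiliar with the
wrapper being replaced: the sketch below is a simplified paraphrase of the
alloc_pages_node() inline wrapper in include/linux/gfp.h (helper names and
debug checks vary by kernel version). It only illustrates the NUMA_NO_NODE
check that the commit message says becomes unnecessary.

	/* Simplified paraphrase of alloc_pages_node(); details vary by
	 * kernel version, shown only to illustrate the extra nid check.
	 */
	static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
						    unsigned int order)
	{
		/* Callers passing NUMA_NO_NODE pay for this branch on
		 * every call...
		 */
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();

		/* ...before reaching the same core allocator entry point. */
		return __alloc_pages(gfp_mask, order, nid, NULL);
	}

Since the page_frag allocator wants the local node anyway, calling
__alloc_pages(gfp_mask, order, numa_mem_id(), NULL) directly skips that
branch and the wrapper indirection, which appears to be where the binary
size saving comes from.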