From patchwork Tue Feb 14 19:02:17 2023
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 13140740
From: Yang Shi <shy828301@gmail.com>
To: mgorman@techsingularity.net, agk@redhat.com, snitzer@kernel.org,
 dm-devel@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 1/5] mm: page_alloc: add API for bulk allocator with callback
Date: Tue, 14 Feb 2023 11:02:17 -0800
Message-Id: <20230214190221.1156876-2-shy828301@gmail.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230214190221.1156876-1-shy828301@gmail.com>
References: <20230214190221.1156876-1-shy828301@gmail.com>

Currently the bulk allocator supports passing pages back via a list or an
array, but neither is suitable for some use cases. For example, dm-crypt
does not need a list, but an array may be too big to fit on the stack. So
add a new bulk allocator API which takes a callback function that deals
with the allocated pages.

The API defined in this patch will be used by the following patches.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/gfp.h | 21 +++++++++++++++++----
 mm/mempolicy.c      | 12 +++++++-----
 mm/page_alloc.c     | 21 +++++++++++++++------
 3 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 65a78773dcca..265c19b4822f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -182,7 +182,9 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
 				struct list_head *page_list,
-				struct page **page_array);
+				struct page **page_array,
+				void (*cb)(struct page *, void *),
+				void *data);
 
 unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 				unsigned long nr_pages,
@@ -192,13 +194,15 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 static inline unsigned long
 alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
 {
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL,
+				  NULL, NULL);
 }
 
 static inline unsigned long
 alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
 {
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array,
+				  NULL, NULL);
 }
 
 static inline unsigned long
@@ -207,7 +211,16 @@ alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct p
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
+	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array,
+				  NULL, NULL);
+}
+
+static inline unsigned long
+alloc_pages_bulk_cb(gfp_t gfp, unsigned long nr_pages,
+		    void (*cb)(struct page *page, void *data), void *data)
+{
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, NULL,
+				  cb, data);
 }
 
 static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0919c7a719d4..00b2d5341790 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2318,12 +2318,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 			nr_allocated = __alloc_pages_bulk(gfp,
 					interleave_nodes(pol), NULL,
 					nr_pages_per_node + 1, NULL,
-					page_array);
+					page_array, NULL, NULL);
 			delta--;
 		} else {
 			nr_allocated = __alloc_pages_bulk(gfp,
 					interleave_nodes(pol), NULL,
-					nr_pages_per_node, NULL, page_array);
+					nr_pages_per_node, NULL, page_array,
+					NULL, NULL);
 		}
 
 		page_array += nr_allocated;
@@ -2344,12 +2345,13 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
 	nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes,
-					   nr_pages, NULL, page_array);
+					   nr_pages, NULL, page_array,
+					   NULL, NULL);
 
 	if (nr_allocated < nr_pages)
 		nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL,
 				nr_pages - nr_allocated, NULL,
-				page_array + nr_allocated);
+				page_array + nr_allocated, NULL, NULL);
 
 	return nr_allocated;
 }
@@ -2377,7 +2379,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 
 	return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
 				  policy_nodemask(gfp, pol), nr_pages, NULL,
-				  page_array);
+				  page_array, NULL, NULL);
 }
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1113483fa6c5..d23b8e49a8cd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5402,22 +5402,27 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
  * @nr_pages: The number of pages desired on the list or array
  * @page_list: Optional list to store the allocated pages
  * @page_array: Optional array to store the pages
+ * @cb: Optional callback to handle the page
+ * @data: The parameter passed in by the callback
  *
  * This is a batched version of the page allocator that attempts to
  * allocate nr_pages quickly. Pages are added to page_list if page_list
- * is not NULL, otherwise it is assumed that the page_array is valid.
+ * is not NULL, or it is assumed if the page_array is valid, or it is
+ * passed to a callback if cb is valid.
  *
- * For lists, nr_pages is the number of pages that should be allocated.
+ * For lists and cb, nr_pages is the number of pages that should be allocated.
  *
  * For arrays, only NULL elements are populated with pages and nr_pages
  * is the maximum number of pages that will be stored in the array.
  *
- * Returns the number of pages on the list or array.
+ * Returns the number of pages on the list or array or consumed by cb.
  */
 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
 			struct list_head *page_list,
-			struct page **page_array)
+			struct page **page_array,
+			void (*cb)(struct page *, void *),
+			void *data)
 {
 	struct page *page;
 	unsigned long __maybe_unused UP_flags;
@@ -5532,8 +5537,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		prep_new_page(page, 0, gfp, 0);
 		if (page_list)
 			list_add(&page->lru, page_list);
-		else
+		else if (page_array)
 			page_array[nr_populated] = page;
+		else
+			cb(page, data);
 		nr_populated++;
 	}
 
@@ -5554,8 +5561,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	if (page) {
 		if (page_list)
 			list_add(&page->lru, page_list);
-		else
+		else if (page_array)
 			page_array[nr_populated] = page;
+		else
+			cb(page, data);
 		nr_populated++;
 	}
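
For illustration only, not part of the patch: a minimal sketch of how a caller
could use the new alloc_pages_bulk_cb() helper declared in the gfp.h hunk above.
The bulk_fill_ctx structure and the bulk_fill_cb()/bulk_fill() functions are
hypothetical names invented for this example; the only interface assumed is
alloc_pages_bulk_cb() with the void (*cb)(struct page *, void *) callback
signature added by this patch. Judging by the page_alloc.c hunks, the callback
is invoked once per page from inside the allocator's inner loop, so it should
stay cheap and should not sleep.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical per-call context handed to the callback via @data. */
struct bulk_fill_ctx {
	struct page **pages;	/* destination buffer, e.g. kvmalloc'ed */
	unsigned int next;	/* next free slot in @pages */
};

/* Matches the void (*cb)(struct page *, void *) parameter of the new API. */
static void bulk_fill_cb(struct page *page, void *data)
{
	struct bulk_fill_ctx *ctx = data;

	ctx->pages[ctx->next++] = page;
}

/* Allocate nr_pages, handing each page to bulk_fill_cb() as it is produced. */
static int bulk_fill(struct bulk_fill_ctx *ctx, unsigned long nr_pages)
{
	unsigned long allocated;

	allocated = alloc_pages_bulk_cb(GFP_KERNEL, nr_pages,
					bulk_fill_cb, ctx);

	/* The bulk allocator may deliver fewer pages than requested. */
	return allocated == nr_pages ? 0 : -ENOMEM;
}

This avoids both a struct list_head per page and a large on-stack page array,
which is the motivation the commit message gives for dm-crypt; per the commit
message, the following patches in the series apply the callback path there.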