From patchwork Tue Feb 14 19:02:17 2023
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 13140740
From: Yang Shi
To: mgorman@techsingularity.net, agk@redhat.com, snitzer@kernel.org,
	dm-devel@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [v2 PATCH 1/5] mm: page_alloc: add API for bulk allocator with callback
Date: Tue, 14 Feb 2023 11:02:17 -0800
Message-Id: <20230214190221.1156876-2-shy828301@gmail.com>
In-Reply-To: <20230214190221.1156876-1-shy828301@gmail.com>
References: <20230214190221.1156876-1-shy828301@gmail.com>
Currently the bulk allocator supports passing pages via a list or an
array, but neither is suitable for some use cases.  For example,
dm-crypt doesn't need a list, but an array may be too big to fit on the
stack.  So add a new bulk allocator API which passes in a callback
function that deals with the allocated pages.
The API defined in this patch will be used by the following patches.

Signed-off-by: Yang Shi
---
 include/linux/gfp.h | 21 +++++++++++++++++----
 mm/mempolicy.c      | 12 +++++++-----
 mm/page_alloc.c     | 21 +++++++++++++++------
 3 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 65a78773dcca..265c19b4822f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -182,7 +182,9 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
 				struct list_head *page_list,
-				struct page **page_array);
+				struct page **page_array,
+				void (*cb)(struct page *, void *),
+				void *data);
 
 unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 				unsigned long nr_pages,
@@ -192,13 +194,15 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 static inline unsigned long
 alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
 {
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL,
+				  NULL, NULL);
 }
 
 static inline unsigned long
 alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
 {
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array,
+				  NULL, NULL);
 }
 
 static inline unsigned long
@@ -207,7 +211,16 @@ alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct p
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
+	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array,
+				  NULL, NULL);
+}
+
+static inline unsigned long
+alloc_pages_bulk_cb(gfp_t gfp, unsigned long nr_pages,
+		    void (*cb)(struct page *page, void *data), void *data)
+{
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, NULL,
+				  cb, data);
 }
 
 static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0919c7a719d4..00b2d5341790 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2318,12 +2318,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 			nr_allocated = __alloc_pages_bulk(gfp,
 					interleave_nodes(pol), NULL,
 					nr_pages_per_node + 1, NULL,
-					page_array);
+					page_array, NULL, NULL);
 			delta--;
 		} else {
 			nr_allocated = __alloc_pages_bulk(gfp,
 					interleave_nodes(pol), NULL,
-					nr_pages_per_node, NULL, page_array);
+					nr_pages_per_node, NULL, page_array,
+					NULL, NULL);
 		}
 
 		page_array += nr_allocated;
@@ -2344,12 +2345,13 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
 	nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes,
-					   nr_pages, NULL, page_array);
+					   nr_pages, NULL, page_array,
+					   NULL, NULL);
 
 	if (nr_allocated < nr_pages)
 		nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL,
 				nr_pages - nr_allocated, NULL,
-				page_array + nr_allocated);
+				page_array + nr_allocated, NULL, NULL);
 
 	return nr_allocated;
 }
@@ -2377,7 +2379,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
 
 	return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
 				  policy_nodemask(gfp, pol), nr_pages, NULL,
-				  page_array);
+				  page_array, NULL, NULL);
 }
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1113483fa6c5..d23b8e49a8cd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5402,22 +5402,27 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
  * @nr_pages: The number of pages desired on the list or array
  * @page_list: Optional list to store the allocated pages
  * @page_array: Optional array to store the pages
+ * @cb: Optional callback to handle the page
+ * @data: The parameter passed in by the callback
  *
  * This is a batched version of the page allocator that attempts to
  * allocate nr_pages quickly. Pages are added to page_list if page_list
- * is not NULL, otherwise it is assumed that the page_array is valid.
+ * is not NULL, or it is assumed if the page_array is valid, or it is
+ * passed to a callback if cb is valid.
  *
- * For lists, nr_pages is the number of pages that should be allocated.
+ * For lists and cb, nr_pages is the number of pages that should be allocated.
  *
  * For arrays, only NULL elements are populated with pages and nr_pages
  * is the maximum number of pages that will be stored in the array.
  *
- * Returns the number of pages on the list or array.
+ * Returns the number of pages on the list or array or consumed by cb.
  */
 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
 			struct list_head *page_list,
-			struct page **page_array)
+			struct page **page_array,
+			void (*cb)(struct page *, void *),
+			void *data)
 {
 	struct page *page;
 	unsigned long __maybe_unused UP_flags;
@@ -5532,8 +5537,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		prep_new_page(page, 0, gfp, 0);
 		if (page_list)
 			list_add(&page->lru, page_list);
-		else
+		else if (page_array)
 			page_array[nr_populated] = page;
+		else
+			cb(page, data);
 		nr_populated++;
 	}
 
@@ -5554,8 +5561,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	if (page) {
 		if (page_list)
 			list_add(&page->lru, page_list);
-		else
+		else if (page_array)
 			page_array[nr_populated] = page;
+		else
+			cb(page, data);
 		nr_populated++;
 	}

From patchwork Tue Feb 14 19:02:18 2023
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 13140741
From: Yang Shi
To: mgorman@techsingularity.net, agk@redhat.com, snitzer@kernel.org,
	dm-devel@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [v2 PATCH 2/5] mm: mempool: extract the common initialization and alloc code
Date: Tue, 14 Feb 2023 11:02:18 -0800
Message-Id: <20230214190221.1156876-3-shy828301@gmail.com>
In-Reply-To: <20230214190221.1156876-1-shy828301@gmail.com>
References: <20230214190221.1156876-1-shy828301@gmail.com>
Extract the common initialization code into __mempool_init() and
__mempool_create(), and extract the common alloc code into an internal
function.  This will make the following patch easier and avoid
duplicating code.
Signed-off-by: Yang Shi
---
 mm/mempool.c | 93 ++++++++++++++++++++++++++++++++--------------------
 1 file changed, 57 insertions(+), 36 deletions(-)

diff --git a/mm/mempool.c b/mm/mempool.c
index 734bcf5afbb7..975c9d1491b6 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -182,9 +182,10 @@ void mempool_destroy(mempool_t *pool)
 }
 EXPORT_SYMBOL(mempool_destroy);
 
-int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
-		      mempool_free_t *free_fn, void *pool_data,
-		      gfp_t gfp_mask, int node_id)
+static inline int __mempool_init(mempool_t *pool, int min_nr,
+				 mempool_alloc_t *alloc_fn,
+				 mempool_free_t *free_fn, void *pool_data,
+				 gfp_t gfp_mask, int node_id)
 {
 	spin_lock_init(&pool->lock);
 	pool->min_nr = min_nr;
@@ -214,6 +215,14 @@ int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 
 	return 0;
 }
+
+int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
+		      mempool_free_t *free_fn, void *pool_data,
+		      gfp_t gfp_mask, int node_id)
+{
+	return __mempool_init(pool, min_nr, alloc_fn, free_fn, pool_data,
+			      gfp_mask, node_id);
+}
 EXPORT_SYMBOL(mempool_init_node);
 
 /**
@@ -233,12 +242,30 @@ EXPORT_SYMBOL(mempool_init_node);
 int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		 mempool_free_t *free_fn, void *pool_data)
 {
-	return mempool_init_node(pool, min_nr, alloc_fn, free_fn,
-				 pool_data, GFP_KERNEL, NUMA_NO_NODE);
-
+	return __mempool_init(pool, min_nr, alloc_fn, free_fn,
+			      pool_data, GFP_KERNEL, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(mempool_init);
 
+static mempool_t *__mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
+				   mempool_free_t *free_fn, void *pool_data,
+				   gfp_t gfp_mask, int node_id)
+{
+	mempool_t *pool;
+
+	pool = kzalloc_node(sizeof(*pool), gfp_mask, node_id);
+	if (!pool)
+		return NULL;
+
+	if (__mempool_init(pool, min_nr, alloc_fn, free_fn, pool_data,
+			   gfp_mask, node_id)) {
+		kfree(pool);
+		return NULL;
+	}
+
+	return pool;
+}
+
 /**
  * mempool_create - create a memory pool
  * @min_nr:    the minimum number of elements guaranteed to be
@@ -258,8 +285,8 @@ EXPORT_SYMBOL(mempool_init);
 mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 			  mempool_free_t *free_fn, void *pool_data)
 {
-	return mempool_create_node(min_nr, alloc_fn, free_fn, pool_data,
-				   GFP_KERNEL, NUMA_NO_NODE);
+	return __mempool_create(min_nr, alloc_fn, free_fn, pool_data,
+				GFP_KERNEL, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(mempool_create);
 
@@ -267,19 +294,8 @@ mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
 			       mempool_free_t *free_fn, void *pool_data,
 			       gfp_t gfp_mask, int node_id)
 {
-	mempool_t *pool;
-
-	pool = kzalloc_node(sizeof(*pool), gfp_mask, node_id);
-	if (!pool)
-		return NULL;
-
-	if (mempool_init_node(pool, min_nr, alloc_fn, free_fn, pool_data,
-			      gfp_mask, node_id)) {
-		kfree(pool);
-		return NULL;
-	}
-
-	return pool;
+	return __mempool_create(min_nr, alloc_fn, free_fn, pool_data,
+				gfp_mask, node_id);
 }
 EXPORT_SYMBOL(mempool_create_node);
 
@@ -363,21 +379,7 @@ int mempool_resize(mempool_t *pool, int new_min_nr)
 }
 EXPORT_SYMBOL(mempool_resize);
 
-/**
- * mempool_alloc - allocate an element from a specific memory pool
- * @pool: pointer to the memory pool which was allocated via
- *	mempool_create().
- * @gfp_mask: the usual allocation bitmask.
- *
- * this function only sleeps if the alloc_fn() function sleeps or
- * returns NULL. Note that due to preallocation, this function
- * *never* fails when called from process contexts. (it might
- * fail if called from an IRQ context.)
- * Note: using __GFP_ZERO is not supported.
- *
- * Return: pointer to the allocated element or %NULL on error.
- */
-void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
+static void *__mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 {
 	void *element;
 	unsigned long flags;
@@ -444,6 +446,25 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 	finish_wait(&pool->wait, &wait);
 	goto repeat_alloc;
 }
+
+/**
+ * mempool_alloc - allocate an element from a specific memory pool
+ * @pool: pointer to the memory pool which was allocated via
+ *	mempool_create().
+ * @gfp_mask: the usual allocation bitmask.
+ *
+ * this function only sleeps if the alloc_fn() function sleeps or
+ * returns NULL. Note that due to preallocation, this function
+ * *never* fails when called from process contexts. (it might
+ * fail if called from an IRQ context.)
+ * Note: using __GFP_ZERO is not supported.
+ *
+ * Return: pointer to the allocated element or %NULL on error.
+ */
+void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
+{
+	return __mempool_alloc(pool, gfp_mask);
+}
 EXPORT_SYMBOL(mempool_alloc);
 
 /**

From patchwork Tue Feb 14 19:02:19 2023
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 13140742
From: Yang Shi
To: mgorman@techsingularity.net, agk@redhat.com, snitzer@kernel.org,
	dm-devel@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [v2 PATCH 3/5] mm: mempool: introduce page bulk allocator
Date: Tue, 14 Feb 2023 11:02:19 -0800
Message-Id: <20230214190221.1156876-4-shy828301@gmail.com>
In-Reply-To: <20230214190221.1156876-1-shy828301@gmail.com>
References: <20230214190221.1156876-1-shy828301@gmail.com>
Since v5.13 the page bulk allocator has been available to allocate order-0 pages in bulk. There are a few mempool allocator callers which do order-0 page allocation in a loop, for example dm-crypt, f2fs compress, etc. A mempool page bulk allocator seems useful.

So introduce the mempool page bulk allocator. It adds the below APIs:
  - mempool_init_pages_bulk()
  - mempool_create_pages_bulk()
    They initialize the mempool for the page bulk allocator. The pool is
    filled by alloc_page() in a loop.

  - mempool_alloc_pages_bulk_array()
  - mempool_alloc_pages_bulk_cb()
    They do bulk allocation from the mempool. Conceptually they:
    1. Call the bulk page allocator.
    2. If the allocation is fulfilled, return; otherwise try to allocate
       the remaining pages from the mempool.
    3. If that is fulfilled, return; otherwise retry from #1 with a
       sleepable gfp mask.
    4. If it still fails, sleep for a while to wait for the mempool to be
       refilled, then retry from #1.

The populated pages stay in the array until the caller consumes or frees them, or, with the callback variant, each page is consumed by the callback immediately. Since the mempool allocator is guaranteed to succeed in sleepable context, the two APIs simply return true for success or false for failure. It is the caller's responsibility to handle the failure case (partial allocation), just like with the page bulk allocator.
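The fallback described in the steps above is easier to see in code. Below is a minimal userspace C sketch of steps 1 and 2 only (bulk allocation topped up from a preallocated reserve); the names (toy_pool, bulk_alloc, toy_alloc_bulk) are illustrative, not the kernel API, and the sleepable retry and waitqueue steps are elided:

```c
#include <stdbool.h>

/* A tiny model of the mempool: a fixed reserve of preallocated elements. */
struct toy_pool {
        int reserve[8];         /* preallocated "pages" (the mempool) */
        int curr_nr;            /* how many are left in the reserve */
};

/* Stand-in for the page bulk allocator: it may satisfy fewer than nr
 * requests (modeled here by the artificial "limit"). */
static unsigned bulk_alloc(unsigned nr, unsigned limit, int *out)
{
        unsigned done = nr < limit ? nr : limit;

        for (unsigned i = 0; i < done; i++)
                out[i] = 1;     /* a "page" */
        return done;
}

/* Step 1: try the bulk allocator; step 2: top up from the reserve.
 * The real code would then retry with a sleepable gfp mask and finally
 * sleep on the pool's waitqueue; this sketch stops after one pass. */
static bool toy_alloc_bulk(struct toy_pool *pool, unsigned nr, int *out,
                           unsigned bulk_limit)
{
        unsigned got = bulk_alloc(nr, bulk_limit, out);

        while (got < nr && pool->curr_nr > 0)
                out[got++] = pool->reserve[--pool->curr_nr];

        return got == nr;       /* partial allocation reports failure */
}
```

As in the proposed APIs, the caller learns only success or failure and must clean up a partial allocation itself.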
The mempool is typically an object-agnostic allocator, but bulk allocation is only supported for pages, so the mempool bulk allocator is for page allocation only as well.

Signed-off-by: Yang Shi
---
 include/linux/mempool.h |  21 +++++
 mm/mempool.c            | 177 ++++++++++++++++++++++++++++++++++++----
 2 files changed, 181 insertions(+), 17 deletions(-)

diff --git a/include/linux/mempool.h b/include/linux/mempool.h
index 4aae6c06c5f2..1907395b2ef5 100644
--- a/include/linux/mempool.h
+++ b/include/linux/mempool.h
@@ -13,6 +13,12 @@ struct kmem_cache;
 typedef void * (mempool_alloc_t)(gfp_t gfp_mask, void *pool_data);
 typedef void (mempool_free_t)(void *element, void *pool_data);
 
+typedef unsigned int (mempool_alloc_pages_bulk_t)(gfp_t gfp_mask,
+				unsigned int nr, void *pool_data,
+				struct page **page_array,
+				void (*cb)(struct page *, void *),
+				void *data);
+
 typedef struct mempool_s {
 	spinlock_t lock;
 	int min_nr;		/* nr of elements at *elements */
@@ -22,6 +28,7 @@ typedef struct mempool_s {
 	void *pool_data;
 	mempool_alloc_t *alloc;
 	mempool_free_t *free;
+	mempool_alloc_pages_bulk_t *alloc_pages_bulk;
 	wait_queue_head_t wait;
 } mempool_t;
 
@@ -41,18 +48,32 @@ int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		      mempool_free_t *free_fn, void *pool_data,
 		      gfp_t gfp_mask, int node_id);
 int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
		 mempool_free_t *free_fn, void *pool_data);
+int mempool_init_pages_bulk(mempool_t *pool, int min_nr,
+			    mempool_alloc_pages_bulk_t *alloc_pages_bulk_fn,
+			    mempool_free_t *free_fn, void *pool_data);
 
 extern mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 			mempool_free_t *free_fn, void *pool_data);
 extern mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
 			mempool_free_t *free_fn, void *pool_data,
 			gfp_t gfp_mask, int nid);
+extern mempool_t *mempool_create_pages_bulk(int min_nr,
+			mempool_alloc_pages_bulk_t *alloc_pages_bulk_fn,
+			mempool_free_t *free_fn, void *pool_data);
 
 extern int mempool_resize(mempool_t *pool, int new_min_nr);
 extern void mempool_destroy(mempool_t *pool);
 extern void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) __malloc;
 extern void mempool_free(void *element, mempool_t *pool);
+extern bool mempool_alloc_pages_bulk_array(mempool_t *pool, gfp_t gfp_mask,
+					   unsigned int nr,
+					   struct page **page_array);
+extern bool mempool_alloc_pages_bulk_cb(mempool_t *pool, gfp_t gfp_mask,
+					unsigned int nr,
+					void (*cb)(struct page *, void *),
+					void *data);
 
 /*
  * A mempool_alloc_t and mempool_free_t that get the memory from
  * a slab cache that is passed in through pool_data.
diff --git a/mm/mempool.c b/mm/mempool.c
index 975c9d1491b6..dddcd847d765 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -183,6 +183,7 @@ void mempool_destroy(mempool_t *pool)
 EXPORT_SYMBOL(mempool_destroy);
 
 static inline int __mempool_init(mempool_t *pool, int min_nr,
+				 mempool_alloc_pages_bulk_t *alloc_pages_bulk_fn,
 				 mempool_alloc_t *alloc_fn,
 				 mempool_free_t *free_fn, void *pool_data,
 				 gfp_t gfp_mask, int node_id)
@@ -192,8 +193,11 @@ static inline int __mempool_init(mempool_t *pool, int min_nr,
 	pool->pool_data = pool_data;
 	pool->alloc = alloc_fn;
 	pool->free = free_fn;
+	pool->alloc_pages_bulk = alloc_pages_bulk_fn;
 	init_waitqueue_head(&pool->wait);
 
+	WARN_ON_ONCE(alloc_pages_bulk_fn && alloc_fn);
+
 	pool->elements = kmalloc_array_node(min_nr, sizeof(void *),
 					    gfp_mask, node_id);
 	if (!pool->elements)
@@ -205,7 +209,10 @@ static inline int __mempool_init(mempool_t *pool, int min_nr,
 	while (pool->curr_nr < pool->min_nr) {
 		void *element;
 
-		element = pool->alloc(gfp_mask, pool->pool_data);
+		if (pool->alloc_pages_bulk)
+			element = alloc_page(gfp_mask);
+		else
+			element = pool->alloc(gfp_mask, pool->pool_data);
 		if (unlikely(!element)) {
 			mempool_exit(pool);
 			return -ENOMEM;
@@ -220,7 +227,7 @@ int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		      mempool_free_t *free_fn, void *pool_data,
 		      gfp_t gfp_mask, int node_id)
 {
-	return __mempool_init(pool, min_nr, alloc_fn, free_fn, pool_data,
+	return __mempool_init(pool, min_nr, NULL, alloc_fn, free_fn, pool_data,
 			      gfp_mask, node_id);
 }
 EXPORT_SYMBOL(mempool_init_node);
@@ -242,14 +249,39 @@ EXPORT_SYMBOL(mempool_init_node);
 int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
 		 mempool_free_t *free_fn, void *pool_data)
 {
-	return __mempool_init(pool, min_nr, alloc_fn, free_fn,
+	return __mempool_init(pool, min_nr, NULL, alloc_fn, free_fn,
 			      pool_data, GFP_KERNEL, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(mempool_init);
 
-static mempool_t *__mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-				   mempool_free_t *free_fn, void *pool_data,
-				   gfp_t gfp_mask, int node_id)
+/**
+ * mempool_init_pages_bulk - initialize a pages pool for bulk allocator
+ * @pool: pointer to the memory pool that should be initialized
+ * @min_nr: the minimum number of elements guaranteed to be
+ *          allocated for this pool.
+ * @alloc_pages_bulk_fn: user-defined pages bulk allocation function.
+ * @free_fn: user-defined element-freeing function.
+ * @pool_data: optional private data available to the user-defined functions.
+ *
+ * Like mempool_create(), but initializes the pool in (i.e. embedded in another
+ * structure).
+ *
+ * Return: %0 on success, negative error code otherwise.
+ */
+int mempool_init_pages_bulk(mempool_t *pool, int min_nr,
+			    mempool_alloc_pages_bulk_t *alloc_pages_bulk_fn,
+			    mempool_free_t *free_fn, void *pool_data)
+{
+	return __mempool_init(pool, min_nr, alloc_pages_bulk_fn, NULL,
+			      free_fn, pool_data, GFP_KERNEL, NUMA_NO_NODE);
+}
+EXPORT_SYMBOL(mempool_init_pages_bulk);
+
+static mempool_t *__mempool_create(int min_nr,
+				   mempool_alloc_pages_bulk_t *alloc_pages_bulk_fn,
+				   mempool_alloc_t *alloc_fn,
+				   mempool_free_t *free_fn, void *pool_data,
+				   gfp_t gfp_mask, int node_id)
 {
 	mempool_t *pool;
 
@@ -257,8 +289,8 @@ static mempool_t *__mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 	if (!pool)
 		return NULL;
 
-	if (__mempool_init(pool, min_nr, alloc_fn, free_fn, pool_data,
-			   gfp_mask, node_id)) {
+	if (__mempool_init(pool, min_nr, alloc_pages_bulk_fn, alloc_fn,
+			   free_fn, pool_data, gfp_mask, node_id)) {
 		kfree(pool);
 		return NULL;
 	}
@@ -285,7 +317,7 @@ static mempool_t *__mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
 			  mempool_free_t *free_fn, void *pool_data)
 {
-	return __mempool_create(min_nr, alloc_fn, free_fn, pool_data,
+	return __mempool_create(min_nr, NULL, alloc_fn, free_fn, pool_data,
 				GFP_KERNEL, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(mempool_create);
@@ -294,11 +326,21 @@ mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
 			       mempool_free_t *free_fn, void *pool_data,
 			       gfp_t gfp_mask, int node_id)
 {
-	return __mempool_create(min_nr, alloc_fn, free_fn, pool_data,
+	return __mempool_create(min_nr, NULL, alloc_fn, free_fn, pool_data,
 				gfp_mask, node_id);
 }
 EXPORT_SYMBOL(mempool_create_node);
 
+mempool_t *mempool_create_pages_bulk(int min_nr,
+			mempool_alloc_pages_bulk_t *alloc_pages_bulk_fn,
+			mempool_free_t *free_fn, void *pool_data)
+{
+	return __mempool_create(min_nr, alloc_pages_bulk_fn, NULL,
+				free_fn, pool_data, GFP_KERNEL,
+				NUMA_NO_NODE);
+}
+EXPORT_SYMBOL(mempool_create_pages_bulk);
+
 /**
  * mempool_resize - resize an existing memory pool
  * @pool: pointer to the memory pool which was allocated via
@@ -379,12 +421,23 @@ int mempool_resize(mempool_t *pool, int new_min_nr)
 }
 EXPORT_SYMBOL(mempool_resize);
 
-static void *__mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
+#define MEMPOOL_BULK_SUCCESS_PTR ((void *)16)
+
+static void *__mempool_alloc(mempool_t *pool, gfp_t gfp_mask, unsigned int nr,
+			     struct page **page_array,
+			     void (*cb)(struct page *, void *),
+			     void *data)
 {
 	void *element;
 	unsigned long flags;
 	wait_queue_entry_t wait;
 	gfp_t gfp_temp;
+	int i;
+	unsigned int ret, nr_remaining;
+	struct page *page;
+	bool bulk_page_alloc = true;
+
+	ret = nr_remaining = 0;
 
 	VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
 	might_alloc(gfp_mask);
@@ -395,14 +448,27 @@ static void *__mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 	gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM|__GFP_IO);
 
+	if ((nr == 1) && (!page_array && !cb && !data))
+		bulk_page_alloc = false;
+
 repeat_alloc:
+	i = 0;
+
+	if (bulk_page_alloc) {
+		ret = pool->alloc_pages_bulk(gfp_temp, nr, pool->pool_data,
+					     page_array, cb, data);
+		if (ret == nr)
+			return MEMPOOL_BULK_SUCCESS_PTR;
+	} else {
+		element = pool->alloc(gfp_temp, pool->pool_data);
+		if (likely(element != NULL))
+			return element;
+	}
 
-	element = pool->alloc(gfp_temp, pool->pool_data);
-	if (likely(element != NULL))
-		return element;
+	nr_remaining = nr - ret;
 
 	spin_lock_irqsave(&pool->lock, flags);
-	if (likely(pool->curr_nr)) {
+	while (pool->curr_nr && (nr_remaining > 0)) {
 		element = remove_element(pool);
 		spin_unlock_irqrestore(&pool->lock, flags);
 		/* paired with rmb in mempool_free(), read comment there */
@@ -412,9 +478,34 @@ static void *__mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 		 * for debugging.
 		 */
 		kmemleak_update_trace(element);
-		return element;
+
+		if (!bulk_page_alloc)
+			return element;
+
+		page = (struct page *)element;
+		if (page_array)
+			page_array[ret + i] = page;
+		else
+			cb(page, data);
+
+		i++;
+		nr_remaining--;
+
+		spin_lock_irqsave(&pool->lock, flags);
+	}
+
+	if (bulk_page_alloc && !nr_remaining) {
+		spin_unlock_irqrestore(&pool->lock, flags);
+		return MEMPOOL_BULK_SUCCESS_PTR;
 	}
 
+	/*
+	 * The bulk allocator counts in the populated pages for the array
+	 * variant, but not for the callback variant.
+	 */
+	if (bulk_page_alloc && !page_array)
+		nr = nr_remaining;
+
 	/*
 	 * We use gfp mask w/o direct reclaim or IO for the first round.  If
 	 * alloc failed with that and @pool was empty, retry immediately.
@@ -463,10 +554,62 @@ static void *__mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
  */
 void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 {
-	return __mempool_alloc(pool, gfp_mask);
+	return __mempool_alloc(pool, gfp_mask, 1, NULL, NULL, NULL);
 }
 EXPORT_SYMBOL(mempool_alloc);
 
+/**
+ * mempool_alloc_pages_bulk - allocate a bulk of pages from a specific
+ *                            memory pool
+ * @pool:       pointer to the memory pool which was allocated via
+ *              mempool_create().
+ * @gfp_mask:   the usual allocation bitmask.
+ * @nr:         the number of requested pages.
+ * @page_array: the array the pages will be added to.
+ * @cb:         the callback function that will handle the page.
+ * @data:       the parameter used by the callback.
+ *
+ * This function only sleeps if the alloc_pages_bulk_fn() function sleeps
+ * or the allocation can not be satisfied even though the mempool is depleted.
+ * Note that due to preallocation, this function *never* fails when called
+ * from process contexts. (it might fail if called from an IRQ context.)
+ * Note: using __GFP_ZERO is not supported. And the caller should not pass
+ * in both a valid page_array and callback.
+ *
+ * Return: true when nr pages are allocated or false if not. It is the
+ * caller's responsibility to free the partially allocated pages.
+ */
+static bool mempool_alloc_pages_bulk(mempool_t *pool, gfp_t gfp_mask,
+				     unsigned int nr,
+				     struct page **page_array,
+				     void (*cb)(struct page *, void *),
+				     void *data)
+{
+	if (!__mempool_alloc(pool, gfp_mask, nr, page_array, cb, data))
+		return false;
+
+	return true;
+}
+
+bool mempool_alloc_pages_bulk_array(mempool_t *pool, gfp_t gfp_mask,
+				    unsigned int nr,
+				    struct page **page_array)
+{
+	return mempool_alloc_pages_bulk(pool, gfp_mask, nr, page_array,
+					NULL, NULL);
+}
+EXPORT_SYMBOL(mempool_alloc_pages_bulk_array);
+
+bool mempool_alloc_pages_bulk_cb(mempool_t *pool, gfp_t gfp_mask,
+				 unsigned int nr,
+				 void (*cb)(struct page *, void *),
+				 void *data)
+{
+	return mempool_alloc_pages_bulk(pool, gfp_mask, nr, NULL,
+					cb, data);
+}
+EXPORT_SYMBOL(mempool_alloc_pages_bulk_cb);
+
 /**
  * mempool_free - return an element to the pool.
  * @element: pool element pointer.

From patchwork Tue Feb 14 19:02:20 2023
From: Yang Shi
To: mgorman@techsingularity.net, agk@redhat.com, snitzer@kernel.org, dm-devel@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 4/5] md: dm-crypt: move crypt_free_buffer_pages ahead
Date: Tue, 14 Feb 2023 11:02:20 -0800
Message-Id: <20230214190221.1156876-5-shy828301@gmail.com>
In-Reply-To: <20230214190221.1156876-1-shy828301@gmail.com>
References: <20230214190221.1156876-1-shy828301@gmail.com>
MIME-Version: 1.0
By moving crypt_free_buffer_pages() before crypt_alloc_buffer(), the extra forward declaration is no longer needed.

Signed-off-by: Yang Shi
---
 drivers/md/dm-crypt.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 2653516bcdef..73069f200cc5 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1639,7 +1639,17 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
 	return 0;
 }
 
-static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone);
+
+static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone)
+{
+	struct bio_vec *bv;
+	struct bvec_iter_all iter_all;
+
+	bio_for_each_segment_all(bv, clone, iter_all) {
+		BUG_ON(!bv->bv_page);
+		mempool_free(bv->bv_page, &cc->page_pool);
+	}
+}
 
 /*
  * Generate a new unfragmented bio with the given size
@@ -1707,17 +1717,6 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned size)
 	return clone;
 }
 
-static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone)
-{
-	struct bio_vec *bv;
-	struct bvec_iter_all iter_all;
-
-	bio_for_each_segment_all(bv, clone, iter_all) {
-		BUG_ON(!bv->bv_page);
-		mempool_free(bv->bv_page, &cc->page_pool);
-	}
-}
-
 static void crypt_io_init(struct
dm_crypt_io *io, struct crypt_config *cc, struct bio *bio, sector_t sector) {

From patchwork Tue Feb 14 19:02:21 2023
From: Yang Shi
To: mgorman@techsingularity.net, agk@redhat.com, snitzer@kernel.org, dm-devel@redhat.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 5/5] md: dm-crypt: use mempool page bulk allocator
Date: Tue, 14 Feb 2023 11:02:21 -0800
Message-Id: <20230214190221.1156876-6-shy828301@gmail.com>
In-Reply-To: <20230214190221.1156876-1-shy828301@gmail.com>
References: <20230214190221.1156876-1-shy828301@gmail.com>
MIME-Version: 1.0
When using dm-crypt for full disk encryption, dm-crypt allocates an out bio with the same number of pages as the in bio for encryption. It currently allocates those pages one at a time in a loop, which is not efficient. So use the mempool page bulk allocator instead of allocating one page at a time.
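dm-crypt uses the callback variant of the bulk allocator, so each page is appended to the out bio as soon as it is allocated instead of being collected into an array first. The shape of that flow can be sketched in plain userspace C; it mirrors the crypt_bulk_cb_data/crypt_bulk_alloc_cb pair in the patch below, but all names and the fixed 4096-byte page size are illustrative, not the kernel code:

```c
#include <stdbool.h>

#define TOY_PAGE_SIZE 4096u

/* Stand-in for the bio plus the remaining payload size that the real
 * callback data (crypt_bulk_cb_data) tracks. */
struct out_buf {
        int nsegs;              /* segments appended so far */
        unsigned remaining;     /* bytes of payload still to cover */
};

/* The per-page callback: append the page and shrink the remaining size,
 * clamping the final segment the way dm-crypt clamps to the bio size. */
static void append_page_cb(void *page, void *data)
{
        struct out_buf *out = data;
        unsigned len = out->remaining > TOY_PAGE_SIZE ? TOY_PAGE_SIZE
                                                      : out->remaining;

        (void)page;             /* a real callback would link the page in */
        out->nsegs++;
        out->remaining -= len;
}

/* Bulk "allocator" driving the callback once per allocated page. */
static bool alloc_bulk_cb(unsigned nr, void (*cb)(void *, void *), void *data)
{
        static char fake_page[TOY_PAGE_SIZE];   /* stand-in storage */

        for (unsigned i = 0; i < nr; i++)
                cb(fake_page, data);
        return true;
}
```

The benefit of the callback form is that no temporary page array sized to the whole I/O is needed; the consumer state travels through the opaque data pointer.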
The mempool page bulk allocator improves IOPS with 1M I/O by approximately 6%. The test was done on a machine with 80 vCPUs and 128GB of memory, with an encrypted ram device (minimizing the impact from the storage hardware so that the dm-crypt layer can be benchmarked more accurately).

Before the patch:

Jobs: 1 (f=1): [w(1)][100.0%][w=1301MiB/s][w=1301 IOPS][eta 00m:00s]
crypt: (groupid=0, jobs=1): err= 0: pid=48512: Wed Feb  1 18:11:30 2023
  write: IOPS=1300, BW=1301MiB/s (1364MB/s)(76.2GiB/60001msec); 0 zone resets
    slat (usec): min=724, max=867, avg=765.71, stdev=19.27
    clat (usec): min=4, max=196297, avg=195688.86, stdev=6450.50
     lat (usec): min=801, max=197064, avg=196454.90, stdev=6450.35
    clat percentiles (msec):
     |  1.00th=[  197],  5.00th=[  197], 10.00th=[  197], 20.00th=[  197],
     | 30.00th=[  197], 40.00th=[  197], 50.00th=[  197], 60.00th=[  197],
     | 70.00th=[  197], 80.00th=[  197], 90.00th=[  197], 95.00th=[  197],
     | 99.00th=[  197], 99.50th=[  197], 99.90th=[  197], 99.95th=[  197],
     | 99.99th=[  197]
   bw (  MiB/s): min=  800, max= 1308, per=99.69%, avg=1296.94, stdev=46.02, samples=119
   iops        : min=  800, max= 1308, avg=1296.94, stdev=46.02, samples=119
  lat (usec)   : 10=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.05%
  lat (msec)   : 100=0.08%, 250=99.83%
  cpu          : usr=3.88%, sys=96.02%, ctx=69, majf=1, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,78060,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=1301MiB/s (1364MB/s), 1301MiB/s-1301MiB/s (1364MB/s-1364MB/s), io=76.2GiB (81.9GB), run=60001-60001msec

After the patch:

Jobs: 1 (f=1): [w(1)][100.0%][w=1401MiB/s][w=1401 IOPS][eta 00m:00s]
crypt: (groupid=0, jobs=1): err= 0: pid=2171: Wed Feb  1 21:08:16 2023
  write: IOPS=1401, BW=1402MiB/s (1470MB/s)(82.1GiB/60001msec); 0 zone resets
    slat (usec): min=685, max=815, avg=710.77, stdev=13.24
    clat (usec): min=4, max=182206, avg=181658.31, stdev=5810.58
     lat (usec): min=709, max=182913, avg=182369.36, stdev=5810.67
    clat percentiles (msec):
     |  1.00th=[  182],  5.00th=[  182], 10.00th=[  182], 20.00th=[  182],
     | 30.00th=[  182], 40.00th=[  182], 50.00th=[  182], 60.00th=[  182],
     | 70.00th=[  182], 80.00th=[  182], 90.00th=[  182], 95.00th=[  182],
     | 99.00th=[  182], 99.50th=[  182], 99.90th=[  182], 99.95th=[  182],
     | 99.99th=[  182]
   bw (  MiB/s): min=  900, max= 1408, per=99.71%, avg=1397.60, stdev=46.04, samples=119
   iops        : min=  900, max= 1408, avg=1397.60, stdev=46.04, samples=119
  lat (usec)   : 10=0.01%, 750=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.05%
  lat (msec)   : 100=0.08%, 250=99.83%
  cpu          : usr=3.66%, sys=96.23%, ctx=76, majf=1, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,84098,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=1402MiB/s (1470MB/s), 1402MiB/s-1402MiB/s (1470MB/s-1470MB/s), io=82.1GiB (88.2GB), run=60001-60001msec

The function tracing also shows the time consumed by page allocations is reduced significantly. The test allocated a 1M (256 pages) bio in the same environment.

Before the patch: it took approximately 600us, excluding the bio_add_page() calls.

 2720.630754 |  56)  xfs_io-38859  |   2.571 us  |  mempool_alloc();
 2720.630757 |  56)  xfs_io-38859  |   0.937 us  |  bio_add_page();
 2720.630758 |  56)  xfs_io-38859  |   1.772 us  |  mempool_alloc();
 2720.630760 |  56)  xfs_io-38859  |   0.852 us  |  bio_add_page();
….
 2720.631559 |  56)  xfs_io-38859  |   2.058 us  |  mempool_alloc();
 2720.631561 |  56)  xfs_io-38859  |   0.717 us  |  bio_add_page();
 2720.631562 |  56)  xfs_io-38859  |   2.014 us  |  mempool_alloc();
 2720.631564 |  56)  xfs_io-38859  |   0.620 us  |  bio_add_page();

After the patch: it took approximately 30us.

 11564.266385 |  22)  xfs_io-136183  | + 30.551 us  |  __alloc_pages_bulk();

Page allocation overhead is around 6% (600us/9853us) of the dm-crypt layer time shown by the function trace, which matches the IOPS improvement shown by fio. The benchmark with 4K I/O doesn't show a measurable regression.

Signed-off-by: Yang Shi
---
 drivers/md/dm-crypt.c | 72 +++++++++++++++++++++++++++----------------
 1 file changed, 46 insertions(+), 26 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 73069f200cc5..30268ba07fd6 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1651,6 +1651,21 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone)
 	}
 }
 
+struct crypt_bulk_cb_data {
+	struct bio *bio;
+	unsigned int size;
+};
+
+static void crypt_bulk_alloc_cb(struct page *page, void *data)
+{
+	unsigned int len;
+	struct crypt_bulk_cb_data *b_data = (struct crypt_bulk_cb_data *)data;
+
+	len = (b_data->size > PAGE_SIZE) ? PAGE_SIZE : b_data->size;
+	bio_add_page(b_data->bio, page, len, 0);
+	b_data->size -= len;
+}
+
 /*
  * Generate a new unfragmented bio with the given size
  * This should never violate the device limitations (but only because
@@ -1674,8 +1689,7 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned size)
 	struct bio *clone;
 	unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	gfp_t gfp_mask = GFP_NOWAIT | __GFP_HIGHMEM;
-	unsigned i, len, remaining_size;
-	struct page *page;
+	struct crypt_bulk_cb_data data;
 
 retry:
 	if (unlikely(gfp_mask & __GFP_DIRECT_RECLAIM))
@@ -1686,22 +1700,17 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned size)
 	clone->bi_private = io;
 	clone->bi_end_io = crypt_endio;
 
-	remaining_size = size;
-
-	for (i = 0; i < nr_iovecs; i++) {
-		page = mempool_alloc(&cc->page_pool, gfp_mask);
-		if (!page) {
-			crypt_free_buffer_pages(cc, clone);
-			bio_put(clone);
-			gfp_mask |= __GFP_DIRECT_RECLAIM;
-			goto retry;
-		}
-
-		len = (remaining_size > PAGE_SIZE) ? PAGE_SIZE : remaining_size;
-
-		bio_add_page(clone, page, len, 0);
+	data.bio = clone;
+	data.size = size;
 
-		remaining_size -= len;
+	if (!mempool_alloc_pages_bulk_cb(&cc->page_pool, gfp_mask, nr_iovecs,
+					 crypt_bulk_alloc_cb, &data)) {
+		crypt_free_buffer_pages(cc, clone);
+		bio_put(clone);
+		data.bio = NULL;
+		data.size = 0;
+		gfp_mask |= __GFP_DIRECT_RECLAIM;
+		goto retry;
 	}
 
 	/* Allocate space for integrity tags */
@@ -2655,10 +2664,14 @@ static void crypt_calculate_pages_per_client(void)
 	dm_crypt_pages_per_client = pages;
 }
 
-static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
+static unsigned int crypt_alloc_pages_bulk(gfp_t gfp_mask, unsigned int nr,
+					   void *pool_data,
+					   struct page **page_array,
+					   void (*cb)(struct page *, void *),
+					   void *data)
 {
 	struct crypt_config *cc = pool_data;
-	struct page *page;
+	unsigned int ret;
 
 	/*
 	 * Note, percpu_counter_read_positive() may over (and under) estimate
@@ -2667,13 +2680,13 @@ static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
 	 */
 	if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >=
 	    dm_crypt_pages_per_client) && likely(gfp_mask & __GFP_NORETRY))
-		return NULL;
+		return 0;
 
-	page = alloc_page(gfp_mask);
-	if (likely(page != NULL))
-		percpu_counter_add(&cc->n_allocated_pages, 1);
+	ret = alloc_pages_bulk_cb(gfp_mask, nr, cb, data);
 
-	return page;
+	percpu_counter_add(&cc->n_allocated_pages, ret);
+
+	return ret;
 }
 
 static void crypt_page_free(void *page, void *pool_data)
@@ -2705,11 +2718,16 @@ static void crypt_dtr(struct dm_target *ti)
 
 	bioset_exit(&cc->bs);
 
+	/*
+	 * With mempool bulk allocator the pages in the pool are not
+	 * counted in n_allocated_pages.
+	 */
+	WARN_ON(percpu_counter_sum(&cc->n_allocated_pages) != 0);
+
 	mempool_exit(&cc->page_pool);
 	mempool_exit(&cc->req_pool);
 	mempool_exit(&cc->tag_pool);
 
-	WARN_ON(percpu_counter_sum(&cc->n_allocated_pages) != 0);
 	percpu_counter_destroy(&cc->n_allocated_pages);
 
 	if (cc->iv_gen_ops && cc->iv_gen_ops->dtr)
@@ -3251,7 +3269,9 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		ALIGN(sizeof(struct dm_crypt_io) + cc->dmreq_start + additional_req_size,
 		      ARCH_KMALLOC_MINALIGN);
 
-	ret = mempool_init(&cc->page_pool, BIO_MAX_VECS, crypt_page_alloc, crypt_page_free, cc);
+	ret = mempool_init_pages_bulk(&cc->page_pool, BIO_MAX_VECS,
+				      crypt_alloc_pages_bulk, crypt_page_free,
+				      cc);
 	if (ret) {
 		ti->error = "Cannot allocate page mempool";
 		goto bad;
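The fio jobfile used for the benchmark is not included in the message. A jobfile consistent with the reported output (1M writes, iodepth 256, single job, 60s time-based run) might look roughly like the sketch below; the device path is a placeholder, and ioengine/direct are assumptions, not taken from the message.

```ini
[crypt]
; placeholder path: the dm-crypt target on top of the ram device
filename=/dev/mapper/crypt
rw=write
bs=1M
iodepth=256
numjobs=1
time_based=1
runtime=60
; assumed: async engine with O_DIRECT to bypass the page cache
ioengine=libaio
direct=1
```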