From patchwork Tue Apr 11 16:08:47 2023
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13207809
From: David Howells <dhowells@redhat.com>
To: netdev@vger.kernel.org
Cc: David Howells, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Willem de Bruijn, David Ahern, Matthew Wilcox, Al Viro,
    Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner,
    Chuck Lever III, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Jeroen de Borst,
    Catherine Sullivan, Shailend Chand, Felix Fietkau, John Crispin,
    Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
    AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig,
    Sagi Grimberg, Chaitanya Kulkarni, Andrew Morton,
    linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org,
    linux-nvme@lists.infradead.org
Subject: [PATCH net-next v6 03/18] mm: Make the page_frag_cache allocator use multipage folios
Date: Tue, 11 Apr 2023 17:08:47 +0100
Message-Id: <20230411160902.4134381-4-dhowells@redhat.com>
In-Reply-To: <20230411160902.4134381-1-dhowells@redhat.com>
References: <20230411160902.4134381-1-dhowells@redhat.com>
MIME-Version: 1.0
Change the page_frag_cache allocator to use multipage folios rather than
groups of pages.  This reduces page_frag_free to just a folio_put() or
put_page().

Signed-off-by: David Howells <dhowells@redhat.com>
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Jeroen de Borst
cc: Catherine Sullivan
cc: Shailend Chand
cc: Felix Fietkau
cc: John Crispin
cc: Sean Wang
cc: Mark Lee
cc: Lorenzo Bianconi
cc: Matthias Brugger
cc: AngeloGioacchino Del Regno
cc: Keith Busch
cc: Jens Axboe
cc: Christoph Hellwig
cc: Sagi Grimberg
cc: Chaitanya Kulkarni
cc: Andrew Morton
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
cc: linux-arm-kernel@lists.infradead.org
cc: linux-mediatek@lists.infradead.org
cc: linux-nvme@lists.infradead.org
cc: linux-mm@kvack.org
---

Notes:
    ver #6)
     - Removed a couple of leftover page pointer declarations.
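
For illustration only (not part of the patch): a minimal sketch of the
caller-side cleanup this change enables.  The function names below are
hypothetical; the "old" body mirrors the open-coded drain deleted from the
drivers in this patch (pre-patch struct layout with nc->va), and the "new"
body uses the page_frag_cache_clear() helper that the patch adds.

	#include <linux/gfp.h>		/* __page_frag_cache_drain(), page_frag_cache_clear() */
	#include <linux/mm.h>		/* virt_to_page() */
	#include <linux/mm_types.h>	/* struct page_frag_cache */

	/* Pre-patch: every driver pokes at the cache internals to drain it. */
	static void example_drain_old(struct page_frag_cache *nc)
	{
		if (nc->va) {
			__page_frag_cache_drain(virt_to_page(nc->va),
						nc->pagecnt_bias);
			nc->va = NULL;
		}
	}

	/* Post-patch: the folio bookkeeping stays inside mm/page_frag_alloc.c. */
	static void example_drain_new(struct page_frag_cache *nc)
	{
		page_frag_cache_clear(nc);
	}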
 drivers/net/ethernet/google/gve/gve_main.c |  11 +--
 drivers/net/ethernet/mediatek/mtk_wed_wo.c |  17 +---
 drivers/nvme/host/tcp.c                    |   8 +-
 drivers/nvme/target/tcp.c                  |   5 +-
 include/linux/gfp.h                        |   2 +
 include/linux/mm_types.h                   |  13 +--
 mm/page_frag_alloc.c                       | 102 +++++++++++----------
 7 files changed, 67 insertions(+), 91 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 57ce74315eba..b2fc1a3e6340 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1263,17 +1263,10 @@ static void gve_unreg_xdp_info(struct gve_priv *priv)
 
 static void gve_drain_page_cache(struct gve_priv *priv)
 {
-	struct page_frag_cache *nc;
 	int i;
 
-	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
-		nc = &priv->rx[i].page_cache;
-		if (nc->va) {
-			__page_frag_cache_drain(virt_to_page(nc->va),
-						nc->pagecnt_bias);
-			nc->va = NULL;
-		}
-	}
+	for (i = 0; i < priv->rx_cfg.num_queues; i++)
+		page_frag_cache_clear(&priv->rx[i].page_cache);
 }
 
 static int gve_open(struct net_device *dev)
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index 69fba29055e9..d90fea2c7d04 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -286,7 +286,6 @@ mtk_wed_wo_queue_free(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 static void
 mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
 	int i;
 
 	for (i = 0; i < q->n_desc; i++) {
@@ -298,19 +297,12 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		entry->buf = NULL;
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_clear(&q->cache);
 }
 
 static void
 mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
-
 	for (;;) {
 		void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
 
@@ -320,12 +312,7 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		skb_free_frag(buf);
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_clear(&q->cache);
 }
 
 static void
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 42c0598c31f2..05629e83b41d 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1312,7 +1312,6 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 
 static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 {
-	struct page *page;
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
 	unsigned int noreclaim_flag;
@@ -1323,12 +1322,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 	if (queue->hdr_digest || queue->data_digest)
 		nvme_tcp_free_crypto(queue);
 
-	if (queue->pf_cache.va) {
-		page = virt_to_head_page(queue->pf_cache.va);
-		__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
-		queue->pf_cache.va = NULL;
-	}
-
+	page_frag_cache_clear(&queue->pf_cache);
 	noreclaim_flag = memalloc_noreclaim_save();
 	sock_release(queue->sock);
 	memalloc_noreclaim_restore(noreclaim_flag);
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 66e8f9fd0ca7..ae871c31cf00 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1438,7 +1438,6 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
 
 static void nvmet_tcp_release_queue_work(struct work_struct *w)
 {
-	struct page *page;
 	struct nvmet_tcp_queue *queue =
 		container_of(w, struct nvmet_tcp_queue, release_work);
 
@@ -1460,9 +1459,7 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
 	if (queue->hdr_digest || queue->data_digest)
 		nvmet_tcp_free_crypto(queue);
 	ida_free(&nvmet_tcp_queue_ida, queue->idx);
-
-	page = virt_to_head_page(queue->pf_cache.va);
-	__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
+	page_frag_cache_clear(&queue->pf_cache);
 	kfree(queue);
 }
 
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 65a78773dcca..9f77ba6af361 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -314,6 +314,8 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+void page_frag_cache_clear(struct page_frag_cache *nc);
+
 extern void page_frag_free(void *addr);
 
 #define __free_page(page) __free_pages((page), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0722859c3647..49a70b3f44a9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -420,18 +420,13 @@ static inline void *folio_get_private(struct folio *folio)
 }
 
 struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
+	struct folio	*folio;
+	unsigned int	offset;
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
 	 */
-	unsigned int		pagecnt_bias;
-	bool pfmemalloc;
+	unsigned int	pagecnt_bias;
+	bool		pfmemalloc;
 };
 
 typedef unsigned long vm_flags_t;
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index bee95824ef8f..ac4cf1eac8ea 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -16,88 +16,96 @@
 #include
 #include
 
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
+/*
+ * Allocate a new folio for the frag cache.
+ */
+static struct folio *page_frag_cache_refill(struct page_frag_cache *nc,
+					    gfp_t gfp_mask)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	gfp_t gfp = gfp_mask;
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+	gfp_mask |= __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+	folio = folio_alloc(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER);
 #endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
+	if (unlikely(!folio))
+		folio = folio_alloc(gfp, 0);
 
-	return page;
+	if (folio)
+		nc->folio = folio;
+	return folio;
 }
 
 void __page_frag_cache_drain(struct page *page, unsigned int count)
 {
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+	struct folio *folio = page_folio(page);
 
-	if (page_ref_sub_and_test(page, count - 1))
-		__free_pages(page, compound_order(page));
+	VM_BUG_ON_FOLIO(folio_ref_count(folio) == 0, folio);
+
+	folio_put_refs(folio, count);
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
+void page_frag_cache_clear(struct page_frag_cache *nc)
+{
+	struct folio *folio = nc->folio;
+
+	if (folio) {
+		VM_BUG_ON_FOLIO(folio_ref_count(folio) == 0, folio);
+		folio_put_refs(folio, nc->pagecnt_bias);
+		nc->folio = NULL;
+	}
+}
+EXPORT_SYMBOL(page_frag_cache_clear);
+
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
 			    unsigned int align_mask)
 {
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
+	struct folio *folio = nc->folio;
+	size_t offset;
 
-	if (unlikely(!nc->va)) {
+	if (unlikely(!folio)) {
 refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
+		folio = page_frag_cache_refill(nc, gfp_mask);
+		if (!folio)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+		folio_ref_add(folio, PAGE_FRAG_CACHE_MAX_SIZE);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pfmemalloc = folio_is_pfmemalloc(folio);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = folio_size(folio);
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (page_ref_count(page) != nc->pagecnt_bias)
+	offset = nc->offset;
+	if (unlikely(fragsz > offset)) {
+		/* Reuse the folio if everyone we gave it to has finished with
+		 * it.
+		 */
+		if (!folio_ref_sub_and_test(folio, nc->pagecnt_bias)) {
+			nc->folio = NULL;
 			goto refill;
+		}
+
 		if (unlikely(nc->pfmemalloc)) {
-			page_ref_sub(page, nc->pagecnt_bias - 1);
-			__free_pages(page, compound_order(page));
+			__folio_put(folio);
+			nc->folio = NULL;
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = folio_size(folio);
+		if (unlikely(fragsz > offset)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -107,15 +115,17 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			 * it could make memory pressure worse
 			 * so we simply return NULL here.
 			 */
+			nc->offset = offset;
 			return NULL;
 		}
 	}
 
 	nc->pagecnt_bias--;
+	offset -= fragsz;
 	offset &= align_mask;
 	nc->offset = offset;
 
-	return nc->va + offset;
+	return folio_address(folio) + offset;
 }
 EXPORT_SYMBOL(page_frag_alloc_align);
 
@@ -124,8 +134,6 @@ EXPORT_SYMBOL(page_frag_alloc_align);
  */
 void page_frag_free(void *addr)
 {
-	struct page *page = virt_to_head_page(addr);
-
-	__free_pages(page, compound_order(page));
+	folio_put(virt_to_folio(addr));
 }
 EXPORT_SYMBOL(page_frag_free);
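
For illustration only (not part of the patch): a minimal sketch of the
fragment lifetime that the final hunk simplifies.  The cache variable and the
256-byte size are hypothetical; page_frag_alloc() and page_frag_free() are
the existing interfaces, and with this change the free path amounts to a
reference drop on the folio backing the fragment.

	#include <linux/gfp.h>		/* page_frag_alloc(), page_frag_free() */
	#include <linux/mm_types.h>	/* struct page_frag_cache */

	static struct page_frag_cache example_cache;

	static void example_use_fragment(void)
	{
		/* Carve 256 bytes out of the cache's current folio; the cache
		 * holds a bias of references, so the fast path does not touch
		 * the folio refcount.
		 */
		void *frag = page_frag_alloc(&example_cache, 256, GFP_KERNEL);

		if (!frag)
			return;

		/* ... fill and hand off or consume the fragment ... */

		/* Release this fragment's reference; after this patch that is
		 * folio_put(virt_to_folio(frag)).
		 */
		page_frag_free(frag);
	}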