From patchwork Thu Jan 18 22:19:40 2024
X-Patchwork-Submitter: Dave Chinner <david@fromorbit.com>
X-Patchwork-Id: 13523245
From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Cc: willy@infradead.org, linux-mm@kvack.org
Subject: [PATCH 2/3] xfs: use folios in the buffer cache
Date: Fri, 19 Jan 2024 09:19:40 +1100
Message-ID: <20240118222216.4131379-3-david@fromorbit.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240118222216.4131379-1-david@fromorbit.com>
References: <20240118222216.4131379-1-david@fromorbit.com>
MIME-Version: 1.0

From: Dave Chinner

Convert the use of struct pages to struct folio everywhere. This is
just direct API conversion, no actual logic or code changes should
result.

Note: this conversion currently assumes only single page folios are
allocated, and because some of the MM interfaces we use take pointers
to arrays of struct pages, the address of a single page folio and its
struct page are the same, e.g. alloc_pages_bulk_array(), vm_map_ram(),
etc.

Signed-off-by: Dave Chinner
---
 fs/xfs/xfs_buf.c      | 127 +++++++++++++++++++++---------------------
 fs/xfs/xfs_buf.h      |  14 ++---
 fs/xfs/xfs_buf_item.c |   2 +-
 fs/xfs/xfs_linux.h    |   8 +++
 4 files changed, 80 insertions(+), 71 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 08f2fbc04db5..15907e92d0d3 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -60,25 +60,25 @@ xfs_buf_submit(
 	return __xfs_buf_submit(bp, !(bp->b_flags & XBF_ASYNC));
 }
 
+/*
+ * Return true if the buffer is vmapped.
+ *
+ * b_addr is null if the buffer is not mapped, but the code is clever enough to
+ * know it doesn't have to map a single folio, so the check has to be both for
+ * b_addr and bp->b_folio_count > 1.
+ */
 static inline int
 xfs_buf_is_vmapped(
 	struct xfs_buf	*bp)
 {
-	/*
-	 * Return true if the buffer is vmapped.
-	 *
-	 * b_addr is null if the buffer is not mapped, but the code is clever
-	 * enough to know it doesn't have to map a single page, so the check has
-	 * to be both for b_addr and bp->b_page_count > 1.
-	 */
-	return bp->b_addr && bp->b_page_count > 1;
+	return bp->b_addr && bp->b_folio_count > 1;
 }
 
 static inline int
 xfs_buf_vmap_len(
 	struct xfs_buf	*bp)
 {
-	return (bp->b_page_count * PAGE_SIZE);
+	return (bp->b_folio_count * PAGE_SIZE);
 }
 
 /*
@@ -197,7 +197,7 @@ xfs_buf_get_maps(
 }
 
 /*
- * Frees b_pages if it was allocated.
+ * Frees b_maps if it was allocated.
 */
 static void
 xfs_buf_free_maps(
@@ -273,26 +273,26 @@ _xfs_buf_alloc(
 }
 
 static void
-xfs_buf_free_pages(
+xfs_buf_free_folios(
 	struct xfs_buf	*bp)
 {
 	uint		i;
 
-	ASSERT(bp->b_flags & _XBF_PAGES);
+	ASSERT(bp->b_flags & _XBF_FOLIOS);
 
 	if (xfs_buf_is_vmapped(bp))
-		vm_unmap_ram(bp->b_addr, bp->b_page_count);
+		vm_unmap_ram(bp->b_addr, bp->b_folio_count);
 
-	for (i = 0; i < bp->b_page_count; i++) {
-		if (bp->b_pages[i])
-			__free_page(bp->b_pages[i]);
+	for (i = 0; i < bp->b_folio_count; i++) {
+		if (bp->b_folios[i])
+			__folio_put(bp->b_folios[i]);
 	}
-	mm_account_reclaimed_pages(bp->b_page_count);
+	mm_account_reclaimed_pages(bp->b_folio_count);
 
-	if (bp->b_pages != bp->b_page_array)
-		kfree(bp->b_pages);
-	bp->b_pages = NULL;
-	bp->b_flags &= ~_XBF_PAGES;
+	if (bp->b_folios != bp->b_folio_array)
+		kfree(bp->b_folios);
+	bp->b_folios = NULL;
+	bp->b_flags &= ~_XBF_FOLIOS;
 }
 
 static void
@@ -313,8 +313,8 @@ xfs_buf_free(
 
 	ASSERT(list_empty(&bp->b_lru));
 
-	if (bp->b_flags & _XBF_PAGES)
-		xfs_buf_free_pages(bp);
+	if (bp->b_flags & _XBF_FOLIOS)
+		xfs_buf_free_folios(bp);
 	else if (bp->b_flags & _XBF_KMEM)
 		kfree(bp->b_addr);
 
@@ -345,15 +345,15 @@ xfs_buf_alloc_kmem(
 		return -ENOMEM;
 	}
 	bp->b_offset = offset_in_page(bp->b_addr);
-	bp->b_pages = bp->b_page_array;
-	bp->b_pages[0] = kmem_to_page(bp->b_addr);
-	bp->b_page_count = 1;
+	bp->b_folios = bp->b_folio_array;
+	bp->b_folios[0] = kmem_to_folio(bp->b_addr);
+	bp->b_folio_count = 1;
 	bp->b_flags |= _XBF_KMEM;
 	return 0;
 }
 
 static int
-xfs_buf_alloc_pages(
+xfs_buf_alloc_folios(
 	struct xfs_buf	*bp,
 	xfs_buf_flags_t	flags)
 {
@@ -364,16 +364,16 @@ xfs_buf_alloc_pages(
 		gfp_mask |= __GFP_NORETRY;
 
 	/* Make sure that we have a page list */
-	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
-	if (bp->b_page_count <= XB_PAGES) {
-		bp->b_pages = bp->b_page_array;
+	bp->b_folio_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
+	if (bp->b_folio_count <= XB_FOLIOS) {
+		bp->b_folios = bp->b_folio_array;
 	} else {
-		bp->b_pages = kzalloc(sizeof(struct page *) * bp->b_page_count,
+		bp->b_folios = kzalloc(sizeof(struct folio *) * bp->b_folio_count,
 					gfp_mask);
-		if (!bp->b_pages)
+		if (!bp->b_folios)
 			return -ENOMEM;
 	}
-	bp->b_flags |= _XBF_PAGES;
+	bp->b_flags |= _XBF_FOLIOS;
 
 	/* Assure zeroed buffer for non-read cases. */
 	if (!(flags & XBF_READ))
@@ -387,9 +387,9 @@ xfs_buf_alloc_pages(
 	for (;;) {
 		long	last = filled;
 
-		filled = alloc_pages_bulk_array(gfp_mask, bp->b_page_count,
-						bp->b_pages);
-		if (filled == bp->b_page_count) {
+		filled = alloc_pages_bulk_array(gfp_mask, bp->b_folio_count,
+						(struct page **)bp->b_folios);
+		if (filled == bp->b_folio_count) {
 			XFS_STATS_INC(bp->b_mount, xb_page_found);
 			break;
 		}
@@ -398,7 +398,7 @@ xfs_buf_alloc_pages(
 			continue;
 
 		if (flags & XBF_READ_AHEAD) {
-			xfs_buf_free_pages(bp);
+			xfs_buf_free_folios(bp);
 			return -ENOMEM;
 		}
 
@@ -412,14 +412,14 @@
  * Map buffer into kernel address-space if necessary.
  */
 STATIC int
-_xfs_buf_map_pages(
+_xfs_buf_map_folios(
 	struct xfs_buf	*bp,
 	xfs_buf_flags_t	flags)
 {
-	ASSERT(bp->b_flags & _XBF_PAGES);
-	if (bp->b_page_count == 1) {
+	ASSERT(bp->b_flags & _XBF_FOLIOS);
+	if (bp->b_folio_count == 1) {
 		/* A single page buffer is always mappable */
-		bp->b_addr = page_address(bp->b_pages[0]);
+		bp->b_addr = folio_address(bp->b_folios[0]);
 	} else if (flags & XBF_UNMAPPED) {
 		bp->b_addr = NULL;
 	} else {
@@ -443,8 +443,8 @@ _xfs_buf_map_pages(
 		 */
 		nofs_flag = memalloc_nofs_save();
 		do {
-			bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
-						-1);
+			bp->b_addr = vm_map_ram((struct page **)bp->b_folios,
+					bp->b_folio_count, -1);
 			if (bp->b_addr)
 				break;
 			vm_unmap_aliases();
@@ -571,7 +571,7 @@ xfs_buf_find_lock(
 			return -ENOENT;
 		}
 		ASSERT((bp->b_flags & _XBF_DELWRI_Q) == 0);
-		bp->b_flags &= _XBF_KMEM | _XBF_PAGES;
+		bp->b_flags &= _XBF_KMEM | _XBF_FOLIOS;
 		bp->b_ops = NULL;
 	}
 	return 0;
@@ -629,14 +629,15 @@ xfs_buf_find_insert(
 		goto out_drop_pag;
 
 	/*
-	 * For buffers that fit entirely within a single page, first attempt to
-	 * allocate the memory from the heap to minimise memory usage. If we
-	 * can't get heap memory for these small buffers, we fall back to using
-	 * the page allocator.
+	 * For buffers that fit entirely within a single page folio, first
+	 * attempt to allocate the memory from the heap to minimise memory
+	 * usage. If we can't get heap memory for these small buffers, we fall
+	 * back to using the page allocator.
 	 */
+
 	if (BBTOB(new_bp->b_length) >= PAGE_SIZE ||
 	    xfs_buf_alloc_kmem(new_bp, flags) < 0) {
-		error = xfs_buf_alloc_pages(new_bp, flags);
+		error = xfs_buf_alloc_folios(new_bp, flags);
 		if (error)
 			goto out_free_buf;
 	}
@@ -728,11 +729,11 @@ xfs_buf_get_map(
 
 	/* We do not hold a perag reference anymore. */
 	if (!bp->b_addr) {
-		error = _xfs_buf_map_pages(bp, flags);
+		error = _xfs_buf_map_folios(bp, flags);
 		if (unlikely(error)) {
 			xfs_warn_ratelimited(btp->bt_mount,
-				"%s: failed to map %u pages", __func__,
-				bp->b_page_count);
+				"%s: failed to map %u folios", __func__,
+				bp->b_folio_count);
 			xfs_buf_relse(bp);
 			return error;
 		}
@@ -963,14 +964,14 @@ xfs_buf_get_uncached(
 	if (error)
 		return error;
 
-	error = xfs_buf_alloc_pages(bp, flags);
+	error = xfs_buf_alloc_folios(bp, flags);
 	if (error)
 		goto fail_free_buf;
 
-	error = _xfs_buf_map_pages(bp, 0);
+	error = _xfs_buf_map_folios(bp, 0);
 	if (unlikely(error)) {
 		xfs_warn(target->bt_mount,
-			"%s: failed to map pages", __func__);
+			"%s: failed to map folios", __func__);
 		goto fail_free_buf;
 	}
 
@@ -1465,7 +1466,7 @@ xfs_buf_ioapply_map(
 	blk_opf_t	op)
 {
 	int		page_index;
-	unsigned int	total_nr_pages = bp->b_page_count;
+	unsigned int	total_nr_pages = bp->b_folio_count;
 	int		nr_pages;
 	struct bio	*bio;
 	sector_t	sector = bp->b_maps[map].bm_bn;
@@ -1503,7 +1504,7 @@ xfs_buf_ioapply_map(
 		if (nbytes > size)
 			nbytes = size;
 
-		rbytes = bio_add_page(bio, bp->b_pages[page_index], nbytes,
+		rbytes = bio_add_folio(bio, bp->b_folios[page_index], nbytes,
 				      offset);
 		if (rbytes < nbytes)
 			break;
@@ -1716,13 +1717,13 @@ xfs_buf_offset(
 	struct xfs_buf		*bp,
 	size_t			offset)
 {
-	struct page		*page;
+	struct folio		*folio;
 
 	if (bp->b_addr)
 		return bp->b_addr + offset;
 
-	page = bp->b_pages[offset >> PAGE_SHIFT];
-	return page_address(page) + (offset & (PAGE_SIZE-1));
+	folio = bp->b_folios[offset >> PAGE_SHIFT];
+	return folio_address(folio) + (offset & (PAGE_SIZE-1));
 }
 
 void
@@ -1735,18 +1736,18 @@ xfs_buf_zero(
 
 	bend = boff + bsize;
 	while (boff < bend) {
-		struct page	*page;
+		struct folio	*folio;
 		int		page_index, page_offset, csize;
 
 		page_index = (boff + bp->b_offset) >> PAGE_SHIFT;
 		page_offset = (boff + bp->b_offset) & ~PAGE_MASK;
-		page = bp->b_pages[page_index];
+		folio = bp->b_folios[page_index];
 
 		csize = min_t(size_t, PAGE_SIZE - page_offset,
 				      BBTOB(bp->b_length) - boff);
 
 		ASSERT((csize + page_offset) <= PAGE_SIZE);
 
-		memset(page_address(page) + page_offset, 0, csize);
+		memset(folio_address(folio) + page_offset, 0, csize);
 
 		boff += csize;
 	}
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index b470de08a46c..1e7298ff3fa5 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -29,7 +29,7 @@ struct xfs_buf;
 #define XBF_READ_AHEAD	 (1u << 2) /* asynchronous read-ahead */
 #define XBF_NO_IOACCT	 (1u << 3) /* bypass I/O accounting (non-LRU bufs) */
 #define XBF_ASYNC	 (1u << 4) /* initiator will not wait for completion */
-#define XBF_DONE	 (1u << 5) /* all pages in the buffer uptodate */
+#define XBF_DONE	 (1u << 5) /* all folios in the buffer uptodate */
 #define XBF_STALE	 (1u << 6) /* buffer has been staled, do not find it */
 #define XBF_WRITE_FAIL	 (1u << 7) /* async writes have failed on this buffer */
 
@@ -39,7 +39,7 @@ struct xfs_buf;
 #define _XBF_LOGRECOVERY (1u << 18)/* log recovery buffer */
 
 /* flags used only internally */
-#define _XBF_PAGES	 (1u << 20)/* backed by refcounted pages */
+#define _XBF_FOLIOS	 (1u << 20)/* backed by refcounted folios */
 #define _XBF_KMEM	 (1u << 21)/* backed by heap memory */
 #define _XBF_DELWRI_Q	 (1u << 22)/* buffer on a delwri queue */
 
@@ -68,7 +68,7 @@ typedef unsigned int xfs_buf_flags_t;
 	{ _XBF_INODES,		"INODES" }, \
 	{ _XBF_DQUOTS,		"DQUOTS" }, \
 	{ _XBF_LOGRECOVERY,	"LOG_RECOVERY" }, \
-	{ _XBF_PAGES,		"PAGES" }, \
+	{ _XBF_FOLIOS,		"FOLIOS" }, \
 	{ _XBF_KMEM,		"KMEM" }, \
 	{ _XBF_DELWRI_Q,	"DELWRI_Q" }, \
 /* The following interface flags should never be set */ \
@@ -116,7 +116,7 @@ typedef struct xfs_buftarg {
 	struct ratelimit_state	bt_ioerror_rl;
 } xfs_buftarg_t;
 
-#define XB_PAGES	2
+#define XB_FOLIOS	2
 
 struct xfs_buf_map {
 	xfs_daddr_t		bm_bn;	/* block number for I/O */
@@ -180,14 +180,14 @@ struct xfs_buf {
 	struct xfs_buf_log_item	*b_log_item;
 	struct list_head	b_li_list;	/* Log items list head */
 	struct xfs_trans	*b_transp;
-	struct page		**b_pages;	/* array of page pointers */
-	struct page		*b_page_array[XB_PAGES]; /* inline pages */
+	struct folio		**b_folios;	/* array of folio pointers */
+	struct folio		*b_folio_array[XB_FOLIOS]; /* inline folios */
 	struct xfs_buf_map	*b_maps;	/* compound buffer map */
 	struct xfs_buf_map	__b_map;	/* inline compound buffer map */
 	int			b_map_count;
 	atomic_t		b_pin_count;	/* pin count */
 	atomic_t		b_io_remaining;	/* #outstanding I/O requests */
-	unsigned int		b_page_count;	/* size of page array */
+	unsigned int		b_folio_count;	/* size of folio array */
 	unsigned int		b_offset;	/* page offset of b_addr,
 						   only for _XBF_KMEM buffers */
 	int			b_error;	/* error code on I/O */
diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c
index 83a81cb52d8e..d1407cee48d9 100644
--- a/fs/xfs/xfs_buf_item.c
+++ b/fs/xfs/xfs_buf_item.c
@@ -69,7 +69,7 @@ xfs_buf_item_straddle(
 {
 	void			*first, *last;
 
-	if (bp->b_page_count == 1 || !(bp->b_flags & XBF_UNMAPPED))
+	if (bp->b_folio_count == 1 || !(bp->b_flags & XBF_UNMAPPED))
 		return false;
 
 	first = xfs_buf_offset(bp, offset + (first_bit << XFS_BLF_SHIFT));
diff --git a/fs/xfs/xfs_linux.h b/fs/xfs/xfs_linux.h
index caccb7f76690..804389b8e802 100644
--- a/fs/xfs/xfs_linux.h
+++ b/fs/xfs/xfs_linux.h
@@ -279,4 +279,12 @@ kmem_to_page(void *addr)
 	return virt_to_page(addr);
 }
 
+static inline struct folio *
+kmem_to_folio(void *addr)
+{
+	if (is_vmalloc_addr(addr))
+		return page_folio(vmalloc_to_page(addr));
+	return virt_to_folio(addr);
+}
+
 #endif	/* __XFS_LINUX__ */
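
Not part of the patch, but it may be worth spelling out the layout
assumption the (struct page **) casts above rely on. A minimal sketch,
assuming only single page (order-0) folios as the commit message states;
the helper below is made up purely for illustration and does not exist
in this series:

#include <linux/build_bug.h>	/* BUILD_BUG_ON() */
#include <linux/mm.h>		/* folio_order(), folio_page() */

/*
 * Illustrative only: check that a single-page folio pointer aliases its
 * struct page pointer, which is what lets b_folios be passed to
 * alloc_pages_bulk_array() and vm_map_ram() with a simple cast.
 */
static inline void xfs_buf_check_folio_aliases_page(struct folio *folio)
{
	/* struct folio embeds its first struct page at offset zero. */
	BUILD_BUG_ON(offsetof(struct folio, page) != 0);

	/* This conversion only ever allocates order-0 folios... */
	VM_WARN_ON_ONCE(folio_order(folio) != 0);

	/* ...so the folio and its page are numerically the same pointer. */
	VM_WARN_ON_ONCE((struct page *)folio != folio_page(folio, 0));
}

Once multi-page folios are allocated, the folio/page aliasing above only
holds for the head page, so the casts would have to be replaced with
folio-aware allocation and mapping paths.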