From patchwork Wed May 29 13:45:09 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13678914
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, chandan.babu@oracle.com, akpm@linux-foundation.org,
	brauner@kernel.org, willy@infradead.org, djwong@kernel.org
Cc: linux-kernel@vger.kernel.org, hare@suse.de, john.g.garry@oracle.com,
	gost.dev@samsung.com, yang@os.amperecomputing.com, p.raghav@samsung.com,
	cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, hch@lst.de,
	mcgrof@kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v6 11/11] xfs: enable block size larger than page size support
Date: Wed, 29 May 2024 15:45:09 +0200
Message-Id: <20240529134509.120826-12-kernel@pankajraghav.com>
In-Reply-To: <20240529134509.120826-1-kernel@pankajraghav.com>
References: <20240529134509.120826-1-kernel@pankajraghav.com>
MIME-Version: 1.0

From: Pankaj Raghav

Page cache now has the ability to have a minimum order when allocating
a folio which is a prerequisite to add support for block size > page
size.

Reviewed-by: Darrick J. Wong
Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
---
 fs/xfs/libxfs/xfs_ialloc.c |  5 +++++
 fs/xfs/libxfs/xfs_shared.h |  3 +++
 fs/xfs/xfs_icache.c        |  6 ++++--
 fs/xfs/xfs_mount.c         |  1 -
 fs/xfs/xfs_super.c         | 18 ++++++++++--------
 5 files changed, 22 insertions(+), 11 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 14c81f227c5b..1e76431d75a4 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -3019,6 +3019,11 @@ xfs_ialloc_setup_geometry(
 		igeo->ialloc_align = mp->m_dalign;
 	else
 		igeo->ialloc_align = 0;
+
+	if (mp->m_sb.sb_blocksize > PAGE_SIZE)
+		igeo->min_folio_order = mp->m_sb.sb_blocklog - PAGE_SHIFT;
+	else
+		igeo->min_folio_order = 0;
 }
 
 /* Compute the location of the root directory inode that is laid out by mkfs. */
diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
index 34f104ed372c..e67a1c7cc0b0 100644
--- a/fs/xfs/libxfs/xfs_shared.h
+++ b/fs/xfs/libxfs/xfs_shared.h
@@ -231,6 +231,9 @@ struct xfs_ino_geometry {
 	/* precomputed value for di_flags2 */
 	uint64_t	new_diflags2;
 
+	/* minimum folio order of a page cache allocation */
+	unsigned int	min_folio_order;
+
 };
 
 #endif /* __XFS_SHARED_H__ */
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 0953163a2d84..5ed3dc9e7d90 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -89,7 +89,8 @@ xfs_inode_alloc(
 	/* VFS doesn't initialise i_mode or i_state! */
 	VFS_I(ip)->i_mode = 0;
 	VFS_I(ip)->i_state = 0;
-	mapping_set_large_folios(VFS_I(ip)->i_mapping);
+	mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
+				    M_IGEO(mp)->min_folio_order);
 
 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -324,7 +325,8 @@ xfs_reinit_inode(
 	inode->i_rdev = dev;
 	inode->i_uid = uid;
 	inode->i_gid = gid;
-	mapping_set_large_folios(inode->i_mapping);
+	mapping_set_folio_min_order(inode->i_mapping,
+				    M_IGEO(mp)->min_folio_order);
 	return error;
 }
 
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 46cb0384143b..a99454208807 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -135,7 +135,6 @@ xfs_sb_validate_fsb_count(
 	uint64_t		max_index;
 	uint64_t		max_bytes;
 
-	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
 
 	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 27e9f749c4c7..b8a93a8f35ca 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1638,16 +1638,18 @@ xfs_fs_fill_super(
 		goto out_free_sb;
 	}
 
-	/*
-	 * Until this is fixed only page-sized or smaller data blocks work.
-	 */
 	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
-		xfs_warn(mp,
-		"File system with blocksize %d bytes. "
-		"Only pagesize (%ld) or less will currently work.",
+		if (!xfs_has_crc(mp)) {
+			xfs_warn(mp,
+"V4 Filesystem with blocksize %d bytes. Only pagesize (%ld) or less is supported.",
 				mp->m_sb.sb_blocksize, PAGE_SIZE);
-		error = -ENOSYS;
-		goto out_free_sb;
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		xfs_warn(mp,
+"EXPERIMENTAL: V5 Filesystem with Large Block Size (%d bytes) enabled.",
+			mp->m_sb.sb_blocksize);
 	}
 
 	/* Ensure this filesystem fits in the page cache limits */