From patchwork Tue May 14 17:38:55 2024
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13664304
From: Hannes Reinecke
To: Jens Axboe
Cc: Matthew Wilcox, Luis Chamberlain, Pankaj Raghav,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Hannes Reinecke
Subject: [PATCH 1/6] fs/mpage: avoid negative shift for large blocksize
Date: Tue, 14 May 2024 19:38:55 +0200
Message-Id: <20240514173900.62207-2-hare@kernel.org>
In-Reply-To: <20240514173900.62207-1-hare@kernel.org>
References: <20240514173900.62207-1-hare@kernel.org>

For large blocksizes the number of block bits is larger than PAGE_SHIFT,
so the shift by (PAGE_SHIFT - blkbits) would be negative. Use folio_pos()
to calculate the block number from the folio instead.

Signed-off-by: Hannes Reinecke
---
 fs/mpage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index fa8b99a199fa..558b627d382c 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -188,7 +188,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	if (folio_buffers(folio))
 		goto confused;
 
-	block_in_file = (sector_t)folio->index << (PAGE_SHIFT - blkbits);
+	block_in_file = folio_pos(folio) >> blkbits;
 	last_block = block_in_file + args->nr_pages * blocks_per_page;
 	last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
 	if (last_block > last_block_in_file)
@@ -534,7 +534,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	 * The page has no buffers: map it to disk
 	 */
 	BUG_ON(!folio_test_uptodate(folio));
-	block_in_file = (sector_t)folio->index << (PAGE_SHIFT - blkbits);
+	block_in_file = folio_pos(folio) >> blkbits;
 
 	/*
 	 * Whole page beyond EOF?
	 * Skip allocating blocks to avoid leaking
	 * space.
	 */

From patchwork Tue May 14 17:38:56 2024
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13664305
From: Hannes Reinecke
To: Jens Axboe
Cc: Matthew Wilcox, Luis Chamberlain, Pankaj Raghav,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Hannes Reinecke
Subject: [PATCH 2/6] fs/mpage: use blocks_per_folio instead of blocks_per_page
Date: Tue, 14 May 2024 19:38:56 +0200
Message-Id: <20240514173900.62207-3-hare@kernel.org>
In-Reply-To: <20240514173900.62207-1-hare@kernel.org>
References: <20240514173900.62207-1-hare@kernel.org>

Convert mpage to folios and associate the number of blocks with
a folio and not a page.
Signed-off-by: Hannes Reinecke
---
 fs/mpage.c | 45 +++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index 558b627d382c..7cb9d9efdba8 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -114,7 +114,7 @@ static void map_buffer_to_folio(struct folio *folio, struct buffer_head *bh,
 	 * don't make any buffers if there is only one buffer on
 	 * the folio and the folio just needs to be set up to date
 	 */
-	if (inode->i_blkbits == PAGE_SHIFT &&
+	if (inode->i_blkbits == folio_shift(folio) &&
 	    buffer_uptodate(bh)) {
 		folio_mark_uptodate(folio);
 		return;
@@ -160,7 +160,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	struct folio *folio = args->folio;
 	struct inode *inode = folio->mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;
 	const unsigned blocksize = 1 << blkbits;
 	struct buffer_head *map_bh = &args->map_bh;
 	sector_t block_in_file;
@@ -168,7 +168,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	sector_t last_block_in_file;
 	sector_t first_block;
 	unsigned page_block;
-	unsigned first_hole = blocks_per_page;
+	unsigned first_hole = blocks_per_folio;
 	struct block_device *bdev = NULL;
 	int length;
 	int fully_mapped = 1;
@@ -177,9 +177,6 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	unsigned relative_block;
 	gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL);
 
-	/* MAX_BUF_PER_PAGE, for example */
-	VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-
 	if (args->is_readahead) {
 		opf |= REQ_RAHEAD;
 		gfp |= __GFP_NORETRY | __GFP_NOWARN;
@@ -189,7 +186,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		goto confused;
 
 	block_in_file = folio_pos(folio) >> blkbits;
-	last_block = block_in_file + args->nr_pages * blocks_per_page;
+	last_block = block_in_file + ((args->nr_pages * PAGE_SIZE) >> blkbits);
 	last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
 	if (last_block > last_block_in_file)
 		last_block = last_block_in_file;
@@ -211,7 +208,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			clear_buffer_mapped(map_bh);
 			break;
 		}
-		if (page_block == blocks_per_page)
+		if (page_block == blocks_per_folio)
 			break;
 		page_block++;
 		block_in_file++;
@@ -223,7 +220,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	 * Then do more get_blocks calls until we are done with this folio.
 	 */
 	map_bh->b_folio = folio;
-	while (page_block < blocks_per_page) {
+	while (page_block < blocks_per_folio) {
 		map_bh->b_state = 0;
 		map_bh->b_size = 0;
 
@@ -236,7 +233,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 
 		if (!buffer_mapped(map_bh)) {
 			fully_mapped = 0;
-			if (first_hole == blocks_per_page)
+			if (first_hole == blocks_per_folio)
 				first_hole = page_block;
 			page_block++;
 			block_in_file++;
@@ -254,7 +251,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			goto confused;
 		}
 
-		if (first_hole != blocks_per_page)
+		if (first_hole != blocks_per_folio)
 			goto confused;	/* hole -> non-hole */
 
 		/* Contiguous blocks? */
@@ -267,7 +264,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			if (relative_block == nblocks) {
 				clear_buffer_mapped(map_bh);
 				break;
-			} else if (page_block == blocks_per_page)
+			} else if (page_block == blocks_per_folio)
 				break;
 			page_block++;
 			block_in_file++;
@@ -275,8 +272,8 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		bdev = map_bh->b_bdev;
 	}
 
-	if (first_hole != blocks_per_page) {
-		folio_zero_segment(folio, first_hole << blkbits, PAGE_SIZE);
+	if (first_hole != blocks_per_folio) {
+		folio_zero_segment(folio, first_hole << blkbits, folio_size(folio));
 		if (first_hole == 0) {
 			folio_mark_uptodate(folio);
 			folio_unlock(folio);
@@ -310,10 +307,10 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	relative_block = block_in_file - args->first_logical_block;
 	nblocks = map_bh->b_size >> blkbits;
 	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
-	    (first_hole != blocks_per_page))
+	    (first_hole != blocks_per_folio))
 		args->bio = mpage_bio_submit_read(args->bio);
 	else
-		args->last_block_in_bio = first_block + blocks_per_page - 1;
+		args->last_block_in_bio = first_block + blocks_per_folio - 1;
 out:
 	return args->bio;
@@ -392,7 +389,7 @@ int mpage_read_folio(struct folio *folio, get_block_t get_block)
 {
 	struct mpage_readpage_args args = {
 		.folio = folio,
-		.nr_pages = 1,
+		.nr_pages = folio_nr_pages(folio),
 		.get_block = get_block,
 	};
 
@@ -463,12 +460,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	struct address_space *mapping = folio->mapping;
 	struct inode *inode = mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;
 	sector_t last_block;
 	sector_t block_in_file;
 	sector_t first_block;
 	unsigned page_block;
-	unsigned first_unmapped = blocks_per_page;
+	unsigned first_unmapped = blocks_per_folio;
 	struct block_device *bdev = NULL;
 	int boundary = 0;
 	sector_t boundary_block = 0;
@@ -493,12 +490,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 			 */
 			if (buffer_dirty(bh))
 				goto confused;
-			if (first_unmapped == blocks_per_page)
+			if (first_unmapped == blocks_per_folio)
 				first_unmapped = page_block;
 			continue;
 		}
 
-		if (first_unmapped != blocks_per_page)
+		if (first_unmapped != blocks_per_folio)
 			goto confused;	/* hole -> non-hole */
 
 		if (!buffer_dirty(bh) || !buffer_uptodate(bh))
@@ -543,7 +540,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 		goto page_is_mapped;
 	last_block = (i_size - 1) >> blkbits;
 	map_bh.b_folio = folio;
-	for (page_block = 0; page_block < blocks_per_page; ) {
+	for (page_block = 0; page_block < blocks_per_folio; ) {
 		map_bh.b_state = 0;
 		map_bh.b_size = 1 << blkbits;
@@ -625,14 +622,14 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	BUG_ON(folio_test_writeback(folio));
 	folio_start_writeback(folio);
 	folio_unlock(folio);
-	if (boundary || (first_unmapped != blocks_per_page)) {
+	if (boundary || (first_unmapped != blocks_per_folio)) {
 		bio = mpage_bio_submit_write(bio);
 		if (boundary_block) {
 			write_boundary_block(boundary_bdev,
 					boundary_block, 1 << blkbits);
 		}
 	} else {
-		mpd->last_block_in_bio = first_block + blocks_per_page - 1;
+		mpd->last_block_in_bio = first_block + blocks_per_folio - 1;
 	}
 
 	goto out;

From patchwork Tue May 14 17:38:57 2024
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13664306
From: Hannes Reinecke
To: Jens Axboe
Cc: Matthew Wilcox, Luis Chamberlain, Pankaj Raghav,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Hannes Reinecke
Subject: [PATCH 3/6] blk-merge: split bio by max_segment_size, not PAGE_SIZE
Date: Tue, 14 May 2024 19:38:57 +0200
Message-Id: <20240514173900.62207-4-hare@kernel.org>
In-Reply-To: <20240514173900.62207-1-hare@kernel.org>
References: <20240514173900.62207-1-hare@kernel.org>

Bvecs can be larger than a page, and the block layer handles this just
fine. So do not split by PAGE_SIZE but rather by the max_segment_size
if that happens to be larger.

Signed-off-by: Hannes Reinecke
---
 block/blk-merge.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 4e3483a16b75..570573d7a34f 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -278,6 +278,7 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 	struct bio_vec bv, bvprv, *bvprvp = NULL;
 	struct bvec_iter iter;
 	unsigned nsegs = 0, bytes = 0;
+	unsigned bv_seg_lim = max(PAGE_SIZE, lim->max_segment_size);
 
 	bio_for_each_bvec(bv, bio, iter) {
 		/*
@@ -289,7 +290,7 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 		if (nsegs < lim->max_segments &&
 		    bytes + bv.bv_len <= max_bytes &&
-		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
+		    bv.bv_offset + bv.bv_len <= bv_seg_lim) {
 			nsegs++;
 			bytes += bv.bv_len;
 		} else {
From patchwork Tue May 14 17:38:58 2024
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13664307
From: Hannes Reinecke
To: Jens Axboe
Cc: Matthew Wilcox, Luis Chamberlain, Pankaj Raghav,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Hannes Reinecke
Subject: [PATCH 4/6] block/bdev: enable large folio support for large logical block sizes
Date: Tue, 14 May 2024 19:38:58 +0200
Message-Id: <20240514173900.62207-5-hare@kernel.org>
In-Reply-To: <20240514173900.62207-1-hare@kernel.org>
References: <20240514173900.62207-1-hare@kernel.org>

Call mapping_set_folio_min_order() when modifying the logical block
size to ensure folios are allocated with the correct size.

Signed-off-by: Hannes Reinecke
---
 block/bdev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block/bdev.c b/block/bdev.c
index b8e32d933a63..bd2efcad4f32 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -142,6 +142,8 @@ static void set_init_blocksize(struct block_device *bdev)
 		bsize <<= 1;
 	}
 	bdev->bd_inode->i_blkbits = blksize_bits(bsize);
+	mapping_set_folio_min_order(bdev->bd_inode->i_mapping,
+				    get_order(bsize));
 }
 
 int set_blocksize(struct block_device *bdev, int size)
@@ -158,6 +160,8 @@ int set_blocksize(struct block_device *bdev, int size)
 	if (bdev->bd_inode->i_blkbits != blksize_bits(size)) {
 		sync_blockdev(bdev);
 		bdev->bd_inode->i_blkbits = blksize_bits(size);
+		mapping_set_folio_min_order(bdev->bd_inode->i_mapping,
+					    get_order(size));
 		kill_bdev(bdev);
 	}
 	return 0;

From patchwork Tue May 14 17:38:59 2024
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13664308
From: Hannes Reinecke
To: Jens Axboe
Cc: Matthew Wilcox, Luis Chamberlain, Pankaj Raghav,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Hannes Reinecke
Subject: [PATCH 5/6] block/bdev: lift restrictions on supported blocksize
Date: Tue, 14 May 2024 19:38:59 +0200
Message-Id: <20240514173900.62207-6-hare@kernel.org>
In-Reply-To: <20240514173900.62207-1-hare@kernel.org>
References: <20240514173900.62207-1-hare@kernel.org>

We now can support blocksizes larger than PAGE_SIZE, so
lift the restriction.

Signed-off-by: Hannes Reinecke
---
 block/bdev.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/block/bdev.c b/block/bdev.c
index bd2efcad4f32..f092a1b04629 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -148,8 +148,9 @@ static void set_init_blocksize(struct block_device *bdev)
 
 int set_blocksize(struct block_device *bdev, int size)
 {
-	/* Size must be a power of two, and between 512 and PAGE_SIZE */
-	if (size > PAGE_SIZE || size < 512 || !is_power_of_2(size))
+	/* Size must be a power of two, and between 512 and MAX_PAGECACHE_ORDER */
+	if (get_order(size) > MAX_PAGECACHE_ORDER || size < 512 ||
+	    !is_power_of_2(size))
 		return -EINVAL;
 
 	/* Size cannot be smaller than the size supported by the device */
@@ -174,7 +175,7 @@ int sb_set_blocksize(struct super_block *sb, int size)
 	if (set_blocksize(sb->s_bdev, size))
 		return 0;
 	/* If we get here, we know size is power of two
-	 * and it's value is between 512 and PAGE_SIZE */
+	 * and its value is larger than 512 */
 	sb->s_blocksize = size;
 	sb->s_blocksize_bits = blksize_bits(size);
 	return sb->s_blocksize;

From patchwork Tue May 14 17:39:00 2024
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13664309
From: Hannes Reinecke
To: Jens Axboe
Cc: Matthew Wilcox, Luis Chamberlain, Pankaj Raghav,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Hannes Reinecke
Subject: [PATCH 6/6] nvme: enable logical block size > PAGE_SIZE
Date: Tue, 14 May 2024 19:39:00 +0200
Message-Id: <20240514173900.62207-7-hare@kernel.org>
In-Reply-To: <20240514173900.62207-1-hare@kernel.org>
References: <20240514173900.62207-1-hare@kernel.org>

From: Pankaj Raghav

Don't set the capacity to zero when the logical block size is larger
than PAGE_SIZE, as the block device with iomap aops supports allocating
the block cache with a minimum folio order.

Signed-off-by: Pankaj Raghav
Signed-off-by: Hannes Reinecke
---
 drivers/nvme/host/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 828c77fa13b7..111bf4197052 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1963,11 +1963,11 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
 	bool valid = true;
 
 	/*
-	 * The block layer can't support LBA sizes larger than the page size
-	 * or smaller than a sector size yet, so catch this early and don't
-	 * allow block I/O.
+	 * The block layer can't support LBA sizes larger than
+	 * MAX_PAGECACHE_ORDER or smaller than a sector size, so catch this
+	 * early and don't allow block I/O.
 	 */
-	if (head->lba_shift > PAGE_SHIFT || head->lba_shift < SECTOR_SHIFT) {
+	if (get_order(bs) > MAX_PAGECACHE_ORDER || head->lba_shift < SECTOR_SHIFT) {
 		bs = (1 << 9);
 		valid = false;
 	}