[RFC,1/1] block/ioctl: Add an ioctl to enable large folios for block buffered IO path

Message ID 20241127054737.33351-2-bharata@amd.com (mailing list archive)
State New
Series Large folios in block buffered IO path

Commit Message

Bharata B Rao Nov. 27, 2024, 5:47 a.m. UTC
In order to experiment with using large folios for block device read/write
operations, expose an ioctl that userspace can selectively use on raw
block devices.

For the write path, this forces the iomap layer to provision large
folios (via iomap_file_buffered_write()).
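
A minimal usage sketch (the device path is only an example, and
BLKSETLFOLIO needs the uapi header change from this patch): enable
large folios on a device, then run buffered IO against it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>		/* BLKSETLFOLIO, added by this patch */

int main(void)
{
	int fd = open("/dev/nvme0n1", O_RDWR);	/* example device */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* BLKSETLFOLIO takes no argument; it only flags the bdev mapping. */
	if (ioctl(fd, BLKSETLFOLIO) < 0) {
		perror("ioctl(BLKSETLFOLIO)");
		close(fd);
		return 1;
	}
	/* Subsequent buffered reads/writes on fd may now use large folios. */
	close(fd);
	return 0;
}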

Signed-off-by: Bharata B Rao <bharata@amd.com>
---
 block/ioctl.c           | 8 ++++++++
 include/uapi/linux/fs.h | 2 ++
 2 files changed, 10 insertions(+)

Comments

Christoph Hellwig Nov. 27, 2024, 6:26 a.m. UTC | #1
On Wed, Nov 27, 2024 at 11:17:37AM +0530, Bharata B Rao wrote:
> In order to experiment with using large folios for block device read/write
> operations, expose an ioctl that userspace can selectively use on raw
> block devices.
> 
> For the write path, this forces the iomap layer to provision large
> folios (via iomap_file_buffered_write()).

Well, unless CONFIG_BUFFER_HEAD is disabled, the block device uses
the buffer-head-based write path, which currently doesn't fully
support large folios (although there is a series out to do so on
fsdevel right now), so I don't think this will fully work.

But the more important problem, and the reason why we don't use
the non-buffer_head path by default, is that the block device mapping
is reused by a lot of file systems, which are not aware of large
folios and will get utterly confused.  So if we want to do anything
smart on the block device mapping, we'll have to ensure we're back
to a state compatible with these file systems before calling into
their mount code, and stick to the old code while file systems are
mounted.

Of course the real question is:  why do you care about buffered
I/O performance on the block device node?

Bharata B Rao Nov. 27, 2024, 10:37 a.m. UTC | #2
On 27-Nov-24 11:56 AM, Christoph Hellwig wrote:
> On Wed, Nov 27, 2024 at 11:17:37AM +0530, Bharata B Rao wrote:
>> In order to experiment with using large folios for block device read/write
>> operations, expose an ioctl that userspace can selectively use on raw
>> block devices.
>>
>> For the write path, this forces the iomap layer to provision large
>> folios (via iomap_file_buffered_write()).
> 
> Well, unless CONFIG_BUFFER_HEAD is disabled, the block device uses
> the buffer-head-based write path, which currently doesn't fully
> support large folios (although there is a series out to do so on
> fsdevel right now), so I don't think this will fully work.

I believe you are referring to the patchset that enables bs > ps for
block devices -
https://lore.kernel.org/linux-fsdevel/20241113094727.1497722-1-mcgrof@kernel.org/

With the above patchset, the block device can use the buffer-head-based
write path without disabling CONFIG_BUFFER_HEAD, and that is a
prerequisite for the buffered IO path in the block layer
(blkdev_buffered_write()) to correctly/fully use large folios. Did I
get that right?
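
For reference, my understanding of the split being discussed is roughly
the following - a simplified sketch of the blkdev_buffered_write()
dispatch in block/fops.c, from memory; exact signatures and arguments
differ across kernel versions:

/* Simplified sketch, not verbatim kernel code. */
#ifdef CONFIG_BUFFER_HEAD
static ssize_t blkdev_buffered_write(struct kiocb *iocb, struct iov_iter *from)
{
	/* buffer_head based write path; large folio support is incomplete */
	return generic_perform_write(iocb, from);
}
#else
static ssize_t blkdev_buffered_write(struct kiocb *iocb, struct iov_iter *from)
{
	/* iomap path; allocates large folios when the mapping allows them */
	return iomap_file_buffered_write(iocb, from, &blkdev_iomap_ops);
}
#endif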

> 
> But the more important problem, and the reason why we don't use
> the non-buffer_head path by default, is that the block device mapping
> is reused by a lot of file systems, which are not aware of large
> folios and will get utterly confused.  So if we want to do anything
> smart on the block device mapping, we'll have to ensure we're back
> to a state compatible with these file systems before calling into
> their mount code, and stick to the old code while file systems are
> mounted.

In fact I was trying to see if it is possible to advertise large folio
support in the bdev mapping only for those block devices which don't
have a filesystem mounted on them. But apparently it was not so
straightforward, and my initial attempt at this resulted in filesystem
corruption. Hence I resorted to the current ioctl approach as a way to
showcase the problem and the potential benefit.
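
A purely hypothetical illustration of that idea (not the code that was
attempted, and racy against a concurrent mount as written) could look
something like:

/*
 * Hypothetical sketch only: refuse to enable large folios when the
 * device is already claimed (e.g. has a filesystem mounted).  This
 * naive bd_holder check races with a concurrent mount, so a real
 * implementation would need to synchronise with the claim path.
 */
static int blkdev_set_large_folio(struct block_device *bdev)
{
	if (bdev->bd_holder)
		return -EBUSY;
	mapping_set_large_folios(bdev->bd_mapping);
	return 0;
}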

> 
> Of course the real question is:  why do you care about buffered
> I/O performance on the block device node?
> 

Various combinations of FIO options
(direct/buffered/blocksizes/readwrite ratios etc.) were part of a
customer test/regression suite, and we found this particular case of
FIO with buffered IO on NVMe block devices to have a lot of
scalability issues. Hence we are checking if there are ways to
mitigate those.

Thanks for your reply.

Regards,
Bharata.

Patch

diff --git a/block/ioctl.c b/block/ioctl.c
index 6554b728bae6..6af26a08ef34 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -548,6 +548,12 @@  static int blkdev_bszset(struct file *file, blk_mode_t mode,
 	return ret;
 }
 
+static int blkdev_set_large_folio(struct block_device *bdev)
+{
+	mapping_set_large_folios(bdev->bd_mapping);
+	return 0;
+}
+
 /*
  * Common commands that are handled the same way on native and compat
  * user space. Note the separate arg/argp parameters that are needed
@@ -632,6 +638,8 @@  static int blkdev_common_ioctl(struct block_device *bdev, blk_mode_t mode,
 		return blkdev_pr_preempt(bdev, mode, argp, true);
 	case IOC_PR_CLEAR:
 		return blkdev_pr_clear(bdev, mode, argp);
+	case BLKSETLFOLIO:
+		return blkdev_set_large_folio(bdev);
 	default:
 		return -ENOIOCTLCMD;
 	}
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 753971770733..5c8a326b68a1 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -203,6 +203,8 @@  struct fsxattr {
 #define BLKROTATIONAL _IO(0x12,126)
 #define BLKZEROOUT _IO(0x12,127)
 #define BLKGETDISKSEQ _IOR(0x12,128,__u64)
+#define BLKSETLFOLIO _IO(0x12, 129)
+
 /*
  * A jump here: 130-136 are reserved for zoned block devices
  * (see uapi/linux/blkzoned.h)