| Message ID | 20220601210141.3773402-1-shr@fb.com (mailing list archive) |
|---|---|
| Series | io-uring/xfs: support async buffered writes |
On 6/1/22 3:01 PM, Stefan Roesch wrote:
> This patch series adds support for async buffered writes when using both
> xfs and io-uring. Currently io-uring only supports buffered writes in the
> slow path, by processing them in the io workers. With this patch series it
> is now possible to support buffered writes in the fast path. To be able to
> use the fast path the required pages must be in the page cache, the
> required locks in xfs can be granted immediately and no additional blocks
> need to be read from disk.

This series looks good to me now, but will need some slight rebasing
since the 5.20 io_uring branch has split up the code a bit. Trivial to
do though, I suspect it'll apply directly if we just change
fs/io_uring.c to io_uring/rw.c instead.

The bigger question is how to stage this, as it's touching a bit of fs,
mm, and io_uring...
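For context, the fast path described in the cover letter is exercised from userspace simply by submitting ordinary buffered writes through io_uring. A minimal liburing sketch of such a submission (not part of the series; the file path and buffer size are made up for illustration):

```c
/* Hypothetical example, not from the patch series: submit one buffered
 * write through io_uring using liburing. Build with -luring. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd, ret;

	/* No O_DIRECT: this is a buffered write through the page cache. */
	fd = open("/mnt/xfs/testfile", O_WRONLY | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 'a', sizeof(buf));

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 1;
	}

	/* Queue a 4k write at file offset 0 and submit it. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (ret < 0) {
		fprintf(stderr, "wait_cqe: %s\n", strerror(-ret));
		return 1;
	}
	printf("write returned %d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}
```

With this series, a submission like the above can complete inline when the fast-path conditions hold (pages already in the page cache, xfs locks immediately available, no blocks to read from disk); otherwise it falls back to the io-wq worker path as before.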
On Thu, Jun 02, 2022 at 02:09:00AM -0600, Jens Axboe wrote:
> On 6/1/22 3:01 PM, Stefan Roesch wrote:
> > This patch series adds support for async buffered writes when using both
> > xfs and io-uring. Currently io-uring only supports buffered writes in the
> > slow path, by processing them in the io workers. With this patch series it
> > is now possible to support buffered writes in the fast path. To be able to
> > use the fast path the required pages must be in the page cache, the
> > required locks in xfs can be granted immediately and no additional blocks
> > need to be read from disk.
>
> This series looks good to me now, but will need some slight rebasing
> since the 5.20 io_uring branch has split up the code a bit. Trivial to
> do though, I suspect it'll apply directly if we just change
> fs/io_uring.c to io_uring/rw.c instead.
>
> The bigger question is how to stage this, as it's touching a bit of fs,
> mm, and io_uring...

What data integrity testing has this had? Has it been run through a
few billion fsx operations with io_uring read/write enabled?

Cheers,

Dave.
On 6/2/22 8:43 PM, Dave Chinner wrote:
> On Thu, Jun 02, 2022 at 02:09:00AM -0600, Jens Axboe wrote:
>> On 6/1/22 3:01 PM, Stefan Roesch wrote:
>>> This patch series adds support for async buffered writes when using both
>>> xfs and io-uring. Currently io-uring only supports buffered writes in the
>>> slow path, by processing them in the io workers. With this patch series it
>>> is now possible to support buffered writes in the fast path. To be able to
>>> use the fast path the required pages must be in the page cache, the
>>> required locks in xfs can be granted immediately and no additional blocks
>>> need to be read from disk.
>>
>> This series looks good to me now, but will need some slight rebasing
>> since the 5.20 io_uring branch has split up the code a bit. Trivial to
>> do though, I suspect it'll apply directly if we just change
>> fs/io_uring.c to io_uring/rw.c instead.
>>
>> The bigger question is how to stage this, as it's touching a bit of fs,
>> mm, and io_uring...
>
> What data integrity testing has this had? Has it been run through a
> few billion fsx operations with io_uring read/write enabled?

I'll let Stefan expand on this, but just mention what I know - it has
been run via fio at least. Each of the performance tests was an hour
long, and specific test cases were written to test the boundary
conditions of which pages of a range were in the page cache, etc. Also
with data verification.

Don't know if fsx specifically has been used.
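The actual fio job files are not part of the thread; a rough sketch of the kind of buffered-write job with data verification described above might look like the following (the filename, size, and queue depth are illustrative; the option names are standard fio options):

```ini
; Hypothetical fio job, not from the thread: buffered (non-O_DIRECT)
; writes submitted through io_uring, with data verification.
[buffered-io_uring-verify]
ioengine=io_uring
rw=randwrite
direct=0
bs=4k
size=1g
iodepth=32
verify=crc32c
filename=/mnt/xfs/fio-testfile
```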
On 6/3/22 6:04 AM, Jens Axboe wrote:
> On 6/2/22 8:43 PM, Dave Chinner wrote:
>> On Thu, Jun 02, 2022 at 02:09:00AM -0600, Jens Axboe wrote:
>>> On 6/1/22 3:01 PM, Stefan Roesch wrote:
>>>> This patch series adds support for async buffered writes when using both
>>>> xfs and io-uring. Currently io-uring only supports buffered writes in the
>>>> slow path, by processing them in the io workers. With this patch series it
>>>> is now possible to support buffered writes in the fast path. To be able to
>>>> use the fast path the required pages must be in the page cache, the
>>>> required locks in xfs can be granted immediately and no additional blocks
>>>> need to be read from disk.
>>>
>>> This series looks good to me now, but will need some slight rebasing
>>> since the 5.20 io_uring branch has split up the code a bit. Trivial to
>>> do though, I suspect it'll apply directly if we just change
>>> fs/io_uring.c to io_uring/rw.c instead.
>>>
>>> The bigger question is how to stage this, as it's touching a bit of fs,
>>> mm, and io_uring...
>>
>> What data integrity testing has this had? Has it been run through a
>> few billion fsx operations with io_uring read/write enabled?
>
> I'll let Stefan expand on this, but just mention what I know - it has
> been run via fio at least. Each of the performance tests was an hour
> long, and specific test cases were written to test the boundary
> conditions of which pages of a range were in the page cache, etc. Also
> with data verification.

I performed the following tests:

- fio tests with various block sizes and different modes (psync, io_uring, libaio)
- fsx tests with one billion ops
- individual test program:
  - test with different block sizes
  - test short writes
  - test holes
  - test without readahead

> Don't know if fsx specifically has been used.
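For reference, a one-billion-op fsx pass like the one listed above could be driven roughly as follows, using the fsx binary built as part of xfstests (the mount point is illustrative; -N sets the number of operations to perform):

```sh
# Hypothetical invocation, not copied from the thread: run fsx from an
# xfstests build against a file on the XFS filesystem under test.
./ltp/fsx -N 1000000000 /mnt/xfs/fsx-testfile
```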