diff mbox series

[RFC,1/2] mm: avoid data corruption when extending DIO write race with buffered read

Message ID 20231202091432.8349-2-libaokun1@huawei.com (mailing list archive)
State New
Series mm/ext4: avoid data corruption when extending DIO write race with buffered read

Commit Message

Baokun Li Dec. 2, 2023, 9:14 a.m. UTC
When DIO write and buffered read are performed on the same file on two
CPUs, the following race may occur:

          cpu1                           cpu2
 Direct write 1024 from 4096 | Buffered read 8192 from 0
-----------------------------|----------------------------
...                           ...
 ext4_file_write_iter
  ext4_dio_write_iter
   iomap_dio_rw
   ...
                               ext4_file_read_iter
                                generic_file_read_iter
                                 filemap_read
                                  filemap_get_pages
                                   ...
                                    ext4_mpage_readpages
                                     ext4_readpage_limit(inode)
                                      i_size_read(inode) // 4096
    ext4_dio_write_end_io
     i_size_write(inode, 5120)
                                   i_size_read(inode) // 5120

1. read alloc 8192

  0                                      8192
  |-------------------|-------------------|

2. read from disk (i_size 4096)

  0   filled data   4096  filled zero    8192
  |-------------------|-------------------|

3. copyout (i_size 5120)

  0 copyout to user buffer 5120          8192
  |------------------------|--------------|
                      |~~~~|
                   Inconsistent data

In the above race, because the inode size changes mid-read, only 4096 bytes
of actual data are read from the disk, but 5120 bytes are copied to the
user's buffer, including 1024 bytes from the zero-filled tail of the page.
As a result, the data read by the user is inconsistent with the data on
disk.

To solve this problem completely, we should take the smaller of the number
of bytes actually read and the inode size, and use that as the final read
size. The problem is that we don't know how many bytes of valid data
filemap_get_pages() read, or how many bytes of valid data a given page
holds, so we have to rely on the inode size to determine the range of
valid data.

So we read the inode size both before and after filemap_get_pages(), and
take the smaller of the two as the copyout limit, which reduces the
probability of the above issue being triggered.

Signed-off-by: Baokun Li <libaokun1@huawei.com>
---
 mm/filemap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 71f00539ac00..47c1729afbb4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2587,7 +2587,8 @@  ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
 		if ((iocb->ki_flags & IOCB_WAITQ) && already_read)
 			iocb->ki_flags |= IOCB_NOWAIT;
 
-		if (unlikely(iocb->ki_pos >= i_size_read(inode)))
+		isize = i_size_read(inode);
+		if (unlikely(iocb->ki_pos >= isize))
 			break;
 
 		error = filemap_get_pages(iocb, iter->count, &fbatch, false);
@@ -2602,7 +2603,7 @@  ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
 		 * part of the page is not copied back to userspace (unless
 		 * another truncate extends the file - this is desired though).
 		 */
-		isize = i_size_read(inode);
+		isize = min_t(loff_t, isize, i_size_read(inode));
 		if (unlikely(iocb->ki_pos >= isize))
 			goto put_folios;
 		end_offset = min_t(loff_t, isize, iocb->ki_pos + iter->count);