
iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP

Message ID dd5f0c89-186e-18e1-4f43-19a60f5a9774@google.com (mailing list archive)
State New
Series iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP

Commit Message

Hugh Dickins Oct. 27, 2024, 10:23 p.m. UTC
generic/077 on x86_32 CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y with highmem,
on huge=always tmpfs, issues a warning and then hangs (interruptibly):

WARNING: CPU: 5 PID: 3517 at mm/highmem.c:622 kunmap_local_indexed+0x62/0xc9
CPU: 5 UID: 0 PID: 3517 Comm: cp Not tainted 6.12.0-rc4 #2
...
copy_page_from_iter_atomic+0xa6/0x5ec
generic_perform_write+0xf6/0x1b4
shmem_file_write_iter+0x54/0x67

Fix copy_page_from_iter_atomic() by limiting it in that case
(include/linux/skbuff.h skb_frag_must_loop() does similar).
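
For reference, the skbuff.h helper mentioned above reads roughly as
follows (paraphrased from include/linux/skbuff.h; see the tree for the
exact definition):

        /* Paraphrased sketch: the caller must copy page by page when a
         * kmap cannot be relied on to cover the whole compound page.
         */
        static inline bool skb_frag_must_loop(struct page *p)
        {
        #if defined(CONFIG_HIGHMEM)
                if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
                    PageHighMem(p))
                        return true;
        #endif
                return false;
        }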

But going forward, perhaps CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP is too
surprising, has outlived its usefulness, and should just be removed?

Fixes: 908a1ad89466 ("iov_iter: Handle compound highmem pages in copy_page_from_iter_atomic()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org
---
 lib/iov_iter.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Thomas Gleixner Oct. 28, 2024, 8:41 a.m. UTC | #1
On Sun, Oct 27 2024 at 15:23, Hugh Dickins wrote:
> generic/077 on x86_32 CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y with highmem,
> on huge=always tmpfs, issues a warning and then hangs (interruptibly):
>
> WARNING: CPU: 5 PID: 3517 at mm/highmem.c:622 kunmap_local_indexed+0x62/0xc9
> CPU: 5 UID: 0 PID: 3517 Comm: cp Not tainted 6.12.0-rc4 #2
> ...
> copy_page_from_iter_atomic+0xa6/0x5ec
> generic_perform_write+0xf6/0x1b4
> shmem_file_write_iter+0x54/0x67
>
> Fix copy_page_from_iter_atomic() by limiting it in that case
> (include/linux/skbuff.h skb_frag_must_loop() does similar).
>
> But going forward, perhaps CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP is too
> surprising, has outlived its usefulness, and should just be removed?

It has caught real problems and as long as we have highmem support, it
should stay IMO to provide test coverage.

Thanks,

        tglx
Christoph Hellwig Oct. 28, 2024, 8:55 a.m. UTC | #2
On Sun, Oct 27, 2024 at 03:23:23PM -0700, Hugh Dickins wrote:
> generic/077 on x86_32 CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y with highmem,
> on huge=always tmpfs, issues a warning and then hangs (interruptibly):

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Christian Brauner Oct. 28, 2024, 12:41 p.m. UTC | #3
On Sun, 27 Oct 2024 15:23:23 -0700, Hugh Dickins wrote:
> generic/077 on x86_32 CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y with highmem,
> on huge=always tmpfs, issues a warning and then hangs (interruptibly):
> 
> WARNING: CPU: 5 PID: 3517 at mm/highmem.c:622 kunmap_local_indexed+0x62/0xc9
> CPU: 5 UID: 0 PID: 3517 Comm: cp Not tainted 6.12.0-rc4 #2
> ...
> copy_page_from_iter_atomic+0xa6/0x5ec
> generic_perform_write+0xf6/0x1b4
> shmem_file_write_iter+0x54/0x67
> 
> [...]

Applied to the vfs.fixes branch of the vfs/vfs.git tree.
Patches in the vfs.fixes branch should appear in linux-next soon.

Please report any outstanding bugs that were missed during review in a
new review to the original patch series allowing us to drop it.

It's encouraged to provide Acked-bys and Reviewed-bys even though the
patch has now been applied. If possible patch trailers will be updated.

Note that commit hashes shown below are subject to change due to rebase,
trailer updates or similar. If in doubt, please check the listed branch.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
branch: vfs.fixes

[1/1] iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP
      https://git.kernel.org/vfs/vfs/c/c749d9b7ebbc
Matthew Wilcox Oct. 28, 2024, 2:04 p.m. UTC | #4
On Sun, Oct 27, 2024 at 03:23:23PM -0700, Hugh Dickins wrote:
> generic/077 on x86_32 CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y with highmem,
> on huge=always tmpfs, issues a warning and then hangs (interruptibly):

> +++ b/lib/iov_iter.c
> @@ -461,6 +461,8 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
>  		size_t bytes, struct iov_iter *i)
>  {
>  	size_t n, copied = 0;
> +	bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
> +			 PageHighMem(page);
>  
>  	if (!page_copy_sane(page, offset, bytes))
>  		return 0;
> @@ -471,7 +473,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
>  		char *p;
>  
>  		n = bytes - copied;
> -		if (PageHighMem(page)) {
> +		if (uses_kmap) {
>  			page += offset / PAGE_SIZE;
>  			offset %= PAGE_SIZE;
>  			n = min_t(size_t, n, PAGE_SIZE - offset);

Urgh.  I've done this same optimisation elsewhere.

memcpy_from_folio:
                if (folio_test_highmem(folio) &&
                    chunk > PAGE_SIZE - offset_in_page(offset))
                        chunk = PAGE_SIZE - offset_in_page(offset);

also memcpy_to_folio(), folio_zero_tail(), folio_fill_tail(),
memcpy_from_file_folio()

I think that means we need a new predicate.  I don't have a good name
yet.  folio_kmap_can_access_multiple_pages() is a bit too wordy.  Anyone
think of a good one?
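
One possible shape for such a predicate, with a purely illustrative
name (nothing here was settled in the thread):

        /* Name and placement are illustrative only: true when a kmap of
         * this folio cannot be assumed to span more than one page, so
         * callers must map and copy one page at a time.
         */
        static inline bool folio_needs_per_page_kmap(const struct folio *folio)
        {
                return IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
                       folio_test_highmem(folio);
        }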
Linus Torvalds Oct. 28, 2024, 6:50 p.m. UTC | #5
On Sun, 27 Oct 2024 at 22:41, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> It has caught real problems and as long as we have highmem support, it
> should stay IMO to provide test coverage.

Yeah. I'd love to get rid of highmem support entirely, and that day
*will* come. Old 32-bit architectures that do stupid things can just
deal with old kernels, we need to leave that braindamage behind some
day.

But as long as we support it, we should at least also have the debug
support for it on sane hardware.

Of course, maybe we should just make PageHighMem() always return true
for CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP, but I suspect that would cause
more pain than is worth it.
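
A rough sketch of what that could look like, assuming PageHighMem()
keeps its current is_highmem_idx() check (not an actual proposal):

        /* Sketch only: force the per-page kmap path whenever the debug
         * option is enabled.
         */
        static __always_inline bool PageHighMem(const struct page *page)
        {
                if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP))
                        return true;
                return is_highmem_idx(page_zonenum(page));
        }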

But yeah, I do think we should seriously start thinking about just
getting rid of HIGHMEM.

               Linus

Patch

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 1abb32c0da50..94051b83fdd8 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -461,6 +461,8 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		size_t bytes, struct iov_iter *i)
 {
 	size_t n, copied = 0;
+	bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
+			 PageHighMem(page);
 
 	if (!page_copy_sane(page, offset, bytes))
 		return 0;
@@ -471,7 +473,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		char *p;
 
 		n = bytes - copied;
-		if (PageHighMem(page)) {
+		if (uses_kmap) {
 			page += offset / PAGE_SIZE;
 			offset %= PAGE_SIZE;
 			n = min_t(size_t, n, PAGE_SIZE - offset);
@@ -482,7 +484,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		kunmap_atomic(p);
 		copied += n;
 		offset += n;
-	} while (PageHighMem(page) && copied != bytes && n > 0);
+	} while (uses_kmap && copied != bytes && n > 0);
 
 	return copied;
 }