Message ID | 20230807152657.1692414-1-m.szyprowski@samsung.com (mailing list archive)
State      | New, archived
Series     | arm: dma-mapping: fix potential endless loop in __dma_page_dev_to_cpu()
On Mon, Aug 07, 2023 at 05:26:57PM +0200, Marek Szyprowski wrote:
> It is possible that the folio_size() of the next folio returns zero, so

What?  How can folio_size() return zero?

	return PAGE_SIZE << folio_order(folio);

It is a minimum of PAGE_SIZE.

> avoid looping with 'left' equal to zero in the D-cache cleaning loop.
>
> This fixes the following endless loop, reported as an RCU stall:
> --->8---
> rcu: INFO: rcu_sched self-detected stall on CPU
> rcu: 0-....: (27320 ticks this GP) idle=e414/1/0x40000002 softirq=36/36 fqs=13044
> rcu: (t=27385 jiffies g=-1067 q=34 ncpus=8)
> CPU: 0 PID: 93 Comm: kworker/0:1H Not tainted 6.5.0-rc5-next-20230807 #6981
> Hardware name: Samsung Exynos (Flattened Device Tree)
> Workqueue: mmc_complete mmc_blk_mq_complete_work
> PC is at _set_bit+0x28/0x44
> LR is at __dma_page_dev_to_cpu+0xdc/0x170
> ..
>  _set_bit from __dma_page_dev_to_cpu+0xdc/0x170
>  __dma_page_dev_to_cpu from dma_direct_unmap_sg+0x100/0x130
>  dma_direct_unmap_sg from dw_mci_post_req+0x68/0x6c
>  dw_mci_post_req from mmc_blk_mq_post_req+0x34/0x100

I don't know what you've actually hit here, but the explanation is wrong.
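If folio_size() did somehow return zero here, the loop shape alone would explain the stall: 'left >= 0' stays true and 'left -= 0' makes no progress. A minimal standalone model of that behaviour (stub_folio_size() and the other names are made-up stand-ins, not kernel code):

	#include <stdio.h>
	#include <sys/types.h>	/* ssize_t */

	/* Stand-in for folio_size(); pretend the folio after the sync
	 * region reports a size of zero, as the patch description claims. */
	static size_t stub_folio_size(void)
	{
		return 0;
	}

	int main(void)
	{
		ssize_t left = 0;	/* earlier folios consumed the region exactly */
		int spins = 0;

		/* Same shape as the loop in __dma_page_dev_to_cpu(): once
		 * folio_size() yields 0, the condition 0 >= 0 holds forever. */
		while (left >= (ssize_t)stub_folio_size()) {
			left -= stub_folio_size();
			if (++spins > 3) {
				printf("no progress after %d iterations\n", spins);
				break;	/* the kernel loop has no such escape */
			}
		}
		return 0;
	}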
On Mon, Aug 07, 2023 at 05:26:57PM +0200, Marek Szyprowski wrote:
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 70cb7e63a9a5..02250106e5ed 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -718,7 +718,7 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
>  		folio = folio_next(folio);
>  	}
>
> -	while (left >= (ssize_t)folio_size(folio)) {
> +	while (left && left >= (ssize_t)folio_size(folio)) {
>  		set_bit(PG_dcache_clean, &folio->flags);
>  		left -= folio_size(folio);
>  		folio = folio_next(folio);

I've been thinking about this and I think this is the right fix for the
wrong reason.  I don't understand how it can produce the failure you
saw, but we shouldn't be calling folio_next() if left is zero, let alone
calling folio_size() on it.  So I'd rather see this fix:

	while (left >= (ssize_t)folio_size(folio)) {
		set_bit(PG_dcache_clean, &folio->flags);
		left -= folio_size(folio);
+		if (!left)
+			break;
		folio = folio_next(folio);
	}
On Mon, Aug 07, 2023 at 11:14:13PM +0100, Matthew Wilcox wrote:
> On Mon, Aug 07, 2023 at 05:26:57PM +0200, Marek Szyprowski wrote:
> > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> > index 70cb7e63a9a5..02250106e5ed 100644
> > --- a/arch/arm/mm/dma-mapping.c
> > +++ b/arch/arm/mm/dma-mapping.c
> > @@ -718,7 +718,7 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
> >  		folio = folio_next(folio);
> >  	}
> >
> > -	while (left >= (ssize_t)folio_size(folio)) {
> > +	while (left && left >= (ssize_t)folio_size(folio)) {
> >  		set_bit(PG_dcache_clean, &folio->flags);
> >  		left -= folio_size(folio);
> >  		folio = folio_next(folio);
>
> I've been thinking about this and I think this is the right fix for the
> wrong reason.  I don't understand how it can produce the failure you
> saw, but we shouldn't be calling folio_next() if left is zero, let alone
> calling folio_size() on it.  So I'd rather see this fix:
>
> 	while (left >= (ssize_t)folio_size(folio)) {
> 		set_bit(PG_dcache_clean, &folio->flags);
> 		left -= folio_size(folio);
> +		if (!left)
> +			break;

Given that set_bit() involves atomics, wouldn't it be better if this
had been written as:

	while (left >= folio_size(folio)) {
		left -= folio_size(folio);
		set_bit(PG_dcache_clean, &folio->flags);
		if (!left)
			break;

> 		folio = folio_next(folio);
> 	}

That likely means that folio_size() will only be evaluated once per
loop rather than twice. I may be wrong though, I didn't check the
generated code.

Also, I'm wondering what that ssize_t cast is doing there - "left"
is a size_t, which is unsigned. folio_size() returns a size_t, so is
also unsigned. Why convert folio_size() to a signed number to then be
compared with an unsigned number?

Or did "left" get converted to ssize_t along with the folio conversion?

Even if it did, how could "left" be negative (except through casting a
large positive number as "size" that in 2's complement would be
negative after casting to "left")?
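For reference, a small standalone sketch of what the ssize_t cast changes in the comparison (the variable names are invented; this only models the integer conversions):

	#include <stdio.h>
	#include <sys/types.h>	/* ssize_t */

	int main(void)
	{
		ssize_t left = -64;	/* could go negative after an offset adjustment */
		size_t fsize = 4096;	/* stands in for folio_size() */

		/* Without a cast the usual arithmetic conversions make the
		 * comparison unsigned: -64 converts to a huge size_t. */
		if (left >= fsize)
			printf("unsigned compare: -64 >= 4096 is \"true\"\n");

		/* Casting folio_size() to ssize_t keeps the comparison signed,
		 * so a negative 'left' terminates the loop. */
		if (left >= (ssize_t)fsize)
			printf("not reached\n");
		else
			printf("signed compare: loop exits\n");
		return 0;
	}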
Hi,

On 07.08.2023 18:23, Matthew Wilcox wrote:
> On Mon, Aug 07, 2023 at 05:26:57PM +0200, Marek Szyprowski wrote:
>> It is possible that the folio_size() of the next folio returns zero, so
> What?  How can folio_size() return zero?
>
> 	return PAGE_SIZE << folio_order(folio);
>
> It is a minimum of PAGE_SIZE.

Well, the folio_order() on that next folio returns 255, so folio_size()
overflows to zero. However, the main source of this issue is relying on
the properties of the folio beyond the requested sync region.

> ...

Best regards
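A standalone model of the overflow Marek describes, assuming (as quoted above) that folio_size() computes PAGE_SIZE << folio_order(folio); model_folio_size() is an invented name, not kernel code:

	#include <stdio.h>
	#include <stddef.h>

	#define PAGE_SIZE 4096UL

	/* If the "next folio" is really unrelated memory past the sync
	 * region, the order byte read from it can be garbage such as 255. */
	static size_t model_folio_size(unsigned int order)
	{
		/* A shift by >= 32 is undefined in ISO C; the 32-bit ARM
		 * LSL instruction uses only the bottom byte of a register
		 * shift amount and yields 0 for shifts of 32..255, which
		 * matches the observed folio_size() of zero. */
		if (order >= 32)
			return 0;	/* what the generated code effectively did */
		return PAGE_SIZE << order;
	}

	int main(void)
	{
		printf("order 0:   %zu\n", model_folio_size(0));	/* 4096 */
		printf("order 255: %zu\n", model_folio_size(255));	/* 0 */
		return 0;
	}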
On Mon, Aug 07, 2023 at 11:46:05PM +0100, Russell King (Oracle) wrote:
> On Mon, Aug 07, 2023 at 11:14:13PM +0100, Matthew Wilcox wrote:
> > On Mon, Aug 07, 2023 at 05:26:57PM +0200, Marek Szyprowski wrote:
> > > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> > > index 70cb7e63a9a5..02250106e5ed 100644
> > > --- a/arch/arm/mm/dma-mapping.c
> > > +++ b/arch/arm/mm/dma-mapping.c
> > > @@ -718,7 +718,7 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
> > >  		folio = folio_next(folio);
> > >  	}
> > >
> > > -	while (left >= (ssize_t)folio_size(folio)) {
> > > +	while (left && left >= (ssize_t)folio_size(folio)) {
> > >  		set_bit(PG_dcache_clean, &folio->flags);
> > >  		left -= folio_size(folio);
> > >  		folio = folio_next(folio);
> >
> > I've been thinking about this and I think this is the right fix for the
> > wrong reason.  I don't understand how it can produce the failure you
> > saw, but we shouldn't be calling folio_next() if left is zero, let alone
> > calling folio_size() on it.  So I'd rather see this fix:
> >
> > 	while (left >= (ssize_t)folio_size(folio)) {
> > 		set_bit(PG_dcache_clean, &folio->flags);
> > 		left -= folio_size(folio);
> > +		if (!left)
> > +			break;
>
> Given that set_bit() involves atomics, wouldn't it be better if this
> had been written as:
>
> 	while (left >= folio_size(folio)) {
> 		left -= folio_size(folio);
> 		set_bit(PG_dcache_clean, &folio->flags);
> 		if (!left)
> 			break;
>
> > 		folio = folio_next(folio);
> > 	}
>
> That likely means that folio_size() will only be evaluated once per
> loop rather than twice. I may be wrong though, I didn't check the
> generated code.

I'd really like it if gcc did notice that folio_size() could be CSE'd.
Unfortunately, I don't think it can.

+long rmk(struct folio *folio, long size)
+{
+	while (size >= folio_size(folio)) {
+		size -= folio_size(folio);
+		folio_set_workingset(folio);
+		if (size < 0)
+			return size;
+		folio = folio_next(folio);
+	}
+
+	return size;
+}

000039d4 <rmk>:
    39d4:	e92d41f0 	push	{r4, r5, r6, r7, r8, lr}
    39d8:	e1a04000 	mov	r4, r0
    39dc:	e1a05001 	mov	r5, r1
    39e0:	e3a06a01 	mov	r6, #4096	@ 0x1000
    39e4:	e3a07020 	mov	r7, #32
    39e8:	ea000010 	b	3a30 <rmk+0x5c>
    39ec:	e5943000 	ldr	r3, [r4]
    39f0:	e1a01004 	mov	r1, r4
    39f4:	e3a00009 	mov	r0, #9
    39f8:	e3130040 	tst	r3, #64	@ 0x40
    39fc:	03a03a01 	moveq	r3, #4096	@ 0x1000
    3a00:	15d43020 	ldrbne	r3, [r4, #32]
    3a04:	11a03316 	lslne	r3, r6, r3
    3a08:	e0455003 	sub	r5, r5, r3
    3a0c:	ebfffffe 	bl	0 <_set_bit>
			3a0c: R_ARM_CALL	_set_bit
    3a10:	e3550000 	cmp	r5, #0
    3a14:	ba00000c 	blt	3a4c <rmk+0x78>
    3a18:	e5943000 	ldr	r3, [r4]
    3a1c:	e3130040 	tst	r3, #64	@ 0x40
    3a20:	03a03020 	moveq	r3, #32
    3a24:	15d43020 	ldrbne	r3, [r4, #32]
    3a28:	11a03317 	lslne	r3, r7, r3
    3a2c:	e0844003 	add	r4, r4, r3
    3a30:	e5943000 	ldr	r3, [r4]
    3a34:	e3130040 	tst	r3, #64	@ 0x40
    3a38:	03a03a01 	moveq	r3, #4096	@ 0x1000
    3a3c:	15d43020 	ldrbne	r3, [r4, #32]
    3a40:	11a03316 	lslne	r3, r6, r3
    3a44:	e1550003 	cmp	r5, r3
    3a48:	2affffe7 	bcs	39ec <rmk+0x18>
    3a4c:	e1a00005 	mov	r0, r5
    3a50:	e8bd81f0 	pop	{r4, r5, r6, r7, r8, pc}

Certainly seems to me like it's calculating folio_size() twice.  And
actually it's redone the ordering to put the calculation after the
call to set_bit!

> Also, I'm wondering what that ssize_t cast is doing there - "left"
> is a size_t, which is unsigned. folio_size() returns a size_t, so
> is also unsigned. Why convert folio_size() to a signed number to
> then be compared with an unsigned number?
Because earlier we did:

+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}

so left might now be negative.  If we did an unsigned comparison, we'd
go round this loop.  Er.  And the fix from Marek didn't accommodate
this problem.  So we need a fix-fix:

	if (offset) {
		left -= folio_size(folio) - offset;
+		if (left <= 0)
+			return;
		folio = folio_next(folio);
	}

Marek, can you do the honours here?

> Or did "left" get converted to ssize_t along with the folio
> conversion?
>
> Even if it did, how could "left" be negative (except through casting
> a large positive number as "size" that in 2's complement would be
> negative after casting to "left")?
>
> --
> RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
> FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!
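Pulling the thread together, the repaired function entry and loop might look like this (a sketch only: it combines Matthew's two hunks with Russell's point about evaluating folio_size() once, and is not Marek's actual follow-up patch):

	if (offset) {
		left -= folio_size(folio) - offset;
		if (left <= 0)
			return;		/* region ended inside the first folio */
		folio = folio_next(folio);
	}

	for (;;) {
		size_t fsize = folio_size(folio);	/* evaluated once per folio */

		if (left < (ssize_t)fsize)
			break;		/* partial folio is not marked clean */
		set_bit(PG_dcache_clean, &folio->flags);
		left -= fsize;
		if (!left)
			break;		/* never inspect the folio past the region */
		folio = folio_next(folio);
	}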
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 70cb7e63a9a5..02250106e5ed 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -718,7 +718,7 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 		folio = folio_next(folio);
 	}
 
-	while (left >= (ssize_t)folio_size(folio)) {
+	while (left && left >= (ssize_t)folio_size(folio)) {
 		set_bit(PG_dcache_clean, &folio->flags);
 		left -= folio_size(folio);
 		folio = folio_next(folio);
It is possible that the folio_size() of the next folio returns zero, so
avoid looping with 'left' equal to zero in the D-cache cleaning loop.

This fixes the following endless loop, reported as an RCU stall:

--->8---
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (27320 ticks this GP) idle=e414/1/0x40000002 softirq=36/36 fqs=13044
rcu: (t=27385 jiffies g=-1067 q=34 ncpus=8)
CPU: 0 PID: 93 Comm: kworker/0:1H Not tainted 6.5.0-rc5-next-20230807 #6981
Hardware name: Samsung Exynos (Flattened Device Tree)
Workqueue: mmc_complete mmc_blk_mq_complete_work
PC is at _set_bit+0x28/0x44
LR is at __dma_page_dev_to_cpu+0xdc/0x170
..
 _set_bit from __dma_page_dev_to_cpu+0xdc/0x170
 __dma_page_dev_to_cpu from dma_direct_unmap_sg+0x100/0x130
 dma_direct_unmap_sg from dw_mci_post_req+0x68/0x6c
 dw_mci_post_req from mmc_blk_mq_post_req+0x34/0x100
 mmc_blk_mq_post_req from mmc_blk_mq_complete_work+0x50/0x60
 mmc_blk_mq_complete_work from process_one_work+0x20c/0x4d8
 process_one_work from worker_thread+0x58/0x54c
 worker_thread from kthread+0xe0/0xfc
 kthread from ret_from_fork+0x14/0x2c
--->8---

Fixes: cc24e9c0895c ("arm: implement the new page table range API")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 arch/arm/mm/dma-mapping.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)