[2/6] mm/page_io: use a folio in sio_read_complete()

Message ID 20230717132602.2202147-3-zhangpeng362@huawei.com (mailing list archive)
State New
Series Convert several functions in page_io.c to use a folio

Commit Message

Peng Zhang July 17, 2023, 1:25 p.m. UTC
From: ZhangPeng <zhangpeng362@huawei.com>

Saves three implicit calls to compound_head().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

Comments

Matthew Wilcox July 17, 2023, 1:40 p.m. UTC | #1
On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
> +++ b/mm/page_io.c
> @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>  
>  	if (ret == sio->len) {
>  		for (p = 0; p < sio->pages; p++) {
> -			struct page *page = sio->bvec[p].bv_page;
> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>  
> -			SetPageUptodate(page);
> -			unlock_page(page);
> +			folio_mark_uptodate(folio);
> +			folio_unlock(folio);
>  		}

I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
pages into a single entry, so you need to use a real iterator like
bio_for_each_folio_all() to extract individual pages from a bvec.
Maybe the sio bvec is constructed inefficiently.

I think Kent had some bvec folio iterators in progress?
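
For reference, a rough sketch of the per-folio iteration being described,
assuming a bio-based completion path (the "bio" below is hypothetical;
sio_read_complete() only has the plain bvec array):

	/* hypothetical sketch, not part of this patch */
	struct folio_iter fi;

	bio_for_each_folio_all(fi, bio) {
		/* fi.folio may span several coalesced pages */
		folio_mark_uptodate(fi.folio);
		folio_unlock(fi.folio);
	}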

>  		count_vm_events(PSWPIN, sio->pages);
>  	} else {
>  		for (p = 0; p < sio->pages; p++) {
> -			struct page *page = sio->bvec[p].bv_page;
> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>  
> -			SetPageError(page);
> -			ClearPageUptodate(page);
> -			unlock_page(page);
> +			folio_set_error(folio);
> +			folio_clear_uptodate(folio);
> +			folio_unlock(folio);

Similar questions to the last patch -- who checks the error flag on this
page/folio, and isn't the folio already !uptodate?

>  		}
>  		pr_alert_ratelimited("Read-error on swap-device\n");
>  	}
> -- 
> 2.25.1
>
Peng Zhang July 18, 2023, 12:58 p.m. UTC | #2
On 2023/7/17 21:40, Matthew Wilcox wrote:

> On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
>> +++ b/mm/page_io.c
>> @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>>   
>>   	if (ret == sio->len) {
>>   		for (p = 0; p < sio->pages; p++) {
>> -			struct page *page = sio->bvec[p].bv_page;
>> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>>   
>> -			SetPageUptodate(page);
>> -			unlock_page(page);
>> +			folio_mark_uptodate(folio);
>> +			folio_unlock(folio);
>>   		}
> I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
> pages into a single entry, so you need to use a real iterator like
> bio_for_each_folio_all() to extract individual pages from a bvec.
> Maybe the sio bvec is constructed inefficiently.
>
> I think Kent had some bvec folio iterators in progress?

I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.
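
(Illustrative only, roughly what that conversion looks like; not the actual
v2 hunk. Both helpers live in include/linux/bio.h:)

	-	struct page *page = bio_first_page_all(bio);
	+	struct folio *folio = bio_first_folio_all(bio);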

>>   		count_vm_events(PSWPIN, sio->pages);
>>   	} else {
>>   		for (p = 0; p < sio->pages; p++) {
>> -			struct page *page = sio->bvec[p].bv_page;
>> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>>   
>> -			SetPageError(page);
>> -			ClearPageUptodate(page);
>> -			unlock_page(page);
>> +			folio_set_error(folio);
>> +			folio_clear_uptodate(folio);
>> +			folio_unlock(folio);
> Similar questions to the last patch -- who checks the error flag on this
> page/folio, and isn't the folio already !uptodate?

Yes, the folio is already !uptodate. I'll drop this line.
Maybe wait_dev_supers() in fs/btrfs/disk-io.c checks PageError()?
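
A sketch of how the error loop would then look in a v2, assuming only the
redundant folio_clear_uptodate() is dropped (whether folio_set_error()
should stay is the open question above):

	for (p = 0; p < sio->pages; p++) {
		struct folio *folio = page_folio(sio->bvec[p].bv_page);

		folio_set_error(folio);
		folio_unlock(folio);
	}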

>>   		}
>>   		pr_alert_ratelimited("Read-error on swap-device\n");
>>   	}
>> -- 
>> 2.25.1
>>
>
Matthew Wilcox July 18, 2023, 4:21 p.m. UTC | #3
On Tue, Jul 18, 2023 at 08:58:17PM +0800, zhangpeng (AS) wrote:
> On 2023/7/17 21:40, Matthew Wilcox wrote:
> 
> > On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
> > > +++ b/mm/page_io.c
> > > @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> > >   	if (ret == sio->len) {
> > >   		for (p = 0; p < sio->pages; p++) {
> > > -			struct page *page = sio->bvec[p].bv_page;
> > > +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
> > > -			SetPageUptodate(page);
> > > -			unlock_page(page);
> > > +			folio_mark_uptodate(folio);
> > > +			folio_unlock(folio);
> > >   		}
> > I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
> > pages into a single entry, so you need to use a real iterator like
> > bio_for_each_folio_all() to extract individual pages from a bvec.
> > Maybe the sio bvec is constructed inefficiently.
> > 
> > I think Kent had some bvec folio iterators in progress?
> 
> I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.

That isn't my point at all.  What I'm saying is that when you call a
function like bio_add_folio(), if @folio is physically adjacent to the
immediately prior folio already in the bio, it will extend the bv_len
instead of adding the new folio to the bvec.  Maybe there's nothing like
that for sio.
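
As an illustration of that coalescing (hypothetical fragment; folio_a and
folio_b are placeholder names, not from this patch):

	/*
	 * If folio_b is physically contiguous with folio_a, the second
	 * call extends bv_len of the existing bvec entry rather than
	 * adding a new one, so bvec[p] no longer maps 1:1 to folios.
	 */
	if (!bio_add_folio(bio, folio_a, folio_size(folio_a), 0) ||
	    !bio_add_folio(bio, folio_b, folio_size(folio_b), 0))
		return;	/* bio was full */
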
Peng Zhang July 19, 2023, 7:47 a.m. UTC | #4
On 2023/7/19 0:21, Matthew Wilcox wrote:

> On Tue, Jul 18, 2023 at 08:58:17PM +0800, zhangpeng (AS) wrote:
>> On 2023/7/17 21:40, Matthew Wilcox wrote:
>>
>>> On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
>>>> +++ b/mm/page_io.c
>>>> @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>>>>    	if (ret == sio->len) {
>>>>    		for (p = 0; p < sio->pages; p++) {
>>>> -			struct page *page = sio->bvec[p].bv_page;
>>>> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>>>> -			SetPageUptodate(page);
>>>> -			unlock_page(page);
>>>> +			folio_mark_uptodate(folio);
>>>> +			folio_unlock(folio);
>>>>    		}
>>> I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
>>> pages into a single entry, so you need to use a real iterator like
>>> bio_for_each_folio_all() to extract individual pages from a bvec.
>>> Maybe the sio bvec is constructed inefficiently.
>>>
>>> I think Kent had some bvec folio iterators in progress?
>> I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.
> That isn't my point at all.  What I'm saying is that when you call a
> function like bio_add_folio(), if @folio is physically adjacent to the
> immediately prior folio already in the bio, it will extend the bv_len
> instead of adding the new folio to the bvec.  Maybe there's nothing like
> that for sio.

Thanks for your reply! I got it.
Christoph Hellwig July 20, 2023, 5:22 a.m. UTC | #5
On Mon, Jul 17, 2023 at 02:40:24PM +0100, Matthew Wilcox wrote:
> >  		for (p = 0; p < sio->pages; p++) {
> > -			struct page *page = sio->bvec[p].bv_page;
> > +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
> >  
> > -			SetPageUptodate(page);
> > -			unlock_page(page);
> > +			folio_mark_uptodate(folio);
> > +			folio_unlock(folio);
> >  		}
> 
> I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
> pages into a single entry, so you need to use a real iterator like
> bio_for_each_folio_all() to extract individual pages from a bvec.
> Maybe the sio bvec is constructed inefficiently.

sio_read_complete is a kiocb.ki_complete handler.  There is no
coalesce going on for ITER_BVEC iov_iters, which share nothing
but the underlying data structure with the block I/O path.
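
For context, a rough sketch of how such an sio is filled, one page per bvec
entry, before being handed to the filesystem through an ITER_BVEC iov_iter
(simplified; see the sio setup code in mm/page_io.c for the real thing):

	struct iov_iter from;

	/* one bvec entry per page, so sio->bvec[p] maps 1:1 to pages */
	bvec_set_page(&sio->bvec[sio->pages], page, thp_size(page), 0);
	sio->len += thp_size(page);
	sio->pages++;

	/* at submission time */
	iov_iter_bvec(&from, ITER_DEST, sio->bvec, sio->pages, sio->len);
	mapping->a_ops->swap_rw(&sio->iocb, &from);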

Patch

diff --git a/mm/page_io.c b/mm/page_io.c
index ebf431e5f538..438d0c7c2194 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -406,19 +406,19 @@  static void sio_read_complete(struct kiocb *iocb, long ret)
 
 	if (ret == sio->len) {
 		for (p = 0; p < sio->pages; p++) {
-			struct page *page = sio->bvec[p].bv_page;
+			struct folio *folio = page_folio(sio->bvec[p].bv_page);
 
-			SetPageUptodate(page);
-			unlock_page(page);
+			folio_mark_uptodate(folio);
+			folio_unlock(folio);
 		}
 		count_vm_events(PSWPIN, sio->pages);
 	} else {
 		for (p = 0; p < sio->pages; p++) {
-			struct page *page = sio->bvec[p].bv_page;
+			struct folio *folio = page_folio(sio->bvec[p].bv_page);
 
-			SetPageError(page);
-			ClearPageUptodate(page);
-			unlock_page(page);
+			folio_set_error(folio);
+			folio_clear_uptodate(folio);
+			folio_unlock(folio);
 		}
 		pr_alert_ratelimited("Read-error on swap-device\n");
 	}