[09/15] ceph: Use a folio in ceph_filemap_fault()

Message ID 20230825201225.348148-10-willy@infradead.org (mailing list archive)
State New, archived
Series Many folio conversions for ceph

Commit Message

Matthew Wilcox (Oracle) Aug. 25, 2023, 8:12 p.m. UTC
This leg of the function is concerned with inline data, so we know it's
at index 0 and contains only a single page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ceph/addr.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

Comments

Matthew Wilcox (Oracle) Aug. 26, 2023, 3 a.m. UTC | #1
On Fri, Aug 25, 2023 at 09:12:19PM +0100, Matthew Wilcox (Oracle) wrote:
> +++ b/fs/ceph/addr.c
> @@ -1608,29 +1608,30 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
>  		ret = VM_FAULT_SIGBUS;
>  	} else {
>  		struct address_space *mapping = inode->i_mapping;
> -		struct page *page;
> +		struct folio *folio;
>  
>  		filemap_invalidate_lock_shared(mapping);
> -		page = find_or_create_page(mapping, 0,
> +		folio = __filemap_get_folio(mapping, 0,
> +				FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
>  				mapping_gfp_constraint(mapping, ~__GFP_FS));
> -		if (!page) {
> +		if (!folio) {

This needs to be "if (IS_ERR(folio))".  Meant to fix that but forgot.
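Since __filemap_get_folio() returns an ERR_PTR instead of NULL on
failure, the corrected check would look roughly like this (a sketch of
the intended fix, not the final patch):

	folio = __filemap_get_folio(mapping, 0,
			FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
			mapping_gfp_constraint(mapping, ~__GFP_FS));
	if (IS_ERR(folio)) {
		/* e.g. ERR_PTR(-ENOMEM) when allocation fails */
		ret = VM_FAULT_OOM;
		goto out_inline;
	}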
Xiubo Li Aug. 28, 2023, 1:19 a.m. UTC | #2
On 8/26/23 11:00, Matthew Wilcox wrote:
> On Fri, Aug 25, 2023 at 09:12:19PM +0100, Matthew Wilcox (Oracle) wrote:
>> +++ b/fs/ceph/addr.c
>> @@ -1608,29 +1608,30 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
>>   		ret = VM_FAULT_SIGBUS;
>>   	} else {
>>   		struct address_space *mapping = inode->i_mapping;
>> -		struct page *page;
>> +		struct folio *folio;
>>   
>>   		filemap_invalidate_lock_shared(mapping);
>> -		page = find_or_create_page(mapping, 0,
>> +		folio = __filemap_get_folio(mapping, 0,
>> +				FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
>>   				mapping_gfp_constraint(mapping, ~__GFP_FS));
>> -		if (!page) {
>> +		if (!folio) {
> This needs to be "if (IS_ERR(folio))".  Meant to fix that but forgot.
>
Hi Matthew,

Next time please rebase onto the latest upstream ceph-client
'testing' branch. We need to test this series with the teuthology
QA suite, which runs against the 'testing' branch.

Thanks

- Xiubo
Jeff Layton Aug. 29, 2023, 11:55 a.m. UTC | #3
On Mon, 2023-08-28 at 09:19 +0800, Xiubo Li wrote:
> On 8/26/23 11:00, Matthew Wilcox wrote:
> > On Fri, Aug 25, 2023 at 09:12:19PM +0100, Matthew Wilcox (Oracle) wrote:
> > > +++ b/fs/ceph/addr.c
> > > @@ -1608,29 +1608,30 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
> > >   		ret = VM_FAULT_SIGBUS;
> > >   	} else {
> > >   		struct address_space *mapping = inode->i_mapping;
> > > -		struct page *page;
> > > +		struct folio *folio;
> > >   
> > >   		filemap_invalidate_lock_shared(mapping);
> > > -		page = find_or_create_page(mapping, 0,
> > > +		folio = __filemap_get_folio(mapping, 0,
> > > +				FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
> > >   				mapping_gfp_constraint(mapping, ~__GFP_FS));
> > > -		if (!page) {
> > > +		if (!folio) {
> > This needs to be "if (IS_ERR(folio))".  Meant to fix that but forgot.
> > 
> Hi Matthew,
> 
> Next time please rebase onto the latest upstream ceph-client
> 'testing' branch. We need to test this series with the teuthology
> QA suite, which runs against the 'testing' branch.
> 

People working on wide-scale changes to the kernel really shouldn't have
to go hunting down random branches to base their changes on. That's the
purpose of linux-next.

This is an ongoing problem with ceph maintenance -- patches sit in the
"testing" branch, which doesn't feed into linux-next. Anyone working
against linux-next on patches that touch ceph runs the risk of
developing against outdated code.

The rationale for this (at least at one time) was a fear of breaking
linux-next, but that is its purpose. If there are problems, we want to
know about them early!

As long as you don't introduce build breaks, anything you shovel into
next is unlikely to be problematic. There aren't that many people
doing ceph testing with linux-next, so the risk of breaking things is
pretty low, at least with patches that only touch ceph code. You do need
to be a bit more careful with patches that touch common code, but those
are pretty rare in the ceph tree.

Please change this!
Matthew Wilcox (Oracle) Aug. 29, 2023, 1:30 p.m. UTC | #4
On Tue, Aug 29, 2023 at 07:55:01AM -0400, Jeff Layton wrote:
> On Mon, 2023-08-28 at 09:19 +0800, Xiubo Li wrote:
> > Next time please rebase onto the latest upstream ceph-client
> > 'testing' branch. We need to test this series with the teuthology
> > QA suite, which runs against the 'testing' branch.
> 
> People working on wide-scale changes to the kernel really shouldn't have
> to go hunting down random branches to base their changes on. That's the
> purpose of linux-next.

Yes.  As I said last time this came up
https://lore.kernel.org/linux-fsdevel/ZH94oBBFct9b9g3z@casper.infradead.org/

it's not reasonable for me to track down every filesystem's private
git tree.  I'm happy to re-do these patches against linux-next in a
week or two, but I'm not going to start working against your ceph tree.
I'm not a Ceph developer, I'm a Linux developer.  I work against Linus'
tree or Stephen's tree.
Ilya Dryomov Aug. 30, 2023, 10:44 a.m. UTC | #5
On Tue, Aug 29, 2023 at 3:30 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Aug 29, 2023 at 07:55:01AM -0400, Jeff Layton wrote:
> > On Mon, 2023-08-28 at 09:19 +0800, Xiubo Li wrote:
> > > Next time please rebase onto the latest upstream ceph-client
> > > 'testing' branch. We need to test this series with the teuthology
> > > QA suite, which runs against the 'testing' branch.
> >
> > People working on wide-scale changes to the kernel really shouldn't have
> > to go hunting down random branches to base their changes on. That's the
> > purpose of linux-next.
>
> Yes.  As I said last time this came up
> https://lore.kernel.org/linux-fsdevel/ZH94oBBFct9b9g3z@casper.infradead.org/
>
> it's not reasonable for me to track down every filesystem's private
> git tree.  I'm happy to re-do these patches against linux-next in a
> week or two, but I'm not going to start working against your ceph tree.
> I'm not a Ceph developer, I'm a Linux developer.  I work against Linus'
> tree or Stephen's tree.

Agreed.  Definitely not reasonable; it's the CephFS team's job to sort
out conflicts when applying patches to the testing branch.

The problem is that the testing branch is also carrying a bunch of "DO
NOT MERGE" fail-fast and/or debugging patches that aren't suitable for
linux-next.  The corollary of that is that we end up testing something
slightly different in our CI.  Xiubo, please review that list and let's
try to get it down to a bare minimum.

Thanks,

                Ilya
Xiubo Li Aug. 31, 2023, 3:52 a.m. UTC | #6
On 8/30/23 18:44, Ilya Dryomov wrote:
> On Tue, Aug 29, 2023 at 3:30 PM Matthew Wilcox <willy@infradead.org> wrote:
>> On Tue, Aug 29, 2023 at 07:55:01AM -0400, Jeff Layton wrote:
>>> On Mon, 2023-08-28 at 09:19 +0800, Xiubo Li wrote:
>>>> Next time please rebase onto the latest upstream ceph-client
>>>> 'testing' branch. We need to test this series with the teuthology
>>>> QA suite, which runs against the 'testing' branch.
>>> People working on wide-scale changes to the kernel really shouldn't have
>>> to go hunting down random branches to base their changes on. That's the
>>> purpose of linux-next.
>> Yes.  As I said last time this came up
>> https://lore.kernel.org/linux-fsdevel/ZH94oBBFct9b9g3z@casper.infradead.org/
>>
>> it's not reasonable for me to track down every filesystem's private
>> git tree.  I'm happy to re-do these patches against linux-next in a
>> week or two, but I'm not going to start working against your ceph tree.
>> I'm not a Ceph developer, I'm a Linux developer.  I work against Linus'
>> tree or Stephen's tree.
> Agreed.  Definitely not reasonable; it's the CephFS team's job to sort
> out conflicts when applying patches to the testing branch.
>
> The problem is that the testing branch is also carrying a bunch of "DO
> NOT MERGE" fail-fast and/or debugging patches that aren't suitable for
> linux-next.  The corollary of that is that we end up testing something
> slightly different in our CI.  Xiubo, please review that list and let's
> try to get it down to a bare minimum.

Sure. Thanks!

- Xiubo


> Thanks,
>
>                  Ilya
>

Patch

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 765b37db2729..1812c3e6e64f 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1608,29 +1608,30 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_SIGBUS;
 	} else {
 		struct address_space *mapping = inode->i_mapping;
-		struct page *page;
+		struct folio *folio;
 
 		filemap_invalidate_lock_shared(mapping);
-		page = find_or_create_page(mapping, 0,
+		folio = __filemap_get_folio(mapping, 0,
+				FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
 				mapping_gfp_constraint(mapping, ~__GFP_FS));
-		if (!page) {
+		if (!folio) {
 			ret = VM_FAULT_OOM;
 			goto out_inline;
 		}
-		err = __ceph_do_getattr(inode, page,
+		err = __ceph_do_getattr(inode, &folio->page,
 					 CEPH_STAT_CAP_INLINE_DATA, true);
 		if (err < 0 || off >= i_size_read(inode)) {
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			ret = vmf_error(err);
 			goto out_inline;
 		}
 		if (err < PAGE_SIZE)
-			zero_user_segment(page, err, PAGE_SIZE);
+			folio_zero_segment(folio, err, folio_size(folio));
 		else
-			flush_dcache_page(page);
-		SetPageUptodate(page);
-		vmf->page = page;
+			flush_dcache_folio(folio);
+		folio_mark_uptodate(folio);
+		vmf->page = folio_page(folio, 0);
 		ret = VM_FAULT_MAJOR | VM_FAULT_LOCKED;
 out_inline:
 		filemap_invalidate_unlock_shared(mapping);