Message ID | 20240515125136.3714580-3-libaokun@huaweicloud.com (mailing list archive) |
---|---|
State | Superseded |
Series | cachefiles: some bugfixes for clean object/send req/poll |
On Wed, 2024-05-15 at 20:51 +0800, libaokun@huaweicloud.com wrote:
> From: Baokun Li <libaokun1@huawei.com>
>
> Because after an object is dropped, requests for that object are
> useless, flush them to avoid causing other problems.
>
> This prepares for the later addition of cancel_work_sync(). After the
> reopen requests is generated, flush it to avoid cancel_work_sync()
> blocking by waiting for daemon to complete the reopen requests.
>
> Signed-off-by: Baokun Li <libaokun1@huawei.com>
> ---
>  fs/cachefiles/ondemand.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
> index 73da4d4eaa9b..d24bff43499b 100644
> --- a/fs/cachefiles/ondemand.c
> +++ b/fs/cachefiles/ondemand.c
> @@ -564,12 +564,31 @@ int cachefiles_ondemand_init_object(struct cachefiles_object *object)
>
>  void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
>  {
> +	unsigned long index;
> +	struct cachefiles_req *req;
> +	struct cachefiles_cache *cache;
> +
>  	if (!object->ondemand)
>  		return;
>
>  	cachefiles_ondemand_send_req(object, CACHEFILES_OP_CLOSE, 0,
>  			cachefiles_ondemand_init_close_req, NULL);
> +
> +	if (!object->ondemand->ondemand_id)
> +		return;
> +
> +	/* Flush all requests for the object that is being dropped. */

I wouldn't call this a "Flush". In the context of writeback, that
usually means that we're writing out pages now in order to do something
else. In this case, it looks like you're more canceling these requests
since you're marking them with an error and declaring them complete.

> +	cache = object->volume->cache;
> +	xa_lock(&cache->reqs);
>  	cachefiles_ondemand_set_object_dropping(object);
> +	xa_for_each(&cache->reqs, index, req) {
> +		if (req->object == object) {
> +			req->error = -EIO;
> +			complete(&req->done);
> +			__xa_erase(&cache->reqs, index);
> +		}
> +	}
> +	xa_unlock(&cache->reqs);
>  }
>
>  int cachefiles_ondemand_init_obj_info(struct cachefiles_object *object,
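[Editor's note: for readers outside the thread, the scenario the commit message alludes to can be sketched as follows: a work item queues an on-demand request and then blocks in wait_for_completion() until the user-space daemon answers it, so a later cancel_work_sync() would wait on that work item for as long as the request stays unanswered. The snippet below is only an illustration of that pattern under those assumptions, not cachefiles code; the demo_* names are invented.]

```c
#include <linux/completion.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for a cachefiles-style request. */
struct demo_req {
	struct completion done;	/* signalled when the daemon (or cleanup) answers */
	int error;		/* result reported back to the submitter */
};

/*
 * Work function that submits a request and waits for an answer.  While it
 * sits in wait_for_completion(), cancel_work_sync() on this work item will
 * block, which is why pending requests must be completed (with an error)
 * before the object is torn down.
 */
static void demo_reopen_work(struct work_struct *work)
{
	struct demo_req *req = kzalloc(sizeof(*req), GFP_KERNEL);

	if (!req)
		return;
	init_completion(&req->done);
	/* ... publish req somewhere the daemon or the cleanup path can see it ... */
	wait_for_completion(&req->done);
	/* req->error now holds the outcome, e.g. -EIO if it was cancelled. */
	kfree(req);
}
```

[With that picture, completing every pending request with an error before calling cancel_work_sync() appears to be what lets the work item run to completion instead of stalling the cleanup path.]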
On 2024/6/27 19:01, Jeff Layton wrote:
> On Wed, 2024-05-15 at 20:51 +0800, libaokun@huaweicloud.com wrote:
>> From: Baokun Li <libaokun1@huawei.com>
>>
>> Because after an object is dropped, requests for that object are
>> useless, flush them to avoid causing other problems.
>>
>> This prepares for the later addition of cancel_work_sync(). After the
>> reopen requests is generated, flush it to avoid cancel_work_sync()
>> blocking by waiting for daemon to complete the reopen requests.
>>
>> Signed-off-by: Baokun Li <libaokun1@huawei.com>
>> ---
>> [...]
>> +	/* Flush all requests for the object that is being dropped. */
> I wouldn't call this a "Flush". In the context of writeback, that
> usually means that we're writing out pages now in order to do something
> else. In this case, it looks like you're more canceling these requests
> since you're marking them with an error and declaring them complete.
Makes sense, I'll update 'flush' to 'cancel' in the comment and subject.

I am not a native speaker of English, so some of the expressions may
not be accurate, thank you for correcting me.

Thanks,
Baokun
On Thu, Jun 27, 2024 at 07:20:16PM GMT, Baokun Li wrote:
> On 2024/6/27 19:01, Jeff Layton wrote:
> > On Wed, 2024-05-15 at 20:51 +0800, libaokun@huaweicloud.com wrote:
> > > [...]
> > I wouldn't call this a "Flush". In the context of writeback, that
> > usually means that we're writing out pages now in order to do something
> > else. In this case, it looks like you're more canceling these requests
> > since you're marking them with an error and declaring them complete.
> Makes sense, I'll update 'flush' to 'cancel' in the comment and subject.
>
> I am not a native speaker of English, so some of the expressions may
> not be accurate, thank you for correcting me.

Can you please resend all patch series that we're supposed to take for
this cycle, please?
On 2024/6/27 23:18, Christian Brauner wrote:
> On Thu, Jun 27, 2024 at 07:20:16PM GMT, Baokun Li wrote:
>> On 2024/6/27 19:01, Jeff Layton wrote:
>>> On Wed, 2024-05-15 at 20:51 +0800, libaokun@huaweicloud.com wrote:
>>>> [...]
>>> I wouldn't call this a "Flush". In the context of writeback, that
>>> usually means that we're writing out pages now in order to do something
>>> else. In this case, it looks like you're more canceling these requests
>>> since you're marking them with an error and declaring them complete.
>> Makes sense, I'll update 'flush' to 'cancel' in the comment and subject.
>>
>> I am not a native speaker of English, so some of the expressions may
>> not be accurate, thank you for correcting me.
> Can you please resend all patch series that we're supposed to take for
> this cycle, please?
Sure, I'm organising to combine the two patch series today and send it
out as v3.
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 73da4d4eaa9b..d24bff43499b 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -564,12 +564,31 @@ int cachefiles_ondemand_init_object(struct cachefiles_object *object)
 
 void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
 {
+	unsigned long index;
+	struct cachefiles_req *req;
+	struct cachefiles_cache *cache;
+
 	if (!object->ondemand)
 		return;
 
 	cachefiles_ondemand_send_req(object, CACHEFILES_OP_CLOSE, 0,
 			cachefiles_ondemand_init_close_req, NULL);
+
+	if (!object->ondemand->ondemand_id)
+		return;
+
+	/* Flush all requests for the object that is being dropped. */
+	cache = object->volume->cache;
+	xa_lock(&cache->reqs);
 	cachefiles_ondemand_set_object_dropping(object);
+	xa_for_each(&cache->reqs, index, req) {
+		if (req->object == object) {
+			req->error = -EIO;
+			complete(&req->done);
+			__xa_erase(&cache->reqs, index);
+		}
+	}
+	xa_unlock(&cache->reqs);
 }
 
 int cachefiles_ondemand_init_obj_info(struct cachefiles_object *object,
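[Editor's note: as a self-contained illustration of the cancellation pattern the patch uses (mark each pending request with an error, complete it, and erase it from the XArray under xa_lock), here is a minimal kernel-style sketch. The my_* names are invented for the example and are not the actual cachefiles structures.]

```c
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/xarray.h>

/* Hypothetical request/cache types mirroring the shape of the patch. */
struct my_req {
	struct completion done;	/* waiter blocks on this until answered */
	int error;		/* result handed back to the waiter */
	void *owner;		/* object this request belongs to */
};

struct my_cache {
	struct xarray reqs;	/* pending requests, indexed by id */
};

/* Cancel every pending request that belongs to @owner. */
static void my_cancel_reqs(struct my_cache *cache, void *owner)
{
	struct my_req *req;
	unsigned long index;

	xa_lock(&cache->reqs);
	xa_for_each(&cache->reqs, index, req) {
		if (req->owner != owner)
			continue;
		req->error = -EIO;			/* fail the request */
		complete(&req->done);			/* wake up the waiter */
		__xa_erase(&cache->reqs, index);	/* lock already held */
	}
	xa_unlock(&cache->reqs);
}
```

[__xa_erase() rather than xa_erase() is used because the xa_lock is already held around the whole walk; holding the lock for the flag update and the cancellations together also appears to be why the patch calls cachefiles_ondemand_set_object_dropping() inside the locked region, so that anyone inserting into cache->reqs under the same lock sees a consistent state.]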