
btrfs: qgroup: delete a todo about using kmem cache to alloc structure

Message ID 20240528062343.795139-1-sunjunchao2870@gmail.com (mailing list archive)
State New, archived
Series btrfs: qgroup: delete a todo about using kmem cache to alloc structure

Commit Message

Julian Sun May 28, 2024, 6:23 a.m. UTC
Generic slub works fine so far. And it's not necessary to create a
dedicated kmem cache and leave it unused if quotas are disabled.
So let's delete the todo line.

Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>
---
 fs/btrfs/qgroup.h | 1 -
 1 file changed, 1 deletion(-)
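
To illustrate the trade-off the removed TODO was about, here is a minimal
sketch (not code from the btrfs tree; every identifier except the reference to
struct btrfs_qgroup_extent_record is invented for illustration). A dedicated
kmem cache has to be created up front and would sit idle whenever quotas are
disabled, while an on-demand allocation draws from the generic slab caches
only when a record is actually needed:

#include <linux/slab.h>
#include <linux/rbtree.h>
#include <linux/types.h>

/* Stand-in for the real structure in fs/btrfs/qgroup.h (fields trimmed). */
struct example_qgroup_extent_record {
	struct rb_node node;
	u64 bytenr;
	u64 num_bytes;
};

/* The TODO's idea: a dedicated cache, created once at init time ... */
static struct kmem_cache *example_record_cachep;

static int example_init_record_cache(void)
{
	example_record_cachep = kmem_cache_create("example_qgroup_record",
			sizeof(struct example_qgroup_extent_record), 0, 0, NULL);
	return example_record_cachep ? 0 : -ENOMEM;
}

/*
 * ... which would sit unused for the whole mount if quotas are disabled.
 * What the code does instead: allocate from the generic slab caches only
 * when qgroup accounting actually needs a record.
 */
static struct example_qgroup_extent_record *example_alloc_record(void)
{
	return kzalloc(sizeof(struct example_qgroup_extent_record), GFP_NOFS);
}

A dedicated cache typically pays off for small objects allocated at very high
rates; for a structure that is only allocated while quotas are enabled, the
generic caches are good enough, which is the reasoning behind dropping the TODO.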

Comments

Qu Wenruo May 28, 2024, 6:42 a.m. UTC | #1
On 2024/5/28 15:53, Junchao Sun wrote:
> Generic slub works fine so far. And it's not necessary to create a
> dedicated kmem cache and leave it unused if quotas are disabled.
> So let's delete the todo line.
>
> Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>

Reviewed-by: Qu Wenruo <wqu@suse.com>

My bad, at the time of writing I didn't notice that qgroup is not always
enabled.
Furthermore, nowadays squota won't utilize that structure either, making
it even less meaningful to switch to a kmem cache.

Thanks,
Qu
> ---
>   fs/btrfs/qgroup.h | 1 -
>   1 file changed, 1 deletion(-)
>
> diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
> index 706640be0ec2..7874c972029c 100644
> --- a/fs/btrfs/qgroup.h
> +++ b/fs/btrfs/qgroup.h
> @@ -123,7 +123,6 @@ struct btrfs_inode;
>
>   /*
>    * Record a dirty extent, and info qgroup to update quota on it
> - * TODO: Use kmem cache to alloc it.
>    */
>   struct btrfs_qgroup_extent_record {
>   	struct rb_node node;
David Sterba May 28, 2024, 3:35 p.m. UTC | #2
On Tue, May 28, 2024 at 02:23:43PM +0800, Junchao Sun wrote:
> Generic slub works fine so far. And it's not necessary to create a
> dedicated kmem cache and leave it unused if quotas are disabled.
> So let's delete the todo line.
> 
> Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>

Reviewed-by: David Sterba <dsterba@suse.com>
Julian Sun May 29, 2024, 1:11 p.m. UTC | #3
On Tue, May 28, 2024 at 14:42, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2024/5/28 15:53, Junchao Sun wrote:
> > Generic slub works fine so far. And it's not necessary to create a
> > dedicated kmem cache and leave it unused if quotas are disabled.
> > So let's delete the todo line.
> >
> > Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>
>
> Reviewed-by: Qu Wenruo <wqu@suse.com>
>
> My bad, at the time of writing I didn't notice that qgroup is not always
> enabled.
>
> Furthermore nowadays squota won't utilize that structure either, making
> it less meaningful to go kmemcache.
Thank you for the further explanation. By the way, are there any
meaningful TODOs left? I saw some in the code, but I can't tell whether
they are still relevant. I'd like to try contributing to btrfs and
improve my understanding of it.
Also, is it meaningful to view the contents of snapshots without
rolling them back? The company I work for has recently been considering
implementing that...
>
> Thanks,
> Qu
> > ---
> >   fs/btrfs/qgroup.h | 1 -
> >   1 file changed, 1 deletion(-)
> >
> > diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
> > index 706640be0ec2..7874c972029c 100644
> > --- a/fs/btrfs/qgroup.h
> > +++ b/fs/btrfs/qgroup.h
> > @@ -123,7 +123,6 @@ struct btrfs_inode;
> >
> >   /*
> >    * Record a dirty extent, and info qgroup to update quota on it
> > - * TODO: Use kmem cache to alloc it.
> >    */
> >   struct btrfs_qgroup_extent_record {
> >       struct rb_node node;
David Sterba May 29, 2024, 3:12 p.m. UTC | #4
On Wed, May 29, 2024 at 09:11:05PM +0800, JunChao Sun wrote:
> On Tue, May 28, 2024 at 14:42, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> >
> >
> >
> > On 2024/5/28 15:53, Junchao Sun wrote:
> > > Generic slub works fine so far. And it's not necessary to create a
> > > dedicated kmem cache and leave it unused if quotas are disabled.
> > > So let's delete the todo line.
> > >
> > > Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>
> >
> > Reviewed-by: Qu Wenruo <wqu@suse.com>
> >
> > My bad, at the time of writing I didn't notice that qgroup is not always
> > enabled.
> >
> > Furthermore nowadays squota won't utilize that structure either, making
> > it less meaningful to go kmemcache.
> Thank you for your further explanation. By the way, is there anything
> meaningful todo? I saw some in code, but I can't ensure if they are
> still meaningful. I'd like to try contributing to btrfs and improve my
> understanding of it.

You can find TODO or XXX marks in the sources, but I'm against tracking
todos like that; the one you saw and all the others are from old
times. All are to be removed, though only after evaluating whether they
still apply and are worth tracking by other means.

We don't have one place to track todos; also, the items are more like an
idea than a full-fledged task with a specification and agreement that this
is what we want to do. This means it's better to talk to us first if you'd
like to implement something, either privately by mail, or you can join
slack.

Otherwise generic contributions are always welcome, with perhaps a
friendly warning that the code base is old and there are several people
working on it, so the style and coding patterns are kind of always
standing in the way. Reading the code first to get the idea is recommended.

> Also, is it a meaningful to view the contents of snapshots without
> rolling them back? The company I work for is considering implementing
> it recently...

This sounds like a use case that sits above the filesystem; e.g. the
snapshot rollback I know of is done by snapper, while btrfs itself just
provides the snapshot capability. Viewing snapshots is possible, either
by going to the snapshot's directory or by mounting it to some path and
examining it.
Qu Wenruo May 29, 2024, 9:55 p.m. UTC | #5
On 2024/5/29 22:41, JunChao Sun wrote:
> On Tue, May 28, 2024 at 14:42, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
[...]
>>
>> Reviewed-by: Qu Wenruo <wqu@suse.com>
>>
>> My bad, at the time of writing I didn't notice that qgroup is not always
>> enabled.
>>
>> Furthermore nowadays squota won't utilize that structure either, making
>> it less meaningful to go kmemcache.
> Thank you for your further explanation. By the way, is there anything
> meaningful todo? I saw some in code, but I can't ensure if they are
> still meaningful. I'd like to try contributing to btrfs and improve my
> understanding of it.

In fact I'm considering converting btrfs_qgroup_extent_record::node to
XArray, inspired by the recent conversion by Filipe for btrfs_root::inodes.

Which should reduce the memory usage for btrfs_qgroup_extent_record.
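
A rough sketch of what such a conversion could look like (all identifiers
below are made up for illustration, not actual btrfs code): the record loses
its embedded rb_node and is instead tracked in an XArray keyed by the
extent's bytenr.

#include <linux/xarray.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical record without the embedded rb_node. */
struct example_extent_record {
	u64 bytenr;
	u64 num_bytes;
};

/* Hypothetical container replacing the rb-tree root; xa_init() it first. */
struct example_dirty_extents {
	struct xarray records;
};

static int example_record_dirty_extent(struct example_dirty_extents *de,
				       u64 bytenr, u64 num_bytes)
{
	struct example_extent_record *rec;
	int ret;

	rec = kzalloc(sizeof(*rec), GFP_NOFS);
	if (!rec)
		return -ENOMEM;
	rec->bytenr = bytenr;
	rec->num_bytes = num_bytes;

	/*
	 * xa_insert() returns -EBUSY if the extent is already recorded, in
	 * which case the new allocation is simply dropped.  A real conversion
	 * might index by bytenr shifted by the sector size so the index also
	 * fits an unsigned long on 32-bit.
	 */
	ret = xa_insert(&de->records, bytenr, rec, GFP_NOFS);
	if (ret)
		kfree(rec);
	return ret == -EBUSY ? 0 : ret;
}

Whether this actually saves memory depends on the number of records and the
XArray node overhead, so treat it as a direction to explore rather than a
drop-in replacement.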


> Also, is it a meaningful to view the contents of snapshots without
> rolling them back? The company I work for is considering implementing
> it recently...

What do you mean by rolling back?

IIRC one can always access the snapshot from parent subvolume, or just
mount the snapshot.

Thanks,
Qu

>>
>> Thanks,
>> Qu
>>> ---
>>>    fs/btrfs/qgroup.h | 1 -
>>>    1 file changed, 1 deletion(-)
>>>
>>> diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
>>> index 706640be0ec2..7874c972029c 100644
>>> --- a/fs/btrfs/qgroup.h
>>> +++ b/fs/btrfs/qgroup.h
>>> @@ -123,7 +123,6 @@ struct btrfs_inode;
>>>
>>>    /*
>>>     * Record a dirty extent, and info qgroup to update quota on it
>>> - * TODO: Use kmem cache to alloc it.
>>>     */
>>>    struct btrfs_qgroup_extent_record {
>>>        struct rb_node node;
>
Julian Sun May 30, 2024, 3:51 a.m. UTC | #6
On Wed, May 29, 2024 at 23:12, David Sterba <dsterba@suse.cz> wrote:
>
> On Wed, May 29, 2024 at 09:11:05PM +0800, JunChao Sun wrote:
> > On Tue, May 28, 2024 at 14:42, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> > >
> > >
> > >
> > > On 2024/5/28 15:53, Junchao Sun wrote:
> > > > Generic slub works fine so far. And it's not necessary to create a
> > > > dedicated kmem cache and leave it unused if quotas are disabled.
> > > > So let's delete the todo line.
> > > >
> > > > Signed-off-by: Junchao Sun <sunjunchao2870@gmail.com>
> > >
> > > Reviewed-by: Qu Wenruo <wqu@suse.com>
> > >
> > > My bad, at the time of writing I didn't notice that qgroup is not always
> > > enabled.
> > >
> > > Furthermore nowadays squota won't utilize that structure either, making
> > > it less meaningful to go kmemcache.
> > Thank you for your further explanation. By the way, is there anything
> > meaningful todo? I saw some in code, but I can't ensure if they are
> > still meaningful. I'd like to try contributing to btrfs and improve my
> > understanding of it.
>
> You could find TODO or XXX marks in the sources but I'm against tracking
> todos like that and the one you saw and all the others are from old
> times. All are to be removed, though with evaluation if they apply and
> are still worth to be tracked by other means.
>
>
> We don't have one place to track todos also things are more like an idea
> than a full fledged task with specification and agreement that this is
> what we want to do. This means it's better to talk to us first if you'd
> like to implement something, either privately by mail or you can join
> slack.
>
> Otherwise generic contributions are always welcome, with perhaps a
> friendly warning that the code base is old and there are several people
> working on it so the style and coding patterns are kind of always
> standing in the way. Reading code first to get the idea is recommended.
I see. Thanks for your detailed explanation.
>
> > Also, is it a meaningful to view the contents of snapshots without
> > rolling them back? The company I work for is considering implementing
> > it recently...
>
>
> This sounds like a use case that's above the filesystem, eg. rollback of
> snapshot I know of is done by snapper, what btrfs does is just to
> provide the capability of snapshots. Viewing snapshots is possible,
> either going to its directory or mounting it to some path and examining
> it.
Julian Sun May 30, 2024, 3:56 a.m. UTC | #7
On Thu, May 30, 2024 at 05:55, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2024/5/29 22:41, JunChao Sun wrote:
> > On Tue, May 28, 2024 at 14:42, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> [...]
> >>
> >> Reviewed-by: Qu Wenruo <wqu@suse.com>
> >>
> >> My bad, at the time of writing I didn't notice that qgroup is not always
> >> enabled.
> >>
> >> Furthermore nowadays squota won't utilize that structure either, making
> >> it less meaningful to go kmemcache.
> > Thank you for your further explanation. By the way, is there anything
> > meaningful todo? I saw some in code, but I can't ensure if they are
> > still meaningful. I'd like to try contributing to btrfs and improve my
> > understanding of it.
>
>
> In fact I'm considering converting btrfs_qgroup_extent_record::node to
> XArray, inspired by the recent conversion by Filipe for btrfs_root::inodes.
>
> Which should reduce the memory usage for btrfs_qgroup_extent_record.
Thanks for your suggestion. I saw the patch; it's really great. Maybe
I can try to do the same thing for btrfs_qgroup_extent_record::node.
>
>
> > Also, is it a meaningful to view the contents of snapshots without
> > rolling them back? The company I work for is considering implementing
> > it recently...
>
> What do you mean by rolling back?
>
>
> IIRC one can always access the snapshot from parent subvolume, or just
> mount the snapshot.
Yeah, I realize that now.
>
> Thanks,
> Qu
>
> >>
> >> Thanks,
> >> Qu
> >>> ---
> >>>    fs/btrfs/qgroup.h | 1 -
> >>>    1 file changed, 1 deletion(-)
> >>>
> >>> diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
> >>> index 706640be0ec2..7874c972029c 100644
> >>> --- a/fs/btrfs/qgroup.h
> >>> +++ b/fs/btrfs/qgroup.h
> >>> @@ -123,7 +123,6 @@ struct btrfs_inode;
> >>>
> >>>    /*
> >>>     * Record a dirty extent, and info qgroup to update quota on it
> >>> - * TODO: Use kmem cache to alloc it.
> >>>     */
> >>>    struct btrfs_qgroup_extent_record {
> >>>        struct rb_node node;
> >

Patch

diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
index 706640be0ec2..7874c972029c 100644
--- a/fs/btrfs/qgroup.h
+++ b/fs/btrfs/qgroup.h
@@ -123,7 +123,6 @@  struct btrfs_inode;
 
 /*
  * Record a dirty extent, and info qgroup to update quota on it
- * TODO: Use kmem cache to alloc it.
  */
 struct btrfs_qgroup_extent_record {
 	struct rb_node node;