Message ID | 20220609105343.13591-3-lhenriques@suse.de (mailing list archive)
---|---
State | New, archived
Series | Two xattrs-related fixes for ceph
On 6/9/22 6:53 PM, Luís Henriques wrote:
> CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
> size for the full set of xattrs names+values, which by default is 64K.
> And since ceph reports 4M as the blocksize (the default ceph object size),
> generic/486 will fail in this filesystem because it will end up using
> XATTR_SIZE_MAX to set the size of the 2nd (big) xattr value.
>
> The fix is to adjust the max size in attr_replace_test so that it takes
> into account the initial xattr name and value lengths.
>
> Signed-off-by: Luís Henriques <lhenriques@suse.de>
> ---
>  src/attr_replace_test.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/src/attr_replace_test.c b/src/attr_replace_test.c
> index cca8dcf8ff60..1c8d1049a1d8 100644
> --- a/src/attr_replace_test.c
> +++ b/src/attr_replace_test.c
> @@ -29,6 +29,11 @@ int main(int argc, char *argv[])
>  	char *value;
>  	struct stat sbuf;
>  	size_t size = sizeof(value);
> +	/*
> +	 * Take into account the initial (small) xattr name and value sizes and
> +	 * subtract them from the XATTR_SIZE_MAX maximum.
> +	 */
> +	size_t maxsize = XATTR_SIZE_MAX - strlen(name) - 1;

Why not use statfs to get the filesystem type first, and then subtract
strlen(name) for ceph only?

>
>  	if (argc != 2)
>  		fail("Usage: %s <file>\n", argv[0]);
> @@ -46,7 +51,7 @@ int main(int argc, char *argv[])
>  	size = sbuf.st_blksize * 3 / 4;
>  	if (!size)
>  		fail("Invalid st_blksize(%ld)\n", sbuf.st_blksize);
> -	size = MIN(size, XATTR_SIZE_MAX);
> +	size = MIN(size, maxsize);
>  	value = malloc(size);
>  	if (!value)
>  		fail("Failed to allocate memory\n");
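For reference, a minimal sketch of the statfs(2)-based detection suggested above, assuming CEPH_SUPER_MAGIC (0x00c36400) is available from <linux/magic.h>. It is illustrative only, is not part of the posted patch, and the thread below settles on a different approach (passing the limit in from the test script).

#include <stdio.h>
#include <string.h>
#include <sys/vfs.h>
#include <linux/limits.h>	/* XATTR_SIZE_MAX */
#include <linux/magic.h>

#ifndef CEPH_SUPER_MAGIC
#define CEPH_SUPER_MAGIC 0x00c36400	/* assumed value; verify against your headers */
#endif

/* Return the attr value size budget for the file at 'path'. */
static size_t max_attr_size(const char *path, const char *name)
{
	struct statfs sfs;
	size_t maxsize = XATTR_SIZE_MAX;

	/* Only ceph counts the name against the limit, so only adjust there. */
	if (statfs(path, &sfs) == 0 && sfs.f_type == CEPH_SUPER_MAGIC)
		maxsize -= strlen(name) + 1;
	return maxsize;
}

int main(int argc, char *argv[])
{
	if (argc != 2) {
		fprintf(stderr, "Usage: %s <file>\n", argv[0]);
		return 1;
	}
	/* "user.world" is a hypothetical xattr name for the example. */
	printf("max attr value size: %zu\n", max_attr_size(argv[1], "user.world"));
	return 0;
}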
On Fri, Jun 10, 2022 at 01:35:36PM +0800, Xiubo Li wrote:
> 
> On 6/9/22 6:53 PM, Luís Henriques wrote:
> > CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
> > size for the full set of xattrs names+values, which by default is 64K.
> > And since ceph reports 4M as the blocksize (the default ceph object size),
> > generic/486 will fail in this filesystem because it will end up using
> > XATTR_SIZE_MAX to set the size of the 2nd (big) xattr value.
> >
> > The fix is to adjust the max size in attr_replace_test so that it takes
> > into account the initial xattr name and value lengths.
> >
> > Signed-off-by: Luís Henriques <lhenriques@suse.de>
> > ---
> >  src/attr_replace_test.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/src/attr_replace_test.c b/src/attr_replace_test.c
> > index cca8dcf8ff60..1c8d1049a1d8 100644
> > --- a/src/attr_replace_test.c
> > +++ b/src/attr_replace_test.c
> > @@ -29,6 +29,11 @@ int main(int argc, char *argv[])
> >  	char *value;
> >  	struct stat sbuf;
> >  	size_t size = sizeof(value);
> > +	/*
> > +	 * Take into account the initial (small) xattr name and value sizes and
> > +	 * subtract them from the XATTR_SIZE_MAX maximum.
> > +	 */
> > +	size_t maxsize = XATTR_SIZE_MAX - strlen(name) - 1;
> 
> Why not use statfs to get the filesystem type first, and then subtract
> strlen(name) for ceph only?

No. The test mechanism has no business knowing what filesystem type
it is running on - the test itself is supposed to get the limits for
the filesystem type from the test infrastructure.

As I've already said: the right thing to do is to pass the maximum
attr size for the test to use via the command line from the fstest
itself. As per g/020, the fstests infrastructure is where we encode
weird fs limit differences and behaviours based on $FSTYP. Hacking
around weird filesystem-specific behaviours deep inside random bits
of test source code is not maintainable.

AFAIA, only ceph is having a problem with this test, so it's trivial
to encode into g/486 with:

# ceph has a weird dynamic maximum xattr size and block size that is
# much, much larger than the maximum supported attr size. Hence the
# replace test can't auto-probe a sane attr size and so we have
# to provide it with a maximum size that will work.
max_attr_size=65536
[ "$FSTYP" = "ceph" ] && max_attr_size=64000
attr_replace_test -m $max_attr_size .....
.....

Cheers,

Dave.
On Fri, Jun 10, 2022 at 05:25:45PM +1000, Dave Chinner wrote:
> On Fri, Jun 10, 2022 at 01:35:36PM +0800, Xiubo Li wrote:
> > 
> > On 6/9/22 6:53 PM, Luís Henriques wrote:
> > > CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
> > > size for the full set of xattrs names+values, which by default is 64K.
> > > And since ceph reports 4M as the blocksize (the default ceph object size),
> > > generic/486 will fail in this filesystem because it will end up using
> > > XATTR_SIZE_MAX to set the size of the 2nd (big) xattr value.
> > >
> > > The fix is to adjust the max size in attr_replace_test so that it takes
> > > into account the initial xattr name and value lengths.
> > >
> > > Signed-off-by: Luís Henriques <lhenriques@suse.de>
> > > ---
> > >  src/attr_replace_test.c | 7 ++++++-
> > >  1 file changed, 6 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/src/attr_replace_test.c b/src/attr_replace_test.c
> > > index cca8dcf8ff60..1c8d1049a1d8 100644
> > > --- a/src/attr_replace_test.c
> > > +++ b/src/attr_replace_test.c
> > > @@ -29,6 +29,11 @@ int main(int argc, char *argv[])
> > >  	char *value;
> > >  	struct stat sbuf;
> > >  	size_t size = sizeof(value);
> > > +	/*
> > > +	 * Take into account the initial (small) xattr name and value sizes and
> > > +	 * subtract them from the XATTR_SIZE_MAX maximum.
> > > +	 */
> > > +	size_t maxsize = XATTR_SIZE_MAX - strlen(name) - 1;
> > 
> > Why not use statfs to get the filesystem type first, and then subtract
> > strlen(name) for ceph only?
> 
> No. The test mechanism has no business knowing what filesystem type
> it is running on - the test itself is supposed to get the limits for
> the filesystem type from the test infrastructure.
> 
> As I've already said: the right thing to do is to pass the maximum
> attr size for the test to use via the command line from the fstest
> itself. As per g/020, the fstests infrastructure is where we encode
> weird fs limit differences and behaviours based on $FSTYP. Hacking
> around weird filesystem-specific behaviours deep inside random bits
> of test source code is not maintainable.
> 
> AFAIA, only ceph is having a problem with this test, so it's trivial
> to encode into g/486 with:
> 
> # ceph has a weird dynamic maximum xattr size and block size that is
> # much, much larger than the maximum supported attr size. Hence the
> # replace test can't auto-probe a sane attr size and so we have
> # to provide it with a maximum size that will work.
> max_attr_size=65536
> [ "$FSTYP" = "ceph" ] && max_attr_size=64000
> attr_replace_test -m $max_attr_size .....
> .....

Agree. I'd recommend changing attr_replace_test.c so that it has a
default max xattr size (keep using XATTR_SIZE_MAX, or define one if
it's not defined), then give it an optional command-line option which
can specify a custom max xattr size from outside.

Then a test case (e.g. g/486) which uses attr_replace_test can
specify a max xattr size if it needs to. And it's easier to figure
out what attr size is best for a specific fs in the test case.

Thanks,
Zorro

> 
> Cheers,
> 
> Dave.
> 
> -- 
> Dave Chinner
> david@fromorbit.com
> 
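For illustration, a minimal standalone sketch of the command-line override Zorro describes and Dave's snippet already assumes (-m). The option name, parsing, and defaults here are assumptions; the actual next revision of attr_replace_test.c may differ.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/limits.h>	/* XATTR_SIZE_MAX (65536) */

int main(int argc, char *argv[])
{
	size_t maxsize = XATTR_SIZE_MAX;	/* default when -m is not given */
	int c;

	/* Optional "-m <bytes>" overrides the built-in maximum attr size. */
	while ((c = getopt(argc, argv, "m:")) != -1) {
		if (c == 'm')
			maxsize = strtoul(optarg, NULL, 0);
		else {
			fprintf(stderr, "Usage: %s [-m max_attr_size] <file>\n", argv[0]);
			return 1;
		}
	}
	if (optind != argc - 1) {
		fprintf(stderr, "Usage: %s [-m max_attr_size] <file>\n", argv[0]);
		return 1;
	}

	/* The real test would clamp its probed xattr size to maxsize here. */
	printf("file=%s, max attr size=%zu\n", argv[optind], maxsize);
	return 0;
}

A test script could then invoke it along the lines of Dave's example above: attr_replace_test -m $max_attr_size <file>.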
Zorro Lang <zlang@redhat.com> writes:

> On Fri, Jun 10, 2022 at 05:25:45PM +1000, Dave Chinner wrote:
>> On Fri, Jun 10, 2022 at 01:35:36PM +0800, Xiubo Li wrote:
>> > 
>> > On 6/9/22 6:53 PM, Luís Henriques wrote:
>> > > CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
>> > > size for the full set of xattrs names+values, which by default is 64K.
>> > > And since ceph reports 4M as the blocksize (the default ceph object size),
>> > > generic/486 will fail in this filesystem because it will end up using
>> > > XATTR_SIZE_MAX to set the size of the 2nd (big) xattr value.
>> > >
>> > > The fix is to adjust the max size in attr_replace_test so that it takes
>> > > into account the initial xattr name and value lengths.
>> > >
>> > > Signed-off-by: Luís Henriques <lhenriques@suse.de>
>> > > ---
>> > >  src/attr_replace_test.c | 7 ++++++-
>> > >  1 file changed, 6 insertions(+), 1 deletion(-)
>> > >
>> > > diff --git a/src/attr_replace_test.c b/src/attr_replace_test.c
>> > > index cca8dcf8ff60..1c8d1049a1d8 100644
>> > > --- a/src/attr_replace_test.c
>> > > +++ b/src/attr_replace_test.c
>> > > @@ -29,6 +29,11 @@ int main(int argc, char *argv[])
>> > >  	char *value;
>> > >  	struct stat sbuf;
>> > >  	size_t size = sizeof(value);
>> > > +	/*
>> > > +	 * Take into account the initial (small) xattr name and value sizes and
>> > > +	 * subtract them from the XATTR_SIZE_MAX maximum.
>> > > +	 */
>> > > +	size_t maxsize = XATTR_SIZE_MAX - strlen(name) - 1;
>> > 
>> > Why not use statfs to get the filesystem type first, and then subtract
>> > strlen(name) for ceph only?
>> 
>> No. The test mechanism has no business knowing what filesystem type
>> it is running on - the test itself is supposed to get the limits for
>> the filesystem type from the test infrastructure.
>> 
>> As I've already said: the right thing to do is to pass the maximum
>> attr size for the test to use via the command line from the fstest
>> itself. As per g/020, the fstests infrastructure is where we encode
>> weird fs limit differences and behaviours based on $FSTYP. Hacking
>> around weird filesystem-specific behaviours deep inside random bits
>> of test source code is not maintainable.
>> 
>> AFAIA, only ceph is having a problem with this test, so it's trivial
>> to encode into g/486 with:
>> 
>> # ceph has a weird dynamic maximum xattr size and block size that is
>> # much, much larger than the maximum supported attr size. Hence the
>> # replace test can't auto-probe a sane attr size and so we have
>> # to provide it with a maximum size that will work.
>> max_attr_size=65536
>> [ "$FSTYP" = "ceph" ] && max_attr_size=64000
>> attr_replace_test -m $max_attr_size .....
>> .....
>
> Agree. I'd recommend changing attr_replace_test.c so that it has a
> default max xattr size (keep using XATTR_SIZE_MAX, or define one if
> it's not defined), then give it an optional command-line option which
> can specify a custom max xattr size from outside.
>
> Then a test case (e.g. g/486) which uses attr_replace_test can
> specify a max xattr size if it needs to. And it's easier to figure
> out what attr size is best for a specific fs in the test case.

Awesome, thanks. I'll send out the next rev with these changes.

Thank you all.

Cheers,
diff --git a/src/attr_replace_test.c b/src/attr_replace_test.c
index cca8dcf8ff60..1c8d1049a1d8 100644
--- a/src/attr_replace_test.c
+++ b/src/attr_replace_test.c
@@ -29,6 +29,11 @@ int main(int argc, char *argv[])
 	char *value;
 	struct stat sbuf;
 	size_t size = sizeof(value);
+	/*
+	 * Take into account the initial (small) xattr name and value sizes and
+	 * subtract them from the XATTR_SIZE_MAX maximum.
+	 */
+	size_t maxsize = XATTR_SIZE_MAX - strlen(name) - 1;
 
 	if (argc != 2)
 		fail("Usage: %s <file>\n", argv[0]);
@@ -46,7 +51,7 @@ int main(int argc, char *argv[])
 	size = sbuf.st_blksize * 3 / 4;
 	if (!size)
 		fail("Invalid st_blksize(%ld)\n", sbuf.st_blksize);
-	size = MIN(size, XATTR_SIZE_MAX);
+	size = MIN(size, maxsize);
 	value = malloc(size);
 	if (!value)
 		fail("Failed to allocate memory\n");
CephFS doesn't have a maximum xattr size. Instead, it imposes a maximum
size for the full set of xattrs names+values, which by default is 64K.
And since ceph reports 4M as the blocksize (the default ceph object size),
generic/486 will fail in this filesystem because it will end up using
XATTR_SIZE_MAX to set the size of the 2nd (big) xattr value.

The fix is to adjust the max size in attr_replace_test so that it takes
into account the initial xattr name and value lengths.

Signed-off-by: Luís Henriques <lhenriques@suse.de>
---
 src/attr_replace_test.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
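As a rough illustration of the failure mode described above, the following sketch reproduces the size calculation the test performs. The 4M block size and ~64K budget come from the description; the xattr name is hypothetical and nothing is probed from a real mount.

#include <stdio.h>
#include <string.h>
#include <linux/limits.h>	/* XATTR_SIZE_MAX == 65536 */

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	const char *name = "user.world";	/* hypothetical xattr name */
	size_t blksize = 4UL * 1024 * 1024;	/* ceph-reported st_blksize */
	size_t size = blksize * 3 / 4;		/* 3M, as probed by the test */

	size = MIN(size, (size_t)XATTR_SIZE_MAX);	/* clamped to 65536 */

	/* A 65536-byte value plus its name already exceeds ceph's ~64K total budget. */
	printf("value: %zu bytes, name: %zu bytes, total: %zu (> %d)\n",
	       size, strlen(name) + 1, size + strlen(name) + 1, 64 * 1024);
	return 0;
}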