Message ID: b3aaaf5265c7e7ce6228ba2146f57aaae09f55e6.1693899008.git.nicola.vetrini@bugseng.com (mailing list archive)
State: New, archived
Series: [XEN] xen/types: address Rule 10.1 for macro BITS_TO_LONGS
On 05.09.2023 09:31, Nicola Vetrini wrote:
> Given its use in the declaration
> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument
> 'bits' has essential type 'enum iommu_feature', which is not
> allowed by the Rule as an operand to the addition operator.
> Given that its value can be represented by a signed integer,
> the explicit cast resolves the violation.

Wait - why would this lead to a change to BITS_TO_LONGS()? And if that
was to be changed, why plain int? I don't think negative input makes
sense there, and in principle I'd expect values beyond 4 billion to
also be permissible (even if likely no such use will ever appear in a
DECLARE_BITMAP(), but elsewhere it may make sense). Even going to
"unsigned long" may be too limiting ...

> --- a/xen/include/xen/types.h
> +++ b/xen/include/xen/types.h
> @@ -23,7 +23,7 @@ typedef signed long ssize_t;
>  typedef __PTRDIFF_TYPE__ ptrdiff_t;
>
>  #define BITS_TO_LONGS(bits) \
> -    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
> +    (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>  #define DECLARE_BITMAP(name,bits) \
>      unsigned long name[BITS_TO_LONGS(bits)]

Furthermore, as always - if this was to be touched, please take care
of style violations (numerous missing blanks) at this occasion.

Jan
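[Editor's note: the violation under discussion can be reproduced in a few lines. A minimal sketch, assuming a 64-bit long; the enum below is a hypothetical stand-in for Xen's `enum iommu_feature`:]

```c
#include <assert.h>

#define BITS_PER_LONG 64 /* assumption: LP64 target */

/* The current Xen definitions under discussion (before any fix). */
#define BITS_TO_LONGS(bits) \
    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
#define DECLARE_BITMAP(name,bits) \
    unsigned long name[BITS_TO_LONGS(bits)]

/* Hypothetical stand-in for enum iommu_feature. */
enum iommu_feature { FEAT_X, FEAT_Y, IOMMU_FEAT_count };

/* Here 'bits' has essential type 'enum iommu_feature', which MISRA
 * Rule 10.1 rejects as an operand of '+' inside BITS_TO_LONGS(). */
DECLARE_BITMAP(features, IOMMU_FEAT_count);
```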
On 05/09/2023 09:46, Jan Beulich wrote:
> On 05.09.2023 09:31, Nicola Vetrini wrote:
>> Given its use in the declaration
>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument
>> 'bits' has essential type 'enum iommu_feature', which is not
>> allowed by the Rule as an operand to the addition operator.
>> Given that its value can be represented by a signed integer,
>> the explicit cast resolves the violation.
>
> Wait - why would this lead to a change to BITS_TO_LONGS()? And if that
> was to be changed, why plain int? I don't think negative input makes
> sense there, and in principle I'd expect values beyond 4 billion to
> also be permissible (even if likely no such use will ever appear in a
> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to
> "unsigned long" may be too limiting ...

You have a point. I can think of doing it like this:

  DECLARE_BITMAP(features, (int)IOMMU_FEAT_count)

on the grounds that the enum constant is representable in an int, and
it does not seem likely to get much bigger.

Having an unsigned cast requires making the whole expression
essentially unsigned, otherwise Rule 10.4 is violated because
BITS_PER_LONG is essentially signed. This can be done, but it depends
on how BITS_TO_LONGS will be/is used.

>> --- a/xen/include/xen/types.h
>> +++ b/xen/include/xen/types.h
>> @@ -23,7 +23,7 @@ typedef signed long ssize_t;
>>  typedef __PTRDIFF_TYPE__ ptrdiff_t;
>>
>>  #define BITS_TO_LONGS(bits) \
>> -    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>> +    (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>>  #define DECLARE_BITMAP(name,bits) \
>>      unsigned long name[BITS_TO_LONGS(bits)]
>
> Furthermore, as always - if this was to be touched, please take care
> of style violations (numerous missing blanks) at this occasion.

Then the whole file needs a cleanup.
On 05.09.2023 10:20, Nicola Vetrini wrote: > On 05/09/2023 09:46, Jan Beulich wrote: >> On 05.09.2023 09:31, Nicola Vetrini wrote: >>> Given its use in the declaration >>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument >>> 'bits' has essential type 'enum iommu_feature', which is not >>> allowed by the Rule as an operand to the addition operator. >>> Given that its value can be represented by a signed integer, >>> the explicit cast resolves the violation. >> >> Wait - why would this lead to a change to BITS_TO_LONGS()? And if that >> was to be changed, why plain int? I don't think negative input makes >> sense there, and in principle I'd expect values beyond 4 billion to >> also be permissible (even if likely no such use will ever appear in a >> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to >> "unsigned long" may be too limiting ... >> > > You have a point. I can think of doing it like this: > DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) > on the grounds that the enum constant is representable in an int, and it > does not seem likely > to get much bigger. > Having an unsigned cast requires making the whole expression > essentially unsigned, otherwise Rule 10.4 is violated because > BITS_PER_LONG is > essentially signed. This can be done, but it depends on how > BITS_TO_LONGS will be/is used. It'll need looking closely, yes, but I expect that actually wants to be an unsigned constant. I wouldn't be surprised if some use of DECLARE_BITMAP() appeared (or already existed) where the 2nd argument involves sizeof() in some way. 
>>> --- a/xen/include/xen/types.h >>> +++ b/xen/include/xen/types.h >>> @@ -23,7 +23,7 @@ typedef signed long ssize_t; >>> typedef __PTRDIFF_TYPE__ ptrdiff_t; >>> >>> #define BITS_TO_LONGS(bits) \ >>> - (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG) >>> + (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG) >>> #define DECLARE_BITMAP(name,bits) \ >>> unsigned long name[BITS_TO_LONGS(bits)] >>> >> >> Furthermore, as always - if this was to be touched, please take care >> of style violations (numerous missing blanks) at this occasion. > > Then the whole file needs a cleanup. Perhaps, but going as we touch things is generally deemed better than doing a single cleanup-only patch. First and foremost to not unduly affect the usefulness of "git blame" and alike. Jan
On 05/09/2023 10:33, Jan Beulich wrote: > On 05.09.2023 10:20, Nicola Vetrini wrote: >> On 05/09/2023 09:46, Jan Beulich wrote: >>> On 05.09.2023 09:31, Nicola Vetrini wrote: >>>> Given its use in the declaration >>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument >>>> 'bits' has essential type 'enum iommu_feature', which is not >>>> allowed by the Rule as an operand to the addition operator. >>>> Given that its value can be represented by a signed integer, >>>> the explicit cast resolves the violation. >>> >>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if >>> that >>> was to be changed, why plain int? I don't think negative input makes >>> sense there, and in principle I'd expect values beyond 4 billion to >>> also be permissible (even if likely no such use will ever appear in a >>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to >>> "unsigned long" may be too limiting ... >>> >> >> You have a point. I can think of doing it like this: >> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) >> on the grounds that the enum constant is representable in an int, and >> it >> does not seem likely >> to get much bigger. >> Having an unsigned cast requires making the whole expression >> essentially unsigned, otherwise Rule 10.4 is violated because >> BITS_PER_LONG is >> essentially signed. This can be done, but it depends on how >> BITS_TO_LONGS will be/is used. > > It'll need looking closely, yes, but I expect that actually wants to be > an > unsigned constant. I wouldn't be surprised if some use of > DECLARE_BITMAP() > appeared (or already existed) where the 2nd argument involves sizeof() > in > some way. > I think there's one with ARRAY_SIZE. 
In my opinion this can be resolved as follows:

  #define BYTES_PER_LONG (1U << LONG_BYTEORDER)
  /* the essential type gets from signed to unsigned */

  #define BITS_TO_LONGS(bits) \
      (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG)
  /* same here */

>>>> --- a/xen/include/xen/types.h
>>>> +++ b/xen/include/xen/types.h
>>>> @@ -23,7 +23,7 @@ typedef signed long ssize_t;
>>>>  typedef __PTRDIFF_TYPE__ ptrdiff_t;
>>>>
>>>>  #define BITS_TO_LONGS(bits) \
>>>> -    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>>>> +    (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>>>>  #define DECLARE_BITMAP(name,bits) \
>>>>      unsigned long name[BITS_TO_LONGS(bits)]
>>>
>>> Furthermore, as always - if this was to be touched, please take care
>>> of style violations (numerous missing blanks) at this occasion.
>>
>> Then the whole file needs a cleanup.
>
> Perhaps, but going as we touch things is generally deemed better than
> doing a single cleanup-only patch. First and foremost to not unduly
> affect the usefulness of "git blame" and alike.

Ok
On 06.09.2023 17:57, Nicola Vetrini wrote: > On 05/09/2023 10:33, Jan Beulich wrote: >> On 05.09.2023 10:20, Nicola Vetrini wrote: >>> On 05/09/2023 09:46, Jan Beulich wrote: >>>> On 05.09.2023 09:31, Nicola Vetrini wrote: >>>>> Given its use in the declaration >>>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument >>>>> 'bits' has essential type 'enum iommu_feature', which is not >>>>> allowed by the Rule as an operand to the addition operator. >>>>> Given that its value can be represented by a signed integer, >>>>> the explicit cast resolves the violation. >>>> >>>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if >>>> that >>>> was to be changed, why plain int? I don't think negative input makes >>>> sense there, and in principle I'd expect values beyond 4 billion to >>>> also be permissible (even if likely no such use will ever appear in a >>>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to >>>> "unsigned long" may be too limiting ... >>>> >>> >>> You have a point. I can think of doing it like this: >>> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) >>> on the grounds that the enum constant is representable in an int, and >>> it >>> does not seem likely >>> to get much bigger. >>> Having an unsigned cast requires making the whole expression >>> essentially unsigned, otherwise Rule 10.4 is violated because >>> BITS_PER_LONG is >>> essentially signed. This can be done, but it depends on how >>> BITS_TO_LONGS will be/is used. >> >> It'll need looking closely, yes, but I expect that actually wants to be >> an >> unsigned constant. I wouldn't be surprised if some use of >> DECLARE_BITMAP() >> appeared (or already existed) where the 2nd argument involves sizeof() >> in >> some way. >> > > I think there's one with ARRAY_SIZE. 
> In my opinion this can be resolved as follows:
>
> #define BYTES_PER_LONG (1U << LONG_BYTEORDER) // the essential type gets
> from signed to unsigned
>
> #define BITS_TO_LONGS(bits) \
>     (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) //
> same here

Except, as said before, I consider any kind of cast on "bits" latently
problematic.

Jan
On Wed, 6 Sep 2023, Jan Beulich wrote: > On 06.09.2023 17:57, Nicola Vetrini wrote: > > On 05/09/2023 10:33, Jan Beulich wrote: > >> On 05.09.2023 10:20, Nicola Vetrini wrote: > >>> On 05/09/2023 09:46, Jan Beulich wrote: > >>>> On 05.09.2023 09:31, Nicola Vetrini wrote: > >>>>> Given its use in the declaration > >>>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument > >>>>> 'bits' has essential type 'enum iommu_feature', which is not > >>>>> allowed by the Rule as an operand to the addition operator. > >>>>> Given that its value can be represented by a signed integer, > >>>>> the explicit cast resolves the violation. > >>>> > >>>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if > >>>> that > >>>> was to be changed, why plain int? I don't think negative input makes > >>>> sense there, and in principle I'd expect values beyond 4 billion to > >>>> also be permissible (even if likely no such use will ever appear in a > >>>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to > >>>> "unsigned long" may be too limiting ... > >>>> > >>> > >>> You have a point. I can think of doing it like this: > >>> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) I think this is a good solution for this case (even more so if we can't find a better implementation of BITS_TO_LONGS) > >>> on the grounds that the enum constant is representable in an int, and > >>> it > >>> does not seem likely > >>> to get much bigger. > >>> Having an unsigned cast requires making the whole expression > >>> essentially unsigned, otherwise Rule 10.4 is violated because > >>> BITS_PER_LONG is > >>> essentially signed. This can be done, but it depends on how > >>> BITS_TO_LONGS will be/is used. > >> > >> It'll need looking closely, yes, but I expect that actually wants to be > >> an > >> unsigned constant. I wouldn't be surprised if some use of > >> DECLARE_BITMAP() > >> appeared (or already existed) where the 2nd argument involves sizeof() > >> in > >> some way. 
> >>
> >
> > I think there's one with ARRAY_SIZE. In my opinion this can be resolved
> > as follows:
> >
> > #define BYTES_PER_LONG (1U << LONG_BYTEORDER) // the essential type gets
> > from signed to unsigned
> >
> > #define BITS_TO_LONGS(bits) \
> >     (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) //
> > same here
>
> Except, as said before, I consider any kind of cast on "bits" latently
> problematic.

Can't we just do this (same but without the cast):

  #define BYTES_PER_LONG (1U << LONG_BYTEORDER)
  #define BITS_TO_LONGS(bits) \
      (((bits)+BITS_PER_LONG-1U)/BITS_PER_LONG)

Then we just need to make sure to pass an unsigned to BITS_TO_LONGS. In
the case above we would do:

  DECLARE_BITMAP(features, (unsigned int)IOMMU_FEAT_count)
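[Editor's note: the cast-free variant proposed here can be checked quickly in isolation; a sketch assuming a 64-bit long. The 1U is what makes the whole expression essentially unsigned, provided callers pass an unsigned 'bits':]

```c
#include <assert.h>

#define BITS_PER_LONG 64 /* assumption: LP64 target */

/* Proposed variant: no cast on 'bits'; the unsigned 1U constant
 * pulls the addition and division into unsigned arithmetic. */
#define BITS_TO_LONGS(bits) \
    (((bits) + BITS_PER_LONG - 1U) / BITS_PER_LONG)
```

Callers then carry the burden of passing an essentially unsigned argument, e.g. the `(unsigned int)IOMMU_FEAT_count` cast shown above.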
On 07.09.2023 03:33, Stefano Stabellini wrote: > On Wed, 6 Sep 2023, Jan Beulich wrote: >> On 06.09.2023 17:57, Nicola Vetrini wrote: >>> On 05/09/2023 10:33, Jan Beulich wrote: >>>> On 05.09.2023 10:20, Nicola Vetrini wrote: >>>>> On 05/09/2023 09:46, Jan Beulich wrote: >>>>>> On 05.09.2023 09:31, Nicola Vetrini wrote: >>>>>>> Given its use in the declaration >>>>>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument >>>>>>> 'bits' has essential type 'enum iommu_feature', which is not >>>>>>> allowed by the Rule as an operand to the addition operator. >>>>>>> Given that its value can be represented by a signed integer, >>>>>>> the explicit cast resolves the violation. >>>>>> >>>>>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if >>>>>> that >>>>>> was to be changed, why plain int? I don't think negative input makes >>>>>> sense there, and in principle I'd expect values beyond 4 billion to >>>>>> also be permissible (even if likely no such use will ever appear in a >>>>>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to >>>>>> "unsigned long" may be too limiting ... >>>>>> >>>>> >>>>> You have a point. I can think of doing it like this: >>>>> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) > > I think this is a good solution for this case (even more so if we can't > find a better implementation of BITS_TO_LONGS) > > >>>>> on the grounds that the enum constant is representable in an int, and >>>>> it >>>>> does not seem likely >>>>> to get much bigger. >>>>> Having an unsigned cast requires making the whole expression >>>>> essentially unsigned, otherwise Rule 10.4 is violated because >>>>> BITS_PER_LONG is >>>>> essentially signed. This can be done, but it depends on how >>>>> BITS_TO_LONGS will be/is used. >>>> >>>> It'll need looking closely, yes, but I expect that actually wants to be >>>> an >>>> unsigned constant. 
I wouldn't be surprised if some use of >>>> DECLARE_BITMAP() >>>> appeared (or already existed) where the 2nd argument involves sizeof() >>>> in >>>> some way. >>>> >>> >>> I think there's one with ARRAY_SIZE. In my opinion this can be resolved >>> as follows: >>> >>> #define BYTES_PER_LONG (1U << LONG_BYTEORDER) // the essential type gets >>> from signed to unsigned >>> >>> #define BITS_TO_LONGS(bits) \ >>> (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) // >>> same here >> >> Except, as said before, I consider any kind of cast on "bits" latently >> problematic. > > Can't we just do this (same but without the cast): > > #define BYTES_PER_LONG (1U << LONG_BYTEORDER) > #define BITS_TO_LONGS(bits) \ > (((bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) Right, that's what I was expecting we'd use (with blanks suitably added of course). Jan > Then we just need to make sure to pass an unsigned to BITS_TO_LONGS. In > the case above we would do: > > DECLARE_BITMAP(features, (unsigned int)IOMMU_FEAT_count) >
On 07/09/2023 03:33, Stefano Stabellini wrote: > On Wed, 6 Sep 2023, Jan Beulich wrote: >> On 06.09.2023 17:57, Nicola Vetrini wrote: >> > On 05/09/2023 10:33, Jan Beulich wrote: >> >> On 05.09.2023 10:20, Nicola Vetrini wrote: >> >>> On 05/09/2023 09:46, Jan Beulich wrote: >> >>>> On 05.09.2023 09:31, Nicola Vetrini wrote: >> >>>>> Given its use in the declaration >> >>>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument >> >>>>> 'bits' has essential type 'enum iommu_feature', which is not >> >>>>> allowed by the Rule as an operand to the addition operator. >> >>>>> Given that its value can be represented by a signed integer, >> >>>>> the explicit cast resolves the violation. >> >>>> >> >>>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if >> >>>> that >> >>>> was to be changed, why plain int? I don't think negative input makes >> >>>> sense there, and in principle I'd expect values beyond 4 billion to >> >>>> also be permissible (even if likely no such use will ever appear in a >> >>>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to >> >>>> "unsigned long" may be too limiting ... >> >>>> >> >>> >> >>> You have a point. I can think of doing it like this: >> >>> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) > > I think this is a good solution for this case (even more so if we can't > find a better implementation of BITS_TO_LONGS) > > >> >>> on the grounds that the enum constant is representable in an int, and >> >>> it >> >>> does not seem likely >> >>> to get much bigger. >> >>> Having an unsigned cast requires making the whole expression >> >>> essentially unsigned, otherwise Rule 10.4 is violated because >> >>> BITS_PER_LONG is >> >>> essentially signed. This can be done, but it depends on how >> >>> BITS_TO_LONGS will be/is used. >> >> >> >> It'll need looking closely, yes, but I expect that actually wants to be >> >> an >> >> unsigned constant. 
>> >> I wouldn't be surprised if some use of
>> >> DECLARE_BITMAP()
>> >> appeared (or already existed) where the 2nd argument involves sizeof()
>> >> in some way.
>> >>
>> >
>> > I think there's one with ARRAY_SIZE. In my opinion this can be resolved
>> > as follows:
>> >
>> > #define BYTES_PER_LONG (1U << LONG_BYTEORDER) // the essential type gets
>> > from signed to unsigned
>> >
>> > #define BITS_TO_LONGS(bits) \
>> >     (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) //
>> > same here
>>
>> Except, as said before, I consider any kind of cast on "bits" latently
>> problematic.
>
> Can't we just do this (same but without the cast):
>
> #define BYTES_PER_LONG (1U << LONG_BYTEORDER)
> #define BITS_TO_LONGS(bits) \
>     (((bits)+BITS_PER_LONG-1U)/BITS_PER_LONG)
>
> Then we just need to make sure to pass an unsigned to BITS_TO_LONGS. In
> the case above we would do:
>
> DECLARE_BITMAP(features, (unsigned int)IOMMU_FEAT_count)

There is a build error due to -Werror because of a pointer comparison at
line 469 of common/numa.c:

  i = min(PADDR_BITS, BITS_PER_LONG - 1);

where

  #define PADDR_BITS 52

I guess PADDR_BITS can become unsigned or gain a cast.
On 08.09.2023 10:48, Nicola Vetrini wrote: > On 07/09/2023 03:33, Stefano Stabellini wrote: >> On Wed, 6 Sep 2023, Jan Beulich wrote: >>> On 06.09.2023 17:57, Nicola Vetrini wrote: >>>> On 05/09/2023 10:33, Jan Beulich wrote: >>>>> On 05.09.2023 10:20, Nicola Vetrini wrote: >>>>>> On 05/09/2023 09:46, Jan Beulich wrote: >>>>>>> On 05.09.2023 09:31, Nicola Vetrini wrote: >>>>>>>> Given its use in the declaration >>>>>>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument >>>>>>>> 'bits' has essential type 'enum iommu_feature', which is not >>>>>>>> allowed by the Rule as an operand to the addition operator. >>>>>>>> Given that its value can be represented by a signed integer, >>>>>>>> the explicit cast resolves the violation. >>>>>>> >>>>>>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if >>>>>>> that >>>>>>> was to be changed, why plain int? I don't think negative input makes >>>>>>> sense there, and in principle I'd expect values beyond 4 billion to >>>>>>> also be permissible (even if likely no such use will ever appear in a >>>>>>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to >>>>>>> "unsigned long" may be too limiting ... >>>>>>> >>>>>> >>>>>> You have a point. I can think of doing it like this: >>>>>> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count) >> >> I think this is a good solution for this case (even more so if we can't >> find a better implementation of BITS_TO_LONGS) >> >> >>>>>> on the grounds that the enum constant is representable in an int, and >>>>>> it >>>>>> does not seem likely >>>>>> to get much bigger. >>>>>> Having an unsigned cast requires making the whole expression >>>>>> essentially unsigned, otherwise Rule 10.4 is violated because >>>>>> BITS_PER_LONG is >>>>>> essentially signed. This can be done, but it depends on how >>>>>> BITS_TO_LONGS will be/is used. >>>>> >>>>> It'll need looking closely, yes, but I expect that actually wants to be >>>>> an >>>>> unsigned constant. 
I wouldn't be surprised if some use of >>>>> DECLARE_BITMAP() >>>>> appeared (or already existed) where the 2nd argument involves sizeof() >>>>> in >>>>> some way. >>>>> >>>> >>>> I think there's one with ARRAY_SIZE. In my opinion this can be resolved >>>> as follows: >>>> >>>> #define BYTES_PER_LONG (1U << LONG_BYTEORDER) // the essential type gets >>>> from signed to unsigned >>>> >>>> #define BITS_TO_LONGS(bits) \ >>>> (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) // >>>> same here >>> >>> Except, as said before, I consider any kind of cast on "bits" latently >>> problematic. >> >> Can't we just do this (same but without the cast): >> >> #define BYTES_PER_LONG (1U << LONG_BYTEORDER) >> #define BITS_TO_LONGS(bits) \ >> (((bits)+BITS_PER_LONG-1U)/BITS_PER_LONG) >> >> Then we just need to make sure to pass an unsigned to BITS_TO_LONGS. In >> the case above we would do: >> >> DECLARE_BITMAP(features, (unsigned int)IOMMU_FEAT_count) > > There is a build error due to -Werror because of a pointer comparison at > line 469 of common/numa.c: > i = min(PADDR_BITS, BITS_PER_LONG - 1); > where > #define PADDR_BITS 52 > > I guess PADDR_BITS can become unsigned or gain a cast While generally converting constants to unsigned comes with a certain risk, I think for this (and its siblings) this ought to be okay. As to the alternative of a cast - before considering that, please consider e.g. adding 0u (as we do elsewhere in the code base to deal with such cases). Jan
On 08.09.2023 13:57, Jan Beulich wrote:
> On 08.09.2023 10:48, Nicola Vetrini wrote:
>> There is a build error due to -Werror because of a pointer comparison at
>> line 469 of common/numa.c:
>> i = min(PADDR_BITS, BITS_PER_LONG - 1);
>> where
>> #define PADDR_BITS 52
>>
>> I guess PADDR_BITS can become unsigned or gain a cast
>
> While generally converting constants to unsigned comes with a certain
> risk, I think for this (and its siblings) this ought to be okay. As to
> the alternative of a cast - before considering that, please consider
> e.g. adding 0u (as we do elsewhere in the code base to deal with such
> cases).

And just after sending I realized that this would still be disliked by
Misra's type system. (Much like then aiui the 1 above will need to
become 1u. Which is all pretty horrible.)

Jan
On 08/09/2023 13:59, Jan Beulich wrote:
> On 08.09.2023 13:57, Jan Beulich wrote:
>> On 08.09.2023 10:48, Nicola Vetrini wrote:
>>> There is a build error due to -Werror because of a pointer comparison
>>> at line 469 of common/numa.c:
>>> i = min(PADDR_BITS, BITS_PER_LONG - 1);
>>> where
>>> #define PADDR_BITS 52
>>>
>>> I guess PADDR_BITS can become unsigned or gain a cast
>>
>> While generally converting constants to unsigned comes with a certain
>> risk, I think for this (and its siblings) this ought to be okay. As to
>> the alternative of a cast - before considering that, please consider
>> e.g. adding 0u (as we do elsewhere in the code base to deal with such
>> cases).
>
> And just after sending I realized that this would still be disliked by
> Misra's type system. (Much like then aiui the 1 above will need to
> become 1u. Which is all pretty horrible.)
>
> Jan

I have a proposal: in our tests we enabled an ECLAIR configuration that
allows bypassing the Rule 10.4 constraint that would otherwise warrant
the 1U, iff the value is constant and both types can represent it
correctly (in this case BITS_PER_LONG - 1). This would allow using the
proposed solution and documenting why it's ok not to respect R10.4.
What do you think?
On 08/09/2023 16:53, Nicola Vetrini wrote: > On 08/09/2023 13:59, Jan Beulich wrote: >> On 08.09.2023 13:57, Jan Beulich wrote: >>> On 08.09.2023 10:48, Nicola Vetrini wrote: >>>> There is a build error due to -Werror because of a pointer >>>> comparison at >>>> line 469 of common/numa.c: >>>> i = min(PADDR_BITS, BITS_PER_LONG - 1); >>>> where >>>> #define PADDR_BITS 52 >>>> >>>> I guess PADDR_BITS can become unsigned or gain a cast >>> >>> While generally converting constants to unsigned comes with a certain >>> risk, I think for this (and its siblings) this ought to be okay. As >>> to >>> the alternative of a cast - before considering that, please consider >>> e.g. adding 0u (as we do elsewhere in the code base to deal with such >>> cases). >> >> And just after sending I realized that this would still be disliked by >> Misra's type system. (Much like then aiui the 1 above will need to >> become 1u. Which is all pretty horrible.) >> >> Jan > > I have a proposal: in our tests we enabled an ECLAIR configuration > that allows to bypass the > constraint imposed by Rule 10.4 that warrants the 1U iff the value is > constant and both types > can represent it correctly (in this case BITS_PER_LONG -1). This would > allow using the proposed > solution and documenting why it's ok not to respect R10.4. What do you > think? And perhaps also use min_t instead of min, so that the typecheck can be avoided.
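[Editor's note: min_t(), suggested above, forces both operands to one caller-named type instead of type-checking them; a sketch of the conventional Linux/Xen-style definition (simplified):]

```c
#include <assert.h>

/* min_t: both operands are converted to the chosen type, so no
 * pointer-based type check is needed (or performed) - mixed
 * signedness is absorbed by the conversions rather than rejected. */
#define min_t(type, x, y) ({            \
    type _x = (type)(x);                \
    type _y = (type)(y);                \
    _x < _y ? _x : _y; })

#define PADDR_BITS    52
#define BITS_PER_LONG 64
```

Usage in the numa.c case would be `min_t(unsigned int, PADDR_BITS, BITS_PER_LONG - 1)`; the trade-off, raised in the reply that follows, is that the silent conversions forgo the safety the type check provides.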
On Fri, 8 Sep 2023, Nicola Vetrini wrote: > On 08/09/2023 13:59, Jan Beulich wrote: > > On 08.09.2023 13:57, Jan Beulich wrote: > > > On 08.09.2023 10:48, Nicola Vetrini wrote: > > > > There is a build error due to -Werror because of a pointer comparison at > > > > line 469 of common/numa.c: > > > > i = min(PADDR_BITS, BITS_PER_LONG - 1); > > > > where > > > > #define PADDR_BITS 52 > > > > > > > > I guess PADDR_BITS can become unsigned or gain a cast > > > > > > While generally converting constants to unsigned comes with a certain > > > risk, I think for this (and its siblings) this ought to be okay. As to > > > the alternative of a cast - before considering that, please consider > > > e.g. adding 0u (as we do elsewhere in the code base to deal with such > > > cases). > > > > And just after sending I realized that this would still be disliked by > > Misra's type system. (Much like then aiui the 1 above will need to > > become 1u. Which is all pretty horrible.) > > > > Jan > > I have a proposal: in our tests we enabled an ECLAIR configuration that allows > to bypass the > constraint imposed by Rule 10.4 that warrants the 1U iff the value is constant > and both types > can represent it correctly (in this case BITS_PER_LONG -1). This would allow > using the proposed > solution and documenting why it's ok not to respect R10.4. What do you think? I think that would be OK. I think we would want to document this in rules.rst. Please send a patch.
On 08.09.2023 17:09, Nicola Vetrini wrote: > On 08/09/2023 16:53, Nicola Vetrini wrote: >> On 08/09/2023 13:59, Jan Beulich wrote: >>> On 08.09.2023 13:57, Jan Beulich wrote: >>>> On 08.09.2023 10:48, Nicola Vetrini wrote: >>>>> There is a build error due to -Werror because of a pointer >>>>> comparison at >>>>> line 469 of common/numa.c: >>>>> i = min(PADDR_BITS, BITS_PER_LONG - 1); >>>>> where >>>>> #define PADDR_BITS 52 >>>>> >>>>> I guess PADDR_BITS can become unsigned or gain a cast >>>> >>>> While generally converting constants to unsigned comes with a certain >>>> risk, I think for this (and its siblings) this ought to be okay. As >>>> to >>>> the alternative of a cast - before considering that, please consider >>>> e.g. adding 0u (as we do elsewhere in the code base to deal with such >>>> cases). >>> >>> And just after sending I realized that this would still be disliked by >>> Misra's type system. (Much like then aiui the 1 above will need to >>> become 1u. Which is all pretty horrible.) >> >> I have a proposal: in our tests we enabled an ECLAIR configuration >> that allows to bypass the >> constraint imposed by Rule 10.4 that warrants the 1U iff the value is >> constant and both types >> can represent it correctly (in this case BITS_PER_LONG -1). This would >> allow using the proposed >> solution and documenting why it's ok not to respect R10.4. What do you >> think? I'd definitely prefer us using such a model, yes. > And perhaps also use min_t instead of min, so that the typecheck can be > avoided. Already the wording you use suggests a problem: min() is using a type check for a reason (and that check is actually guaranteeing that some other of the Misra rules isn't violated, aiui), so we would better not try to "avoid" its use. There are certainly rare cases where using min() / max() would be unwieldy, hence why we have min_t() / max_t(), but imo wherever reasonably possible we ought to use the type-checking variants. Jan
On 08/09/2023 21:37, Stefano Stabellini wrote:
> On Fri, 8 Sep 2023, Nicola Vetrini wrote:
>> On 08/09/2023 13:59, Jan Beulich wrote:
>>> On 08.09.2023 13:57, Jan Beulich wrote:
>>>> On 08.09.2023 10:48, Nicola Vetrini wrote:
>>>>> There is a build error due to -Werror because of a pointer comparison at
>>>>> line 469 of common/numa.c:
>>>>> i = min(PADDR_BITS, BITS_PER_LONG - 1);
>>>>> where
>>>>> #define PADDR_BITS 52
>>>>>
>>>>> I guess PADDR_BITS can become unsigned or gain a cast
>>>>
>>>> While generally converting constants to unsigned comes with a certain
>>>> risk, I think for this (and its siblings) this ought to be okay. As to
>>>> the alternative of a cast - before considering that, please consider
>>>> e.g. adding 0u (as we do elsewhere in the code base to deal with such
>>>> cases).
>>>
>>> And just after sending I realized that this would still be disliked by
>>> Misra's type system. (Much like then aiui the 1 above will need to
>>> become 1u. Which is all pretty horrible.)
>>>
>>> Jan
>>
>> I have a proposal: in our tests we enabled an ECLAIR configuration
>> that allows to bypass the
>> constraint imposed by Rule 10.4 that warrants the 1U iff the value is
>> constant and both types
>> can represent it correctly (in this case BITS_PER_LONG -1). This would
>> allow using the proposed
>> solution and documenting why it's ok not to respect R10.4. What do you
>> think?
>
> I think that would be OK. I think we would want to document this in
> rules.rst. Please send a patch.

I checked: that configuration is already enabled in current staging, so
perhaps the only action in that regard would be to send a patch
documenting it in rules.rst.

I just noticed one further issue with making BYTES_PER_LONG unsigned, in
that it causes several instances of (1U << 3) to appear inside the file
xen/arch/x86/xen.lds produced by the build, which in turn causes ld to
fail on that 'U'.
For reference, the version of ld used by the build is the following:

  GNU ld (GNU Binutils for Ubuntu) 2.38

The following is a snippet of the output:

  . = ALIGN((1 << 12));
  __ro_after_init_end = .;
  __start_bug_frames = .;
  *(.bug_frames.0)
  __stop_bug_frames_0 = .;
  *(.bug_frames.1)
  __stop_bug_frames_1 = .;
  *(.bug_frames.2)
  __stop_bug_frames_2 = .;
  *(.bug_frames.3)
  __stop_bug_frames_3 = .;
  *(.rodata)
  *(.rodata.*)
  *(.data.rel.ro)
  *(.data.rel.ro.*)
  . = ALIGN((1U << 3));
  __start_vpci_array = .;
  *(SORT(.data.vpci.*))
  __end_vpci_array = .;
  } :text
  .note.gnu.build-id : AT(ADDR(".note.gnu.build-id") - (((((((261 >> 8) * 0xffff000000000000) | (261 << 39))) + ((1 << 39) / 2)) + (64 << 30)) + (1 << 30))) {
  __note_gnu_build_id_start = .;
  *(.note.gnu.build-id)
  __note_gnu_build_id_end = .;
On 19.09.2023 11:19, Nicola Vetrini wrote:
> I just noticed one further issue with making BYTES_PER_LONG unsigned,
> in that it causes several instances of (1U << 3) to appear inside the
> file xen/arch/x86/xen.lds produced by the build, which in turn causes
> ld to fail on that 'U'.

That should be avoidable if _AC() is used in the #define.

Jan

> For reference, the version of ld used by the build is the following:
> GNU ld (GNU Binutils for Ubuntu) 2.38
>
> The following is a snippet of the output:
>
>  . = ALIGN((1 << 12));
>  __ro_after_init_end = .;
>  __start_bug_frames = .;
>  *(.bug_frames.0)
>  __stop_bug_frames_0 = .;
>  *(.bug_frames.1)
>  __stop_bug_frames_1 = .;
>  *(.bug_frames.2)
>  __stop_bug_frames_2 = .;
>  *(.bug_frames.3)
>  __stop_bug_frames_3 = .;
>  *(.rodata)
>  *(.rodata.*)
>  *(.data.rel.ro)
>  *(.data.rel.ro.*)
>  . = ALIGN((1U << 3)); __start_vpci_array = .;
>  *(SORT(.data.vpci.*)) __end_vpci_array = .;
>  } :text
>  .note.gnu.build-id : AT(ADDR(".note.gnu.build-id") - (((((((261 >> 8) * 0xffff000000000000) | (261 << 39))) + ((1 << 39) / 2)) + (64 << 30)) + (1 << 30))) {
>  __note_gnu_build_id_start = .;
>  *(.note.gnu.build-id)
>  __note_gnu_build_id_end = .;
On 19/09/2023 11:33, Jan Beulich wrote:
> On 19.09.2023 11:19, Nicola Vetrini wrote:
>> I just noticed one further issue with making BYTES_PER_LONG unsigned,
>> in that it causes several instances of (1U << 3) to appear inside the
>> file xen/arch/x86/xen.lds produced by the build, which in turn causes
>> ld to fail on that 'U'.
>
> That should be avoidable if _AC() is used in the #define.

I think all instances on x86 are caused by

 . = ALIGN(POINTER_ALIGN);

where, for all arches in xen/arch/<arch>/include/asm/config.h there is

 #define POINTER_ALIGN BYTES_PER_LONG

$ grep -B 1 -A 1 "1U" xen.lds
 *(.data.rel.ro.*)
 . = ALIGN((1U << 3)); __start_vpci_array = .;
 *(SORT(.data.vpci.*)) __end_vpci_array = .;
 } :text
--
 *(.init.bss.stack_aligned)
 . = ALIGN((1U << 3));
 __initdata_cf_clobber_start = .;
--
 *(.init.rodata.*)
 . = ALIGN((1U << 3));
 __setup_start = .;
--
 *(.bss .bss.*)
 . = ALIGN((1U << 3));
 __bss_end = .;

Do you think changing the definition of POINTER_ALIGN will break
something?

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)
On 19.09.2023 11:54, Nicola Vetrini wrote:
> On 19/09/2023 11:33, Jan Beulich wrote:
>> On 19.09.2023 11:19, Nicola Vetrini wrote:
>>> I just noticed one further issue with making BYTES_PER_LONG unsigned,
>>> in that it causes several instances of (1U << 3) to appear inside
>>> the file xen/arch/x86/xen.lds produced by the build, which in turn
>>> causes ld to fail on that 'U'.
>>
>> That should be avoidable if _AC() is used in the #define.
>
> I think all instances on x86 are caused by
>
>  . = ALIGN(POINTER_ALIGN);
>
> where, for all arches in xen/arch/<arch>/include/asm/config.h there is
>
>  #define POINTER_ALIGN BYTES_PER_LONG
>
> $ grep -B 1 -A 1 "1U" xen.lds
>  *(.data.rel.ro.*)
>  . = ALIGN((1U << 3)); __start_vpci_array = .;
>  *(SORT(.data.vpci.*)) __end_vpci_array = .;
>  } :text
> --
>  *(.init.bss.stack_aligned)
>  . = ALIGN((1U << 3));
>  __initdata_cf_clobber_start = .;
> --
>  *(.init.rodata.*)
>  . = ALIGN((1U << 3));
>  __setup_start = .;
> --
>  *(.bss .bss.*)
>  . = ALIGN((1U << 3));
>  __bss_end = .;
>
> Do you think changing the definition of POINTER_ALIGN will break
> something?

Why (and in which way) would you mean to change POINTER_ALIGN?

Jan
On 05/09/2023 09:31, Nicola Vetrini wrote:
> Given its use in the declaration
> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument
> 'bits' has essential type 'enum iommu_feature', which is not
> allowed by the Rule as an operand to the addition operator.
> Given that its value can be represented by a signed integer,
> the explicit cast resolves the violation.
>
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> ---
>  xen/include/xen/types.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/include/xen/types.h b/xen/include/xen/types.h
> index aea259db1ef2..2ff8f243908d 100644
> --- a/xen/include/xen/types.h
> +++ b/xen/include/xen/types.h
> @@ -23,7 +23,7 @@ typedef signed long ssize_t;
>  typedef __PTRDIFF_TYPE__ ptrdiff_t;
>
>  #define BITS_TO_LONGS(bits) \
> -    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
> +    (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>  #define DECLARE_BITMAP(name,bits) \
>      unsigned long name[BITS_TO_LONGS(bits)]

Revisiting this thread after a while: I did some digging, and changing
the essential type of BITS_TO_LONGS to unsigned

 #define BYTES_PER_LONG (_AC(1, U) << LONG_BYTEORDER)
 #define BITS_PER_LONG (BYTES_PER_LONG << 3)
 [...]
 #define BITS_TO_LONGS(bits) \
     (((bits) + BITS_PER_LONG - 1U) / BITS_PER_LONG)
 #define DECLARE_BITMAP(name,bits) \
     unsigned long name[BITS_TO_LONGS(bits)]

leads to a whole lot of issues due to the extensive usage of these
macros (BITS_TO_LONGS, BITS_PER_LONG) in comparisons with e.g. the
min/max macros. The comments made on this patch in the series do have
value (e.g. BITS_TO_LONGS should be expected to take only a positive
argument), but ultimately the changes required in order to do this and
respect all other constraints (either in the form of MISRA rules or gcc
warnings) are far too broad to be tackled by a single patch.
Notable examples of such consequences:

There is a build error due to -Werror because of a pointer comparison
at line 469 of common/numa.c:

 i = min(PADDR_BITS, BITS_PER_LONG - 1);

where

 #define PADDR_BITS 52

If x86's PADDR_BITS becomes unsigned, then other issues arise, such as:

 xenheap_bits = min(flsl(mfn + 1) - 1 + PAGE_SHIFT, PADDR_BITS);

Here the type of flsl is int, so either flsl should become unsigned
too, or the second argument should be suitably modified.

In the end, the modification that solves a lot of violations (due to
this being inside a header, though it's a single place to be modified)
is this:

 DECLARE_BITMAP(features, (int)IOMMU_FEAT_count)

If, as it has been argued, BITS_TO_LONGS really needs to become
unsigned, then a more general rethinking of the types involved needs to
happen.
On 04.10.2023 15:23, Nicola Vetrini wrote:
> On 05/09/2023 09:31, Nicola Vetrini wrote:
>> Given its use in the declaration
>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument
>> 'bits' has essential type 'enum iommu_feature', which is not
>> allowed by the Rule as an operand to the addition operator.
>> Given that its value can be represented by a signed integer,
>> the explicit cast resolves the violation.
>>
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> ---
>>  xen/include/xen/types.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/include/xen/types.h b/xen/include/xen/types.h
>> index aea259db1ef2..2ff8f243908d 100644
>> --- a/xen/include/xen/types.h
>> +++ b/xen/include/xen/types.h
>> @@ -23,7 +23,7 @@ typedef signed long ssize_t;
>>  typedef __PTRDIFF_TYPE__ ptrdiff_t;
>>
>>  #define BITS_TO_LONGS(bits) \
>> -    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>> +    (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
>>  #define DECLARE_BITMAP(name,bits) \
>>      unsigned long name[BITS_TO_LONGS(bits)]
>
> Revisiting this thread after a while: I did some digging, and changing
> the essential type of BITS_TO_LONGS to unsigned
>
>  #define BYTES_PER_LONG (_AC(1, U) << LONG_BYTEORDER)
>  #define BITS_PER_LONG (BYTES_PER_LONG << 3)
>  [...]
>  #define BITS_TO_LONGS(bits) \
>      (((bits) + BITS_PER_LONG - 1U) / BITS_PER_LONG)
>  #define DECLARE_BITMAP(name,bits) \
>      unsigned long name[BITS_TO_LONGS(bits)]
>
> leads to a whole lot of issues due to the extensive usage of these
> macros (BITS_TO_LONGS, BITS_PER_LONG) in comparisons with e.g. the
> min/max macros. The comments made on this patch in the series do have
> value (e.g. BITS_TO_LONGS should be expected to take only a positive
> argument), but ultimately the changes required in order to do this and
> respect all other constraints (either in the form of MISRA rules or
> gcc warnings) are far too broad to be tackled by a single patch.
>
> Notable examples of such consequences:
>
> There is a build error due to -Werror because of a pointer comparison
> at line 469 of common/numa.c:
>
>  i = min(PADDR_BITS, BITS_PER_LONG - 1);
>
> where
>
>  #define PADDR_BITS 52
>
> If x86's PADDR_BITS becomes unsigned, then other issues arise, such as:
>
>  xenheap_bits = min(flsl(mfn + 1) - 1 + PAGE_SHIFT, PADDR_BITS);
>
> Here the type of flsl is int, so either flsl should become unsigned
> too, or the second argument should be suitably modified.

If PADDR_BITS was to gain a U suffix, I expect PAGE_SHIFT ought to as
well. Which would address the compiler complaint, but of course would
then still leave the left hand expression not aligned with Misra's
essential type system.

> In the end, the modification that solves a lot of violations (due to
> this being inside a header, though it's a single place to be modified)
> is this:
>
>  DECLARE_BITMAP(features, (int)IOMMU_FEAT_count)
>
> If, as it has been argued, BITS_TO_LONGS really needs to become
> unsigned, then a more general rethinking of the types involved needs
> to happen.

Well, yes, just that we'll need to find ways to make the changes
gradually, not all in one go. Even if not admitted to originally, I
think it was pretty clear that the Misra effort would lead to such.

Jan
diff --git a/xen/include/xen/types.h b/xen/include/xen/types.h
index aea259db1ef2..2ff8f243908d 100644
--- a/xen/include/xen/types.h
+++ b/xen/include/xen/types.h
@@ -23,7 +23,7 @@ typedef signed long ssize_t;
 typedef __PTRDIFF_TYPE__ ptrdiff_t;
 
 #define BITS_TO_LONGS(bits) \
-    (((bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
+    (((int)(bits)+BITS_PER_LONG-1)/BITS_PER_LONG)
 #define DECLARE_BITMAP(name,bits) \
     unsigned long name[BITS_TO_LONGS(bits)]
Given its use in the declaration
'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument
'bits' has essential type 'enum iommu_feature', which is not
allowed by the Rule as an operand to the addition operator.
Given that its value can be represented by a signed integer,
the explicit cast resolves the violation.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/include/xen/types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)