Message ID: cc6bf44701c808645c69bacaf4463295e2cb0fba.1708354388.git.nicola.vetrini@bugseng.com (mailing list archive)
State: New, archived
Series: [XEN] xen: cache clearing and invalidation helpers refactoring
On 19.02.2024 16:14, Nicola Vetrini wrote: > The cache clearing and invalidation helpers in x86 and Arm didn't > comply with MISRA C Rule 17.7: "The value returned by a function > having non-void return type shall be used". On Arm they > were always returning 0, while some in x86 returned -EOPNOTSUPP > and in common/grant_table the return value is saved. > > As a consequence, a common helper arch_grant_cache_flush that returns > an integer is introduced, so that each architecture can choose whether to > return an error value on certain conditions, and the helpers have either > been changed to return void (on Arm) or deleted entirely (on x86). > > Signed-off-by: Julien Grall <julien@xen.org> > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> > --- > The original refactor idea came from Julien Grall in [1]; I edited that proposal > to fix build errors. > > I did introduce a cast to void for the call to flush_area_local on x86, because > even before this patch the return value of that function wasn't checked in all > but one use in x86/smp.c, and in this context the helper (perhaps incidentally) > ignored the return value of flush_area_local. I object to such casting to void, at least until there's an overriding decision that for Misra purposes such casts may be needed. > --- a/xen/arch/arm/include/asm/page.h > +++ b/xen/arch/arm/include/asm/page.h > @@ -123,6 +123,7 @@ > > #ifndef __ASSEMBLY__ > > +#include <public/grant_table.h> This is a no-go, imo (also on x86): Adding this include here effectively means that nearly every CU will have a dependency on that header, no matter that most are entirely agnostic of grants. Each arch has a grant_table.h - is there any reason the new, grant-specific helper can't be put there? 
> @@ -182,21 +183,21 @@ void flush_area_mask(const cpumask_t *mask, const void *va, > } > > static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache) {} > -static inline int invalidate_dcache_va_range(const void *p, > - unsigned long size) > -{ return -EOPNOTSUPP; } > -static inline int clean_and_invalidate_dcache_va_range(const void *p, > - unsigned long size) > + > +static inline int arch_grant_cache_flush(unsigned int op, const void *p, > + unsigned long size) > { > - unsigned int order = get_order_from_bytes(size); > + unsigned int order; > + > + if ( !(op & GNTTAB_CACHE_CLEAN) ) > + return -EOPNOTSUPP; > + > + order = get_order_from_bytes(size); > /* sub-page granularity support needs to be added if necessary */ > - flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order)); > + (void) flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order)); As to my objection to the addition of a cast, did you actually check what this function returns, before saying "incidentally" in the respective remark? Already the return type being "unsigned int" is indicative of the return value not being suitable here for handing on. In addition there shouldn't be a blank after a cast. Instead, if already you were to touch this line, it would have been nice if you took the opportunity and added the missing blanks around the binary operator involved. Jan
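For readers skimming the archive, the x86 helper under review reduces to a small flag dispatch. The following standalone sketch models it; the `model_` names and the flush stub are stand-ins invented for illustration, and only the GNTTAB_CACHE_* flag values (as in public/grant_table.h) and the -EOPNOTSUPP behaviour are taken from the patch itself:

```c
#include <errno.h>

/* Flag values as defined in Xen's public/grant_table.h. */
#define GNTTAB_CACHE_CLEAN (1u << 0)
#define GNTTAB_CACHE_INVAL (1u << 1)

unsigned int flushes_done; /* counts calls to the stubbed flush */

/* Stand-in for x86's flush_area_local(); the real one returns an
 * unsigned int (the point of the void-cast debate above).  Here it
 * just echoes the flags and counts the call. */
unsigned int model_flush_area_local(const void *p, unsigned int flags)
{
    (void)p;
    flushes_done++;
    return flags;
}

/* Mirrors the x86 arch_grant_cache_flush() from the patch: any op
 * without GNTTAB_CACHE_CLEAN yields -EOPNOTSUPP without flushing. */
int model_arch_grant_cache_flush(unsigned int op, const void *p,
                                 unsigned long size)
{
    (void)size; /* order computation elided in this model */

    if ( !(op & GNTTAB_CACHE_CLEAN) )
        return -EOPNOTSUPP;

    model_flush_area_local(p, 0);
    return 0;
}
```

The model deliberately discards the stub's return value, which is exactly the MISRA C Rule 17.7 point being argued in the thread.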
On 2024-02-20 08:45, Jan Beulich wrote: > On 19.02.2024 16:14, Nicola Vetrini wrote: >> The cache clearing and invalidation helpers in x86 and Arm didn't >> comply with MISRA C Rule 17.7: "The value returned by a function >> having non-void return type shall be used". On Arm they >> were always returning 0, while some in x86 returned -EOPNOTSUPP >> and in common/grant_table the return value is saved. >> >> As a consequence, a common helper arch_grant_cache_flush that returns >> an integer is introduced, so that each architecture can choose whether >> to >> return an error value on certain conditions, and the helpers have >> either >> been changed to return void (on Arm) or deleted entirely (on x86). >> >> Signed-off-by: Julien Grall <julien@xen.org> >> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> >> --- >> The original refactor idea came from Julien Grall in [1]; I edited >> that proposal >> to fix build errors. >> >> I did introduce a cast to void for the call to flush_area_local on >> x86, because >> even before this patch the return value of that function wasn't >> checked in all >> but one use in x86/smp.c, and in this context the helper (perhaps >> incidentally) >> ignored the return value of flush_area_local. > > I object to such casting to void, at least until there's an overriding > decision that for Misra purposes such casts may be needed. > There are three choices here: 1. cast to void 2. deviation for flush_area_local, which for the case of the cache helpers is what led to this patch; it may still be a viable option, if other maintainers agree 3. 
refactor of flush_area_local; this is not viable here because the return value is actually used and useful, as far as I can tell, in smp.c >> --- a/xen/arch/arm/include/asm/page.h >> +++ b/xen/arch/arm/include/asm/page.h >> @@ -123,6 +123,7 @@ >> >> #ifndef __ASSEMBLY__ >> >> +#include <public/grant_table.h> > > This is a no-go, imo (also on x86): Adding this include here > effectively > means that nearly every CU will have a dependency on that header, no > matter that most are entirely agnostic of grants. Each arch has a > grant_table.h - is there any reason the new, grant-specific helper > can't > be put there? > I would have to test, but I think that can be done >> @@ -182,21 +183,21 @@ void flush_area_mask(const cpumask_t *mask, >> const void *va, >> } >> >> static inline void flush_page_to_ram(unsigned long mfn, bool >> sync_icache) {} >> -static inline int invalidate_dcache_va_range(const void *p, >> - unsigned long size) >> -{ return -EOPNOTSUPP; } >> -static inline int clean_and_invalidate_dcache_va_range(const void *p, >> - unsigned long >> size) >> + >> +static inline int arch_grant_cache_flush(unsigned int op, const void >> *p, >> + unsigned long size) >> { >> - unsigned int order = get_order_from_bytes(size); >> + unsigned int order; >> + >> + if ( !(op & GNTTAB_CACHE_CLEAN) ) >> + return -EOPNOTSUPP; >> + >> + order = get_order_from_bytes(size); >> /* sub-page granularity support needs to be added if necessary */ >> - flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order)); >> + (void) flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order)); > > As to my objection to the addition of a cast, did you actually check > what this function returns, before saying "incidentally" in the > respective remark? Already the return type being "unsigned int" is > indicative of the return value not being suitable here for handing > on. 
> My "incidentally" was motivated by the fact that the caller doesn't check whether flags_in != flags_out (effectively tests for the execution of a certain code path). It may have been deliberate or not, I don't know. If it was accidental, then a check of the return value in arch_grant_cache_flush will eliminate the need for a void cast. > In addition there shouldn't be a blank after a cast. Instead, if > already you were to touch this line, it would have been nice if you > took the opportunity and added the missing blanks around the binary > operator involved. > That's true, thanks.
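The alternative Nicola hints at (checking the return value instead of casting it to void) could look roughly like the sketch below. This is purely illustrative: the `model_` stub and the equality check are hypothetical, since the real return-value semantics of flush_area_local would have to be confirmed first.

```c
#include <errno.h>

/* Stand-in for flush_area_local(); simplified to echo its flags. */
unsigned int model_flush_area_local(const void *p, unsigned int flags)
{
    (void)p;
    return flags;
}

/* Hypothetical shape of a caller that consumes the returned flags
 * rather than discarding them with a (void) cast. */
int checked_cache_flush(const void *p, unsigned int flags)
{
    unsigned int out = model_flush_area_local(p, flags);

    /* MISRA C:2012 Rule 17.7 is satisfied here because the returned
     * value participates in the comparison instead of being ignored. */
    return (out == flags) ? 0 : -EIO;
}
```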
On 2024-02-20 09:14, Nicola Vetrini wrote: > On 2024-02-20 08:45, Jan Beulich wrote: >> On 19.02.2024 16:14, Nicola Vetrini wrote: >>> The cache clearing and invalidation helpers in x86 and Arm didn't >>> comply with MISRA C Rule 17.7: "The value returned by a function >>> having non-void return type shall be used". On Arm they >>> were always returning 0, while some in x86 returned -EOPNOTSUPP >>> and in common/grant_table the return value is saved. >>> >>> As a consequence, a common helper arch_grant_cache_flush that returns >>> an integer is introduced, so that each architecture can choose >>> whether to >>> return an error value on certain conditions, and the helpers have >>> either >>> been changed to return void (on Arm) or deleted entirely (on x86). >>> >>> Signed-off-by: Julien Grall <julien@xen.org> >>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> >>> --- >>> The original refactor idea came from Julien Grall in [1]; I edited >>> that proposal >>> to fix build errors. >>> >>> I did introduce a cast to void for the call to flush_area_local on >>> x86, because >>> even before this patch the return value of that function wasn't >>> checked in all >>> but one use in x86/smp.c, and in this context the helper (perhaps >>> incidentally) >>> ignored the return value of flush_area_local. >> >> I object to such casting to void, at least until there's an overriding >> decision that for Misra purposes such casts may be needed. >> > > There are three choices here: > 1. cast to void > 2. deviation for flush_area_local, which for the case of the cache > helpers is what led to this patch; it may still be a viable option, if > other maintainers agree > 3. 
refactor of flush_area_local; this is not viable here because the > return value is actually used and useful, as far as I can tell, in > smp.c > >>> --- a/xen/arch/arm/include/asm/page.h >>> +++ b/xen/arch/arm/include/asm/page.h >>> @@ -123,6 +123,7 @@ >>> >>> #ifndef __ASSEMBLY__ >>> >>> +#include <public/grant_table.h> >> >> This is a no-go, imo (also on x86): Adding this include here >> effectively >> means that nearly every CU will have a dependency on that header, no >> matter that most are entirely agnostic of grants. Each arch has a >> grant_table.h - is there any reason the new, grant-specific helper >> can't >> be put there? >> > > I would have to test, but I think that can be done > The only blocker so far is that this triggers a build error due to a circular dependency between xen/mm.h and asm/flushtlb.h on x86. Also found some earlier evidence [1] that there are some oddities around asm/flushtlb's inclusion. [1] https://lore.kernel.org/xen-devel/20200318210540.5602-1-andrew.cooper3@citrix.com/
On 2024-02-21 13:08, Nicola Vetrini wrote: > On 2024-02-20 09:14, Nicola Vetrini wrote: >> On 2024-02-20 08:45, Jan Beulich wrote: >>> On 19.02.2024 16:14, Nicola Vetrini wrote: >>>> The cache clearing and invalidation helpers in x86 and Arm didn't >>>> comply with MISRA C Rule 17.7: "The value returned by a function >>>> having non-void return type shall be used". On Arm they >>>> were always returning 0, while some in x86 returned -EOPNOTSUPP >>>> and in common/grant_table the return value is saved. >>>> >>>> As a consequence, a common helper arch_grant_cache_flush that >>>> returns >>>> an integer is introduced, so that each architecture can choose >>>> whether to >>>> return an error value on certain conditions, and the helpers have >>>> either >>>> been changed to return void (on Arm) or deleted entirely (on x86). >>>> >>>> Signed-off-by: Julien Grall <julien@xen.org> >>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> >>>> --- >>>> The original refactor idea came from Julien Grall in [1]; I edited >>>> that proposal >>>> to fix build errors. >>>> >>>> I did introduce a cast to void for the call to flush_area_local on >>>> x86, because >>>> even before this patch the return value of that function wasn't >>>> checked in all >>>> but one use in x86/smp.c, and in this context the helper (perhaps >>>> incidentally) >>>> ignored the return value of flush_area_local. >>> >>> I object to such casting to void, at least until there's an >>> overriding >>> decision that for Misra purposes such casts may be needed. >>> >> >> There are three choices here: >> 1. cast to void >> 2. deviation for flush_area_local, which for the case of the cache >> helpers is what led to this patch; it may still be a viable option, if >> other maintainers agree >> 3. 
refactor of flush_area_local; this is not viable here because the >> return value is actually used and useful, as far as I can tell, in >> smp.c >> >>>>> --- a/xen/arch/arm/include/asm/page.h >>>>> +++ b/xen/arch/arm/include/asm/page.h >>>>> @@ -123,6 +123,7 @@ >>>>> >>>>> #ifndef __ASSEMBLY__ >>>>> >>>>> +#include <public/grant_table.h> >>>> >>>> This is a no-go, imo (also on x86): Adding this include here >>>> effectively >>>> means that nearly every CU will have a dependency on that header, no >>>> matter that most are entirely agnostic of grants. Each arch has a >>>> grant_table.h - is there any reason the new, grant-specific helper >>>> can't >>>> be put there? >>>> >>> >>> I would have to test, but I think that can be done >>> >> >> The only blocker so far is that this triggers a build error due to a >> circular dependency between xen/mm.h and asm/flushtlb.h on x86. Also >> found some earlier evidence [1] that there are some oddities around >> asm/flushtlb's inclusion. >> >> [1] >> https://lore.kernel.org/xen-devel/20200318210540.5602-1-andrew.cooper3@citrix.com/ There could be a way of untangling asm/flushtlb.h from xen/mm.h, by moving "accumulate_tlbflush" and "filtered_flush_tlb_mask" introduced by commit 80943aa40e30 ("replace tlbflush check and operation with inline functions") [1]. However, these functions should then be part of a generic xen/flushtlb.h header, since they are used in common code (e.g., common/page_alloc), and a bunch of common code source files should move their includes (see [2] for a partial non-working patch). Do you feel that this is a feasible route? In passing it should be noted that the header ordering in x86/alternative.c is not the one usually prescribed, so that may be taken care of as well. [1] https://lore.kernel.org/xen-devel/1474338664-5054-1-git-send-email-dongli.zhang@oracle.com/ [2] https://gitlab.com/xen-project/people/bugseng/xen/-/commit/a2be0927f724e7e9f891d1e00831739137c29041
On 21.02.2024 16:46, Nicola Vetrini wrote: > On 2024-02-21 13:08, Nicola Vetrini wrote: >> On 2024-02-20 09:14, Nicola Vetrini wrote: >>> On 2024-02-20 08:45, Jan Beulich wrote: >>>> On 19.02.2024 16:14, Nicola Vetrini wrote: >>>>> The cache clearing and invalidation helpers in x86 and Arm didn't >>>>> comply with MISRA C Rule 17.7: "The value returned by a function >>>>> having non-void return type shall be used". On Arm they >>>>> were always returning 0, while some in x86 returned -EOPNOTSUPP >>>>> and in common/grant_table the return value is saved. >>>>> >>>>> As a consequence, a common helper arch_grant_cache_flush that >>>>> returns >>>>> an integer is introduced, so that each architecture can choose >>>>> whether to >>>>> return an error value on certain conditions, and the helpers have >>>>> either >>>>> been changed to return void (on Arm) or deleted entirely (on x86). >>>>> >>>>> Signed-off-by: Julien Grall <julien@xen.org> >>>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> >>>>> --- >>>>> The original refactor idea came from Julien Grall in [1]; I edited >>>>> that proposal >>>>> to fix build errors. >>>>> >>>>> I did introduce a cast to void for the call to flush_area_local on >>>>> x86, because >>>>> even before this patch the return value of that function wasn't >>>>> checked in all >>>>> but one use in x86/smp.c, and in this context the helper (perhaps >>>>> incidentally) >>>>> ignored the return value of flush_area_local. >>>> >>>> I object to such casting to void, at least until there's an >>>> overriding >>>> decision that for Misra purposes such casts may be needed. >>>> >>> >>> There are three choices here: >>> 1. cast to void >>> 2. deviation for flush_area_local, which for the case of the cache >>> helpers is what led to this patch; it may still be a viable option, if >>> other maintainers agree >>> 3. 
refactor of flush_area_local; this is not viable here because the >>> return value is actually used and useful, as far as I can tell, in >>> smp.c >>> >>>>> --- a/xen/arch/arm/include/asm/page.h >>>>> +++ b/xen/arch/arm/include/asm/page.h >>>>> @@ -123,6 +123,7 @@ >>>>> >>>>> #ifndef __ASSEMBLY__ >>>>> >>>>> +#include <public/grant_table.h> >>>> >>>> This is a no-go, imo (also on x86): Adding this include here >>>> effectively >>>> means that nearly every CU will have a dependency on that header, no >>>> matter that most are entirely agnostic of grants. Each arch has a >>>> grant_table.h - is there any reason the new, grant-specific helper >>>> can't >>>> be put there? >>>> >>> >>> I would have to test, but I think that can be done >>> >> >> The only blocker so far is that this triggers a build error due to a >> circular dependency between xen/mm.h and asm/flushtlb.h on x86. Also >> found some earlier evidence [1] that there are some oddities around >> asm/flushtlb's inclusion. >> >> [1] >> https://lore.kernel.org/xen-devel/20200318210540.5602-1-andrew.cooper3@citrix.com/ > > There could be a way of untangling asm/flushtlb.h from xen/mm.h, by > moving "accumulate_tlbflush" and "filtered_flush_tlb_mask" introduced by > commit 80943aa40e30 ("replace tlbflush check and operation with inline > functions") [1]. > However, these function should then be part of a generic xen/flushtlb.h > header, since they are used in common code (e.g., common/page_alloc) and > a bunch of common code source files should move their includes (see [2] > for a partial non-working patch). Do you feel that this is a feasible > route? Yeah, introducing xen/flushtlb.h to hold these sounds pretty sensible. > In passing it should be noted that the header ordering in > x86/alternative.c is not the one usually prescribed, so that may be > taken care of as well. I'm afraid I don't understand this remark. Jan
>>>> >>>>>> --- a/xen/arch/arm/include/asm/page.h >>>>>> +++ b/xen/arch/arm/include/asm/page.h >>>>>> @@ -123,6 +123,7 @@ >>>>>> >>>>>> #ifndef __ASSEMBLY__ >>>>>> >>>>>> +#include <public/grant_table.h> >>>>> >>>>> This is a no-go, imo (also on x86): Adding this include here >>>>> effectively >>>>> means that nearly every CU will have a dependency on that header, >>>>> no >>>>> matter that most are entirely agnostic of grants. Each arch has a >>>>> grant_table.h - is there any reason the new, grant-specific helper >>>>> can't >>>>> be put there? >>>>> >>>> >>>> I would have to test, but I think that can be done >>>> >>> >>> The only blocker so far is that this triggers a build error due to a >>> circular dependency between xen/mm.h and asm/flushtlb.h on x86. Also >>> found some earlier evidence [1] that there are some oddities around >>> asm/flushtlb's inclusion. >>> >>> [1] >>> https://lore.kernel.org/xen-devel/20200318210540.5602-1-andrew.cooper3@citrix.com/ >> >> There could be a way of untangling asm/flushtlb.h from xen/mm.h, by >> moving "accumulate_tlbflush" and "filtered_flush_tlb_mask" introduced >> by >> commit 80943aa40e30 ("replace tlbflush check and operation with inline >> functions") [1]. >> However, these function should then be part of a generic >> xen/flushtlb.h >> header, since they are used in common code (e.g., common/page_alloc) >> and >> a bunch of common code source files should move their includes (see >> [2] >> for a partial non-working patch). Do you feel that this is a feasible >> route? > > Yeah, introducing xen/flushtlb.h to hold these sounds pretty sensible. > There is some shuffling of includes to be done to get it to build, which I'm still trying to address. Most problems are down to the use of struct page_info in page_set_tlbflush_timestamp in x86/.*/asm/flushtlb.h, which then probably prompts the inclusion of asm/mm.h. 
>> In passing it should be noted that the header ordering in >> x86/alternative.c is not the one usually prescribed, so that may be >> taken care of as well. > > I'm afraid I don't understand this remark. > I just meant to say that this

#include <xen/delay.h>
#include <xen/types.h>
#include <asm/apic.h>
#include <asm/endbr.h>
#include <asm/processor.h>
#include <asm/alternative.h>
#include <xen/init.h>
#include <asm/setup.h>
#include <asm/system.h>
#include <asm/traps.h>
#include <asm/nmi.h>
#include <asm/nops.h>
#include <xen/livepatch.h>

is not the usual order of xen/*.h then asm/*.h, and there is no comment justifying that ordering. So in the process of including asm/flushtlb.h here the inclusion order can be tidied up (or also independently), unless there is some reason I'm missing that disallows it.
On 22.02.2024 15:43, Nicola Vetrini wrote: >>> In passing it should be noted that the header ordering in >>> x86/alternative.c is not the one usually prescribed, so that may be >>> taken care of as well. >> >> I'm afraid I don't understand this remark. >> > > I just meant to say that this > > #include <xen/delay.h> > #include <xen/types.h> > #include <asm/apic.h> > #include <asm/endbr.h> > #include <asm/processor.h> > #include <asm/alternative.h> > #include <xen/init.h> > #include <asm/setup.h> > #include <asm/system.h> > #include <asm/traps.h> > #include <asm/nmi.h> > #include <asm/nops.h> > #include <xen/livepatch.h> > > is not the usual order of xen/*.h then asm/*.h and there is no comment > justifying that ordering. Well, you'll find such in many other places. It hasn't been for that long that we've been trying to get #include-s into a more predictable shape. > So in the process of including asm/flushtlb.h > here the inclusion order can be tidied up (or also indipendently), > unless there is some reason I'm missing that disallows it. Independently, if at all possible, would be better. Unless of course you need to touch almost all of that block anyway. Jan
On 2024-02-19 16:14, Nicola Vetrini wrote: > The cache clearing and invalidation helpers in x86 and Arm didn't > comply with MISRA C Rule 17.7: "The value returned by a function > having non-void return type shall be used". On Arm they > were always returning 0, while some in x86 returned -EOPNOTSUPP > and in common/grant_table the return value is saved. > > As a consequence, a common helper arch_grant_cache_flush that returns > an integer is introduced, so that each architecture can choose whether > to > return an error value on certain conditions, and the helpers have > either > been changed to return void (on Arm) or deleted entirely (on x86). > > Signed-off-by: Julien Grall <julien@xen.org> > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> > --- > The original refactor idea came from Julien Grall in [1]; I edited that > proposal > to fix build errors. > > I did introduce a cast to void for the call to flush_area_local on x86, > because > even before this patch the return value of that function wasn't checked > in all > but one use in x86/smp.c, and in this context the helper (perhaps > incidentally) > ignored the return value of flush_area_local. > > [1] > https://lore.kernel.org/xen-devel/09589e8f-77b6-47f7-b5bd-cf485e4b60f1@xen.org/ > --- > xen/arch/arm/include/asm/page.h | 33 ++++++++++++++++++----------- > xen/arch/x86/include/asm/flushtlb.h | 23 ++++++++++---------- > xen/common/grant_table.c | 9 +------- > 3 files changed, 34 insertions(+), 31 deletions(-) > I'll put this patch in the backlog at the moment: too many intricacies while trying to untangle xen/flushtlb from xen/mm.h, and there are easier cases that can be done faster. If someone is interested I can post the partial work I've done so far, even though it doesn't build on x86. 
> diff --git a/xen/arch/arm/include/asm/page.h > b/xen/arch/arm/include/asm/page.h > index 69f817d1e68a..e90c9de3616e 100644 > --- a/xen/arch/arm/include/asm/page.h > +++ b/xen/arch/arm/include/asm/page.h > @@ -123,6 +123,7 @@ > > #ifndef __ASSEMBLY__ > > +#include <public/grant_table.h> > #include <xen/errno.h> > #include <xen/types.h> > #include <xen/lib.h> > @@ -159,13 +160,13 @@ static inline size_t read_dcache_line_bytes(void) > * if 'range' is large enough we might want to use model-specific > * full-cache flushes. */ > > -static inline int invalidate_dcache_va_range(const void *p, unsigned > long size) > +static inline void invalidate_dcache_va_range(const void *p, unsigned > long size) > { > size_t cacheline_mask = dcache_line_bytes - 1; > unsigned long idx = 0; > > if ( !size ) > - return 0; > + return; > > /* Passing a region that wraps around is illegal */ > ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p); > @@ -188,17 +189,15 @@ static inline int > invalidate_dcache_va_range(const void *p, unsigned long size) > asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p > + idx)); > > dsb(sy); /* So we know the flushes happen before > continuing */ > - > - return 0; > } > > -static inline int clean_dcache_va_range(const void *p, unsigned long > size) > +static inline void clean_dcache_va_range(const void *p, unsigned long > size) > { > size_t cacheline_mask = dcache_line_bytes - 1; > unsigned long idx = 0; > > if ( !size ) > - return 0; > + return; > > /* Passing a region that wraps around is illegal */ > ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p); > @@ -211,18 +210,16 @@ static inline int clean_dcache_va_range(const > void *p, unsigned long size) > idx += dcache_line_bytes, size -= dcache_line_bytes ) > asm volatile (__clean_dcache_one(0) : : "r" (p + idx)); > dsb(sy); /* So we know the flushes happen before > continuing */ > - /* ARM callers assume that dcache_* functions cannot fail. 
*/ > - return 0; > } > > -static inline int clean_and_invalidate_dcache_va_range > +static inline void clean_and_invalidate_dcache_va_range > (const void *p, unsigned long size) > { > size_t cacheline_mask = dcache_line_bytes - 1; > unsigned long idx = 0; > > if ( !size ) > - return 0; > + return; > > /* Passing a region that wraps around is illegal */ > ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p); > @@ -235,8 +232,6 @@ static inline int > clean_and_invalidate_dcache_va_range > idx += dcache_line_bytes, size -= dcache_line_bytes ) > asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p > + idx)); > dsb(sy); /* So we know the flushes happen before > continuing */ > - /* ARM callers assume that dcache_* functions cannot fail. */ > - return 0; > } > > /* Macros for flushing a single small item. The predicate is always > @@ -266,6 +261,20 @@ static inline int > clean_and_invalidate_dcache_va_range > : : "r" (_p), "m" (*_p)); > \ > } while (0) > > +static inline int arch_grant_cache_flush(unsigned int op, const void > *p, > + unsigned long size) > +{ > + if ( (op & GNTTAB_CACHE_INVAL) && (op & GNTTAB_CACHE_CLEAN) ) > + clean_and_invalidate_dcache_va_range(p, size); > + else if ( op & GNTTAB_CACHE_INVAL ) > + invalidate_dcache_va_range(p, size); > + else if ( op & GNTTAB_CACHE_CLEAN ) > + clean_dcache_va_range(p, size); > + > + /* ARM callers assume that dcache_* functions cannot fail. */ > + return 0; > +} > + > /* > * Write a pagetable entry. 
> * > diff --git a/xen/arch/x86/include/asm/flushtlb.h > b/xen/arch/x86/include/asm/flushtlb.h > index bb0ad58db49b..c37bf4455714 100644 > --- a/xen/arch/x86/include/asm/flushtlb.h > +++ b/xen/arch/x86/include/asm/flushtlb.h > @@ -10,6 +10,7 @@ > #ifndef __FLUSHTLB_H__ > #define __FLUSHTLB_H__ > > +#include <public/grant_table.h> > #include <xen/mm.h> > #include <xen/percpu.h> > #include <xen/smp.h> > @@ -182,21 +183,21 @@ void flush_area_mask(const cpumask_t *mask, const > void *va, > } > > static inline void flush_page_to_ram(unsigned long mfn, bool > sync_icache) {} > -static inline int invalidate_dcache_va_range(const void *p, > - unsigned long size) > -{ return -EOPNOTSUPP; } > -static inline int clean_and_invalidate_dcache_va_range(const void *p, > - unsigned long > size) > + > +static inline int arch_grant_cache_flush(unsigned int op, const void > *p, > + unsigned long size) > { > - unsigned int order = get_order_from_bytes(size); > + unsigned int order; > + > + if ( !(op & GNTTAB_CACHE_CLEAN) ) > + return -EOPNOTSUPP; > + > + order = get_order_from_bytes(size); > /* sub-page granularity support needs to be added if necessary */ > - flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order)); > + (void) flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order)); > + > return 0; > } > -static inline int clean_dcache_va_range(const void *p, unsigned long > size) > -{ > - return clean_and_invalidate_dcache_va_range(p, size); > -} > > unsigned int guest_flush_tlb_flags(const struct domain *d); > void guest_flush_tlb_mask(const struct domain *d, const cpumask_t > *mask); > diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c > index 5721eab22561..8615ea144bb3 100644 > --- a/xen/common/grant_table.c > +++ b/xen/common/grant_table.c > @@ -3572,14 +3572,7 @@ static int _cache_flush(const > gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref > v = map_domain_page(mfn); > v += cflush->offset; > > - if ( (cflush->op & GNTTAB_CACHE_INVAL) && (cflush->op & > 
GNTTAB_CACHE_CLEAN) ) > - ret = clean_and_invalidate_dcache_va_range(v, cflush->length); > - else if ( cflush->op & GNTTAB_CACHE_INVAL ) > - ret = invalidate_dcache_va_range(v, cflush->length); > - else if ( cflush->op & GNTTAB_CACHE_CLEAN ) > - ret = clean_dcache_va_range(v, cflush->length); > - else > - ret = 0; > + ret = arch_grant_cache_flush(cflush->op, v, cflush->length); > > if ( d != owner ) > {
On Fri, 23 Feb 2024, Nicola Vetrini wrote: > On 2024-02-19 16:14, Nicola Vetrini wrote: > > The cache clearing and invalidation helpers in x86 and Arm didn't > > comply with MISRA C Rule 17.7: "The value returned by a function > > having non-void return type shall be used". On Arm they > > were always returning 0, while some in x86 returned -EOPNOTSUPP > > and in common/grant_table the return value is saved. > > > > As a consequence, a common helper arch_grant_cache_flush that returns > > an integer is introduced, so that each architecture can choose whether to > > return an error value on certain conditions, and the helpers have either > > been changed to return void (on Arm) or deleted entirely (on x86). > > > > Signed-off-by: Julien Grall <julien@xen.org> > > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com> > > --- > > The original refactor idea came from Julien Grall in [1]; I edited that > > proposal > > to fix build errors. > > > > I did introduce a cast to void for the call to flush_area_local on x86, > > because > > even before this patch the return value of that function wasn't checked in > > all > > but one use in x86/smp.c, and in this context the helper (perhaps > > incidentally) > > ignored the return value of flush_area_local. > > > > [1] > > https://lore.kernel.org/xen-devel/09589e8f-77b6-47f7-b5bd-cf485e4b60f1@xen.org/ > > --- > > xen/arch/arm/include/asm/page.h | 33 ++++++++++++++++++----------- > > xen/arch/x86/include/asm/flushtlb.h | 23 ++++++++++---------- > > xen/common/grant_table.c | 9 +------- > > 3 files changed, 34 insertions(+), 31 deletions(-) > > > > I'll put this patch in the backlog at the moment: too many intricacies while > trying to untangle xen/flushtlb from xen/mm.h, and there are easier cases that > can be done faster. If someone is interested I can post the partial work I've > done so far, even though it doesn't > build on x86. 
I understand that the blocker is:

diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/page.h
index 69f817d1e6..e90c9de361 100644
--- a/xen/arch/arm/include/asm/page.h
+++ b/xen/arch/arm/include/asm/page.h
@@ -123,6 +123,7 @@

 #ifndef __ASSEMBLY__

+#include <public/grant_table.h>
 #include <xen/errno.h>
 #include <xen/types.h>
 #include <xen/lib.h>

And the header disentangling required to solve it, right? Let me ask a silly question. public/grant_table.h seems needed by arch_grant_cache_flush. Can we move arch_grant_cache_flush somewhere else? It is not like page.h is a perfect fit for it anyway. For instance, can we move it to

xen/arch/arm/include/asm/grant_table.h

?
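The Arm-side dispatch that would move into asm/grant_table.h under this suggestion is straightforward enough to model standalone. In the sketch below the dcache helpers are stubbed out as counters (the `model_` names are stand-ins, not Xen symbols); the three-way dispatch and the unconditional `return 0` ("ARM callers assume that dcache_* functions cannot fail") follow the patch:

```c
/* Flag values as in Xen's public/grant_table.h. */
#define GNTTAB_CACHE_CLEAN (1u << 0)
#define GNTTAB_CACHE_INVAL (1u << 1)

int cleans, invals, both; /* call counters for the stubbed helpers */

void model_clean(const void *p, unsigned long n)            { (void)p; (void)n; cleans++; }
void model_invalidate(const void *p, unsigned long n)       { (void)p; (void)n; invals++; }
void model_clean_and_invalidate(const void *p, unsigned long n) { (void)p; (void)n; both++; }

/* Same dispatch as the patch's Arm arch_grant_cache_flush(). */
int model_arch_grant_cache_flush(unsigned int op, const void *p,
                                 unsigned long size)
{
    if ( (op & GNTTAB_CACHE_INVAL) && (op & GNTTAB_CACHE_CLEAN) )
        model_clean_and_invalidate(p, size);
    else if ( op & GNTTAB_CACHE_INVAL )
        model_invalidate(p, size);
    else if ( op & GNTTAB_CACHE_CLEAN )
        model_clean(p, size);

    /* ARM callers assume that dcache_* functions cannot fail. */
    return 0;
}
```

Since nothing here touches struct page or the TLB machinery, hosting it in asm/grant_table.h would sidestep the xen/mm.h entanglement entirely.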
Hi Stefano,

On 2024-02-24 00:05, Stefano Stabellini wrote:
> On Fri, 23 Feb 2024, Nicola Vetrini wrote:
>> On 2024-02-19 16:14, Nicola Vetrini wrote:
>> > The cache clearing and invalidation helpers in x86 and Arm didn't
>> > comply with MISRA C Rule 17.7: "The value returned by a function
>> > having non-void return type shall be used". On Arm they
>> > were always returning 0, while some in x86 returned -EOPNOTSUPP
>> > and in common/grant_table the return value is saved.
>> >
>> > As a consequence, a common helper arch_grant_cache_flush that returns
>> > an integer is introduced, so that each architecture can choose whether to
>> > return an error value on certain conditions, and the helpers have either
>> > been changed to return void (on Arm) or deleted entirely (on x86).
>> >
>> > Signed-off-by: Julien Grall <julien@xen.org>
>> > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> > ---
>> > The original refactor idea came from Julien Grall in [1]; I edited that
>> > proposal
>> > to fix build errors.
>> >
>> > I did introduce a cast to void for the call to flush_area_local on x86,
>> > because
>> > even before this patch the return value of that function wasn't checked in
>> > all
>> > but one use in x86/smp.c, and in this context the helper (perhaps
>> > incidentally)
>> > ignored the return value of flush_area_local.
>> >
>> > [1]
>> > https://lore.kernel.org/xen-devel/09589e8f-77b6-47f7-b5bd-cf485e4b60f1@xen.org/
>> > ---
>> >  xen/arch/arm/include/asm/page.h     | 33 ++++++++++++++++++-----------
>> >  xen/arch/x86/include/asm/flushtlb.h | 23 ++++++++++----------
>> >  xen/common/grant_table.c            |  9 +-------
>> >  3 files changed, 34 insertions(+), 31 deletions(-)
>> >
>>
>> I'll put this patch in the backlog at the moment: too many intricacies
>> while
>> trying to untangle xen/flushtlb from xen/mm.h, and there are easier
>> cases that
>> can be done faster. If someone is interested I can post the partial
>> work I've
>> done so far, even though it doesn't
>> build on x86.
>
> I understand that the blocker is:
>
> diff --git a/xen/arch/arm/include/asm/page.h
> b/xen/arch/arm/include/asm/page.h
> index 69f817d1e6..e90c9de361 100644
> --- a/xen/arch/arm/include/asm/page.h
> +++ b/xen/arch/arm/include/asm/page.h
> @@ -123,6 +123,7 @@
>
>  #ifndef __ASSEMBLY__
>
> +#include <public/grant_table.h>
>  #include <xen/errno.h>
>  #include <xen/types.h>
>  #include <xen/lib.h>
>
>
> And the headers disentangling required to solve it, right?
>
>
> Let me ask a silly question. public/grant_table.h seems needed by
> arch_grant_cache_flush. Can we move arch_grant_cache_flush somewhere
> else? It is not like page.h is a perfect fit for it anyway.
>
> For instance, can we move it to
>
> xen/arch/arm/include/asm/grant_table.h
>
> ?

Yes, this is what was suggested and what I was trying to accomplish.
Basically my plan is:

1. move the arch_grant_cache_flush helper to asm/grant_table.h for both
   architectures

2. pull out of xen/mm.h this hunk (note the inclusion of asm/flushtlb in
   the middle of the file), because there is a build error on
   tlbflush_current_time() induced in some .c file (don't remember which)
   by the earlier movement:

-#include <asm/flushtlb.h>
-
-static inline void accumulate_tlbflush(bool *need_tlbflush,
-                                       const struct page_info *page,
-                                       uint32_t *tlbflush_timestamp)
-{
-    if ( page->u.free.need_tlbflush &&
-         page->tlbflush_timestamp <= tlbflush_current_time() &&
-         (!*need_tlbflush ||
-          page->tlbflush_timestamp > *tlbflush_timestamp) )
-    {
-        *need_tlbflush = true;
-        *tlbflush_timestamp = page->tlbflush_timestamp;
-    }
-}
-
-static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
-{
-    cpumask_t mask;
-
-    cpumask_copy(&mask, &cpu_online_map);
-    tlbflush_filter(&mask, tlbflush_timestamp);
-    if ( !cpumask_empty(&mask) )
-    {
-        perfc_incr(need_flush_tlb_flush);
-        arch_flush_tlb_mask(&mask);
-    }
-}
-

   which is going to be in a new header, xen/flushtlb.h

3. replace various inclusions that previously relied on the fact that
   xen/mm.h included asm/flushtlb.h (some even stating this, as evidenced
   by the hunk below):

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index 75fc84e445ce..91ee7e6ec39e 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -15,8 +15,8 @@
  */

 #include <xen/err.h>
+#include <xen/flushtlb.h>
 #include <xen/init.h>
-#include <xen/mm.h> /* TODO: Fix asm/tlbflush.h breakage */

and then make everything build. However, the dependencies tied to
xen/mm.h are quite numerous on x86, and I'm not seeing an obvious way to
avoid touching xen/mm.h. See this tree [1] for the latest state of the
patch.

If anyone has an idea how to tackle this in a smarter way, I'm open to
suggestions. Specifically in step (2) there might be a way to avoid doing
that modification, perhaps.

[1] https://gitlab.com/xen-project/people/bugseng/xen/-/commits/cache_helpers
On 23.02.2024 08:58, Nicola Vetrini wrote:
> On 2024-02-19 16:14, Nicola Vetrini wrote:
>> The cache clearing and invalidation helpers in x86 and Arm didn't
>> comply with MISRA C Rule 17.7: "The value returned by a function
>> having non-void return type shall be used". On Arm they
>> were always returning 0, while some in x86 returned -EOPNOTSUPP
>> and in common/grant_table the return value is saved.
>>
>> As a consequence, a common helper arch_grant_cache_flush that returns
>> an integer is introduced, so that each architecture can choose whether
>> to
>> return an error value on certain conditions, and the helpers have
>> either
>> been changed to return void (on Arm) or deleted entirely (on x86).
>>
>> Signed-off-by: Julien Grall <julien@xen.org>
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> ---
>> The original refactor idea came from Julien Grall in [1]; I edited that
>> proposal
>> to fix build errors.
>>
>> I did introduce a cast to void for the call to flush_area_local on x86,
>> because
>> even before this patch the return value of that function wasn't checked
>> in all
>> but one use in x86/smp.c, and in this context the helper (perhaps
>> incidentally)
>> ignored the return value of flush_area_local.
>>
>> [1]
>> https://lore.kernel.org/xen-devel/09589e8f-77b6-47f7-b5bd-cf485e4b60f1@xen.org/
>> ---
>>  xen/arch/arm/include/asm/page.h     | 33 ++++++++++++++++++-----------
>>  xen/arch/x86/include/asm/flushtlb.h | 23 ++++++++++----------
>>  xen/common/grant_table.c            |  9 +-------
>>  3 files changed, 34 insertions(+), 31 deletions(-)
>>
>
> I'll put this patch in the backlog at the moment: too many intricacies
> while trying to untangle xen/flushtlb from xen/mm.h, and there are
> easier cases that can be done faster. If someone is interested I can
> post the partial work I've done so far, even though it doesn't
> build on x86.

This https://lists.xen.org/archives/html/xen-devel/2024-02/msg01513.html
may be of interest to you in the context here.

Jan
On Sat, 24 Feb 2024, Nicola Vetrini wrote:
> Hi Stefano,
>
> On 2024-02-24 00:05, Stefano Stabellini wrote:
> > On Fri, 23 Feb 2024, Nicola Vetrini wrote:
> > > On 2024-02-19 16:14, Nicola Vetrini wrote:
> > > > The cache clearing and invalidation helpers in x86 and Arm didn't
> > > > comply with MISRA C Rule 17.7: "The value returned by a function
> > > > having non-void return type shall be used". On Arm they
> > > > were always returning 0, while some in x86 returned -EOPNOTSUPP
> > > > and in common/grant_table the return value is saved.
> > > >
> > > > As a consequence, a common helper arch_grant_cache_flush that returns
> > > > an integer is introduced, so that each architecture can choose whether
> > > > to
> > > > return an error value on certain conditions, and the helpers have either
> > > > been changed to return void (on Arm) or deleted entirely (on x86).
> > > >
> > > > Signed-off-by: Julien Grall <julien@xen.org>
> > > > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> > > > ---
> > > > The original refactor idea came from Julien Grall in [1]; I edited that
> > > > proposal
> > > > to fix build errors.
> > > >
> > > > I did introduce a cast to void for the call to flush_area_local on x86,
> > > > because
> > > > even before this patch the return value of that function wasn't checked
> > > > in all
> > > > but one use in x86/smp.c, and in this context the helper (perhaps
> > > > incidentally)
> > > > ignored the return value of flush_area_local.
> > > >
> > > > [1]
> > > > https://lore.kernel.org/xen-devel/09589e8f-77b6-47f7-b5bd-cf485e4b60f1@xen.org/
> > > > ---
> > > >  xen/arch/arm/include/asm/page.h     | 33 ++++++++++++++++++-----------
> > > >  xen/arch/x86/include/asm/flushtlb.h | 23 ++++++++++----------
> > > >  xen/common/grant_table.c            |  9 +-------
> > > >  3 files changed, 34 insertions(+), 31 deletions(-)
> > > >
> > >
> > > I'll put this patch in the backlog at the moment: too many intricacies
> > > while
> > > trying to untangle xen/flushtlb from xen/mm.h, and there are easier cases
> > > that
> > > can be done faster. If someone is interested I can post the partial work
> > > I've
> > > done so far, even though it doesn't
> > > build on x86.
> >
> > I understand that the blocker is:
> >
> > diff --git a/xen/arch/arm/include/asm/page.h
> > b/xen/arch/arm/include/asm/page.h
> > index 69f817d1e6..e90c9de361 100644
> > --- a/xen/arch/arm/include/asm/page.h
> > +++ b/xen/arch/arm/include/asm/page.h
> > @@ -123,6 +123,7 @@
> >
> >  #ifndef __ASSEMBLY__
> >
> > +#include <public/grant_table.h>
> >  #include <xen/errno.h>
> >  #include <xen/types.h>
> >  #include <xen/lib.h>
> >
> >
> > And the headers disentangling required to solve it, right?
> >
> >
> > Let me ask a silly question. public/grant_table.h seems needed by
> > arch_grant_cache_flush. Can we move arch_grant_cache_flush somewhere
> > else? It is not like page.h is a perfect fit for it anyway.
> >
> > For instance, can we move it to
> >
> > xen/arch/arm/include/asm/grant_table.h
> >
> > ?
>
> Yes, this is what was suggested and what I was trying to accomplish.
> Basically my plan is:
>
> 1. move the arch_grant_cache_flush helper to asm/grant_table.h for both
> architectures
> 2. pull out of xen/mm.h this hunk (note the inclusion of asm/flushtlb in
> the middle of the file) because there is a build error on
> tlbflush_current_time() induced in some .c file (don't remember which)
> by the earlier movement

It looks like it would be easier to resolve the build error on
tlbflush_current_time() in another way. What's the build error exactly?
Looking at the implementation of tlbflush_current_time on the various
arches I couldn't find any potential issues.

I just moved the implementation of arch_grant_cache_flush to arch
specific headers and it seemed to have worked for me. See below. Maybe I
am building with a different kconfig.


diff --git a/xen/arch/arm/include/asm/grant_table.h b/xen/arch/arm/include/asm/grant_table.h
index d3c518a926..2c5c07e061 100644
--- a/xen/arch/arm/include/asm/grant_table.h
+++ b/xen/arch/arm/include/asm/grant_table.h
@@ -76,6 +76,20 @@ int replace_grant_host_mapping(uint64_t gpaddr, mfn_t frame,
 #define gnttab_need_iommu_mapping(d)                    \
     (is_domain_direct_mapped(d) && is_iommu_enabled(d))

+static inline int arch_grant_cache_flush(unsigned int op, const void *p,
+                                         unsigned long size)
+{
+    if ( (op & GNTTAB_CACHE_INVAL) && (op & GNTTAB_CACHE_CLEAN) )
+        clean_and_invalidate_dcache_va_range(p, size);
+    else if ( op & GNTTAB_CACHE_INVAL )
+        invalidate_dcache_va_range(p, size);
+    else if ( op & GNTTAB_CACHE_CLEAN )
+        clean_dcache_va_range(p, size);
+
+    /* ARM callers assume that dcache_* functions cannot fail. */
+    return 0;
+}
+
 #endif /* __ASM_GRANT_TABLE_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/page.h
index 69f817d1e6..aea692a24d 100644
--- a/xen/arch/arm/include/asm/page.h
+++ b/xen/arch/arm/include/asm/page.h
@@ -159,13 +159,13 @@ static inline size_t read_dcache_line_bytes(void)
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes.
  */
-static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
+static inline void invalidate_dcache_va_range(const void *p, unsigned long size)
 {
     size_t cacheline_mask = dcache_line_bytes - 1;
     unsigned long idx = 0;

     if ( !size )
-        return 0;
+        return;

     /* Passing a region that wraps around is illegal */
     ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p);
@@ -188,17 +188,15 @@ static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
         asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p + idx));

     dsb(sy);           /* So we know the flushes happen before continuing */
-
-    return 0;
 }

-static inline int clean_dcache_va_range(const void *p, unsigned long size)
+static inline void clean_dcache_va_range(const void *p, unsigned long size)
 {
     size_t cacheline_mask = dcache_line_bytes - 1;
     unsigned long idx = 0;

     if ( !size )
-        return 0;
+        return;

     /* Passing a region that wraps around is illegal */
     ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p);
@@ -211,18 +209,16 @@ static inline int clean_dcache_va_range(const void *p, unsigned long size)
           idx += dcache_line_bytes, size -= dcache_line_bytes )
         asm volatile (__clean_dcache_one(0) : : "r" (p + idx));
     dsb(sy);           /* So we know the flushes happen before continuing */
-    /* ARM callers assume that dcache_* functions cannot fail. */
-    return 0;
 }

-static inline int clean_and_invalidate_dcache_va_range
+static inline void clean_and_invalidate_dcache_va_range
     (const void *p, unsigned long size)
 {
     size_t cacheline_mask = dcache_line_bytes - 1;
     unsigned long idx = 0;

     if ( !size )
-        return 0;
+        return;

     /* Passing a region that wraps around is illegal */
     ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p);
@@ -235,8 +231,6 @@ static inline int clean_and_invalidate_dcache_va_range
           idx += dcache_line_bytes, size -= dcache_line_bytes )
         asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p + idx));
     dsb(sy);           /* So we know the flushes happen before continuing */
-    /* ARM callers assume that dcache_* functions cannot fail. */
-    return 0;
 }

 /* Macros for flushing a single small item.  The predicate is always
diff --git a/xen/arch/x86/include/asm/flushtlb.h b/xen/arch/x86/include/asm/flushtlb.h
index bb0ad58db4..d0c9120b5f 100644
--- a/xen/arch/x86/include/asm/flushtlb.h
+++ b/xen/arch/x86/include/asm/flushtlb.h
@@ -182,21 +182,6 @@ void flush_area_mask(const cpumask_t *mask, const void *va,
 }

 static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache) {}
-static inline int invalidate_dcache_va_range(const void *p,
-                                             unsigned long size)
-{ return -EOPNOTSUPP; }
-static inline int clean_and_invalidate_dcache_va_range(const void *p,
-                                                       unsigned long size)
-{
-    unsigned int order = get_order_from_bytes(size);
-    /* sub-page granularity support needs to be added if necessary */
-    flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order));
-    return 0;
-}
-static inline int clean_dcache_va_range(const void *p, unsigned long size)
-{
-    return clean_and_invalidate_dcache_va_range(p, size);
-}

 unsigned int guest_flush_tlb_flags(const struct domain *d);
 void guest_flush_tlb_mask(const struct domain *d, const cpumask_t *mask);
diff --git a/xen/arch/x86/include/asm/grant_table.h b/xen/arch/x86/include/asm/grant_table.h
index 5c23cec90c..60a6dbb231 100644
--- a/xen/arch/x86/include/asm/grant_table.h
+++ b/xen/arch/x86/include/asm/grant_table.h
@@ -72,4 +72,20 @@ static inline void gnttab_clear_flags(struct domain *d,
 #define gnttab_need_iommu_mapping(d)                 \
     (!paging_mode_translate(d) && need_iommu_pt_sync(d))

+static inline int arch_grant_cache_flush(unsigned int op, const void *p,
+                                         unsigned long size)
+{
+    unsigned int order;
+
+    if ( !(op & GNTTAB_CACHE_CLEAN) )
+        return -EOPNOTSUPP;
+
+    order = get_order_from_bytes(size);
+    /* sub-page granularity support needs to be added if necessary */
+    (void) flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order));
+
+    return 0;
+}
+
+
 #endif /* __ASM_GRANT_TABLE_H__ */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 37b178a67b..0df663944f 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3575,14 +3575,7 @@ static int _cache_flush(const gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref
     v = map_domain_page(mfn);
     v += cflush->offset;

-    if ( (cflush->op & GNTTAB_CACHE_INVAL) && (cflush->op & GNTTAB_CACHE_CLEAN) )
-        ret = clean_and_invalidate_dcache_va_range(v, cflush->length);
-    else if ( cflush->op & GNTTAB_CACHE_INVAL )
-        ret = invalidate_dcache_va_range(v, cflush->length);
-    else if ( cflush->op & GNTTAB_CACHE_CLEAN )
-        ret = clean_dcache_va_range(v, cflush->length);
-    else
-        ret = 0;
+    ret = arch_grant_cache_flush(cflush->op, v, cflush->length);

     if ( d != owner )
     {
On 2024-02-26 11:51, Jan Beulich wrote:
> On 23.02.2024 08:58, Nicola Vetrini wrote:
>> On 2024-02-19 16:14, Nicola Vetrini wrote:
>>> The cache clearing and invalidation helpers in x86 and Arm didn't
>>> comply with MISRA C Rule 17.7: "The value returned by a function
>>> having non-void return type shall be used". On Arm they
>>> were always returning 0, while some in x86 returned -EOPNOTSUPP
>>> and in common/grant_table the return value is saved.
>>>
>>> As a consequence, a common helper arch_grant_cache_flush that returns
>>> an integer is introduced, so that each architecture can choose
>>> whether
>>> to
>>> return an error value on certain conditions, and the helpers have
>>> either
>>> been changed to return void (on Arm) or deleted entirely (on x86).
>>>
>>> Signed-off-by: Julien Grall <julien@xen.org>
>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>>> ---
>>> The original refactor idea came from Julien Grall in [1]; I edited
>>> that
>>> proposal
>>> to fix build errors.
>>>
>>> I did introduce a cast to void for the call to flush_area_local on
>>> x86,
>>> because
>>> even before this patch the return value of that function wasn't
>>> checked
>>> in all
>>> but one use in x86/smp.c, and in this context the helper (perhaps
>>> incidentally)
>>> ignored the return value of flush_area_local.
>>>
>>> [1]
>>> https://lore.kernel.org/xen-devel/09589e8f-77b6-47f7-b5bd-cf485e4b60f1@xen.org/
>>> ---
>>>  xen/arch/arm/include/asm/page.h     | 33 ++++++++++++++++++-----------
>>>  xen/arch/x86/include/asm/flushtlb.h | 23 ++++++++++----------
>>>  xen/common/grant_table.c            |  9 +-------
>>>  3 files changed, 34 insertions(+), 31 deletions(-)
>>>
>>
>> I'll put this patch in the backlog at the moment: too many intricacies
>> while trying to untangle xen/flushtlb from xen/mm.h, and there are
>> easier cases that can be done faster. If someone is interested I can
>> post the partial work I've done so far, even though it doesn't
>> build on x86.
>
> This
> https://lists.xen.org/archives/html/xen-devel/2024-02/msg01513.html
> may be of interest to you in the context here.
>

I'll take a look, thanks.
diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/page.h
index 69f817d1e68a..e90c9de3616e 100644
--- a/xen/arch/arm/include/asm/page.h
+++ b/xen/arch/arm/include/asm/page.h
@@ -123,6 +123,7 @@

 #ifndef __ASSEMBLY__

+#include <public/grant_table.h>
 #include <xen/errno.h>
 #include <xen/types.h>
 #include <xen/lib.h>
@@ -159,13 +160,13 @@ static inline size_t read_dcache_line_bytes(void)
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes.
  */
-static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
+static inline void invalidate_dcache_va_range(const void *p, unsigned long size)
 {
     size_t cacheline_mask = dcache_line_bytes - 1;
     unsigned long idx = 0;

     if ( !size )
-        return 0;
+        return;

     /* Passing a region that wraps around is illegal */
     ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p);
@@ -188,17 +189,15 @@ static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
         asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p + idx));

     dsb(sy);           /* So we know the flushes happen before continuing */
-
-    return 0;
 }

-static inline int clean_dcache_va_range(const void *p, unsigned long size)
+static inline void clean_dcache_va_range(const void *p, unsigned long size)
 {
     size_t cacheline_mask = dcache_line_bytes - 1;
     unsigned long idx = 0;

     if ( !size )
-        return 0;
+        return;

     /* Passing a region that wraps around is illegal */
     ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p);
@@ -211,18 +210,16 @@ static inline int clean_dcache_va_range(const void *p, unsigned long size)
           idx += dcache_line_bytes, size -= dcache_line_bytes )
         asm volatile (__clean_dcache_one(0) : : "r" (p + idx));
     dsb(sy);           /* So we know the flushes happen before continuing */
-    /* ARM callers assume that dcache_* functions cannot fail. */
-    return 0;
 }

-static inline int clean_and_invalidate_dcache_va_range
+static inline void clean_and_invalidate_dcache_va_range
     (const void *p, unsigned long size)
 {
     size_t cacheline_mask = dcache_line_bytes - 1;
     unsigned long idx = 0;

     if ( !size )
-        return 0;
+        return;

     /* Passing a region that wraps around is illegal */
     ASSERT(((uintptr_t)p + size - 1) >= (uintptr_t)p);
@@ -235,8 +232,6 @@ static inline int clean_and_invalidate_dcache_va_range
           idx += dcache_line_bytes, size -= dcache_line_bytes )
         asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p + idx));
     dsb(sy);           /* So we know the flushes happen before continuing */
-    /* ARM callers assume that dcache_* functions cannot fail. */
-    return 0;
 }

 /* Macros for flushing a single small item.  The predicate is always
@@ -266,6 +261,20 @@ static inline int clean_and_invalidate_dcache_va_range
         : : "r" (_p), "m" (*_p));                       \
 } while (0)

+static inline int arch_grant_cache_flush(unsigned int op, const void *p,
+                                         unsigned long size)
+{
+    if ( (op & GNTTAB_CACHE_INVAL) && (op & GNTTAB_CACHE_CLEAN) )
+        clean_and_invalidate_dcache_va_range(p, size);
+    else if ( op & GNTTAB_CACHE_INVAL )
+        invalidate_dcache_va_range(p, size);
+    else if ( op & GNTTAB_CACHE_CLEAN )
+        clean_dcache_va_range(p, size);
+
+    /* ARM callers assume that dcache_* functions cannot fail. */
+    return 0;
+}
+
 /*
  * Write a pagetable entry.
  *
diff --git a/xen/arch/x86/include/asm/flushtlb.h b/xen/arch/x86/include/asm/flushtlb.h
index bb0ad58db49b..c37bf4455714 100644
--- a/xen/arch/x86/include/asm/flushtlb.h
+++ b/xen/arch/x86/include/asm/flushtlb.h
@@ -10,6 +10,7 @@
 #ifndef __FLUSHTLB_H__
 #define __FLUSHTLB_H__

+#include <public/grant_table.h>
 #include <xen/mm.h>
 #include <xen/percpu.h>
 #include <xen/smp.h>
@@ -182,21 +183,21 @@ void flush_area_mask(const cpumask_t *mask, const void *va,
 }

 static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache) {}
-static inline int invalidate_dcache_va_range(const void *p,
-                                             unsigned long size)
-{ return -EOPNOTSUPP; }
-static inline int clean_and_invalidate_dcache_va_range(const void *p,
-                                                       unsigned long size)
+
+static inline int arch_grant_cache_flush(unsigned int op, const void *p,
+                                         unsigned long size)
 {
-    unsigned int order = get_order_from_bytes(size);
+    unsigned int order;
+
+    if ( !(op & GNTTAB_CACHE_CLEAN) )
+        return -EOPNOTSUPP;
+
+    order = get_order_from_bytes(size);
     /* sub-page granularity support needs to be added if necessary */
-    flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order));
+    (void) flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order));
+
     return 0;
 }
-static inline int clean_dcache_va_range(const void *p, unsigned long size)
-{
-    return clean_and_invalidate_dcache_va_range(p, size);
-}

 unsigned int guest_flush_tlb_flags(const struct domain *d);
 void guest_flush_tlb_mask(const struct domain *d, const cpumask_t *mask);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5721eab22561..8615ea144bb3 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3572,14 +3572,7 @@ static int _cache_flush(const gnttab_cache_flush_t *cflush, grant_ref_t *cur_ref
     v = map_domain_page(mfn);
     v += cflush->offset;

-    if ( (cflush->op & GNTTAB_CACHE_INVAL) && (cflush->op & GNTTAB_CACHE_CLEAN) )
-        ret = clean_and_invalidate_dcache_va_range(v, cflush->length);
-    else if ( cflush->op & GNTTAB_CACHE_INVAL )
-        ret = invalidate_dcache_va_range(v, cflush->length);
-    else if ( cflush->op & GNTTAB_CACHE_CLEAN )
-        ret = clean_dcache_va_range(v, cflush->length);
-    else
-        ret = 0;
+    ret = arch_grant_cache_flush(cflush->op, v, cflush->length);

     if ( d != owner )
     {