Message ID | 20220923155542.1212814-1-vschneid@redhat.com (mailing list archive)
---|---
State | Not Applicable
Delegated to: | Netdev Maintainers
Series | sched, net: NUMA-aware CPU spreading interface
Context | Check | Description
---|---|---
netdev/tree_selection | success | Guessing tree name failed - patch did not apply
On Fri, Sep 23, 2022 at 04:55:37PM +0100, Valentin Schneider wrote:
> for_each_cpu_and() is very convenient as it saves having to allocate a
> temporary cpumask to store the result of cpumask_and(). The same issue
> applies to cpumask_andnot() which doesn't actually need temporary storage
> for iteration purposes.
>
> Following what has been done for for_each_cpu_and(), introduce
> for_each_cpu_andnot().
>
> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
> ---
>  include/linux/cpumask.h | 39 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
>
> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
> index 1b442fb2001f..4c69e338bb8c 100644
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -238,6 +238,25 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
>  			nr_cpumask_bits, n + 1);
>  }
>
> +/**
> + * cpumask_next_andnot - get the next cpu in *src1p & ~*src2p
> + * @n: the cpu prior to the place to search (ie. return will be > @n)
> + * @src1p: the first cpumask pointer
> + * @src2p: the second cpumask pointer
> + *
> + * Returns >= nr_cpu_ids if no further cpus set in *src1p & ~*src2p
> + */
> +static inline
> +unsigned int cpumask_next_andnot(int n, const struct cpumask *src1p,
> +				 const struct cpumask *src2p)
> +{
> +	/* -1 is a legal arg here. */
> +	if (n != -1)
> +		cpumask_check(n);

This is wrong. n-1 should be illegal here. The correct check is:

	cpumask_check(n+1);

> +	return find_next_andnot_bit(cpumask_bits(src1p), cpumask_bits(src2p),
> +				    nr_cpumask_bits, n + 1);
> +}
> +
>  /**
>   * for_each_cpu - iterate over every cpu in a mask
>   * @cpu: the (optionally unsigned) integer iterator
> @@ -317,6 +336,26 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta
>  	 (cpu) = cpumask_next_and((cpu), (mask1), (mask2)),	\
>  	 (cpu) < nr_cpu_ids;)
>
> +/**
> + * for_each_cpu_andnot - iterate over every cpu present in one mask, excluding
> + *			 those present in another.
> + * @cpu: the (optionally unsigned) integer iterator
> + * @mask1: the first cpumask pointer
> + * @mask2: the second cpumask pointer
> + *
> + * This saves a temporary CPU mask in many places. It is equivalent to:
> + *	struct cpumask tmp;
> + *	cpumask_andnot(&tmp, &mask1, &mask2);
> + *	for_each_cpu(cpu, &tmp)
> + *		...
> + *
> + * After the loop, cpu is >= nr_cpu_ids.
> + */
> +#define for_each_cpu_andnot(cpu, mask1, mask2)				\
> +	for ((cpu) = -1;						\
> +	     (cpu) = cpumask_next_andnot((cpu), (mask1), (mask2)),	\
> +	     (cpu) < nr_cpu_ids;)

This would raise a cpumask_check() warning at the very last iteration.
Because cpu is initialized inside the loop, you don't need to check it
at all. You can do it like this:

#define for_each_cpu_andnot(cpu, mask1, mask2)				\
	for_each_andnot_bit(...)

Check this series for details (and please review).
https://lore.kernel.org/all/20220919210559.1509179-8-yury.norov@gmail.com/T/

Thanks,
Yury
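Yury's reply only sketches the shape of that rewrite. Filled out with the for_each_andnot_bit() / find_next_andnot_bit() helpers from the bitmap series he links (their form taken from that series, not from this patch), it would look roughly like:

	/*
	 * Sketch only: assumes for_each_andnot_bit() from the bitmap series
	 * linked above. The iterator starts at 0 and every value handed to the
	 * loop body comes from find_next_andnot_bit(), so there is no caller
	 * supplied index left to validate with cpumask_check().
	 */
	#define for_each_cpu_andnot(cpu, mask1, mask2)				\
		for_each_andnot_bit((cpu), cpumask_bits(mask1),			\
				    cpumask_bits(mask2), nr_cpumask_bits)

	/* The bitmap series defines the underlying iterator roughly as: */
	#define for_each_andnot_bit(bit, addr1, addr2, size)			\
		for ((bit) = 0;							\
		     (bit) = find_next_andnot_bit((addr1), (addr2), (size), (bit)), \
		     (bit) < (size);						\
		     (bit)++)

Because the iterator is seeded inside the macro rather than taken from the caller, the special-casing of -1 and the end-of-mask warning both go away.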
On 25/09/22 08:23, Yury Norov wrote:
> On Fri, Sep 23, 2022 at 04:55:37PM +0100, Valentin Schneider wrote:
>> +/**
>> + * for_each_cpu_andnot - iterate over every cpu present in one mask, excluding
>> + *			  those present in another.
>> + * @cpu: the (optionally unsigned) integer iterator
>> + * @mask1: the first cpumask pointer
>> + * @mask2: the second cpumask pointer
>> + *
>> + * This saves a temporary CPU mask in many places. It is equivalent to:
>> + *	struct cpumask tmp;
>> + *	cpumask_andnot(&tmp, &mask1, &mask2);
>> + *	for_each_cpu(cpu, &tmp)
>> + *		...
>> + *
>> + * After the loop, cpu is >= nr_cpu_ids.
>> + */
>> +#define for_each_cpu_andnot(cpu, mask1, mask2)				\
>> +	for ((cpu) = -1;						\
>> +	     (cpu) = cpumask_next_andnot((cpu), (mask1), (mask2)),	\
>> +	     (cpu) < nr_cpu_ids;)
>
> This would raise a cpumask_check() warning at the very last iteration.
> Because cpu is initialized inside the loop, you don't need to check it
> at all. You can do it like this:
>
> #define for_each_cpu_andnot(cpu, mask1, mask2)				\
> 	for_each_andnot_bit(...)
>
> Check this series for details (and please review).
> https://lore.kernel.org/all/20220919210559.1509179-8-yury.norov@gmail.com/T/

Thanks, I'll have a look.

> Thanks,
> Yury
On Tue, Sep 27, 2022 at 05:45:04PM +0100, Valentin Schneider wrote:
> On 25/09/22 08:23, Yury Norov wrote:
> > On Fri, Sep 23, 2022 at 04:55:37PM +0100, Valentin Schneider wrote:
> >> +/**
> >> + * for_each_cpu_andnot - iterate over every cpu present in one mask, excluding
> >> + *			  those present in another.
> >> + * @cpu: the (optionally unsigned) integer iterator
> >> + * @mask1: the first cpumask pointer
> >> + * @mask2: the second cpumask pointer
> >> + *
> >> + * This saves a temporary CPU mask in many places. It is equivalent to:
> >> + *	struct cpumask tmp;
> >> + *	cpumask_andnot(&tmp, &mask1, &mask2);
> >> + *	for_each_cpu(cpu, &tmp)
> >> + *		...
> >> + *
> >> + * After the loop, cpu is >= nr_cpu_ids.
> >> + */
> >> +#define for_each_cpu_andnot(cpu, mask1, mask2)				\
> >> +	for ((cpu) = -1;						\
> >> +	     (cpu) = cpumask_next_andnot((cpu), (mask1), (mask2)),	\
> >> +	     (cpu) < nr_cpu_ids;)
> >
> > This would raise a cpumask_check() warning at the very last iteration.
> > Because cpu is initialized inside the loop, you don't need to check it
> > at all. You can do it like this:
> >
> > #define for_each_cpu_andnot(cpu, mask1, mask2)				\
> > 	for_each_andnot_bit(...)
> >
> > Check this series for details (and please review).
> > https://lore.kernel.org/all/20220919210559.1509179-8-yury.norov@gmail.com/T/
> >
>
> Thanks, I'll have a look.

Also, if you send the first 4 patches as a separate series on top of
bitmap-for-next, I'll be able to include them in bitmap-for-next and
then in the 6.1 pull request.

Thanks,
Yury
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 1b442fb2001f..4c69e338bb8c 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -238,6 +238,25 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
 			nr_cpumask_bits, n + 1);
 }
 
+/**
+ * cpumask_next_andnot - get the next cpu in *src1p & ~*src2p
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @src1p: the first cpumask pointer
+ * @src2p: the second cpumask pointer
+ *
+ * Returns >= nr_cpu_ids if no further cpus set in *src1p & ~*src2p
+ */
+static inline
+unsigned int cpumask_next_andnot(int n, const struct cpumask *src1p,
+				 const struct cpumask *src2p)
+{
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpumask_check(n);
+	return find_next_andnot_bit(cpumask_bits(src1p), cpumask_bits(src2p),
+				    nr_cpumask_bits, n + 1);
+}
+
 /**
  * for_each_cpu - iterate over every cpu in a mask
  * @cpu: the (optionally unsigned) integer iterator
@@ -317,6 +336,26 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta
 	 (cpu) = cpumask_next_and((cpu), (mask1), (mask2)),	\
 	 (cpu) < nr_cpu_ids;)
 
+/**
+ * for_each_cpu_andnot - iterate over every cpu present in one mask, excluding
+ *			 those present in another.
+ * @cpu: the (optionally unsigned) integer iterator
+ * @mask1: the first cpumask pointer
+ * @mask2: the second cpumask pointer
+ *
+ * This saves a temporary CPU mask in many places. It is equivalent to:
+ *	struct cpumask tmp;
+ *	cpumask_andnot(&tmp, &mask1, &mask2);
+ *	for_each_cpu(cpu, &tmp)
+ *		...
+ *
+ * After the loop, cpu is >= nr_cpu_ids.
+ */
+#define for_each_cpu_andnot(cpu, mask1, mask2)				\
+	for ((cpu) = -1;						\
+	     (cpu) = cpumask_next_andnot((cpu), (mask1), (mask2)),	\
+	     (cpu) < nr_cpu_ids;)
+
 /**
  * cpumask_any_but - return a "random" in a cpumask, but not this one.
  * @mask: the cpumask to search
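As a usage illustration (hypothetical caller, not part of the patch; the helper name and the `visited` mask are made up for this sketch), the new macro lets code walk "A and not B" with no scratch cpumask at all:

	/* Hypothetical example: visit online CPUs not already handled. */
	static void walk_remaining_cpus(const struct cpumask *visited)
	{
		unsigned int cpu;

		for_each_cpu_andnot(cpu, cpu_online_mask, visited)
			pr_info("cpu %u still needs handling\n", cpu);
	}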
for_each_cpu_and() is very convenient as it saves having to allocate a
temporary cpumask to store the result of cpumask_and(). The same issue
applies to cpumask_andnot() which doesn't actually need temporary storage
for iteration purposes.

Following what has been done for for_each_cpu_and(), introduce
for_each_cpu_andnot().

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 include/linux/cpumask.h | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)
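For comparison, the pattern the commit message alludes to, i.e. what such a walk looks like without the new macro, is sketched below (hypothetical function, kernel context assumed). With CONFIG_CPUMASK_OFFSTACK the temporary mask is a real allocation that can fail:

	/* Without for_each_cpu_andnot(): needs scratch storage for the result. */
	static int old_style_walk(const struct cpumask *a, const struct cpumask *b)
	{
		cpumask_var_t tmp;
		unsigned int cpu;

		if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
			return -ENOMEM;

		cpumask_andnot(tmp, a, b);
		for_each_cpu(cpu, tmp)
			pr_info("cpu %u\n", cpu);

		free_cpumask_var(tmp);
		return 0;
	}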