From patchwork Mon Sep 19 21:05:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Norov X-Patchwork-Id: 12981020 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B677C6FA82 for ; Mon, 19 Sep 2022 21:06:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229865AbiISVGW (ORCPT ); Mon, 19 Sep 2022 17:06:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229783AbiISVGU (ORCPT ); Mon, 19 Sep 2022 17:06:20 -0400 Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com [IPv6:2607:f8b0:4864:20::735]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE7D1E82; Mon, 19 Sep 2022 14:06:19 -0700 (PDT) Received: by mail-qk1-x735.google.com with SMTP id i3so379484qkl.3; Mon, 19 Sep 2022 14:06:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date; bh=i+YbDv67G8LotfUM9oVF+1zCzZ8Q2PBCNdqV7Fv3SSs=; b=XvizBUqTaZHLGwniEtLDcQ8fTNvupZjxri9ST9XDNKoy8R9WBWbpiGtkKO14VVHWdp 5OdQM3cDqQrqUOgmEoewC2W4Zwa3THiaYYIqDNkaqEIzPgADE7df69pD7wpiAcEB945b 9dZMQ57yChyKm2ptCOlk6Izsi5I6JcTsCo4UPcidsn3ppFo8PP38FwnjsrjGcEUQXgcG VI0ccTMUF3TuhI/54zPshHNDE7hpTF4OfxMR3D1rGwQcBhZG23WjKYTohd8W3fp3EiwI nYBlAABn1wsLble7eFr7odR+y6r55FLhK5O4X6OGpe5LF2G3rkvggd1W778QyL7zJY+o YhLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date; 
bh=i+YbDv67G8LotfUM9oVF+1zCzZ8Q2PBCNdqV7Fv3SSs=; b=muGAIYzpc9LVjY1kK9BH8ZuTrDuwNdDW6byNosNmsLIEAu/lNNUXlo5oydDsEdr6l9 driE/Ep4n4ToN5nnZjhaNfCUOwi5V6By33OpWW8oC4LG0h0RTK3G/XUcGKpdirpwrjnb cRpe/sglRLD+U2EI5HS3+ws53+zofOQcQSOslIG/px+ctXRouWTKWA88mN5+GBz+eBMM 9ImQ+JA9MYsK9qsoIZ0JfXiWoy2dbKOuozMxxqum8+Gozm0o1l0+UxCEX6x+NTkCzYyd jx6QyFKbvt9o9peSaWumvjinhp4GjA0OT1Ytabk68ezt7Wcm1WfoCSWV3JdigdgYA7hJ Y6+w== X-Gm-Message-State: ACrzQf3eMfo0aIUedkBbUyQaVQ7kgrz+zplmPyCFu5KQViVcZ0L832/o B8sFDyWBfd6CGkE13KhSLjEEpTLmkr8= X-Google-Smtp-Source: AMsMyM6AnoCFRT2WphLU9NjB2k6zkhfc82gC0Z5/g51Tdjr8eqzeB0kdHzXuIIdPw/f7twURsaX7zA== X-Received: by 2002:a05:620a:424a:b0:6be:74ee:f093 with SMTP id w10-20020a05620a424a00b006be74eef093mr14409054qko.175.1663621578697; Mon, 19 Sep 2022 14:06:18 -0700 (PDT) Received: from localhost ([2601:4c1:c100:2270:bb7d:3b54:df44:5476]) by smtp.gmail.com with ESMTPSA id x26-20020a05620a0b5a00b006ce3e4fb328sm13101014qkg.42.2022.09.19.14.06.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Sep 2022 14:06:18 -0700 (PDT) From: Yury Norov To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Andy Shevchenko , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rasmus Villemoes , Yury Norov Subject: [PATCH 1/7] cpumask: fix checking valid cpu range Date: Mon, 19 Sep 2022 14:05:53 -0700 Message-Id: <20220919210559.1509179-2-yury.norov@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com> References: <20220919210559.1509179-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The range of valid CPUs is [0, nr_cpu_ids). Some cpumask functions are passed with a shifted CPU index, and for them, the valid range is [-1, nr_cpu_ids-1). Currently for those functions, we check the index against [-1, nr_cpu_ids), which is wrong. 
Signed-off-by: Yury Norov --- include/linux/cpumask.h | 19 ++++++++----------- 1 file changed, 8 insertions(+), 11 deletions(-) diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index e4f9136a4a63..a1cd4eb1a3d6 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -174,9 +174,8 @@ static inline unsigned int cpumask_last(const struct cpumask *srcp) static inline unsigned int cpumask_next(int n, const struct cpumask *srcp) { - /* -1 is a legal arg here. */ - if (n != -1) - cpumask_check(n); + /* n is a prior cpu */ + cpumask_check(n + 1); return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n + 1); } @@ -189,9 +188,8 @@ unsigned int cpumask_next(int n, const struct cpumask *srcp) */ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp) { - /* -1 is a legal arg here. */ - if (n != -1) - cpumask_check(n); + /* n is a prior cpu */ + cpumask_check(n + 1); return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1); } @@ -231,9 +229,8 @@ static inline unsigned int cpumask_next_and(int n, const struct cpumask *src1p, const struct cpumask *src2p) { - /* -1 is a legal arg here. 
*/
-	if (n != -1)
-		cpumask_check(n);
+	/* n is a prior cpu */
+	cpumask_check(n + 1);
 	return find_next_and_bit(cpumask_bits(src1p), cpumask_bits(src2p),
 		nr_cpumask_bits, n + 1);
 }
@@ -267,8 +264,8 @@ static inline
 unsigned int cpumask_next_wrap(int n, const struct cpumask *mask,
 			       int start, bool wrap)
 {
 	cpumask_check(start);
-	if (n != -1)
-		cpumask_check(n);
+	/* n is a prior cpu */
+	cpumask_check(n + 1);
 
 	/*
 	 * Return the first available CPU when wrapping, or when starting before cpu0,

From patchwork Mon Sep 19 21:05:54 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12981021
X-Patchwork-Delegate: kuba@kernel.org
From: Yury Norov
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Andy Shevchenko, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Rasmus Villemoes, Yury Norov
Subject: [PATCH 2/7] net: fix cpu_max_bits_warn() usage in netif_attrmask_next{,_and}
Date: Mon, 19 Sep 2022 14:05:54 -0700
Message-Id: <20220919210559.1509179-3-yury.norov@gmail.com>
In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com>
References: <20220919210559.1509179-1-yury.norov@gmail.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

These functions must be passed a CPU index one prior to the index the search starts from, so the valid input range is [-1, nr_cpu_ids-1). However, the code checks against [-1, nr_cpu_ids).

Signed-off-by: Yury Norov
Acked-by: Jakub Kicinski
---
 include/linux/netdevice.h | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 05d6f3facd5a..4d6d5a2dd82e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3643,9 +3643,8 @@ static inline bool netif_attr_test_online(unsigned long j,
 static inline unsigned int netif_attrmask_next(int n, const unsigned long *srcp,
 					       unsigned int nr_bits)
 {
-	/* -1 is a legal arg here. */
-	if (n != -1)
-		cpu_max_bits_warn(n, nr_bits);
+	/* n is a prior cpu */
+	cpu_max_bits_warn(n + 1, nr_bits);
 
 	if (srcp)
 		return find_next_bit(srcp, nr_bits, n + 1);
@@ -3666,9 +3665,8 @@ static inline int netif_attrmask_next_and(int n, const unsigned long *src1p,
 					  const unsigned long *src2p,
 					  unsigned int nr_bits)
 {
-	/* -1 is a legal arg here.
*/
-	if (n != -1)
-		cpu_max_bits_warn(n, nr_bits);
+	/* n is a prior cpu */
+	cpu_max_bits_warn(n + 1, nr_bits);
 
 	if (src1p && src2p)
 		return find_next_and_bit(src1p, src2p, nr_bits, n + 1);

From patchwork Mon Sep 19 21:05:55 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12981022
From: Yury Norov
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Andy Shevchenko, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Rasmus Villemoes, Yury Norov
Subject: [PATCH 3/7] cpumask: switch for_each_cpu{,_not} to use for_each_bit()
Date: Mon, 19 Sep 2022 14:05:55 -0700
Message-Id: <20220919210559.1509179-4-yury.norov@gmail.com>
In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com>
References: <20220919210559.1509179-1-yury.norov@gmail.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

The difference between for_each_cpu() and for_each_set_bit() is that the former uses cpumask_next() instead of find_next_bit(), and so calls cpumask_check().
This check is useless because the iterator value is not provided by user. It generates false-positives for the very last iteration of for_each_cpu(). Signed-off-by: Yury Norov --- include/linux/cpumask.h | 12 +++--------- include/linux/find.h | 5 +++++ 2 files changed, 8 insertions(+), 9 deletions(-) diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index a1cd4eb1a3d6..3a9566f1373a 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -243,9 +243,7 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p, * After the loop, cpu is >= nr_cpu_ids. */ #define for_each_cpu(cpu, mask) \ - for ((cpu) = -1; \ - (cpu) = cpumask_next((cpu), (mask)), \ - (cpu) < nr_cpu_ids;) + for_each_set_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) /** * for_each_cpu_not - iterate over every cpu in a complemented mask @@ -255,9 +253,7 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p, * After the loop, cpu is >= nr_cpu_ids. */ #define for_each_cpu_not(cpu, mask) \ - for ((cpu) = -1; \ - (cpu) = cpumask_next_zero((cpu), (mask)), \ - (cpu) < nr_cpu_ids;) + for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) #if NR_CPUS == 1 static inline @@ -310,9 +306,7 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta * After the loop, cpu is >= nr_cpu_ids. */ #define for_each_cpu_and(cpu, mask1, mask2) \ - for ((cpu) = -1; \ - (cpu) = cpumask_next_and((cpu), (mask1), (mask2)), \ - (cpu) < nr_cpu_ids;) + for_each_and_bit(cpu, cpumask_bits(mask1), cpumask_bits(mask2), nr_cpumask_bits) /** * cpumask_any_but - return a "random" in a cpumask, but not this one. 
diff --git a/include/linux/find.h b/include/linux/find.h
index b100944daba0..128615a3f93e 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -390,6 +390,11 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 	 (bit) < (size);					\
 	 (bit) = find_next_bit((addr), (size), (bit) + 1))
 
+#define for_each_and_bit(bit, addr1, addr2, size) \
+	for ((bit) = find_next_and_bit((addr1), (addr2), (size), 0);		\
+	     (bit) < (size);							\
+	     (bit) = find_next_and_bit((addr1), (addr2), (size), (bit) + 1))
+
 /* same as for_each_set_bit() but use bit as value to start with */
 #define for_each_set_bit_from(bit, addr, size) \
 	for ((bit) = find_next_bit((addr), (size), (bit));	\

From patchwork Mon Sep 19 21:05:56 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12981024
From: Yury Norov
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Andy Shevchenko, "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rasmus Villemoes , Yury Norov Subject: [PATCH 4/7] lib/find_bit: add find_next{,_and}_bit_wrap Date: Mon, 19 Sep 2022 14:05:56 -0700 Message-Id: <20220919210559.1509179-5-yury.norov@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com> References: <20220919210559.1509179-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The helper is better optimized for the worst case: in case of empty cpumask, current code traverses 2 * size: next = cpumask_next_and(prev, src1p, src2p); if (next >= nr_cpu_ids) next = cpumask_first_and(src1p, src2p); At bitmap level we can stop earlier after checking 'size + offset' bits. Signed-off-by: Yury Norov --- include/linux/find.h | 46 ++++++++++++++++++++++++++++++++++++++++++++ lib/cpumask.c | 12 +++--------- 2 files changed, 49 insertions(+), 9 deletions(-) diff --git a/include/linux/find.h b/include/linux/find.h index 128615a3f93e..77c087b7a451 100644 --- a/include/linux/find.h +++ b/include/linux/find.h @@ -290,6 +290,52 @@ unsigned long find_last_bit(const unsigned long *addr, unsigned long size) } #endif +/** + * find_next_and_bit_wrap - find the next set bit in both memory regions + * @addr1: The first address to base the search on + * @addr2: The second address to base the search on + * @size: The bitmap size in bits + * @offset: The bitnumber to start searching at + * + * Returns the bit number for the next set bit, or first set bit up to @offset + * If no bits are set, returns @size. + */ +static inline +unsigned long find_next_and_bit_wrap(const unsigned long *addr1, + const unsigned long *addr2, + unsigned long size, unsigned long offset) +{ + unsigned long bit = find_next_and_bit(addr1, addr2, size, offset); + + if (bit < size) + return bit; + + bit = find_first_and_bit(addr1, addr2, offset); + return bit < offset ? 
bit : size;
+}
+
+/**
+ * find_next_bit_wrap - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns the bit number for the next set bit, or the first set bit up to @offset.
+ * If no bits are set, returns @size.
+ */
+static inline
+unsigned long find_next_bit_wrap(const unsigned long *addr,
+				 unsigned long size, unsigned long offset)
+{
+	unsigned long bit = find_next_bit(addr, size, offset);
+
+	if (bit < size)
+		return bit;
+
+	bit = find_first_bit(addr, offset);
+	return bit < offset ? bit : size;
+}
+
 /**
  * find_next_clump8 - find next 8-bit clump with set bits in a memory region
  * @clump: location to store copy of found clump
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 2c4a63b6f03f..c7c392514fd3 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -166,10 +166,8 @@ unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 	/* NOTE: our first selection will skip 0. */
 	prev = __this_cpu_read(distribute_cpu_mask_prev);
 
-	next = cpumask_next_and(prev, src1p, src2p);
-	if (next >= nr_cpu_ids)
-		next = cpumask_first_and(src1p, src2p);
-
+	next = find_next_and_bit_wrap(cpumask_bits(src1p), cpumask_bits(src2p),
+					nr_cpumask_bits, prev + 1);
 	if (next < nr_cpu_ids)
 		__this_cpu_write(distribute_cpu_mask_prev, next);
@@ -183,11 +181,7 @@ unsigned int cpumask_any_distribute(const struct cpumask *srcp)
 	/* NOTE: our first selection will skip 0.
*/
 	prev = __this_cpu_read(distribute_cpu_mask_prev);
 
-	next = cpumask_next(prev, srcp);
-	if (next >= nr_cpu_ids)
-		next = cpumask_first(srcp);
-
+	next = find_next_bit_wrap(cpumask_bits(srcp), nr_cpumask_bits, prev + 1);
 	if (next < nr_cpu_ids)
 		__this_cpu_write(distribute_cpu_mask_prev, next);

From patchwork Mon Sep 19 21:05:57 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12981025
From: Yury Norov
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Andy Shevchenko, "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rasmus Villemoes , Yury Norov Subject: [PATCH 5/7] lib/bitmap: introduce for_each_set_bit_wrap() macro Date: Mon, 19 Sep 2022 14:05:57 -0700 Message-Id: <20220919210559.1509179-6-yury.norov@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com> References: <20220919210559.1509179-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Add for_each_set_bit_wrap() macro and use it in for_each_cpu_wrap(). The new macro is based on __for_each_wrap() iterator, which is simpler and smaller than cpumask_next_wrap(). Signed-off-by: Yury Norov --- include/linux/cpumask.h | 6 ++---- include/linux/find.h | 39 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 41 insertions(+), 4 deletions(-) diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index 3a9566f1373a..286804bfe3b7 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -286,10 +286,8 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta * * After the loop, cpu is >= nr_cpu_ids. */ -#define for_each_cpu_wrap(cpu, mask, start) \ - for ((cpu) = cpumask_next_wrap((start)-1, (mask), (start), false); \ - (cpu) < nr_cpumask_bits; \ - (cpu) = cpumask_next_wrap((cpu), (mask), (start), true)) +#define for_each_cpu_wrap(cpu, mask, start) \ + for_each_set_bit_wrap(cpu, cpumask_bits(mask), nr_cpumask_bits, start) /** * for_each_cpu_and - iterate over every cpu in both masks diff --git a/include/linux/find.h b/include/linux/find.h index 77c087b7a451..3b746a183216 100644 --- a/include/linux/find.h +++ b/include/linux/find.h @@ -336,6 +336,32 @@ unsigned long find_next_bit_wrap(const unsigned long *addr, return bit < offset ? bit : size; } +/* + * Helper for for_each_set_bit_wrap(). Make sure you're doing right thing + * before using it alone. 
+ */ +static inline +unsigned long __for_each_wrap(const unsigned long *bitmap, unsigned long size, + unsigned long start, unsigned long n) +{ + unsigned long bit; + + /* If not wrapped around */ + if (n > start) { + /* and have a bit, just return it. */ + bit = find_next_bit(bitmap, size, n); + if (bit < size) + return bit; + + /* Otherwise, wrap around and ... */ + n = 0; + } + + /* Search the other part. */ + bit = find_next_bit(bitmap, start, n); + return bit < start ? bit : size; +} + /** * find_next_clump8 - find next 8-bit clump with set bits in a memory region * @clump: location to store copy of found clump @@ -514,6 +540,19 @@ unsigned long find_next_bit_le(const void *addr, unsigned (b) = find_next_zero_bit((addr), (size), (e) + 1), \ (e) = find_next_bit((addr), (size), (b) + 1)) +/** + * for_each_set_bit_wrap - iterate over all set bits starting from @start, and + * wrapping around the end of bitmap. + * @bit: offset for current iteration + * @addr: bitmap address to base the search on + * @size: bitmap size in number of bits + * @start: Starting bit for bitmap traversing, wrapping around the bitmap end + */ +#define for_each_set_bit_wrap(bit, addr, size, start) \ + for ((bit) = find_next_bit_wrap((addr), (size), (start)); \ + (bit) < (size); \ + (bit) = __for_each_wrap((addr), (size), (start), (bit) + 1)) + /** * for_each_set_clump8 - iterate over bitmap for each 8-bit clump with set bits * @start: bit offset to start search and to store the current iteration offset From patchwork Mon Sep 19 21:05:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yury Norov X-Patchwork-Id: 12981026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67533ECAAD3 for ; Mon, 19 Sep 2022 21:06:47 +0000 (UTC) Received: 
X-Google-Smtp-Source: AMsMyM7fP8TbSx3kK4aZYzsUWeAW/6HAH29Mr7SG5Lh9R40UvTc+uQ55uk49TnFheccEvlfIHWC6hQ== X-Received: by 2002:a05:620a:15a1:b0:6cc:f925:7c89 with SMTP id f1-20020a05620a15a100b006ccf9257c89mr14168850qkk.319.1663621583617; Mon, 19 Sep 2022 14:06:23 -0700 (PDT) Received: from localhost ([2601:4c1:c100:2270:bb7d:3b54:df44:5476]) by smtp.gmail.com with ESMTPSA id m10-20020a05622a118a00b0035bb0cd479csm11984773qtk.40.2022.09.19.14.06.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 Sep 2022 14:06:23 -0700 (PDT) From: Yury Norov To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Andy Shevchenko , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rasmus Villemoes , Yury Norov Subject: [PATCH 6/7] lib/find: optimize for_each() macros Date: Mon, 19 Sep 2022 14:05:58 -0700 Message-Id: <20220919210559.1509179-7-yury.norov@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com> References: <20220919210559.1509179-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Moving an iterator of the macros inside conditional part of for-loop helps to generate a better code. It had been first implemented in commit 7baac8b91f9871ba ("cpumask: make for_each_cpu_mask a bit smaller"). Now that cpumask for-loops are the aliases to bitmap loops, it's worth to optimize them the same way. 
Bloat-o-meter says:
add/remove: 8/12 grow/shrink: 147/592 up/down: 4876/-24416 (-19540)

Signed-off-by: Yury Norov
---
 include/linux/find.h | 56 ++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 31 deletions(-)

diff --git a/include/linux/find.h b/include/linux/find.h
index 3b746a183216..0cdfab9734a6 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -458,31 +458,25 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 #endif

 #define for_each_set_bit(bit, addr, size) \
-	for ((bit) = find_next_bit((addr), (size), 0);		\
-	     (bit) < (size);					\
-	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+	for ((bit) = 0; (bit) = find_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)

 #define for_each_and_bit(bit, addr1, addr2, size) \
-	for ((bit) = find_next_and_bit((addr1), (addr2), (size), 0);		\
-	     (bit) < (size);							\
-	     (bit) = find_next_and_bit((addr1), (addr2), (size), (bit) + 1))
+	for ((bit) = 0;								\
+	     (bit) = find_next_and_bit((addr1), (addr2), (size), (bit)), (bit) < (size);\
+	     (bit)++)

 /* same as for_each_set_bit() but use bit as value to start with */
 #define for_each_set_bit_from(bit, addr, size) \
-	for ((bit) = find_next_bit((addr), (size), (bit));	\
-	     (bit) < (size);					\
-	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+	for (; (bit) = find_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)

 #define for_each_clear_bit(bit, addr, size) \
-	for ((bit) = find_next_zero_bit((addr), (size), 0);	\
-	     (bit) < (size);					\
-	     (bit) = find_next_zero_bit((addr), (size), (bit) + 1))
+	for ((bit) = 0;							\
+	     (bit) = find_next_zero_bit((addr), (size), (bit)), (bit) < (size);	\
+	     (bit)++)

 /* same as for_each_clear_bit() but use bit as value to start with */
 #define for_each_clear_bit_from(bit, addr, size) \
-	for ((bit) = find_next_zero_bit((addr), (size), (bit));	\
-	     (bit) < (size);					\
-	     (bit) = find_next_zero_bit((addr), (size), (bit) + 1))
+	for (; (bit) = find_next_zero_bit((addr), (size), (bit)), (bit) < (size); (bit)++)

 /**
  * for_each_set_bitrange - iterate over all set bit ranges [b; e)
@@ -492,11 +486,11 @@ unsigned long find_next_bit_le(const void *addr, unsigned
  * @size: bitmap size in number of bits
  */
 #define for_each_set_bitrange(b, e, addr, size)			\
-	for ((b) = find_next_bit((addr), (size), 0),		\
-	     (e) = find_next_zero_bit((addr), (size), (b) + 1);	\
+	for ((b) = 0;						\
+	     (b) = find_next_bit((addr), (size), b),		\
+	     (e) = find_next_zero_bit((addr), (size), (b) + 1),	\
 	     (b) < (size);					\
-	     (b) = find_next_bit((addr), (size), (e) + 1),	\
-	     (e) = find_next_zero_bit((addr), (size), (b) + 1))
+	     (b) = (e) + 1)

 /**
  * for_each_set_bitrange_from - iterate over all set bit ranges [b; e)
@@ -506,11 +500,11 @@ unsigned long find_next_bit_le(const void *addr, unsigned
  * @size: bitmap size in number of bits
  */
 #define for_each_set_bitrange_from(b, e, addr, size)		\
-	for ((b) = find_next_bit((addr), (size), (b)),		\
-	     (e) = find_next_zero_bit((addr), (size), (b) + 1);	\
+	for (;							\
+	     (b) = find_next_bit((addr), (size), (b)),		\
+	     (e) = find_next_zero_bit((addr), (size), (b) + 1),	\
 	     (b) < (size);					\
-	     (b) = find_next_bit((addr), (size), (e) + 1),	\
-	     (e) = find_next_zero_bit((addr), (size), (b) + 1))
+	     (b) = (e) + 1)

 /**
  * for_each_clear_bitrange - iterate over all unset bit ranges [b; e)
@@ -520,11 +514,11 @@ unsigned long find_next_bit_le(const void *addr, unsigned
  * @size: bitmap size in number of bits
  */
 #define for_each_clear_bitrange(b, e, addr, size)		\
-	for ((b) = find_next_zero_bit((addr), (size), 0),	\
-	     (e) = find_next_bit((addr), (size), (b) + 1);	\
+	for ((b) = 0;						\
+	     (b) = find_next_zero_bit((addr), (size), (b)),	\
+	     (e) = find_next_bit((addr), (size), (b) + 1),	\
 	     (b) < (size);					\
-	     (b) = find_next_zero_bit((addr), (size), (e) + 1),	\
-	     (e) = find_next_bit((addr), (size), (b) + 1))
+	     (b) = (e) + 1)

 /**
  * for_each_clear_bitrange_from - iterate over all unset bit ranges [b; e)
@@ -534,11 +528,11 @@ unsigned long find_next_bit_le(const void *addr, unsigned
  * @size: bitmap size in number of bits
  */
 #define for_each_clear_bitrange_from(b, e, addr, size)		\
-	for ((b) = find_next_zero_bit((addr), (size), (b)),	\
-	     (e) = find_next_bit((addr), (size), (b) + 1);	\
+	for (;							\
+	     (b) = find_next_zero_bit((addr), (size), (b)),	\
+	     (e) = find_next_bit((addr), (size), (b) + 1),	\
 	     (b) < (size);					\
-	     (b) = find_next_zero_bit((addr), (size), (e) + 1),	\
-	     (e) = find_next_bit((addr), (size), (b) + 1))
+	     (b) = (e) + 1)

 /**
  * for_each_set_bit_wrap - iterate over all set bits starting from @start, and

From patchwork Mon Sep 19 21:05:59 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12981027
From: Yury Norov
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	Andy Shevchenko, "David S. Miller", Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Rasmus Villemoes, Yury Norov
Subject: [PATCH 7/7] lib/bitmap: add tests for for_each() loops
Date: Mon, 19 Sep 2022 14:05:59 -0700
Message-Id: <20220919210559.1509179-8-yury.norov@gmail.com>
In-Reply-To: <20220919210559.1509179-1-yury.norov@gmail.com>
References: <20220919210559.1509179-1-yury.norov@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Currently we only have a test for test_for_each_set_clump8. Add basic
tests for the other for_each() loops.

Signed-off-by: Yury Norov
---
 lib/test_bitmap.c | 244 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 243 insertions(+), 1 deletion(-)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index da52dc759c95..a8005ad3bd58 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -726,6 +726,239 @@ static void __init test_for_each_set_clump8(void)
 		expect_eq_clump8(start, CLUMP_EXP_NUMBITS, clump_exp, &clump);
 }

+static void __init test_for_each_set_bit_wrap(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int wr, bit;
+
+	bitmap_zero(orig, 500);
+
+	/* Set individual bits */
+	for (bit = 0; bit < 500; bit += 10)
+		bitmap_set(orig, bit, 1);
+
+	/* Set range of bits */
+	bitmap_set(orig, 100, 50);
+
+	for (wr = 0; wr < 500; wr++) {
+		bitmap_zero(copy, 500);
+
+		for_each_set_bit_wrap(bit, orig, 500, wr)
+			bitmap_set(copy, bit, 1);
+
+		expect_eq_bitmap(orig, copy, 500);
+	}
+}
+
+static void __init test_for_each_set_bit(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int bit;
+
+	bitmap_zero(orig, 500);
+	bitmap_zero(copy, 500);
+
+	/* Set individual bits */
+	for (bit = 0; bit < 500; bit += 10)
+		bitmap_set(orig, bit, 1);
+
+	/* Set range of bits */
+	bitmap_set(orig, 100, 50);
+
+	for_each_set_bit(bit, orig, 500)
+		bitmap_set(copy, bit, 1);
+
+	expect_eq_bitmap(orig, copy, 500);
+}
+
+static void __init test_for_each_set_bit_from(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int wr, bit;
+
+	bitmap_zero(orig, 500);
+
+	/* Set individual bits */
+	for (bit = 0; bit < 500; bit += 10)
+		bitmap_set(orig, bit, 1);
+
+	/* Set range of bits */
+	bitmap_set(orig, 100, 50);
+
+	for (wr = 0; wr < 500; wr++) {
+		DECLARE_BITMAP(tmp, 500);
+
+		bitmap_zero(copy, 500);
+		bit = wr;
+
+		for_each_set_bit_from(bit, orig, 500)
+			bitmap_set(copy, bit, 1);
+
+		bitmap_copy(tmp, orig, 500);
+		bitmap_clear(tmp, 0, wr);
+		expect_eq_bitmap(tmp, copy, 500);
+	}
+}
+
+static void __init test_for_each_clear_bit(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int bit;
+
+	bitmap_fill(orig, 500);
+	bitmap_fill(copy, 500);
+
+	/* Clear individual bits */
+	for (bit = 0; bit < 500; bit += 10)
+		bitmap_clear(orig, bit, 1);
+
+	/* Clear range of bits */
+	bitmap_clear(orig, 100, 50);
+
+	for_each_clear_bit(bit, orig, 500)
+		bitmap_clear(copy, bit, 1);
+
+	expect_eq_bitmap(orig, copy, 500);
+}
+
+static void __init test_for_each_clear_bit_from(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int wr, bit;
+
+	bitmap_fill(orig, 500);
+
+	/* Clear individual bits */
+	for (bit = 0; bit < 500; bit += 10)
+		bitmap_clear(orig, bit, 1);
+
+	/* Clear range of bits */
+	bitmap_clear(orig, 100, 50);
+
+	for (wr = 0; wr < 500; wr++) {
+		DECLARE_BITMAP(tmp, 500);
+
+		bitmap_fill(copy, 500);
+		bit = wr;
+
+		for_each_clear_bit_from(bit, orig, 500)
+			bitmap_clear(copy, bit, 1);
+
+		bitmap_copy(tmp, orig, 500);
+		bitmap_set(tmp, 0, wr);
+		expect_eq_bitmap(tmp, copy, 500);
+	}
+}
+
+static void __init test_for_each_set_bitrange(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int s, e;
+
+	bitmap_zero(orig, 500);
+	bitmap_zero(copy, 500);
+
+	/* Set individual bits */
+	for (s = 0; s < 500; s += 10)
+		bitmap_set(orig, s, 1);
+
+	/* Set range of bits */
+	bitmap_set(orig, 100, 50);
+
+	for_each_set_bitrange(s, e, orig, 500)
+		bitmap_set(copy, s, e - s);
+
+	expect_eq_bitmap(orig, copy, 500);
+}
+
+static void __init test_for_each_clear_bitrange(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int s, e;
+
+	bitmap_fill(orig, 500);
+	bitmap_fill(copy, 500);
+
+	/* Clear individual bits */
+	for (s = 0; s < 500; s += 10)
+		bitmap_clear(orig, s, 1);
+
+	/* Clear range of bits */
+	bitmap_clear(orig, 100, 50);
+
+	for_each_clear_bitrange(s, e, orig, 500)
+		bitmap_clear(copy, s, e - s);
+
+	expect_eq_bitmap(orig, copy, 500);
+}
+
+static void __init test_for_each_set_bitrange_from(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int wr, s, e;
+
+	bitmap_zero(orig, 500);
+
+	/* Set individual bits */
+	for (s = 0; s < 500; s += 10)
+		bitmap_set(orig, s, 1);
+
+	/* Set range of bits */
+	bitmap_set(orig, 100, 50);
+
+	for (wr = 0; wr < 500; wr++) {
+		DECLARE_BITMAP(tmp, 500);
+
+		bitmap_zero(copy, 500);
+		s = wr;
+
+		for_each_set_bitrange_from(s, e, orig, 500)
+			bitmap_set(copy, s, e - s);
+
+		bitmap_copy(tmp, orig, 500);
+		bitmap_clear(tmp, 0, wr);
+		expect_eq_bitmap(tmp, copy, 500);
+	}
+}
+
+static void __init test_for_each_clear_bitrange_from(void)
+{
+	DECLARE_BITMAP(orig, 500);
+	DECLARE_BITMAP(copy, 500);
+	unsigned int wr, s, e;
+
+	bitmap_fill(orig, 500);
+
+	/* Clear individual bits */
+	for (s = 0; s < 500; s += 10)
+		bitmap_clear(orig, s, 1);
+
+	/* Set range of bits */
+	bitmap_set(orig, 100, 50);
+
+	for (wr = 0; wr < 500; wr++) {
+		DECLARE_BITMAP(tmp, 500);
+
+		bitmap_fill(copy, 500);
+		s = wr;
+
+		for_each_clear_bitrange_from(s, e, orig, 500)
+			bitmap_clear(copy, s, e - s);
+
+		bitmap_copy(tmp, orig, 500);
+		bitmap_set(tmp, 0, wr);
+		expect_eq_bitmap(tmp, copy, 500);
+	}
+}
+
 struct test_bitmap_cut {
 	unsigned int first;
 	unsigned int cut;
@@ -989,12 +1222,21 @@ static void __init selftest(void)
 	test_bitmap_parselist();
 	test_bitmap_printlist();
 	test_mem_optimisations();
-	test_for_each_set_clump8();
 	test_bitmap_cut();
 	test_bitmap_print_buf();
 	test_bitmap_const_eval();
 	test_find_nth_bit();
+	test_for_each_set_bit();
+	test_for_each_set_bit_from();
+	test_for_each_clear_bit();
+	test_for_each_clear_bit_from();
+	test_for_each_set_bitrange();
+	test_for_each_clear_bitrange();
+	test_for_each_set_bitrange_from();
+	test_for_each_clear_bitrange_from();
+	test_for_each_set_clump8();
+	test_for_each_set_bit_wrap();
 }

 KSTM_MODULE_LOADERS(test_bitmap);