From patchwork Sun Jan 23 18:38:35 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12721113
From: Yury Norov
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
    Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra, David Laight,
    Joe Perches, Dennis Zhou, Emil Renner Berthing, Nicholas Piggin,
    Matti Vaittinen, Alexey Klimov, linux-kernel@vger.kernel.org,
    Tariq Toukan, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: [PATCH 04/54] net: mellanox: fix open-coded for_each_set_bit()
Date: Sun, 23 Jan 2022 10:38:35 -0800
Message-Id: <20220123183925.1052919-5-yury.norov@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220123183925.1052919-1-yury.norov@gmail.com>
References: <20220123183925.1052919-1-yury.norov@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

The Mellanox mlx4 driver has an open-coded for_each_set_bit(). Fix it.

Signed-off-by: Yury Norov
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx4/cmd.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
index e10b7b04b894..c56d2194cbfc 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
@@ -1994,21 +1994,16 @@ static void mlx4_allocate_port_vpps(struct mlx4_dev *dev, int port)
 
 static int mlx4_master_activate_admin_state(struct mlx4_priv *priv, int slave)
 {
-	int port, err;
+	int p, port, err;
 	struct mlx4_vport_state *vp_admin;
 	struct mlx4_vport_oper_state *vp_oper;
 	struct mlx4_slave_state *slave_state =
 		&priv->mfunc.master.slave_state[slave];
 	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(
 			&priv->dev, slave);
-	int min_port = find_first_bit(actv_ports.ports,
-				      priv->dev.caps.num_ports) + 1;
-	int max_port = min_port - 1 +
-		bitmap_weight(actv_ports.ports, priv->dev.caps.num_ports);
 
-	for (port = min_port; port <= max_port; port++) {
-		if (!test_bit(port - 1, actv_ports.ports))
-			continue;
+	for_each_set_bit(p, actv_ports.ports, priv->dev.caps.num_ports) {
+		port = p + 1;
 		priv->mfunc.master.vf_oper[slave].smi_enabled[port] =
 			priv->mfunc.master.vf_admin[slave].enable_smi[port];
 		vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port];
@@ -2063,19 +2058,13 @@ static int mlx4_master_activate_admin_state(struct mlx4_priv *priv, int slave)
 
 static void mlx4_master_deactivate_admin_state(struct mlx4_priv *priv, int slave)
 {
-	int port;
+	int p, port;
 	struct mlx4_vport_oper_state *vp_oper;
 	struct mlx4_active_ports actv_ports = mlx4_get_active_ports(
 			&priv->dev, slave);
-	int min_port = find_first_bit(actv_ports.ports,
-				      priv->dev.caps.num_ports) + 1;
-	int max_port = min_port - 1 +
-		bitmap_weight(actv_ports.ports, priv->dev.caps.num_ports);
 
-
-	for (port = min_port; port <= max_port; port++) {
-		if (!test_bit(port - 1, actv_ports.ports))
-			continue;
+	for_each_set_bit(p, actv_ports.ports, priv->dev.caps.num_ports) {
+		port = p + 1;
 		priv->mfunc.master.vf_oper[slave].smi_enabled[port] =
 			MLX4_VF_SMI_DISABLED;
 		vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port];
From patchwork Sun Jan 23 18:38:49 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12721114
From: Yury Norov
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
    Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra, David Laight,
    Joe Perches, Dennis Zhou, Emil Renner Berthing, Nicholas Piggin,
    Matti Vaittinen, Alexey Klimov, linux-kernel@vger.kernel.org,
    Mike Marciniszyn, Dennis Dalessandro, Jason Gunthorpe,
    linux-rdma@vger.kernel.org
Subject: [PATCH 18/54] drivers/infiniband: replace cpumask_weight with cpumask_empty where appropriate
Date: Sun, 23 Jan 2022 10:38:49 -0800
Message-Id: <20220123183925.1052919-19-yury.norov@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220123183925.1052919-1-yury.norov@gmail.com>
References: <20220123183925.1052919-1-yury.norov@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

The drivers/infiniband/hw/hfi1/affinity.c code calls cpumask_weight() to
check whether any bit of a given cpumask is set. We can do this more
efficiently with cpumask_empty(), because cpumask_empty() stops traversing
the cpumask as soon as it finds the first set bit, while cpumask_weight()
counts all bits unconditionally.

Signed-off-by: Yury Norov
Reviewed-by: Leon Romanovsky
---
 drivers/infiniband/hw/hfi1/affinity.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 98c813ba4304..38eee675369a 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -667,7 +667,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 		 * engines, use the same CPU cores as general/control
 		 * context.
 		 */
-		if (cpumask_weight(&entry->def_intr.mask) == 0)
+		if (cpumask_empty(&entry->def_intr.mask))
 			cpumask_copy(&entry->def_intr.mask,
 				     &entry->general_intr_mask);
 	}
@@ -687,7 +687,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 		 * vectors, use the same CPU core as the general/control
 		 * context.
 		 */
-		if (cpumask_weight(&entry->comp_vect_mask) == 0)
+		if (cpumask_empty(&entry->comp_vect_mask))
 			cpumask_copy(&entry->comp_vect_mask,
 				     &entry->general_intr_mask);
 	}
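
In isolation, the conversion is the pattern below — a minimal sketch with placeholder names ("pick_mask_*", "mask", "fallback"), not the hfi1 code; cpumask_weight(), cpumask_empty() and cpumask_copy() are the stock helpers from <linux/cpumask.h>.

#include <linux/cpumask.h>

/* Illustrative only: names are placeholders, not hfi1's. */

/* Before: counts every set bit just to compare the total with zero. */
static void pick_mask_before(struct cpumask *mask, const struct cpumask *fallback)
{
	if (cpumask_weight(mask) == 0)
		cpumask_copy(mask, fallback);
}

/* After: cpumask_empty() returns as soon as it finds one set bit. */
static void pick_mask_after(struct cpumask *mask, const struct cpumask *fallback)
{
	if (cpumask_empty(mask))
		cpumask_copy(mask, fallback);
}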
From patchwork Sun Jan 23 18:39:15 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12721115
From: Yury Norov
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
    Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra, David Laight,
    Joe Perches, Dennis Zhou, Emil Renner Berthing, Nicholas Piggin,
    Matti Vaittinen, Alexey Klimov, linux-kernel@vger.kernel.org,
    Mike Marciniszyn, Dennis Dalessandro, Jason Gunthorpe,
    linux-rdma@vger.kernel.org
Subject: [PATCH 44/54] infiniband: replace cpumask_weight with cpumask_weight_{eq, ...} where appropriate
Date: Sun, 23 Jan 2022 10:39:15 -0800
Message-Id: <20220123183925.1052919-45-yury.norov@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220123183925.1052919-1-yury.norov@gmail.com>
References: <20220123183925.1052919-1-yury.norov@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

InfiniBand code uses cpumask_weight() to compare the weight of a cpumask
against a given number. We can do this more efficiently with
cpumask_weight_{eq, ...}, because the conditional helpers may stop
traversing the cpumask as soon as the comparison is decided.

Signed-off-by: Yury Norov
---
 drivers/infiniband/hw/hfi1/affinity.c    | 9 ++++-----
 drivers/infiniband/hw/qib/qib_file_ops.c | 2 +-
 drivers/infiniband/hw/qib/qib_iba7322.c  | 2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 38eee675369a..7c5ca5c5306a 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -507,7 +507,7 @@ static int _dev_comp_vect_cpu_mask_init(struct hfi1_devdata *dd,
 	 * available CPUs divide it by the number of devices in the
 	 * local NUMA node.
 	 */
-	if (cpumask_weight(&entry->comp_vect_mask) == 1) {
+	if (cpumask_weight_eq(&entry->comp_vect_mask, 1)) {
 		possible_cpus_comp_vect = 1;
 		dd_dev_warn(dd,
 			    "Number of kernel receive queues is too large for completion vector affinity to be effective\n");
@@ -593,7 +593,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 {
 	struct hfi1_affinity_node *entry;
 	const struct cpumask *local_mask;
-	int curr_cpu, possible, i, ret;
+	int curr_cpu, i, ret;
 	bool new_entry = false;
 
 	local_mask = cpumask_of_node(dd->node);
@@ -626,10 +626,9 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 			    local_mask);
 
 		/* fill in the receive list */
-		possible = cpumask_weight(&entry->def_intr.mask);
 		curr_cpu = cpumask_first(&entry->def_intr.mask);
 
-		if (possible == 1) {
+		if (cpumask_weight_eq(&entry->def_intr.mask, 1)) {
 			/* only one CPU, everyone will use it */
 			cpumask_set_cpu(curr_cpu, &entry->rcv_intr.mask);
 			cpumask_set_cpu(curr_cpu, &entry->general_intr_mask);
@@ -1017,7 +1016,7 @@ int hfi1_get_proc_affinity(int node)
 		cpu = cpumask_first(proc_mask);
 		cpumask_set_cpu(cpu, &set->used);
 		goto done;
-	} else if (current->nr_cpus_allowed < cpumask_weight(&set->mask)) {
+	} else if (cpumask_weight_gt(&set->mask, current->nr_cpus_allowed)) {
 		hfi1_cdbg(PROC, "PID %u %s affinity set to CPU set(s) %*pbl",
 			  current->pid, current->comm,
 			  cpumask_pr_args(proc_mask));
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index aa290928cf96..add89bc21b0a 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -1151,7 +1151,7 @@ static void assign_ctxt_affinity(struct file *fp, struct qib_devdata *dd)
 	 * reserve a processor for it on the local NUMA node.
 	 */
 	if ((weight >= qib_cpulist_count) &&
-	    (cpumask_weight(local_mask) <= qib_cpulist_count)) {
+	    (cpumask_weight_le(local_mask, qib_cpulist_count))) {
 		for_each_cpu(local_cpu, local_mask)
 			if (!test_and_set_bit(local_cpu, qib_cpulist)) {
 				fd->rec_cpu_num = local_cpu;
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index ceed302cf6a0..b17f96509d2c 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -3405,7 +3405,7 @@ static void qib_setup_7322_interrupt(struct qib_devdata *dd, int clearpend)
 	local_mask = cpumask_of_pcibus(dd->pcidev->bus);
 	firstcpu = cpumask_first(local_mask);
 	if (firstcpu >= nr_cpu_ids ||
-	    cpumask_weight(local_mask) == num_online_cpus()) {
+	    cpumask_weight_eq(local_mask, num_online_cpus())) {
 		local_mask = topology_core_cpumask(0);
 		firstcpu = cpumask_first(local_mask);
 	}
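
The cpumask_weight_{eq,gt,le}() helpers used above are introduced by earlier patches in this series rather than taken from mainline. The sketch below only illustrates their intended semantics and why a conditional weight check can stop early; the _sketch names and bodies are this note's own, not the series' actual implementation.

#include <linux/cpumask.h>

/* Conceptual sketch only: not the series' actual definitions. */
static inline bool cpumask_weight_gt_sketch(const struct cpumask *srcp, unsigned int n)
{
	unsigned int cpu, w = 0;

	for_each_cpu(cpu, srcp)
		if (++w > n)
			return true;	/* more than n bits seen: decided early */

	return false;
}

static inline bool cpumask_weight_eq_sketch(const struct cpumask *srcp, unsigned int n)
{
	unsigned int cpu, w = 0;

	for_each_cpu(cpu, srcp)
		if (++w > n)
			return false;	/* already heavier than n: decided early */

	return w == n;
}

The remaining comparison flavours follow the same idea: scan with for_each_cpu() and return as soon as the outcome can no longer change, instead of always counting every set bit.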