From patchwork Fri Jul 1 12:54:27 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12903265
From: Yury Norov
To: linux-kernel@vger.kernel.org, Andrew Morton, Andy Shevchenko,
    David Howells, Geert Uytterhoeven, Jonathan Corbet,
    "Kirill A . Shutemov", Matthew Wilcox, NeilBrown, Rasmus Villemoes,
    Russell King, Vlastimil Babka, William Kucharski,
    linux-doc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org
Cc: Yury Norov
Subject: [PATCH 5/8] lib/cpumask: change return types to unsigned where appropriate
Date: Fri, 1 Jul 2022 05:54:27 -0700
Message-Id: <20220701125430.2907638-6-yury.norov@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220701125430.2907638-1-yury.norov@gmail.com>
References: <20220701125430.2907638-1-yury.norov@gmail.com>

Switch return types to unsigned int where return values cannot be
negative.

Signed-off-by: Yury Norov
---
 include/linux/cpumask.h | 14 +++++++-------
 lib/cpumask.c           | 18 +++++++++---------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index b54e27d9da6b..760022bcb925 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -176,12 +176,12 @@ static inline unsigned int cpumask_local_spread(unsigned int i, int node)
 	return 0;
 }
 
-static inline int cpumask_any_and_distribute(const struct cpumask *src1p,
+static inline unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 					     const struct cpumask *src2p)
 {
 	return cpumask_first_and(src1p, src2p);
 }
 
-static inline int cpumask_any_distribute(const struct cpumask *srcp)
+static inline unsigned int cpumask_any_distribute(const struct cpumask *srcp)
 {
 	return cpumask_first(srcp);
 }
@@ -258,12 +258,12 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 	return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
 }
 
-int __pure cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
-int __pure cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
+unsigned int __pure cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
+unsigned int __pure cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
 unsigned int cpumask_local_spread(unsigned int i, int node);
-int cpumask_any_and_distribute(const struct cpumask *src1p,
+unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 			       const struct cpumask *src2p);
-int cpumask_any_distribute(const struct cpumask *srcp);
+unsigned int cpumask_any_distribute(const struct cpumask *srcp);
 
 /**
  * for_each_cpu - iterate over every cpu in a mask
@@ -289,7 +289,7 @@ int cpumask_any_distribute(const struct cpumask *srcp);
 		(cpu) = cpumask_next_zero((cpu), (mask)),	\
 		(cpu) < nr_cpu_ids;)
 
-extern int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);
+unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);
 
 /**
  * for_each_cpu_wrap - iterate over every cpu in a mask, starting at a specified location
diff --git a/lib/cpumask.c b/lib/cpumask.c
index a971a82d2f43..da68f6bbde44 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -31,7 +31,7 @@ EXPORT_SYMBOL(cpumask_next);
  *
  * Returns >= nr_cpu_ids if no further cpus set in both.
  */
-int cpumask_next_and(int n, const struct cpumask *src1p,
+unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
 		     const struct cpumask *src2p)
 {
 	/* -1 is a legal arg here. */
@@ -50,7 +50,7 @@ EXPORT_SYMBOL(cpumask_next_and);
  * Often used to find any cpu but smp_processor_id() in a mask.
  * Returns >= nr_cpu_ids if no cpus set.
  */
-int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
+unsigned int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
 {
 	unsigned int i;
 
@@ -74,9 +74,9 @@ EXPORT_SYMBOL(cpumask_any_but);
  * Note: the @wrap argument is required for the start condition when
  * we cannot assume @start is set in @mask.
  */
-int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
+unsigned int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
 {
-	int next;
+	unsigned int next;
 
 again:
 	next = cpumask_next(n, mask);
@@ -205,7 +205,7 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu;
+	unsigned int cpu;
 
 	/* Wrap: we always want a cpu. */
 	i %= num_online_cpus();
@@ -243,10 +243,10 @@ static DEFINE_PER_CPU(int, distribute_cpu_mask_prev);
  *
  * Returns >= nr_cpu_ids if the intersection is empty.
  */
-int cpumask_any_and_distribute(const struct cpumask *src1p,
+unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
 			       const struct cpumask *src2p)
 {
-	int next, prev;
+	unsigned int next, prev;
 
 	/* NOTE: our first selection will skip 0. */
 	prev = __this_cpu_read(distribute_cpu_mask_prev);
@@ -262,9 +262,9 @@ int cpumask_any_and_distribute(const struct cpumask *src1p,
 }
 EXPORT_SYMBOL(cpumask_any_and_distribute);
 
-int cpumask_any_distribute(const struct cpumask *srcp)
+unsigned int cpumask_any_distribute(const struct cpumask *srcp)
 {
-	int next, prev;
+	unsigned int next, prev;
 
 	/* NOTE: our first selection will skip 0. */
 	prev = __this_cpu_read(distribute_cpu_mask_prev);
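
All of the functions touched here report "no cpu found" by returning a value
greater than or equal to nr_cpu_ids rather than a negative number, which is
why an unsigned return type is sufficient. A minimal caller sketch follows;
it is not part of the patch, and pick_other_cpu() is a hypothetical helper
used only to illustrate the nr_cpu_ids check:

#include <linux/cpumask.h>
#include <linux/smp.h>

/* Hypothetical helper: pick any cpu in @mask other than the current one. */
static unsigned int pick_other_cpu(const struct cpumask *mask)
{
	/* cpumask_any_but() returns >= nr_cpu_ids if no other cpu is set. */
	unsigned int cpu = cpumask_any_but(mask, smp_processor_id());

	if (cpu >= nr_cpu_ids)
		cpu = smp_processor_id();	/* fall back to the current cpu */

	return cpu;
}

Because failure is signalled by the >= nr_cpu_ids comparison, no negative
sentinel such as -1 is ever needed, and callers written this way keep working
unchanged with the unsigned return types.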