From patchwork Tue Dec 12 02:27:15 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488397
From: Yury Norov
To: linux-kernel@vger.kernel.org
Haris Iqbal" , Akinobu Mita , Andrew Morton , Bjorn Andersson , Borislav Petkov , Chaitanya Kulkarni , Christian Brauner , Damien Le Moal , Dave Hansen , David Disseldorp , Edward Cree , Eric Dumazet , Fenghua Yu , Geert Uytterhoeven , Greg Kroah-Hartman , Gregory Greenman , Hans Verkuil , Hans de Goede , Hugh Dickins , Ingo Molnar , Jakub Kicinski , Jaroslav Kysela , Jason Gunthorpe , Jens Axboe , Jiri Pirko , Jiri Slaby , Kalle Valo , Karsten Graul , Karsten Keil , Kees Cook , Leon Romanovsky , Mark Rutland , Martin Habets , Mauro Carvalho Chehab , Michael Ellerman , Michal Simek , Nicholas Piggin , Oliver Neukum , Paolo Abeni , Paolo Bonzini , Peter Zijlstra , Ping-Ke Shih , Rich Felker , Rob Herring , Robin Murphy , Sean Christopherson , Shuai Xue , Stanislaw Gruszka , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Valentin Schneider , Vitaly Kuznetsov , Wenjia Zhang , Will Deacon , Yoshinori Sato , GR-QLogic-Storage-Upstream@marvell.com, alsa-devel@alsa-project.org, ath10k@lists.infradead.org, dmaengine@vger.kernel.org, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-block@vger.kernel.org, linux-bluetooth@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-media@vger.kernel.org, linux-mips@vger.kernel.org, linux-net-drivers@amd.com, linux-pci@vger.kernel.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, linux-serial@vger.kernel.org, linux-sh@vger.kernel.org, linux-sound@vger.kernel.org, linux-usb@vger.kernel.org, linux-wireless@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, mpi3mr-linuxdrv.pdl@broadcom.com, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Cc: Yury Norov , Jan Kara , Mirsad Todorovac , Matthew Wilcox , Rasmus Villemoes , Andy Shevchenko , Maxim Kuvyrkov , Alexey Klimov , Bart Van Assche , Sergey Shtylyov Subject: [PATCH v3 01/35] lib/find: add atomic find_bit() primitives Date: Mon, 11 Dec 2023 18:27:15 -0800 Message-Id: <20231212022749.625238-2-yury.norov@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com> References: <20231212022749.625238-1-yury.norov@gmail.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add helpers around test_and_{set,clear}_bit() that allow to search for clear or set bits and flip them atomically. The target patterns may look like this: for (idx = 0; idx < nbits; idx++) if (test_and_clear_bit(idx, bitmap)) do_something(idx); Or like this: do { bit = find_first_bit(bitmap, nbits); if (bit >= nbits) return nbits; } while (!test_and_clear_bit(bit, bitmap)); return bit; In both cases, the opencoded loop may be converted to a single function or iterator call. Correspondingly: for_each_test_and_clear_bit(idx, bitmap, nbits) do_something(idx); Or: return find_and_clear_bit(bitmap, nbits); Obviously, the less routine code people have to write themself, the less probability to make a mistake. Those are not only handy helpers but also resolve a non-trivial issue of using non-atomic find_bit() together with atomic test_and_{set,clear)_bit(). The trick is that find_bit() implies that the bitmap is a regular non-volatile piece of memory, and compiler is allowed to use such optimization techniques like re-fetching memory instead of caching it. 
For example, find_first_bit() is implemented like this:

        for (idx = 0; idx * BITS_PER_LONG < sz; idx++) {
                val = addr[idx];
                if (val) {
                        sz = min(idx * BITS_PER_LONG + __ffs(val), sz);
                        break;
                }
        }

On register-memory architectures like x86, the compiler may decide to access
the memory twice: first to compare it against 0, and then to fetch the value
again to pass it to __ffs(). When find_first_bit() runs on volatile memory,
the memory may change in between, which can, for instance, end up passing 0 to
__ffs(), whose result is undefined. This makes the call potentially dangerous.

find_and_clear_bit(), as a wrapper around test_and_clear_bit(), naturally
treats the underlying bitmap as volatile memory and prevents the compiler from
applying such optimizations.

KCSAN catches exactly this type of situation and warns about concurrent memory
modifications. We can use it to reveal improper usage of find_bit() and
convert those users to the atomic find_and_*_bit() as appropriate.

In some cases, concurrent operations with plain find_bit() are acceptable. For
example:

 - two threads running find_*_bit(): safe wrt ffs(0) and return a correct
   value, because the underlying bitmap is unchanged;
 - find_next_bit() in parallel with set_bit() or clear_bit(), when the
   modified bit lies before the start bit of the search: safe and correct;
 - find_first_bit() in parallel with set_bit(): safe, but may return a wrong
   bit number;
 - find_first_zero_bit() in parallel with clear_bit(): same as above.

In the last two cases find_bit() may not return a correct bit number, which
may be fine if the caller only needs any (not necessarily the first) set or
clear bit, correspondingly. In such cases, KCSAN may be safely silenced with
data_race(). But in most cases where KCSAN detects concurrency, people should
carefully review their code and either protect the critical sections or switch
to the atomic find_and_*_bit(), as appropriate.

The 1st patch of the series adds the following atomic primitives:

        find_and_set_bit(addr, nbits);
        find_and_set_next_bit(addr, nbits, start);
        ...

Here the find_and_{set,clear} part refers to the corresponding
test_and_{set,clear}_bit function. Suffixes like _wrap or _lock derive their
semantics from the corresponding find() or test() functions. For brevity, the
naming omits the fact that the find_and_set functions search for a zero bit,
and the find_and_clear functions correspondingly search for a set bit.

The patch also adds iterators with atomic semantics, like
for_each_test_and_set_bit(). Here, the naming rule is simply to prefix the
corresponding atomic operation with 'for_each'.
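For illustration only (not part of the patch), here is a minimal usage sketch
of the new API. All names here (tag_map, pending, NR_TAGS, alloc_tag(),
free_tag(), process_pending()) are hypothetical, and do_something() is the
placeholder already used above:

        /* Hypothetical tag allocator built on the new atomic primitives */
        #define NR_TAGS 128
        static DECLARE_BITMAP(tag_map, NR_TAGS);        /* allocated tags */
        static DECLARE_BITMAP(pending, NR_TAGS);        /* tags with pending work */

        static int alloc_tag(void)
        {
                /* Atomically find a clear bit and set it - no lock needed */
                unsigned long tag = find_and_set_bit(tag_map, NR_TAGS);

                return tag < NR_TAGS ? (int)tag : -1;
        }

        static void free_tag(int tag)
        {
                clear_bit(tag, tag_map);
        }

        static void process_pending(void)
        {
                unsigned long tag;

                /* Atomically claim and clear each pending bit */
                for_each_test_and_clear_bit(tag, pending, NR_TAGS)
                        do_something(tag);
        }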
CC: Bart Van Assche
CC: Sergey Shtylyov
Signed-off-by: Yury Norov
---
 include/linux/find.h | 293 +++++++++++++++++++++++++++++++++++++++++++
 lib/find_bit.c | 85 +++++++++++++
 2 files changed, 378 insertions(+)

diff --git a/include/linux/find.h b/include/linux/find.h
index 5e4f39ef2e72..237513356ffa 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -32,6 +32,16 @@ extern unsigned long _find_first_and_bit(const unsigned long *addr1,
 extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
 extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
 
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr, unsigned long nbits,
+                                unsigned long start);
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr, unsigned long nbits,
+                                unsigned long start);
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr, unsigned long nbits,
+                                unsigned long start);
+
 #ifdef __BIG_ENDIAN
 unsigned long _find_first_zero_bit_le(const unsigned long *addr, unsigned long size);
 unsigned long _find_next_zero_bit_le(const unsigned long *addr, unsigned
@@ -460,6 +470,267 @@ unsigned long __for_each_wrap(const unsigned long *bitmap, unsigned long size,
 	return bit < start ? bit : size;
 }
 
+/**
+ * find_and_set_bit - Find a zero bit and set it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, 0);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_bit(addr, nbits);
+}
+
+/**
+ * find_and_set_next_bit - Find a zero bit and set it, starting from @offset
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap, starting from @offset.
+ * It's also not guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit(volatile unsigned long *addr,
+			unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, offset);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_next_bit(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap - find and set bit starting at @offset, wrapping around zero
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap(volatile unsigned long *addr,
+			unsigned long nbits, unsigned long offset)
+{
+	unsigned long bit = find_and_set_next_bit(addr, nbits, offset);
+
+	if (bit < nbits || offset == 0)
+		return bit;
+
+	bit = find_and_set_bit(addr, offset);
+	return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_set_bit_lock - find a zero bit, then set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, 0);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit_lock(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_bit_lock(addr, nbits);
+}
+
+/**
+ * find_and_set_next_bit_lock - find a zero bit and set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit_lock(volatile unsigned long *addr,
+			unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, offset);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit_lock(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_next_bit_lock(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap_lock - find zero bit starting at @offset and set it
+ * with lock, and wrap around zero if nothing found
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap_lock(volatile unsigned long *addr,
+			unsigned long nbits, unsigned long offset)
+{
+	unsigned long bit = find_and_set_next_bit_lock(addr, nbits, offset);
+
+	if (bit < nbits || offset == 0)
+		return bit;
+
+	bit = find_and_set_bit_lock(addr, offset);
+	return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_clear_bit - Find a set bit and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline unsigned long find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr & GENMASK(nbits - 1, 0);
+			if (val == 0)
+				return nbits;
+			ret = __ffs(val);
+		} while (!test_and_clear_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_clear_bit(addr, nbits);
+}
+
+/**
+ * find_and_clear_next_bit - Find a set bit next after @offset, and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: bit offset at which to start searching
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, there are no set bits after @offset.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_clear_next_bit(volatile unsigned long *addr,
+			unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr & GENMASK(nbits - 1, offset);
+			if (val == 0)
+				return nbits;
+			ret = __ffs(val);
+		} while (!test_and_clear_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_clear_next_bit(addr, nbits, offset);
+}
+
 /**
  * find_next_clump8 - find next 8-bit clump with set bits in a memory region
  * @clump: location to store copy of found clump
@@ -577,6 +848,28 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 #define for_each_set_bit_from(bit, addr, size) \
 	for (; (bit) = find_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)
 
+/* same as for_each_set_bit() but atomically clears each found bit */
+#define for_each_test_and_clear_bit(bit, addr, size) \
+	for ((bit) = 0; \
+	     (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+/* same as for_each_set_bit_from() but atomically clears each found bit */
+#define for_each_test_and_clear_bit_from(bit, addr, size) \
+	for (; (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)
+
+/* same as for_each_clear_bit() but atomically sets each found bit */
+#define for_each_test_and_set_bit(bit, addr, size) \
+	for ((bit) = 0; \
+	     (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+/* same as for_each_clear_bit_from() but atomically sets each found bit */
+#define for_each_test_and_set_bit_from(bit, addr, size) \
+	for (; \
+	     (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
 #define for_each_clear_bit(bit, addr, size) \
 	for ((bit) = 0; \
 	     (bit) = find_next_zero_bit((addr), (size), (bit)), (bit) < (size); \
 	     (bit)++)
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 32f99e9a670e..c9b6b9f96610 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -116,6 +116,91 @@ unsigned long _find_first_and_bit(const unsigned long *addr1,
 EXPORT_SYMBOL(_find_first_and_bit);
 #endif
 
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit);
+
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr,
+				unsigned long nbits, unsigned long start)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit);
+
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit_lock(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit_lock);
+
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr,
+				unsigned long nbits, unsigned long start)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit_lock(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit_lock);
+
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (!test_and_clear_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_clear_bit);
+
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr,
+				unsigned long nbits, unsigned long start)
+{
+	do {
+		start = FIND_NEXT_BIT(addr[idx], /* nop */, nbits, start);
+		if (start >= nbits)
+			return nbits;
+	} while (!test_and_clear_bit(start, addr));
+
+	return start;
+}
+EXPORT_SYMBOL(_find_and_clear_next_bit);
+
 #ifndef find_first_zero_bit
 /*
  * Find the first cleared bit in a memory region.

From patchwork Tue Dec 12 02:27:16 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488398
From: Yury Norov
To: linux-kernel@vger.kernel.org
Haris Iqbal" , Akinobu Mita , Andrew Morton , Bjorn Andersson , Borislav Petkov , Chaitanya Kulkarni , Christian Brauner , Damien Le Moal , Dave Hansen , David Disseldorp , Edward Cree , Eric Dumazet , Fenghua Yu , Geert Uytterhoeven , Greg Kroah-Hartman , Gregory Greenman , Hans Verkuil , Hans de Goede , Hugh Dickins , Ingo Molnar , Jakub Kicinski , Jaroslav Kysela , Jason Gunthorpe , Jens Axboe , Jiri Pirko , Jiri Slaby , Kalle Valo , Karsten Graul , Karsten Keil , Kees Cook , Leon Romanovsky , Mark Rutland , Martin Habets , Mauro Carvalho Chehab , Michael Ellerman , Michal Simek , Nicholas Piggin , Oliver Neukum , Paolo Abeni , Paolo Bonzini , Peter Zijlstra , Ping-Ke Shih , Rich Felker , Rob Herring , Robin Murphy , Sean Christopherson , Shuai Xue , Stanislaw Gruszka , Steven Rostedt , Thomas Bogendoerfer , Thomas Gleixner , Valentin Schneider , Vitaly Kuznetsov , Wenjia Zhang , Will Deacon , Yoshinori Sato , GR-QLogic-Storage-Upstream@marvell.com, alsa-devel@alsa-project.org, ath10k@lists.infradead.org, dmaengine@vger.kernel.org, iommu@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-block@vger.kernel.org, linux-bluetooth@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-media@vger.kernel.org, linux-mips@vger.kernel.org, linux-net-drivers@amd.com, linux-pci@vger.kernel.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, linux-serial@vger.kernel.org, linux-sh@vger.kernel.org, linux-sound@vger.kernel.org, linux-usb@vger.kernel.org, linux-wireless@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, mpi3mr-linuxdrv.pdl@broadcom.com, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Cc: Yury Norov , Jan Kara , Mirsad Todorovac , Matthew Wilcox , Rasmus Villemoes , Andy Shevchenko , Maxim Kuvyrkov , Alexey Klimov , Bart Van Assche , Sergey Shtylyov Subject: [PATCH v3 02/35] lib/find: add test for atomic find_bit() ops Date: Mon, 11 Dec 2023 18:27:16 -0800 Message-Id: <20231212022749.625238-3-yury.norov@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com> References: <20231212022749.625238-1-yury.norov@gmail.com> Precedence: bulk X-Mailing-List: linux-scsi@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add basic functionality test for new API. 
Signed-off-by: Yury Norov
---
 lib/test_bitmap.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 65f22c2578b0..277e1ca9fd28 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -221,6 +221,65 @@ static void __init test_zero_clear(void)
 	expect_eq_pbl("", bmap, 1024);
 }
 
+static void __init test_find_and_bit(void)
+{
+	unsigned long w, w_part, bit, cnt = 0;
+	DECLARE_BITMAP(bmap, EXP1_IN_BITS);
+
+	/*
+	 * Test find_and_clear{_next}_bit() and corresponding
+	 * iterators
+	 */
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+
+	for_each_test_and_clear_bit(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(w, cnt);
+	expect_eq_uint(0, bitmap_weight(bmap, EXP1_IN_BITS));
+
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+	w_part = bitmap_weight(bmap, EXP1_IN_BITS / 3);
+
+	cnt = 0;
+	bit = EXP1_IN_BITS / 3;
+	for_each_test_and_clear_bit_from(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(bitmap_weight(bmap, EXP1_IN_BITS), bitmap_weight(bmap, EXP1_IN_BITS / 3));
+	expect_eq_uint(w_part, bitmap_weight(bmap, EXP1_IN_BITS));
+	expect_eq_uint(w - w_part, cnt);
+
+	/*
+	 * Test find_and_set{_next}_bit() and corresponding
+	 * iterators
+	 */
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+	cnt = 0;
+
+	for_each_test_and_set_bit(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(EXP1_IN_BITS - w, cnt);
+	expect_eq_uint(EXP1_IN_BITS, bitmap_weight(bmap, EXP1_IN_BITS));
+
+	bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+	w = bitmap_weight(bmap, EXP1_IN_BITS);
+	w_part = bitmap_weight(bmap, EXP1_IN_BITS / 3);
+	cnt = 0;
+
+	bit = EXP1_IN_BITS / 3;
+	for_each_test_and_set_bit_from(bit, bmap, EXP1_IN_BITS)
+		cnt++;
+
+	expect_eq_uint(EXP1_IN_BITS - bitmap_weight(bmap, EXP1_IN_BITS),
+		       EXP1_IN_BITS / 3 - bitmap_weight(bmap, EXP1_IN_BITS / 3));
+	expect_eq_uint(EXP1_IN_BITS * 2 / 3 - (w - w_part), cnt);
+}
+
 static void __init test_find_nth_bit(void)
 {
 	unsigned long b, bit, cnt = 0;
@@ -1273,6 +1332,8 @@ static void __init selftest(void)
 	test_for_each_clear_bitrange_from();
 	test_for_each_set_clump8();
 	test_for_each_set_bit_wrap();
+
+	test_find_and_bit();
 }
 
 KSTM_MODULE_LOADERS(test_bitmap);

From patchwork Tue Dec 12 02:27:29 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488399
From: Yury Norov
To: linux-kernel@vger.kernel.org, Sathya Prakash Veerichetty, Kashyap Desai,
    Sumit Saxena, Sreekanth Reddy, "James E.J. Bottomley",
    "Martin K. Petersen", Nilesh Javali, Manish Rangankar,
    GR-QLogic-Storage-Upstream@marvell.com, mpi3mr-linuxdrv.pdl@broadcom.com,
    linux-scsi@vger.kernel.org
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox, Rasmus Villemoes,
    Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov, Bart Van Assche,
    Sergey Shtylyov
Subject: [PATCH v3 15/35] scsi: core: optimize scsi_evt_emit() by using an atomic iterator
Date: Mon, 11 Dec 2023 18:27:29 -0800
Message-Id: <20231212022749.625238-16-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>
References: <20231212022749.625238-1-yury.norov@gmail.com>

The plain loop in scsi_evt_thread() open-codes the optimized atomic
bit-traversal macro. Simplify it by using the dedicated iterator.
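For reference, a minimal sketch (not part of the patch) of the general
conversion this change applies; 'pending', 'first', 'last' and handle() are
illustrative names, not the driver's:

        /* Before: open-coded scan over [first, last] */
        for (evt = first; evt <= last; evt++)
                if (test_and_clear_bit(evt, pending))
                        handle(evt);

        /*
         * After: the _from iterator starts at the current value of 'evt' and
         * atomically claims each set bit; the bound is exclusive, hence +1.
         */
        evt = first;
        for_each_test_and_clear_bit_from(evt, pending, last + 1)
                handle(evt);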
CC: Bart Van Assche
Signed-off-by: Yury Norov
---
 drivers/scsi/scsi_lib.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index cf3864f72093..a4c5c9b4bfc9 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2494,14 +2494,13 @@ static void scsi_evt_emit(struct scsi_device *sdev, struct scsi_event *evt)
 void scsi_evt_thread(struct work_struct *work)
 {
 	struct scsi_device *sdev;
-	enum scsi_device_event evt_type;
+	enum scsi_device_event evt_type = SDEV_EVT_FIRST;
 	LIST_HEAD(event_list);
 
 	sdev = container_of(work, struct scsi_device, event_work);
 
-	for (evt_type = SDEV_EVT_FIRST; evt_type <= SDEV_EVT_LAST; evt_type++)
-		if (test_and_clear_bit(evt_type, sdev->pending_events))
-			sdev_evt_send_simple(sdev, evt_type, GFP_KERNEL);
+	for_each_test_and_clear_bit_from(evt_type, sdev->pending_events, SDEV_EVT_LAST + 1)
+		sdev_evt_send_simple(sdev, evt_type, GFP_KERNEL);
 
 	while (1) {
 		struct scsi_event *evt;

From patchwork Tue Dec 12 02:27:30 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488400
From: Yury Norov
To: linux-kernel@vger.kernel.org, Sathya Prakash Veerichetty, Kashyap Desai,
    Sumit Saxena, Sreekanth Reddy, "James E.J. Bottomley",
    "Martin K. Petersen", Nilesh Javali, Manish Rangankar,
    GR-QLogic-Storage-Upstream@marvell.com, mpi3mr-linuxdrv.pdl@broadcom.com,
    linux-scsi@vger.kernel.org
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox, Rasmus Villemoes,
    Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov, Bart Van Assche,
    Sergey Shtylyov
Subject: [PATCH v3 16/35] scsi: mpi3mr: optimize the driver by using find_and_set_bit()
Date: Mon, 11 Dec 2023 18:27:30 -0800
Message-Id: <20231212022749.625238-17-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>
References: <20231212022749.625238-1-yury.norov@gmail.com>

mpi3mr_dev_rmhs_send_tm() and mpi3mr_send_event_ack() open-code
find_and_set_bit(). Simplify them by using the dedicated function.

CC: Bart Van Assche
Signed-off-by: Yury Norov
---
 drivers/scsi/mpi3mr/mpi3mr_os.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
index 040031eb0c12..11139a2008fd 100644
--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
+++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
@@ -2276,13 +2276,9 @@ static void mpi3mr_dev_rmhs_send_tm(struct mpi3mr_ioc *mrioc, u16 handle,
 	if (drv_cmd)
 		goto issue_cmd;
 	do {
-		cmd_idx = find_first_zero_bit(mrioc->devrem_bitmap,
-		    MPI3MR_NUM_DEVRMCMD);
-		if (cmd_idx < MPI3MR_NUM_DEVRMCMD) {
-			if (!test_and_set_bit(cmd_idx, mrioc->devrem_bitmap))
-				break;
-			cmd_idx = MPI3MR_NUM_DEVRMCMD;
-		}
+		cmd_idx = find_and_set_bit(mrioc->devrem_bitmap, MPI3MR_NUM_DEVRMCMD);
+		if (cmd_idx < MPI3MR_NUM_DEVRMCMD)
+			break;
 	} while (retrycount--);
 
 	if (cmd_idx >= MPI3MR_NUM_DEVRMCMD) {
@@ -2417,14 +2413,9 @@ static void mpi3mr_send_event_ack(struct mpi3mr_ioc *mrioc, u8 event,
 	    "sending event ack in the top half for event(0x%02x), event_ctx(0x%08x)\n",
 	    event, event_ctx);
 	do {
-		cmd_idx = find_first_zero_bit(mrioc->evtack_cmds_bitmap,
-		    MPI3MR_NUM_EVTACKCMD);
-		if (cmd_idx < MPI3MR_NUM_EVTACKCMD) {
-			if (!test_and_set_bit(cmd_idx,
-			    mrioc->evtack_cmds_bitmap))
-				break;
-			cmd_idx = MPI3MR_NUM_EVTACKCMD;
-		}
+		cmd_idx = find_and_set_bit(mrioc->evtack_cmds_bitmap, MPI3MR_NUM_EVTACKCMD);
+		if (cmd_idx < MPI3MR_NUM_EVTACKCMD)
+			break;
 	} while (retrycount--);
 
 	if (cmd_idx >= MPI3MR_NUM_EVTACKCMD) {

From patchwork Tue Dec 12 02:27:31 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13488401
From: Yury Norov
To: linux-kernel@vger.kernel.org, Sathya Prakash Veerichetty, Kashyap Desai,
    Sumit Saxena, Sreekanth Reddy, "James E.J. Bottomley",
    "Martin K. Petersen", Nilesh Javali, Manish Rangankar,
    GR-QLogic-Storage-Upstream@marvell.com, mpi3mr-linuxdrv.pdl@broadcom.com,
    linux-scsi@vger.kernel.org
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox, Rasmus Villemoes,
    Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov, Bart Van Assche,
    Sergey Shtylyov
Subject: [PATCH v3 17/35] scsi: qedi: optimize qedi_get_task_idx() by using find_and_set_bit()
Date: Mon, 11 Dec 2023 18:27:31 -0800
Message-Id: <20231212022749.625238-18-yury.norov@gmail.com>
In-Reply-To: <20231212022749.625238-1-yury.norov@gmail.com>
References: <20231212022749.625238-1-yury.norov@gmail.com>

qedi_get_task_idx() open-codes find_and_set_bit(). Simplify it and make the
whole function almost a one-liner.
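For reference, a minimal sketch (not part of the patch) of the allocation
pattern that this patch and the previous one replace; 'map', NR_ENTRIES and
the -1 error return are illustrative only:

        /* Before: two-step search-then-claim, retried on collision */
        do {
                idx = find_first_zero_bit(map, NR_ENTRIES);
                if (idx >= NR_ENTRIES)
                        break;
        } while (test_and_set_bit(idx, map));

        /* After: one call that searches and atomically claims the bit */
        idx = find_and_set_bit(map, NR_ENTRIES);
        if (idx >= NR_ENTRIES)
                return -1;      /* pool exhausted */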
CC: Bart Van Assche
Signed-off-by: Yury Norov
---
 drivers/scsi/qedi/qedi_main.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index cd0180b1f5b9..2f940c6898ef 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -1824,20 +1824,13 @@ int qedi_get_task_idx(struct qedi_ctx *qedi)
 {
 	s16 tmp_idx;
 
-again:
-	tmp_idx = find_first_zero_bit(qedi->task_idx_map,
-				      MAX_ISCSI_TASK_ENTRIES);
+	tmp_idx = find_and_set_bit(qedi->task_idx_map, MAX_ISCSI_TASK_ENTRIES);
 
 	if (tmp_idx >= MAX_ISCSI_TASK_ENTRIES) {
 		QEDI_ERR(&qedi->dbg_ctx, "FW task context pool is full.\n");
 		tmp_idx = -1;
-		goto err_idx;
 	}
 
-	if (test_and_set_bit(tmp_idx, qedi->task_idx_map))
-		goto again;
-
-err_idx:
 	return tmp_idx;
 }