From patchwork Wed Oct 4 16:53:07 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13409079
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-arch@vger.kernel.org, torvalds@linux-foundation.org, npiggin@gmail.com
Subject: [PATCH v2 07/17] bitops: Add xor_unlock_is_negative_byte()
Date: Wed, 4 Oct 2023 17:53:07 +0100
Message-Id: <20231004165317.1061855-8-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20231004165317.1061855-1-willy@infradead.org>
References: <20231004165317.1061855-1-willy@infradead.org>
MIME-Version: 1.0

Replace clear_bit_unlock_is_negative_byte() with xor_unlock_is_negative_byte().
We have a few places that like to lock a folio, set a flag and unlock it
again.  Allow for the possibility of combining the latter two operations
for efficiency.  We are guaranteed that the caller holds the lock, so it
is safe to unlock it with the xor.  The caller must guarantee that nobody
else will set the flag without holding the lock; it is not safe to do this
with the PG_dirty flag, for example.
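
For illustration, the calling pattern this enables looks roughly like the
sketch below.  It is not part of the patch and the helper is hypothetical,
but xor_unlock_is_negative_byte(), folio_flags(), folio_wake_bit() and the
PG_* flags are the real names used by mm/filemap.c:

	/* Hypothetical caller: set PG_uptodate and drop PG_locked in one xor. */
	static void example_end_read_and_unlock(struct folio *folio)
	{
		/* Both flags live in bits 0-6 of the first flags byte. */
		unsigned long mask = (1UL << PG_uptodate) | (1UL << PG_locked);

		/* Bit 7 (PG_waiters) of the same byte reports any waiters. */
		if (xor_unlock_is_negative_byte(mask, folio_flags(folio, 0)))
			folio_wake_bit(folio, PG_locked);
	}

The xor clears PG_locked because the caller is known to hold the lock, and
it sets PG_uptodate because the rule above guarantees nobody else could
have set that flag in the meantime.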
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/powerpc/include/asm/bitops.h             | 17 ++++--------
 arch/x86/include/asm/bitops.h                 | 11 ++++----
 .../asm-generic/bitops/instrumented-lock.h    | 27 ++++++++++---------
 include/asm-generic/bitops/lock.h             | 21 ++++-----------
 kernel/kcsan/kcsan_test.c                     |  8 +++---
 kernel/kcsan/selftest.c                       |  8 +++---
 mm/filemap.c                                  |  5 ++++
 mm/kasan/kasan_test.c                         |  7 ++---
 8 files changed, 47 insertions(+), 57 deletions(-)

diff --git a/arch/powerpc/include/asm/bitops.h b/arch/powerpc/include/asm/bitops.h
index 7e0f0322912b..40cc3ded60cb 100644
--- a/arch/powerpc/include/asm/bitops.h
+++ b/arch/powerpc/include/asm/bitops.h
@@ -234,32 +234,25 @@ static inline int arch_test_and_change_bit(unsigned long nr,
 }
 
 #ifdef CONFIG_PPC64
-static inline unsigned long
-clear_bit_unlock_return_word(int nr, volatile unsigned long *addr)
+static inline bool arch_xor_unlock_is_negative_byte(unsigned long mask,
+		volatile unsigned long *p)
 {
 	unsigned long old, t;
-	unsigned long *p = (unsigned long *)addr + BIT_WORD(nr);
-	unsigned long mask = BIT_MASK(nr);
 
 	__asm__ __volatile__ (
 	PPC_RELEASE_BARRIER
 "1:"	PPC_LLARX "%0,0,%3,0\n"
-	"andc %1,%0,%2\n"
+	"xor %1,%0,%2\n"
 	PPC_STLCX "%1,0,%3\n"
 	"bne- 1b\n"
 	: "=&r" (old), "=&r" (t)
 	: "r" (mask), "r" (p)
 	: "cc", "memory");
 
-	return old;
+	return (old & BIT_MASK(7)) != 0;
 }
 
-/*
- * This is a special function for mm/filemap.c
- * Bit 7 corresponds to PG_waiters.
- */
-#define arch_clear_bit_unlock_is_negative_byte(nr, addr)	\
-	(clear_bit_unlock_return_word(nr, addr) & BIT_MASK(7))
+#define arch_xor_unlock_is_negative_byte arch_xor_unlock_is_negative_byte
 
 #endif /* CONFIG_PPC64 */
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 2edf68475fec..f03c0a50ec3a 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -94,18 +94,17 @@ arch___clear_bit(unsigned long nr, volatile unsigned long *addr)
 	asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
 }
 
-static __always_inline bool
-arch_clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+static __always_inline bool arch_xor_unlock_is_negative_byte(unsigned long mask,
+		volatile unsigned long *addr)
 {
 	bool negative;
-	asm volatile(LOCK_PREFIX "andb %2,%1"
+	asm volatile(LOCK_PREFIX "xorb %2,%1"
 		CC_SET(s)
 		: CC_OUT(s) (negative), WBYTE_ADDR(addr)
-		: "ir" ((char) ~(1 << nr)) : "memory");
+		: "iq" ((char)mask) : "memory");
 	return negative;
 }
-#define arch_clear_bit_unlock_is_negative_byte \
-	arch_clear_bit_unlock_is_negative_byte
+#define arch_xor_unlock_is_negative_byte arch_xor_unlock_is_negative_byte
 
 static __always_inline void
 arch___clear_bit_unlock(long nr, volatile unsigned long *addr)
diff --git a/include/asm-generic/bitops/instrumented-lock.h b/include/asm-generic/bitops/instrumented-lock.h
index eb64bd4f11f3..e8ea3aeda9a9 100644
--- a/include/asm-generic/bitops/instrumented-lock.h
+++ b/include/asm-generic/bitops/instrumented-lock.h
@@ -58,27 +58,30 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 	return arch_test_and_set_bit_lock(nr, addr);
 }
 
-#if defined(arch_clear_bit_unlock_is_negative_byte)
+#if defined(arch_xor_unlock_is_negative_byte)
 /**
- * clear_bit_unlock_is_negative_byte - Clear a bit in memory and test if bottom
- * byte is negative, for unlock.
- * @nr: the bit to clear
- * @addr: the address to start counting from
+ * xor_unlock_is_negative_byte - XOR a single byte in memory and test if
+ * it is negative, for unlock.
+ * @mask: Change the bits which are set in this mask.
+ * @addr: The address of the word containing the byte to change.
  *
+ * Changes some of bits 0-6 in the word pointed to by @addr.
  * This operation is atomic and provides release barrier semantics.
+ * Used to optimise some folio operations which are commonly paired
+ * with an unlock or end of writeback.  Bit 7 is used as PG_waiters to
+ * indicate whether anybody is waiting for the unlock.
  *
- * This is a bit of a one-trick-pony for the filemap code, which clears
- * PG_locked and tests PG_waiters,
+ * Return: Whether the top bit of the byte is set.
  */
-static inline bool
-clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+static inline bool xor_unlock_is_negative_byte(unsigned long mask,
+		volatile unsigned long *addr)
 {
 	kcsan_release();
-	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
-	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
+	instrument_atomic_write(addr, sizeof(long));
+	return arch_xor_unlock_is_negative_byte(mask, addr);
 }
 /* Let everybody know we have it. */
-#define clear_bit_unlock_is_negative_byte clear_bit_unlock_is_negative_byte
+#define xor_unlock_is_negative_byte xor_unlock_is_negative_byte
 #endif
 
 #endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H */
diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 40913516e654..6a638e89d130 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -66,27 +66,16 @@ arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
 	raw_atomic_long_set_release((atomic_long_t *)p, old);
 }
 
-/**
- * arch_clear_bit_unlock_is_negative_byte - Clear a bit in memory and test if bottom
- * byte is negative, for unlock.
- * @nr: the bit to clear
- * @addr: the address to start counting from
- *
- * This is a bit of a one-trick-pony for the filemap code, which clears
- * PG_locked and tests PG_waiters,
- */
-#ifndef arch_clear_bit_unlock_is_negative_byte
-static inline bool arch_clear_bit_unlock_is_negative_byte(unsigned int nr,
-		volatile unsigned long *p)
+#ifndef arch_xor_unlock_is_negative_byte
+static inline bool arch_xor_unlock_is_negative_byte(unsigned long mask,
+		volatile unsigned long *p)
 {
 	long old;
-	unsigned long mask = BIT_MASK(nr);
 
-	p += BIT_WORD(nr);
-	old = raw_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
+	old = raw_atomic_long_fetch_xor_release(mask, (atomic_long_t *)p);
 	return !!(old & BIT(7));
 }
-#define arch_clear_bit_unlock_is_negative_byte arch_clear_bit_unlock_is_negative_byte
+#define arch_xor_unlock_is_negative_byte arch_xor_unlock_is_negative_byte
 #endif
 
 #include
diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
index 0ddbdab5903d..1333d23ac4ef 100644
--- a/kernel/kcsan/kcsan_test.c
+++ b/kernel/kcsan/kcsan_test.c
@@ -700,10 +700,10 @@ static void test_barrier_nothreads(struct kunit *test)
 	KCSAN_EXPECT_RW_BARRIER(mutex_lock(&test_mutex), false);
 	KCSAN_EXPECT_RW_BARRIER(mutex_unlock(&test_mutex), true);
-#ifdef clear_bit_unlock_is_negative_byte
-	KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
-	KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
-	KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true);
+#ifdef xor_unlock_is_negative_byte
+	KCSAN_EXPECT_READ_BARRIER(xor_unlock_is_negative_byte(1, &test_var), true);
+	KCSAN_EXPECT_WRITE_BARRIER(xor_unlock_is_negative_byte(1, &test_var), true);
+	KCSAN_EXPECT_RW_BARRIER(xor_unlock_is_negative_byte(1, &test_var), true);
 #endif
 	kcsan_nestable_atomic_end();
 }
diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c
index 8679322450f2..619be7417420 100644
--- a/kernel/kcsan/selftest.c
+++ b/kernel/kcsan/selftest.c
@@ -228,10 +228,10 @@ static bool __init test_barrier(void)
 	spin_lock(&test_spinlock);
 	KCSAN_CHECK_RW_BARRIER(spin_unlock(&test_spinlock));
-#ifdef clear_bit_unlock_is_negative_byte
-	KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
-	KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
-	KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var));
+#ifdef xor_unlock_is_negative_byte
+	KCSAN_CHECK_RW_BARRIER(xor_unlock_is_negative_byte(1, &test_var));
+	KCSAN_CHECK_READ_BARRIER(xor_unlock_is_negative_byte(1, &test_var));
+	KCSAN_CHECK_WRITE_BARRIER(xor_unlock_is_negative_byte(1, &test_var));
 #endif
 	kcsan_nestable_atomic_end();
diff --git a/mm/filemap.c b/mm/filemap.c
index ea317cdf9532..7da465616317 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1484,6 +1484,11 @@ void folio_add_wait_queue(struct folio *folio, wait_queue_entry_t *waiter)
 }
 EXPORT_SYMBOL_GPL(folio_add_wait_queue);
 
+#ifdef xor_unlock_is_negative_byte
+#define clear_bit_unlock_is_negative_byte(nr, p)	\
+	xor_unlock_is_negative_byte(1 << nr, p)
+#endif
+
 #ifndef clear_bit_unlock_is_negative_byte
 
 /*
diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index b61cc6a42541..c1de091dfc92 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1098,9 +1098,10 @@ static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
 	KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
 	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
 
-#if defined(clear_bit_unlock_is_negative_byte)
-	KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
-				clear_bit_unlock_is_negative_byte(nr, addr));
+#if defined(xor_unlock_is_negative_byte)
+	if (nr < 7)
+		KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
+				xor_unlock_is_negative_byte(1 << nr, addr));
 #endif
 }
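
For readers without the kernel tree handy, the generic fallback above (an
atomic fetch-xor with release ordering, then a test of bit 7 in the old
value) can be modelled by this standalone userspace sketch; it uses the
GCC/Clang __atomic builtins, and the demo names are made up, not kernel API:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the generic arch_xor_unlock_is_negative_byte() above. */
static bool xor_unlock_is_negative_byte_demo(unsigned long mask,
					     volatile unsigned long *p)
{
	/* Atomically xor @mask into *p with release ordering... */
	unsigned long old = __atomic_fetch_xor(p, mask, __ATOMIC_RELEASE);

	/* ...and report whether bit 7 (PG_waiters in the kernel) was set. */
	return old & (1UL << 7);
}

int main(void)
{
	/* Byte layout: bit 0 = "locked", bit 7 = "a waiter is queued". */
	volatile unsigned long word = (1UL << 7) | (1UL << 0);

	/* Drop the lock bit by xor; the return value says to wake the waiter. */
	bool waiters = xor_unlock_is_negative_byte_demo(1UL << 0, &word);

	printf("waiters=%d word=%#lx\n", waiters, (unsigned long)word);
	return 0;
}

The instrumented wrapper in the patch layers kcsan_release() and the
KASAN/KCSAN write instrumentation on top of exactly this operation.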