From patchwork Mon Jun 10 20:48:15 2024
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 13692417
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Peter Anvin, Ingo Molnar, Borislav Petkov, Thomas Gleixner,
	Rasmus Villemoes, Josh Poimboeuf, Catalin Marinas, Will Deacon
Cc: Linux Kernel Mailing List, the arch/x86 maintainers,
	linux-arm-kernel@lists.infradead.org, linux-arch,
	Linus Torvalds
Subject: [PATCH 1/7] vfs: dcache: move hashlen_hash() from callers into d_hash()
Date: Mon, 10 Jun 2024 13:48:15 -0700
Message-ID: <20240610204821.230388-2-torvalds@linux-foundation.org>
X-Mailer: git-send-email 2.45.1.209.gc6f12300df
In-Reply-To: <20240610204821.230388-1-torvalds@linux-foundation.org>
References: <20240610204821.230388-1-torvalds@linux-foundation.org>
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Both __d_lookup_rcu() and __d_lookup_rcu_op_compare() have the full 'name_hash' value of the qstr that they want to look up, and mask it off to just the low 32-bit hash before calling down to d_hash(). Other callers just load the 32-bit hash and pass it as the argument. If we move the masking into d_hash() itself, it simplifies the two callers that currently do the masking, and is a no-op for the other cases. It doesn't actually change the generated code since the compiler will inline d_hash() and see that the end result is the same. [ Technically, since the parse tree changes, the code generation may not be 100% the same, and for me on x86-64, this does result in gcc switching the operands around for one 'cmpl' instruction. So not necessarily the exact same code generation, but equivalent ] However, this does encapsulate the 'd_hash()' operation more, and makes the shift operation in particular be a "shift 32 bits right, return full word". Which matches the instruction semantics on both x86-64 and arm64 better, since a 32-bit shift will clear the upper bits. That makes the next step of introducing a "shift by runtime constant" more obvious and generates the shift with no extraneous type masking. Signed-off-by: Linus Torvalds --- fs/dcache.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/dcache.c b/fs/dcache.c index 407095188f83..8b4382f5c99d 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -100,9 +100,9 @@ static unsigned int d_hash_shift __ro_after_init; static struct hlist_bl_head *dentry_hashtable __ro_after_init; -static inline struct hlist_bl_head *d_hash(unsigned int hash) +static inline struct hlist_bl_head *d_hash(unsigned long hashlen) { - return dentry_hashtable + (hash >> d_hash_shift); + return dentry_hashtable + ((u32)hashlen >> d_hash_shift); } #define IN_LOOKUP_SHIFT 10 @@ -2104,7 +2104,7 @@ static noinline struct dentry *__d_lookup_rcu_op_compare( unsigned *seqp) { u64 hashlen = name->hash_len; - struct hlist_bl_head *b = d_hash(hashlen_hash(hashlen)); + struct hlist_bl_head *b = d_hash(hashlen); struct hlist_bl_node *node; struct dentry *dentry; @@ -2171,7 +2171,7 @@ struct dentry *__d_lookup_rcu(const struct dentry *parent, { u64 hashlen = name->hash_len; const unsigned char *str = name->name; - struct hlist_bl_head *b = d_hash(hashlen_hash(hashlen)); + struct hlist_bl_head *b = d_hash(hashlen); struct hlist_bl_node *node; struct dentry *dentry;