From patchwork Tue Jan 17 00:09:48 2023
From: Paulo Alcantara <pc@cjr.nz>
To: smfrench@gmail.com
Cc: linux-cifs@vger.kernel.org, Paulo Alcantara <pc@cjr.nz>
Subject: [PATCH 1/5] cifs: fix potential deadlock in cache_refresh_path()
Date: Mon, 16 Jan 2023 21:09:48 -0300
Message-Id: <20230117000952.9965-2-pc@cjr.nz>
In-Reply-To: <20230117000952.9965-1-pc@cjr.nz>
References: <20230117000952.9965-1-pc@cjr.nz>

Avoid getting a DFS referral while holding an exclusive lock in
cache_refresh_path(), because the tcon IPC used for getting the referral
could be disconnected and thus cause a deadlock as shown below:

task A
------
cifs_demultiplex_thread()
 cifs_handle_standard()
  reconnect_dfs_server()
   dfs_cache_noreq_find()
    down_read()

task B
------
dfs_cache_find()
 cache_refresh_path()
  down_write()
   get_dfs_referral()
    smb2_get_dfs_refer()
     SMB2_ioctl()
      cifs_send_recv()
       compound_send_recv()
        wait_for_response()

where task A cannot wake up task B because task A is blocked on the
exclusive lock that task B holds in cache_refresh_path().

Fixes: c9f711039905 ("cifs: keep referral server sessions alive")
Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 37 ++++++++++++++++++-------------------
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index e20f8880363f..a8ddac1c054c 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -770,46 +770,45 @@ static int get_dfs_referral(const unsigned int xid, struct cifs_ses *ses, const
  */
 static int cache_refresh_path(const unsigned int xid, struct cifs_ses *ses, const char *path)
 {
-	int rc;
-	struct cache_entry *ce;
 	struct dfs_info3_param *refs = NULL;
+	struct cache_entry *ce;
 	int numrefs = 0;
-	bool newent = false;
+	int rc;
 
 	cifs_dbg(FYI, "%s: search path: %s\n", __func__, path);
 
-	down_write(&htable_rw_lock);
+	down_read(&htable_rw_lock);
 
 	ce = lookup_cache_entry(path);
-	if (!IS_ERR(ce)) {
-		if (!cache_entry_expired(ce)) {
-			dump_ce(ce);
-			up_write(&htable_rw_lock);
-			return 0;
-		}
-	} else {
-		newent = true;
+	if (!IS_ERR(ce) && !cache_entry_expired(ce)) {
+		up_read(&htable_rw_lock);
+		return 0;
 	}
 
+	up_read(&htable_rw_lock);
+
 	/*
 	 * Either the entry was not found, or it is expired.
 	 * Request a new DFS referral in order to create or update a cache entry.
 	 */
 	rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
 	if (rc)
-		goto out_unlock;
+		goto out;
 
 	dump_refs(refs, numrefs);
 
-	if (!newent) {
-		rc = update_cache_entry_locked(ce, refs, numrefs);
-		goto out_unlock;
+	down_write(&htable_rw_lock);
+	/* Re-check as another task might have it added or refreshed already */
+	ce = lookup_cache_entry(path);
+	if (!IS_ERR(ce)) {
+		if (cache_entry_expired(ce))
+			rc = update_cache_entry_locked(ce, refs, numrefs);
+	} else {
+		rc = add_cache_entry_locked(refs, numrefs);
 	}
 
-	rc = add_cache_entry_locked(refs, numrefs);
-
-out_unlock:
 	up_write(&htable_rw_lock);
+out:
 	free_dfs_info_array(refs, numrefs);
 	return rc;
 }
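
The hazard fixed above is generic: a thread must never wait for a reply
while holding a lock that the reply-delivery path itself has to take. A
rough userspace sketch of the corrected flow, with a pthread rwlock
standing in for htable_rw_lock and a condition variable standing in for
the SMB2 response wakeup (all names below are hypothetical, not cifs.ko
code):

/*
 * Userspace sketch of the locking rule behind this fix: never wait for a
 * reply while holding a lock that the reply-delivery path needs.
 *
 * Broken ordering (old cache_refresh_path()):
 *   refresher:     write-lock cache, then wait for the reply
 *   reply thread:  read-lock cache before signalling   <- blocks forever
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t reply_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t reply_cv = PTHREAD_COND_INITIALIZER;
static int reply_ready;
static int cache_value = -1;		/* -1 means "no cache entry yet" */

static void *reply_thread(void *arg)
{
	(void)arg;
	/* Like task A: the delivery path takes the cache lock for reading. */
	pthread_rwlock_rdlock(&cache_lock);
	pthread_rwlock_unlock(&cache_lock);

	pthread_mutex_lock(&reply_mtx);
	reply_ready = 1;
	pthread_cond_signal(&reply_cv);
	pthread_mutex_unlock(&reply_mtx);
	return NULL;
}

static void refresh_cache(void)
{
	/* 1. Peek at the cache under a shared lock only. */
	pthread_rwlock_rdlock(&cache_lock);
	int cached = cache_value;
	pthread_rwlock_unlock(&cache_lock);
	if (cached >= 0)
		return;

	/* 2. Wait for the "referral" with no cache lock held at all. */
	pthread_mutex_lock(&reply_mtx);
	while (!reply_ready)
		pthread_cond_wait(&reply_cv, &reply_mtx);
	pthread_mutex_unlock(&reply_mtx);

	/* 3. Retake the lock exclusively only to publish the result. */
	pthread_rwlock_wrlock(&cache_lock);
	if (cache_value < 0)		/* re-check: another task may have won */
		cache_value = 42;
	pthread_rwlock_unlock(&cache_lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, reply_thread, NULL);
	refresh_cache();
	pthread_join(t, NULL);
	printf("cache_value = %d\n", cache_value);
	return 0;
}

The re-check in step 3 mirrors the "Re-check as another task might have it
added or refreshed already" comment in the patch: once the lock has been
dropped, the lookup has to be repeated before publishing.
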
From patchwork Tue Jan 17 00:09:49 2023
From: Paulo Alcantara <pc@cjr.nz>
To: smfrench@gmail.com
Cc: linux-cifs@vger.kernel.org, Paulo Alcantara <pc@cjr.nz>
Subject: [PATCH 2/5] cifs: avoid re-lookups in dfs_cache_find()
Date: Mon, 16 Jan 2023 21:09:49 -0300
Message-Id: <20230117000952.9965-3-pc@cjr.nz>
In-Reply-To: <20230117000952.9965-1-pc@cjr.nz>
References: <20230117000952.9965-1-pc@cjr.nz>

Simply downgrade the write lock after cache updates in
cache_refresh_path() and avoid an unnecessary re-lookup in
dfs_cache_find().

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 57 ++++++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index a8ddac1c054c..c82721b3277c 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -558,7 +558,8 @@ static void remove_oldest_entry_locked(void)
 }
 
 /* Add a new DFS cache entry */
-static int add_cache_entry_locked(struct dfs_info3_param *refs, int numrefs)
+static struct cache_entry *add_cache_entry_locked(struct dfs_info3_param *refs,
+						  int numrefs)
 {
 	int rc;
 	struct cache_entry *ce;
@@ -573,11 +574,11 @@ static int add_cache_entry_locked(struct dfs_info3_param *refs, int numrefs)
 
 	rc = cache_entry_hash(refs[0].path_name, strlen(refs[0].path_name), &hash);
 	if (rc)
-		return rc;
+		return ERR_PTR(rc);
 
 	ce = alloc_cache_entry(refs, numrefs);
 	if (IS_ERR(ce))
-		return PTR_ERR(ce);
+		return ce;
 
 	spin_lock(&cache_ttl_lock);
 	if (!cache_ttl) {
@@ -594,7 +595,7 @@ static int add_cache_entry_locked(struct dfs_info3_param *refs, int numrefs)
 
 	atomic_inc(&cache_count);
 
-	return 0;
+	return ce;
 }
 
 /* Check if two DFS paths are equal.  @s1 and @s2 are expected to be in @cache_cp's charset */
@@ -767,8 +768,12 @@ static int get_dfs_referral(const unsigned int xid, struct cifs_ses *ses, const
  *
  * For interlinks, cifs_mount() and expand_dfs_referral() are supposed to
  * handle them properly.
+ *
+ * On success, return entry with acquired lock for reading, otherwise error ptr.
  */
-static int cache_refresh_path(const unsigned int xid, struct cifs_ses *ses, const char *path)
+static struct cache_entry *cache_refresh_path(const unsigned int xid,
+					      struct cifs_ses *ses,
+					      const char *path)
 {
 	struct dfs_info3_param *refs = NULL;
 	struct cache_entry *ce;
@@ -780,10 +785,8 @@ static int cache_refresh_path(const unsigned int xid, struct cifs_ses *ses, cons
 	down_read(&htable_rw_lock);
 
 	ce = lookup_cache_entry(path);
-	if (!IS_ERR(ce) && !cache_entry_expired(ce)) {
-		up_read(&htable_rw_lock);
-		return 0;
-	}
+	if (!IS_ERR(ce) && !cache_entry_expired(ce))
+		return ce;
 
 	up_read(&htable_rw_lock);
 
@@ -792,8 +795,10 @@ static int cache_refresh_path(const unsigned int xid, struct cifs_ses *ses, cons
 	 * Request a new DFS referral in order to create or update a cache entry.
 	 */
 	rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
-	if (rc)
+	if (rc) {
+		ce = ERR_PTR(rc);
 		goto out;
+	}
 
 	dump_refs(refs, numrefs);
 
@@ -801,16 +806,24 @@ static struct cache_entry *cache_refresh_path(const unsigned int xid,
 	/* Re-check as another task might have it added or refreshed already */
 	ce = lookup_cache_entry(path);
 	if (!IS_ERR(ce)) {
-		if (cache_entry_expired(ce))
+		if (cache_entry_expired(ce)) {
 			rc = update_cache_entry_locked(ce, refs, numrefs);
+			if (rc)
+				ce = ERR_PTR(rc);
+		}
 	} else {
-		rc = add_cache_entry_locked(refs, numrefs);
+		ce = add_cache_entry_locked(refs, numrefs);
 	}
 
-	up_write(&htable_rw_lock);
+	if (IS_ERR(ce)) {
+		up_write(&htable_rw_lock);
+		goto out;
+	}
+
+	downgrade_write(&htable_rw_lock);
 out:
 	free_dfs_info_array(refs, numrefs);
-	return rc;
+	return ce;
 }
 
 /*
@@ -930,15 +943,8 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses, const struct nl
 	if (IS_ERR(npath))
 		return PTR_ERR(npath);
 
-	rc = cache_refresh_path(xid, ses, npath);
-	if (rc)
-		goto out_free_path;
-
-	down_read(&htable_rw_lock);
-
-	ce = lookup_cache_entry(npath);
+	ce = cache_refresh_path(xid, ses, npath);
 	if (IS_ERR(ce)) {
-		up_read(&htable_rw_lock);
 		rc = PTR_ERR(ce);
 		goto out_free_path;
 	}
@@ -1034,10 +1040,13 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 
 	cifs_dbg(FYI, "%s: update target hint - path: %s\n", __func__, npath);
 
-	rc = cache_refresh_path(xid, ses, npath);
-	if (rc)
+	ce = cache_refresh_path(xid, ses, npath);
+	if (IS_ERR(ce)) {
+		rc = PTR_ERR(ce);
 		goto out_free_path;
+	}
 
+	up_read(&htable_rw_lock);
 	down_write(&htable_rw_lock);
 
 	ce = lookup_cache_entry(npath);
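
With this change, cache_refresh_path() returns the entry with
htable_rw_lock held for reading; after a cache update it uses
downgrade_write() so the lock is converted to shared mode without ever
being dropped. POSIX rwlocks have no atomic downgrade, so the sketch
below (hypothetical names, not cifs.ko code) has to release and reacquire
the lock, which is precisely the window the kernel primitive closes:

/*
 * Sketch of the "return with the lock held for reading" contract that
 * cache_refresh_path() now follows.  pthread rwlocks cannot downgrade
 * atomically, so lookup_or_refresh() unlocks and relocks; the kernel's
 * downgrade_write() does this in one step, so no writer can slip in
 * between publishing the entry and handing it back to the caller.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
static char cached_target[64];	/* "" means no cache entry yet */

/* On success, returns 0 with table_lock held for reading; caller unlocks. */
static int lookup_or_refresh(const char *path, const char **target)
{
	(void)path;

	pthread_rwlock_rdlock(&table_lock);
	if (cached_target[0]) {
		*target = cached_target;	/* fast path: already cached */
		return 0;
	}
	pthread_rwlock_unlock(&table_lock);

	/* Slow path: fetch the referral with no lock held (may block). */
	const char *fetched = "\\\\server\\share";

	pthread_rwlock_wrlock(&table_lock);
	if (!cached_target[0])	/* re-check: another task may have won */
		snprintf(cached_target, sizeof(cached_target), "%s", fetched);
	/*
	 * Kernel version: downgrade_write(&table_lock) right here.
	 * Userspace approximation: drop and retake as a reader.
	 */
	pthread_rwlock_unlock(&table_lock);
	pthread_rwlock_rdlock(&table_lock);
	if (!cached_target[0]) {
		pthread_rwlock_unlock(&table_lock);
		return -ENOENT;
	}
	*target = cached_target;
	return 0;
}

int main(void)
{
	const char *tgt;

	if (!lookup_or_refresh("\\\\dfsroot\\link", &tgt)) {
		printf("target: %s\n", tgt);
		pthread_rwlock_unlock(&table_lock);	/* caller releases */
	}
	return 0;
}
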
From patchwork Tue Jan 17 00:09:50 2023
From: Paulo Alcantara <pc@cjr.nz>
To: smfrench@gmail.com
Cc: linux-cifs@vger.kernel.org, Paulo Alcantara <pc@cjr.nz>
Subject: [PATCH 3/5] cifs: don't take exclusive lock for updating target hints
Date: Mon, 16 Jan 2023 21:09:50 -0300
Message-Id: <20230117000952.9965-4-pc@cjr.nz>
In-Reply-To: <20230117000952.9965-1-pc@cjr.nz>
References: <20230117000952.9965-1-pc@cjr.nz>

Avoid contention while updating dfs target hints.  It should be
perfectly fine to update them under shared locks.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 47 +++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 27 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index c82721b3277c..49d1f390a6b8 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -269,7 +269,7 @@ static int dfscache_proc_show(struct seq_file *m, void *v)
 		list_for_each_entry(t, &ce->tlist, list) {
 			seq_printf(m, " %s%s\n",
 				   t->name,
-				   ce->tgthint == t ? " (target hint)" : "");
+				   READ_ONCE(ce->tgthint) == t ? " (target hint)" : "");
 		}
 	}
 }
@@ -321,7 +321,7 @@ static inline void dump_tgts(const struct cache_entry *ce)
 	cifs_dbg(FYI, "target list:\n");
 	list_for_each_entry(t, &ce->tlist, list) {
 		cifs_dbg(FYI, " %s%s\n", t->name,
-			 ce->tgthint == t ? " (target hint)" : "");
+			 READ_ONCE(ce->tgthint) == t ? " (target hint)" : "");
 	}
 }
 
@@ -427,7 +427,7 @@ static int cache_entry_hash(const void *data, int size, unsigned int *hash)
 /* Return target hint of a DFS cache entry */
 static inline char *get_tgt_name(const struct cache_entry *ce)
 {
-	struct cache_dfs_tgt *t = ce->tgthint;
+	struct cache_dfs_tgt *t = READ_ONCE(ce->tgthint);
 
 	return t ? t->name : ERR_PTR(-ENOENT);
 }
@@ -470,6 +470,7 @@ static struct cache_dfs_tgt *alloc_target(const char *name, int path_consumed)
 static int copy_ref_data(const struct dfs_info3_param *refs, int numrefs,
 			 struct cache_entry *ce, const char *tgthint)
 {
+	struct cache_dfs_tgt *target;
 	int i;
 
 	ce->ttl = max_t(int, refs[0].ttl, CACHE_MIN_TTL);
@@ -496,8 +497,9 @@ static int copy_ref_data(const struct dfs_info3_param *refs, int numrefs,
 		ce->numtgts++;
 	}
 
-	ce->tgthint = list_first_entry_or_null(&ce->tlist,
-					       struct cache_dfs_tgt, list);
+	target = list_first_entry_or_null(&ce->tlist, struct cache_dfs_tgt,
+					  list);
+	WRITE_ONCE(ce->tgthint, target);
 
 	return 0;
 }
@@ -712,14 +714,15 @@ void dfs_cache_destroy(void)
 static int update_cache_entry_locked(struct cache_entry *ce, const struct dfs_info3_param *refs,
 				     int numrefs)
 {
+	struct cache_dfs_tgt *target;
+	char *th = NULL;
 	int rc;
-	char *s, *th = NULL;
 
 	WARN_ON(!rwsem_is_locked(&htable_rw_lock));
 
-	if (ce->tgthint) {
-		s = ce->tgthint->name;
-		th = kstrdup(s, GFP_ATOMIC);
+	target = READ_ONCE(ce->tgthint);
+	if (target) {
+		th = kstrdup(target->name, GFP_ATOMIC);
 		if (!th)
 			return -ENOMEM;
 	}
@@ -890,7 +893,7 @@ static int get_targets(struct cache_entry *ce, struct dfs_cache_tgt_list *tl)
 		}
 		it->it_path_consumed = t->path_consumed;
 
-		if (ce->tgthint == t)
+		if (READ_ONCE(ce->tgthint) == t)
 			list_add(&it->it_list, head);
 		else
 			list_add_tail(&it->it_list, head);
@@ -1046,23 +1049,14 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 		goto out_free_path;
 	}
 
-	up_read(&htable_rw_lock);
-	down_write(&htable_rw_lock);
-
-	ce = lookup_cache_entry(npath);
-	if (IS_ERR(ce)) {
-		rc = PTR_ERR(ce);
-		goto out_unlock;
-	}
-
-	t = ce->tgthint;
+	t = READ_ONCE(ce->tgthint);
 
 	if (likely(!strcasecmp(it->it_name, t->name)))
 		goto out_unlock;
 
 	list_for_each_entry(t, &ce->tlist, list) {
 		if (!strcasecmp(t->name, it->it_name)) {
-			ce->tgthint = t;
+			WRITE_ONCE(ce->tgthint, t);
 			cifs_dbg(FYI, "%s: new target hint: %s\n", __func__,
 				 it->it_name);
 			break;
@@ -1070,7 +1064,7 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 	}
 
 out_unlock:
-	up_write(&htable_rw_lock);
+	up_read(&htable_rw_lock);
 out_free_path:
 	kfree(npath);
 	return rc;
@@ -1100,21 +1094,20 @@ void dfs_cache_noreq_update_tgthint(const char *path, const struct dfs_cache_tgt
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, path);
 
-	if (!down_write_trylock(&htable_rw_lock))
-		return;
+	down_read(&htable_rw_lock);
 
 	ce = lookup_cache_entry(path);
 	if (IS_ERR(ce))
		goto out_unlock;
 
-	t = ce->tgthint;
+	t = READ_ONCE(ce->tgthint);
 
 	if (unlikely(!strcasecmp(it->it_name, t->name)))
 		goto out_unlock;
 
 	list_for_each_entry(t, &ce->tlist, list) {
 		if (!strcasecmp(t->name, it->it_name)) {
-			ce->tgthint = t;
+			WRITE_ONCE(ce->tgthint, t);
 			cifs_dbg(FYI, "%s: new target hint: %s\n", __func__,
 				 it->it_name);
 			break;
@@ -1122,7 +1115,7 @@ void dfs_cache_noreq_update_tgthint(const char *path, const struct dfs_cache_tgt
 	}
 
 out_unlock:
-	up_write(&htable_rw_lock);
+	up_read(&htable_rw_lock);
 }
 
 /**
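
The reason the hint can be moved under a shared lock is that ce->tgthint
is a single aligned pointer: concurrent readers only ever need to observe
either the old or the new value, and READ_ONCE()/WRITE_ONCE() keep the
compiler from tearing, caching, or re-reading it. A rough userspace
analogue using C11 relaxed atomics (the structures and helpers below are
made up for illustration, not the cifs.ko types):

/*
 * Userspace analogue of the READ_ONCE()/WRITE_ONCE() usage in this patch:
 * readers and writers hold the rwlock only in shared mode, and the
 * target-hint pointer itself is published with single atomic accesses so
 * no reader ever sees a half-written value.
 */
#include <stdatomic.h>
#include <stdio.h>

struct target {
	const char *name;
};

struct entry {
	struct target *targets;
	int numtargets;
	_Atomic(struct target *) tgthint;	/* plays the role of ce->tgthint */
};

/* WRITE_ONCE(ce->tgthint, t) analogue */
static void set_tgthint(struct entry *e, struct target *t)
{
	atomic_store_explicit(&e->tgthint, t, memory_order_relaxed);
}

/* READ_ONCE(ce->tgthint) analogue */
static struct target *get_tgthint(struct entry *e)
{
	return atomic_load_explicit(&e->tgthint, memory_order_relaxed);
}

int main(void)
{
	struct target tgts[] = { { "serverA" }, { "serverB" } };
	struct entry e = { .targets = tgts, .numtargets = 2 };

	set_tgthint(&e, &tgts[0]);
	/* A failover path would move the hint; readers pick it up lazily. */
	set_tgthint(&e, &tgts[1]);
	printf("hint: %s\n", get_tgthint(&e)->name);
	return 0;
}
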
From patchwork Tue Jan 17 00:09:51 2023
From: Paulo Alcantara <pc@cjr.nz>
To: smfrench@gmail.com
Cc: linux-cifs@vger.kernel.org, Paulo Alcantara <pc@cjr.nz>
Subject: [PATCH 4/5] cifs: remove duplicate code in __refresh_tcon()
Date: Mon, 16 Jan 2023 21:09:51 -0300
Message-Id: <20230117000952.9965-5-pc@cjr.nz>
In-Reply-To: <20230117000952.9965-1-pc@cjr.nz>
References: <20230117000952.9965-1-pc@cjr.nz>

The logic for creating or updating a cache entry in __refresh_tcon() can
simply be done with cache_refresh_path(), so use it instead.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 69 +++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 37 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 49d1f390a6b8..67890960c763 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -776,7 +776,8 @@ static int get_dfs_referral(const unsigned int xid, struct cifs_ses *ses, const
  */
 static struct cache_entry *cache_refresh_path(const unsigned int xid,
 					      struct cifs_ses *ses,
-					      const char *path)
+					      const char *path,
+					      bool force_refresh)
 {
 	struct dfs_info3_param *refs = NULL;
 	struct cache_entry *ce;
@@ -788,13 +789,14 @@ static struct cache_entry *cache_refresh_path(const unsigned int xid,
 	down_read(&htable_rw_lock);
 
 	ce = lookup_cache_entry(path);
-	if (!IS_ERR(ce) && !cache_entry_expired(ce))
+	if (!IS_ERR(ce) && !force_refresh && !cache_entry_expired(ce))
 		return ce;
 
 	up_read(&htable_rw_lock);
 
 	/*
-	 * Either the entry was not found, or it is expired.
+	 * Either the entry was not found, or it is expired, or it is a forced
+	 * refresh.
 	 * Request a new DFS referral in order to create or update a cache entry.
 	 */
 	rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
@@ -809,7 +811,7 @@ static struct cache_entry *cache_refresh_path(const unsigned int xid,
 	/* Re-check as another task might have it added or refreshed already */
 	ce = lookup_cache_entry(path);
 	if (!IS_ERR(ce)) {
-		if (cache_entry_expired(ce)) {
+		if (force_refresh || cache_entry_expired(ce)) {
 			rc = update_cache_entry_locked(ce, refs, numrefs);
 			if (rc)
 				ce = ERR_PTR(rc);
@@ -946,7 +948,7 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses, const struct nl
 	if (IS_ERR(npath))
 		return PTR_ERR(npath);
 
-	ce = cache_refresh_path(xid, ses, npath);
+	ce = cache_refresh_path(xid, ses, npath, false);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
 		goto out_free_path;
@@ -1043,7 +1045,7 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 
 	cifs_dbg(FYI, "%s: update target hint - path: %s\n", __func__, npath);
 
-	ce = cache_refresh_path(xid, ses, npath);
+	ce = cache_refresh_path(xid, ses, npath, false);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
 		goto out_free_path;
@@ -1321,35 +1323,37 @@ static bool target_share_equal(struct TCP_Server_Info *server, const char *s1, c
  * Mark dfs tcon for reconnecting when the currently connected tcon does not match any of the new
  * target shares in @refs.
  */
-static void mark_for_reconnect_if_needed(struct cifs_tcon *tcon, struct dfs_cache_tgt_list *tl,
-					 const struct dfs_info3_param *refs, int numrefs)
+static void mark_for_reconnect_if_needed(struct TCP_Server_Info *server,
+					 struct dfs_cache_tgt_list *old_tl,
+					 struct dfs_cache_tgt_list *new_tl)
 {
-	struct dfs_cache_tgt_iterator *it;
-	int i;
+	struct dfs_cache_tgt_iterator *oit, *nit;
 
-	for (it = dfs_cache_get_tgt_iterator(tl); it; it = dfs_cache_get_next_tgt(tl, it)) {
-		for (i = 0; i < numrefs; i++) {
-			if (target_share_equal(tcon->ses->server, dfs_cache_get_tgt_name(it),
-					       refs[i].node_name))
+	for (oit = dfs_cache_get_tgt_iterator(old_tl); oit;
+	     oit = dfs_cache_get_next_tgt(old_tl, oit)) {
+		for (nit = dfs_cache_get_tgt_iterator(new_tl); nit;
+		     nit = dfs_cache_get_next_tgt(new_tl, nit)) {
+			if (target_share_equal(server,
+					       dfs_cache_get_tgt_name(oit),
+					       dfs_cache_get_tgt_name(nit)))
 				return;
 		}
 	}
 
 	cifs_dbg(FYI, "%s: no cached or matched targets. mark dfs share for reconnect.\n", __func__);
-	cifs_signal_cifsd_for_reconnect(tcon->ses->server, true);
+	cifs_signal_cifsd_for_reconnect(server, true);
 }
 
 /* Refresh dfs referral of tcon and mark it for reconnect if needed */
 static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_refresh)
 {
-	struct dfs_cache_tgt_list tl = DFS_CACHE_TGT_LIST_INIT(tl);
+	struct dfs_cache_tgt_list old_tl = DFS_CACHE_TGT_LIST_INIT(old_tl);
+	struct dfs_cache_tgt_list new_tl = DFS_CACHE_TGT_LIST_INIT(new_tl);
 	struct cifs_ses *ses = CIFS_DFS_ROOT_SES(tcon->ses);
 	struct cifs_tcon *ipc = ses->tcon_ipc;
-	struct dfs_info3_param *refs = NULL;
 	bool needs_refresh = false;
 	struct cache_entry *ce;
 	unsigned int xid;
-	int numrefs = 0;
 	int rc = 0;
 
 	xid = get_xid();
@@ -1358,9 +1362,8 @@ static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_r
 	ce = lookup_cache_entry(path);
 	needs_refresh = force_refresh || IS_ERR(ce) || cache_entry_expired(ce);
 	if (!IS_ERR(ce)) {
-		rc = get_targets(ce, &tl);
-		if (rc)
-			cifs_dbg(FYI, "%s: could not get dfs targets: %d\n", __func__, rc);
+		rc = get_targets(ce, &old_tl);
+		cifs_dbg(FYI, "%s: get_targets: %d\n", __func__, rc);
 	}
 	up_read(&htable_rw_lock);
 
@@ -1377,26 +1380,18 @@ static int __refresh_tcon(const char *path, struct cifs_tcon *tcon, bool force_r
 	}
 	spin_unlock(&ipc->tc_lock);
 
-	rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
-	if (!rc) {
-		/* Create or update a cache entry with the new referral */
-		dump_refs(refs, numrefs);
-
-		down_write(&htable_rw_lock);
-		ce = lookup_cache_entry(path);
-		if (IS_ERR(ce))
-			add_cache_entry_locked(refs, numrefs);
-		else if (force_refresh || cache_entry_expired(ce))
-			update_cache_entry_locked(ce, refs, numrefs);
-		up_write(&htable_rw_lock);
-
-		mark_for_reconnect_if_needed(tcon, &tl, refs, numrefs);
+	ce = cache_refresh_path(xid, ses, path, true);
+	if (!IS_ERR(ce)) {
+		rc = get_targets(ce, &new_tl);
+		up_read(&htable_rw_lock);
+		cifs_dbg(FYI, "%s: get_targets: %d\n", __func__, rc);
+		mark_for_reconnect_if_needed(tcon->ses->server, &old_tl, &new_tl);
 	}
 
 out:
 	free_xid(xid);
-	dfs_cache_free_tgts(&tl);
-	free_dfs_info_array(refs, numrefs);
+	dfs_cache_free_tgts(&old_tl);
+	dfs_cache_free_tgts(&new_tl);
 	return rc;
 }
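
After this rework, the reconnect decision compares the target list cached
before the refresh against the list returned by the refresh, and only
signals reconnect when the two sets are disjoint. The core test is a
small quadratic intersection over short lists; a standalone sketch with
plain strings in place of dfs_cache_tgt_list iterators and
target_share_equal() (the helper name below is made up):

/*
 * Standalone sketch of the old-vs-new target comparison done by
 * mark_for_reconnect_if_needed(): reconnect is needed only when none of
 * the previously cached targets shows up in the freshly fetched list.
 * The real code walks dfs_cache_tgt_list iterators and compares UNC
 * shares with target_share_equal(); case-insensitive strings stand in.
 */
#include <stdbool.h>
#include <stdio.h>
#include <strings.h>

static bool lists_share_target(const char **old_tl, int nold,
			       const char **new_tl, int nnew)
{
	for (int i = 0; i < nold; i++)
		for (int j = 0; j < nnew; j++)
			if (!strcasecmp(old_tl[i], new_tl[j]))
				return true;	/* still reachable, keep tcon */
	return false;				/* no overlap: mark for reconnect */
}

int main(void)
{
	const char *old_tl[] = { "\\\\srv1\\share", "\\\\srv2\\share" };
	const char *new_tl[] = { "\\\\srv3\\share", "\\\\srv2\\share" };

	if (!lists_share_target(old_tl, 2, new_tl, 2))
		printf("no matching targets: mark dfs share for reconnect\n");
	else
		printf("current target still valid\n");
	return 0;
}
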
From patchwork Tue Jan 17 00:09:52 2023
From: Paulo Alcantara <pc@cjr.nz>
To: smfrench@gmail.com
Cc: linux-cifs@vger.kernel.org, Paulo Alcantara <pc@cjr.nz>
Subject: [PATCH 5/5] cifs: handle cache lookup errors different than -ENOENT
Date: Mon, 16 Jan 2023 21:09:52 -0300
Message-Id: <20230117000952.9965-6-pc@cjr.nz>
In-Reply-To: <20230117000952.9965-1-pc@cjr.nz>
References: <20230117000952.9965-1-pc@cjr.nz>

lookup_cache_entry() might return an error other than -ENOENT (e.g. from
->char2uni()), so handle those errors as well in cache_refresh_path().

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 67890960c763..f426d1473bea 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -644,7 +644,9 @@ static struct cache_entry *__lookup_cache_entry(const char *path, unsigned int h
  *
  * Use whole path components in the match.  Must be called with htable_rw_lock held.
  *
+ * Return cached entry if successful.
 * Return ERR_PTR(-ENOENT) if the entry is not found.
+ * Return error ptr otherwise.
  */
 static struct cache_entry *lookup_cache_entry(const char *path)
 {
@@ -789,8 +791,13 @@ static struct cache_entry *cache_refresh_path(const unsigned int xid,
 	down_read(&htable_rw_lock);
 
 	ce = lookup_cache_entry(path);
-	if (!IS_ERR(ce) && !force_refresh && !cache_entry_expired(ce))
+	if (!IS_ERR(ce)) {
+		if (!force_refresh && !cache_entry_expired(ce))
+			return ce;
+	} else if (PTR_ERR(ce) != -ENOENT) {
+		up_read(&htable_rw_lock);
 		return ce;
+	}
 
 	up_read(&htable_rw_lock);
 
@@ -816,7 +823,7 @@ static struct cache_entry *cache_refresh_path(const unsigned int xid,
 			if (rc)
 				ce = ERR_PTR(rc);
 		}
-	} else {
+	} else if (PTR_ERR(ce) == -ENOENT) {
 		ce = add_cache_entry_locked(refs, numrefs);
 	}
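
The error plumbing in this last patch leans on the kernel's
ERR_PTR()/IS_ERR()/PTR_ERR() convention from linux/err.h, where a small
negative errno is encoded in an otherwise invalid pointer so a lookup can
return either an entry or a failure reason through one value, and -ENOENT
specifically means "no entry yet, go create one". A simplified userspace
rendition of how cache_refresh_path() separates the two cases (toy cache
and simplified macros, not the kernel's actual err.h):

/*
 * Simplified rendition of the ERR_PTR()/IS_ERR()/PTR_ERR() idiom as used
 * by lookup_cache_entry() and cache_refresh_path(): -ENOENT means "no
 * entry, go create one", any other error is passed straight back.
 */
#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int cache_hits;		/* toy "cache": pretend it is empty at first */

static void *lookup_entry(const char *path)
{
	(void)path;
	if (!cache_hits)
		return ERR_PTR(-ENOENT);	/* not found: caller may create it */
	return &cache_hits;			/* any non-error pointer is a hit */
}

int main(void)
{
	void *ce = lookup_entry("\\\\dfsroot\\link");

	if (IS_ERR(ce)) {
		if (PTR_ERR(ce) == -ENOENT) {
			printf("cache miss: fetch referral and add entry\n");
			cache_hits = 1;
		} else {
			printf("hard failure: %ld\n", PTR_ERR(ce));
			return 1;
		}
	}
	return 0;
}
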