From patchwork Fri Mar 1 15:21:25 2024
X-Patchwork-Submitter: Xiao Ni
X-Patchwork-Id: 13578595
X-Patchwork-Delegate: snitzer@redhat.com
From: Xiao Ni
To: song@kernel.org
Cc: yukuai1@huaweicloud.com, bmarzins@redhat.com, heinzm@redhat.com,
    snitzer@kernel.org, ncroxon@redhat.com, linux-raid@vger.kernel.org,
    dm-devel@lists.linux.dev
Subject: [PATCH 1/4] md: Revert "md: Don't register sync_thread for reshape directly"
Date: Fri, 1 Mar 2024 23:21:25 +0800
Message-Id: <20240301152128.13465-2-xni@redhat.com>
In-Reply-To: <20240301152128.13465-1-xni@redhat.com>
References: <20240301152128.13465-1-xni@redhat.com>
X-Mailing-List: dm-devel@lists.linux.dev

This reverts commit ad39c08186f8a0f221337985036ba86731d6aafe.

The reverted patch says there is no way to guarantee that md_do_sync()
will be executed; users should choose a suitable moment to wake up the
sync thread after registering it. This patch set tries to fix the dmraid
regressions with a minimal change. Together with patch 3, patch 4 and
commit 82ec0ae59d02 ("md: Make sure md_do_sync() will set
MD_RECOVERY_DONE"), all of the deadlock problems can be fixed. So revert
this commit; we can rethink the approach in the future.

Signed-off-by: Xiao Ni
---
 drivers/md/md.c     |  5 +----
 drivers/md/raid10.c | 16 ++++++++++++++--
 drivers/md/raid5.c  | 29 +++++++++++++++++++++++++++--
 3 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 9e41a9aaba8b..db4743ba7f6c 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9376,7 +9376,6 @@ static void md_start_sync(struct work_struct *ws)
 	struct mddev *mddev = container_of(ws, struct mddev, sync_work);
 	int spares = 0;
 	bool suspend = false;
-	char *name;
 
 	/*
 	 * If reshape is still in progress, spares won't be added or removed
@@ -9414,10 +9413,8 @@ static void md_start_sync(struct work_struct *ws)
 	if (spares)
 		md_bitmap_write_all(mddev->bitmap);
 
-	name = test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) ?
-			"reshape" : "resync";
 	rcu_assign_pointer(mddev->sync_thread,
-			   md_register_thread(md_do_sync, mddev, name));
+			   md_register_thread(md_do_sync, mddev, "resync"));
 	if (!mddev->sync_thread) {
 		pr_warn("%s: could not start resync thread...\n",
 			mdname(mddev));
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a5f8419e2df1..7412066ea22c 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4175,7 +4175,11 @@ static int raid10_run(struct mddev *mddev)
 		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
 		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+		rcu_assign_pointer(mddev->sync_thread,
+			md_register_thread(md_do_sync, mddev, "reshape"));
+		if (!mddev->sync_thread)
+			goto out_free_conf;
 	}
 
 	return 0;
@@ -4569,8 +4573,16 @@ static int raid10_start_reshape(struct mddev *mddev)
 	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 	clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+
+	rcu_assign_pointer(mddev->sync_thread,
+			   md_register_thread(md_do_sync, mddev, "reshape"));
+	if (!mddev->sync_thread) {
+		ret = -EAGAIN;
+		goto abort;
+	}
 	conf->reshape_checkpoint = jiffies;
+	md_wakeup_thread(mddev->sync_thread);
 	md_new_event();
 	return 0;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 6a7a32f7fb91..4c1f572cc00f 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7936,7 +7936,11 @@ static int raid5_run(struct mddev *mddev)
 		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
 		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+		rcu_assign_pointer(mddev->sync_thread,
+			md_register_thread(md_do_sync, mddev, "reshape"));
+		if (!mddev->sync_thread)
+			goto abort;
 	}
 
 	/* Ok, everything is just fine now */
@@ -8502,8 +8506,29 @@ static int raid5_start_reshape(struct mddev *mddev)
 	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
 	clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
-	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+	rcu_assign_pointer(mddev->sync_thread,
+			   md_register_thread(md_do_sync, mddev, "reshape"));
+	if (!mddev->sync_thread) {
+		mddev->recovery = 0;
+		spin_lock_irq(&conf->device_lock);
+		write_seqcount_begin(&conf->gen_lock);
+		mddev->raid_disks = conf->raid_disks = conf->previous_raid_disks;
+		mddev->new_chunk_sectors =
+			conf->chunk_sectors = conf->prev_chunk_sectors;
+		mddev->new_layout = conf->algorithm = conf->prev_algo;
+		rdev_for_each(rdev, mddev)
+			rdev->new_data_offset = rdev->data_offset;
+		smp_wmb();
+		conf->generation--;
+		conf->reshape_progress = MaxSector;
+		mddev->reshape_position = MaxSector;
+		write_seqcount_end(&conf->gen_lock);
+		spin_unlock_irq(&conf->device_lock);
+		return -EAGAIN;
+	}
 	conf->reshape_checkpoint = jiffies;
+	md_wakeup_thread(mddev->sync_thread);
 	md_new_event();
 	return 0;
 }
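Taken together, the hunks above restore the pattern where the personality
registers the reshape thread itself and only wakes it once the reshape
state is fully set up. Roughly (a simplified sketch of the restored flow;
the error unwinding shown in the raid5 hunk is omitted):

	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
	/* register the reshape thread directly ... */
	rcu_assign_pointer(mddev->sync_thread,
			   md_register_thread(md_do_sync, mddev, "reshape"));
	if (!mddev->sync_thread)
		return -EAGAIN;		/* the real code also rolls back the reshape state */
	conf->reshape_checkpoint = jiffies;
	/* ... and wake it only after the reshape state is complete */
	md_wakeup_thread(mddev->sync_thread);
	md_new_event();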
From patchwork Fri Mar 1 15:21:26 2024
X-Patchwork-Submitter: Xiao Ni
X-Patchwork-Id: 13578596
X-Patchwork-Delegate: snitzer@redhat.com
From: Xiao Ni
To: song@kernel.org
Cc: yukuai1@huaweicloud.com, bmarzins@redhat.com, heinzm@redhat.com,
    snitzer@kernel.org, ncroxon@redhat.com, linux-raid@vger.kernel.org,
    dm-devel@lists.linux.dev
Subject: [PATCH 2/4] md: Revert "md: Don't ignore suspended array in md_check_recovery()"
Date: Fri, 1 Mar 2024 23:21:26 +0800
Message-Id: <20240301152128.13465-3-xni@redhat.com>
In-Reply-To: <20240301152128.13465-1-xni@redhat.com>
References: <20240301152128.13465-1-xni@redhat.com>
X-Mailing-List: dm-devel@lists.linux.dev

This reverts commit 1baae052cccd08daf9a9d64c3f959d8cdb689757.

dmraid does not allow any I/O, including sync I/O, while the array is
suspended. Although the reverted patch is a simple change, supporting
that properly still needs more work. Right now we are trying to fix
regressions, so let's keep the changes as small as possible. We can
rethink this in the future.
Signed-off-by: Xiao Ni
---
 drivers/md/md.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index db4743ba7f6c..c4624814d94c 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9496,6 +9496,9 @@ static void unregister_sync_thread(struct mddev *mddev)
  */
 void md_check_recovery(struct mddev *mddev)
 {
+	if (READ_ONCE(mddev->suspended))
+		return;
+
 	if (mddev->bitmap)
 		md_bitmap_daemon_work(mddev);
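In other words, with this revert md_check_recovery() is again a no-op
while the array is suspended. A simplified view of the resulting entry
check (only the guard is shown; the rest of the function is unchanged):

	void md_check_recovery(struct mddev *mddev)
	{
		/* dm-raid must not see any I/O, not even sync I/O, while suspended */
		if (READ_ONCE(mddev->suspended))
			return;

		/* ... bitmap daemon work and recovery handling continue as before ... */
	}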
From patchwork Fri Mar 1 15:21:27 2024
X-Patchwork-Submitter: Xiao Ni
X-Patchwork-Id: 13578597
X-Patchwork-Delegate: snitzer@redhat.com
From: Xiao Ni
To: song@kernel.org
Cc: yukuai1@huaweicloud.com, bmarzins@redhat.com, heinzm@redhat.com,
    snitzer@kernel.org, ncroxon@redhat.com, linux-raid@vger.kernel.org,
    dm-devel@lists.linux.dev
Subject: [PATCH 3/4] md: Set MD_RECOVERY_FROZEN before stop sync thread
Date: Fri, 1 Mar 2024 23:21:27 +0800
Message-Id: <20240301152128.13465-4-xni@redhat.com>
In-Reply-To: <20240301152128.13465-1-xni@redhat.com>
References: <20240301152128.13465-1-xni@redhat.com>
X-Mailing-List: dm-devel@lists.linux.dev

After commit f52f5c71f3d4b ("md: fix stopping sync thread"), dmraid stops
the sync thread asynchronously. The call chain is:

  dev_remove->dm_destroy->__dm_destroy->raid_postsuspend->raid_dtr

raid_postsuspend does two jobs: first it stops the sync thread, then it
suspends the array. It can stop the sync thread successfully now, but it
no longer sets MD_RECOVERY_FROZEN (a behaviour introduced by commit
f52f5c71f3d4b), so after raid_postsuspend the sync thread starts again.
raid_dtr then cannot stop the sync thread because the array is already
suspended.

This can be reproduced easily with these commands:

while [ 1 ]; do
	vgcreate test_vg /dev/loop0 /dev/loop1
	lvcreate --type raid1 -L 400M -m 1 -n test_lv test_vg
	lvchange -an test_vg
	vgremove test_vg -ff
done

Fixes: f52f5c71f3d4 ("md: fix stopping sync thread")
Signed-off-by: Xiao Ni
---
 drivers/md/md.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index c4624814d94c..c96a3bb073c4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6340,6 +6340,7 @@ static void __md_stop_writes(struct mddev *mddev)
 void md_stop_writes(struct mddev *mddev)
 {
 	mddev_lock_nointr(mddev);
+	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 	__md_stop_writes(mddev);
 	mddev_unlock(mddev);
 }
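With the hunk above, the ordering that matters is that MD_RECOVERY_FROZEN
is set under the mddev lock before writes and the sync thread are
stopped, so the sync thread is not started again behind
raid_postsuspend's back. The resulting function reads roughly:

	void md_stop_writes(struct mddev *mddev)
	{
		mddev_lock_nointr(mddev);
		/* freeze first, so the sync thread cannot restart after the stop */
		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
		__md_stop_writes(mddev);
		mddev_unlock(mddev);
	}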
From patchwork Fri Mar 1 15:21:28 2024
X-Patchwork-Submitter: Xiao Ni
X-Patchwork-Id: 13578598
X-Patchwork-Delegate: snitzer@redhat.com
From: Xiao Ni
To: song@kernel.org
Cc: yukuai1@huaweicloud.com, bmarzins@redhat.com, heinzm@redhat.com,
    snitzer@kernel.org, ncroxon@redhat.com, linux-raid@vger.kernel.org,
    dm-devel@lists.linux.dev
Subject: [PATCH 4/4] md/raid5: Don't check crossing reshape when reshape hasn't started
Date: Fri, 1 Mar 2024 23:21:28 +0800
Message-Id: <20240301152128.13465-5-xni@redhat.com>
In-Reply-To: <20240301152128.13465-1-xni@redhat.com>
References: <20240301152128.13465-1-xni@redhat.com>
X-Mailing-List: dm-devel@lists.linux.dev

stripe_ahead_of_reshape is used to check whether a stripe region crosses
the reshape position. So first, rename the function to
stripe_across_reshape to describe what it actually does.

For a backwards reshape, the reshape starts from the end of the array and
conf->reshape_progress is initialized to raid5_size. During the reshape,
if previous is true (set in make_stripe_request) and
max_sector >= conf->reshape_progress, I/O should wait until the reshape
window moves forward. But I/O does not need to wait if max_sector is
raid5_size. Also put the conditions directly into the function to make
the code easier to understand.

This can be reproduced easily with the lvm2 test
shell/lvconvert-raid-reshape.sh. For a dm raid reshape, the table needs
to be reloaded several times before the sync thread is started. In one of
those reloads dm raid uses MD_RECOVERY_WAIT to delay the reshape and does
not start the sync thread. Then an I/O comes in and waits, because
stripe_ahead_of_reshape returns true: it is a backwards reshape and
max_sector > conf->reshape_progress. But the reshape hasn't started.
So skip this check when reshape_progress is raid5_size.

Fixes: 486f60558607 ("md/raid5: Check all disks in a stripe_head for reshape progress")
Signed-off-by: Xiao Ni
---
 drivers/md/raid5.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 4c1f572cc00f..8d562c1344f4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5832,17 +5832,12 @@ static bool ahead_of_reshape(struct mddev *mddev, sector_t sector,
 		sector >= reshape_sector;
 }
 
-static bool range_ahead_of_reshape(struct mddev *mddev, sector_t min,
-				   sector_t max, sector_t reshape_sector)
-{
-	return mddev->reshape_backwards ? max < reshape_sector :
-					  min >= reshape_sector;
-}
-
-static bool stripe_ahead_of_reshape(struct mddev *mddev, struct r5conf *conf,
+static sector_t raid5_size(struct mddev *mddev, sector_t sectors, int raid_disks);
+static bool stripe_across_reshape(struct mddev *mddev, struct r5conf *conf,
 				    struct stripe_head *sh)
 {
 	sector_t max_sector = 0, min_sector = MaxSector;
+	sector_t reshape_pos = 0;
 	bool ret = false;
 	int dd_idx;
 
@@ -5856,9 +5851,12 @@ static bool stripe_ahead_of_reshape(struct mddev *mddev, struct r5conf *conf,
 
 	spin_lock_irq(&conf->device_lock);
 
-	if (!range_ahead_of_reshape(mddev, min_sector, max_sector,
-				    conf->reshape_progress))
-		/* mismatch, need to try again */
+	reshape_pos = conf->reshape_progress;
+	if (mddev->reshape_backwards) {
+		if (max_sector >= reshape_pos &&
+		    reshape_pos != raid5_size(mddev, 0, 0))
+			ret = true;
+	} else if (min_sector < reshape_pos)
 		ret = true;
 
 	spin_unlock_irq(&conf->device_lock);
@@ -5969,7 +5967,7 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
 	}
 
 	if (unlikely(previous) &&
-	    stripe_ahead_of_reshape(mddev, conf, sh)) {
+	    stripe_across_reshape(mddev, conf, sh)) {
 		/*
 		 * Expansion moved on while waiting for a stripe.
 		 * Expansion could still move past after this