From patchwork Tue Jun 25 11:05:41 2024
From: Christoph Hellwig
To: Jens Axboe
Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
    "Martin K. Petersen", Alim Akhtar, Avri Altman, Bart Van Assche,
    linux-block@vger.kernel.org, linux-ide@vger.kernel.org,
    linux-raid@vger.kernel.org, linux-scsi@vger.kernel.org,
    kernel test robot
Subject: [PATCH 1/7] md: set md-specific flags for all queue limits
Date: Tue, 25 Jun 2024 13:05:41 +0200
Message-ID: <20240625110603.50885-2-hch@lst.de>
In-Reply-To: <20240625110603.50885-1-hch@lst.de>
References: <20240625110603.50885-1-hch@lst.de>

The md driver wants to enforce a number of flags for all devices, even
when not inheriting them from the underlying devices.  To make sure
these flags survive the queue_limits_set calls that md uses to update
the queue limits without deriving them from the previous limits, add a
new md_init_stacking_limits helper that calls blk_set_stacking_limits
and sets these flags.

Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
Reported-by: kernel test robot
Signed-off-by: Christoph Hellwig
---
 drivers/md/md.c     | 13 ++++++++-----
 drivers/md/md.h     |  1 +
 drivers/md/raid0.c  |  2 +-
 drivers/md/raid1.c  |  2 +-
 drivers/md/raid10.c |  2 +-
 drivers/md/raid5.c  |  2 +-
 6 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 69ea54aedd99a1..8368438e58e989 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5853,6 +5853,13 @@ static void mddev_delayed_delete(struct work_struct *ws)
 	kobject_put(&mddev->kobj);
 }
 
+void md_init_stacking_limits(struct queue_limits *lim)
+{
+	blk_set_stacking_limits(lim);
+	lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+			BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+}
+
 struct mddev *md_alloc(dev_t dev, char *name)
 {
 	/*
@@ -5871,10 +5878,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int shift;
 	int unit;
 	int error;
-	struct queue_limits lim = {
-		.features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-			    BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
-	};
 
 	/*
 	 * Wait for any previous instance of this device to be completely
@@ -5914,7 +5917,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 		 */
 		mddev->hold_active = UNTIL_STOP;
 
-	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
+	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		error = PTR_ERR(disk);
 		goto out_free_mddev;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index c4d7ebf9587d07..28cb4b0b6c1740 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -893,6 +893,7 @@ extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
 
 extern int mddev_init(struct mddev *mddev);
 extern void mddev_destroy(struct mddev *mddev);
+void md_init_stacking_limits(struct queue_limits *lim);
 struct mddev *md_alloc(dev_t dev, char *name);
 void mddev_put(struct mddev *mddev);
 extern int md_run(struct mddev *mddev);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 62634e2a33bd0f..32d58752477847 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -379,7 +379,7 @@ static int raid0_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_hw_sectors = mddev->chunk_sectors;
 	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
 	lim.io_min = mddev->chunk_sectors << 9;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 1a0eba65b8a92b..04a0c2ca173245 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3194,7 +3194,7 @@ static int raid1_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
 	if (err) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 3334aa803c8380..2a9c4ee982e023 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3974,7 +3974,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0192a6323f09ba..10219205160bbf 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7708,7 +7708,7 @@ static int raid5_set_limits(struct mddev *mddev)
 	 */
 	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
 	lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
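
[Editor's illustration, not part of the patch.]  For readers following
along outside the tree, the standalone userspace sketch below models the
failure mode the patch fixes.  The struct, the flag values and the helper
bodies are all simplified stand-ins for the real struct queue_limits,
blk_set_stacking_limits() and the new md_init_stacking_limits(); only the
shape of the fix is meant to match the patch: rebuilding the limits from
scratch wipes the md feature flags unless every rebuild goes through one
helper that reapplies them.

/*
 * Standalone sketch (not kernel code): models why feature flags that
 * were only passed once, at allocation time, get lost when the limits
 * are later rebuilt from scratch, and how a single init helper fixes
 * that.  All types, flag values and helper bodies are simplified
 * stand-ins, not the real block layer definitions.
 */
#include <stdio.h>
#include <string.h>

#define BLK_FEAT_WRITE_CACHE	(1u << 0)	/* stand-in flag values */
#define BLK_FEAT_FUA		(1u << 1)
#define BLK_FEAT_IO_STAT	(1u << 2)
#define BLK_FEAT_NOWAIT		(1u << 3)

struct queue_limits {
	unsigned int features;
	unsigned int max_hw_sectors;
};

/* models blk_set_stacking_limits(): rebuilds the limits from scratch */
static void blk_set_stacking_limits(struct queue_limits *lim)
{
	memset(lim, 0, sizeof(*lim));
	lim->max_hw_sectors = ~0u;	/* "unlimited" until stacked down */
}

/* models the new helper: every rebuild reapplies the md feature flags */
static void md_init_stacking_limits(struct queue_limits *lim)
{
	blk_set_stacking_limits(lim);
	lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
			BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
}

int main(void)
{
	struct queue_limits lim;

	/* old scheme: a raid personality rebuilding its limits from the
	 * plain stacking defaults silently drops the flags that had only
	 * been passed to the initial allocation */
	blk_set_stacking_limits(&lim);
	printf("plain stacking limits: features=0x%x\n", lim.features);

	/* new scheme: the flags survive every rebuild */
	md_init_stacking_limits(&lim);
	printf("md stacking limits:    features=0x%x\n", lim.features);
	return 0;
}

The design point the sketch tries to capture is the same one the commit
message makes: once the per-personality *_set_limits() functions rebuild
the limits instead of deriving them from the previous ones, any flag
that is not reapplied on every rebuild is effectively lost, so the flags
belong in the shared init helper rather than in the md_alloc() call.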