From patchwork Fri May 27 02:42:49 2011
X-Patchwork-Submitter: "Martin K. Petersen"
X-Patchwork-Id: 822592
From: "Martin K. Petersen"
To: jaxboe@fusionio.com
Date: Thu, 26 May 2011 22:42:49 -0400
Message-Id: <1306464169-4291-4-git-send-email-martin.petersen@oracle.com>
In-Reply-To: <1306464169-4291-1-git-send-email-martin.petersen@oracle.com>
References: <4DDEA689.2090004@fusionio.com>
 <1306464169-4291-1-git-send-email-martin.petersen@oracle.com>
Cc: msb@chromium.org, dm-devel@redhat.com, linux-kernel@vger.kernel.org,
 snitzer@redhat.com, "Martin K. Petersen"
Subject: [dm-devel] [PATCH 3/3] block: Move discard and secure discard flags to queue limits

Whether a device supports discard is currently stored in two places:
max_discard_sectors in the queue limits and the discard request_queue
flag. Deprecate the queue flag and always use the topology. Also move
the secure discard flag to the queue limits so it can be stacked as
well.

Signed-off-by: Martin K. Petersen
---
 block/blk-settings.c      |    3 +++
 drivers/block/brd.c       |    1 -
 drivers/md/dm-table.c     |    5 -----
 drivers/mmc/card/queue.c  |    4 +---
 drivers/mtd/mtd_blkdevs.c |    4 +---
 drivers/scsi/sd.c         |    3 +--
 include/linux/blkdev.h    |   21 +++++++++++++--------
 7 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index f95760d..feb3e40 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -118,6 +118,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->discard_alignment = 0;
 	lim->discard_misaligned = 0;
 	lim->discard_zeroes_data = 0;
+	lim->discard_secure = 0;
 	lim->logical_block_size = lim->physical_block_size = lim->io_min = 512;
 	lim->bounce_pfn = (unsigned long)(BLK_BOUNCE_ANY >> PAGE_SHIFT);
 	lim->alignment_offset = 0;
@@ -144,6 +145,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_hw_sectors = INT_MAX;
 	lim->max_sectors = BLK_DEF_MAX_SECTORS;
 	lim->discard_zeroes_data = 1;
+	lim->discard_secure = 1;
 	lim->non_rotational = 1;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -570,6 +572,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 
 	t->cluster &= b->cluster;
 	t->discard_zeroes_data &= b->discard_zeroes_data;
+	t->discard_secure &= b->discard_secure;
 	t->non_rotational &= b->non_rotational;
 
 	/* Physical block size a multiple of the logical block size? */
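[Editor's note: the new "t->discard_secure &= b->discard_secure;" line is
what makes the capability stackable. A stacking driver starts from
blk_set_stacking_limits() (everything enabled) and ANDs in each component
device, so the top-level device advertises secure discard only if every
underlying device does. Below is a minimal user-space sketch of that rule
with hypothetical stand-in types, not a transcription of the kernel's
struct queue_limits or blk_stack_limits():]

#include <stdio.h>

/* Hypothetical stand-in for struct queue_limits, reduced to the two
 * boolean capabilities that blk_stack_limits() combines with "&=". */
struct limits {
	unsigned char discard_zeroes_data;
	unsigned char discard_secure;
};

/* Same rule as the t->x &= b->x lines above: the stacked (top) device
 * advertises a capability only if every bottom device advertises it. */
static void stack(struct limits *t, const struct limits *b)
{
	t->discard_zeroes_data &= b->discard_zeroes_data;
	t->discard_secure      &= b->discard_secure;
}

int main(void)
{
	/* Like blk_set_stacking_limits(): start with everything enabled. */
	struct limits top = { 1, 1 };
	struct limits a   = { 1, 1 };	/* e.g. a device with secure TRIM  */
	struct limits b   = { 1, 0 };	/* e.g. one without secure erase   */

	stack(&top, &a);
	stack(&top, &b);

	/* Prints "zeroes=1 secure=0": one member lacks secure discard,
	 * so the stacked device must not advertise it. */
	printf("zeroes=%u secure=%u\n",
	       top.discard_zeroes_data, top.discard_secure);
	return 0;
}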
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index b7f51e4..3ade4e1 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -489,7 +489,6 @@ static struct brd_device *brd_alloc(int i)
 	brd->brd_queue->limits.discard_granularity = PAGE_SIZE;
 	brd->brd_queue->limits.max_discard_sectors = UINT_MAX;
 	brd->brd_queue->limits.discard_zeroes_data = 1;
-	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, brd->brd_queue);
 
 	disk = brd->brd_disk = alloc_disk(1 << part_shift);
 	if (!disk)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 35792bf..b5c6a1b 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1185,11 +1185,6 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	 */
 	q->limits = *limits;
 
-	if (!dm_table_supports_discards(t))
-		queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, q);
-	else
-		queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
-
 	dm_table_set_integrity(t);
 
 	/*
diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 9adce86..b5c11a0 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -129,7 +129,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
 	blk_queue_non_rotational(mq->queue);
 	if (mmc_can_erase(card)) {
-		queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mq->queue);
 		mq->queue->limits.max_discard_sectors = UINT_MAX;
 		if (card->erased_byte == 0)
 			mq->queue->limits.discard_zeroes_data = 1;
@@ -140,8 +139,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock
 			card->erase_size << 9;
 	}
 	if (mmc_can_secure_erase_trim(card))
-		queue_flag_set_unlocked(QUEUE_FLAG_SECDISCARD,
-					mq->queue);
+		mq->queue->limits.discard_secure = 1;
 	}
 
 #ifdef CONFIG_MMC_BLOCK_BOUNCE
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index a534e1f..5315163 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -408,10 +408,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	new->rq->queuedata = new;
 	blk_queue_logical_block_size(new->rq, tr->blksize);
 
-	if (tr->discard) {
-		queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, new->rq);
+	if (tr->discard)
 		new->rq->limits.max_discard_sectors = UINT_MAX;
-	}
 
 	gd->queue = new->rq;
 
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 7a5cf28..c958ac5 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -499,7 +499,7 @@ static void sd_config_discard(struct scsi_disk *sdkp, unsigned int mode)
 
 	case SD_LBP_DISABLE:
 		q->limits.max_discard_sectors = 0;
-		queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, q);
+		max_blocks = 0;
 		return;
 
 	case SD_LBP_UNMAP:
@@ -521,7 +521,6 @@ static void sd_config_discard(struct scsi_disk *sdkp, unsigned int mode)
 	}
 
 	q->limits.max_discard_sectors = max_blocks * (logical_block_size >> 9);
-	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
 
 	sdkp->provisioning_mode = mode;
 }
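[Editor's note: taken together, the driver hunks above (brd, mmc, mtd, sd)
converge on one pattern: a driver no longer toggles QUEUE_FLAG_DISCARD or
QUEUE_FLAG_SECDISCARD, it simply fills in the queue limits, and a zero
max_discard_sectors now means "no discard". A hedged sketch of that
pattern follows; my_enable_discard, my_disable_discard and the
granularity value are illustrative, not taken from any driver above:]

#include <linux/blkdev.h>

/* Illustrative driver fragment, not part of this patch: after the
 * series, advertising discard is purely a matter of queue limits. */
static void my_enable_discard(struct request_queue *q, bool secure)
{
	q->limits.discard_granularity = 512;	  /* device-specific    */
	q->limits.max_discard_sectors = UINT_MAX; /* non-zero => supported */
	if (secure)
		q->limits.discard_secure = 1;	  /* now stackable, too */
}

/* Disabling is the mirror image: zeroing the limit is all it takes,
 * because blk_queue_discard() is derived from max_discard_sectors. */
static void my_disable_discard(struct request_queue *q)
{
	q->limits.max_discard_sectors = 0;
	q->limits.discard_secure = 0;
}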
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 52a3f4c..42a374f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -258,7 +258,7 @@ struct queue_limits {
 	unsigned char		discard_misaligned;
 	unsigned char		cluster;
 	unsigned char		discard_zeroes_data;
-
+	unsigned char		discard_secure;
 	unsigned char		non_rotational;
 };
 
@@ -399,10 +399,8 @@ struct request_queue
 #define QUEUE_FLAG_FAIL_IO	10	/* fake timeout */
 #define QUEUE_FLAG_STACKABLE	11	/* supports request stacking */
 #define QUEUE_FLAG_IO_STAT	12	/* do IO stats */
-#define QUEUE_FLAG_DISCARD	13	/* supports DISCARD */
-#define QUEUE_FLAG_NOXMERGES	14	/* No extended merges */
-#define QUEUE_FLAG_ADD_RANDOM	15	/* Contributes to random pool */
-#define QUEUE_FLAG_SECDISCARD	16	/* supports SECDISCARD */
+#define QUEUE_FLAG_NOXMERGES	13	/* No extended merges */
+#define QUEUE_FLAG_ADD_RANDOM	14	/* Contributes to random pool */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_STACKABLE)	|	\
@@ -483,9 +481,6 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_stackable(q)	\
 	test_bit(QUEUE_FLAG_STACKABLE, &(q)->queue_flags)
-#define blk_queue_discard(q)	test_bit(QUEUE_FLAG_DISCARD, &(q)->queue_flags)
-#define blk_queue_secdiscard(q)	(blk_queue_discard(q) && \
-	test_bit(QUEUE_FLAG_SECDISCARD, &(q)->queue_flags))
 
 #define blk_noretry_request(rq) \
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
@@ -1033,6 +1028,16 @@ static inline unsigned int blk_queue_nonrot(struct request_queue *q)
 	return q->limits.non_rotational;
 }
 
+static inline unsigned int blk_queue_discard(struct request_queue *q)
+{
+	return !!q->limits.max_discard_sectors;
+}
+
+static inline unsigned int blk_queue_secdiscard(struct request_queue *q)
+{
+	return q->limits.discard_secure;
+}
+
 static inline int queue_alignment_offset(struct request_queue *q)
 {
 	if (q->limits.misaligned)
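[Editor's note: with the flag macros replaced by the inline helpers above,
a consumer (a filesystem or DM target, say) queries the topology instead
of queue_flags. A hypothetical caller fragment, not from this patch,
assuming an already-open block device:]

#include <linux/blkdev.h>

/* Both helpers now read queue_limits, so the answer automatically
 * matches what blk_stack_limits() computed for a stacked device. */
static bool my_can_secure_discard(struct block_device *bdev)
{
	struct request_queue *q = bdev_get_queue(bdev);

	/* blk_queue_discard(): max_discard_sectors != 0;
	 * blk_queue_secdiscard(): limits.discard_secure is set. */
	return blk_queue_discard(q) && blk_queue_secdiscard(q);
}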