From patchwork Fri Apr 19 16:42:20 2013
X-Patchwork-Submitter: Namjae Jeon
X-Patchwork-Id: 2466361
From: Namjae Jeon
To: dwmw2@infradead.org, axboe@kernel.dk, shli@kernel.org, Paul.Clements@steeleye.com, npiggin@kernel.dk, neilb@suse.de, cjb@laptop.org, adrian.hunter@intel.com, James.Bottomley@HansenPartnership.com, JBottomley@parallels.com
Cc: linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org, nbd-general@lists.sourceforge.net, linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org, jcmvbkbc@gmail.com, Namjae Jeon, Vivek Trivedi
Subject: [PATCH v2 8/9] dm thin: use generic helper to set max_discard_sectors
Date: Sat, 20 Apr 2013 01:42:20 +0900
Message-Id: <1366389740-19587-1-git-send-email-linkinjeon@gmail.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-mmc@vger.kernel.org

From: Namjae Jeon

It is better to use the blk_queue_max_discard_sectors() helper function to
set max_discard_sectors, as it checks the upper limit of UINT_MAX >> 9.
A similar issue was reported for mmc in the link below:

https://lkml.org/lkml/2013/4/1/292

If multiple discard requests get merged and the merged discard request's
size exceeds 4GB, the merged request's __data_len field may overflow.
This patch fixes that issue.
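For illustration only, the snippet below sketches the kind of cap the helper
is expected to apply (hypothetical function name, not the exact in-tree code):
limiting max_discard_sectors to UINT_MAX >> 9 keeps a fully merged discard,
counted in 512-byte sectors, within the 32-bit __data_len byte counter.

#include <linux/blkdev.h>

/*
 * Illustrative sketch only, not the exact block-layer implementation:
 * routing the limit through a helper lets it clamp the value at
 * UINT_MAX >> 9 sectors, so the merged request's byte length
 * (sectors * 512) cannot overflow the 32-bit __data_len field.
 */
static inline void sketch_set_max_discard_sectors(struct request_queue *q,
						  unsigned int max_discard_sectors)
{
	if (max_discard_sectors > UINT_MAX >> 9)
		max_discard_sectors = UINT_MAX >> 9;

	q->limits.max_discard_sectors = max_discard_sectors;
}

Open-coding the assignment, as dm-thin did before this patch, bypasses any
such check, which is why the series converts callers to the generic helper.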
Signed-off-by: Namjae Jeon
Signed-off-by: Vivek Trivedi
---
 drivers/md/dm-thin.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 905b75f..237295a 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2513,7 +2513,8 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	struct pool *pool = pt->pool;
 	struct queue_limits *data_limits;
 
-	limits->max_discard_sectors = pool->sectors_per_block;
+	blk_queue_max_discard_sectors(bdev_get_queue(pt->data_dev->bdev),
+				      pool->sectors_per_block);
 
 	/*
 	 * discard_granularity is just a hint, and not enforced.