From patchwork Sat Apr 13 13:38:13 2013
X-Patchwork-Submitter: Namjae Jeon
X-Patchwork-Id: 2440601
From: Namjae Jeon
To: dwmw2@infradead.org, axboe@kernel.dk, shli@kernel.org,
	Paul.Clements@steeleye.com, npiggin@kernel.dk, neilb@suse.de,
	cjb@laptop.org, adrian.hunter@intel.com
Cc: linux-mtd@lists.infradead.org, nbd-general@lists.sourceforge.net,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-kernel@vger.kernel.org, Namjae Jeon, Vivek Trivedi
Subject: [PATCH 1/8] block: fix max discard sectors limit
Date: Sat, 13 Apr 2013 22:38:13 +0900
Message-Id: <1365860293-21227-1-git-send-email-linkinjeon@gmail.com>
X-Mailer: git-send-email 1.7.9.5

From: Namjae Jeon

https://lkml.org/lkml/2013/4/1/292

As per the discussion above, it has been observed that a few drivers set
q->limits.max_discard_sectors to more than (UINT_MAX >> 9). If multiple
discard requests get merged and the merged request's size exceeds 4GB,
the merged request's __data_len field (an unsigned int holding the
request size in bytes) may overflow. This patch fixes that issue.
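For illustration only (not part of the patch): a minimal userspace sketch
of the arithmetic, where the unsigned int stands in for __data_len and the
<< 9 shift is the block layer's 512-byte sector-to-byte conversion.

#include <stdint.h>
#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* Smallest sector count whose byte size no longer fits in 32 bits. */
	uint64_t sectors = ((uint64_t)UINT_MAX >> 9) + 1;

	/* __data_len in struct request is an unsigned int counting bytes;
	 * converting sectors to bytes wraps once the product exceeds UINT_MAX. */
	unsigned int data_len = (unsigned int)(sectors << 9);

	printf("sectors=%llu bytes=%llu wrapped __data_len=%u\n",
	       (unsigned long long)sectors,
	       (unsigned long long)(sectors << 9), data_len);
	return 0;
}

With UINT_MAX >> 9 = 8388607 sectors, one sector more already wraps the
byte count to 0, which is why the limit has to be capped at UINT_MAX >> 9.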
Also add a BLK_DEF_MAX_DISCARD_SECTORS macro to use in place of the
open-coded UINT_MAX >> 9.

Signed-off-by: Namjae Jeon
Signed-off-by: Vivek Trivedi
---
 block/blk-settings.c   | 3 ++-
 include/linux/blkdev.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index c50ecf0..994d91c 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -283,7 +283,8 @@ EXPORT_SYMBOL(blk_queue_max_hw_sectors);
 void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors)
 {
-	q->limits.max_discard_sectors = max_discard_sectors;
+	q->limits.max_discard_sectors = min_t(unsigned int, max_discard_sectors,
+					      BLK_DEF_MAX_DISCARD_SECTORS);
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 07aa5f6..efff505 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1105,6 +1105,7 @@ enum blk_default_limits {
 	BLK_DEF_MAX_SECTORS	= 1024,
 	BLK_MAX_SEGMENT_SIZE	= 65536,
 	BLK_SEG_BOUNDARY_MASK	= 0xFFFFFFFFUL,
+	BLK_DEF_MAX_DISCARD_SECTORS = UINT_MAX >> 9,
 };

 #define blkdev_entry_to_request(entry)	list_entry((entry), struct request, queuelist)
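For illustration only (not part of the patch): a minimal userspace sketch
of the clamping behaviour introduced above; clamp_discard() is a
hypothetical stand-in for the min_t()-based assignment in the patched
blk_queue_max_discard_sectors().

#include <stdio.h>
#include <limits.h>

#define BLK_DEF_MAX_DISCARD_SECTORS (UINT_MAX >> 9)

/* Stand-in for min_t(unsigned int, max_discard_sectors,
 * BLK_DEF_MAX_DISCARD_SECTORS) in the patched function. */
static unsigned int clamp_discard(unsigned int max_discard_sectors)
{
	return max_discard_sectors < BLK_DEF_MAX_DISCARD_SECTORS ?
	       max_discard_sectors : BLK_DEF_MAX_DISCARD_SECTORS;
}

int main(void)
{
	/* An oversized limit is capped; a sane one passes through unchanged. */
	printf("%u -> %u\n", UINT_MAX, clamp_discard(UINT_MAX));
	printf("%u -> %u\n", 2048u, clamp_discard(2048u));
	return 0;
}

With this cap in place, even fully merged discard requests stay at or
below (UINT_MAX >> 9) << 9 bytes, so __data_len can no longer overflow.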