From patchwork Fri Apr 19 16:40:33 2013
X-Patchwork-Submitter: Namjae Jeon
X-Patchwork-Id: 2466241
From: Namjae Jeon
To: dwmw2@infradead.org, axboe@kernel.dk, shli@kernel.org,
	Paul.Clements@steeleye.com, npiggin@kernel.dk, neilb@suse.de,
	cjb@laptop.org, adrian.hunter@intel.com,
	James.Bottomley@HansenPartnership.com, JBottomley@parallels.com
Cc: linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
	nbd-general@lists.sourceforge.net, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org,
	jcmvbkbc@gmail.com, Namjae Jeon, Vivek Trivedi
Subject: [PATCH v2 1/9] block: fix max discard sectors limit
Date: Sat, 20 Apr 2013 01:40:33 +0900
Message-Id: <1366389634-19348-1-git-send-email-linkinjeon@gmail.com>

From: Namjae Jeon

https://lkml.org/lkml/2013/4/1/292

As per the discussion above, it has been observed that a few drivers set
q->limits.max_discard_sectors to more than (UINT_MAX >> 9). If multiple
discard requests get merged and the merged request's size exceeds 4GB,
the merged request's __data_len field may overflow. This patch fixes
that issue. It also adds a BLK_DEF_MAX_DISCARD_SECTORS macro to be used
instead of an open-coded UINT_MAX >> 9.
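For illustration only (a standalone user-space sketch, not part of the
patch): once a discard request grows past (UINT_MAX >> 9) sectors, the
byte count no longer fits in a 32-bit field such as __data_len and
wraps around.

#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* one sector past the proposed cap */
	unsigned long long sectors = (unsigned long long)(UINT_MAX >> 9) + 1;
	unsigned long long bytes   = sectors << 9;     /* true size in bytes  */
	unsigned int data_len      = (unsigned int)bytes; /* 32-bit field wraps */

	printf("sectors=%llu bytes=%llu truncated=%u\n",
	       sectors, bytes, data_len);
	/* bytes is 4294967296 (2^32); the 32-bit value wraps to 0 */
	return 0;
}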
Reported-by: Max Filippov
Signed-off-by: Namjae Jeon
Signed-off-by: Vivek Trivedi
---
 block/blk-settings.c   | 3 ++-
 include/linux/blkdev.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index c50ecf0..34e6b61 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -283,7 +283,8 @@ EXPORT_SYMBOL(blk_queue_max_hw_sectors);
 void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors)
 {
-	q->limits.max_discard_sectors = max_discard_sectors;
+	q->limits.max_discard_sectors = min_t(unsigned int, max_discard_sectors,
+					      BLK_DEF_MAX_DISCARD_SECTORS);
 }
 EXPORT_SYMBOL(blk_queue_max_discard_sectors);

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 07aa5f6..efff505 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1105,6 +1105,7 @@ enum blk_default_limits {
 	BLK_DEF_MAX_SECTORS	= 1024,
 	BLK_MAX_SEGMENT_SIZE	= 65536,
 	BLK_SEG_BOUNDARY_MASK	= 0xFFFFFFFFUL,
+	BLK_DEF_MAX_DISCARD_SECTORS	= UINT_MAX >> 9,
 };

 #define blkdev_entry_to_request(entry)	list_entry((entry), struct request, queuelist)
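For reference, a user-space sketch of the new behaviour (mock
request_queue/queue_limits types and a simplified min_t, not kernel
code): a driver that used to pass UINT_MAX sectors now sees the value
capped at BLK_DEF_MAX_DISCARD_SECTORS.

#include <stdio.h>
#include <limits.h>

#define BLK_DEF_MAX_DISCARD_SECTORS (UINT_MAX >> 9)
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

struct queue_limits { unsigned int max_discard_sectors; };
struct request_queue { struct queue_limits limits; };

/* mirrors the clamped setter introduced by the patch */
static void blk_queue_max_discard_sectors(struct request_queue *q,
					  unsigned int max_discard_sectors)
{
	q->limits.max_discard_sectors = min_t(unsigned int, max_discard_sectors,
					      BLK_DEF_MAX_DISCARD_SECTORS);
}

int main(void)
{
	struct request_queue q = { { 0 } };

	/* a driver asking for UINT_MAX sectors is now capped */
	blk_queue_max_discard_sectors(&q, UINT_MAX);
	printf("max_discard_sectors = %u (cap = %u)\n",
	       q.limits.max_discard_sectors,
	       (unsigned int)BLK_DEF_MAX_DISCARD_SECTORS);
	return 0;
}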