From patchwork Sat Apr 13 13:39:18 2013
X-Patchwork-Submitter: Namjae Jeon
X-Patchwork-Id: 2440671
From: Namjae Jeon
To: dwmw2@infradead.org, axboe@kernel.dk, shli@kernel.org, Paul.Clements@steeleye.com, npiggin@kernel.dk, neilb@suse.de, cjb@laptop.org, adrian.hunter@intel.com
Cc: linux-mtd@lists.infradead.org, nbd-general@lists.sourceforge.net, linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org, Namjae Jeon, Namjae Jeon, Vivek Trivedi
Subject: [PATCH 5/8] nbd: use generic helper to set max_discard_sectors
Date: Sat, 13 Apr 2013 22:39:18 +0900
Message-Id: <1365860358-21363-1-git-send-email-linkinjeon@gmail.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-mmc@vger.kernel.org

From: Namjae Jeon

It is better to use the blk_queue_max_discard_sectors() helper to set
max_discard_sectors, since it checks the upper limit of UINT_MAX >> 9
sectors.

A similar issue was reported for mmc here:
https://lkml.org/lkml/2013/4/1/292

If multiple discard requests get merged and the merged request's size
exceeds 4GB, the merged request's __data_len field may overflow.
This patch fixes that issue.
Signed-off-by: Namjae Jeon
Signed-off-by: Vivek Trivedi
---
 drivers/block/nbd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 7fecc78..0a081bc 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -854,7 +854,7 @@ static int __init nbd_init(void)
 		 */
 		queue_flag_set_unlocked(QUEUE_FLAG_NONROT, disk->queue);
 		disk->queue->limits.discard_granularity = 512;
-		disk->queue->limits.max_discard_sectors = UINT_MAX;
+		blk_queue_max_discard_sectors(disk->queue, UINT_MAX >> 9);
 		disk->queue->limits.discard_zeroes_data = 0;
 	}
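
For readers checking the arithmetic behind the UINT_MAX >> 9 cap: the
request byte length (__data_len) is a 32-bit field, so the largest
discard the block layer can describe is UINT_MAX bytes, i.e.
UINT_MAX >> 9 512-byte sectors. The userspace sketch below is purely
illustrative (not kernel code, variable names are made up); it only
mirrors the sector-to-byte conversion to show where a limit of
UINT_MAX sectors would wrap a 32-bit byte count once merging pushes a
discard past 4GB, while UINT_MAX >> 9 sectors does not:

/* Illustrative userspace sketch of the overflow this patch avoids.
 * Not kernel code; it only mirrors the sector-to-byte arithmetic. */
#include <inttypes.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9 /* 512-byte sectors, as used by the block layer */

int main(void)
{
	/* Old nbd limit: UINT_MAX sectors.  A merged discard that large
	 * does not fit in a 32-bit byte counter such as __data_len. */
	uint32_t old_limit_sectors = UINT_MAX;
	uint64_t old_limit_bytes   = (uint64_t)old_limit_sectors << SECTOR_SHIFT;
	uint32_t wrapped           = (uint32_t)old_limit_bytes; /* after overflow */

	/* New limit: UINT_MAX >> 9 sectors is the largest discard whose
	 * byte length still fits in 32 bits. */
	uint32_t new_limit_sectors = UINT_MAX >> SECTOR_SHIFT;
	uint64_t new_limit_bytes   = (uint64_t)new_limit_sectors << SECTOR_SHIFT;

	printf("old limit: %" PRIu32 " sectors = %" PRIu64 " bytes (wraps to %" PRIu32 ")\n",
	       old_limit_sectors, old_limit_bytes, wrapped);
	printf("new limit: %" PRIu32 " sectors = %" PRIu64 " bytes (<= UINT_MAX: %s)\n",
	       new_limit_sectors, new_limit_bytes,
	       new_limit_bytes <= UINT_MAX ? "yes" : "no");
	return 0;
}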