From patchwork Wed Mar 30 22:38:53 2011
X-Patchwork-Submitter: Chris Ball
X-Patchwork-Id: 678001
From: Chris Ball
To: Arnd Bergmann
Cc: Andrei Warkentin, linux-mmc@vger.kernel.org
Subject: Re: [comments] MMC: Reliable write support.
References: <1301001751-30785-1-git-send-email-andreiw@motorola.com>
	<201103290901.31680.arnd@arndb.de>
	<201103301405.21047.arnd@arndb.de>
Date: Wed, 30 Mar 2011 18:38:53 -0400
In-Reply-To: <201103301405.21047.arnd@arndb.de> (Arnd Bergmann's message of
	"Wed, 30 Mar 2011 14:05:20 +0200")
Message-ID:
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (gnu/linux)
X-Mailing-List: linux-mmc@vger.kernel.org

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 712fe96..91a6767 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -340,9 +340,9 @@ static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
 	struct mmc_blk_data *md = mq->data;
 
 	/*
-	No-op, only service this because we need REQ_FUA
-	for reliable writes.
-	*/
+	 * No-op, only service this because we need REQ_FUA for reliable
+	 * writes.
+	 */
 	spin_lock_irq(&md->lock);
 	__blk_end_request_all(req, 0);
 	spin_unlock_irq(&md->lock);
@@ -364,16 +364,14 @@ static inline int mmc_apply_rel_rw(struct mmc_blk_request *brq,
 	int err;
 	struct mmc_command set_count;
 
-	if (!(card->ext_csd.rel_param &
-	    EXT_CSD_WR_REL_PARAM_EN)) {
-
+	if (!(card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN)) {
 		/* Legacy mode imposes restrictions on transfers. */
 		if (!IS_ALIGNED(brq->cmd.arg, card->ext_csd.rel_sectors))
 			brq->data.blocks = 1;
 
 		if (brq->data.blocks > card->ext_csd.rel_sectors)
 			brq->data.blocks = card->ext_csd.rel_sectors;
-		else if (brq->data.blocks != card->ext_csd.rel_sectors)
+		else if (brq->data.blocks < card->ext_csd.rel_sectors)
 			brq->data.blocks = 1;
 	}
 
@@ -396,8 +394,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 	int ret = 1, disable_multi = 0;
 
 	/*
-	Reliable writes are used to implement Forced Unit Access and
-	REQ_META accesses, and it's supported only on MMCs.
+	 * Reliable writes are used to implement Forced Unit Access and
+	 * REQ_META accesses, and are supported only on MMCs.
 	 */
 	bool do_rel_wr = ((req->cmd_flags & REQ_FUA) ||
 			  (req->cmd_flags & REQ_META)) &&
@@ -464,10 +462,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 		brq.data.flags |= MMC_DATA_WRITE;
 	}
 
-	if (do_rel_wr) {
-		if (mmc_apply_rel_rw(&brq, card, req))
-			goto cmd_err;
-	}
+	if (do_rel_wr && mmc_apply_rel_rw(&brq, card, req))
+		goto cmd_err;
 
 	mmc_set_data_timeout(&brq.data, card);
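For reference, the legacy-mode clamping that the second hunk adjusts (including the `!=` -> `<` fix) can be sketched as a standalone function. This is an illustrative sketch, not the kernel code: `addr`, `blocks`, and `rel_sectors` are hypothetical stand-ins for brq->cmd.arg, brq->data.blocks, and card->ext_csd.rel_sectors, and `clamp_rel_write` is an invented name.

```c
/* Sketch of the legacy reliable-write clamping rule (illustrative,
 * not the kernel implementation).  Returns the transfer length a
 * legacy card would be given. */
unsigned int clamp_rel_write(unsigned int addr, unsigned int blocks,
			     unsigned int rel_sectors)
{
	/* Start sector not aligned to the reliable-write unit:
	 * fall back to a single-block reliable write. */
	if (addr % rel_sectors != 0)
		return 1;

	/* Transfer larger than one reliable-write unit: cap it. */
	if (blocks > rel_sectors)
		return rel_sectors;

	/* Transfer smaller than one unit: single block.  This is the
	 * `<` comparison from the patch; with the old `!=`, an
	 * exact-sized transfer could never occur because the `>`
	 * branch above already excluded larger ones, so `!=` here
	 * meant the same thing, but `<` states the intent. */
	if (blocks < rel_sectors)
		return 1;

	/* Exactly one reliable-write unit: pass through unchanged. */
	return blocks;
}
```

With a reliable-write unit of 8 sectors, an aligned 16-block request is capped to 8, an aligned 8-block request passes through, and unaligned or undersized requests collapse to a single block.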