From patchwork Thu Mar 10 00:54:08 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrei Warkentin
X-Patchwork-Id: 622911
From: Andrei Warkentin
To: linux-mmc@vger.kernel.org
Cc: Andrei Warkentin
Subject: [RFC 4/5] MMC: Adjust unaligned write accesses.
Date: Wed, 9 Mar 2011 18:54:08 -0600
Message-Id: <1299718449-15172-5-git-send-email-andreiw@motorola.com>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To: <1299718449-15172-1-git-send-email-andreiw@motorola.com>
References: <1299718449-15172-1-git-send-email-andreiw@motorola.com>
List-ID:
X-Mailing-List: linux-mmc@vger.kernel.org

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 913f394..a8f18c7 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -63,6 +63,8 @@ struct mmc_blk_data {
 
 	unsigned int	usage;
 	unsigned int	read_only;
+	unsigned int	write_align_size;
+	unsigned int	write_align_limit;
 };
 
 static DEFINE_MUTEX(open_lock);
@@ -312,6 +314,43 @@ out:
 	return err ? 0 : 1;
 }
 
+/*
+ * If the request is not aligned, split it into an unaligned
+ * and an aligned portion. Here we can adjust
+ * the size of the MMC request and let the block layer request
+ * handling deal with generating another MMC request.
+ */
+
+static void mmc_adjust_write(struct mmc_card *card,
+			     struct mmc_request *mrq)
+{
+	unsigned int left_in_page;
+	unsigned int wa_size_blocks;
+	struct mmc_blk_data *md = mmc_get_drvdata(card);
+
+	if (!md->write_align_size)
+		return;
+
+	if (md->write_align_limit &&
+	    (md->write_align_limit / mrq->data->blksz)
+	    < mrq->data->blocks)
+		return;
+
+	wa_size_blocks = md->write_align_size / mrq->data->blksz;
+	left_in_page = wa_size_blocks -
+		(mrq->cmd->arg % wa_size_blocks);
+
+	/* Aligned access. */
+	if (left_in_page == wa_size_blocks)
+		return;
+
+	/* Not straddling page boundary. */
+	if (mrq->data->blocks <= left_in_page)
+		return;
+
+	mrq->data->blocks = left_in_page;
+}
+
 static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_blk_data *md = mq->data;
@@ -339,6 +378,10 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 		brq.stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
 		brq.data.blocks = blk_rq_sectors(req);
 
+		/* Check for unaligned accesses straddling pages. */
+		if (rq_data_dir(req) == WRITE)
+			mmc_adjust_write(card, &brq.mrq);
+
 		/*
 		 * The block layer doesn't support all sector count
 		 * restrictions, so we need to be prepared for too big
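
To make the boundary arithmetic concrete, below is a small standalone sketch of the same calculation that mmc_adjust_write() performs. It is not part of the patch, and the concrete numbers (an 8 KiB write_align_size, 512-byte blocks, a 64-block write starting at block 13) are invented purely for illustration:

#include <stdio.h>

/*
 * Userspace model of the split arithmetic in mmc_adjust_write().
 * All values below are hypothetical; they only show how a write
 * that straddles an alignment boundary is trimmed back to it.
 */
int main(void)
{
	unsigned int write_align_size = 8192;	/* assumed alignment unit, bytes */
	unsigned int blksz = 512;		/* MMC block size */
	unsigned int start = 13;		/* request start, in blocks */
	unsigned int blocks = 64;		/* request length, in blocks */

	unsigned int wa_size_blocks = write_align_size / blksz;	/* 16 */
	unsigned int left_in_page = wa_size_blocks - (start % wa_size_blocks);

	if (left_in_page == wa_size_blocks || blocks <= left_in_page) {
		/* Already aligned, or contained within one unit: no split. */
		printf("left unchanged: %u blocks\n", blocks);
		return 0;
	}

	/* Trim the first MMC request so it ends on the boundary. */
	printf("trim to %u blocks; remaining %u blocks start aligned at block %u\n",
	       left_in_page, blocks - left_in_page, start + left_in_page);
	return 0;
}

With these example numbers the first request is trimmed to 3 blocks, and the remaining 61 blocks begin at block 16, i.e. exactly on the boundary. In the patched mmc_blk_issue_rw_rq() the trimmed request then completes only part of the block layer request, so the partial completion in the issue loop leaves the remainder queued and the next pass issues it as an aligned access; that is what the comment means by letting the block layer request handling generate another MMC request.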