From patchwork Wed May 13 08:17:38 2015
X-Patchwork-Submitter: Baruch Siach
X-Patchwork-Id: 6394921
From: Baruch Siach
To: David Woodhouse, Brian Norris
Cc: Fabio Estevam, Baruch Siach, linux-mtd@lists.infradead.org,
 Sascha Hauer, Uwe Kleine-König, Shawn Guo,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 3/4] mtd: mxc_nand: fix truncate of unaligned oob copying
Date: Wed, 13 May 2015 11:17:38 +0300
X-Mailer: git-send-email 2.1.4

Copies to and from the OOB I/O area might not be 4-byte aligned; when
8-bit ECC is used, the buffer size is 26 bytes. Add memcpy16_{to,from}io()
and use them to avoid truncating the buffer. Prefer memcpy32_{to,from}io()
when the buffer is properly aligned, for better performance.
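
To make the truncation concrete, here is a small standalone sketch
(editor's illustration, not part of the patch): with the 26-byte spare
buffer mentioned above, a copy that moves size / 4 32-bit words covers
only 24 bytes and silently drops the last 2, while one that moves
size >> 1 16-bit halfwords covers all 26.

	#include <stdio.h>

	int main(void)
	{
		unsigned int size = 26;                   /* 8-bit ECC spare buffer size */
		unsigned int by_words = (size / 4) * 4;   /* bytes a 32-bit word copy moves */
		unsigned int by_halves = (size >> 1) * 2; /* bytes a 16-bit halfword copy moves */

		printf("32-bit copy: %u of %u bytes, %u truncated\n",
		       by_words, size, size - by_words);
		printf("16-bit copy: %u of %u bytes, %u truncated\n",
		       by_halves, size, size - by_halves);
		return 0;
	}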
Reviewed-by: Sascha Hauer
Acked-by: Uwe Kleine-König
Signed-off-by: Baruch Siach
---
v2:
  * Use memcpy16_{to,from}io (Uwe Kleine-König)
---
 drivers/mtd/nand/mxc_nand.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/mxc_nand.c b/drivers/mtd/nand/mxc_nand.c
index 51c1600d7eb9..010be8aa41d4 100644
--- a/drivers/mtd/nand/mxc_nand.c
+++ b/drivers/mtd/nand/mxc_nand.c
@@ -281,12 +281,44 @@ static void memcpy32_fromio(void *trg, const void __iomem *src, size_t size)
 		*t++ = __raw_readl(s++);
 }
 
+static void memcpy16_fromio(void *trg, const void __iomem *src, size_t size)
+{
+	int i;
+	u16 *t = trg;
+	const __iomem u16 *s = src;
+
+	/* We assume that src (IO) is always 32bit aligned */
+	if (PTR_ALIGN(trg, 4) == trg && IS_ALIGNED(size, 4)) {
+		memcpy32_fromio(trg, src, size);
+		return;
+	}
+
+	for (i = 0; i < (size >> 1); i++)
+		*t++ = __raw_readw(s++);
+}
+
 static inline void memcpy32_toio(void __iomem *trg, const void *src, int size)
 {
 	/* __iowrite32_copy use 32bit size values so divide by 4 */
 	__iowrite32_copy(trg, src, size / 4);
 }
 
+static void memcpy16_toio(void __iomem *trg, const void *src, int size)
+{
+	int i;
+	__iomem u16 *t = trg;
+	const u16 *s = src;
+
+	/* We assume that trg (IO) is always 32bit aligned */
+	if (PTR_ALIGN(src, 4) == src && IS_ALIGNED(size, 4)) {
+		memcpy32_toio(trg, src, size);
+		return;
+	}
+
+	for (i = 0; i < (size >> 1); i++)
+		__raw_writew(*s++, t++);
+}
+
 static int check_int_v3(struct mxc_nand_host *host)
 {
 	uint32_t tmp;
@@ -832,22 +864,22 @@ static void copy_spare(struct mtd_info *mtd, bool bfrom)
 
 	if (bfrom) {
 		for (i = 0; i < num_chunks - 1; i++)
-			memcpy32_fromio(d + i * oob_chunk_size,
+			memcpy16_fromio(d + i * oob_chunk_size,
 					s + i * sparebuf_size,
 					oob_chunk_size);
 
 		/* the last chunk */
-		memcpy32_fromio(d + i * oob_chunk_size,
+		memcpy16_fromio(d + i * oob_chunk_size,
 				s + i * sparebuf_size,
 				host->used_oobsize - i * oob_chunk_size);
 	} else {
 		for (i = 0; i < num_chunks - 1; i++)
-			memcpy32_toio(&s[i * sparebuf_size],
+			memcpy16_toio(&s[i * sparebuf_size],
 				      &d[i * oob_chunk_size],
 				      oob_chunk_size);
 
 		/* the last chunk */
-		memcpy32_toio(&s[oob_chunk_size * sparebuf_size],
+		memcpy16_toio(&s[oob_chunk_size * sparebuf_size],
 			      &d[i * oob_chunk_size],
 			      host->used_oobsize - i * oob_chunk_size);
 	}
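
For reference, a minimal userspace sketch of the dispatch logic the new
helpers use (editor's illustration, not kernel code: IS_ALIGNED() below
is a stand-in for the kernel macro, PTR_ALIGN(p, 4) == p is the
equivalent pointer check, and pick_path() is a hypothetical helper that
only names the branch taken):

	#include <stdint.h>
	#include <stdio.h>

	/* stand-in for the kernel's IS_ALIGNED() macro */
	#define IS_ALIGNED(x, a) (((uintptr_t)(x) & ((a) - 1)) == 0)

	/* hypothetical: report which copy routine memcpy16_fromio() would use */
	static const char *pick_path(const void *buf, size_t size)
	{
		if (IS_ALIGNED(buf, 4) && IS_ALIGNED(size, 4))
			return "32-bit fast path";
		return "16-bit halfword loop";
	}

	int main(void)
	{
		uint32_t buf[8];  /* 32 bytes, 4-byte aligned */
		const uint8_t *p = (const uint8_t *)buf;

		printf("aligned buffer, 32 bytes: %s\n", pick_path(buf, 32));
		printf("aligned buffer, 26 bytes: %s\n", pick_path(buf, 26));
		printf("buffer + 2,     32 bytes: %s\n", pick_path(p + 2, 32));
		return 0;
	}

Both odd sizes and misaligned CPU-side buffers fall back to the halfword
loop, so no trailing bytes are dropped; only fully aligned copies take
the faster 32-bit path.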