From patchwork Tue Nov 30 08:31:58 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12646515
From: Xiangsheng Hou
Subject: [RFC,v4,1/5] mtd: nand: ecc: Move mediatek ECC driver
Date: Tue, 30 Nov 2021 16:31:58 +0800
Message-ID: <20211130083202.14228-2-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>
References: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>

Move the MediaTek on-host ECC driver so that it complies with the
generic ECC framework. The ECC engine can then be used by both the
MediaTek raw NAND and SPI NAND controller drivers.
Signed-off-by: Xiangsheng Hou
---
 drivers/mtd/nand/Kconfig                              | 9 +++++++++
 drivers/mtd/nand/Makefile                             | 1 +
 drivers/mtd/nand/{raw/mtk_ecc.c => ecc-mtk.c}         | 2 +-
 drivers/mtd/nand/raw/Kconfig                          | 1 +
 drivers/mtd/nand/raw/Makefile                         | 2 +-
 drivers/mtd/nand/raw/mtk_nand.c                       | 2 +-
 .../raw/mtk_ecc.h => include/linux/mtd/nand-ecc-mtk.h | 0
 7 files changed, 14 insertions(+), 3 deletions(-)
 rename drivers/mtd/nand/{raw/mtk_ecc.c => ecc-mtk.c} (99%)
 rename drivers/mtd/nand/raw/mtk_ecc.h => include/linux/mtd/nand-ecc-mtk.h (100%)

diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig
index 8431292ff49d..a96fddff5ba5 100644
--- a/drivers/mtd/nand/Kconfig
+++ b/drivers/mtd/nand/Kconfig
@@ -52,6 +52,15 @@ config MTD_NAND_ECC_MXIC
 	help
 	  This enables support for the hardware ECC engine from Macronix.

+config MTD_NAND_ECC_MTK
+	bool "Mediatek hardware ECC engine"
+	select MTD_NAND_ECC
+	help
+	  This enables support for the Mediatek hardware ECC engine, which
+	  is used for error correction. The correction strength depends on
+	  the SoC. The ECC engine can be used with the Mediatek raw NAND
+	  and SPI NAND controller drivers.
+
 endmenu

 endmenu

diff --git a/drivers/mtd/nand/Makefile b/drivers/mtd/nand/Makefile
index a4e6b7ae0614..686f0d635ddf 100644
--- a/drivers/mtd/nand/Makefile
+++ b/drivers/mtd/nand/Makefile
@@ -11,3 +11,4 @@ nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o
 nandcore-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += ecc-sw-hamming.o
 nandcore-$(CONFIG_MTD_NAND_ECC_SW_BCH) += ecc-sw-bch.o
 nandcore-$(CONFIG_MTD_NAND_ECC_MXIC) += ecc-mxic.o
+nandcore-$(CONFIG_MTD_NAND_ECC_MTK) += ecc-mtk.o

diff --git a/drivers/mtd/nand/raw/mtk_ecc.c b/drivers/mtd/nand/ecc-mtk.c
similarity index 99%
rename from drivers/mtd/nand/raw/mtk_ecc.c
rename to drivers/mtd/nand/ecc-mtk.c
index 1b47964cb6da..31d7c77d5c59 100644
--- a/drivers/mtd/nand/raw/mtk_ecc.c
+++ b/drivers/mtd/nand/ecc-mtk.c
@@ -16,7 +16,7 @@
 #include
 #include

-#include "mtk_ecc.h"
+#include

 #define ECC_IDLE_MASK		BIT(0)
 #define ECC_IRQ_EN		BIT(0)

diff --git a/drivers/mtd/nand/raw/Kconfig b/drivers/mtd/nand/raw/Kconfig
index 67b7cb67c030..c90bc166034b 100644
--- a/drivers/mtd/nand/raw/Kconfig
+++ b/drivers/mtd/nand/raw/Kconfig
@@ -362,6 +362,7 @@ config MTD_NAND_MTK
 	tristate "MTK NAND controller"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
 	depends on HAS_IOMEM
+	select MTD_NAND_ECC_MTK
 	help
 	  Enables support for NAND controller on MTK SoCs.
 	  This controller is found on mt27xx, mt81xx, mt65xx SoCs.
diff --git a/drivers/mtd/nand/raw/Makefile b/drivers/mtd/nand/raw/Makefile
index 2f97958c3a33..49d3946c166b 100644
--- a/drivers/mtd/nand/raw/Makefile
+++ b/drivers/mtd/nand/raw/Makefile
@@ -48,7 +48,7 @@ obj-$(CONFIG_MTD_NAND_SUNXI) += sunxi_nand.o
 obj-$(CONFIG_MTD_NAND_HISI504) += hisi504_nand.o
 obj-$(CONFIG_MTD_NAND_BRCMNAND) += brcmnand/
 obj-$(CONFIG_MTD_NAND_QCOM) += qcom_nandc.o
-obj-$(CONFIG_MTD_NAND_MTK) += mtk_ecc.o mtk_nand.o
+obj-$(CONFIG_MTD_NAND_MTK) += mtk_nand.o
 obj-$(CONFIG_MTD_NAND_MXIC) += mxic_nand.o
 obj-$(CONFIG_MTD_NAND_TEGRA) += tegra_nand.o
 obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o

diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
index 66f04c693c87..d540454cbbdf 100644
--- a/drivers/mtd/nand/raw/mtk_nand.c
+++ b/drivers/mtd/nand/raw/mtk_nand.c
@@ -17,7 +17,7 @@
 #include
 #include
 #include
-#include "mtk_ecc.h"
+#include

 /* NAND controller register definition */
 #define NFI_CNFG		(0x00)

diff --git a/drivers/mtd/nand/raw/mtk_ecc.h b/include/linux/mtd/nand-ecc-mtk.h
similarity index 100%
rename from drivers/mtd/nand/raw/mtk_ecc.h
rename to include/linux/mtd/nand-ecc-mtk.h

From patchwork Tue Nov 30 08:31:59 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12646517
From: Xiangsheng Hou
Subject: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
Date: Tue, 30 Nov 2021 16:31:59 +0800
Message-ID: <20211130083202.14228-3-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>
References: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>

Convert the MediaTek HW ECC engine to the generic ECC infrastructure as
a pipelined engine.

Signed-off-by: Xiangsheng Hou
---
 drivers/mtd/nand/ecc-mtk.c       | 614 +++++++++++++++++++++++++++++++
 include/linux/mtd/nand-ecc-mtk.h |  68 ++++
 2 files changed, 682 insertions(+)

diff --git a/drivers/mtd/nand/ecc-mtk.c b/drivers/mtd/nand/ecc-mtk.c
index 31d7c77d5c59..c44499b3d0a5 100644
--- a/drivers/mtd/nand/ecc-mtk.c
+++ b/drivers/mtd/nand/ecc-mtk.c
@@ -16,6 +16,7 @@
 #include
 #include
+#include
 #include

 #define ECC_IDLE_MASK		BIT(0)
@@ -41,11 +42,17 @@
 #define ECC_IDLE_REG(op)	((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
 #define ECC_CTL_REG(op)		((op) == ECC_ENCODE ?
ECC_ENCCON : ECC_DECCON) +#define OOB_FREE_MAX_SIZE 8 +#define OOB_FREE_MIN_SIZE 1 + struct mtk_ecc_caps { u32 err_mask; const u8 *ecc_strength; const u32 *ecc_regs; u8 num_ecc_strength; + const u8 *spare_size; + u8 num_spare_size; + u32 max_section_size; u8 ecc_mode_shift; u32 parity_bits; int pg_irq_sel; @@ -79,6 +86,12 @@ static const u8 ecc_strength_mt7622[] = { 4, 6, 8, 10, 12, 14, 16 }; +/* spare size for each section that each IP supports */ +static const u8 spare_size_mt7622[] = { + 16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, + 52, 62, 61, 63, 64, 67, 74 +}; + enum mtk_ecc_regs { ECC_ENCPAR00, ECC_ENCIRQ_EN, @@ -447,6 +460,604 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc) } EXPORT_SYMBOL(mtk_ecc_get_parity_bits); +static inline int mtk_ecc_data_off(struct nand_device *nand, int i) +{ + int eccsize = nand->ecc.ctx.conf.step_size; + + return i * eccsize; +} + +static inline int mtk_ecc_oob_free_position(struct nand_device *nand, int i) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + int position; + + if (i < eng->bbm_ctl.section) + position = (i + 1) * eng->oob_free; + else if (i == eng->bbm_ctl.section) + position = 0; + else + position = i * eng->oob_free; + + return position; +} + +static inline int mtk_ecc_data_len(struct nand_device *nand) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + int eccsize = nand->ecc.ctx.conf.step_size; + int eccbytes = eng->oob_ecc; + + return eccsize + eng->oob_free + eccbytes; +} + +static inline u8 *mtk_ecc_section_ptr(struct nand_device *nand, int i) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + + return eng->bounce_page_buf + i * mtk_ecc_data_len(nand); +} + +static inline u8 *mtk_ecc_oob_free_ptr(struct nand_device *nand, int i) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + int eccsize = nand->ecc.ctx.conf.step_size; + + return eng->bounce_page_buf + i * mtk_ecc_data_len(nand) + eccsize; +} + +static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, 
u8 *c) +{ + /* nop */ +} + +static void mtk_ecc_bbm_swap(struct nand_device *nand, u8 *databuf, u8 *oobbuf) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + int step_size = nand->ecc.ctx.conf.step_size; + u32 bbm_pos = eng->bbm_ctl.position; + + bbm_pos += eng->bbm_ctl.section * step_size; + + swap(oobbuf[0], databuf[bbm_pos]); +} + +static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl, + struct nand_device *nand) +{ + if (nanddev_page_size(nand) == 512) { + bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap; + } else { + bbm_ctl->bbm_swap = mtk_ecc_bbm_swap; + bbm_ctl->section = nanddev_page_size(nand) / + mtk_ecc_data_len(nand); + bbm_ctl->position = nanddev_page_size(nand) % + mtk_ecc_data_len(nand); + } +} + +static int mtk_ecc_ooblayout_free(struct mtd_info *mtd, int section, + struct mtd_oob_region *oob_region) +{ + struct nand_device *nand = mtd_to_nanddev(mtd); + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + u32 eccsteps, bbm_bytes = 0; + + eccsteps = mtd->writesize / conf->step_size; + + if (section >= eccsteps) + return -ERANGE; + + /* Reserve 1 byte for BBM only for section 0 */ + if (section == 0) + bbm_bytes = 1; + + oob_region->length = eng->oob_free - bbm_bytes; + oob_region->offset = section * eng->oob_free + bbm_bytes; + + return 0; +} + +static int mtk_ecc_ooblayout_ecc(struct mtd_info *mtd, int section, + struct mtd_oob_region *oob_region) +{ + struct nand_device *nand = mtd_to_nanddev(mtd); + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + + if (section) + return -ERANGE; + + oob_region->offset = eng->oob_free * eng->nsteps; + oob_region->length = mtd->oobsize - oob_region->offset; + + return 0; +} + +static const struct mtd_ooblayout_ops mtk_ecc_ooblayout_ops = { + .free = mtk_ecc_ooblayout_free, + .ecc = mtk_ecc_ooblayout_ecc, +}; + +const struct mtd_ooblayout_ops *mtk_ecc_get_ooblayout(void) +{ + return &mtk_ecc_ooblayout_ops; +} + +static struct device 
*mtk_ecc_get_engine_dev(struct device *dev)
+{
+	struct platform_device *eccpdev;
+	struct device_node *np;
+
+	/*
+	 * In the pipelined case, the device node is the host controller,
+	 * not the actual ECC engine.
+	 */
+	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
+	if (!np)
+		return NULL;
+
+	eccpdev = of_find_device_by_node(np);
+	if (!eccpdev) {
+		of_node_put(np);
+		return NULL;
+	}
+
+	platform_device_put(eccpdev);
+	of_node_put(np);
+
+	return &eccpdev->dev;
+}
+
+/*
+ * mtk_ecc_data_format() - Convert to/from MTK ECC on-flash data format
+ *
+ * The MTK ECC engine organizes page data by section; the on-flash format
+ * is as below:
+ * || section 0                || section 1                || ...
+ * || data | OOB free | OOB ECC || data | OOB free | OOB ECC || ...
+ *
+ * Therefore, it's necessary to convert data when reading/writing in raw mode.
+ */
+static void mtk_ecc_data_format(struct nand_device *nand,
+				struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int step_size = nand->ecc.ctx.conf.step_size;
+	void *databuf, *oobbuf;
+	int i;
+
+	if (req->type == NAND_PAGE_WRITE) {
+		databuf = (void *)req->databuf.out;
+		oobbuf = (void *)req->oobbuf.out;
+
+		/*
+		 * Convert the source databuf and oobbuf to the MTK ECC
+		 * on-flash data format.
+		 */
+		for (i = 0; i < eng->nsteps; i++) {
+			if (i == eng->bbm_ctl.section)
+				eng->bbm_ctl.bbm_swap(nand,
+						      databuf, oobbuf);
+			memcpy(mtk_ecc_section_ptr(nand, i),
+			       databuf + mtk_ecc_data_off(nand, i),
+			       step_size);
+
+			memcpy(mtk_ecc_oob_free_ptr(nand, i),
+			       oobbuf + mtk_ecc_oob_free_position(nand, i),
+			       eng->oob_free);
+
+			memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
+			       oobbuf + eng->oob_free * eng->nsteps +
+			       i * eng->oob_ecc,
+			       eng->oob_ecc);
+		}
+
+		req->databuf.out = eng->bounce_page_buf;
+		req->oobbuf.out = eng->bounce_oob_buf;
+	} else {
+		databuf = req->databuf.in;
+		oobbuf = req->oobbuf.in;
+
+		/*
+		 * Convert the on-flash MTK ECC data format to the
+		 * destination databuf and oobbuf.
+ */ + memcpy(eng->bounce_page_buf, databuf, + nanddev_page_size(nand)); + memcpy(eng->bounce_oob_buf, oobbuf, + nanddev_per_page_oobsize(nand)); + + for (i = 0; i < eng->nsteps; i++) { + memcpy(databuf + mtk_ecc_data_off(nand, i), + mtk_ecc_section_ptr(nand, i), step_size); + + memcpy(oobbuf + mtk_ecc_oob_free_position(nand, i), + mtk_ecc_section_ptr(nand, i) + step_size, + eng->oob_free); + + memcpy(oobbuf + eng->oob_free * eng->nsteps + + i * eng->oob_ecc, + mtk_ecc_section_ptr(nand, i) + step_size + + eng->oob_free, + eng->oob_ecc); + + if (i == eng->bbm_ctl.section) + eng->bbm_ctl.bbm_swap(nand, + databuf, oobbuf); + } + } +} + +static void mtk_ecc_oob_free_shift(struct nand_device *nand, + u8 *dst_buf, u8 *src_buf, bool write) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + u32 position; + int i; + + for (i = 0; i < eng->nsteps; i++) { + if (i < eng->bbm_ctl.section) + position = (i + 1) * eng->oob_free; + else if (i == eng->bbm_ctl.section) + position = 0; + else + position = i * eng->oob_free; + + if (write) + memcpy(dst_buf + i * eng->oob_free, src_buf + position, + eng->oob_free); + else + memcpy(dst_buf + position, src_buf + i * eng->oob_free, + eng->oob_free); + } +} + +static void mtk_ecc_set_section_size_and_strength(struct nand_device *nand) +{ + struct nand_ecc_props *reqs = &nand->ecc.requirements; + struct nand_ecc_props *user = &nand->ecc.user_conf; + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + + /* Configure the correction depending on the NAND device topology */ + if (user->step_size && user->strength) { + conf->step_size = user->step_size; + conf->strength = user->strength; + } else if (reqs->step_size && reqs->strength) { + conf->step_size = reqs->step_size; + conf->strength = reqs->strength; + } + + /* + * Align ECC strength and ECC size. + * The MTK HW ECC engine only support 512 and 1024 ECC size. 
+ */ + if (conf->step_size < 1024) { + if (nanddev_page_size(nand) > 512 && + eng->ecc->caps->max_section_size > 512) { + conf->step_size = 1024; + conf->strength <<= 1; + } else { + conf->step_size = 512; + } + } else { + conf->step_size = 1024; + } + + eng->section_size = conf->step_size; +} + +static int mtk_ecc_set_spare_per_section(struct nand_device *nand) +{ + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + const u8 *spare = eng->ecc->caps->spare_size; + u32 i, closest_spare = 0; + + eng->nsteps = nanddev_page_size(nand) / conf->step_size; + eng->oob_per_section = nanddev_per_page_oobsize(nand) / eng->nsteps; + + if (conf->step_size == 1024) + eng->oob_per_section >>= 1; + + if (eng->oob_per_section < spare[0]) { + dev_err(eng->ecc->dev, "OOB size per section too small %d\n", + eng->oob_per_section); + return -EINVAL; + } + + for (i = 0; i < eng->ecc->caps->num_spare_size; i++) { + if (eng->oob_per_section >= spare[i] && + spare[i] >= spare[closest_spare]) { + closest_spare = i; + if (eng->oob_per_section == spare[i]) + break; + } + } + + eng->oob_per_section = spare[closest_spare]; + eng->oob_per_section_idx = closest_spare; + + if (conf->step_size == 1024) + eng->oob_per_section <<= 1; + + return 0; +} + +int mtk_ecc_prepare_io_req_pipelined(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + struct mtd_info *mtd = nanddev_to_mtd(nand); + int ret; + + nand_ecc_tweak_req(&eng->req_ctx, req); + + /* Store the source buffer data to avoid modify source data */ + if (req->type == NAND_PAGE_WRITE) { + if (req->datalen) + memcpy(eng->src_page_buf + req->dataoffs, + req->databuf.out, + req->datalen); + + if (req->ooblen) + memcpy(eng->src_oob_buf + req->ooboffs, + req->oobbuf.out, + req->ooblen); + } + + if (req->mode == MTD_OPS_RAW) { + if (req->type == NAND_PAGE_WRITE) + mtk_ecc_data_format(nand, req); + + return 0; + } + + 
eng->ecc_cfg.mode = ECC_NFI_MODE; + eng->ecc_cfg.sectors = eng->nsteps; + eng->ecc_cfg.op = ECC_DECODE; + + if (req->type == NAND_PAGE_READ) + return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg); + + memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand)); + if (req->ooblen) { + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_set_databytes(mtd, + req->oobbuf.out, + eng->bounce_oob_buf, + req->ooboffs, + mtd->oobavail); + if (ret) + return ret; + } else { + memcpy(eng->bounce_oob_buf + req->ooboffs, + req->oobbuf.out, + req->ooblen); + } + } + + eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out, + eng->bounce_oob_buf); + mtk_ecc_oob_free_shift(nand, (void *)req->oobbuf.out, + eng->bounce_oob_buf, true); + + eng->ecc_cfg.op = ECC_ENCODE; + + return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg); +} + +int mtk_ecc_finish_io_req_pipelined(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + struct mtd_info *mtd = nanddev_to_mtd(nand); + struct mtk_ecc_stats stats; + int ret; + + if (req->type == NAND_PAGE_WRITE) { + /* Restore the source buffer data */ + if (req->datalen) + memcpy((void *)req->databuf.out, + eng->src_page_buf + req->dataoffs, + req->datalen); + + if (req->ooblen) + memcpy((void *)req->oobbuf.out, + eng->src_oob_buf + req->ooboffs, + req->ooblen); + + if (req->mode != MTD_OPS_RAW) + mtk_ecc_disable(eng->ecc); + + nand_ecc_restore_req(&eng->req_ctx, req); + + return 0; + } + + if (req->mode == MTD_OPS_RAW) { + mtk_ecc_data_format(nand, req); + nand_ecc_restore_req(&eng->req_ctx, req); + + return 0; + } + + ret = mtk_ecc_wait_done(eng->ecc, ECC_DECODE); + if (ret) { + ret = -ETIMEDOUT; + goto out; + } + + if (eng->read_empty) { + memset(req->databuf.in, 0xff, nanddev_page_size(nand)); + memset(req->oobbuf.in, 0xff, nanddev_per_page_oobsize(nand)); + ret = 0; + + goto out; + } + + mtk_ecc_get_stats(eng->ecc, &stats, eng->nsteps); + mtd->ecc_stats.corrected += stats.corrected; + 
mtd->ecc_stats.failed += stats.failed; + + /* + * Return -EBADMSG when exit uncorrect ECC error. + * Otherwise, return the bitflips. + */ + if (stats.failed) + ret = -EBADMSG; + else + ret = stats.bitflips; + + memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand)); + mtk_ecc_oob_free_shift(nand, eng->bounce_oob_buf, req->oobbuf.in, false); + eng->bbm_ctl.bbm_swap(nand, req->databuf.in, eng->bounce_oob_buf); + + if (req->ooblen) { + if (req->mode == MTD_OPS_AUTO_OOB) + ret = mtd_ooblayout_get_databytes(mtd, + req->oobbuf.in, + eng->bounce_oob_buf, + req->ooboffs, + mtd->oobavail); + else + memcpy(req->oobbuf.in, + eng->bounce_oob_buf + req->ooboffs, + req->ooblen); + } + +out: + mtk_ecc_disable(eng->ecc); + nand_ecc_restore_req(&eng->req_ctx, req); + + return ret; +} + +int mtk_ecc_init_ctx_pipelined(struct nand_device *nand) +{ + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + struct mtd_info *mtd = nanddev_to_mtd(nand); + struct mtk_ecc_engine *eng; + struct device *dev; + int free, ret; + + /* + * In the case of a pipelined engine, the device registering the ECC + * engine is not the actual ECC engine device but the host controller. + */ + dev = mtk_ecc_get_engine_dev(nand->ecc.engine->dev); + if (!dev) + return -EINVAL; + + eng = devm_kzalloc(dev, sizeof(*eng), GFP_KERNEL); + if (!eng) + return -ENOMEM; + + nand->ecc.ctx.priv = eng; + nand->ecc.engine->priv = eng; + + eng->ecc = dev_get_drvdata(dev); + + mtk_ecc_set_section_size_and_strength(nand); + + ret = mtk_ecc_set_spare_per_section(nand); + if (ret) + return ret; + + clk_prepare_enable(eng->ecc->clk); + mtk_ecc_hw_init(eng->ecc); + + /* Calculate OOB free bytes except ECC parity data */ + free = (conf->strength * mtk_ecc_get_parity_bits(eng->ecc) + + 7) >> 3; + free = eng->oob_per_section - free; + + /* + * Enhance ECC strength if OOB left is bigger than max FDM size + * or reduce ECC strength if OOB size is not enough for ECC + * parity data. 
+	 */
+	if (free > OOB_FREE_MAX_SIZE)
+		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MAX_SIZE;
+	else if (free < 0)
+		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MIN_SIZE;
+
+	/* Calculate and adjust the ECC strength based on OOB ECC bytes */
+	conf->strength = (eng->oob_ecc << 3) /
+			 mtk_ecc_get_parity_bits(eng->ecc);
+	mtk_ecc_adjust_strength(eng->ecc, &conf->strength);
+
+	eng->oob_ecc = DIV_ROUND_UP(conf->strength *
+				    mtk_ecc_get_parity_bits(eng->ecc), 8);
+
+	eng->oob_free = eng->oob_per_section - eng->oob_ecc;
+	if (eng->oob_free > OOB_FREE_MAX_SIZE)
+		eng->oob_free = OOB_FREE_MAX_SIZE;
+
+	eng->oob_free_protected = OOB_FREE_MIN_SIZE;
+
+	eng->oob_ecc = eng->oob_per_section - eng->oob_free;
+
+	if (!mtd->ooblayout)
+		mtd_set_ooblayout(mtd, mtk_ecc_get_ooblayout());
+
+	ret = nand_ecc_init_req_tweaking(&eng->req_ctx, nand);
+	if (ret)
+		return ret;
+
+	eng->src_page_buf = kmalloc(nanddev_page_size(nand) +
+				    nanddev_per_page_oobsize(nand), GFP_KERNEL);
+	eng->bounce_page_buf = kmalloc(nanddev_page_size(nand) +
+				       nanddev_per_page_oobsize(nand), GFP_KERNEL);
+	if (!eng->src_page_buf || !eng->bounce_page_buf) {
+		ret = -ENOMEM;
+		goto cleanup_req_tweak;
+	}
+
+	eng->src_oob_buf = eng->src_page_buf + nanddev_page_size(nand);
+	eng->bounce_oob_buf = eng->bounce_page_buf + nanddev_page_size(nand);
+
+	mtk_ecc_set_bbm_ctl(&eng->bbm_ctl, nand);
+	eng->ecc_cfg.strength = conf->strength;
+	eng->ecc_cfg.len = conf->step_size + eng->oob_free_protected;
+	mtd->bitflip_threshold = conf->strength;
+
+	return 0;
+
+cleanup_req_tweak:
+	nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+
+	return ret;
+}
+
+void mtk_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	if (eng) {
+		nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+		kfree(eng->src_page_buf);
+		kfree(eng->bounce_page_buf);
+	}
+}
+
+/*
+ * The MTK ECC engine works in the pipelined case; it
+ * will be registered by the drivers that wrap it.
+ */ +static struct nand_ecc_engine_ops mtk_ecc_engine_pipelined_ops = { + .init_ctx = mtk_ecc_init_ctx_pipelined, + .cleanup_ctx = mtk_ecc_cleanup_ctx_pipelined, + .prepare_io_req = mtk_ecc_prepare_io_req_pipelined, + .finish_io_req = mtk_ecc_finish_io_req_pipelined, +}; + +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void) +{ + return &mtk_ecc_engine_pipelined_ops; +} +EXPORT_SYMBOL(mtk_ecc_get_pipelined_ops); + static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = { .err_mask = 0x3f, .ecc_strength = ecc_strength_mt2701, @@ -472,6 +1083,9 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = { .ecc_strength = ecc_strength_mt7622, .ecc_regs = mt7622_ecc_regs, .num_ecc_strength = 7, + .spare_size = spare_size_mt7622, + .num_spare_size = 19, + .max_section_size = 1024, .ecc_mode_shift = 4, .parity_bits = 13, .pg_irq_sel = 0, diff --git a/include/linux/mtd/nand-ecc-mtk.h b/include/linux/mtd/nand-ecc-mtk.h index 0e48c36e6ca0..6d550032cbd9 100644 --- a/include/linux/mtd/nand-ecc-mtk.h +++ b/include/linux/mtd/nand-ecc-mtk.h @@ -33,6 +33,61 @@ struct mtk_ecc_config { u32 len; }; +/** + * struct mtk_ecc_bbm_ctl - Information relative to the BBM swap + * @bbm_swap: BBM swap function + * @section: Section number in data area for swap + * @position: Position in @section for swap with BBM + */ +struct mtk_ecc_bbm_ctl { + void (*bbm_swap)(struct nand_device *nand, u8 *databuf, u8 *oobbuf); + u32 section; + u32 position; +}; + +/** + * struct mtk_ecc_engine - Information relative to the ECC + * @req_ctx: Save request context and tweak the original request to fit the + * engine needs + * @oob_per_section: OOB size for each section to store OOB free/ECC bytes + * @oob_per_section_idx: The index for @oob_per_section in spare size array + * @oob_ecc: OOB size for each section to store the ECC parity + * @oob_free: OOB size for each section to store the OOB free bytes + * @oob_free_protected: OOB free bytes will be protected by the ECC engine + * @section_size: The 
size of each section
+ * @read_empty: Indicates an empty page for the current read operation
+ * @nsteps: The number of sections
+ * @src_page_buf: Buffer used to store the source data buffer on write
+ * @src_oob_buf: Buffer used to store the source OOB buffer on write
+ * @bounce_page_buf: Data bounce buffer
+ * @bounce_oob_buf: OOB bounce buffer
+ * @ecc: The ECC engine private data structure
+ * @ecc_cfg: The configuration of each ECC operation
+ * @bbm_ctl: Information relative to the BBM swap
+ */
+struct mtk_ecc_engine {
+	struct nand_ecc_req_tweak_ctx req_ctx;
+
+	u32 oob_per_section;
+	u32 oob_per_section_idx;
+	u32 oob_ecc;
+	u32 oob_free;
+	u32 oob_free_protected;
+	u32 section_size;
+
+	bool read_empty;
+	u32 nsteps;
+
+	u8 *src_page_buf;
+	u8 *src_oob_buf;
+	u8 *bounce_page_buf;
+	u8 *bounce_oob_buf;
+
+	struct mtk_ecc *ecc;
+	struct mtk_ecc_config ecc_cfg;
+	struct mtk_ecc_bbm_ctl bbm_ctl;
+};
+
 int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32);
 void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
 int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
@@ -44,4 +99,17 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
 struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
 void mtk_ecc_release(struct mtk_ecc *);

+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MTK)
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void);
+
+#else /* !CONFIG_MTD_NAND_ECC_MTK */
+
+static inline struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MTD_NAND_ECC_MTK */
+
 #endif

From patchwork Tue Nov 30 08:32:00 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12646519
From: Xiangsheng Hou Subject: [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver Date: Tue, 30 Nov 2021 16:32:00 +0800 Message-ID: <20211130083202.14228-4-xiangsheng.hou@mediatek.com> In-Reply-To: <20211130083202.14228-1-xiangsheng.hou@mediatek.com> References: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>

The SPI NAND Flash interface (SNFI) driver works together with the MediaTek pipelined HW ECC engine.

Signed-off-by: Xiangsheng Hou --- drivers/spi/Kconfig | 11 + drivers/spi/Makefile | 1 + drivers/spi/spi-mtk-snfi.c | 1117 ++++++++++++++++++++++++++++++++++++ 3 files changed, 1129 insertions(+) create mode 100644 drivers/spi/spi-mtk-snfi.c diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig index 596705d24400..9cb6a173b1ef 100644 --- a/drivers/spi/Kconfig +++ b/drivers/spi/Kconfig @@ -535,6 +535,17 @@ config SPI_MT65XX say Y or M here.If you are not sure, say N. SPI drivers for Mediatek MT65XX and MT81XX series ARM SoCs.
+config SPI_MTK_SNFI + tristate "MediaTek SPI NAND interface" + depends on MTD + select MTD_SPI_NAND + select MTD_NAND_ECC_MTK + help + This selects the SPI NAND Flash interface (SNFI), + which can be found on MediaTek SoCs. + Say Y or M here. If you are not sure, say N. + Note that parallel NAND and SPI NAND are mutually exclusive on MediaTek SoCs. + config SPI_MT7621 tristate "MediaTek MT7621 SPI Controller" depends on RALINK || COMPILE_TEST diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile index dd7393a6046f..57d11eecf662 100644 --- a/drivers/spi/Makefile +++ b/drivers/spi/Makefile @@ -73,6 +73,7 @@ obj-$(CONFIG_SPI_MPC52xx) += spi-mpc52xx.o obj-$(CONFIG_SPI_MT65XX) += spi-mt65xx.o obj-$(CONFIG_SPI_MT7621) += spi-mt7621.o obj-$(CONFIG_SPI_MTK_NOR) += spi-mtk-nor.o +obj-$(CONFIG_SPI_MTK_SNFI) += spi-mtk-snfi.o obj-$(CONFIG_SPI_MXIC) += spi-mxic.o obj-$(CONFIG_SPI_MXS) += spi-mxs.o obj-$(CONFIG_SPI_NPCM_FIU) += spi-npcm-fiu.o diff --git a/drivers/spi/spi-mtk-snfi.c b/drivers/spi/spi-mtk-snfi.c new file mode 100644 index 000000000000..b4dce6d78176 --- /dev/null +++ b/drivers/spi/spi-mtk-snfi.c @@ -0,0 +1,1117 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for MediaTek SPI NAND Flash interface + * + * Copyright (C) 2021 MediaTek Inc.
+ * Authors: Xiangsheng Hou + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Registers used by the driver */ +#define NFI_CNFG (0x00) +#define CNFG_DMA BIT(0) +#define CNFG_READ_EN BIT(1) +#define CNFG_DMA_BURST_EN BIT(2) +#define CNFG_HW_ECC_EN BIT(8) +#define CNFG_AUTO_FMT_EN BIT(9) +#define CNFG_OP_CUST GENMASK(14, 13) +#define NFI_PAGEFMT (0x04) +#define PAGEFMT_512_2K (0) +#define PAGEFMT_2K_4K (1) +#define PAGEFMT_4K_8K (2) +#define PAGEFMT_8K_16K (3) +#define PAGEFMT_PAGE_MASK GENMASK(1, 0) +#define PAGEFMT_SEC_SEL_512 BIT(2) +#define PAGEFMT_FDM_SHIFT (8) +#define PAGEFMT_FDM_ECC_SHIFT (12) +#define PAGEFMT_SPARE_SHIFT (16) +#define PAGEFMT_SPARE_MASK GENMASK(21, 16) +#define NFI_CON (0x08) +#define CON_FIFO_FLUSH BIT(0) +#define CON_NFI_RST BIT(1) +#define CON_BRD BIT(8) +#define CON_BWR BIT(9) +#define CON_SEC_SHIFT (12) +#define CON_SEC_MASK GENMASK(16, 12) +#define NFI_INTR_EN (0x10) +#define INTR_CUS_PROG_EN BIT(7) +#define INTR_CUS_READ_EN BIT(8) +#define INTR_IRQ_EN BIT(31) +#define NFI_INTR_STA (0x14) +#define NFI_CMD (0x20) +#define CMD_DUMMY (0x00) +#define NFI_STRDATA (0x40) +#define STAR_EN BIT(0) +#define NFI_STA (0x60) +#define NFI_FSM_MASK GENMASK(19, 16) +#define STA_EMP_PAGE BIT(12) +#define NFI_ADDRCNTR (0x70) +#define CNTR_MASK GENMASK(16, 12) +#define ADDRCNTR_SEC_SHIFT (12) +#define ADDRCNTR_SEC(val) \ + (((val) & CNTR_MASK) >> ADDRCNTR_SEC_SHIFT) +#define NFI_STRADDR (0x80) +#define NFI_BYTELEN (0x84) +#define NFI_FDML(x) (0xA0 + (x) * sizeof(u32) * 2) +#define NFI_FDMM(x) (0xA4 + (x) * sizeof(u32) * 2) +#define NFI_MASTERSTA (0x224) +#define AHB_BUS_BUSY GENMASK(1, 0) +#define SNFI_MAC_CTL (0x500) +#define MAC_WIP BIT(0) +#define MAC_WIP_READY BIT(1) +#define MAC_TRIG BIT(2) +#define MAC_EN BIT(3) +#define MAC_SIO_SEL BIT(4) +#define SNFI_MAC_OUTL (0x504) +#define SNFI_MAC_INL (0x508) +#define SNFI_RD_CTL2 (0x510) +#define RD_CMD_MASK 
GENMASK(7, 0) +#define RD_DUMMY_SHIFT (8) +#define SNFI_RD_CTL3 (0x514) +#define RD_ADDR_MASK GENMASK(16, 0) +#define SNFI_PG_CTL1 (0x524) +#define WR_LOAD_CMD_MASK GENMASK(15, 8) +#define WR_LOAD_CMD_SHIFT (8) +#define SNFI_PG_CTL2 (0x528) +#define WR_LOAD_ADDR_MASK GENMASK(15, 0) +#define SNFI_MISC_CTL (0x538) +#define RD_CUSTOM_EN BIT(6) +#define WR_CUSTOM_EN BIT(7) +#define LATCH_LAT_SHIFT (8) +#define LATCH_LAT_MASK GENMASK(9, 8) +#define RD_MODE_X2 BIT(16) +#define RD_MODE_X4 BIT(17) +#define RD_MODE_DQUAL BIT(18) +#define RD_MODE_MASK GENMASK(18, 16) +#define WR_X4_EN BIT(20) +#define SW_RST BIT(28) +#define SNFI_MISC_CTL2 (0x53c) +#define WR_LEN_SHIFT (16) +#define SNFI_DLY_CTL3 (0x548) +#define SAM_DLY_MASK GENMASK(5, 0) +#define SNFI_STA_CTL1 (0x550) +#define SPI_STATE GENMASK(3, 0) +#define CUS_READ_DONE BIT(27) +#define CUS_PROG_DONE BIT(28) +#define SNFI_CNFG (0x55c) +#define SNFI_MODE_EN BIT(0) +#define SNFI_GPRAM_DATA (0x800) +#define SNFI_GPRAM_MAX_LEN (160) + +#define MTK_SNFI_TIMEOUT (500000) +#define MTK_SNFI_RESET_TIMEOUT (1000000) +#define MTK_SNFI_AUTOSUSPEND_DELAY (1000) +#define KB(x) ((x) * 1024UL) + +struct mtk_snfi_caps { + u8 pageformat_spare_shift; +}; + +struct mtk_snfi { + struct device *dev; + struct completion done; + void __iomem *regs; + const struct mtk_snfi_caps *caps; + + struct clk *nfi_clk; + struct clk *snfi_clk; + struct clk *hclk; + + struct nand_ecc_engine *engine; + + u32 sample_delay; + u32 read_latency; + + void *tx_buf; + dma_addr_t dma_addr; +}; + +static struct mtk_ecc_engine *mtk_snfi_to_ecc_engine(struct mtk_snfi *snfi) +{ + return snfi->engine->priv; +} + +static void mtk_snfi_mac_enable(struct mtk_snfi *snfi) +{ + u32 val; + + val = readl(snfi->regs + SNFI_MAC_CTL); + val &= ~MAC_SIO_SEL; + val |= MAC_EN; + + writel(val, snfi->regs + SNFI_MAC_CTL); +} + +static int mtk_snfi_mac_trigger(struct mtk_snfi *snfi) +{ + int ret; + u32 val; + + val = readl(snfi->regs + SNFI_MAC_CTL); + val |= MAC_TRIG; + writel(val, 
snfi->regs + SNFI_MAC_CTL); + + ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL, + val, val & MAC_WIP_READY, + 0, MTK_SNFI_TIMEOUT); + if (ret < 0) { + dev_err(snfi->dev, "wait for wip ready timeout\n"); + return -EIO; + } + + ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL, + val, !(val & MAC_WIP), 0, + MTK_SNFI_TIMEOUT); + if (ret < 0) { + dev_err(snfi->dev, "command write timeout\n"); + return -EIO; + } + + return 0; +} + +static void mtk_snfi_mac_disable(struct mtk_snfi *snfi) +{ + u32 val; + + val = readl(snfi->regs + SNFI_MAC_CTL); + val &= ~(MAC_TRIG | MAC_EN); + writel(val, snfi->regs + SNFI_MAC_CTL); +} + +static int mtk_snfi_mac_op(struct mtk_snfi *snfi) +{ + int ret; + + mtk_snfi_mac_enable(snfi); + ret = mtk_snfi_mac_trigger(snfi); + mtk_snfi_mac_disable(snfi); + + return ret; +} + +static inline void mtk_snfi_read_oob_free(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + u8 *oobptr = op->data.buf.in; + u32 vall, valm; + int i, j; + + oobptr += eng->section_size * eng->nsteps; + for (i = 0; i < eng->nsteps; i++) { + vall = readl(snfi->regs + NFI_FDML(i)); + valm = readl(snfi->regs + NFI_FDMM(i)); + + for (j = 0; j < eng->oob_free; j++) + oobptr[j] = (j >= 4 ? valm : vall) >> ((j % 4) * 8); + + oobptr += eng->oob_free; + } +} + +static inline void mtk_snfi_write_oob_free(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + const u8 *oobptr = op->data.buf.out; + u32 vall, valm; + int i, j; + + oobptr += eng->section_size * eng->nsteps; + for (i = 0; i < eng->nsteps; i++) { + vall = 0; + valm = 0; + for (j = 0; j < 8; j++) { + if (j < 4) + vall |= (j < eng->oob_free ? oobptr[j] : 0xff) + << (j * 8); + else + valm |= (j < eng->oob_free ? 
oobptr[j] : 0xff) + << ((j - 4) * 8); + } + + writel(vall, snfi->regs + NFI_FDML(i)); + writel(valm, snfi->regs + NFI_FDMM(i)); + oobptr += eng->oob_free; + } +} + +static irqreturn_t mtk_snfi_irq(int irq, void *id) +{ + struct mtk_snfi *snfi = id; + u32 sta, ien; + + sta = readl(snfi->regs + NFI_INTR_STA); + ien = readl(snfi->regs + NFI_INTR_EN); + + if (!(sta & ien)) + return IRQ_NONE; + + writel(0, snfi->regs + NFI_INTR_EN); + complete(&snfi->done); + + return IRQ_HANDLED; +} + +static int mtk_snfi_enable_clk(struct device *dev, struct mtk_snfi *snfi) +{ + int ret; + + ret = clk_prepare_enable(snfi->nfi_clk); + if (ret) { + dev_err(dev, "failed to enable nfi clk\n"); + return ret; + } + + ret = clk_prepare_enable(snfi->snfi_clk); + if (ret) { + dev_err(dev, "failed to enable snfi clk\n"); + clk_disable_unprepare(snfi->nfi_clk); + return ret; + } + + ret = clk_prepare_enable(snfi->hclk); + if (ret) { + dev_err(dev, "failed to enable hclk\n"); + clk_disable_unprepare(snfi->nfi_clk); + clk_disable_unprepare(snfi->snfi_clk); + return ret; + } + + return 0; +} + +static void mtk_snfi_disable_clk(struct mtk_snfi *snfi) +{ + clk_disable_unprepare(snfi->nfi_clk); + clk_disable_unprepare(snfi->snfi_clk); + clk_disable_unprepare(snfi->hclk); +} + +static int mtk_snfi_reset(struct mtk_snfi *snfi) +{ + u32 val; + int ret; + + val = readl(snfi->regs + SNFI_MISC_CTL) | SW_RST; + writel(val, snfi->regs + SNFI_MISC_CTL); + + ret = readw_poll_timeout(snfi->regs + SNFI_STA_CTL1, val, + !(val & SPI_STATE), 0, + MTK_SNFI_RESET_TIMEOUT); + if (ret) { + dev_warn(snfi->dev, "wait spi idle timeout 0x%x\n", val); + return ret; + } + + val = readl(snfi->regs + SNFI_MISC_CTL); + val &= ~SW_RST; + writel(val, snfi->regs + SNFI_MISC_CTL); + + writew(CON_FIFO_FLUSH | CON_NFI_RST, snfi->regs + NFI_CON); + ret = readw_poll_timeout(snfi->regs + NFI_STA, val, + !(val & NFI_FSM_MASK), 0, + MTK_SNFI_RESET_TIMEOUT); + if (ret) { + dev_warn(snfi->dev, "wait nfi fsm idle timeout 0x%x\n", val); + 
return ret; + } + + val = readl(snfi->regs + NFI_STRDATA); + val &= ~STAR_EN; + writew(val, snfi->regs + NFI_STRDATA); + + return 0; +} + +static int mtk_snfi_init(struct mtk_snfi *snfi) +{ + int ret; + u32 val; + + ret = mtk_snfi_reset(snfi); + if (ret) + return ret; + + writel(SNFI_MODE_EN, snfi->regs + SNFI_CNFG); + + if (snfi->sample_delay) { + val = readl(snfi->regs + SNFI_DLY_CTL3); + val &= ~SAM_DLY_MASK; + val |= snfi->sample_delay; + writel(val, snfi->regs + SNFI_DLY_CTL3); + } + + if (snfi->read_latency) { + val = readl(snfi->regs + SNFI_MISC_CTL); + val &= ~LATCH_LAT_MASK; + val |= (snfi->read_latency << LATCH_LAT_SHIFT); + writel(val, snfi->regs + SNFI_MISC_CTL); + } + + return 0; +} + +static void mtk_snfi_prepare_for_tx(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + u32 val; + + val = readl(snfi->regs + SNFI_PG_CTL1); + val &= ~WR_LOAD_CMD_MASK; + val |= op->cmd.opcode << WR_LOAD_CMD_SHIFT; + writel(val, snfi->regs + SNFI_PG_CTL1); + + writel(op->addr.val & WR_LOAD_ADDR_MASK, + snfi->regs + SNFI_PG_CTL2); + + val = readl(snfi->regs + SNFI_MISC_CTL); + val |= WR_CUSTOM_EN; + if (op->data.buswidth == 4) + val |= WR_X4_EN; + writel(val, snfi->regs + SNFI_MISC_CTL); + + val = eng->nsteps * (eng->oob_per_section + eng->section_size); + writel(val << WR_LEN_SHIFT, snfi->regs + SNFI_MISC_CTL2); + + writel(INTR_CUS_PROG_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN); +} + +static void mtk_snfi_prepare_for_rx(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + u32 val, dummy_cycle; + + dummy_cycle = (op->dummy.nbytes << 3) >> + (ffs(op->dummy.buswidth) - 1); + val = (op->cmd.opcode & RD_CMD_MASK) | + (dummy_cycle << RD_DUMMY_SHIFT); + writel(val, snfi->regs + SNFI_RD_CTL2); + + writel(op->addr.val & RD_ADDR_MASK, + snfi->regs + SNFI_RD_CTL3); + + val = readl(snfi->regs + SNFI_MISC_CTL); + val |= RD_CUSTOM_EN; + val &= 
~RD_MODE_MASK; + if (op->data.buswidth == 4) + val |= RD_MODE_X4; + else if (op->data.buswidth == 2) + val |= RD_MODE_X2; + + if (op->addr.buswidth != 1) + val |= RD_MODE_DQUAL; + + writel(val, snfi->regs + SNFI_MISC_CTL); + + val = eng->nsteps * (eng->oob_per_section + eng->section_size); + writel(val, snfi->regs + SNFI_MISC_CTL2); + + writel(INTR_CUS_READ_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN); +} + +static int mtk_snfi_prepare(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + dma_addr_t addr; + int ret; + u32 val; + + addr = dma_map_single(snfi->dev, + op->data.buf.in, op->data.nbytes, + rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE); + ret = dma_mapping_error(snfi->dev, addr); + if (ret) { + dev_err(snfi->dev, "dma mapping error\n"); + return -EINVAL; + } + + snfi->dma_addr = addr; + writel(lower_32_bits(addr), snfi->regs + NFI_STRADDR); + + if (op->ecc_en && !rx) + mtk_snfi_write_oob_free(snfi, op); + + val = readw(snfi->regs + NFI_CNFG); + val |= CNFG_DMA | CNFG_DMA_BURST_EN | CNFG_OP_CUST; + val |= rx ? CNFG_READ_EN : 0; + + if (op->ecc_en) + val |= CNFG_HW_ECC_EN | CNFG_AUTO_FMT_EN; + + writew(val, snfi->regs + NFI_CNFG); + + writel(eng->nsteps << CON_SEC_SHIFT, snfi->regs + NFI_CON); + + init_completion(&snfi->done); + + /* trigger state machine to custom op mode */ + writel(CMD_DUMMY, snfi->regs + NFI_CMD); + + if (rx) + mtk_snfi_prepare_for_rx(snfi, op); + else + mtk_snfi_prepare_for_tx(snfi, op); + + return 0; +} + +static void mtk_snfi_trigger(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + u32 val; + + val = readl(snfi->regs + NFI_CON); + val |= rx ? 
CON_BRD : CON_BWR; + writew(val, snfi->regs + NFI_CON); + + writew(STAR_EN, snfi->regs + NFI_STRDATA); +} + +static int mtk_snfi_wait_done(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + struct device *dev = snfi->dev; + u32 val; + int ret; + + ret = wait_for_completion_timeout(&snfi->done, msecs_to_jiffies(500)); + if (!ret) { + dev_err(dev, "wait for %d completion done timeout\n", rx); + return -ETIMEDOUT; + } + + if (rx) { + ret = readl_poll_timeout_atomic(snfi->regs + NFI_BYTELEN, + val, + ADDRCNTR_SEC(val) >= + eng->nsteps, + 0, MTK_SNFI_TIMEOUT); + if (ret) { + dev_err(dev, "wait for rx section count timeout\n"); + return -ETIMEDOUT; + } + + ret = readl_poll_timeout_atomic(snfi->regs + NFI_MASTERSTA, + val, + !(val & AHB_BUS_BUSY), + 0, MTK_SNFI_TIMEOUT); + if (ret) { + dev_err(dev, "wait for bus busy timeout\n"); + return -ETIMEDOUT; + } + } else { + ret = readl_poll_timeout_atomic(snfi->regs + NFI_ADDRCNTR, + val, + ADDRCNTR_SEC(val) >= + eng->nsteps, + 0, MTK_SNFI_TIMEOUT); + if (ret) { + dev_err(dev, "wait for tx section count timeout\n"); + return -ETIMEDOUT; + } + } + + return 0; +} + +static void mtk_snfi_complete(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + u32 val; + + dma_unmap_single(snfi->dev, + snfi->dma_addr, op->data.nbytes, + rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE); + + if (op->ecc_en && rx) + mtk_snfi_read_oob_free(snfi, op); + + val = readl(snfi->regs + SNFI_MISC_CTL); + val &= rx ? ~RD_CUSTOM_EN : ~WR_CUSTOM_EN; + writel(val, snfi->regs + SNFI_MISC_CTL); + + val = readl(snfi->regs + SNFI_STA_CTL1); + val |= rx ? CUS_READ_DONE : CUS_PROG_DONE; + writew(val, snfi->regs + SNFI_STA_CTL1); + val &= rx ? ~CUS_READ_DONE : ~CUS_PROG_DONE; + writew(val, snfi->regs + SNFI_STA_CTL1); + + /* Disable interrupt */ + val = readl(snfi->regs + NFI_INTR_EN); + val &= rx ? 
~INTR_CUS_READ_EN : ~INTR_CUS_PROG_EN; + writew(val, snfi->regs + NFI_INTR_EN); + + writew(0, snfi->regs + NFI_CNFG); + writew(0, snfi->regs + NFI_CON); +} + +static int mtk_snfi_transfer_dma(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + int ret; + + ret = mtk_snfi_prepare(snfi, op, rx); + if (ret) + return ret; + + mtk_snfi_trigger(snfi, op, rx); + + ret = mtk_snfi_wait_done(snfi, op, rx); + + mtk_snfi_complete(snfi, op, rx); + + return ret; +} + +static int mtk_snfi_transfer_mac(struct mtk_snfi *snfi, + const u8 *txbuf, u8 *rxbuf, + const u32 txlen, const u32 rxlen) +{ + u32 i, j, val, tmp; + u8 *p_tmp = (u8 *)(&tmp); + u32 offset = 0; + int ret = 0; + + /* Move tx data to gpram in snfi mac mode */ + for (i = 0; i < txlen; ) { + for (j = 0, tmp = 0; i < txlen && j < 4; i++, j++) + p_tmp[j] = txbuf[i]; + + writel(tmp, snfi->regs + SNFI_GPRAM_DATA + offset); + offset += 4; + } + + writel(txlen, snfi->regs + SNFI_MAC_OUTL); + writel(rxlen, snfi->regs + SNFI_MAC_INL); + + ret = mtk_snfi_mac_op(snfi); + if (ret) { + dev_warn(snfi->dev, "snfi mac operation failed\n"); + return ret; + } + + /* Get rx data from gpram in snfi mac mode */ + if (rxlen) + for (i = 0, offset = rounddown(txlen, 4); i < rxlen; ) { + val = readl(snfi->regs + + SNFI_GPRAM_DATA + offset); + for (j = 0; i < rxlen && j < 4; i++, j++, rxbuf++) { + if (i == 0) + j = txlen % 4; + *rxbuf = (val >> (j * 8)) & 0xff; + } + offset += 4; + } + + return ret; +} + +static int mtk_snfi_exec_op(struct spi_mem *mem, + const struct spi_mem_op *op) +{ + struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master); + u8 *buf, *txbuf = snfi->tx_buf, *rxbuf = NULL; + u32 txlen = 0, rxlen = 0; + int i, ret = 0; + bool rx; + + rx = op->data.dir == SPI_MEM_DATA_IN; + + ret = mtk_snfi_reset(snfi); + if (ret) { + dev_warn(snfi->dev, "snfi reset failed\n"); + return ret; + } + + /* + * If tx/rx data buswidth is not 0/1, use snfi DMA mode. + * Otherwise, use snfi mac mode.
+ */ + if (op->data.buswidth != 1 && op->data.buswidth != 0) { + ret = mtk_snfi_transfer_dma(snfi, op, rx); + if (ret) + dev_warn(snfi->dev, "snfi dma transfer %d fail %d\n", + rx, ret); + return ret; + } + + txbuf[txlen++] = op->cmd.opcode; + + if (op->addr.nbytes) + for (i = 0; i < op->addr.nbytes; i++) + txbuf[txlen++] = op->addr.val >> + (8 * (op->addr.nbytes - i - 1)); + + txlen += op->dummy.nbytes; + + if (op->data.dir == SPI_MEM_DATA_OUT) { + buf = (u8 *)op->data.buf.out; + for (i = 0; i < op->data.nbytes; i++) + txbuf[txlen++] = buf[i]; + } + + if (op->data.dir == SPI_MEM_DATA_IN) { + rxbuf = (u8 *)op->data.buf.in; + rxlen = op->data.nbytes; + } + + ret = mtk_snfi_transfer_mac(snfi, txbuf, rxbuf, txlen, rxlen); + if (ret) + dev_warn(snfi->dev, "snfi mac transfer %d fail %d\n", + op->data.dir, ret); + + return ret; +} + +static int mtk_snfi_check_buswidth(u8 width) +{ + switch (width) { + case 1: + case 2: + case 4: + return 0; + + default: + break; + } + + return -EOPNOTSUPP; +} + +static bool mtk_snfi_supports_op(struct spi_mem *mem, + const struct spi_mem_op *op) +{ + int ret = 0; + + if (!spi_mem_default_supports_op(mem, op)) + return false; + + if (op->cmd.buswidth != 1) + return false; + + /* + * For one operation will use snfi mac mode when data + * buswidth is 0/1. However, the HW ECC engine can not + * be used in mac mode. + */ + if (op->ecc_en && op->data.buswidth == 1 && + op->data.nbytes >= SNFI_GPRAM_MAX_LEN) + return false; + + switch (op->data.dir) { + /* For spi mem data in, can support 1/2/4 buswidth */ + case SPI_MEM_DATA_IN: + if (op->addr.nbytes) + ret |= mtk_snfi_check_buswidth(op->addr.buswidth); + + if (op->dummy.nbytes) + ret |= mtk_snfi_check_buswidth(op->dummy.buswidth); + + if (op->data.nbytes) + ret |= mtk_snfi_check_buswidth(op->data.buswidth); + + if (ret) + return false; + + break; + case SPI_MEM_DATA_OUT: + /* + * For spi mem data out, can support 0/1 buswidth + * for addr/dummy and 1/4 buswidth for data. 
+ */ + if (op->addr.buswidth != 0 && op->addr.buswidth != 1) + return false; + + if (op->dummy.buswidth != 0 && op->dummy.buswidth != 1) + return false; + + if (op->data.buswidth != 1 && op->data.buswidth != 4) + return false; + + break; + default: + break; + } + + return true; +} + +static int mtk_snfi_adjust_op_size(struct spi_mem *mem, + struct spi_mem_op *op) +{ + u32 len, max_len; + + /* + * The op size only support SNFI_GPRAM_MAX_LEN which will + * use the snfi mac mode when data buswidth is 0/1. + * Otherwise, the snfi can max support 16KB. + */ + if (op->data.buswidth == 1 || op->data.buswidth == 0) + max_len = SNFI_GPRAM_MAX_LEN; + else + max_len = KB(16); + + len = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes; + if (len > max_len) + return -EOPNOTSUPP; + + if ((len + op->data.nbytes) > max_len) + op->data.nbytes = max_len - len; + + return 0; +} + +static const struct mtk_snfi_caps mtk_snfi_caps_mt7622 = { + .pageformat_spare_shift = 16, +}; + +static const struct spi_controller_mem_ops mtk_snfi_ops = { + .adjust_op_size = mtk_snfi_adjust_op_size, + .supports_op = mtk_snfi_supports_op, + .exec_op = mtk_snfi_exec_op, +}; + +static const struct of_device_id mtk_snfi_id_table[] = { + { .compatible = "mediatek,mt7622-snfi", + .data = &mtk_snfi_caps_mt7622, + }, + { /* sentinel */ } +}; + +/* ECC wrapper */ +static struct mtk_snfi *mtk_nand_to_spi(struct nand_device *nand) +{ + struct device *dev = nand->ecc.engine->dev; + struct spi_master *master = dev_get_drvdata(dev); + struct mtk_snfi *snfi = spi_master_get_devdata(master); + + return snfi; +} + +static int mtk_snfi_config(struct nand_device *nand, + struct mtk_snfi *snfi) +{ + struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi); + u32 val; + + switch (nanddev_page_size(nand)) { + case 512: + val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512; + break; + case KB(2): + if (eng->section_size == 512) + val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512; + else + val = PAGEFMT_512_2K; + break; + case KB(4): + if 
(eng->section_size == 512) + val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512; + else + val = PAGEFMT_2K_4K; + break; + case KB(8): + if (eng->section_size == 512) + val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512; + else + val = PAGEFMT_4K_8K; + break; + case KB(16): + val = PAGEFMT_8K_16K; + break; + default: + dev_err(snfi->dev, "invalid page len: %d\n", + nanddev_page_size(nand)); + return -EINVAL; + } + + val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT; + val |= eng->oob_free << PAGEFMT_FDM_SHIFT; + val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT; + writel(val, snfi->regs + NFI_PAGEFMT); + + return 0; +} + +static int mtk_snfi_ecc_init_ctx(struct nand_device *nand) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + + return ops->init_ctx(nand); +} + +static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + + ops->cleanup_ctx(nand); +} + +static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + struct mtk_snfi *snfi = mtk_nand_to_spi(nand); + int ret; + + ret = mtk_snfi_config(nand, snfi); + if (ret) + return ret; + + return ops->prepare_io_req(nand, req); +} + +static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand); + struct mtk_snfi *snfi = mtk_nand_to_spi(nand); + + if (req->mode != MTD_OPS_RAW) + eng->read_empty = readl(snfi->regs + NFI_STA) & STA_EMP_PAGE; + + return ops->finish_io_req(nand, req); +} + +static struct nand_ecc_engine_ops mtk_snfi_ecc_engine_pipelined_ops = { + .init_ctx = mtk_snfi_ecc_init_ctx, + .cleanup_ctx = mtk_snfi_ecc_cleanup_ctx, + .prepare_io_req = mtk_snfi_ecc_prepare_io_req, + .finish_io_req = mtk_snfi_ecc_finish_io_req, +}; + +static int mtk_snfi_ecc_probe(struct 
platform_device *pdev, + struct mtk_snfi *snfi) +{ + struct nand_ecc_engine *ecceng; + + if (!mtk_ecc_get_pipelined_ops()) + return -EOPNOTSUPP; + + ecceng = devm_kzalloc(&pdev->dev, sizeof(*ecceng), GFP_KERNEL); + if (!ecceng) + return -ENOMEM; + + ecceng->dev = &pdev->dev; + ecceng->ops = &mtk_snfi_ecc_engine_pipelined_ops; + + nand_ecc_register_on_host_hw_engine(ecceng); + + snfi->engine = ecceng; + + return 0; +} + +static int mtk_snfi_probe(struct platform_device *pdev) +{ + struct device_node *np = pdev->dev.of_node; + struct spi_controller *ctlr; + struct mtk_snfi *snfi; + struct resource *res; + int ret, irq; + u32 val = 0; + + ctlr = spi_alloc_master(&pdev->dev, sizeof(*snfi)); + if (!ctlr) + return -ENOMEM; + + snfi = spi_controller_get_devdata(ctlr); + snfi->dev = &pdev->dev; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + snfi->regs = devm_ioremap_resource(snfi->dev, res); + if (IS_ERR(snfi->regs)) { + ret = PTR_ERR(snfi->regs); + goto err_put_master; + } + + ret = of_property_read_u32(np, "sample-delay", &val); + if (!ret) + snfi->sample_delay = val; + + ret = of_property_read_u32(np, "read-latency", &val); + if (!ret) + snfi->read_latency = val; + + snfi->nfi_clk = devm_clk_get(snfi->dev, "nfi_clk"); + if (IS_ERR(snfi->nfi_clk)) { + dev_err(snfi->dev, "not found nfi clk\n"); + ret = PTR_ERR(snfi->nfi_clk); + goto err_put_master; + } + + snfi->snfi_clk = devm_clk_get(snfi->dev, "snfi_clk"); + if (IS_ERR(snfi->snfi_clk)) { + dev_err(snfi->dev, "not found snfi clk\n"); + ret = PTR_ERR(snfi->snfi_clk); + goto err_put_master; + } + + snfi->hclk = devm_clk_get(snfi->dev, "hclk"); + if (IS_ERR(snfi->hclk)) { + dev_err(snfi->dev, "not found hclk\n"); + ret = PTR_ERR(snfi->hclk); + goto err_put_master; + } + + ret = mtk_snfi_enable_clk(snfi->dev, snfi); + if (ret) + goto err_put_master; + + snfi->caps = of_device_get_match_data(snfi->dev); + + irq = platform_get_irq(pdev, 0); + if (irq < 0) { + dev_err(snfi->dev, "not found snfi irq resource\n"); + 
ret = -EINVAL; + goto clk_disable; + } + + ret = devm_request_irq(snfi->dev, irq, mtk_snfi_irq, + 0, "mtk-snfi", snfi); + if (ret) { + dev_err(snfi->dev, "failed to request snfi irq\n"); + goto clk_disable; + } + + ret = dma_set_mask(snfi->dev, DMA_BIT_MASK(32)); + if (ret) { + dev_err(snfi->dev, "failed to set dma mask\n"); + goto clk_disable; + } + + snfi->tx_buf = kzalloc(SNFI_GPRAM_MAX_LEN, GFP_KERNEL); + if (!snfi->tx_buf) { + ret = -ENOMEM; + goto clk_disable; + } + + ctlr->dev.of_node = np; + ctlr->mem_ops = &mtk_snfi_ops; + ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_QUAD; + ctlr->auto_runtime_pm = true; + + dev_set_drvdata(snfi->dev, ctlr); + + ret = mtk_snfi_init(snfi); + if (ret) { + dev_err(snfi->dev, "failed to init snfi\n"); + goto free_buf; + } + + ret = mtk_snfi_ecc_probe(pdev, snfi); + if (ret) { + dev_warn(snfi->dev, "ECC engine not available\n"); + goto free_buf; + } + + pm_runtime_enable(snfi->dev); + + ret = devm_spi_register_master(snfi->dev, ctlr); + if (ret) { + dev_err(snfi->dev, "failed to register spi master\n"); + goto disable_pm_runtime; + } + + return 0; + +disable_pm_runtime: + pm_runtime_disable(snfi->dev); + +free_buf: + kfree(snfi->tx_buf); + +clk_disable: + mtk_snfi_disable_clk(snfi); + +err_put_master: + spi_master_put(ctlr); + + return ret; +} + +static int mtk_snfi_remove(struct platform_device *pdev) +{ + struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev); + struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr); + struct nand_ecc_engine *eng = snfi->engine; + + pm_runtime_disable(snfi->dev); + nand_ecc_unregister_on_host_hw_engine(eng); + kfree(snfi->tx_buf); + spi_master_put(ctlr); + + return 0; +} + +#ifdef CONFIG_PM +static int mtk_snfi_runtime_suspend(struct device *dev) +{ + struct spi_controller *ctlr = dev_get_drvdata(dev); + struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr); + + mtk_snfi_disable_clk(snfi); + + return 0; +} + +static int mtk_snfi_runtime_resume(struct device *dev) +{ + struct 
spi_controller *ctlr = dev_get_drvdata(dev); + struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr); + int ret; + + ret = mtk_snfi_enable_clk(dev, snfi); + if (ret) + return ret; + + ret = mtk_snfi_init(snfi); + if (ret) + dev_err(dev, "failed to init snfi\n"); + + return ret; +} +#endif /* CONFIG_PM */ + +static const struct dev_pm_ops mtk_snfi_pm_ops = { + SET_RUNTIME_PM_OPS(mtk_snfi_runtime_suspend, + mtk_snfi_runtime_resume, NULL) + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, + pm_runtime_force_resume) +}; + +static struct platform_driver mtk_snfi_driver = { + .driver = { + .name = "mtk-snfi", + .of_match_table = mtk_snfi_id_table, + .pm = &mtk_snfi_pm_ops, + }, + .probe = mtk_snfi_probe, + .remove = mtk_snfi_remove, +}; + +module_platform_driver(mtk_snfi_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Xiangsheng Hou "); +MODULE_DESCRIPTION("MediaTek SPI NAND Flash interface driver"); From patchwork Tue Nov 30 08:32:01 2021 X-Patchwork-Submitter: Xiangsheng Hou X-Patchwork-Id: 12646521
From: Xiangsheng Hou Subject: [RFC, v4, 4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines Date: Tue, 30 Nov 2021 16:32:01 +0800 Message-ID: <20211130083202.14228-5-xiangsheng.hou@mediatek.com> X-Mailer: git-send-email 2.25.1 In-Reply-To:
<20211130083202.14228-1-xiangsheng.hou@mediatek.com>
References: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>

Move the set/get of OOB data bytes into each ECC engine when operating in AUTO mode. For reads and writes in AUTO mode, the OOB bytes include not only the free data bytes but also the ECC data bytes, and for some ECC engines the data bytes in the OOB area may be mixed with the main data. For example, the MediaTek ECC engine swaps one extra main data byte with the BBM. Therefore, perform these operations in each ECC engine so that every engine can handle its own layout.
Signed-off-by: Xiangsheng Hou --- drivers/mtd/nand/ecc-sw-bch.c | 71 ++++++++++++++++--- drivers/mtd/nand/ecc-sw-hamming.c | 71 ++++++++++++++++--- drivers/mtd/nand/spi/core.c | 93 +++++++++++++++++-------- include/linux/mtd/nand-ecc-sw-bch.h | 4 ++ include/linux/mtd/nand-ecc-sw-hamming.h | 4 ++ include/linux/mtd/spinand.h | 4 ++ 6 files changed, 198 insertions(+), 49 deletions(-) diff --git a/drivers/mtd/nand/ecc-sw-bch.c b/drivers/mtd/nand/ecc-sw-bch.c index 405552d014a8..bda31ef8f0b8 100644 --- a/drivers/mtd/nand/ecc-sw-bch.c +++ b/drivers/mtd/nand/ecc-sw-bch.c @@ -238,7 +238,9 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand) engine_conf->code_size = code_size; engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL); engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL); - if (!engine_conf->calc_buf || !engine_conf->code_buf) { + engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL); + if (!engine_conf->calc_buf || !engine_conf->code_buf || + !engine_conf->oob_buf) { ret = -ENOMEM; goto free_bufs; } @@ -267,6 +269,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand) nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx); kfree(engine_conf->calc_buf); kfree(engine_conf->code_buf); + kfree(engine_conf->oob_buf); free_engine_conf: kfree(engine_conf); @@ -283,6 +286,7 @@ void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand) nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx); kfree(engine_conf->calc_buf); kfree(engine_conf->code_buf); + kfree(engine_conf->oob_buf); kfree(engine_conf); } } @@ -299,22 +303,42 @@ static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand, int total = nand->ecc.ctx.total; u8 *ecccalc = engine_conf->calc_buf; const u8 *data; - int i; + int i, ret = 0; /* Nothing to do for a raw operation */ if (req->mode == MTD_OPS_RAW) return 0; - /* This engine does not provide BBM/free OOB bytes protection */ - if (!req->datalen) - return 0; - nand_ecc_tweak_req(&engine_conf->req_ctx, req); /* No more preparation 
for page read */ if (req->type == NAND_PAGE_READ) return 0; + if (req->ooblen) { + memset(engine_conf->oob_buf, 0xff, + nanddev_per_page_oobsize(nand)); + + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out, + engine_conf->oob_buf, + req->ooboffs, + mtd->oobavail); + if (ret) + return ret; + } else { + memcpy(engine_conf->oob_buf + req->ooboffs, + req->oobbuf.out, req->ooblen); + } + + engine_conf->src_oob_buf = (void *)req->oobbuf.out; + req->oobbuf.out = engine_conf->oob_buf; + } + + /* This engine does not provide BBM/free OOB bytes protection */ + if (!req->datalen) + return 0; + /* Preparation for page write: derive the ECC bytes and place them */ for (i = 0, data = req->databuf.out; eccsteps; @@ -344,12 +368,36 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand, if (req->mode == MTD_OPS_RAW) return 0; - /* This engine does not provide BBM/free OOB bytes protection */ - if (!req->datalen) - return 0; - /* No more preparation for page write */ if (req->type == NAND_PAGE_WRITE) { + if (req->ooblen) + req->oobbuf.out = engine_conf->src_oob_buf; + + nand_ecc_restore_req(&engine_conf->req_ctx, req); + return 0; + } + + if (req->ooblen) { + memset(engine_conf->oob_buf, 0xff, + nanddev_per_page_oobsize(nand)); + + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_get_databytes(mtd, + engine_conf->oob_buf, + req->oobbuf.in, + req->ooboffs, + mtd->oobavail); + if (ret) + return ret; + } else { + memcpy(engine_conf->oob_buf, + req->oobbuf.in + req->ooboffs, req->ooblen); + } + } + + /* This engine does not provide BBM/free OOB bytes protection */ + if (!req->datalen) { + req->oobbuf.in = engine_conf->oob_buf; nand_ecc_restore_req(&engine_conf->req_ctx, req); return 0; } @@ -379,6 +427,9 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand, } } + if (req->ooblen) + req->oobbuf.in = engine_conf->oob_buf; + nand_ecc_restore_req(&engine_conf->req_ctx, req); return max_bitflips; diff --git 
a/drivers/mtd/nand/ecc-sw-hamming.c b/drivers/mtd/nand/ecc-sw-hamming.c index 254db2e7f8bb..c90ff31e9656 100644 --- a/drivers/mtd/nand/ecc-sw-hamming.c +++ b/drivers/mtd/nand/ecc-sw-hamming.c @@ -507,7 +507,9 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand) engine_conf->code_size = 3; engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL); engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL); - if (!engine_conf->calc_buf || !engine_conf->code_buf) { + engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL); + if (!engine_conf->calc_buf || !engine_conf->code_buf || + !engine_conf->oob_buf) { ret = -ENOMEM; goto free_bufs; } @@ -522,6 +524,7 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand) nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx); kfree(engine_conf->calc_buf); kfree(engine_conf->code_buf); + kfree(engine_conf->oob_buf); free_engine_conf: kfree(engine_conf); @@ -537,6 +540,7 @@ void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand) nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx); kfree(engine_conf->calc_buf); kfree(engine_conf->code_buf); + kfree(engine_conf->oob_buf); kfree(engine_conf); } } @@ -553,22 +557,42 @@ static int nand_ecc_sw_hamming_prepare_io_req(struct nand_device *nand, int total = nand->ecc.ctx.total; u8 *ecccalc = engine_conf->calc_buf; const u8 *data; - int i; + int i, ret; /* Nothing to do for a raw operation */ if (req->mode == MTD_OPS_RAW) return 0; - /* This engine does not provide BBM/free OOB bytes protection */ - if (!req->datalen) - return 0; - nand_ecc_tweak_req(&engine_conf->req_ctx, req); /* No more preparation for page read */ if (req->type == NAND_PAGE_READ) return 0; + if (req->ooblen) { + memset(engine_conf->oob_buf, 0xff, + nanddev_per_page_oobsize(nand)); + + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out, + engine_conf->oob_buf, + req->ooboffs, + mtd->oobavail); + if (ret) + return ret; + } else { + memcpy(engine_conf->oob_buf 
+ req->ooboffs, + req->oobbuf.out, req->ooblen); + } + + engine_conf->src_oob_buf = (void *)req->oobbuf.out; + req->oobbuf.out = engine_conf->oob_buf; + } + + /* This engine does not provide BBM/free OOB bytes protection */ + if (!req->datalen) + return 0; + /* Preparation for page write: derive the ECC bytes and place them */ for (i = 0, data = req->databuf.out; eccsteps; @@ -598,12 +622,36 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand, if (req->mode == MTD_OPS_RAW) return 0; - /* This engine does not provide BBM/free OOB bytes protection */ - if (!req->datalen) - return 0; - /* No more preparation for page write */ if (req->type == NAND_PAGE_WRITE) { + if (req->ooblen) + req->oobbuf.out = engine_conf->src_oob_buf; + + nand_ecc_restore_req(&engine_conf->req_ctx, req); + return 0; + } + + if (req->ooblen) { + memset(engine_conf->oob_buf, 0xff, + nanddev_per_page_oobsize(nand)); + + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_get_databytes(mtd, + engine_conf->oob_buf, + req->oobbuf.in, + req->ooboffs, + mtd->oobavail); + if (ret) + return ret; + } else { + memcpy(engine_conf->oob_buf, + req->oobbuf.in + req->ooboffs, req->ooblen); + } + } + + /* This engine does not provide BBM/free OOB bytes protection */ + if (!req->datalen) { + req->oobbuf.in = engine_conf->oob_buf; nand_ecc_restore_req(&engine_conf->req_ctx, req); return 0; } @@ -633,6 +681,9 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand, } } + if (req->ooblen) + req->oobbuf.in = engine_conf->oob_buf; + nand_ecc_restore_req(&engine_conf->req_ctx, req); return max_bitflips; diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c index c58f558302c4..9033036086f2 100644 --- a/drivers/mtd/nand/spi/core.c +++ b/drivers/mtd/nand/spi/core.c @@ -267,6 +267,12 @@ static int spinand_ondie_ecc_init_ctx(struct nand_device *nand) if (!engine_conf) return -ENOMEM; + engine_conf->oob_buf = kzalloc(nand->memorg.oobsize, GFP_KERNEL); + if 
(!engine_conf->oob_buf) { + kfree(engine_conf); + return -ENOMEM; + } + nand->ecc.ctx.priv = engine_conf; if (spinand->eccinfo.ooblayout) @@ -279,16 +285,40 @@ static int spinand_ondie_ecc_init_ctx(struct nand_device *nand) static void spinand_ondie_ecc_cleanup_ctx(struct nand_device *nand) { - kfree(nand->ecc.ctx.priv); + struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv; + + kfree(engine_conf->oob_buf); + kfree(engine_conf); } static int spinand_ondie_ecc_prepare_io_req(struct nand_device *nand, struct nand_page_io_req *req) { + struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv; struct spinand_device *spinand = nand_to_spinand(nand); + struct mtd_info *mtd = spinand_to_mtd(spinand); bool enable = (req->mode != MTD_OPS_RAW); + int ret; - memset(spinand->oobbuf, 0xff, nanddev_per_page_oobsize(nand)); + if (req->ooblen && req->type == NAND_PAGE_WRITE) { + memset(engine_conf->oob_buf, 0xff, + nanddev_per_page_oobsize(nand)); + + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out, + engine_conf->oob_buf, + req->ooboffs, + mtd->oobavail); + if (ret) + return ret; + } else { + memcpy(engine_conf->oob_buf + req->ooboffs, + req->oobbuf.out, req->ooblen); + } + + engine_conf->src_oob_buf = (void *)req->oobbuf.out; + req->oobbuf.out = engine_conf->oob_buf; + } /* Only enable or disable the engine */ return spinand_ecc_enable(spinand, enable); @@ -306,8 +336,32 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand, return 0; /* Nothing to do when finishing a page write */ - if (req->type == NAND_PAGE_WRITE) + if (req->type == NAND_PAGE_WRITE) { + if (req->ooblen) + req->oobbuf.out = engine_conf->src_oob_buf; + return 0; + } + + if (req->ooblen) { + memset(engine_conf->oob_buf, 0xff, + nanddev_per_page_oobsize(nand)); + + if (req->mode == MTD_OPS_AUTO_OOB) { + ret = mtd_ooblayout_get_databytes(mtd, + engine_conf->oob_buf, + req->oobbuf.in, + req->ooboffs, + mtd->oobavail); + if (ret) + 
return ret; + } else { + memcpy(engine_conf->oob_buf, + req->oobbuf.in + req->ooboffs, req->ooblen); + } + + req->oobbuf.in = engine_conf->oob_buf; + } /* Finish a page read: check the status, report errors/bitflips */ ret = spinand_check_ecc_status(spinand, engine_conf->status); @@ -360,7 +414,6 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand, const struct nand_page_io_req *req) { struct nand_device *nand = spinand_to_nand(spinand); - struct mtd_info *mtd = spinand_to_mtd(spinand); struct spi_mem_dirmap_desc *rdesc; unsigned int nbytes = 0; void *buf = NULL; @@ -403,16 +456,9 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand, memcpy(req->databuf.in, spinand->databuf + req->dataoffs, req->datalen); - if (req->ooblen) { - if (req->mode == MTD_OPS_AUTO_OOB) - mtd_ooblayout_get_databytes(mtd, req->oobbuf.in, - spinand->oobbuf, - req->ooboffs, - req->ooblen); - else - memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, - req->ooblen); - } + if (req->ooblen) + memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, + req->ooblen); return 0; } @@ -421,7 +467,6 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand, const struct nand_page_io_req *req) { struct nand_device *nand = spinand_to_nand(spinand); - struct mtd_info *mtd = spinand_to_mtd(spinand); struct spi_mem_dirmap_desc *wdesc; unsigned int nbytes, column = 0; void *buf = spinand->databuf; @@ -433,27 +478,17 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand, * must fill the page cache entirely even if we only want to program * the data portion of the page, otherwise we might corrupt the BBM or * user data previously programmed in OOB area. - * - * Only reset the data buffer manually, the OOB buffer is prepared by - * ECC engines ->prepare_io_req() callback. 
*/ nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand); - memset(spinand->databuf, 0xff, nanddev_page_size(nand)); + memset(spinand->databuf, 0xff, nbytes); if (req->datalen) memcpy(spinand->databuf + req->dataoffs, req->databuf.out, req->datalen); - if (req->ooblen) { - if (req->mode == MTD_OPS_AUTO_OOB) - mtd_ooblayout_set_databytes(mtd, req->oobbuf.out, - spinand->oobbuf, - req->ooboffs, - req->ooblen); - else - memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out, - req->ooblen); - } + if (req->ooblen) + memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out, + req->ooblen); if (req->mode == MTD_OPS_RAW) wdesc = spinand->dirmaps[req->pos.plane].wdesc; diff --git a/include/linux/mtd/nand-ecc-sw-bch.h b/include/linux/mtd/nand-ecc-sw-bch.h index 9da9969505a8..c4730badb77b 100644 --- a/include/linux/mtd/nand-ecc-sw-bch.h +++ b/include/linux/mtd/nand-ecc-sw-bch.h @@ -18,6 +18,8 @@ * @code_size: Number of bytes needed to store a code (one code per step) * @calc_buf: Buffer to use when calculating ECC bytes * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip + * @oob_buf: Buffer used when handling data in the OOB area
+ * @src_oob_buf: Saved pointer to the source OOB buffer during a write * @bch: BCH control structure * @errloc: error location array * @eccmask: XOR ecc mask, allows erased pages to be decoded as valid @@ -27,6 +29,8 @@ struct nand_ecc_sw_bch_conf { unsigned int code_size; u8 *calc_buf; u8 *code_buf; + u8 *oob_buf; + void *src_oob_buf; struct bch_control *bch; unsigned int *errloc; unsigned char *eccmask; diff --git a/include/linux/mtd/nand-ecc-sw-hamming.h b/include/linux/mtd/nand-ecc-sw-hamming.h index c6c71894c575..88788d53b911 100644 --- a/include/linux/mtd/nand-ecc-sw-hamming.h +++ b/include/linux/mtd/nand-ecc-sw-hamming.h @@ -19,6 +19,8 @@ * @code_size: Number of bytes needed to store a code (one code per step) * @calc_buf: Buffer to use when calculating ECC bytes * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip + * @oob_buf: Buffer used when handling data in the OOB area + * @src_oob_buf: Saved pointer to the source OOB buffer during a write * @sm_order: Smart Media special ordering */ struct nand_ecc_sw_hamming_conf { @@ -26,6 +28,8 @@ struct nand_ecc_sw_hamming_conf { unsigned int code_size; u8 *calc_buf; u8 *code_buf; + u8 *oob_buf; + void *src_oob_buf; unsigned int sm_order; }; diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h index 3aa28240a77f..23b86941fbf6 100644 --- a/include/linux/mtd/spinand.h +++ b/include/linux/mtd/spinand.h @@ -312,9 +312,13 @@ struct spinand_ecc_info { * struct spinand_ondie_ecc_conf - private SPI-NAND on-die ECC engine structure * @status: status of the last wait operation that will be used in case * ->get_status() is not populated by the spinand device. + * @oob_buf: Buffer used when handling data in the OOB area
+ * @src_oob_buf: Saved pointer to the source OOB buffer during a write */ struct spinand_ondie_ecc_conf { u8 status; + u8 *oob_buf; + void *src_oob_buf; }; /**

From patchwork Tue Nov 30 08:32:02 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12646523
From: Xiangsheng Hou
Subject: [RFC,v4,5/5] arm64: dts: mtk: Add snfi node
Date: Tue, 30 Nov 2021 16:32:02 +0800
Message-ID: <20211130083202.14228-6-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>
References: <20211130083202.14228-1-xiangsheng.hou@mediatek.com>

Add an snfi node for the SPI NAND controller. For now, MT7622 serves as the example.
Signed-off-by: Xiangsheng Hou --- arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts | 16 ++++++++++++++++ arch/arm64/boot/dts/mediatek/mt7622.dtsi | 13 +++++++++++++ 2 files changed, 29 insertions(+) diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts index 596c073d8b05..1a5bf553f3a3 100644 --- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts +++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts @@ -530,6 +530,22 @@ &spi0 { status = "okay"; }; +&snfi { + pinctrl-names = "default"; + pinctrl-0 = <&snfi_pins>; + nand-ecc-engine = <&bch>; + status = "disabled"; + + spi_nand@0 { + compatible = "spi-nand"; + reg = <0>; + spi-max-frequency = <104000000>; + spi-tx-bus-width = <4>; + spi-rx-bus-width = <4>; + nand-ecc-engine = <&snfi>; + }; +}; + &spi1 { pinctrl-names = "default"; pinctrl-0 = <&spic1_pins>; diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi index 6f8cb3ad1e84..229ec2a3a65e 100644 --- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi @@ -497,6 +497,19 @@ spi0: spi@1100a000 { status = "disabled"; }; + snfi: spi@1100d000 { + compatible = "mediatek,mt7622-snfi"; + reg = <0 0x1100d000 0 0x1000>; + interrupts = ; + clocks = <&infracfg_ao CLK_INFRA_AO_NFI_BCLK_CK_SET>, + <&infracfg_ao CLK_INFRA_AO_NFI_INFRA_BCLK_CK_SET>, + <&infracfg_ao CLK_INFRA_AO_NFI_HCLK_CK_SET>; + clock-names = "nfi_clk", "snfi_clk", "hclk"; + #address-cells = <1>; + #size-cells = <0>; + status = "disabled"; + }; + thermal: thermal@1100b000 { #thermal-sensor-cells = <1>; compatible = "mediatek,mt7622-thermal";