From patchwork Mon Jun 13 03:42:02 2022
X-Patchwork-Submitter: Andy Chiu
X-Patchwork-Id: 12878983
From: Andy Chiu
To: radhey.shyam.pandey@xilinx.com, davem@davemloft.net, edumazet@google.com,
	kuba@kernel.org, pabeni@redhat.com, michal.simek@xilinx.com,
	netdev@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Andy Chiu, Max Hsu
Subject: [PATCH net-next 2/2] net: axienet: Use iowrite64 to write all 64b descriptor pointers
Date: Mon, 13 Jun 2022 11:42:02 +0800
Message-Id: <20220613034202.3777248-3-andy.chiu@sifive.com>
In-Reply-To: <20220613034202.3777248-1-andy.chiu@sifive.com>
References: <20220613034202.3777248-1-andy.chiu@sifive.com>

According to commit f735c40ed93c ("net: axienet: Autodetect 64-bit DMA
capability") and the AXI DMA spec (PG021), on a 64-bit capable dma only a
write to the MSB half of the tail descriptor pointer causes the DMA engine
to start fetching descriptors. However, we found this to be true only when
the dma is idle. In other words, while the dma is running it may consume a
tailp that has only its LSB half updated.

The non-atomicity of the two writes becomes a problem if enough delay is
introduced between them. For example, if an interrupt arrives right after
the LSB write and the cpu spends long enough in the handler for the dma to
complete its outstanding descriptors and return to the idle state, then the
second write, to the MSB, makes the dma start fetching descriptors again.
Since the descriptor following the one pointed to by the current tail
pointer has not been filled by the kernel yet, fetching a null descriptor
here raises a dma internal error and halts the dma engine. The sequence is
shown in the diagram below, followed by an illustrative before/after
sketch.

We suggest that the dma engine should start processing a 64-bit MMIO write
to the descriptor pointer only when ONE designated 32-bit half of it is
written, in all states. Alternatively, the use of 64-bit addressable dma
should be restricted on 32-bit platforms, since those machines have no
instruction that guarantees the LSB and MSB halves of the tail pointer
reach the dma atomically.

initial condition: curp = x-3; tailp = x-2; LSB = x; MSB = 0;

 cpu:                       |dma:
 iowrite32(LSB, tailp)      | completes #(x-3) desc, curp = x-3
 ...                        | tailp updated => irq
                            | completes #(x-2) desc, curp = x-2
 ...                        | completes #(x-1) desc, curp = x-1
 ...                        | ...
 ...                        | completes #x desc, curp = tailp = x
 <= irqreturn               | reaches tailp == curp = x, idle
 iowrite32(MSB, tailp + 4)  | ...
                            | tailp updated, starts fetching...
                            | fetches #(x + 1) desc, sees cntrl = 0
                            | post Tx error, halt
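
Illustration only (not part of the patch): a minimal sketch of the tail
pointer update before and after this change. It assumes the driver's
axienet_dma_out32()/axienet_dma_out64() helpers; the two function names
below are hypothetical and exist only for this example.

/* Sketch only, assuming the driver's helpers; not part of the patch. */

/* Before: two independent 32-bit MMIO writes publish the 64-bit tail
 * pointer.  An interrupt between them can let the engine drain the ring
 * and go idle, so the later MSB write alone restarts fetching at a
 * descriptor the kernel has not filled yet.
 */
static void tail_ptr_update_split(struct axienet_local *lp, off_t reg,
				  dma_addr_t addr)
{
	axienet_dma_out32(lp, reg, lower_32_bits(addr));
	/* an irq landing here opens the window shown in the diagram */
	axienet_dma_out32(lp, reg + 4, upper_32_bits(addr));
}

/* After: on CONFIG_64BIT kernels a single iowrite64() updates both
 * halves at once, so the engine never sees a half-written tail pointer.
 */
static void tail_ptr_update_atomic(struct axienet_local *lp, off_t reg,
				   dma_addr_t addr)
{
	axienet_dma_out64(lp, reg, addr);
}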

Signed-off-by: Andy Chiu
Reported-by: Max Hsu
Reviewed-by: Greentime Hu
Reported-by: kernel test robot
---
 drivers/net/ethernet/xilinx/xilinx_axienet.h | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
index 6c95676ba172..97ddc0273b8a 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
@@ -564,13 +564,28 @@ static inline void axienet_dma_out32(struct axienet_local *lp,
 }
 
 #ifdef CONFIG_64BIT
+/**
+ * axienet_dma_out64 - Memory mapped Axi DMA register write.
+ * @lp:		Pointer to axienet local structure
+ * @reg:	Address offset from the base address of the Axi DMA core
+ * @value:	Value to be written into the Axi DMA register
+ *
+ * This function writes the desired value into the corresponding Axi DMA
+ * register.
+ */
+static inline void axienet_dma_out64(struct axienet_local *lp,
+				     off_t reg, u64 value)
+{
+	iowrite64(value, lp->dma_regs + reg);
+}
+
 static void axienet_dma_out_addr(struct axienet_local *lp, off_t reg,
 				 dma_addr_t addr)
 {
-	axienet_dma_out32(lp, reg, lower_32_bits(addr));
-
 	if (lp->features & XAE_FEATURE_DMA_64BIT)
-		axienet_dma_out32(lp, reg + 4, upper_32_bits(addr));
+		axienet_dma_out64(lp, reg, addr);
+	else
+		axienet_dma_out32(lp, reg, lower_32_bits(addr));
 }
 
 #else /* CONFIG_64BIT */
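
For reference, a hedged usage sketch (not part of the patch) of how the
changed helper is exercised when the driver kicks the Tx engine.
axienet_dma_out_addr() and XAXIDMA_TX_TDESC_OFFSET come from the driver;
the wrapper name below is hypothetical.

/* Illustration only: publishing the Tx tail descriptor pointer.  With
 * this patch, on CONFIG_64BIT kernels the address reaches the hardware
 * as one 64-bit write instead of two 32-bit halves, so the engine can
 * no longer observe a half-updated tail pointer while it is running.
 */
static void axienet_kick_tx(struct axienet_local *lp, dma_addr_t tail_phys)
{
	axienet_dma_out_addr(lp, XAXIDMA_TX_TDESC_OFFSET, tail_phys);
}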