From patchwork Mon Mar 27 12:13:02 2023
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 13189175
From: Arnd Bergmann
To: linux-kernel@vger.kernel.org
Cc: Arnd Bergmann, Vineet Gupta, Russell King, Neil Armstrong,
    Linus Walleij, Catalin Marinas, Will Deacon, Guo Ren, Brian Cain,
    Geert Uytterhoeven, Michal Simek, Thomas Bogendoerfer, Dinh Nguyen,
    Stafford Horne, Helge Deller, Michael Ellerman, Christophe Leroy,
    Paul Walmsley, Palmer Dabbelt, Rich Felker,
    John Paul Adrian Glaubitz,
Miller" , Max Filippov , Christoph Hellwig , Robin Murphy , Lad Prabhakar , Conor Dooley , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-oxnas@groups.io, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org Subject: [PATCH 06/21] powerpc: dma-mapping: minimize for_cpu flushing Date: Mon, 27 Mar 2023 14:13:02 +0200 Message-Id: <20230327121317.4081816-7-arnd@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230327121317.4081816-1-arnd@kernel.org> References: <20230327121317.4081816-1-arnd@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230327_051451_853572_610B75A1 X-CRM114-Status: GOOD ( 17.85 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Arnd Bergmann The powerpc dma_sync_*_for_cpu() variants do more flushes than on other architectures. Reduce it to what everyone else does: - No flush is needed after data has been sent to a device - When data has been received from a device, the cache only needs to be invalidated to clear out cache lines that were speculatively prefetched. In particular, the second flushing of partial cache lines of bidirectional buffers is actively harmful -- if a single cache line is written by both the CPU and the device, flushing it again does not maintain coherency but instead overwrite the data that was just received from the device. Signed-off-by: Arnd Bergmann --- arch/powerpc/mm/dma-noncoherent.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c index f10869d27de5..e108cacf877f 100644 --- a/arch/powerpc/mm/dma-noncoherent.c +++ b/arch/powerpc/mm/dma-noncoherent.c @@ -132,21 +132,11 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, switch (direction) { case DMA_NONE: BUG(); - case DMA_FROM_DEVICE: - /* - * invalidate only when cache-line aligned otherwise there is - * the potential for discarding uncommitted data from the cache - */ - if ((start | end) & (L1_CACHE_BYTES - 1)) - __dma_phys_op(start, end, DMA_CACHE_FLUSH); - else - __dma_phys_op(start, end, DMA_CACHE_INVAL); - break; - case DMA_TO_DEVICE: /* writeback only */ - __dma_phys_op(start, end, DMA_CACHE_CLEAN); + case DMA_TO_DEVICE: break; - case DMA_BIDIRECTIONAL: /* writeback and invalidate */ - __dma_phys_op(start, end, DMA_CACHE_FLUSH); + case DMA_FROM_DEVICE: + case DMA_BIDIRECTIONAL: + __dma_phys_op(start, end, DMA_CACHE_INVAL); break; } }