From patchwork Wed Aug 23 19:18:52 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13363062
From: "Matthew Wilcox (Oracle)"
To: Russell King, Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Marek Szyprowski, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org
Subject: [PATCH] Fix folio conversion in __dma_page_dev_to_cpu()
Date: Wed, 23 Aug 2023 20:18:52 +0100
Message-Id: <20230823191852.1556561-1-willy@infradead.org>

Russell and Marek pointed out some assumptions I was making about how
sg lists work, e.g. that they are limited to 2GB and that the initial
offset lies within the first page (or at least within the first folio
that a page belongs to). While I think those assumptions are true, it's
not too hard to write a version which does not rely on them and which
also calculates folio_size() only once per loop iteration.
---
 arch/arm/mm/dma-mapping.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0474840224d9..5409225b4abc 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -695,7 +695,6 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 	size_t size, enum dma_data_direction dir)
 {
-	struct folio *folio = page_folio(page);
 	phys_addr_t paddr = page_to_phys(page) + off;
 
 	/* FIXME: non-speculating: not required */
@@ -710,18 +709,19 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 	 * Mark the D-cache clean for these pages to avoid extra flushing.
 	 */
 	if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) {
-		ssize_t left = size;
+		struct folio *folio = pfn_folio(paddr / PAGE_SIZE);
 		size_t offset = offset_in_folio(folio, paddr);
 
-		if (offset) {
-			left -= folio_size(folio) - offset;
-			folio = folio_next(folio);
-		}
+		for (;;) {
+			size_t sz = folio_size(folio) - offset;
 
-		while (left >= (ssize_t)folio_size(folio)) {
-			left -= folio_size(folio);
-			set_bit(PG_dcache_clean, &folio->flags);
-			if (!left)
+			if (size < sz)
+				break;
+			if (!offset)
+				set_bit(PG_dcache_clean, &folio->flags);
+			offset = 0;
+			size -= sz;
+			if (!size)
 				break;
 			folio = folio_next(folio);
 		}