From patchwork Mon Apr  9 16:59:08 2018
From: Jacopo Mondi
To: laurent.pinchart@ideasonboard.com, robin.murphy@arm.com
Cc: Jacopo Mondi, ysato@users.sourceforge.jp, dalias@libc.org,
    iommu@lists.linux-foundation.org, linux-sh@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] base: dma-mapping: Postpone cpu addr translation on mmap()
Date: Mon, 9 Apr 2018 18:59:08 +0200
Message-Id: <1523293148-18726-1-git-send-email-jacopo+renesas@jmondi.org>

Postpone the virt_to_page() translation of memory locations that are not
guaranteed to be backed by a struct page.

This patch fixes a specific issue on the SH architecture configured with
the SPARSEMEM memory model, when mapping buffers allocated with the
memblock APIs at system initialization time, which are therefore not
backed by the page infrastructure. It applies to the general case as
well, though, since the early translation is incorrect anyway and should
be postponed until after the attempt to map memory from the device
coherent memory pool.

Suggested-by: Laurent Pinchart
Signed-off-by: Jacopo Mondi
---
Compared to the RFC version I have tried to generalize the commit
message; please suggest any improvements to it.

I'm still a bit puzzled about what happens if dma_mmap_from_dev_coherent()
fails. Does a dma_mmap_from_dev_coherent() failure guarantee in any way
that the subsequent virt_to_page() isn't as problematic as it is today?
Or is it the "if (off < count && user_count <= (count - off))" check that
makes the translation safe?
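[Editor's note: to make that last question concrete, here is a minimal
userspace sketch of what the range check enforces. mmap_request_fits()
is a hypothetical helper named only for this example; it just mirrors
the expression used in dma_common_mmap().]

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical helper (not in the kernel) mirroring the check in
 * dma_common_mmap(): the requested window [off, off + user_count)
 * must lie entirely inside the allocated buffer [0, count),
 * everything counted in pages.
 */
static bool mmap_request_fits(unsigned long count,      /* pages backing the DMA buffer */
			      unsigned long user_count, /* pages userspace asked to map */
			      unsigned long off)        /* vma->vm_pgoff */
{
	return off < count && user_count <= (count - off);
}

int main(void)
{
	printf("%d\n", mmap_request_fits(4, 2, 2)); /* 1: 2 pages at offset 2 fit in 4 */
	printf("%d\n", mmap_request_fits(4, 3, 2)); /* 0: would run one page past the end */
	return 0;
}

Note that the check only constrains the offset and size of the mapping;
it says nothing about whether cpu_addr itself is backed by a struct
page, which is what the question above is about.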
Thanks
   j
---
 drivers/base/dma-mapping.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index 3b11835..8b4ec34 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -226,8 +226,8 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
 	unsigned long off = vma->vm_pgoff;
+	unsigned long pfn;
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
@@ -235,6 +235,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return ret;
 
 	if (off < count && user_count <= (count - off)) {
+		pfn = page_to_pfn(virt_to_page(cpu_addr));
 		ret = remap_pfn_range(vma, vma->vm_start,
 				      pfn + off,
 				      user_count << PAGE_SHIFT,
--
2.7.4
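[Editor's note: for reference, below is roughly how dma_common_mmap()
would read with this patch applied. It is reconstructed from the two
hunks above; the lines not visible in the diff context (the second line
of the signature, ret's initialization, the dma_mmap_from_dev_coherent()
call and the closing of the function) are assumed from the
drivers/base/dma-mapping.c of that period and are shown only for
orientation.]

int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	int ret = -ENXIO;
#ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
	unsigned long user_count = vma_pages(vma);
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long off = vma->vm_pgoff;
	unsigned long pfn;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	/*
	 * Buffers from the per-device coherent pool are handled here and
	 * never reach virt_to_page() below.
	 */
	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
		return ret;

	if (off < count && user_count <= (count - off)) {
		/* The translation now happens only on this fallback path. */
		pfn = page_to_pfn(virt_to_page(cpu_addr));
		ret = remap_pfn_range(vma, vma->vm_start,
				      pfn + off,
				      user_count << PAGE_SHIFT,
				      vma->vm_page_prot);
	}
#endif	/* !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */

	return ret;
}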