From patchwork Thu Nov 21 07:13:38 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11255707
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jonathan Corbet, Jérôme Glisse, Magnus Karlsson,
    Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko, Mike Kravetz,
    Paul Mackerras, Shuah Khan, Vlastimil Babka, LKML,
    John Hubbard, Hans Verkuil
Subject: [PATCH v7 08/24] media/v4l2-core: set pages dirty upon releasing DMA buffers
Date: Wed, 20 Nov 2019 23:13:38 -0800
Message-ID: <20191121071354.456618-9-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191121071354.456618-1-jhubbard@nvidia.com>
References: <20191121071354.456618-1-jhubbard@nvidia.com>
X-Mailing-List: linux-media@vger.kernel.org

After DMA is complete, and the device and CPU caches are synchronized,
it's still required to mark the CPU pages as dirty if the data came
from the device. However, this driver was just issuing a bare
put_page() call, without any set_page_dirty*() call.

Fix the problem by calling set_page_dirty_lock() if the CPU pages were
potentially receiving data from the device.

Acked-by: Hans Verkuil
Cc: Mauro Carvalho Chehab
Signed-off-by: John Hubbard
Reviewed-by: Christoph Hellwig
---
 drivers/media/v4l2-core/videobuf-dma-sg.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 66a6c6c236a7..28262190c3ab 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -349,8 +349,11 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
 	BUG_ON(dma->sglen);
 
 	if (dma->pages) {
-		for (i = 0; i < dma->nr_pages; i++)
+		for (i = 0; i < dma->nr_pages; i++) {
+			if (dma->direction == DMA_FROM_DEVICE)
+				set_page_dirty_lock(dma->pages[i]);
 			put_page(dma->pages[i]);
+		}
 		kfree(dma->pages);
 		dma->pages = NULL;
 	}