Message ID | 20191030224930.3990755-5-jhubbard@nvidia.com
---|---
State | New, archived
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>, Alex Williamson <alex.williamson@redhat.com>, Benjamin Herrenschmidt <benh@kernel.crashing.org>, Björn Töpel <bjorn.topel@intel.com>, Christoph Hellwig <hch@infradead.org>, Dan Williams <dan.j.williams@intel.com>, Daniel Vetter <daniel@ffwll.ch>, Dave Chinner <david@fromorbit.com>, David Airlie <airlied@linux.ie>, David S. Miller <davem@davemloft.net>, Ira Weiny <ira.weiny@intel.com>, Jan Kara <jack@suse.cz>, Jason Gunthorpe <jgg@ziepe.ca>, Jens Axboe <axboe@kernel.dk>, Jonathan Corbet <corbet@lwn.net>, Jérôme Glisse <jglisse@redhat.com>, Magnus Karlsson <magnus.karlsson@intel.com>, Mauro Carvalho Chehab <mchehab@kernel.org>, Michael Ellerman <mpe@ellerman.id.au>, Michal Hocko <mhocko@suse.com>, Mike Kravetz <mike.kravetz@oracle.com>, Paul Mackerras <paulus@samba.org>, Shuah Khan <shuah@kernel.org>, Vlastimil Babka <vbabka@suse.cz>, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org, kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org, linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>, John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 04/19] media/v4l2-core: set pages dirty upon releasing DMA buffers
Date: Wed, 30 Oct 2019 15:49:15 -0700
Message-ID: <20191030224930.3990755-5-jhubbard@nvidia.com>
In-Reply-To: <20191030224930.3990755-1-jhubbard@nvidia.com>
References: <20191030224930.3990755-1-jhubbard@nvidia.com>
Series | mm/gup: track dma-pinned pages: FOLL_PIN, FOLL_LONGTERM
After DMA is complete, and the device and CPU caches are synchronized, it's still required to mark the CPU pages as dirty if the data was coming from the device. However, this driver was just issuing a bare put_page() call, without any set_page_dirty*() call.

Fix the problem by calling set_page_dirty_lock() if the CPU pages were potentially receiving data from the device.

Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/media/v4l2-core/videobuf-dma-sg.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 66a6c6c236a7..28262190c3ab 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -349,8 +349,11 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
 	BUG_ON(dma->sglen);
 
 	if (dma->pages) {
-		for (i = 0; i < dma->nr_pages; i++)
+		for (i = 0; i < dma->nr_pages; i++) {
+			if (dma->direction == DMA_FROM_DEVICE)
+				set_page_dirty_lock(dma->pages[i]);
 			put_page(dma->pages[i]);
+		}
 		kfree(dma->pages);
 		dma->pages = NULL;
 	}
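For readers less familiar with this corner of the kernel, the fix follows a general rule: pages pinned with get_user_pages() that a device may have written to must be marked dirty before their references are dropped, or the device-written data can be silently lost when the page is reclaimed. The following is a minimal sketch of that pattern, not code from this patch: demo_release_pinned_pages() is an invented helper name, and the DMA_BIDIRECTIONAL check is a generalization beyond what this particular driver requires.

#include <linux/mm.h>
#include <linux/dma-direction.h>

/*
 * Hypothetical example of the release side of a get_user_pages()
 * pin used for DMA. Any page the device may have written to is
 * dirtied before its reference is dropped.
 */
static void demo_release_pinned_pages(struct page **pages, int nr_pages,
				      enum dma_data_direction dir)
{
	int i;

	for (i = 0; i < nr_pages; i++) {
		/*
		 * set_page_dirty_lock() takes the page lock itself,
		 * so the caller need not hold it. The patched driver
		 * checks only DMA_FROM_DEVICE; DMA_BIDIRECTIONAL is
		 * included here for generality.
		 */
		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
			set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}

The ordering inside the loop is deliberate: dirtying must happen before put_page(), since after the reference is dropped the page may no longer belong to the caller, which is why the patch places set_page_dirty_lock() ahead of put_page().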