From patchwork Tue Jul 16 17:43:23 2019
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11046573
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Date: Tue, 16 Jul 2019 10:43:23 -0700
Message-Id: <20190716174331.7371-3-robdclark@gmail.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190716174331.7371-1-robdclark@gmail.com>
References: <20190716174331.7371-1-robdclark@gmail.com>
Subject: [Intel-gfx] [PATCH v2 3/3] drm/vgem: use normal cached mmap'ings
Cc: Rob Clark, Deepak Sharma, Thomas Zimmermann, Eric Biggers, David Airlie,
    Intel Graphics Development, linux-kernel@vger.kernel.org, Emil Velikov

From: Rob Clark

Since there is no real device associated with VGEM, it is impossible to
end up with appropriate dev->dma_ops, meaning that we have no way to
invalidate the shmem pages allocated by VGEM.  So, at least on platforms
without drm_clflush_pages(), we end up with corruption when cache lines
from a previous use of the VGEM bo pages get evicted to memory.  The only
sane option is to use cached mappings.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/vgem/vgem_drv.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index a179e962b79e..b6071a466b92 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -259,9 +259,6 @@ static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
         if (ret)
                 return ret;
 
-        /* Keep the WC mmaping set by drm_gem_mmap() but our pages
-         * are ordinary and not special.
-         */
         vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
         return 0;
 }
@@ -310,17 +307,17 @@ static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
 static int vgem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 {
         struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
-        long n_pages = obj->size >> PAGE_SHIFT;
+        long i, n_pages = obj->size >> PAGE_SHIFT;
         struct page **pages;
 
         pages = vgem_pin_pages(bo);
         if (IS_ERR(pages))
                 return PTR_ERR(pages);
 
-        /* Flush the object from the CPU cache so that importers can rely
-         * on coherent indirect access via the exported dma-address.
-         */
-        drm_clflush_pages(pages, n_pages);
+        for (i = 0; i < n_pages; i++) {
+                dma_sync_single_for_device(dev, page_to_phys(pages[i]),
+                                PAGE_SIZE, DMA_BIDIRECTIONAL);
+        }
 
         return 0;
 }
@@ -328,6 +325,13 @@ static int vgem_prime_pin(struct drm_gem_object *obj, struct device *dev)
 static void vgem_prime_unpin(struct drm_gem_object *obj, struct device *dev)
 {
         struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
+        long i, n_pages = obj->size >> PAGE_SHIFT;
+        struct page **pages = bo->pages;
+
+        for (i = 0; i < n_pages; i++) {
+                dma_sync_single_for_cpu(dev, page_to_phys(pages[i]),
+                                PAGE_SIZE, DMA_BIDIRECTIONAL);
+        }
 
         vgem_unpin_pages(bo);
 }
@@ -382,7 +386,7 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
         if (IS_ERR(pages))
                 return NULL;
 
-        return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+        return vmap(pages, n_pages, 0, PAGE_KERNEL);
 }
 
 static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
@@ -411,7 +415,7 @@ static int vgem_prime_mmap(struct drm_gem_object *obj,
         fput(vma->vm_file);
         vma->vm_file = get_file(obj->filp);
         vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
-        vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+        vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
         return 0;
 }
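
For context, a minimal userspace sketch (not part of this patch; the device
path, buffer dimensions and bare-bones error handling are illustrative
assumptions) of how a vgem dumb buffer is allocated and mapped through the
standard DRM dumb-buffer ioctls.  With this series applied the resulting
mmap'ing is ordinary cached memory rather than write-combined; coherency for
importers is instead provided by the dma_sync_single_for_device() /
dma_sync_single_for_cpu() calls added in vgem_prime_pin()/vgem_prime_unpin()
above.

/*
 * Hypothetical example, not part of this patch: create a vgem dumb buffer
 * and map it into the CPU's address space.  Assumes /dev/dri/card0 is the
 * vgem node; on a real system the card number may differ.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <drm/drm.h>        /* may live under <libdrm/...> on some distros */
#include <drm/drm_mode.h>

int main(void)
{
        struct drm_mode_create_dumb create = {
                .width  = 1024,         /* illustrative size */
                .height = 1024,
                .bpp    = 32,
        };
        struct drm_mode_map_dumb map = { 0 };
        void *ptr;
        int fd;

        fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
                return 1;

        /* Allocate a shmem-backed buffer object in vgem. */
        if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
                return 1;

        /* Ask the driver for an mmap offset for this handle. */
        map.handle = create.handle;
        if (ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0)
                return 1;

        /*
         * After this patch the mapping is normal cached memory, not
         * write-combined, so CPU accesses behave like ordinary RAM.
         */
        ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   fd, map.offset);
        if (ptr == MAP_FAILED)
                return 1;

        memset(ptr, 0, create.size);

        munmap(ptr, create.size);
        close(fd);
        return 0;
}

When such a buffer is exported via PRIME and accessed by a real device, the
pin/unpin paths in the diff above are what flush and invalidate the CPU cache
around the device access.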