From patchwork Mon Sep  7 06:33:42 2020
From: Gerd Hoffmann
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 1/2] drm: allow limiting the scatter list size.
Date: Mon,  7 Sep 2020 08:33:42 +0200
Message-Id: <20200907063343.18097-2-kraxel@redhat.com>
In-Reply-To: <20200907063343.18097-1-kraxel@redhat.com>
References: <20200907063343.18097-1-kraxel@redhat.com>
Cc: David Airlie, "open list:DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS",
    Sandy Huang, Thierry Reding, Gerd Hoffmann, Oleksandr Andrushchenko,
    "open list:RADEON and AMDGPU DRM DRIVERS", Jonathan Hunter,
    "open list:ARM/Rockchip SoC support", Ben Skeggs, Russell King,
    "moderated list:DRM DRIVERS FOR XEN",
    "open list:DRM DRIVER FOR MSM ADRENO GPU",
    "moderated list:DRM DRIVERS FOR VIVANTE GPU IP",
    "open list:DRM DRIVERS FOR NVIDIA TEGRA", Sean Paul,
    "moderated list:ARM/Rockchip SoC support", open list,
    Thomas Zimmermann, Alex Deucher, christian.koenig@amd.com

Add a max_segment argument to drm_prime_pages_to_sg().  When set, pass
it through to the __sg_alloc_table_from_pages() call; otherwise use
SCATTERLIST_MAX_SEGMENT.

Also add a max_segment field to struct drm_device and pass it to the
drm_prime_pages_to_sg() calls in drivers and helpers.

v2: place max_segment in struct drm_device, not the gem object.
v3: move max_segment next to the other gem fields.

Signed-off-by: Gerd Hoffmann
---
 include/drm/drm_device.h                    |  8 ++++++++
 include/drm/drm_prime.h                     |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  3 ++-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  3 ++-
 drivers/gpu/drm/drm_prime.c                 | 10 +++++++---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       |  3 ++-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  3 ++-
 drivers/gpu/drm/msm/msm_gem.c               |  3 ++-
 drivers/gpu/drm/msm/msm_gem_prime.c         |  3 ++-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  3 ++-
 drivers/gpu/drm/radeon/radeon_prime.c       |  3 ++-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  6 ++++--
 drivers/gpu/drm/tegra/gem.c                 |  3 ++-
 drivers/gpu/drm/vgem/vgem_drv.c             |  3 ++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  3 ++-
 15 files changed, 43 insertions(+), 17 deletions(-)
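For readers skimming the series: after this change an exporter's
get_sg_table callback just forwards the device-wide limit.  Below is a
minimal sketch of that calling convention, using a made-up "foo" driver
(foo_gem_object, to_foo_bo and bo->pages are hypothetical names, not
part of this patch); drivers that never set drm_device.max_segment pass
0 and keep the old behavior.

/* Hypothetical exporter callback; mirrors the per-driver hunks below. */
static struct sg_table *foo_gem_get_sg_table(struct drm_gem_object *obj)
{
	struct foo_gem_object *bo = to_foo_bo(obj);	/* made-up helper */
	unsigned int npages = obj->size >> PAGE_SHIFT;

	/*
	 * A max_segment of 0 (unset) makes drm_prime_pages_to_sg() fall
	 * back to SCATTERLIST_MAX_SEGMENT.
	 */
	return drm_prime_pages_to_sg(bo->pages, npages,
				     obj->dev->max_segment);
}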
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index f4f68e7a9149..c455ef404ca6 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -308,6 +308,14 @@ struct drm_device {
 	/** @vma_offset_manager: GEM information */
 	struct drm_vma_offset_manager *vma_offset_manager;
 
+	/**
+	 * @max_segment:
+	 *
+	 * Max size for scatter list segments for GEM objects.  When
+	 * unset the default (SCATTERLIST_MAX_SEGMENT) is used.
+	 */
+	size_t max_segment;
+
 	/** @vram_mm: VRAM MM memory manager */
 	struct drm_vram_mm *vram_mm;
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 9af7422b44cf..2c3689435cb4 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -88,7 +88,8 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
 
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages);
+struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages,
+				       size_t max_segment);
 
 struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
 				     int flags);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 519ce4427fce..8f6a647757e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -303,7 +303,8 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
 	switch (bo->tbo.mem.mem_type) {
 	case TTM_PL_TT:
 		sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
-					    bo->tbo.num_pages);
+					    bo->tbo.num_pages,
+					    obj->dev->max_segment);
 		if (IS_ERR(sgt))
 			return sgt;
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4b7cfbac4daa..8f47b41b0b2f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -656,7 +656,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
 
 	WARN_ON(shmem->base.import_attach);
 
-	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
+	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT,
+				     obj->dev->max_segment);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1693aa7c14b5..27c783fd6633 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -802,7 +802,8 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
  *
  * This is useful for implementing &drm_gem_object_funcs.get_sg_table.
  */
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages)
+struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages,
+				       size_t max_segment)
 {
 	struct sg_table *sg = NULL;
 	int ret;
@@ -813,8 +814,11 @@ struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_page
 		goto out;
 	}
 
-	ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
-					nr_pages << PAGE_SHIFT, GFP_KERNEL);
+	if (max_segment == 0 || max_segment > SCATTERLIST_MAX_SEGMENT)
+		max_segment = SCATTERLIST_MAX_SEGMENT;
+	ret = __sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
+					  nr_pages << PAGE_SHIFT,
+					  max_segment, GFP_KERNEL);
 	if (ret)
 		goto out;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index f06e19e7be04..90654246b335 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -103,7 +103,8 @@ struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *etnaviv_obj)
 		int npages = etnaviv_obj->base.size >> PAGE_SHIFT;
 		struct sg_table *sgt;
 
-		sgt = drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
+		sgt = drm_prime_pages_to_sg(etnaviv_obj->pages, npages,
+					    etnaviv_obj->base.dev->max_segment);
 		if (IS_ERR(sgt)) {
 			dev_err(dev->dev, "failed to allocate sgt: %ld\n",
 				PTR_ERR(sgt));
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 6d9e5c3c4dd5..f65be0fffb3d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -19,7 +19,8 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	if (WARN_ON(!etnaviv_obj->pages))  /* should have already pinned! */
 		return ERR_PTR(-EINVAL);
 
-	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
+	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages,
+				     obj->dev->max_segment);
 }
 
 void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b2f49152b4d4..dbf1437c3dac 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -126,7 +126,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
 
 		msm_obj->pages = p;
 
-		msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
+		msm_obj->sgt = drm_prime_pages_to_sg(p, npages,
+						     obj->dev->max_segment);
 		if (IS_ERR(msm_obj->sgt)) {
 			void *ptr = ERR_CAST(msm_obj->sgt);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index d7c8948427fe..6337cd1f9428 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -19,7 +19,8 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
 		return NULL;
 
-	return drm_prime_pages_to_sg(msm_obj->pages, npages);
+	return drm_prime_pages_to_sg(msm_obj->pages, npages,
+				     obj->dev->max_segment);
 }
 
 void *msm_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index bae6a3eccee0..dd0ff032ae16 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -32,7 +32,8 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int npages = nvbo->bo.num_pages;
 
-	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages);
+	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages,
+				     obj->dev->max_segment);
 }
 
 void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b906e8fbd5f3..61a3fe147489 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -36,7 +36,8 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int npages = bo->tbo.num_pages;
 
-	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
+	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages,
+				     obj->dev->max_segment);
 }
 
 void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index b9275ba7c5a5..5ddb2d31a607 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -85,7 +85,8 @@ static int rockchip_gem_get_pages(struct rockchip_gem_object *rk_obj)
 
 	rk_obj->num_pages = rk_obj->base.size >> PAGE_SHIFT;
 
-	rk_obj->sgt = drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages);
+	rk_obj->sgt = drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages,
+					    rk_obj->base.dev->max_segment);
 	if (IS_ERR(rk_obj->sgt)) {
 		ret = PTR_ERR(rk_obj->sgt);
 		goto err_put_pages;
@@ -442,7 +443,8 @@ struct sg_table *rockchip_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	int ret;
 
 	if (rk_obj->pages)
-		return drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages);
+		return drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages,
+					     obj->dev->max_segment);
 
 	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
 	if (!sgt)
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 723df142a981..a0abde747e95 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -284,7 +284,8 @@ static int tegra_bo_get_pages(struct drm_device *drm, struct tegra_bo *bo)
 
 	bo->num_pages = bo->gem.size >> PAGE_SHIFT;
 
-	bo->sgt = drm_prime_pages_to_sg(bo->pages, bo->num_pages);
+	bo->sgt = drm_prime_pages_to_sg(bo->pages, bo->num_pages,
+					bo->gem.dev->max_segment);
 	if (IS_ERR(bo->sgt)) {
 		err = PTR_ERR(bo->sgt);
 		goto put_pages;
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 313339bbff90..045461dc6319 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -321,7 +321,8 @@ static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	return drm_prime_pages_to_sg(bo->pages, bo->base.size >> PAGE_SHIFT);
+	return drm_prime_pages_to_sg(bo->pages, bo->base.size >> PAGE_SHIFT,
+				     obj->dev->max_segment);
 }
 
 static struct drm_gem_object* vgem_prime_import(struct drm_device *dev,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 39ff95b75357..7865d5026856 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -179,7 +179,8 @@ struct sg_table *xen_drm_front_gem_get_sg_table(struct drm_gem_object *gem_obj)
 	if (!xen_obj->pages)
 		return ERR_PTR(-ENOMEM);
 
-	return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+	return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages,
+				     gem_obj->dev->max_segment);
 }
 
 struct drm_gem_object *
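Usage note, not part of either patch: the clamp added to
drm_prime_pages_to_sg() above keeps the new argument backwards
compatible.  A small illustrative caller (pages/npages stand for any
pinned page array; SZ_256K is just an example limit; error handling and
freeing of the returned tables are omitted):

	struct sg_table *sgt_default, *sgt_capped;

	/*
	 * 0 == unset: clamped to SCATTERLIST_MAX_SEGMENT, i.e. the
	 * behavior every caller had before this series.
	 */
	sgt_default = drm_prime_pages_to_sg(pages, npages, 0);

	/* Explicit limit: no scatterlist segment will exceed 256 KiB. */
	sgt_capped = drm_prime_pages_to_sg(pages, npages, SZ_256K);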
From patchwork Mon Sep  7 06:33:43 2020
From: Gerd Hoffmann
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 2/2] drm/virtio: set max_segment
Date: Mon,  7 Sep 2020 08:33:43 +0200
Message-Id: <20200907063343.18097-3-kraxel@redhat.com>
In-Reply-To: <20200907063343.18097-1-kraxel@redhat.com>
References: <20200907063343.18097-1-kraxel@redhat.com>
Cc: David Airlie, open list, "open list:VIRTIO GPU DRIVER", Gerd Hoffmann,
    christian.koenig@amd.com

When initializing, call virtio_max_dma_size() to figure out the scatter
list segment limit.  Needed to make virtio-gpu work properly with SEV.

v2: place max_segment in struct drm_device, not the gem object.

Signed-off-by: Gerd Hoffmann
---
 drivers/gpu/drm/virtio/virtgpu_kms.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 75d0dc2f6d28..151471acdfcf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -167,6 +167,7 @@ int virtio_gpu_init(struct drm_device *dev)
 		DRM_ERROR("failed to alloc vbufs\n");
 		goto err_vbufs;
 	}
+	dev->max_segment = virtio_max_dma_size(vgdev->vdev);
 
 	/* get display info */
 	virtio_cread_le(vgdev->vdev, struct virtio_gpu_config,
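Background, not part of the patch: with SEV the guest's virtio devices
are forced through the DMA API and swiotlb bounce buffering, and
dma_max_mapping_size() then reports the swiotlb limit (256 KiB by
default), so every scatterlist segment handed to the device has to stay
below that size.  virtio_max_dma_size() is the existing virtio core
helper that returns this value, or SIZE_MAX when no bounce buffering is
involved.  Rough sketch of the effect of the one-liner above (the exact
helper lives in the virtio core, not in this series):

	/*
	 * Under SEV: roughly the swiotlb max mapping size (256 KiB by
	 * default), so exported sg tables get split into segments the
	 * bounce buffers can handle.
	 */
	dev->max_segment = virtio_max_dma_size(vgdev->vdev);

	/*
	 * Without bounce buffering virtio_max_dma_size() returns SIZE_MAX,
	 * which drm_prime_pages_to_sg() clamps back down to
	 * SCATTERLIST_MAX_SEGMENT, so nothing changes for that case.
	 */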