From patchwork Wed Sep 4 06:59:13 2013
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 2853533
Message-ID: <5226DA41.5060203@canonical.com>
Date: Wed, 04 Sep 2013 08:59:13 +0200
From: Maarten Lankhorst
To: Ben Skeggs
Cc: nouveau@lists.freedesktop.org, Pasi Kärkkäinen,
 dri-devel@lists.freedesktop.org, Ben Skeggs
Subject: Re: [PATCH] drm/nouveau: avoid null deref on bad arguments to
 nouveau_vma_getmap
References: <1377130214-17522-1-git-send-email-imirkin@alum.mit.edu>
 <5215B9E8.5080108@canonical.com>

On 04-09-13 05:41, Ben Skeggs wrote:
> On Thu, Aug 22, 2013 at 5:12 PM, Maarten Lankhorst wrote:
>> On 22-08-13 02:10, Ilia Mirkin wrote:
>>> The code expects non-VRAM mem nodes to have a pages list. If that's not
>>> set, it will do a null deref down the line. Warn on that condition and
>>> return an error.
>>>
>>> See https://bugs.freedesktop.org/show_bug.cgi?id=64774
>>>
>>> Reported-by: Pasi Kärkkäinen
>>> Tested-by: Pasi Kärkkäinen
>>> Signed-off-by: Ilia Mirkin
>>> Cc: # 3.8+
>>> ---
>>>
>>> I don't exactly understand what's going on, but this is just a
>>> straightforward way to avoid the null deref that you can see happening
>>> in the bug. I haven't figured out the root cause, but it's getting
>>> well into the "I have no idea how TTM works" space.
>>> However, this seems like a bit of defensive programming --
>>> nouveau_vm_map_sg will pass node->pages down as a list, which
>>> nvc0_vm_map_sg will then dereference. Perhaps the other arguments
>>> should prevent that dereference from happening, but it definitely
>>> was happening here, as you can see in the bug.
>>>
>>> Ben/Maarten, I'll let you judge whether this check is appropriate,
>>> since, as I hope I conveyed above, I'm just not really sure :)
>> No, it really isn't appropriate.
>>
>> You'd have to call nouveau_vm_map_sg_table instead; the only place that
>> doesn't handle that correctly is one where it's not expected to be
>> called.
>>
>> Here, have a completely untested patch to fix things...
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
>> --- a/drivers/gpu/drm/nouveau/nouveau_display.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_display.c
>> @@ -138,17 +143,26 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
>>  {
>>  	struct nouveau_framebuffer *nouveau_fb;
>>  	struct drm_gem_object *gem;
>> +	struct nouveau_bo *nvbo;
>>  	int ret = -ENOMEM;
>>
>>  	gem = drm_gem_object_lookup(dev, file_priv, mode_cmd->handles[0]);
>>  	if (!gem)
>>  		return ERR_PTR(-ENOENT);
>>
>> +	nvbo = nouveau_gem_object(gem);
>> +	if (!(nvbo->valid_domains & NOUVEAU_GEM_DOMAIN_VRAM)) {
>> +		nv_warn(nouveau_drm(dev), "Trying to create a fb in vram with"
>> +			" valid_domains=%08x\n", nvbo->valid_domains);
>> +		ret = -EINVAL;
>> +		goto err_unref;
>> +	}
>> +
> Definitely the right idea, we can't handle this case right now.
> However, we may someday want/need to be able to scan out of system
> memory, so this is the wrong place.
>
> I suspect the correct thing to do (which'll also handle the
> "defensive" part) is to bail in nouveau_bo_move() on attempts to move
> a DMA-BUF backed object into VRAM.
>
> Sound OK?
>
If it has a WARN_ON or something, that would be OK. I didn't find any
other places that attempt to move buffers to VRAM, though, so it's
probably harmless.

While looking into this bug I noticed that nouveau_bo_vma_add needs a
check for nvbo->page_shift == vma->vm->vmm->spg_shift, and should only
map the buffer through the TTM_PL_TT path when that check holds. Patch
below; it should probably also be cc'd to stable.

~Maarten

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 89b992e..355a1b7 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1560,7 +1560,8 @@ nouveau_bo_vma_add(struct nouveau_bo *nvbo, struct nouveau_vm *vm,
 	if (nvbo->bo.mem.mem_type == TTM_PL_VRAM)
 		nouveau_vm_map(vma, nvbo->bo.mem.mm_node);
-	else if (nvbo->bo.mem.mem_type == TTM_PL_TT) {
+	else if (nvbo->bo.mem.mem_type == TTM_PL_TT &&
+		 nvbo->page_shift == vma->vm->vmm->spg_shift) {
 		if (node->sg)
 			nouveau_vm_map_sg_table(vma, 0, size, node);
 		else
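
For context, the patch under discussion isn't quoted above. From Ilia's
commit message, the guard it adds to nouveau_vma_getmap() has roughly
this shape (a reconstruction from the description, not the posted hunk;
the node type and field names follow the diffs in this thread):

	struct nouveau_mem *node = nvbo->bo.mem.mm_node;

	/* Non-VRAM nodes are expected to carry a pages list; when it is
	 * missing, the SG mapping path dereferences NULL further down
	 * (fdo bug 64774), so warn and return an error instead. */
	if (WARN_ON(nvbo->bo.mem.mem_type != TTM_PL_VRAM && !node->pages))
		return -EINVAL;

As the thread concludes, this only masks the symptom: a DMA-BUF backed
node legitimately has no pages list (it carries node->sg instead) and
should be mapped through nouveau_vm_map_sg_table rather than rejected.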
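
And a sketch of the bail-out Ben proposes in nouveau_bo_move(), with the
WARN_ON Maarten asks for. Untested and illustrative only; it assumes the
3.11-era nouveau_bo_move() signature and that a DMA-BUF import is
recognizable by bo->sg being set:

static int
nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr,
		bool no_wait_gpu, struct ttm_mem_reg *new_mem)
{
	/* Moving a DMA-BUF backed object into VRAM isn't supported and
	 * ends in the NULL deref discussed above, so refuse loudly here
	 * instead of faulting later in the map path. bo->sg is only set
	 * for dma-buf imports. */
	if (WARN_ON_ONCE(bo->sg && new_mem->mem_type == TTM_PL_VRAM))
		return -EINVAL;

	/* ... existing move logic unchanged ... */
}

Since nouveau_bo_move() is the single funnel for placement changes, a
check here would also cover the "defensive" concern the original patch
was aimed at.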