From patchwork Tue Feb 23 10:59:50 2021
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 12100155
Message-Id: <20210223105951.912577-1-daniel.vetter@ffwll.ch>
From: Daniel Vetter
To: DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Christian König,
    Jason Gunthorpe, Suren Baghdasaryan, Matthew Wilcox, John Stultz,
    Daniel Vetter, Sumit Semwal, linux-media@vger.kernel.org,
    linaro-mm-sig@lists.linaro.org
Subject: [PATCH 1/2] dma-buf: Require VM_PFNMAP vma for mmap
Date: Tue, 23 Feb 2021 11:59:50 +0100

tldr; DMA buffers aren't normal memory, and expecting that you can use
them like normal memory (that calling get_user_pages works, or that
they're accounted like any other normal memory) cannot be guaranteed.

Since some userspace only runs on integrated devices, where all
buffers are actually resident system memory, there's a huge temptation
to assume that a struct page is always present and usable, as for any
other pagecache-backed mmap. This has the potential to result in a
uapi nightmare.

To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.

Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.

v2: Jason brought up that we also want to guarantee that all ptes have
the pte_special flag set, to catch fast get_user_pages (on
architectures that support this). Allowing VM_MIXEDMAP (like
VM_SPECIAL does) would still allow vm_insert_page, but limiting to
VM_PFNMAP will catch that.

From auditing the various functions that insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.

References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/
Acked-by: Christian König
Cc: Jason Gunthorpe
Cc: Suren Baghdasaryan
Cc: Matthew Wilcox
Cc: John Stultz
Signed-off-by: Daniel Vetter
Cc: Sumit Semwal
Cc: "Christian König"
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Acked-by: John Stultz
---
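For illustration only, not part of the patch: an exporter .mmap
callback built on remap_pfn_range() satisfies the new check as-is,
because remap_pfn_range() itself marks the vma as
VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP. The my_heap_buffer
struct and its contiguous paddr below are made-up placeholders for the
sketch, not real dma-buf heap code.

/*
 * Hypothetical, physically contiguous backing store for the sketch
 * below; real exporters track their buffers however they like.
 */
struct my_heap_buffer {
	phys_addr_t paddr;	/* start of contiguous backing memory */
	struct page **pages;
	unsigned int nr_pages;
};

/*
 * Sketch of a compliant exporter .mmap: remap_pfn_range() sets
 * VM_PFNMAP on the vma, so the WARN_ON in dma_buf_mmap() stays quiet
 * and get_user_pages on the mapping fails as intended.
 */
static int my_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct my_heap_buffer *buf = dmabuf->priv;

	return remap_pfn_range(vma, vma->vm_start,
			       buf->paddr >> PAGE_SHIFT,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}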
 drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383e..06cb1d2e9fdc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
 static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 {
 	struct dma_buf *dmabuf;
+	int ret;
 
 	if (!is_dma_buf_file(file))
 		return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 	    dmabuf->size >> PAGE_SHIFT)
 		return -EINVAL;
 
-	return dmabuf->ops->mmap(dmabuf, vma);
+	ret = dmabuf->ops->mmap(dmabuf, vma);
+
+	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+	return ret;
 }
 
 static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
 int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 		 unsigned long pgoff)
 {
+	int ret;
+
 	if (WARN_ON(!dmabuf || !vma))
 		return -EINVAL;
 
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 	vma_set_file(vma, dmabuf->file);
 	vma->vm_pgoff = pgoff;
 
-	return dmabuf->ops->mmap(dmabuf, vma);
+	ret = dmabuf->ops->mmap(dmabuf, vma);
+
+	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_mmap);
 
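For contrast, and again purely illustrative: a hypothetical exporter
along the lines of the vm_insert_page conversion mentioned in the
commit message would trip the new WARN_ON. vm_insert_page() flags the
vma VM_MIXEDMAP instead of VM_PFNMAP, the inserted ptes are backed by
struct page and are not pte_special, so fast get_user_pages can still
pin the buffer. my_heap_buffer is the same made-up placeholder as in
the sketch above.

/*
 * Hypothetical counter-example, not part of this patch: inserting
 * struct pages makes the mapping VM_MIXEDMAP, so the WARN_ON added
 * above fires and get_user_pages keeps working on the buffer.
 */
static int bad_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct my_heap_buffer *buf = dmabuf->priv;
	unsigned long addr = vma->vm_start;
	unsigned int i;
	int ret;

	for (i = 0; i < buf->nr_pages && addr < vma->vm_end;
	     i++, addr += PAGE_SIZE) {
		ret = vm_insert_page(vma, addr, buf->pages[i]);
		if (ret)
			return ret;
	}
	return 0;
}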