From patchwork Mon Mar 1 08:33:19 2021
From: Christoph Hellwig
To: Andrew Morton, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi
Cc: Chris Wilson, Daniel Vetter, Peter Zijlstra,
    intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-mm@kvack.org
Subject: [PATCH 1/2] mm: add remap_pfn_range_notrack
Date: Mon, 1 Mar 2021 09:33:19 +0100
Message-Id: <20210301083320.943079-2-hch@lst.de>
In-Reply-To: <20210301083320.943079-1-hch@lst.de>
References: <20210301083320.943079-1-hch@lst.de>

Add a version of remap_pfn_range that does not call track_pfn_range.
This will be used to fix horrible abuses of VM internals in the i915
driver.

Signed-off-by: Christoph Hellwig
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 52 ++++++++++++++++++++++++++++------------------
 2 files changed, 34 insertions(+), 20 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 77e64e3eac80bd..fc3438daf5cfd8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2688,6 +2688,8 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
                        unsigned long pfn, unsigned long size, pgprot_t);
+int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
+               unsigned long pfn, unsigned long size, pgprot_t prot);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
                        struct page **pages, unsigned long *num);
diff --git a/mm/memory.c b/mm/memory.c
index c8e35762731861..d038c13f489b78 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2266,26 +2266,17 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
         return 0;
 }
 
-/**
- * remap_pfn_range - remap kernel memory to userspace
- * @vma: user vma to map to
- * @addr: target page aligned user address to start at
- * @pfn: page frame number of kernel physical memory address
- * @size: size of mapping area
- * @prot: page protection flags for this mapping
- *
- * Note: this is only safe if the mm semaphore is held when called.
- *
- * Return: %0 on success, negative error code otherwise.
+/*
+ * Variant of remap_pfn_range that does not call track_pfn_remap. The caller
+ * must have pre-validated the caching bits of the pgprot_t.
  */
-int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
-                   unsigned long pfn, unsigned long size, pgprot_t prot)
+int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
+               unsigned long pfn, unsigned long size, pgprot_t prot)
 {
         pgd_t *pgd;
         unsigned long next;
         unsigned long end = addr + PAGE_ALIGN(size);
         struct mm_struct *mm = vma->vm_mm;
-        unsigned long remap_pfn = pfn;
         int err;
 
         if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
@@ -2315,10 +2306,6 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
                 vma->vm_pgoff = pfn;
         }
 
-        err = track_pfn_remap(vma, &prot, remap_pfn, addr, PAGE_ALIGN(size));
-        if (err)
-                return -EINVAL;
-
         vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
 
         BUG_ON(addr >= end);
@@ -2330,12 +2317,37 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
                 err = remap_p4d_range(mm, pgd, addr, next,
                                 pfn + (addr >> PAGE_SHIFT), prot);
                 if (err)
-                        break;
+                        return err;
         } while (pgd++, addr = next, addr != end);
 
+        return 0;
+}
+EXPORT_SYMBOL_GPL(remap_pfn_range_notrack);
+
+/**
+ * remap_pfn_range - remap kernel memory to userspace
+ * @vma: user vma to map to
+ * @addr: target page aligned user address to start at
+ * @pfn: page frame number of kernel physical memory address
+ * @size: size of mapping area
+ * @prot: page protection flags for this mapping
+ *
+ * Note: this is only safe if the mm semaphore is held when called.
+ *
+ * Return: %0 on success, negative error code otherwise.
+ */
+int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
+                   unsigned long pfn, unsigned long size, pgprot_t prot)
+{
+        int err;
+
+        err = track_pfn_remap(vma, &prot, pfn, addr, PAGE_ALIGN(size));
         if (err)
-                untrack_pfn(vma, remap_pfn, PAGE_ALIGN(size));
+                return -EINVAL;
 
+        err = remap_pfn_range_notrack(vma, addr, pfn, size, prot);
+        if (err)
+                untrack_pfn(vma, pfn, PAGE_ALIGN(size));
         return err;
 }
 EXPORT_SYMBOL(remap_pfn_range);
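
To illustrate the intended use of the new export: a caller that has already
validated the caching attributes of its mapping (for example through an
io_mapping set up at probe time) can call the notrack variant directly from
its mmap handler and skip track_pfn_remap(). The sketch below is purely
illustrative and not part of this patch; foo_device, its bar_phys/bar_size
fields and foo_mmap() are invented names.

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
        struct foo_device *foo = file->private_data;    /* hypothetical */
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > foo->bar_size)
                return -EINVAL;

        /*
         * The caching bits were validated when the io_mapping for the BAR
         * was created, so skipping track_pfn_remap() here is safe.
         */
        return remap_pfn_range_notrack(vma, vma->vm_start,
                                       foo->bar_phys >> PAGE_SHIFT, size,
                                       pgprot_writecombine(vma->vm_page_prot));
}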

From patchwork Mon Mar 1 08:33:20 2021
From: Christoph Hellwig
To: Andrew Morton, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi
Cc: Chris Wilson, Daniel Vetter, Peter Zijlstra,
    intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-mm@kvack.org
Subject: [PATCH 2/2] i915: use remap_pfn_range_notrack
Date: Mon, 1 Mar 2021 09:33:20 +0100
Message-Id: <20210301083320.943079-3-hch@lst.de>
In-Reply-To: <20210301083320.943079-1-hch@lst.de>
References: <20210301083320.943079-1-hch@lst.de>

Use the remap_pfn_range_notrack helper instead of directly messing with
PTEs.

Signed-off-by: Christoph Hellwig
---
 drivers/gpu/drm/i915/i915_mm.c | 101 +++++++++------------------------
 1 file changed, 26 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index 666808cb3a3260..a6bafac5ade0bd 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -28,55 +28,10 @@
 
 #include "i915_drv.h"
 
-struct remap_pfn {
-        struct mm_struct *mm;
-        unsigned long pfn;
-        pgprot_t prot;
-
-        struct sgt_iter sgt;
-        resource_size_t iobase;
-};
-
-static int remap_pfn(pte_t *pte, unsigned long addr, void *data)
-{
-        struct remap_pfn *r = data;
-
-        /* Special PTE are not associated with any struct page */
-        set_pte_at(r->mm, addr, pte, pte_mkspecial(pfn_pte(r->pfn, r->prot)));
-        r->pfn++;
-
-        return 0;
-}
+#define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP)
 
 #define use_dma(io) ((io) != -1)
 
-static inline unsigned long sgt_pfn(const struct remap_pfn *r)
-{
-        if (use_dma(r->iobase))
-                return (r->sgt.dma + r->sgt.curr + r->iobase) >> PAGE_SHIFT;
-        else
-                return r->sgt.pfn + (r->sgt.curr >> PAGE_SHIFT);
-}
-
-static int remap_sg(pte_t *pte, unsigned long addr, void *data)
-{
-        struct remap_pfn *r = data;
-
-        if (GEM_WARN_ON(!r->sgt.sgp))
-                return -EINVAL;
-
-        /* Special PTE are not associated with any struct page */
-        set_pte_at(r->mm, addr, pte,
-                   pte_mkspecial(pfn_pte(sgt_pfn(r), r->prot)));
-        r->pfn++; /* track insertions in case we need to unwind later */
-
-        r->sgt.curr += PAGE_SIZE;
-        if (r->sgt.curr >= r->sgt.max)
-                r->sgt = __sgt_iter(__sg_next(r->sgt.sgp), use_dma(r->iobase));
-
-        return 0;
-}
-
 /**
  * remap_io_mapping - remap an IO mapping to userspace
  * @vma: user vma to map to
@@ -91,25 +46,12 @@ int remap_io_mapping(struct vm_area_struct *vma,
                      unsigned long addr, unsigned long pfn, unsigned long size,
                      struct io_mapping *iomap)
 {
-        struct remap_pfn r;
-        int err;
-
-#define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP)
         GEM_BUG_ON((vma->vm_flags & EXPECTED_FLAGS) != EXPECTED_FLAGS);
 
         /* We rely on prevalidation of the io-mapping to skip track_pfn(). */
-        r.mm = vma->vm_mm;
-        r.pfn = pfn;
-        r.prot = __pgprot((pgprot_val(iomap->prot) & _PAGE_CACHE_MASK) |
-                          (pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK));
-
-        err = apply_to_page_range(r.mm, addr, size, remap_pfn, &r);
-        if (unlikely(err)) {
-                zap_vma_ptes(vma, addr, (r.pfn - pfn) << PAGE_SHIFT);
-                return err;
-        }
-
-        return 0;
+        return remap_pfn_range_notrack(vma, addr, pfn, size,
+                __pgprot((pgprot_val(iomap->prot) & _PAGE_CACHE_MASK) |
+                         (pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK)));
 }
 
 /**
@@ -126,12 +68,7 @@ int remap_io_sg(struct vm_area_struct *vma,
                 unsigned long addr, unsigned long size,
                 struct scatterlist *sgl, resource_size_t iobase)
 {
-        struct remap_pfn r = {
-                .mm = vma->vm_mm,
-                .prot = vma->vm_page_prot,
-                .sgt = __sgt_iter(sgl, use_dma(iobase)),
-                .iobase = iobase,
-        };
+        unsigned long pfn, len, remapped = 0;
         int err;
 
         /* We rely on prevalidation of the io-mapping to skip track_pfn(). */
@@ -140,11 +77,25 @@ int remap_io_sg(struct vm_area_struct *vma,
         if (!use_dma(iobase))
                 flush_cache_range(vma, addr, size);
 
-        err = apply_to_page_range(r.mm, addr, size, remap_sg, &r);
-        if (unlikely(err)) {
-                zap_vma_ptes(vma, addr, r.pfn << PAGE_SHIFT);
-                return err;
-        }
-
-        return 0;
+        do {
+                if (use_dma(iobase)) {
+                        if (!sg_dma_len(sgl))
+                                break;
+                        pfn = (sg_dma_address(sgl) + iobase) >> PAGE_SHIFT;
+                        len = sg_dma_len(sgl);
+                } else {
+                        pfn = page_to_pfn(sg_page(sgl));
+                        len = sgl->length;
+                }
+
+                err = remap_pfn_range_notrack(vma, addr + remapped, pfn, len,
+                                              vma->vm_page_prot);
+                if (err)
+                        break;
+                remapped += len;
+        } while ((sgl = __sg_next(sgl)));
+
+        if (err)
+                zap_vma_ptes(vma, addr, remapped);
+        return err;
 }
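
Taken together, the new remap_io_sg() body is just one pattern: walk the
scatterlist, remap each physically contiguous segment at the next user
offset, and zap everything that was inserted if a later segment fails.
Below is a condensed sketch of that pattern with the i915 specifics (the
dma/iobase switch, the GEM asserts, the private __sg_next iterator)
stripped out. It is not part of the patch; map_sg_to_user() is an invented
name, and like the i915 code it assumes the caching attributes of
vm_page_prot were validated up front.

static int map_sg_to_user(struct vm_area_struct *vma, struct scatterlist *sgl)
{
        unsigned long addr = vma->vm_start;
        struct scatterlist *sg;
        int err;

        for (sg = sgl; sg; sg = sg_next(sg)) {
                unsigned long pfn = page_to_pfn(sg_page(sg));
                unsigned long len = sg->length;

                err = remap_pfn_range_notrack(vma, addr, pfn, len,
                                              vma->vm_page_prot);
                if (err) {
                        /* undo the PTEs inserted by earlier iterations */
                        zap_vma_ptes(vma, vma->vm_start, addr - vma->vm_start);
                        return err;
                }
                addr += len;
        }
        return 0;
}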