From patchwork Sun Mar 12 23:40:13 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13172055
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, Andrew Morton
Cc: Thomas Hellström, Michal Hocko, Lorenzo Stoakes, Matthew Wilcox,
    Jason Gunthorpe, Dan Williams, Christian König, Kirill A. Shutemov
Subject: [PATCH 1/3] mm: remove unused vmf_insert_mixed_prot()
Date: Sun, 12 Mar 2023 23:40:13 +0000
X-Mailer: git-send-email 2.39.2

The sole user of vmf_insert_mixed_prot(), the drm ttm module, stopped
using it in commit f91142c62161 ("drm/ttm: nuke VM_MIXEDMAP on BO
mappings v3"), citing the use of VM_MIXEDMAP in this case as terribly
broken.

Remove this now-dead code and the references to it, but retain the
useful description of the prot != vma->vm_page_prot case, moving it to
vmf_insert_pfn_prot() instead.

Signed-off-by: Lorenzo Stoakes
---
 include/linux/mm.h       |  2 --
 include/linux/mm_types.h |  7 +----
 mm/memory.c              | 57 +++++++++++++---------------------
 3 files changed, 19 insertions(+), 47 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ce1590933995..ee755bb4e1c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3330,8 +3330,6 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
                        unsigned long pfn, pgprot_t pgprot);
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
                        pfn_t pfn);
-vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
-                       pfn_t pfn, pgprot_t pgprot);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
                unsigned long addr, pfn_t pfn);
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3a028db80ffb..5ef0d0da328a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -502,12 +502,7 @@ struct vm_area_struct {
        };
 
        struct mm_struct *vm_mm;        /* The address space we belong to. */
-
-       /*
-        * Access permissions of this VMA.
-        * See vmf_insert_mixed_prot() for discussion.
-        */
-       pgprot_t vm_page_prot;
+       pgprot_t vm_page_prot;          /* Access permissions of this VMA. */
 
        /*
         * Flags, see mm.h.
diff --git a/mm/memory.c b/mm/memory.c
index 7ca7951adcf5..ee6bcd747867 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2147,8 +2147,20 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
  * vmf_insert_pfn_prot should only be used if using multiple VMAs is
  * impractical.
  *
- * See vmf_insert_mixed_prot() for a discussion of the implication of using
- * a value of @pgprot different from that of @vma->vm_page_prot.
+ * pgprot typically only differs from @vma->vm_page_prot when drivers set
+ * caching- and encryption bits different than those of @vma->vm_page_prot,
+ * because the caching- or encryption mode may not be known at mmap() time.
+ *
+ * This is ok as long as @vma->vm_page_prot is not used by the core vm
+ * to set caching and encryption bits for those vmas (except for COW pages).
+ * This is ensured by core vm only modifying these page table entries using
+ * functions that don't touch caching- or encryption bits, using pte_modify()
+ * if needed. (See for example mprotect()).
+ *
+ * Also when new page-table entries are created, this is only done using the
+ * fault() callback, and never using the value of vma->vm_page_prot,
+ * except for page-table entries that point to anonymous pages as the result
+ * of COW.
  *
  * Context: Process context. May allocate using %GFP_KERNEL.
  * Return: vm_fault_t value.
@@ -2223,9 +2235,9 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
 }
 
 static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
-               unsigned long addr, pfn_t pfn, pgprot_t pgprot,
-               bool mkwrite)
+               unsigned long addr, pfn_t pfn, bool mkwrite)
 {
+       pgprot_t pgprot = vma->vm_page_prot;
        int err;
 
        BUG_ON(!vm_mixed_ok(vma, pfn));
@@ -2268,43 +2280,10 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
        return VM_FAULT_NOPAGE;
 }
 
-/**
- * vmf_insert_mixed_prot - insert single pfn into user vma with specified pgprot
- * @vma: user vma to map to
- * @addr: target user address of this page
- * @pfn: source kernel pfn
- * @pgprot: pgprot flags for the inserted page
- *
- * This is exactly like vmf_insert_mixed(), except that it allows drivers
- * to override pgprot on a per-page basis.
- *
- * Typically this function should be used by drivers to set caching- and
- * encryption bits different than those of @vma->vm_page_prot, because
- * the caching- or encryption mode may not be known at mmap() time.
- * This is ok as long as @vma->vm_page_prot is not used by the core vm
- * to set caching and encryption bits for those vmas (except for COW pages).
- * This is ensured by core vm only modifying these page table entries using
- * functions that don't touch caching- or encryption bits, using pte_modify()
- * if needed. (See for example mprotect()).
- * Also when new page-table entries are created, this is only done using the
- * fault() callback, and never using the value of vma->vm_page_prot,
- * except for page-table entries that point to anonymous pages as the result
- * of COW.
- *
- * Context: Process context. May allocate using %GFP_KERNEL.
- * Return: vm_fault_t value.
- */
-vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
-               pfn_t pfn, pgprot_t pgprot)
-{
-       return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
-}
-EXPORT_SYMBOL(vmf_insert_mixed_prot);
-
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
                pfn_t pfn)
 {
-       return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, false);
+       return __vm_insert_mixed(vma, addr, pfn, false);
 }
 EXPORT_SYMBOL(vmf_insert_mixed);
 
@@ -2316,7 +2295,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
                unsigned long addr, pfn_t pfn)
 {
-       return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, true);
+       return __vm_insert_mixed(vma, addr, pfn, true);
 }
 EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
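
For context, the comment block relocated above describes drivers that only learn
the required caching (or encryption) mode at fault time. A minimal, hypothetical
sketch of such a ->fault handler follows; it is not part of this series, and the
my_buf type, its fields and my_buf_fault() are illustrative names only. The call
to vmf_insert_pfn_prot() matches the prototype kept in include/linux/mm.h.

/*
 * Hypothetical driver ->fault handler (needs <linux/mm.h> and
 * <linux/pgtable.h>). The caching mode of the backing memory is only
 * known once the buffer has been placed, i.e. at fault time rather than
 * at mmap() time, so a per-fault pgprot is derived from
 * vma->vm_page_prot instead of modifying the VMA itself.
 */
static vm_fault_t my_buf_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        struct my_buf *buf = vma->vm_private_data;      /* illustrative type */
        unsigned long pfn = buf->base_pfn + vmf->pgoff; /* illustrative fields */
        pgprot_t prot = vma->vm_page_prot;

        /* Pick caching bits per fault; never write them back into the VMA. */
        if (buf->in_device_memory)
                prot = pgprot_writecombine(prot);

        return vmf_insert_pfn_prot(vma, vmf->address, pfn, prot);
}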
From patchwork Sun Mar 12 23:40:14 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13172054
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, Andrew Morton
Cc: Thomas Hellström, Michal Hocko, Lorenzo Stoakes, Matthew Wilcox,
    Jason Gunthorpe, Dan Williams, Christian König, Kirill A. Shutemov
Subject: [PATCH 2/3] mm: Remove vmf_insert_pfn_xxx_prot() for huge page-table entries
Date: Sun, 12 Mar 2023 23:40:14 +0000
Message-Id: <604c2ad79659d4b8a6e3e1611c6219d5d3233988.1678661628.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.2

This functionality's sole user, the drm ttm module, removed support for
it in commit 0d979509539e ("drm/ttm: remove ttm_bo_vm_insert_huge()") as
the whole approach is currently unworkable without a PMD/PUD special bit
and updates to GUP.

Signed-off-by: Lorenzo Stoakes
---
 include/linux/huge_mm.h | 39 ++-------------------------------------
 mm/huge_memory.c        | 31 +++++++++++++------------------
 2 files changed, 15 insertions(+), 55 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 70bd867eba94..3c52fc9d85dc 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -39,44 +39,9 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
                    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
                    unsigned long cp_flags);
-vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
-                                  pgprot_t pgprot, bool write);
-/**
- * vmf_insert_pfn_pmd - insert a pmd size pfn
- * @vmf: Structure describing the fault
- * @pfn: pfn to insert
- * @pgprot: page protection to use
- * @write: whether it's a write fault
- *
- * Insert a pmd size pfn. See vmf_insert_pfn() for additional info.
- *
- * Return: vm_fault_t value.
- */
-static inline vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn,
-                                           bool write)
-{
-       return vmf_insert_pfn_pmd_prot(vmf, pfn, vmf->vma->vm_page_prot, write);
-}
-vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf, pfn_t pfn,
-                                  pgprot_t pgprot, bool write);
-
-/**
- * vmf_insert_pfn_pud - insert a pud size pfn
- * @vmf: Structure describing the fault
- * @pfn: pfn to insert
- * @pgprot: page protection to use
- * @write: whether it's a write fault
- *
- * Insert a pud size pfn. See vmf_insert_pfn() for additional info.
- *
- * Return: vm_fault_t value.
- */
-static inline vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn,
-                                           bool write)
-{
-       return vmf_insert_pfn_pud_prot(vmf, pfn, vmf->vma->vm_page_prot, write);
-}
+vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
+vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
 
 enum transparent_hugepage_flag {
        TRANSPARENT_HUGEPAGE_NEVER_DAX,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b0ab247939e0..5a0e5e84ab13 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -889,23 +889,20 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 }
 
 /**
- * vmf_insert_pfn_pmd_prot - insert a pmd size pfn
+ * vmf_insert_pfn_pmd - insert a pmd size pfn
  * @vmf: Structure describing the fault
  * @pfn: pfn to insert
- * @pgprot: page protection to use
  * @write: whether it's a write fault
  *
- * Insert a pmd size pfn. See vmf_insert_pfn() for additional info and
- * also consult the vmf_insert_mixed_prot() documentation when
- * @pgprot != @vmf->vma->vm_page_prot.
+ * Insert a pmd size pfn. See vmf_insert_pfn() for additional info.
  *
  * Return: vm_fault_t value.
  */
-vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
-                                  pgprot_t pgprot, bool write)
+vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 {
        unsigned long addr = vmf->address & PMD_MASK;
        struct vm_area_struct *vma = vmf->vma;
+       pgprot_t pgprot = vma->vm_page_prot;
        pgtable_t pgtable = NULL;
 
        /*
@@ -933,7 +930,7 @@ vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
        insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, pgtable);
        return VM_FAULT_NOPAGE;
 }
-EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd_prot);
+EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
@@ -944,9 +941,10 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
 }
 
 static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
-               pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
+               pud_t *pud, pfn_t pfn, bool write)
 {
        struct mm_struct *mm = vma->vm_mm;
+       pgprot_t prot = vma->vm_page_prot;
        pud_t entry;
        spinlock_t *ptl;
 
@@ -980,23 +978,20 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 }
 
 /**
- * vmf_insert_pfn_pud_prot - insert a pud size pfn
+ * vmf_insert_pfn_pud - insert a pud size pfn
  * @vmf: Structure describing the fault
  * @pfn: pfn to insert
- * @pgprot: page protection to use
  * @write: whether it's a write fault
  *
- * Insert a pud size pfn. See vmf_insert_pfn() for additional info and
- * also consult the vmf_insert_mixed_prot() documentation when
- * @pgprot != @vmf->vma->vm_page_prot.
+ * Insert a pud size pfn. See vmf_insert_pfn() for additional info.
  *
  * Return: vm_fault_t value.
  */
-vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf, pfn_t pfn,
-                                  pgprot_t pgprot, bool write)
+vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 {
        unsigned long addr = vmf->address & PUD_MASK;
        struct vm_area_struct *vma = vmf->vma;
+       pgprot_t pgprot = vma->vm_page_prot;
 
        /*
         * If we had pud_special, we could avoid all these restrictions,
@@ -1014,10 +1009,10 @@ vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf, pfn_t pfn,
 
        track_pfn_insert(vma, &pgprot, pfn);
 
-       insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
+       insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
        return VM_FAULT_NOPAGE;
 }
-EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud_prot);
+EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
 static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
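
With the _prot variants gone, a driver inserting a PMD-sized pfn passes only the
pfn and the write flag; the pgprot is now taken from vmf->vma->vm_page_prot
inside vmf_insert_pfn_pmd(). The sketch below is a hypothetical ->huge_fault
handler written against the interface in this patch, not code from the series:
my_dev_huge_fault() and my_dev_base_pfn() are made-up names, the pfn is assumed
to be PMD-aligned, and pfn_to_pfn_t() comes from <linux/pfn_t.h>.

/*
 * Hypothetical ->huge_fault handler using the simplified interface:
 * no pgprot argument, vmf_insert_pfn_pmd() derives it from
 * vmf->vma->vm_page_prot.
 */
static vm_fault_t my_dev_huge_fault(struct vm_fault *vmf,
                                    enum page_entry_size pe_size)
{
        unsigned long pfn = my_dev_base_pfn(vmf->vma) + vmf->pgoff; /* illustrative */

        if (pe_size == PE_SIZE_PMD)
                return vmf_insert_pfn_pmd(vmf, pfn_to_pfn_t(pfn),
                                          vmf->flags & FAULT_FLAG_WRITE);

        /* Fall back so the core retries with a PTE-sized fault. */
        return VM_FAULT_FALLBACK;
}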
From patchwork Sun Mar 12 23:40:15 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13172051
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, Andrew Morton
Cc: Thomas Hellström, Michal Hocko, Lorenzo Stoakes, Matthew Wilcox,
    Jason Gunthorpe, Dan Williams, Christian König, Kirill A. Shutemov
Subject: [PATCH 3/3] drm/ttm: Remove comment referencing now-removed vmf_insert_mixed_prot()
Date: Sun, 12 Mar 2023 23:40:15 +0000
X-Mailer: git-send-email 2.39.2

This function no longer exists; however, the discussion of the
prot != vma->vm_page_prot case has been retained and moved to
vmf_insert_pfn_prot(), so refer to that instead.

Signed-off-by: Lorenzo Stoakes
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index ca7744b852f5..5df3edadb808 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -254,7 +254,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
                 * encryption bits. This is because the exact location of the
                 * data may not be known at mmap() time and may also change
                 * at arbitrary times while the data is mmap'ed.
-                * See vmf_insert_mixed_prot() for a discussion.
+                * See vmf_insert_pfn_prot() for a discussion.
                 */
                ret = vmf_insert_pfn_prot(vma, address, pfn, prot);