From patchwork Fri Nov 27 16:41:31 2020
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11936943
From: Daniel Vetter
To: DRI Development, LKML
Cc: linux-samsung-soc@vger.kernel.org, Jan Kara, Kees Cook,
    kvm@vger.kernel.org, Jason Gunthorpe, Daniel Vetter,
    Christoph Hellwig, linux-mm@kvack.org, Jérôme Glisse,
    John Hubbard, Dan Williams, Andrew Morton,
    linux-arm-kernel@lists.infradead.org, linux-media@vger.kernel.org
Subject: [PATCH v7 17/17] mm: add mmu_notifier argument to follow_pfn
Date: Fri, 27 Nov 2020 17:41:31 +0100
Message-Id: <20201127164131.2244124-18-daniel.vetter@ffwll.ch>
In-Reply-To: <20201127164131.2244124-1-daniel.vetter@ffwll.ch>
References: <20201127164131.2244124-1-daniel.vetter@ffwll.ch>

The only safe way for non core/arch code to use follow_pfn() is
together with an mmu_notifier subscription. follow_pfn() is already
marked as _GPL and the kerneldoc explains this restriction.
This patch enforces that by adding an mmu_notifier argument and
verifying that the subscription is registered for the correct
mm_struct. Motivated by discussions with Christoph Hellwig and Jason
Gunthorpe.

Since requiring an mmu_notifier makes it very clear that follow_pfn()
cannot be used on !CONFIG_MMU hardware, remove it from there. The sole
user, kvm, does not exist on such hardware, which also supports this
removal.

Signed-off-by: Daniel Vetter
Reported-by: kernel test robot
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Kees Cook
Cc: Dan Williams
Cc: Andrew Morton
Cc: John Hubbard
Cc: Jérôme Glisse
Cc: Jan Kara
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: kvm@vger.kernel.org
---
v7: Comments from Jason:
- ditch follow_pfn from nommu.c
- simplify mmu_notifier->mm check
---
 include/linux/mm.h  |  3 ++-
 mm/memory.c         | 38 ++++++++++++++++++++++++--------------
 mm/nommu.c          | 27 +++++----------------------
 virt/kvm/kvm_main.c |  4 ++--
 4 files changed, 33 insertions(+), 39 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bb3e926afd91..2a564bfd818c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1651,6 +1651,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long start, unsigned long end);
 
 struct mmu_notifier_range;
+struct mmu_notifier;
 
 void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
@@ -1660,7 +1661,7 @@ int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 		   struct mmu_notifier_range *range, pte_t **ptepp, pmd_t **pmdpp,
 		   spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
-	unsigned long *pfn);
+	unsigned long *pfn, struct mmu_notifier *subscription);
 int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
 		      unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 0db0c5e233fd..a27b9b9c22c2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4789,11 +4789,30 @@ int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 }
 EXPORT_SYMBOL(follow_pte_pmd);
 
+static int __follow_pfn(struct vm_area_struct *vma, unsigned long address,
+			unsigned long *pfn)
+{
+	int ret = -EINVAL;
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		return ret;
+
+	ret = follow_pte(vma->vm_mm, address, &ptep, &ptl);
+	if (ret)
+		return ret;
+	*pfn = pte_pfn(*ptep);
+	pte_unmap_unlock(ptep, ptl);
+	return 0;
+}
+
 /**
  * follow_pfn - look up PFN at a user virtual address
  * @vma: memory mapping
  * @address: user virtual address
  * @pfn: location to store found PFN
+ * @subscription: mmu_notifier subscription for the mm @vma is part of
  *
  * Only IO mappings and raw PFN mappings are allowed. Note that callers must
  * ensure coherency with pte updates by using a &mmu_notifier to follow updates.
@@ -4805,21 +4824,12 @@ EXPORT_SYMBOL(follow_pte_pmd);
  * Return: zero and the pfn at @pfn on success, -ve otherwise.
  */
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
-	unsigned long *pfn)
+	unsigned long *pfn, struct mmu_notifier *subscription)
 {
-	int ret = -EINVAL;
-	spinlock_t *ptl;
-	pte_t *ptep;
-
-	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
-		return ret;
+	if (WARN_ON(subscription->mm != vma->vm_mm))
+		return -EINVAL;
 
-	ret = follow_pte(vma->vm_mm, address, &ptep, &ptl);
-	if (ret)
-		return ret;
-	*pfn = pte_pfn(*ptep);
-	pte_unmap_unlock(ptep, ptl);
-	return 0;
+	return __follow_pfn(vma, address, pfn);
 }
 EXPORT_SYMBOL_GPL(follow_pfn);
 
@@ -4844,7 +4854,7 @@ int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	WARN_ONCE(1, "unsafe follow_pfn usage\n");
 	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
 
-	return follow_pfn(vma, address, pfn);
+	return __follow_pfn(vma, address, pfn);
 }
 EXPORT_SYMBOL(unsafe_follow_pfn);
 
diff --git a/mm/nommu.c b/mm/nommu.c
index 79fc98a6c94a..a1e178401146 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -111,27 +111,6 @@ unsigned int kobjsize(const void *objp)
 	return page_size(page);
 }
 
-/**
- * follow_pfn - look up PFN at a user virtual address
- * @vma: memory mapping
- * @address: user virtual address
- * @pfn: location to store found PFN
- *
- * Only IO mappings and raw PFN mappings are allowed.
- *
- * Returns zero and the pfn at @pfn on success, -ve otherwise.
- */
-int follow_pfn(struct vm_area_struct *vma, unsigned long address,
-	unsigned long *pfn)
-{
-	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
-		return -EINVAL;
-
-	*pfn = address >> PAGE_SHIFT;
-	return 0;
-}
-EXPORT_SYMBOL_GPL(follow_pfn);
-
 /**
  * unsafe_follow_pfn - look up PFN at a user virtual address
  * @vma: memory mapping
@@ -153,7 +132,11 @@ int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	WARN_ONCE(1, "unsafe follow_pfn usage\n");
 	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
 
-	return follow_pfn(vma, address, pfn);
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		return -EINVAL;
+
+	*pfn = address >> PAGE_SHIFT;
+	return 0;
 }
 EXPORT_SYMBOL(unsafe_follow_pfn);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 417f3d470c3e..6f6786524eff 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1891,7 +1891,7 @@ static int hva_to_pfn_remapped(struct kvm *kvm, struct vm_area_struct *vma,
 	unsigned long pfn;
 	int r;
 
-	r = follow_pfn(vma, addr, &pfn);
+	r = follow_pfn(vma, addr, &pfn, &kvm->mmu_notifier);
 	if (r) {
 		/*
 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
@@ -1906,7 +1906,7 @@ static int hva_to_pfn_remapped(struct kvm *kvm, struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pfn(vma, addr, &pfn);
+		r = follow_pfn(vma, addr, &pfn, &kvm->mmu_notifier);
 		if (r)
 			return r;
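
For reference, a rough sketch (not part of the patch; all "my_*" structures,
callbacks and helpers are invented for illustration) of how a driver would be
expected to pair the new follow_pfn() signature with an mmu_notifier
subscription registered on the same mm as the vma:

/*
 * Illustrative sketch only, not from this series. Shows the calling
 * convention the new follow_pfn() expects: the caller holds an
 * mmu_notifier subscription on vma->vm_mm and drops its cached pfn when
 * the notifier fires. All "my_*" names are made up.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched.h>

struct my_pfn_cache {
	struct mmu_notifier notifier;	/* subscription on the target mm */
	unsigned long pfn;		/* last pfn returned by follow_pfn() */
	bool valid;			/* cleared when the ptes change */
};

static void my_invalidate_range(struct mmu_notifier *subscription,
				struct mm_struct *mm,
				unsigned long start, unsigned long end)
{
	struct my_pfn_cache *cache =
		container_of(subscription, struct my_pfn_cache, notifier);

	/* The mapping changed under us: the cached pfn is no longer usable. */
	WRITE_ONCE(cache->valid, false);
}

static const struct mmu_notifier_ops my_notifier_ops = {
	.invalidate_range = my_invalidate_range,
};

static int my_lookup_pfn(struct my_pfn_cache *cache, unsigned long addr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	int ret;

	/* Register first: follow_pfn() now checks notifier->mm == vma->vm_mm. */
	cache->notifier.ops = &my_notifier_ops;
	ret = mmu_notifier_register(&cache->notifier, mm);
	if (ret)
		return ret;

	mmap_read_lock(mm);
	vma = find_vma(mm, addr);
	if (!vma || addr < vma->vm_start) {
		ret = -EFAULT;
		goto out_unlock;
	}

	ret = follow_pfn(vma, addr, &cache->pfn, &cache->notifier);
	cache->valid = !ret;

out_unlock:
	mmap_read_unlock(mm);
	/* mmu_notifier_unregister(&cache->notifier, mm) once the pfn is unused. */
	return ret;
}

A real user additionally has to serialize its own use of the pfn against the
notifier callbacks and unregister the subscription when done; the only thing
the new WARN_ON enforces is that the subscription handed to follow_pfn() is
registered against vma->vm_mm.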