From patchwork Fri Oct 30 10:08:08 2020
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11869057
From: Daniel Vetter
To: DRI Development, LKML
Cc: kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
    linux-media@vger.kernel.org, Daniel Vetter, Christoph Hellwig,
    Jason Gunthorpe, Kees Cook, Dan Williams, Andrew Morton, John Hubbard,
    Jérôme Glisse, Jan Kara
Subject: [PATCH v5 08/15] mm: Add unsafe_follow_pfn
Date: Fri, 30 Oct 2020 11:08:08 +0100
Message-Id: <20201030100815.2269-9-daniel.vetter@ffwll.ch>
In-Reply-To: <20201030100815.2269-1-daniel.vetter@ffwll.ch>
References: <20201030100815.2269-1-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.28.0
X-Mailing-List: linux-samsung-soc@vger.kernel.org

Way back it was a reasonable assumption that iomem mappings never
change the pfn range they point at. But this has changed:

- gpu drivers dynamically manage their memory nowadays, invalidating
  ptes with unmap_mapping_range() when buffers get moved

- contiguous dma allocations have moved from dedicated carveouts to
  cma regions. This means if we miss the unmap the pfn might contain
  pagecache or anon memory (well, anything allocated with
  __GFP_MOVABLE)

- even /dev/mem now invalidates mappings when the kernel requests that
  iomem region when CONFIG_IO_STRICT_DEVMEM is set, see commit
  3234ac664a87 ("/dev/mem: Revoke mappings when a driver claims the
  region")

Accessing pfns obtained from ptes without holding all the locks is
therefore no longer a good idea. Unfortunately there are some users
where this is not fixable (like v4l userptr of iomem mappings) or
where fixing it involves a pile of work (vfio type1 iommu). For now,
annotate these as unsafe and splat appropriately.

This patch adds unsafe_follow_pfn(), which later patches will then
roll out to all appropriate places. A sketch of the safe pattern that
converted callers should use instead follows below.

Also mark up follow_pfn() as EXPORT_SYMBOL_GPL. The only safe way for
drivers/modules to use it is together with an mmu_notifier, and that's
all _GPL stuff.

Signed-off-by: Daniel Vetter
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Kees Cook
Cc: Dan Williams
Cc: Andrew Morton
Cc: John Hubbard
Cc: Jérôme Glisse
Cc: Jan Kara
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: kvm@vger.kernel.org
---
v5: Suggestions from Christoph:
- reindent for less weirdness
- use IS_ENABLED instead of #ifdef
- same checks for nommu, for consistency
- EXPORT_SYMBOL_GPL for follow_pfn
- kerneldoc was already updated in previous versions to explain when
  follow_pfn can be used safely
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 34 ++++++++++++++++++++++++++++++++--
 mm/nommu.c         | 27 ++++++++++++++++++++++++++-
 security/Kconfig   | 13 +++++++++++++
 4 files changed, 73 insertions(+), 3 deletions(-)
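
For reference, the safe pattern the commit message alludes to, pairing
follow_pfn() with an mmu_notifier so the cached pfn is dropped when the
mapping changes, might look roughly like the sketch below. This is an
illustration only, not part of the patch; all my_* names are
hypothetical and locking/teardown are elided:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct my_pfn_map {
        struct mmu_notifier notifier;
        unsigned long pfn;
        bool valid;
};

/*
 * Called before ptes in the range are invalidated; the cached pfn must
 * not be used afterwards. A real implementation would also check that
 * the range actually overlaps the tracked address.
 */
static int my_invalidate(struct mmu_notifier *mn,
                         const struct mmu_notifier_range *range)
{
        struct my_pfn_map *map = container_of(mn, struct my_pfn_map,
                                              notifier);

        map->valid = false; /* real code needs locking against users */
        return 0;
}

static const struct mmu_notifier_ops my_mn_ops = {
        .invalidate_range_start = my_invalidate,
};

/* Caller is assumed to hold mmap_lock for read so the vma stays valid. */
static int my_cache_pfn(struct my_pfn_map *map, struct vm_area_struct *vma,
                        unsigned long addr)
{
        int ret;

        map->notifier.ops = &my_mn_ops;
        ret = mmu_notifier_register(&map->notifier, vma->vm_mm);
        if (ret)
                return ret;

        ret = follow_pfn(vma, addr, &map->pfn);
        map->valid = (ret == 0);
        return ret;
}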

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 83d0be101a38..d0fe8bf46a9d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1661,6 +1661,8 @@ int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags, unsigned long *prot, resource_size_t *phys);
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index ac32039ce941..0db0c5e233fd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4795,7 +4795,12 @@ EXPORT_SYMBOL(follow_pte_pmd);
  * @address: user virtual address
  * @pfn: location to store found PFN
  *
- * Only IO mappings and raw PFN mappings are allowed.
+ * Only IO mappings and raw PFN mappings are allowed. Note that callers must
+ * ensure coherency with pte updates by using a &mmu_notifier to follow updates.
+ * If this is not feasible, or the access to the @pfn is only very short term,
+ * use follow_pte_pmd() instead and hold the pagetable lock for the duration
+ * of the access. Any caller not following these requirements must use
+ * unsafe_follow_pfn() instead.
  *
  * Return: zero and the pfn at @pfn on success, -ve otherwise.
  */
@@ -4816,7 +4821,32 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	pte_unmap_unlock(ptep, ptl);
 	return 0;
 }
-EXPORT_SYMBOL(follow_pfn);
+EXPORT_SYMBOL_GPL(follow_pfn);
+
+/**
+ * unsafe_follow_pfn - look up PFN at a user virtual address
+ * @vma: memory mapping
+ * @address: user virtual address
+ * @pfn: location to store found PFN
+ *
+ * Only IO mappings and raw PFN mappings are allowed.
+ *
+ * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ */
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+	unsigned long *pfn)
+{
+	if (IS_ENABLED(CONFIG_STRICT_FOLLOW_PFN)) {
+		pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
+		return -EINVAL;
+	}
+
+	WARN_ONCE(1, "unsafe follow_pfn usage\n");
+	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+
+	return follow_pfn(vma, address, pfn);
+}
+EXPORT_SYMBOL(unsafe_follow_pfn);
 
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 int follow_phys(struct vm_area_struct *vma,
diff --git a/mm/nommu.c b/mm/nommu.c
index 0faf39b32cdb..79fc98a6c94a 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -130,7 +130,32 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	*pfn = address >> PAGE_SHIFT;
 	return 0;
 }
-EXPORT_SYMBOL(follow_pfn);
+EXPORT_SYMBOL_GPL(follow_pfn);
+
+/**
+ * unsafe_follow_pfn - look up PFN at a user virtual address
+ * @vma: memory mapping
+ * @address: user virtual address
+ * @pfn: location to store found PFN
+ *
+ * Only IO mappings and raw PFN mappings are allowed.
+ *
+ * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ */
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+	unsigned long *pfn)
+{
+	if (IS_ENABLED(CONFIG_STRICT_FOLLOW_PFN)) {
+		pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
+		return -EINVAL;
+	}
+
+	WARN_ONCE(1, "unsafe follow_pfn usage\n");
+	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+
+	return follow_pfn(vma, address, pfn);
+}
+EXPORT_SYMBOL(unsafe_follow_pfn);
 
 LIST_HEAD(vmap_area_list);
 
diff --git a/security/Kconfig b/security/Kconfig
index 7561f6f99f1d..48945402e103 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -230,6 +230,19 @@ config STATIC_USERMODEHELPER_PATH
 	  If you wish for all usermode helper programs to be disabled,
 	  specify an empty string here (i.e. "").
 
+config STRICT_FOLLOW_PFN
+	bool "Disable unsafe use of follow_pfn"
+	depends on MMU
+	help
+	  Some functionality in the kernel follows userspace mappings to iomem
+	  ranges in an unsafe manner. Examples include v4l userptr for zero-copy
+	  buffer sharing.
+
+	  If this option is switched on, such access is rejected. Only disable
+	  this option if you must run userspace which requires such access.
+
+	  If in doubt, say Y.
+
 source "security/selinux/Kconfig"
 source "security/smack/Kconfig"
 source "security/tomoyo/Kconfig"
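
As a rough usage sketch, a legacy userptr-style path that cannot be
converted to the mmu_notifier pattern yet would call the new helper as
below; my_get_userptr_pfn() is hypothetical, and the caller is assumed
to hold mmap_lock for read:

/*
 * Behaves like the old follow_pfn() call, except that
 * unsafe_follow_pfn() warns once and taints the kernel with TAINT_USER,
 * and refuses outright with -EINVAL when CONFIG_STRICT_FOLLOW_PFN=y.
 */
static int my_get_userptr_pfn(struct vm_area_struct *vma,
                              unsigned long addr, unsigned long *pfn)
{
        /* Only IO and raw PFN mappings make sense here. */
        if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
                return -EINVAL;

        return unsafe_follow_pfn(vma, addr, pfn);
}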