Message ID | 20201026105818.2585306-9-daniel.vetter@ffwll.ch (mailing list archive)
---|---
State | Not Applicable
Series | follow_pfn and other iomap races
> +int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
> +	unsigned long *pfn)

The one tab indent here looks weird, normally this would be two tabs
or aligned after the opening brace.

> +{
> +#ifdef CONFIG_STRICT_FOLLOW_PFN
> +	pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
> +	return -EINVAL;
> +#else
> +	WARN_ONCE(1, "unsafe follow_pfn usage\n");
> +	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
> +
> +	return follow_pfn(vma, address, pfn);
> +#endif

Wouldn't this be a pretty good use case for "if (IS_ENABLED(...))"?

Also I'd expect the inverse polarity of the config option, that is an
UNSAFE_FOLLOW_PFN option to enable the unsafe behavior.

> +/**
> + * unsafe_follow_pfn - look up PFN at a user virtual address
> + * @vma: memory mapping
> + * @address: user virtual address
> + * @pfn: location to store found PFN
> + *
> + * Only IO mappings and raw PFN mappings are allowed.
> + *
> + * Returns zero and the pfn at @pfn on success, -ve otherwise.
> + */
> +int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
> +	unsigned long *pfn)
> +{
> +	return follow_pfn(vma, address, pfn);
> +}
> +EXPORT_SYMBOL(unsafe_follow_pfn);

Any reason this doesn't use the warn and disable logic?
On Thu, Oct 29, 2020 at 9:56 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> > +int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
> > +	unsigned long *pfn)
>
> The one tab indent here looks weird, normally this would be two tabs
> or aligned after the opening brace.
>
> > +{
> > +#ifdef CONFIG_STRICT_FOLLOW_PFN
> > +	pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
> > +	return -EINVAL;
> > +#else
> > +	WARN_ONCE(1, "unsafe follow_pfn usage\n");
> > +	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
> > +
> > +	return follow_pfn(vma, address, pfn);
> > +#endif
>
> Wouldn't this be a pretty good use case for "if (IS_ENABLED(...))"?
>
> Also I'd expect the inverse polarity of the config option, that is an
> UNSAFE_FOLLOW_PFN option to enable the unsafe behavior.

Was just about to send out v5, will apply your suggestions for that
using IS_ENABLED.

Wrt negative or positive Kconfig, I was following the STRICT_DEVMEM
symbol as precedent. But it's easy to invert if there's strong feeling
the other way round, I'm not attached to either.

> > +/**
> > + * unsafe_follow_pfn - look up PFN at a user virtual address
> > + * @vma: memory mapping
> > + * @address: user virtual address
> > + * @pfn: location to store found PFN
> > + *
> > + * Only IO mappings and raw PFN mappings are allowed.
> > + *
> > + * Returns zero and the pfn at @pfn on success, -ve otherwise.
> > + */
> > +int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
> > +	unsigned long *pfn)
> > +{
> > +	return follow_pfn(vma, address, pfn);
> > +}
> > +EXPORT_SYMBOL(unsafe_follow_pfn);
>
> Any reason this doesn't use the warn and disable logic?

I figured without an mmu there aren't many guarantees anyway. But I
guess I can put it in here too, for consistency.
-Daniel
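For reference, a rough sketch of what the IS_ENABLED() rework discussed above could look like; this is only an illustration of the suggestion, not the actual v5 code:

```c
int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
		      unsigned long *pfn)
{
	/* Strict mode: refuse the lookup entirely. */
	if (IS_ENABLED(CONFIG_STRICT_FOLLOW_PFN)) {
		pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
		return -EINVAL;
	}

	/* Legacy mode: warn once, taint the kernel, then fall through. */
	WARN_ONCE(1, "unsafe follow_pfn usage\n");
	add_taint(TAINT_USER, LOCKDEP_STILL_OK);

	return follow_pfn(vma, address, pfn);
}
```

The advantage of IS_ENABLED() over #ifdef is that both branches are always parsed and type-checked by the compiler, regardless of how the config option is set.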
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2a16631c1fda..ec8c90928fc9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1653,6 +1653,8 @@ int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags, unsigned long *prot, resource_size_t *phys);
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index 1b46eae3b703..9a2ec07ff20b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4788,7 +4788,12 @@ EXPORT_SYMBOL(follow_pte_pmd);
  * @address: user virtual address
  * @pfn: location to store found PFN
  *
- * Only IO mappings and raw PFN mappings are allowed.
+ * Only IO mappings and raw PFN mappings are allowed. Note that callers must
+ * ensure coherency with pte updates by using a &mmu_notifier to follow updates.
+ * If this is not feasible, or the access to the @pfn is only very short term,
+ * use follow_pte_pmd() instead and hold the pagetable lock for the duration
+ * of the access. Any caller not following these requirements must use
+ * unsafe_follow_pfn() instead.
  *
  * Return: zero and the pfn at @pfn on success, -ve otherwise.
  */
@@ -4811,6 +4816,31 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 }
 EXPORT_SYMBOL(follow_pfn);
 
+/**
+ * unsafe_follow_pfn - look up PFN at a user virtual address
+ * @vma: memory mapping
+ * @address: user virtual address
+ * @pfn: location to store found PFN
+ *
+ * Only IO mappings and raw PFN mappings are allowed.
+ *
+ * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ */
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+	unsigned long *pfn)
+{
+#ifdef CONFIG_STRICT_FOLLOW_PFN
+	pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
+	return -EINVAL;
+#else
+	WARN_ONCE(1, "unsafe follow_pfn usage\n");
+	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+
+	return follow_pfn(vma, address, pfn);
+#endif
+}
+EXPORT_SYMBOL(unsafe_follow_pfn);
+
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags,
diff --git a/mm/nommu.c b/mm/nommu.c
index 75a327149af1..3db2910f0d64 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -132,6 +132,23 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 }
 EXPORT_SYMBOL(follow_pfn);
 
+/**
+ * unsafe_follow_pfn - look up PFN at a user virtual address
+ * @vma: memory mapping
+ * @address: user virtual address
+ * @pfn: location to store found PFN
+ *
+ * Only IO mappings and raw PFN mappings are allowed.
+ *
+ * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ */
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+	unsigned long *pfn)
+{
+	return follow_pfn(vma, address, pfn);
+}
+EXPORT_SYMBOL(unsafe_follow_pfn);
+
 LIST_HEAD(vmap_area_list);
 
 void vfree(const void *addr)
diff --git a/security/Kconfig b/security/Kconfig
index 7561f6f99f1d..48945402e103 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -230,6 +230,19 @@ config STATIC_USERMODEHELPER_PATH
 	  If you wish for all usermode helper programs to be disabled,
 	  specify an empty string here (i.e. "").
 
+config STRICT_FOLLOW_PFN
+	bool "Disable unsafe use of follow_pfn"
+	depends on MMU
+	help
+	  Some functionality in the kernel follows userspace mappings to iomem
+	  ranges in an unsafe manner. Examples include v4l userptr for zero-copy
+	  buffer sharing.
+
+	  If this option is switched on, such access is rejected. Only enable
+	  this option when you must run userspace which requires this.
+
+	  If in doubt, say Y.
+
 source "security/selinux/Kconfig"
 source "security/smack/Kconfig"
 source "security/tomoyo/Kconfig"
```
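For illustration only, a rough sketch of what a not-yet-converted caller might look like once this patch is applied; example_lookup_pfn() is a made-up helper, not part of the series, and assumes the usual mmap-lock rules for looking up a VMA:

```c
/*
 * Hypothetical example of an unconverted user: it resolves the PFN
 * backing a userspace address without registering an mmu_notifier,
 * so it must go through unsafe_follow_pfn() and will get -EINVAL
 * once CONFIG_STRICT_FOLLOW_PFN is enabled.
 */
static int example_lookup_pfn(struct mm_struct *mm, unsigned long uaddr,
			      unsigned long *pfn)
{
	struct vm_area_struct *vma;
	int ret = -EINVAL;

	mmap_read_lock(mm);
	vma = find_vma(mm, uaddr);
	if (vma && vma->vm_start <= uaddr)
		ret = unsafe_follow_pfn(vma, uaddr, pfn);
	mmap_read_unlock(mm);

	return ret;
}
```

Callers that only need the PFN while holding the page-table lock, or that track invalidations with an mmu_notifier, should keep using follow_pfn() or follow_pte_pmd() as described in the updated kerneldoc above.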