From patchwork Fri Oct 23 12:21:53 2020
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11853021
From: Daniel Vetter
To: DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Greg Kroah-Hartman,
 Daniel Vetter, Jason Gunthorpe, Kees Cook, Dan Williams, Andrew Morton,
 John Hubbard, Jérôme Glisse, Jan Kara, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 linux-media@vger.kernel.org, Arnd Bergmann, David Hildenbrand,
 "Rafael J. Wysocki"
Subject: [PATCH 42/65] resource: Move devmem revoke code to resource framework
Date: Fri, 23 Oct 2020 14:21:53 +0200
Message-Id: <20201023122216.2373294-42-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201023122216.2373294-1-daniel.vetter@ffwll.ch>
References: <20201021163242.1458885-1-daniel.vetter@ffwll.ch>
 <20201023122216.2373294-1-daniel.vetter@ffwll.ch>
MIME-Version: 1.0

We want all iomem mmaps to consistently revoke PTEs when the kernel
takes over the range and CONFIG_IO_STRICT_DEVMEM is enabled. This
includes the PCI BAR mmaps available through procfs and sysfs, which
currently do not revoke mappings.

To prepare for this, move the revoke code from the /dev/mem driver to
kernel/resource.c.

Reviewed-by: Greg Kroah-Hartman
Signed-off-by: Daniel Vetter
Cc: Jason Gunthorpe
Cc: Kees Cook
Cc: Dan Williams
Cc: Andrew Morton
Cc: John Hubbard
Cc: Jérôme Glisse
Cc: Jan Kara
Cc: Dan Williams
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: Arnd Bergmann
Cc: Greg Kroah-Hartman
Cc: Daniel Vetter
Cc: David Hildenbrand
Cc: "Rafael J. Wysocki"
Wysocki" Signed-off-by: Daniel Vetter --- v3: - add barrier for consistency and document why we don't have to check for NULL (Jason) v4 - Adjust comments to reflect the general nature of this iomem revoke code now (Dan) --- drivers/char/mem.c | 85 +--------------------------------- include/linux/ioport.h | 6 +-- kernel/resource.c | 101 ++++++++++++++++++++++++++++++++++++++++- 3 files changed, 102 insertions(+), 90 deletions(-) diff --git a/drivers/char/mem.c b/drivers/char/mem.c index 5502f56f3655..53338aad8d28 100644 --- a/drivers/char/mem.c +++ b/drivers/char/mem.c @@ -31,9 +31,6 @@ #include #include #include -#include -#include -#include #ifdef CONFIG_IA64 # include @@ -809,42 +806,6 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig) return ret; } -static struct inode *devmem_inode; - -#ifdef CONFIG_IO_STRICT_DEVMEM -void revoke_devmem(struct resource *res) -{ - /* pairs with smp_store_release() in devmem_init_inode() */ - struct inode *inode = smp_load_acquire(&devmem_inode); - - /* - * Check that the initialization has completed. Losing the race - * is ok because it means drivers are claiming resources before - * the fs_initcall level of init and prevent /dev/mem from - * establishing mappings. - */ - if (!inode) - return; - - /* - * The expectation is that the driver has successfully marked - * the resource busy by this point, so devmem_is_allowed() - * should start returning false, however for performance this - * does not iterate the entire resource range. - */ - if (devmem_is_allowed(PHYS_PFN(res->start)) && - devmem_is_allowed(PHYS_PFN(res->end))) { - /* - * *cringe* iomem=relaxed says "go ahead, what's the - * worst that can happen?" - */ - return; - } - - unmap_mapping_range(inode->i_mapping, res->start, resource_size(res), 1); -} -#endif - static int open_port(struct inode *inode, struct file *filp) { int rc; @@ -864,7 +825,7 @@ static int open_port(struct inode *inode, struct file *filp) * revocations when drivers want to take over a /dev/mem mapped * range. */ - filp->f_mapping = inode->i_mapping; + filp->f_mapping = iomem_get_mapping(); return 0; } @@ -995,48 +956,6 @@ static char *mem_devnode(struct device *dev, umode_t *mode) static struct class *mem_class; -static int devmem_fs_init_fs_context(struct fs_context *fc) -{ - return init_pseudo(fc, DEVMEM_MAGIC) ? 0 : -ENOMEM; -} - -static struct file_system_type devmem_fs_type = { - .name = "devmem", - .owner = THIS_MODULE, - .init_fs_context = devmem_fs_init_fs_context, - .kill_sb = kill_anon_super, -}; - -static int devmem_init_inode(void) -{ - static struct vfsmount *devmem_vfs_mount; - static int devmem_fs_cnt; - struct inode *inode; - int rc; - - rc = simple_pin_fs(&devmem_fs_type, &devmem_vfs_mount, &devmem_fs_cnt); - if (rc < 0) { - pr_err("Cannot mount /dev/mem pseudo filesystem: %d\n", rc); - return rc; - } - - inode = alloc_anon_inode(devmem_vfs_mount->mnt_sb); - if (IS_ERR(inode)) { - rc = PTR_ERR(inode); - pr_err("Cannot allocate inode for /dev/mem: %d\n", rc); - simple_release_fs(&devmem_vfs_mount, &devmem_fs_cnt); - return rc; - } - - /* - * Publish /dev/mem initialized. - * Pairs with smp_load_acquire() in revoke_devmem(). 
-	 */
-	smp_store_release(&devmem_inode, inode);
-
-	return 0;
-}
-
 static int __init chr_dev_init(void)
 {
 	int minor;
@@ -1058,8 +977,6 @@ static int __init chr_dev_init(void)
 		 */
 		if ((minor == DEVPORT_MINOR) && !arch_has_dev_port())
 			continue;
-		if ((minor == DEVMEM_MINOR) && devmem_init_inode() != 0)
-			continue;
 
 		device_create(mem_class, NULL, MKDEV(MEM_MAJOR, minor),
 			      NULL, devlist[minor].name);
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 6c2b06fe8beb..8ffb61b36606 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -302,11 +302,7 @@ struct resource *devm_request_free_mem_region(struct device *dev,
 struct resource *request_free_mem_region(struct resource *base,
 		unsigned long size, const char *name);
 
-#ifdef CONFIG_IO_STRICT_DEVMEM
-void revoke_devmem(struct resource *res);
-#else
-static inline void revoke_devmem(struct resource *res) { };
-#endif
+extern struct address_space *iomem_get_mapping(void);
 
 #endif /* __ASSEMBLY__ */
 #endif /* _LINUX_IOPORT_H */
diff --git a/kernel/resource.c b/kernel/resource.c
index 841737bbda9e..a800acbc578c 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -18,12 +18,15 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
+#include
 
 #include
@@ -1112,6 +1115,58 @@ resource_size_t resource_alignment(struct resource *res)
 
 static DECLARE_WAIT_QUEUE_HEAD(muxed_resource_wait);
 
+static struct inode *iomem_inode;
+
+#ifdef CONFIG_IO_STRICT_DEVMEM
+static void revoke_iomem(struct resource *res)
+{
+	/* pairs with smp_store_release() in iomem_init_inode() */
+	struct inode *inode = smp_load_acquire(&iomem_inode);
+
+	/*
+	 * Check that the initialization has completed. Losing the race
+	 * is ok because it means drivers are claiming resources before
+	 * the fs_initcall level of init and prevent iomem_get_mapping users
+	 * from establishing mappings.
+	 */
+	if (!inode)
+		return;
+
+	/*
+	 * The expectation is that the driver has successfully marked
+	 * the resource busy by this point, so devmem_is_allowed()
+	 * should start returning false, however for performance this
+	 * does not iterate the entire resource range.
+	 */
+	if (devmem_is_allowed(PHYS_PFN(res->start)) &&
+	    devmem_is_allowed(PHYS_PFN(res->end))) {
+		/*
+		 * *cringe* iomem=relaxed says "go ahead, what's the
+		 * worst that can happen?"
+		 */
+		return;
+	}
+
+	unmap_mapping_range(inode->i_mapping, res->start, resource_size(res), 1);
+}
+struct address_space *iomem_get_mapping(void)
+{
+	/*
+	 * This function is only called from file open paths, hence guaranteed
+	 * that fs_initcalls have completed and no need to check for NULL. But
+	 * since revoke_iomem can be called before the initcall we still need
+	 * the barrier to appease checkers.
+	 */
+	return smp_load_acquire(&iomem_inode)->i_mapping;
+}
+#else
+static void revoke_iomem(struct resource *res) {}
+struct address_space *iomem_get_mapping(void)
+{
+	return NULL;
+}
+#endif
+
 /**
  * __request_region - create a new busy resource region
  * @parent: parent resource descriptor
@@ -1179,7 +1234,7 @@ struct resource * __request_region(struct resource *parent,
 	write_unlock(&resource_lock);
 
 	if (res && orig_parent == &iomem_resource)
-		revoke_devmem(res);
+		revoke_iomem(res);
 
 	return res;
 }
@@ -1713,4 +1768,48 @@ static int __init strict_iomem(char *str)
 	return 1;
 }
 
+static int iomem_fs_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, DEVMEM_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type iomem_fs_type = {
+	.name		= "iomem",
+	.owner		= THIS_MODULE,
+	.init_fs_context = iomem_fs_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int __init iomem_init_inode(void)
+{
+	static struct vfsmount *iomem_vfs_mount;
+	static int iomem_fs_cnt;
+	struct inode *inode;
+	int rc;
+
+	rc = simple_pin_fs(&iomem_fs_type, &iomem_vfs_mount, &iomem_fs_cnt);
+	if (rc < 0) {
+		pr_err("Cannot mount iomem pseudo filesystem: %d\n", rc);
+		return rc;
+	}
+
+	inode = alloc_anon_inode(iomem_vfs_mount->mnt_sb);
+	if (IS_ERR(inode)) {
+		rc = PTR_ERR(inode);
+		pr_err("Cannot allocate inode for iomem: %d\n", rc);
+		simple_release_fs(&iomem_vfs_mount, &iomem_fs_cnt);
+		return rc;
+	}
+
+	/*
+	 * Publish iomem revocation inode initialized.
+	 * Pairs with smp_load_acquire() in revoke_iomem().
+	 */
+	smp_store_release(&iomem_inode, inode);
+
+	return 0;
+}
+
+fs_initcall(iomem_init_inode);
+
 __setup("iomem=", strict_iomem);
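
For anyone who wants to poke at this from userspace, below is a minimal
test sketch (illustration only, not part of this patch). It maps a page
of /dev/mem and checks whether the mapping is revoked (i.e. the next
access raises SIGBUS) once a driver claims the range while
CONFIG_IO_STRICT_DEVMEM is set. PHYS_ADDR is a made-up placeholder; pick
a page that is legal to map on your machine and that a driver will later
claim.

/* devmem-revoke-test.c: hypothetical sketch, not part of this patch. */
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR 0xfed00000UL	/* placeholder, adjust for your system */

static sigjmp_buf env;

static void on_sigbus(int sig)
{
	(void)sig;
	siglongjmp(env, 1);
}

/* Return 0 and fill *val on success, -1 if the access raised SIGBUS. */
static int try_read(volatile uint32_t *p, uint32_t *val)
{
	if (sigsetjmp(env, 1))
		return -1;
	*val = p[0];
	return 0;
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	volatile uint32_t *p;
	uint32_t val;
	int fd;

	signal(SIGBUS, on_sigbus);

	fd = open("/dev/mem", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}
	p = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, PHYS_ADDR);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (try_read(p, &val))
		printf("initial read already faults (range busy?)\n");
	else
		printf("initial read: 0x%x\n", val);

	printf("now have a driver request_mem_region() this range, then press enter\n");
	getchar();

	if (try_read(p, &val))
		printf("read now faults with SIGBUS: mapping was revoked\n");
	else
		printf("read still works: 0x%x (no revocation)\n", val);

	munmap((void *)p, page);
	close(fd);
	return 0;
}

Build with "cc -o devmem-revoke-test devmem-revoke-test.c" and run as
root; without CONFIG_IO_STRICT_DEVMEM the second read is expected to
keep working.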
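
The kernel side can be exercised with a trivial (again hypothetical)
module that just claims the range; with this patch the request goes
through __request_region() and, when CONFIG_IO_STRICT_DEVMEM is enabled,
ends up in revoke_iomem(), which unmaps any existing /dev/mem mappings
of that range.

/* revoke-demo.c: hypothetical companion module, not part of this patch. */
#include <linux/module.h>
#include <linux/ioport.h>

#define DEMO_PHYS	0xfed00000UL	/* placeholder, match the userspace test */
#define DEMO_SIZE	0x1000UL

static struct resource *demo_res;

static int __init revoke_demo_init(void)
{
	/* Marks the range busy; this is what triggers the pte revocation. */
	demo_res = request_mem_region(DEMO_PHYS, DEMO_SIZE, "revoke-demo");
	if (!demo_res)
		return -EBUSY;
	pr_info("revoke-demo: claimed %pR\n", demo_res);
	return 0;
}

static void __exit revoke_demo_exit(void)
{
	release_mem_region(DEMO_PHYS, DEMO_SIZE);
}

module_init(revoke_demo_init);
module_exit(revoke_demo_exit);
MODULE_LICENSE("GPL");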
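
On the v3 note about the barrier: the iomem_inode handling is plain
release/acquire publication. Here is a small userspace C11 analogue
(nothing below is kernel code, just an illustration of the ordering
argument) of why a reader that observes the non-NULL pointer also
observes a fully initialized object, while revoke_iomem() can simply
bail out if it loses the race.

/* publish-demo.c: illustration of the release/acquire pairing only. */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

struct obj { int mapping; };

static struct obj the_obj;
static _Atomic(struct obj *) published;	/* plays the role of iomem_inode */

/* Plays the role of iomem_init_inode(), the fs_initcall. */
static void *init_thread(void *arg)
{
	(void)arg;
	the_obj.mapping = 42;	/* fully initialize before publishing */
	atomic_store_explicit(&published, &the_obj, memory_order_release);
	return NULL;
}

/* Plays the role of revoke_iomem(): may run before or after init. */
static void *revoke_thread(void *arg)
{
	struct obj *o;

	(void)arg;
	o = atomic_load_explicit(&published, memory_order_acquire);
	if (!o)		/* lost the race: nothing published yet, nothing to do */
		return NULL;
	/* The acquire/release pairing guarantees ->mapping is 42 here. */
	printf("saw fully initialized object: %d\n", o->mapping);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, init_thread, NULL);
	pthread_create(&b, NULL, revoke_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

iomem_get_mapping() on the other hand only runs from file open paths
after fs_initcalls have completed, so it never sees NULL, but it still
uses the acquire load to pair with the release store.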