From patchwork Mon Nov 25 04:19:55 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11259721
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie,
Miller" , Ira Weiny , Jan Kara , Jason Gunthorpe , Jens Axboe , Jonathan Corbet , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Magnus Karlsson , Mauro Carvalho Chehab , Michael Ellerman , Michal Hocko , Mike Kravetz , Paul Mackerras , Shuah Khan , Vlastimil Babka , , , , , , , , , , , , , LKML , John Hubbard , Christoph Hellwig Subject: [PATCH 03/19] mm: Cleanup __put_devmap_managed_page() vs ->page_free() Date: Sun, 24 Nov 2019 20:19:55 -0800 Message-ID: <20191125042011.3002372-4-jhubbard@nvidia.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191125042011.3002372-1-jhubbard@nvidia.com> References: <20191125042011.3002372-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1574655615; bh=EJqkD7sGkSGDaQsT1wW3i8i5ps2YtP/qJv7KgteSHaI=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Type:Content-Transfer-Encoding; b=JJGpn2FgHulb2OJcLpWw9cPkUnx+oQsuTuCdzLGpJhi6y3xcWmuDzAyyxHuy5v0lw YzKvmBMXJXe5wA+qgmquG5qxJqWSFJcmQxEDl57A0NsLGeI4+sB+SGii5Cqxwoa6zV U/oh+r44YeP31sU1tirW1AkbiZ57y3BnI2uclsHpuK1goNR7y3UqbzwVlH3HVUOZU4 6ifBk2wxc59FyGTmhqsVsrrgPoZPq2FG2eLktTHFYqYNW86s6BuGloUvA/7ymOQPpt WMoDd9nGK2Jn2rqKBqpQJl5F1EMgOpdb6gcOFzIXsrdZ0wZGCpDKEsQk1xj54E/0UK zsU9uh2q2oFSQ== Sender: linux-media-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Dan Williams After the removal of the device-public infrastructure there are only 2 ->page_free() call backs in the kernel. One of those is a device-private callback in the nouveau driver, the other is a generic wakeup needed in the DAX case. In the hopes that all ->page_free() callbacks can be migrated to common core kernel functionality, move the device-private specific actions in __put_devmap_managed_page() under the is_device_private_page() conditional, including the ->page_free() callback. For the other page types just open-code the generic wakeup. Yes, the wakeup is only needed in the MEMORY_DEVICE_FSDAX case, but it does no harm in the MEMORY_DEVICE_DEVDAX and MEMORY_DEVICE_PCI_P2PDMA case. 
Reviewed-by: Christoph Hellwig
Reviewed-by: Jérôme Glisse
Cc: Jan Kara
Cc: Ira Weiny
Signed-off-by: Dan Williams
Signed-off-by: John Hubbard
---
 drivers/nvdimm/pmem.c |  6 ----
 mm/memremap.c         | 80 ++++++++++++++++++++++++-------------------
 2 files changed, 44 insertions(+), 42 deletions(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index ad8e4df1282b..4eae441f86c9 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -337,13 +337,7 @@ static void pmem_release_disk(void *__pmem)
 	put_disk(pmem->disk);
 }
 
-static void pmem_pagemap_page_free(struct page *page)
-{
-	wake_up_var(&page->_refcount);
-}
-
 static const struct dev_pagemap_ops fsdax_pagemap_ops = {
-	.page_free		= pmem_pagemap_page_free,
 	.kill			= pmem_pagemap_kill,
 	.cleanup		= pmem_pagemap_cleanup,
 };
diff --git a/mm/memremap.c b/mm/memremap.c
index 022e78e68ea0..e1678e575d9f 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -27,7 +27,8 @@ static void devmap_managed_enable_put(void)
 
 static int devmap_managed_enable_get(struct dev_pagemap *pgmap)
 {
-	if (!pgmap->ops || !pgmap->ops->page_free) {
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE &&
+	    (!pgmap->ops || !pgmap->ops->page_free)) {
 		WARN(1, "Missing page_free method\n");
 		return -EINVAL;
 	}
@@ -444,44 +445,51 @@ void __put_devmap_managed_page(struct page *page)
 {
 	int count = page_ref_dec_return(page);
 
-	/*
-	 * If refcount is 1 then page is freed and refcount is stable as nobody
-	 * holds a reference on the page.
-	 */
-	if (count == 1) {
-		/* Clear Active bit in case of parallel mark_page_accessed */
-		__ClearPageActive(page);
-		__ClearPageWaiters(page);
+	/* still busy */
+	if (count > 1)
+		return;
 
-		mem_cgroup_uncharge(page);
+	/* only triggered by the dev_pagemap shutdown path */
+	if (count == 0) {
+		__put_page(page);
+		return;
+	}
 
-		/*
-		 * When a device_private page is freed, the page->mapping field
-		 * may still contain a (stale) mapping value. For example, the
-		 * lower bits of page->mapping may still identify the page as
-		 * an anonymous page. Ultimately, this entire field is just
-		 * stale and wrong, and it will cause errors if not cleared.
-		 * One example is:
-		 *
-		 *  migrate_vma_pages()
-		 *    migrate_vma_insert_page()
-		 *      page_add_new_anon_rmap()
-		 *        __page_set_anon_rmap()
-		 *          ...checks page->mapping, via PageAnon(page) call,
-		 *            and incorrectly concludes that the page is an
-		 *            anonymous page. Therefore, it incorrectly,
-		 *            silently fails to set up the new anon rmap.
-		 *
-		 * For other types of ZONE_DEVICE pages, migration is either
-		 * handled differently or not done at all, so there is no need
-		 * to clear page->mapping.
-		 */
-		if (is_device_private_page(page))
-			page->mapping = NULL;
+	/* notify page idle for dax */
+	if (!is_device_private_page(page)) {
+		wake_up_var(&page->_refcount);
+		return;
+	}
 
-		page->pgmap->ops->page_free(page);
-	} else if (!count)
-		__put_page(page);
+	/* Clear Active bit in case of parallel mark_page_accessed */
+	__ClearPageActive(page);
+	__ClearPageWaiters(page);
+
+	mem_cgroup_uncharge(page);
+
+	/*
+	 * When a device_private page is freed, the page->mapping field
+	 * may still contain a (stale) mapping value. For example, the
+	 * lower bits of page->mapping may still identify the page as an
+	 * anonymous page. Ultimately, this entire field is just stale
+	 * and wrong, and it will cause errors if not cleared.  One
+	 * example is:
+	 *
+	 *  migrate_vma_pages()
+	 *    migrate_vma_insert_page()
+	 *      page_add_new_anon_rmap()
+	 *        __page_set_anon_rmap()
+	 *          ...checks page->mapping, via PageAnon(page) call,
+	 *            and incorrectly concludes that the page is an
+	 *            anonymous page. Therefore, it incorrectly,
+	 *            silently fails to set up the new anon rmap.
+	 *
+	 * For other types of ZONE_DEVICE pages, migration is either
+	 * handled differently or not done at all, so there is no need
+	 * to clear page->mapping.
+	 */
+	page->mapping = NULL;
+	page->pgmap->ops->page_free(page);
 }
 EXPORT_SYMBOL(__put_devmap_managed_page);
 #endif /* CONFIG_DEV_PAGEMAP_OPS */