From patchwork Thu Mar 18 04:08:02 2021
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 12147287
Subject: [PATCH 0/3] mm, pmem: Force unmap pmem on surprise remove
From: Dan Williams
To: linux-mm@kvack.org, linux-nvdimm@lists.01.org
Cc: Ira Weiny, Jason Gunthorpe, Dave Jiang, Vishal Verma, Jan Kara,
 Christoph Hellwig, "Darrick J. Wong", Dave Chinner, Matthew Wilcox,
 Naoya Horiguchi, Shiyang Ruan, Andrew Morton,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 akpm@linux-foundation.org
Date: Wed, 17 Mar 2021 21:08:02 -0700
Message-ID: <161604048257.1463742.1374527716381197629.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c

Summary:

A dax_dev can be unbound from its driver at any time.
Unbind cannot fail. The driver-core will always trigger ->remove() and
the result from ->remove() is ignored. After ->remove() the driver-core
proceeds to tear down context. The filesystem-dax implementation can
leave pfns mapped after ->remove() if it is triggered while the
filesystem is mounted. Security and data-integrity are forfeit if the
dax_dev is repurposed for another security domain (a new filesystem or a
change of device mode), or if the dax_dev is physically replaced. CXL is
a hotplug bus that makes physical replacement of a dax_dev a real-world
prospect. All dax_dev pfns must be unmapped at remove. Detect the
"remove while mounted" case and trigger memory_failure() over the
entire dax_dev range.

Details:

The get_user_pages_fast() path expects all synchronization to be handled
by the pattern of checking for pte presence, taking a page reference,
and then validating that the pte was stable over that event. The
gup-fast path for devmap / DAX pages additionally attempts to take/hold
a live reference against the hosting pgmap over the page pin. The
rationale for the pgmap reference is to synchronize against a dax-device
unbind / ->remove() event, but that is unnecessary if pte invalidation
is guaranteed in the ->remove() path.

Global dax-device pte invalidation *does* happen when the device is in
raw "device-dax" mode where there is a single shared inode to unmap at
remove, but the filesystem-dax path has a large number of actively
mapped inodes unknown to the driver at ->remove() time. So, that unmap
does not happen today for filesystem-dax. However, as Jason points out,
that unmap / invalidation *needs* to happen not only to clean up
get_user_pages_fast() semantics, but also because, in a future (see CXL)
where dax_dev ->remove() is correlated with actual physical removal /
replacement, the implications of allowing a physical pfn to be exchanged
without tearing down old mappings are severe (security and
data-integrity).
What is not in this patch set is coordination with the dax_kmem driver
to trigger memory_failure() when the dax_dev is onlined as "System RAM".
The remove_memory() API was built with the assumption that platform
firmware negotiates all removal requests and the OS has a chance to say
"no". This is why, on a manual unbind event, dax_kmem today simply leaks
its request_region() if the memory cannot be offlined, burning that
physical address space for any other usage until the next reboot.
However, a future where remove_memory() is guaranteed to succeed after
memory_failure() of the same range seems a better semantic than
permanently burning physical address space.

The topic of remove_memory() failures gets to the question of what
happens to active page references when the inopportune ->remove() event
happens. For transient pins the ->remove() event will wait for all pins
to be dropped before allowing ->remove() to complete. Since
filesystem-dax forbids longterm pins, all of its pins are transient.
Device-dax, on the other hand, does allow longterm pins, which means
that ->remove() will hang unless / until the longterm pin is dropped.
Hopefully an unmap_mapping_range() event is sufficient to get the pin
dropped, but I suspect device-dax might need to trigger memory_failure()
as well to get the longterm pin holder to wake up and get out of the way
(TBD).

Lest we repeat the "longterm-pin-revoke" debate, which highlighted that
RDMA devices do not respond well to having context torn down, keep in
mind that this proposal is a best-effort recovery of an event that
should not happen (surprise removal) under nominal operation.
---

Dan Williams (3):
      mm/memory-failure: Prepare for mass memory_failure()
      mm, dax, pmem: Introduce dev_pagemap_failure()
      mm/devmap: Remove pgmap accounting in the get_user_pages_fast() path

 drivers/dax/super.c      |   15 +++++++++++++++
 drivers/nvdimm/pmem.c    |   10 +++++++++-
 drivers/nvdimm/pmem.h    |    1 +
 include/linux/dax.h      |    5 +++++
 include/linux/memremap.h |    5 +++++
 include/linux/mm.h       |    3 +++
 mm/gup.c                 |   38 ++++++++++++++++----------------------
 mm/memory-failure.c      |   36 +++++++++++++++++++++++-------------
 mm/memremap.c            |   11 +++++++++++

 9 files changed, 88 insertions(+), 36 deletions(-)