From patchwork Fri Oct 14 23:59:12 2022
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 13007537
Subject: [PATCH v3 23/25] mm/memremap_pages: Initialize all ZONE_DEVICE pages
 to start at refcount 0
From: Dan Williams <dan.j.williams@intel.com>
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, "Darrick J.
Wong" , Christoph Hellwig , John Hubbard , Alistair Popple , Jason Gunthorpe , david@fromorbit.com, nvdimm@lists.linux.dev, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org Date: Fri, 14 Oct 2022 16:59:12 -0700 Message-ID: <166579195218.2236710.8731183545033177929.stgit@dwillia2-xfh.jf.intel.com> In-Reply-To: <166579181584.2236710.17813547487183983273.stgit@dwillia2-xfh.jf.intel.com> References: <166579181584.2236710.17813547487183983273.stgit@dwillia2-xfh.jf.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=m3WSE11x; spf=pass (imf11.hostedemail.com: domain of dan.j.williams@intel.com designates 192.55.52.93 as permitted sender) smtp.mailfrom=dan.j.williams@intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1665791954; a=rsa-sha256; cv=none; b=kogt97fV+30Jn0Ctr81unX63HMhml06zwQuPPt6B/EdlwCX+dWiVNKiuxfDnwI5s/fzdsT YiTJCW9jLrdG1kHnICemvlL2tUKu0QStoqxhGXQWAfvqXGYvSX9ank1FAG7LjIyOI7ZixL HWLRI2Osa12o0oHnKk/yHwIoLjuxeoA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1665791954; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=NnXhTYa8aytbi04mrulNACjS+UyDFXjU6oCQjrOepf0=; b=ifsduRzUtAc3e0JkvsZTUWMXhmL7gIhnJsBbqEsC633sLaOLoEDf/yz9IBWwLFL5WV4n5g q4h70WPFM36Qw63Yh/aZnUkjfJPIVdpRk9rUHtAroDwUBCCWobJAyQYAUXbEYxmvW4Bbjf en4q3xX57OCbvd1XvAQ+jl4RomFsu2Y= X-Rspam-User: X-Stat-Signature: tkjn33ohueijrmhe661rzsa8tkjhaquf X-Rspamd-Queue-Id: 456A940020 Authentication-Results: imf11.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=m3WSE11x; spf=pass (imf11.hostedemail.com: domain of dan.j.williams@intel.com designates 192.55.52.93 as permitted sender) smtp.mailfrom=dan.j.williams@intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspamd-Server: rspam07 X-HE-Tag: 1665791954-488110 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The initial memremap_pages() implementation inherited the __init_single_page() default of pages starting life with an elevated reference count. This originally allowed for the page->pgmap pointer to alias with the storage for page->lru since a page was only allowed to be on an lru list when its reference count was zero. Since then, 'struct page' definition cleanups have arranged for dedicated space for the ZONE_DEVICE page metadata, the MEMORY_DEVICE_{PRIVATE,COHERENT} work has arranged for the 1 -> 0 page->_refcount transition to route the page to free_zone_device_page() and not the core-mm page-free, and MEMORY_DEVICE_{PRIVATE,COHERENT} now arranges for its ZONE_DEVICE pages to start at _refcount 0. With those cleanups in place and with filesystem-dax and device-dax now converted to take and drop references at map and truncate time, it is possible to start MEMORY_DEVICE_FS_DAX and MEMORY_DEVICE_GENERIC reference counts at 0 as well. This conversion also unifies all @pgmap accounting to be relative to pgmap_request_folio() and the paired folio_put() calls for those requested folios. 
Cc: Matthew Wilcox
Cc: Jan Kara
Cc: "Darrick J. Wong"
Cc: Christoph Hellwig
Cc: John Hubbard
Cc: Alistair Popple
Cc: Jason Gunthorpe
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Alistair Popple
---
 drivers/dax/mapping.c    |    2 +-
 include/linux/dax.h      |    2 +-
 include/linux/memremap.h |    6 ++----
 mm/memremap.c            |   36 ++++++++++++++++--------------------
 mm/page_alloc.c          |    9 +--------
 5 files changed, 21 insertions(+), 34 deletions(-)

diff --git a/drivers/dax/mapping.c b/drivers/dax/mapping.c
index 07caaa23d476..ca06f2515644 100644
--- a/drivers/dax/mapping.c
+++ b/drivers/dax/mapping.c
@@ -691,7 +691,7 @@ static struct page *dax_zap_pages(struct xa_state *xas, void *entry)
 
 	dax_for_each_folio(entry, folio, i) {
 		if (zap)
-			pgmap_release_folios(folio_pgmap(folio), folio, 1);
+			pgmap_release_folios(folio, 1);
 		if (!ret && !dax_folio_idle(folio))
 			ret = folio_page(folio, 0);
 	}
diff --git a/include/linux/dax.h b/include/linux/dax.h
index f2fbb5746ffa..f4fc37933fc2 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -235,7 +235,7 @@ static inline void dax_unlock_mapping_entry(struct address_space *mapping,
  */
 static inline bool dax_page_idle(struct page *page)
 {
-	return page_ref_count(page) == 1;
+	return page_ref_count(page) == 0;
 }
 
 static inline bool dax_folio_idle(struct folio *folio)
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 3fb3809d71f3..ddb196ae0696 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -195,8 +195,7 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 		struct dev_pagemap *pgmap);
 bool pgmap_request_folios(struct dev_pagemap *pgmap, struct folio *folio,
 			  int nr_folios);
-void pgmap_release_folios(struct dev_pagemap *pgmap, struct folio *folio,
-			  int nr_folios);
+void pgmap_release_folios(struct folio *folio, int nr_folios);
 bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
 
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
@@ -238,8 +237,7 @@ static inline bool pgmap_request_folios(struct dev_pagemap *pgmap,
 	return false;
 }
 
-static inline void pgmap_release_folios(struct dev_pagemap *pgmap,
-					struct folio *folio, int nr_folios)
+static inline void pgmap_release_folios(struct folio *folio, int nr_folios)
 {
 }
 
diff --git a/mm/memremap.c b/mm/memremap.c
index c46e700f5245..368ff41c560b 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -469,8 +469,10 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 void free_zone_device_page(struct page *page)
 {
-	if (WARN_ON_ONCE(!page->pgmap->ops || !page->pgmap->ops->page_free))
-		return;
+	struct dev_pagemap *pgmap = page->pgmap;
+
+	/* wake filesystem 'break dax layouts' waiters */
+	wake_up_var(page);
 
 	mem_cgroup_uncharge(page_folio(page));
 
@@ -505,17 +507,9 @@ void free_zone_device_page(struct page *page)
 	 * to clear page->mapping.
 	 */
 	page->mapping = NULL;
-	page->pgmap->ops->page_free(page);
-
-	if (page->pgmap->type != MEMORY_DEVICE_PRIVATE &&
-	    page->pgmap->type != MEMORY_DEVICE_COHERENT)
-		/*
-		 * Reset the page count to 1 to prepare for handing out the page
-		 * again.
-		 */
-		set_page_count(page, 1);
-	else
-		put_dev_pagemap(page->pgmap);
+	if (pgmap->ops && pgmap->ops->page_free)
+		pgmap->ops->page_free(page);
+	put_dev_pagemap(page->pgmap);
 }
 
 static bool folio_span_valid(struct dev_pagemap *pgmap, struct folio *folio,
@@ -576,17 +570,19 @@ bool pgmap_request_folios(struct dev_pagemap *pgmap, struct folio *folio,
 }
 EXPORT_SYMBOL_GPL(pgmap_request_folios);
 
-void pgmap_release_folios(struct dev_pagemap *pgmap, struct folio *folio, int nr_folios)
+/*
+ * A symmetric helper to undo the page references acquired by
+ * pgmap_request_folios(), but the caller can also just arrange
+ * folio_put() on all the folios it acquired previously for the same
+ * effect.
+ */
+void pgmap_release_folios(struct folio *folio, int nr_folios)
 {
 	struct folio *iter;
 	int i;
 
-	for (iter = folio, i = 0; i < nr_folios; iter = folio_next(iter), i++) {
-		if (!put_devmap_managed_page(&iter->page))
-			folio_put(iter);
-		if (!folio_ref_count(iter))
-			put_dev_pagemap(pgmap);
-	}
+	for (iter = folio, i = 0; i < nr_folios; iter = folio_next(iter), i++)
+		folio_put(iter);
 }
 
 #ifdef CONFIG_FS_DAX
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8e9b7f08a32c..e35d1eb3308d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6787,6 +6787,7 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 {
 	__init_single_page(page, pfn, zone_idx, nid);
+	set_page_count(page, 0);
 
 	/*
 	 * Mark page reserved as it will need to wait for onlining
@@ -6819,14 +6820,6 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		cond_resched();
 	}
-
-	/*
-	 * ZONE_DEVICE pages are released directly to the driver page allocator
-	 * which will set the page count to 1 when allocating the page.
-	 */
-	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-	    pgmap->type == MEMORY_DEVICE_COHERENT)
-		set_page_count(page, 0);
 }
 
 /*
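
For completeness: as the new comment in mm/memremap.c notes, the
simplified pgmap_release_folios() is now nothing more than a folio_put()
loop over the requested folios. A tiny userspace model of that
equivalence follows; all names are invented for illustration, and it
assumes each page was previously requested and so holds one reference:

/* Toy check: releasing a batch via a helper equals direct puts. */
#include <assert.h>

struct toy_page {
	int _refcount;	/* modeled after page->_refcount */
};

/* modeled after folio_put(): drop one reference */
static void toy_put(struct toy_page *page)
{
	page->_refcount--;
}

/* mirrors the simplified pgmap_release_folios(): just puts in a loop */
static void toy_release(struct toy_page *pages, int nr)
{
	for (int i = 0; i < nr; i++)
		toy_put(&pages[i]);
}

int main(void)
{
	/* four pages, each holding the one map-time reference */
	struct toy_page pages[4] = { {1}, {1}, {1}, {1} };

	toy_release(pages, 4);
	for (int i = 0; i < 4; i++)
		assert(pages[i]._refcount == 0);	/* all idle again */
	return 0;
}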