From patchwork Mon Jun 24 06:36:10 2024
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13709043
From: Vivek Kasireddy
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox, Christoph Hellwig, Jason Gunthorpe, Peter Xu, Dave Airlie, Gerd Hoffmann
Subject: [PATCH v16 2/9] mm/gup: Introduce check_and_migrate_movable_folios()
Date: Sun, 23 Jun 2024 23:36:10 -0700
Message-ID: <20240624063952.1572359-3-vivek.kasireddy@intel.com>
In-Reply-To: <20240624063952.1572359-1-vivek.kasireddy@intel.com>
References: <20240624063952.1572359-1-vivek.kasireddy@intel.com>
MIME-Version: 1.0
This helper is the folio equivalent of check_and_migrate_movable_pages(). Therefore, all the rules that apply to check_and_migrate_movable_pages() also apply to this one. Currently, this helper is only used by memfd_pin_folios().

This patch also renames and converts the internal functions collect_longterm_unpinnable_pages() and migrate_longterm_unpinnable_pages() to work on folios.
As a result, check_and_migrate_movable_pages() is now a wrapper around check_and_migrate_movable_folios().

Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Peter Xu
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Acked-by: Dave Airlie
Acked-by: Gerd Hoffmann
Signed-off-by: Vivek Kasireddy
---
 mm/gup.c | 124 ++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 77 insertions(+), 47 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d9ea60621628..a88e19c78730 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2427,19 +2427,19 @@ struct page *get_dump_page(unsigned long addr)
 
 #ifdef CONFIG_MIGRATION
 /*
- * Returns the number of collected pages. Return value is always >= 0.
+ * Returns the number of collected folios. Return value is always >= 0.
  */
-static unsigned long collect_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static unsigned long collect_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	unsigned long i, collected = 0;
 	struct folio *prev_folio = NULL;
 	bool drain_allow = true;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio == prev_folio)
 			continue;
@@ -2454,7 +2454,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 
 		if (folio_test_hugetlb(folio)) {
-			isolate_hugetlb(folio, movable_page_list);
+			isolate_hugetlb(folio, movable_folio_list);
 			continue;
 		}
 
@@ -2466,7 +2466,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 		if (!folio_isolate_lru(folio))
 			continue;
 
-		list_add_tail(&folio->lru, movable_page_list);
+		list_add_tail(&folio->lru, movable_folio_list);
 		node_stat_mod_folio(folio,
 				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
 				    folio_nr_pages(folio));
@@ -2476,27 +2476,28 @@ static unsigned long collect_longterm_unpinnable_pages(
 }
 
 /*
- * Unpins all pages and migrates device coherent pages and movable_page_list.
- * Returns -EAGAIN if all pages were successfully migrated or -errno for failure
- * (or partial success).
+ * Unpins all folios and migrates device coherent folios and movable_folio_list.
+ * Returns -EAGAIN if all folios were successfully migrated or -errno for
+ * failure (or partial success).
  */
-static int migrate_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static int migrate_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	int ret;
 	unsigned long i;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio_is_device_coherent(folio)) {
 			/*
-			 * Migration will fail if the page is pinned, so convert
-			 * the pin on the source page to a normal reference.
+			 * Migration will fail if the folio is pinned, so
+			 * convert the pin on the source folio to a normal
+			 * reference.
 			 */
-			pages[i] = NULL;
+			folios[i] = NULL;
 			folio_get(folio);
 			gup_put_folio(folio, 1, FOLL_PIN);
@@ -2509,24 +2510,24 @@ static int migrate_longterm_unpinnable_pages(
 		}
 
 		/*
-		 * We can't migrate pages with unexpected references, so drop
+		 * We can't migrate folios with unexpected references, so drop
 		 * the reference obtained by __get_user_pages_locked().
-		 * Migrating pages have been added to movable_page_list after
+		 * Migrating folios have been added to movable_folio_list after
 		 * calling folio_isolate_lru() which takes a reference so the
-		 * page won't be freed if it's migrating.
+		 * folio won't be freed if it's migrating.
 		 */
-		unpin_user_page(pages[i]);
-		pages[i] = NULL;
+		unpin_folio(folios[i]);
+		folios[i] = NULL;
 	}
 
-	if (!list_empty(movable_page_list)) {
+	if (!list_empty(movable_folio_list)) {
 		struct migration_target_control mtc = {
 			.nid = NUMA_NO_NODE,
 			.gfp_mask = GFP_USER | __GFP_NOWARN,
 			.reason = MR_LONGTERM_PIN,
 		};
 
-		if (migrate_pages(movable_page_list, alloc_migration_target,
+		if (migrate_pages(movable_folio_list, alloc_migration_target,
 				  NULL, (unsigned long)&mtc, MIGRATE_SYNC,
 				  MR_LONGTERM_PIN, NULL)) {
 			ret = -ENOMEM;
@@ -2534,48 +2535,71 @@ static int migrate_longterm_unpinnable_pages(
 		}
 	}
 
-	putback_movable_pages(movable_page_list);
+	putback_movable_pages(movable_folio_list);
 
 	return -EAGAIN;
 
 err:
-	for (i = 0; i < nr_pages; i++)
-		if (pages[i])
-			unpin_user_page(pages[i]);
-	putback_movable_pages(movable_page_list);
+	unpin_folios(folios, nr_folios);
+	putback_movable_pages(movable_folio_list);
 
 	return ret;
 }
 
 /*
- * Check whether all pages are *allowed* to be pinned. Rather confusingly, all
- * pages in the range are required to be pinned via FOLL_PIN, before calling
- * this routine.
+ * Check whether all folios are *allowed* to be pinned indefinitely (longterm).
+ * Rather confusingly, all folios in the range are required to be pinned via
+ * FOLL_PIN, before calling this routine.
  *
- * If any pages in the range are not allowed to be pinned, then this routine
- * will migrate those pages away, unpin all the pages in the range and return
+ * If any folios in the range are not allowed to be pinned, then this routine
+ * will migrate those folios away, unpin all the folios in the range and return
  * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
  * call this routine again.
 *
 * If an error other than -EAGAIN occurs, this indicates a migration failure.
 * The caller should give up, and propagate the error back up the call stack.
 *
- * If everything is OK and all pages in the range are allowed to be pinned, then
- * this routine leaves all pages pinned and returns zero for success.
+ * If everything is OK and all folios in the range are allowed to be pinned,
+ * then this routine leaves all folios pinned and returns zero for success.
  */
-static long check_and_migrate_movable_pages(unsigned long nr_pages,
-					    struct page **pages)
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
 {
 	unsigned long collected;
-	LIST_HEAD(movable_page_list);
+	LIST_HEAD(movable_folio_list);
 
-	collected = collect_longterm_unpinnable_pages(&movable_page_list,
-						nr_pages, pages);
+	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+						       nr_folios, folios);
 	if (!collected)
 		return 0;
 
-	return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages,
-						pages);
+	return migrate_longterm_unpinnable_folios(&movable_folio_list,
+						  nr_folios, folios);
+}
+
+/*
+ * This routine just converts all the pages in the @pages array to folios and
+ * calls check_and_migrate_movable_folios() to do the heavy lifting.
+ *
+ * Please see the check_and_migrate_movable_folios() documentation for details.
+ */
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
+					    struct page **pages)
+{
+	struct folio **folios;
+	long i, ret;
+
+	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
+	if (!folios)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_pages; i++)
+		folios[i] = page_folio(pages[i]);
+
+	ret = check_and_migrate_movable_folios(nr_pages, folios);
+
+	kfree(folios);
+	return ret;
 }
 #else
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
@@ -2583,6 +2607,12 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages)
 {
 	return 0;
 }
+
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
+{
+	return 0;
+}
 #endif /* CONFIG_MIGRATION */
 
 /*