From patchwork Sat Jan 13 06:52:17 2024
X-Patchwork-Submitter: Vivek Kasireddy
X-Patchwork-Id: 13518880
From: Vivek Kasireddy
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox, Christoph Hellwig,
 Jason Gunthorpe, Peter Xu
Subject: [PATCH v11 2/8] mm/gup: Introduce check_and_migrate_movable_folios()
Date: Fri, 12 Jan 2024 22:52:17 -0800
Message-Id: <20240113065223.1532987-3-vivek.kasireddy@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240113065223.1532987-1-vivek.kasireddy@intel.com>
References: <20240113065223.1532987-1-vivek.kasireddy@intel.com>

This helper is the folio equivalent of check_and_migrate_movable_pages().
Therefore, all the rules that apply to check_and_migrate_movable_pages()
apply to it as well. Currently, this helper is only used by
memfd_pin_folios().

This patch also renames and converts the internal functions
collect_longterm_unpinnable_pages() and migrate_longterm_unpinnable_pages()
to work on folios. Since they are also used by
check_and_migrate_movable_pages(), a temporary array is used to collect
and share the folios with these functions.
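The retry contract is the same as for check_and_migrate_movable_pages():
the helper either leaves every folio pinned and returns 0, or migrates the
unpinnable folios, drops all the pins and returns -EAGAIN so the caller can
pin again. A minimal caller-side sketch of that loop (illustrative only,
not part of this patch; pin_folios_somehow() is a hypothetical stand-in
for whatever FOLL_PIN-based pinning the caller performs, e.g. what
memfd_pin_folios() does later in this series):

	static long pin_folios_longterm(struct folio **folios,
					unsigned long nr_folios)
	{
		long ret;

		do {
			/* Hypothetical helper: pins all folios via FOLL_PIN. */
			ret = pin_folios_somehow(folios, nr_folios);
			if (ret < 0)
				return ret;

			/*
			 * 0:       all folios are longterm-pinnable; keep pins.
			 * -EAGAIN: unpinnable folios were migrated and all
			 *          pins were dropped; pin again and re-check.
			 * -errno:  migration failed; give up.
			 */
			ret = check_and_migrate_movable_folios(nr_folios,
							       folios);
		} while (ret == -EAGAIN);

		return ret;
	}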
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Peter Xu
Suggested-by: David Hildenbrand
Signed-off-by: Vivek Kasireddy
---
 mm/gup.c | 129 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 92 insertions(+), 37 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 4d7bc4453819..00b24a429ba8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2097,20 +2097,24 @@ struct page *get_dump_page(unsigned long addr)
 
 #ifdef CONFIG_MIGRATION
 /*
- * Returns the number of collected pages. Return value is always >= 0.
+ * Returns the number of collected folios. Return value is always >= 0.
  */
-static unsigned long collect_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
+static unsigned long collect_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios,
 					struct page **pages)
 {
 	unsigned long i, collected = 0;
 	struct folio *prev_folio = NULL;
 	bool drain_allow = true;
+	struct folio *folio;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		if (pages)
+			folios[i] = page_folio(pages[i]);
+		folio = folios[i];
 
 		if (folio == prev_folio)
 			continue;
 		prev_folio = folio;
@@ -2124,7 +2128,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 
 		if (folio_test_hugetlb(folio)) {
-			isolate_hugetlb(folio, movable_page_list);
+			isolate_hugetlb(folio, movable_folio_list);
 			continue;
 		}
 
@@ -2136,7 +2140,7 @@
 		if (!folio_isolate_lru(folio))
 			continue;
 
-		list_add_tail(&folio->lru, movable_page_list);
+		list_add_tail(&folio->lru, movable_folio_list);
 		node_stat_mod_folio(folio,
 				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
 				    folio_nr_pages(folio));
@@ -2146,27 +2150,28 @@
 }
 
 /*
- * Unpins all pages and migrates device coherent pages and movable_page_list.
- * Returns -EAGAIN if all pages were successfully migrated or -errno for failure
- * (or partial success).
+ * Unpins all folios and migrates device coherent folios and movable_folio_list.
+ * Returns -EAGAIN if all folios were successfully migrated or -errno for
+ * failure (or partial success).
  */
-static int migrate_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static int migrate_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	int ret;
 	unsigned long i;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio_is_device_coherent(folio)) {
 			/*
-			 * Migration will fail if the page is pinned, so convert
-			 * the pin on the source page to a normal reference.
+			 * Migration will fail if the folio is pinned, so
+			 * convert the pin on the source folio to a normal
+			 * reference.
 			 */
-			pages[i] = NULL;
+			folios[i] = NULL;
 			folio_get(folio);
 			gup_put_folio(folio, 1, FOLL_PIN);
 
@@ -2179,23 +2184,23 @@ static int migrate_longterm_unpinnable_pages(
 		}
 
 		/*
-		 * We can't migrate pages with unexpected references, so drop
+		 * We can't migrate folios with unexpected references, so drop
 		 * the reference obtained by __get_user_pages_locked().
-		 * Migrating pages have been added to movable_page_list after
+		 * Migrating folios have been added to movable_folio_list after
 		 * calling folio_isolate_lru() which takes a reference so the
-		 * page won't be freed if it's migrating.
+		 * folio won't be freed if it's migrating.
 		 */
-		unpin_user_page(pages[i]);
-		pages[i] = NULL;
+		unpin_folio(folios[i]);
+		folios[i] = NULL;
 	}
 
-	if (!list_empty(movable_page_list)) {
+	if (!list_empty(movable_folio_list)) {
 		struct migration_target_control mtc = {
 			.nid = NUMA_NO_NODE,
 			.gfp_mask = GFP_USER | __GFP_NOWARN,
 		};
 
-		if (migrate_pages(movable_page_list, alloc_migration_target,
+		if (migrate_pages(movable_folio_list, alloc_migration_target,
 				  NULL, (unsigned long)&mtc, MIGRATE_SYNC,
 				  MR_LONGTERM_PIN, NULL)) {
 			ret = -ENOMEM;
@@ -2203,15 +2208,15 @@
 		}
 	}
 
-	putback_movable_pages(movable_page_list);
+	putback_movable_pages(movable_folio_list);
 
 	return -EAGAIN;
 
 err:
-	for (i = 0; i < nr_pages; i++)
-		if (pages[i])
-			unpin_user_page(pages[i]);
-	putback_movable_pages(movable_page_list);
+	for (i = 0; i < nr_folios; i++)
+		if (folios[i])
+			unpin_folio(folios[i]);
+	putback_movable_pages(movable_folio_list);
 
 	return ret;
 }
 
@@ -2235,16 +2240,60 @@ static int migrate_longterm_unpinnable_pages(
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages)
 {
+	unsigned long nr_folios = nr_pages;
 	unsigned long collected;
-	LIST_HEAD(movable_page_list);
+	LIST_HEAD(movable_folio_list);
+	struct folio **folios;
+	long ret;
 
-	collected = collect_longterm_unpinnable_pages(&movable_page_list,
-						      nr_pages, pages);
+	folios = kmalloc_array(nr_folios, sizeof(*folios), GFP_KERNEL);
+	if (!folios)
+		return -ENOMEM;
+
+	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+						       nr_folios, folios,
+						       pages);
+	if (!collected) {
+		kfree(folios);
+		return 0;
+	}
+
+	ret = migrate_longterm_unpinnable_folios(&movable_folio_list,
+						 nr_folios, folios);
+	kfree(folios);
+	return ret;
+}
+
+/*
+ * Check whether all folios are *allowed* to be pinned. Rather confusingly, all
+ * folios in the range are required to be pinned via FOLL_PIN, before calling
+ * this routine.
+ *
+ * If any folios in the range are not allowed to be pinned, then this routine
+ * will migrate those folios away, unpin all the folios in the range and return
+ * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
+ * call this routine again.
+ *
+ * If an error other than -EAGAIN occurs, this indicates a migration failure.
+ * The caller should give up, and propagate the error back up the call stack.
+ *
+ * If everything is OK and all folios in the range are allowed to be pinned,
+ * then this routine leaves all folios pinned and returns zero for success.
+ */
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
+{
+	unsigned long collected;
+	LIST_HEAD(movable_folio_list);
+
+	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+						       nr_folios, folios,
+						       NULL);
 	if (!collected)
 		return 0;
 
-	return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages,
-						 pages);
+	return migrate_longterm_unpinnable_folios(&movable_folio_list,
+						  nr_folios, folios);
 }
 #else
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
@@ -2252,6 +2301,12 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 {
 	return 0;
 }
+
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
+{
+	return 0;
+}
 #endif /* CONFIG_MIGRATION */
 
 /*
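For comparison, the existing page-based path already drives
check_and_migrate_movable_pages() with the same pin/check/retry loop. A
condensed sketch of that caller, loosely modeled on __gup_longterm_locked()
in mm/gup.c (simplified; details such as memalloc_pin_save()/restore() and
the FOLL_LONGTERM fast path are omitted):

	static long gup_longterm_sketch(struct mm_struct *mm,
					unsigned long start,
					unsigned long nr_pages,
					struct page **pages, int *locked,
					unsigned int gup_flags)
	{
		long nr_pinned, rc;

		do {
			/* (Re-)pin the whole range with FOLL_PIN. */
			nr_pinned = __get_user_pages_locked(mm, start,
							    nr_pages, pages,
							    locked, gup_flags);
			if (nr_pinned <= 0)
				return nr_pinned;

			/*
			 * Migrate anything that is not longterm-pinnable;
			 * -EAGAIN means everything was unpinned, so pin again.
			 */
			rc = check_and_migrate_movable_pages(nr_pinned, pages);
		} while (rc == -EAGAIN);

		return rc ? rc : nr_pinned;
	}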