From patchwork Thu Apr 4 07:26:09 2024
X-Patchwork-Submitter: "Kasireddy, Vivek"
X-Patchwork-Id: 13617392
From: Vivek Kasireddy <vivek.kasireddy@intel.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox, Christoph Hellwig,
    Jason Gunthorpe, Peter Xu
Subject: [PATCH v13 2/8] mm/gup: Introduce check_and_migrate_movable_folios()
Date: Thu, 4 Apr 2024 00:26:09 -0700
Message-ID: <20240404073053.3073706-3-vivek.kasireddy@intel.com>
In-Reply-To: <20240404073053.3073706-1-vivek.kasireddy@intel.com>
References: <20240404073053.3073706-1-vivek.kasireddy@intel.com>

This helper is the folio equivalent of check_and_migrate_movable_pages().
Therefore, all the rules that apply to check_and_migrate_movable_pages()
also apply to this one. Currently, this helper is only used by
memfd_pin_folios().

This patch also renames and converts the internal functions
collect_longterm_unpinnable_pages() and migrate_longterm_unpinnable_pages()
to work on folios. As a result, check_and_migrate_movable_pages() is now a
wrapper around check_and_migrate_movable_folios().
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Peter Xu
Suggested-by: David Hildenbrand
Signed-off-by: Vivek Kasireddy
Acked-by: David Hildenbrand
---
 mm/gup.c | 122 ++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 81 insertions(+), 41 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 9cf2adfa4ce5..00ee3b987307 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2416,19 +2416,19 @@ struct page *get_dump_page(unsigned long addr)
 
 #ifdef CONFIG_MIGRATION
 /*
- * Returns the number of collected pages. Return value is always >= 0.
+ * Returns the number of collected folios. Return value is always >= 0.
  */
-static unsigned long collect_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static unsigned long collect_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	unsigned long i, collected = 0;
 	struct folio *prev_folio = NULL;
 	bool drain_allow = true;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio == prev_folio)
 			continue;
@@ -2443,7 +2443,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 
 		if (folio_test_hugetlb(folio)) {
-			isolate_hugetlb(folio, movable_page_list);
+			isolate_hugetlb(folio, movable_folio_list);
 			continue;
 		}
 
@@ -2455,7 +2455,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 		if (!folio_isolate_lru(folio))
 			continue;
 
-		list_add_tail(&folio->lru, movable_page_list);
+		list_add_tail(&folio->lru, movable_folio_list);
 		node_stat_mod_folio(folio,
 				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
 				    folio_nr_pages(folio));
@@ -2465,27 +2465,28 @@ static unsigned long collect_longterm_unpinnable_pages(
 }
 
 /*
- * Unpins all pages and migrates device coherent pages and movable_page_list.
- * Returns -EAGAIN if all pages were successfully migrated or -errno for failure
- * (or partial success).
+ * Unpins all folios and migrates device coherent folios and movable_folio_list.
+ * Returns -EAGAIN if all folios were successfully migrated or -errno for
+ * failure (or partial success).
  */
-static int migrate_longterm_unpinnable_pages(
-					struct list_head *movable_page_list,
-					unsigned long nr_pages,
-					struct page **pages)
+static int migrate_longterm_unpinnable_folios(
+					struct list_head *movable_folio_list,
+					unsigned long nr_folios,
+					struct folio **folios)
 {
 	int ret;
 	unsigned long i;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct folio *folio = page_folio(pages[i]);
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = folios[i];
 
 		if (folio_is_device_coherent(folio)) {
 			/*
-			 * Migration will fail if the page is pinned, so convert
-			 * the pin on the source page to a normal reference.
+			 * Migration will fail if the folio is pinned, so
+			 * convert the pin on the source folio to a normal
+			 * reference.
 			 */
-			pages[i] = NULL;
+			folios[i] = NULL;
 			folio_get(folio);
 			gup_put_folio(folio, 1, FOLL_PIN);
 
@@ -2498,24 +2499,24 @@ static int migrate_longterm_unpinnable_pages(
 		}
 
 		/*
-		 * We can't migrate pages with unexpected references, so drop
+		 * We can't migrate folios with unexpected references, so drop
 		 * the reference obtained by __get_user_pages_locked().
-		 * Migrating pages have been added to movable_page_list after
+		 * Migrating folios have been added to movable_folio_list after
 		 * calling folio_isolate_lru() which takes a reference so the
-		 * page won't be freed if it's migrating.
+		 * folio won't be freed if it's migrating.
 		 */
-		unpin_user_page(pages[i]);
-		pages[i] = NULL;
+		unpin_folio(folios[i]);
+		folios[i] = NULL;
 	}
 
-	if (!list_empty(movable_page_list)) {
+	if (!list_empty(movable_folio_list)) {
 		struct migration_target_control mtc = {
 			.nid = NUMA_NO_NODE,
 			.gfp_mask = GFP_USER | __GFP_NOWARN,
 			.reason = MR_LONGTERM_PIN,
 		};
 
-		if (migrate_pages(movable_page_list, alloc_migration_target,
+		if (migrate_pages(movable_folio_list, alloc_migration_target,
 				  NULL, (unsigned long)&mtc, MIGRATE_SYNC,
 				  MR_LONGTERM_PIN, NULL)) {
 			ret = -ENOMEM;
@@ -2523,19 +2524,48 @@ static int migrate_longterm_unpinnable_pages(
 		}
 	}
 
-	putback_movable_pages(movable_page_list);
+	putback_movable_pages(movable_folio_list);
 
 	return -EAGAIN;
 
 err:
-	for (i = 0; i < nr_pages; i++)
-		if (pages[i])
-			unpin_user_page(pages[i]);
-	putback_movable_pages(movable_page_list);
+	unpin_folios(folios, nr_folios);
+	putback_movable_pages(movable_folio_list);
 
 	return ret;
 }
 
+/*
+ * Check whether all folios are *allowed* to be pinned indefinitely (longterm).
+ * Rather confusingly, all folios in the range are required to be pinned via
+ * FOLL_PIN, before calling this routine.
+ *
+ * If any folios in the range are not allowed to be pinned, then this routine
+ * will migrate those folios away, unpin all the folios in the range and return
+ * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
+ * call this routine again.
+ *
+ * If an error other than -EAGAIN occurs, this indicates a migration failure.
+ * The caller should give up, and propagate the error back up the call stack.
+ *
+ * If everything is OK and all folios in the range are allowed to be pinned,
+ * then this routine leaves all folios pinned and returns zero for success.
+ */
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
+{
+	unsigned long collected;
+	LIST_HEAD(movable_folio_list);
+
+	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+						       nr_folios, folios);
+	if (!collected)
+		return 0;
+
+	return migrate_longterm_unpinnable_folios(&movable_folio_list,
+						  nr_folios, folios);
+}
+
 /*
  * Check whether all pages are *allowed* to be pinned. Rather confusingly, all
  * pages in the range are required to be pinned via FOLL_PIN, before calling
@@ -2555,16 +2585,20 @@ static int migrate_longterm_unpinnable_pages(
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages)
 {
-	unsigned long collected;
-	LIST_HEAD(movable_page_list);
+	struct folio **folios;
+	long i, ret;
 
-	collected = collect_longterm_unpinnable_pages(&movable_page_list,
-						nr_pages, pages);
-	if (!collected)
-		return 0;
+	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
+	if (!folios)
+		return -ENOMEM;
 
-	return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages,
-						 pages);
+	for (i = 0; i < nr_pages; i++)
+		folios[i] = page_folio(pages[i]);
+
+	ret = check_and_migrate_movable_folios(nr_pages, folios);
+
+	kfree(folios);
+	return ret;
 }
 #else
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
@@ -2572,6 +2606,12 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 {
 	return 0;
 }
+
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+					     struct folio **folios)
+{
+	return 0;
+}
 #endif /* CONFIG_MIGRATION */
 
 /*
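
For context, the comment block on check_and_migrate_movable_folios() describes
the calling convention the new helper expects: pin every folio in the range
with FOLL_PIN first, call the helper, retry the whole range on -EAGAIN, and
give up on any other error. The sketch below illustrates that loop; it is not
part of the patch, and pin_folios_in_range() is a hypothetical placeholder for
the caller's own FOLL_PIN-based pinning step (memfd_pin_folios(), added later
in this series, is the intended real caller).

/*
 * Illustrative sketch only -- not part of this patch. It shows the retry
 * loop the helper's comment block asks callers to implement:
 *   1. take a FOLL_PIN on every folio in the range,
 *   2. call check_and_migrate_movable_folios(),
 *   3. on -EAGAIN (unpinnable folios were migrated and all pins dropped),
 *      start over from the pinning step,
 *   4. on any other error give up; on 0 all folios remain pinned.
 * pin_folios_in_range() is a hypothetical stand-in for the caller's own
 * pinning logic.
 */
static long longterm_pin_folios(unsigned long nr_folios, struct folio **folios)
{
	long ret;

	do {
		ret = pin_folios_in_range(nr_folios, folios);	/* hypothetical */
		if (ret < 0)
			return ret;

		ret = check_and_migrate_movable_folios(nr_folios, folios);
	} while (ret == -EAGAIN);

	return ret;
}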