From patchwork Thu Mar 21 20:01:54 2019
From: Keith Busch <keith.busch@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvdimm@lists.01.org
Cc: Dave Hansen
Subject: [PATCH 2/5] mm: Split handling old page for migration
Date: Thu, 21 Mar 2019 14:01:54 -0600
Message-Id: <20190321200157.29678-3-keith.busch@intel.com>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20190321200157.29678-1-keith.busch@intel.com>
References: <20190321200157.29678-1-keith.busch@intel.com>

Refactor unmap_and_move() handling for the new page into a separate
function from locking and preparing the old page. No functional change
here: this just makes it easier to reuse this part of the page migration
from contexts that have already locked the old page.
Signed-off-by: Keith Busch <keith.busch@intel.com>
---
 mm/migrate.c | 115 +++++++++++++++++++++++++++++++----------------------------
 1 file changed, 61 insertions(+), 54 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index ac6f4939bb59..705b320d4b35 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1000,57 +1000,14 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 	return rc;
 }
 
-static int __unmap_and_move(struct page *page, struct page *newpage,
-				int force, enum migrate_mode mode)
+static int __unmap_and_move_locked(struct page *page, struct page *newpage,
+				enum migrate_mode mode)
 {
 	int rc = -EAGAIN;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(page);
 
-	if (!trylock_page(page)) {
-		if (!force || mode == MIGRATE_ASYNC)
-			goto out;
-
-		/*
-		 * It's not safe for direct compaction to call lock_page.
-		 * For example, during page readahead pages are added locked
-		 * to the LRU. Later, when the IO completes the pages are
-		 * marked uptodate and unlocked. However, the queueing
-		 * could be merging multiple pages for one bio (e.g.
-		 * mpage_readpages). If an allocation happens for the
-		 * second or third page, the process can end up locking
-		 * the same page twice and deadlocking. Rather than
-		 * trying to be clever about what pages can be locked,
-		 * avoid the use of lock_page for direct compaction
-		 * altogether.
-		 */
-		if (current->flags & PF_MEMALLOC)
-			goto out;
-
-		lock_page(page);
-	}
-
-	if (PageWriteback(page)) {
-		/*
-		 * Only in the case of a full synchronous migration is it
-		 * necessary to wait for PageWriteback. In the async case,
-		 * the retry loop is too short and in the sync-light case,
-		 * the overhead of stalling is too much
-		 */
-		switch (mode) {
-		case MIGRATE_SYNC:
-		case MIGRATE_SYNC_NO_COPY:
-			break;
-		default:
-			rc = -EBUSY;
-			goto out_unlock;
-		}
-		if (!force)
-			goto out_unlock;
-		wait_on_page_writeback(page);
-	}
-
 	/*
 	 * By try_to_unmap(), page->mapcount goes down to 0 here. In this case,
 	 * we cannot notice that anon_vma is freed while we migrates a page.
@@ -1077,11 +1034,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * This is much like races on refcount of oldpage: just don't BUG().
 	 */
 	if (unlikely(!trylock_page(newpage)))
-		goto out_unlock;
+		goto out;
 
 	if (unlikely(!is_lru)) {
 		rc = move_to_new_page(newpage, page, mode);
-		goto out_unlock_both;
+		goto out_unlock;
 	}
 
 	/*
@@ -1100,7 +1057,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		VM_BUG_ON_PAGE(PageAnon(page), page);
 		if (page_has_private(page)) {
 			try_to_free_buffers(page);
-			goto out_unlock_both;
+			goto out_unlock;
 		}
 	} else if (page_mapped(page)) {
 		/* Establish migration ptes */
@@ -1110,22 +1067,19 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
 		page_was_mapped = 1;
 	}
-
 	if (!page_mapped(page))
 		rc = move_to_new_page(newpage, page, mode);
 
 	if (page_was_mapped)
 		remove_migration_ptes(page,
 			rc == MIGRATEPAGE_SUCCESS ? newpage : page, false);
-
-out_unlock_both:
-	unlock_page(newpage);
 out_unlock:
+	unlock_page(newpage);
 	/* Drop an anon_vma reference if we took one */
+out:
 	if (anon_vma)
 		put_anon_vma(anon_vma);
-	unlock_page(page);
-out:
+
 	/*
 	 * If migration is successful, decrease refcount of the newpage
 	 * which will not free the page because new page owner increased
@@ -1141,7 +1095,60 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		else
 			putback_lru_page(newpage);
 	}
+	return rc;
+}
+
+static int __unmap_and_move(struct page *page, struct page *newpage,
+				int force, enum migrate_mode mode)
+{
+	int rc = -EAGAIN;
+
+	if (!trylock_page(page)) {
+		if (!force || mode == MIGRATE_ASYNC)
+			goto out;
+
+		/*
+		 * It's not safe for direct compaction to call lock_page.
+		 * For example, during page readahead pages are added locked
+		 * to the LRU. Later, when the IO completes the pages are
+		 * marked uptodate and unlocked. However, the queueing
+		 * could be merging multiple pages for one bio (e.g.
+		 * mpage_readpages). If an allocation happens for the
+		 * second or third page, the process can end up locking
+		 * the same page twice and deadlocking. Rather than
+		 * trying to be clever about what pages can be locked,
+		 * avoid the use of lock_page for direct compaction
+		 * altogether.
+		 */
+		if (current->flags & PF_MEMALLOC)
+			goto out;
+
+		lock_page(page);
+	}
+	if (PageWriteback(page)) {
+		/*
+		 * Only in the case of a full synchronous migration is it
+		 * necessary to wait for PageWriteback. In the async case,
+		 * the retry loop is too short and in the sync-light case,
+		 * the overhead of stalling is too much
+		 */
+		switch (mode) {
+		case MIGRATE_SYNC:
+		case MIGRATE_SYNC_NO_COPY:
+			break;
+		default:
+			rc = -EBUSY;
+			goto out_unlock;
+		}
+		if (!force)
+			goto out_unlock;
+		wait_on_page_writeback(page);
+	}
+	rc = __unmap_and_move_locked(page, newpage, mode);
+out_unlock:
+	unlock_page(page);
+out:
 	return rc;
 }
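
A note on intended use, not part of the patch itself: after this split, a
caller that already holds the old page's lock can call
__unmap_and_move_locked() directly and skip the trylock/writeback handling
that __unmap_and_move() still performs for cold callers. A minimal sketch of
such a caller follows; migrate_page_prelocked() is a hypothetical name used
here for illustration only:

/*
 * Hypothetical sketch (not in this patch): migrate a page whose lock the
 * caller already holds.  __unmap_and_move_locked() takes the anon_vma
 * reference, unmaps the page, moves its contents and fixes up migration
 * PTEs; the caller remains responsible for unlocking the old page, just
 * as __unmap_and_move() does after its call to the locked helper.
 */
static int migrate_page_prelocked(struct page *page, struct page *newpage,
				  enum migrate_mode mode)
{
	int rc;

	VM_BUG_ON_PAGE(!PageLocked(page), page);	/* caller holds the lock */
	rc = __unmap_and_move_locked(page, newpage, mode);
	unlock_page(page);
	return rc;
}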