From patchwork Wed Sep  2 16:58:25 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11751381
From: Ralph Campbell
CC: Jerome Glisse, John Hubbard, Alistair Popple, Christoph Hellwig,
    Jason Gunthorpe, Bharata B Rao, Ben Skeggs, Shuah Khan,
    Andrew Morton, Ralph Campbell
Subject: [PATCH v2 2/7] mm/migrate: move migrate_vma_collect_skip()
Date: Wed, 2 Sep 2020 09:58:25 -0700
Message-ID: <20200902165830.5367-3-rcampbell@nvidia.com>
In-Reply-To: <20200902165830.5367-1-rcampbell@nvidia.com>
References: <20200902165830.5367-1-rcampbell@nvidia.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

Move the definition of migrate_vma_collect_skip() earlier in the file so
that it can be called by migrate_vma_collect_hole(). This makes the next
patch easier to read.
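For context, a minimal sketch of how migrate_vma_collect_hole() could
delegate to the relocated helper once it is defined above it. The actual
call site belongs to the next patch in the series and is not shown here,
so the vma_is_anonymous() check below is an assumption for illustration
only:

/*
 * Illustrative sketch only -- the real change lands in the next patch.
 * Assumes the hole handler wants to skip non-anonymous VMAs by
 * delegating to migrate_vma_collect_skip().
 */
static int migrate_vma_collect_hole(unsigned long start,
				    unsigned long end,
				    __always_unused int depth,
				    struct mm_walk *walk)
{
	struct migrate_vma *migrate = walk->private;
	unsigned long addr;

	/* Only anonymous memory can be populated into a hole (assumed policy). */
	if (!vma_is_anonymous(walk->vma))
		return migrate_vma_collect_skip(start, end, walk);

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
		migrate->dst[migrate->npages] = 0;
		migrate->npages++;
		migrate->cpages++;
	}

	return 0;
}

Moving the definition rather than adding a forward declaration follows the
usual convention for static functions in mm/.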
Signed-off-by: Ralph Campbell
---
 mm/migrate.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 4f89360d9e77..ce16ed3deab6 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2254,6 +2254,21 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 #endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_DEVICE_PRIVATE
+static int migrate_vma_collect_skip(unsigned long start,
+				    unsigned long end,
+				    struct mm_walk *walk)
+{
+	struct migrate_vma *migrate = walk->private;
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		migrate->dst[migrate->npages] = 0;
+		migrate->src[migrate->npages++] = 0;
+	}
+
+	return 0;
+}
+
 static int migrate_vma_collect_hole(unsigned long start,
 				    unsigned long end,
 				    __always_unused int depth,
@@ -2282,21 +2297,6 @@ static int migrate_vma_collect_hole(unsigned long start,
 	return 0;
 }
 
-static int migrate_vma_collect_skip(unsigned long start,
-				    unsigned long end,
-				    struct mm_walk *walk)
-{
-	struct migrate_vma *migrate = walk->private;
-	unsigned long addr;
-
-	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		migrate->dst[migrate->npages] = 0;
-		migrate->src[migrate->npages++] = 0;
-	}
-
-	return 0;
-}
-
 static int migrate_vma_collect_pmd(pmd_t *pmdp,
 				   unsigned long start,
 				   unsigned long end,
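For orientation, a simplified sketch of how these collect callbacks are
driven elsewhere in mm/migrate.c (existing code, unchanged by this patch;
the MMU notifier setup is elided for brevity):

/*
 * Orientation only -- condensed from the existing mm/migrate.c, not part
 * of this diff. migrate_vma_setup() collects pages by walking the
 * requested range with these callbacks: populated ranges hit .pmd_entry,
 * unmapped ranges (no page table) hit .pte_hole.
 */
static const struct mm_walk_ops migrate_vma_walk_ops = {
	.pmd_entry		= migrate_vma_collect_pmd,
	.pte_hole		= migrate_vma_collect_hole,
};

static void migrate_vma_collect(struct migrate_vma *migrate)
{
	/* MMU notifier invalidation around the walk elided here. */
	walk_page_range(migrate->vma->vm_mm, migrate->start, migrate->end,
			&migrate_vma_walk_ops, migrate);

	migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT);
}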