From patchwork Wed Jan 29 00:26:41 2020
From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com,
	kirill@shutemov.name, dan.j.williams@intel.com,
	yang.shi@linux.alibaba.com, thellstrom@vmware.com,
	richardw.yang@linux.intel.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, digetx@gmail.com
Subject: [Patch v2 3/4] mm/mremap: calculate extent in one place
Date: Wed, 29 Jan 2020 08:26:41 +0800
Message-Id: <20200129002642.13508-4-richardw.yang@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200129002642.13508-1-richardw.yang@linux.intel.com>
References: <20200129002642.13508-1-richardw.yang@linux.intel.com>
Page tables are moved on a PMD basis, which requires both the source and
destination ranges to be suitably aligned. The current code works because
move_huge_pmd() and move_normal_pmd() check old_addr and new_addr again,
and we fall back to move_ptes() if either of them is not aligned.

Instead of calculating the extent separately for the source and the
destination, calculate it in one place, so we know up front whether it is
worth trying to move a whole PMD. This makes the logic a little clearer.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 mm/mremap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index c2af8ba4ba43..b2f3344d090a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -258,6 +258,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		extent = next - old_addr;
 		if (extent > old_end - old_addr)
 			extent = old_end - old_addr;
+		next = (new_addr + PMD_SIZE) & PMD_MASK;
+		if (extent > next - new_addr)
+			extent = next - new_addr;
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
@@ -301,9 +304,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
-		if (extent > next - new_addr)
-			extent = next - new_addr;
 		move_ptes(vma, old_pmd, old_addr, old_addr + extent,
 			  new_vma, new_pmd, new_addr, need_rmap_locks);
 	}
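
For illustration, a minimal standalone sketch of the clamping this patch
consolidates: the per-iteration extent is the distance to the next PMD
boundary of old_addr, further limited by the end of the old range and by the
next PMD boundary of new_addr. This is not the kernel code itself; the helper
name calc_extent, the 2 MiB PMD size and the example addresses are
assumptions made only for the sketch.

/*
 * Standalone sketch of the per-iteration extent calculation in
 * move_page_tables() after this patch. Assumes 2 MiB PMDs (x86-64);
 * calc_extent and the addresses in main() are illustrative only.
 */
#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE - 1))

static unsigned long calc_extent(unsigned long old_addr, unsigned long old_end,
				 unsigned long new_addr)
{
	unsigned long next, extent;

	/* Clamp to the next PMD boundary of the source address. */
	next = (old_addr + PMD_SIZE) & PMD_MASK;
	extent = next - old_addr;

	/* Clamp to the end of the old range. */
	if (extent > old_end - old_addr)
		extent = old_end - old_addr;

	/* Clamp to the next PMD boundary of the destination address. */
	next = (new_addr + PMD_SIZE) & PMD_MASK;
	if (extent > next - new_addr)
		extent = next - new_addr;

	return extent;
}

int main(void)
{
	/* Destination not PMD-aligned: only a partial step (0x1ff000). */
	printf("%#lx\n", calc_extent(0x400000, 0x800000, 0x601000));
	/* Both aligned with a full PMD left: a whole PMD step (0x200000). */
	printf("%#lx\n", calc_extent(0x400000, 0x800000, 0x800000));
	return 0;
}

With both clamps applied up front, extent equals PMD_SIZE only when old_addr
and new_addr are both PMD-aligned and at least one whole PMD remains, which
is exactly the case where attempting move_huge_pmd()/move_normal_pmd() makes
sense; otherwise we know immediately that move_ptes() will be used.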