From patchwork Fri Jan 17 23:22:49 2020
X-Patchwork-Submitter: Wei Yang
X-Patchwork-Id: 11339985
From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org, dan.j.williams@intel.com, aneesh.kumar@linux.ibm.com, kirill@shutemov.name, yang.shi@linux.alibaba.com, richardw.yang@linux.intel.com, thellstrom@vmware.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 0/5] mm/mremap.c: cleanup move_page_tables() a little
Date: Sat, 18 Jan 2020 07:22:49 +0800
Message-Id: <20200117232254.2792-1-richardw.yang@linux.intel.com>
move_page_tables() tries to move page tables at either the PMD or the PTE level. The root requirement is that, to move a whole PMD, both the old and new ranges must be PMD aligned. The current code calculates the old range and the new range separately, which leads to some redundant checks and calculations.

This cleanup consolidates the range check in one place to reduce the extra range handling (a rough userspace sketch of the consolidated loop follows the diffstat below).

Wei Yang (5):
  mm/mremap: format the check in move_normal_pmd() same as move_huge_pmd()
  mm/mremap: it is sure to have enough space when extent meets requirement
  mm/mremap: use pmd_addr_end to calculate next in move_page_tables()
  mm/mremap: calculate extent in one place
  mm/mremap: start addresses are properly aligned

 include/linux/huge_mm.h |  2 +-
 mm/huge_memory.c        |  8 +-------
 mm/mremap.c             | 24 +++++++-----------------
 3 files changed, 9 insertions(+), 25 deletions(-)
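For illustration only, here is a rough userspace sketch of the idea, not the
kernel code from this series: the extent is derived once per iteration from
the old range, pmd_addr_end() style, and a single check on that extent picks
between the PMD and PTE paths. The PMD_SIZE value (2 MiB, i.e. x86_64 with
4 KiB pages) and the addresses in main() are made up for the demo;
pmd_addr_end() below only mirrors the kernel helper's clamping logic.

    /*
     * Standalone sketch: compute "extent" in one place and use it to pick
     * between moving a whole PMD and falling back to PTE copying.
     */
    #include <stdio.h>

    #define PMD_SHIFT 21UL                  /* assumes 2 MiB PMDs */
    #define PMD_SIZE  (1UL << PMD_SHIFT)
    #define PMD_MASK  (~(PMD_SIZE - 1))

    /* Clamp the next PMD boundary to the end of the whole range. */
    static unsigned long pmd_addr_end(unsigned long addr, unsigned long end)
    {
            unsigned long boundary = (addr + PMD_SIZE) & PMD_MASK;

            return (boundary - 1 < end - 1) ? boundary : end;
    }

    int main(void)
    {
            unsigned long old_addr = 0x600000;      /* PMD-aligned start */
            unsigned long old_end = old_addr + 5 * PMD_SIZE + 0x3000;
            unsigned long next, extent;

            for (; old_addr < old_end; old_addr += extent) {
                    next = pmd_addr_end(old_addr, old_end);
                    extent = next - old_addr;       /* calculated once ... */

                    if (extent == PMD_SIZE)         /* ... checked once */
                            printf("move whole PMD at %#lx\n", old_addr);
                    else
                            printf("move %lu bytes of PTEs at %#lx\n",
                                   extent, old_addr);
            }

            return 0;
    }

In the kernel loop the same extent is what gets compared against
HPAGE_PMD_SIZE / PMD_SIZE to choose move_huge_pmd()/move_normal_pmd() over
move_ptes().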