From patchwork Fri Sep 27 07:00:30 2019
X-Patchwork-Submitter: Wei Yang <richardw.yang@linux.intel.com>
X-Patchwork-Id: 11163879
From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org, aarcange@redhat.com, hughd@google.com,
	mike.kravetz@oracle.com
Cc: linux-mm@kvack.org, Wei Yang <richardw.yang@linux.intel.com>
Subject: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size
	calculation
Date: Fri, 27 Sep 2019 15:00:30 +0800
Message-Id: <20190927070032.2129-1-richardw.yang@linux.intel.com>
X-Mailer: git-send-email 2.17.1

In function __mcopy_atomic_hugetlb(), we use two mechanisms to deal with
the huge page size: the local variable vma_hpagesize and the hstate
helper huge_page_size().
Since they are the same value, it is not necessary to use two different
mechanisms. This patch makes the code consistent by using vma_hpagesize
everywhere.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/userfaultfd.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index a998c1b4d8a1..01ad48621bb7 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		pte_t dst_pteval;
 
 		BUG_ON(dst_addr >= dst_start + len);
-		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
+		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
 
 		/*
 		 * Serialize via hugetlb_fault_mutex
@@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		err = -ENOMEM;
-		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
+		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
 		if (!dst_pte) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			goto out_unlock;
@@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 
 			err = copy_huge_page_from_user(page,
 						(const void __user *)src_addr,
-						pages_per_huge_page(h), true);
+						vma_hpagesize / PAGE_SIZE,
+						true);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
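
A note beyond the patch itself: the substitution is safe because, for a
hugetlb VMA, vma_hpagesize is a power of two equal to huge_page_size(h),
so "dst_addr & ~huge_page_mask(h)" equals "dst_addr & (vma_hpagesize - 1)"
and pages_per_huge_page(h) equals vma_hpagesize / PAGE_SIZE. Below is a
minimal stand-alone user-space sketch of that arithmetic; the 2 MiB huge
page size, the 4 KiB PAGE_SIZE and the test address are illustrative
assumptions for the demo, not values taken from the kernel.

/* demo.c - equivalence of the two huge page size mechanisms */
#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL		/* assumed base page size */

int main(void)
{
	unsigned long vma_hpagesize = 2UL << 20;	/* stands in for huge_page_size(h) */
	unsigned long hmask = ~(vma_hpagesize - 1);	/* stands in for huge_page_mask(h) */
	unsigned long dst_addr = 0x40001000UL;		/* arbitrary test address */

	/* The alignment check: old form and new form keep the same bits. */
	assert((dst_addr & ~hmask) == (dst_addr & (vma_hpagesize - 1)));

	/* pages_per_huge_page(h) is simply vma_hpagesize / PAGE_SIZE. */
	printf("pages per huge page: %lu\n", vma_hpagesize / PAGE_SIZE);

	return 0;
}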