From patchwork Thu Apr 2 02:00:31 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 11469779
From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Zi Yan,
    Andrea Arcangeli, "Kirill A. Shutemov", Vlastimil Babka,
    Alexey Dobriyan, Michal Hocko, Konstantin Khlebnikov,
    "Jérôme Glisse", Yang Shi
Shutemov" , Vlastimil Babka , Alexey Dobriyan , Michal Hocko , Konstantin Khlebnikov , =?utf-8?b?SsOpcsO0bWUg?= =?utf-8?b?R2xpc3Nl?= , Yang Shi Subject: [PATCH -V2] /proc/PID/smaps: Add PMD migration entry parsing Date: Thu, 2 Apr 2020 10:00:31 +0800 Message-Id: <20200402020031.1611223-1-ying.huang@intel.com> X-Mailer: git-send-email 2.25.0 MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Huang Ying Now, when read /proc/PID/smaps, the PMD migration entry in page table is simply ignored. To improve the accuracy of /proc/PID/smaps, its parsing and processing is added. Before the patch, for a fully populated 400 MB anonymous VMA, sometimes some THP pages under migration may be lost as follows. 7f3f6a7e5000-7f3f837e5000 rw-p 00000000 00:00 0 Size: 409600 kB KernelPageSize: 4 kB MMUPageSize: 4 kB Rss: 407552 kB Pss: 407552 kB Shared_Clean: 0 kB Shared_Dirty: 0 kB Private_Clean: 0 kB Private_Dirty: 407552 kB Referenced: 301056 kB Anonymous: 407552 kB LazyFree: 0 kB AnonHugePages: 405504 kB ShmemPmdMapped: 0 kB FilePmdMapped: 0 kB Shared_Hugetlb: 0 kB Private_Hugetlb: 0 kB Swap: 0 kB SwapPss: 0 kB Locked: 0 kB THPeligible: 1 VmFlags: rd wr mr mw me ac After the patch, it will be always, 7f3f6a7e5000-7f3f837e5000 rw-p 00000000 00:00 0 Size: 409600 kB KernelPageSize: 4 kB MMUPageSize: 4 kB Rss: 409600 kB Pss: 409600 kB Shared_Clean: 0 kB Shared_Dirty: 0 kB Private_Clean: 0 kB Private_Dirty: 409600 kB Referenced: 294912 kB Anonymous: 409600 kB LazyFree: 0 kB AnonHugePages: 407552 kB ShmemPmdMapped: 0 kB FilePmdMapped: 0 kB Shared_Hugetlb: 0 kB Private_Hugetlb: 0 kB Swap: 0 kB SwapPss: 0 kB Locked: 0 kB THPeligible: 1 VmFlags: rd wr mr mw me ac Signed-off-by: "Huang, Ying" Reviewed-by: Zi Yan Cc: Andrea Arcangeli Cc: Kirill A. Shutemov Cc: Vlastimil Babka Cc: Alexey Dobriyan Cc: Michal Hocko Cc: Konstantin Khlebnikov Cc: "Jérôme Glisse" Cc: Yang Shi --- v2: - Use thp_migration_supported() in condition to reduce code size if THP migration isn't enabled. - Replace VM_BUG_ON() with VM_WARN_ON_ONCE(), it's not necessary to nuking kernel for this. --- fs/proc/task_mmu.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 8d382d4ec067..9c72f9ce2dd8 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -546,10 +546,19 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, struct mem_size_stats *mss = walk->private; struct vm_area_struct *vma = walk->vma; bool locked = !!(vma->vm_flags & VM_LOCKED); - struct page *page; + struct page *page = NULL; - /* FOLL_DUMP will return -EFAULT on huge zero page */ - page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP); + if (pmd_present(*pmd)) { + /* FOLL_DUMP will return -EFAULT on huge zero page */ + page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP); + } else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) { + swp_entry_t entry = pmd_to_swp_entry(*pmd); + + if (is_migration_entry(entry)) + page = migration_entry_to_page(entry); + else + VM_WARN_ON_ONCE(1); + } if (IS_ERR_OR_NULL(page)) return; if (PageAnon(page)) @@ -578,8 +587,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { - if (pmd_present(*pmd)) - smaps_pmd_entry(pmd, addr, walk); + smaps_pmd_entry(pmd, addr, walk); spin_unlock(ptl); goto out; }