From patchwork Sun Jan 31 12:09:32 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wilcox, Matthew R"
X-Patchwork-Id: 8173781
From: Matthew Wilcox
To: Andrew Morton
Cc: linux-nvdimm@lists.01.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v4 5/8] procfs: Add support for PUDs to smaps, clear_refs
	and pagemap
Date: Sun, 31 Jan 2016 23:09:32 +1100
Message-Id: <1454242175-16870-6-git-send-email-matthew.r.wilcox@intel.com>
X-Mailer: git-send-email 2.7.0.rc3
In-Reply-To: <1454242175-16870-1-git-send-email-matthew.r.wilcox@intel.com>
References: <1454242175-16870-1-git-send-email-matthew.r.wilcox@intel.com>
List-Id: "Linux-nvdimm developer list."

From: Matthew Wilcox

Because there's no 'struct page' for DAX THPs, a lot of this code is
simpler than the PMD code it mimics. Extra code would need to be added
to support PUDs of anonymous or page-cache THPs.
Signed-off-by: Matthew Wilcox
---
 fs/proc/task_mmu.c | 109 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 109 insertions(+)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3ba3c64..ea20ce4 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -586,6 +586,33 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 }
 #endif
 
+static int smaps_pud_range(pud_t *pud, unsigned long addr, unsigned long end,
+		struct mm_walk *walk)
+{
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	struct vm_area_struct *vma = walk->vma;
+	struct mem_size_stats *mss = walk->private;
+
+	if (is_huge_zero_pud(*pud))
+		return 0;
+
+	mss->resident += HPAGE_PUD_SIZE;
+	if (vma->vm_flags & VM_SHARED) {
+		if (pud_dirty(*pud))
+			mss->shared_dirty += HPAGE_PUD_SIZE;
+		else
+			mss->shared_clean += HPAGE_PUD_SIZE;
+	} else {
+		if (pud_dirty(*pud))
+			mss->private_dirty += HPAGE_PUD_SIZE;
+		else
+			mss->private_clean += HPAGE_PUD_SIZE;
+	}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
+	return 0;
+}
+
 static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			   struct mm_walk *walk)
 {
@@ -707,6 +734,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
 	struct vm_area_struct *vma = v;
 	struct mem_size_stats mss;
 	struct mm_walk smaps_walk = {
+		.pud_entry = smaps_pud_range,
 		.pmd_entry = smaps_pte_range,
 #ifdef CONFIG_HUGETLB_PAGE
 		.hugetlb_entry = smaps_hugetlb_range,
@@ -889,13 +917,50 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 	set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
 }
 
+static inline void clear_soft_dirty_pud(struct vm_area_struct *vma,
+		unsigned long addr, pud_t *pudp)
+{
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	pud_t pud = pudp_huge_get_and_clear(vma->vm_mm, addr, pudp);
+
+	pud = pud_wrprotect(pud);
+	pud = pud_clear_soft_dirty(pud);
+
+	if (vma->vm_flags & VM_SOFTDIRTY)
+		vma->vm_flags &= ~VM_SOFTDIRTY;
+
+	set_pud_at(vma->vm_mm, addr, pudp, pud);
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+}
 #else
 static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmdp)
 {
 }
+static inline void clear_soft_dirty_pud(struct vm_area_struct *vma,
+		unsigned long addr, pud_t *pudp)
+{
+}
 #endif
 
+static int clear_refs_pud_range(pud_t *pud, unsigned long addr,
+				unsigned long end, struct mm_walk *walk)
+{
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	struct clear_refs_private *cp = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+
+	if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
+		clear_soft_dirty_pud(vma, addr, pud);
+	} else {
+		/* Clear accessed and referenced bits. */
+		pudp_test_and_clear_young(vma, addr, pud);
+	}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
+	return 0;
+}
+
 static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
 {
@@ -1006,6 +1071,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 		.type = type,
 	};
 	struct mm_walk clear_refs_walk = {
+		.pud_entry = clear_refs_pud_range,
 		.pmd_entry = clear_refs_pte_range,
 		.test_walk = clear_refs_test_walk,
 		.mm = mm,
@@ -1170,6 +1236,48 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
 	return make_pme(frame, flags);
 }
 
+static int pagemap_pud_range(pud_t *pudp, unsigned long addr, unsigned long end,
+		struct mm_walk *walk)
+{
+	int err = 0;
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	struct vm_area_struct *vma = walk->vma;
+	struct pagemapread *pm = walk->private;
+	u64 flags = 0, frame = 0;
+	pud_t pud = *pudp;
+
+	if ((vma->vm_flags & VM_SOFTDIRTY) || pud_soft_dirty(pud))
+		flags |= PM_SOFT_DIRTY;
+
+	/*
+	 * Currently pud for thp is always present because thp
+	 * can not be swapped-out, migrated, or HWPOISONed
+	 * (split in such cases instead.)
+	 * This if-check is just to prepare for future implementation.
+	 */
+	if (pud_present(pud)) {
+		flags |= PM_PRESENT;
+		if (!(vma->vm_flags & VM_SHARED))
+			flags |= PM_MMAP_EXCLUSIVE;
+
+		if (pm->show_pfn)
+			frame = pud_pfn(pud) +
+				((addr & ~PUD_MASK) >> PAGE_SHIFT);
+
+		for (; addr != end; addr += PAGE_SIZE) {
+			pagemap_entry_t pme = make_pme(frame, flags);
+
+			err = add_to_pagemap(addr, &pme, pm);
+			if (err)
+				break;
+			if (pm->show_pfn && (flags & PM_PRESENT))
+				frame++;
+		}
+	}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+	return err;
+}
+
 static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			     struct mm_walk *walk)
 {
@@ -1349,6 +1457,7 @@ static ssize_t pagemap_read(struct file *file, char __user *buf,
 	if (!pm.buffer)
 		goto out_mm;
 
+	pagemap_walk.pud_entry = pagemap_pud_range;
 	pagemap_walk.pmd_entry = pagemap_pmd_range;
 	pagemap_walk.pte_hole = pagemap_pte_hole;
 #ifdef CONFIG_HUGETLB_PAGE