From patchwork Sat Nov 14 00:06:45 2015
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 7616031
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen, Dave Chinner, "J. Bruce Fields", linux-mm@kvack.org,
 Andreas Dilger, "H. Peter Anvin", Jeff Layton, linux-nvdimm@lists.01.org,
 x86@kernel.org, Ingo Molnar, linux-ext4@vger.kernel.org, xfs@oss.sgi.com,
 Alexander Viro, Thomas Gleixner, Theodore Ts'o, Jan Kara,
 linux-fsdevel@vger.kernel.org, Andrew Morton, Matthew Wilcox
Subject: [PATCH v2 06/11] mm: add pgoff_mkclean()
Date: Fri, 13 Nov 2015 17:06:45 -0700
Message-Id: <1447459610-14259-7-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1447459610-14259-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1447459610-14259-1-git-send-email-ross.zwisler@linux.intel.com>

Introduce pgoff_mkclean(), which is conceptually similar to page_mkclean()
except that it works in the absence of struct page and can also be used to
clean PMDs.  This is needed for DAX's dirty page handling.

pgoff_mkclean() doesn't return an error for a missing PTE/PMD when looping
through the VMAs, because there is no requirement that each of the
potentially many VMAs associated with a given struct address_space have a
mapping set up for our pgoff.
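As a rough illustration (not part of this patch), a caller such as a DAX
writeback path could use the new interface like the sketch below.  The
wrapper function and the simple range walk are hypothetical and only meant
to show the calling convention; the pgoff_mkclean() signature itself comes
from this patch.

	/*
	 * Hypothetical caller sketch, not part of this patch:
	 * write-protect and clean every user mapping of the file
	 * offsets [start, end] so that the next store to any of them
	 * takes a write fault and re-dirties the mapping.
	 */
	static void example_mkclean_range(struct address_space *mapping,
					  pgoff_t start, pgoff_t end)
	{
		pgoff_t pgoff;

		for (pgoff = start; pgoff <= end; pgoff++)
			pgoff_mkclean(pgoff, mapping);
	}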
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 include/linux/rmap.h |  5 +++++
 mm/rmap.c            | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 29446ae..171a4ac 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -223,6 +223,11 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
 int page_mkclean(struct page *);
 
 /*
+ * Cleans and write protects the PTEs of shared mappings.
+ */
+void pgoff_mkclean(pgoff_t, struct address_space *);
+
+/*
  * called in munlock()/munmap() path to check for other vmas holding
  * the page mlocked.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index f5b5c1f..8114862 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -586,6 +586,16 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 	return address;
 }
 
+static inline unsigned long
+pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
+{
+	unsigned long address;
+
+	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	return address;
+}
+
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 static void percpu_flush_tlb_batch_pages(void *data)
 {
@@ -1040,6 +1050,47 @@ int page_mkclean(struct page *page)
 }
 EXPORT_SYMBOL_GPL(page_mkclean);
 
+void pgoff_mkclean(pgoff_t pgoff, struct address_space *mapping)
+{
+	struct vm_area_struct *vma;
+	int ret = 0;
+
+	i_mmap_lock_read(mapping);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
+		struct mm_struct *mm = vma->vm_mm;
+		pmd_t pmd, *pmdp = NULL;
+		pte_t pte, *ptep = NULL;
+		unsigned long address;
+		spinlock_t *ptl;
+
+		address = pgoff_address(pgoff, vma);
+
+		/* when this returns successfully ptl is locked */
+		ret = follow_pte_pmd(mm, address, &ptep, &pmdp, &ptl);
+		if (ret)
+			continue;
+
+		if (pmdp) {
+			flush_cache_page(vma, address, pmd_pfn(*pmdp));
+			pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+			pmd = pmd_wrprotect(pmd);
+			pmd = pmd_mkclean(pmd);
+			set_pmd_at(mm, address, pmdp, pmd);
+			spin_unlock(ptl);
+		} else {
+			BUG_ON(!ptep);
+			flush_cache_page(vma, address, pte_pfn(*ptep));
+			pte = ptep_clear_flush(vma, address, ptep);
+			pte = pte_wrprotect(pte);
+			pte = pte_mkclean(pte);
+			set_pte_at(mm, address, ptep, pte);
+			pte_unmap_unlock(ptep, ptl);
+		}
+	}
+	i_mmap_unlock_read(mapping);
+}
+EXPORT_SYMBOL_GPL(pgoff_mkclean);
+
 /**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page: the page to move to our anon_vma