From patchwork Fri Jul 22 12:19:39 2016
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 9243461
From: Jan Kara <jack@suse.cz>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, Jan Kara <jack@suse.cz>,
 linux-nvdimm@lists.01.org
Subject: [PATCH 13/15] mm: Provide helper for finishing mkwrite faults
Date: Fri, 22 Jul 2016 14:19:39 +0200
Message-Id: <1469189981-19000-14-git-send-email-jack@suse.cz>
In-Reply-To: <1469189981-19000-1-git-send-email-jack@suse.cz>
References: <1469189981-19000-1-git-send-email-jack@suse.cz>
X-Mailer: git-send-email 2.6.6
List-Id: "Linux-nvdimm developer list."

Provide a helper function for finishing write faults due to the PTE being
read-only. The helper will be used by DAX to avoid complicating generic MM
code with DAX locking specifics.
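To make the intended use concrete, here is an illustrative sketch (not part
of the patch) of how a DAX-style ->pfn_mkwrite handler would sit in front of
the new helper. dax_prepare_entry() is a hypothetical placeholder for the
filesystem-specific preparation (e.g. taking the DAX radix tree entry lock)
that protects against concurrent faults and writeback:

/*
 * Illustrative sketch only, not part of this patch; dax_prepare_entry()
 * is a made-up placeholder for fs-specific preparation.
 */
static int example_dax_pfn_mkwrite(struct vm_area_struct *vma,
				   struct vm_fault *vmf)
{
	/* Prepare the entry under DAX locking (hypothetical helper). */
	if (dax_prepare_entry(vma, vmf))
		return VM_FAULT_SIGBUS;

	/*
	 * finish_mkwrite_fault() revalidates the PTE under the PTE lock
	 * and makes it writeable.  A negative return means the PTE
	 * changed while the fault was being prepared; returning 0 makes
	 * the fault simply get retried.
	 */
	if (finish_mkwrite_fault(vma, vmf) < 0)
		return 0;

	return VM_FAULT_WRITE;
}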
Signed-off-by: Jan Kara <jack@suse.cz>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 62 +++++++++++++++++++++++++++++++++++-------------------
 2 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index daf690fccc0c..32ff572a6e6c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -601,6 +601,7 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 void do_set_pte(struct vm_area_struct *vma, unsigned long address,
 		struct page *page, pte_t *pte, bool write, bool anon);
 int finish_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
+int finish_mkwrite_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 #endif
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 1d2916c53d43..30cf7b36df48 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2262,6 +2262,41 @@ oom:
 	return VM_FAULT_OOM;
 }
 
+/**
+ * finish_mkwrite_fault - finish page fault making PTE writeable once the
+ * page is prepared
+ *
+ * @vma: virtual memory area
+ * @vmf: structure describing the fault
+ *
+ * This function handles all that is needed to finish a write page fault due
+ * to the PTE being read-only once the mapped page is prepared. It handles
+ * locking of the PTE and modifying it. The function returns 0 on success and
+ * an error in case the PTE changed before we acquired the PTE lock.
+ *
+ * The function expects the page to be locked or other protection against
+ * concurrent faults / writeback (such as DAX radix tree locks) to be held.
+ */
+int finish_mkwrite_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	unsigned long address = (unsigned long)vmf->virtual_address;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, address, &ptl);
+	/*
+	 * We might have raced with another page fault while we
+	 * released the pte_offset_map_lock.
+	 */
+	if (!pte_same(*pte, vmf->orig_pte)) {
+		pte_unmap_unlock(pte, ptl);
+		return -EBUSY;
+	}
+	wp_page_reuse(vma->vm_mm, vma, address, pte, ptl, vmf->orig_pte,
+		      vmf->page);
+	return 0;
+}
+
 /*
  * Handle write page faults for VM_MIXEDMAP or VM_PFNMAP for a VM_SHARED
  * mapping
@@ -2282,17 +2317,12 @@ static int wp_pfn_shared(struct mm_struct *mm,
 		ret = vma->vm_ops->pfn_mkwrite(vma, &vmf);
 		if (ret & VM_FAULT_ERROR)
 			return ret;
-		page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-		/*
-		 * We might have raced with another page fault while we
-		 * released the pte_offset_map_lock.
-		 */
-		if (!pte_same(*page_table, orig_pte)) {
-			pte_unmap_unlock(page_table, ptl);
+		if (finish_mkwrite_fault(vma, &vmf) < 0)
 			return 0;
-		}
+	} else {
+		wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte,
+			      NULL);
 	}
-	wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte, NULL);
 	return VM_FAULT_WRITE;
 }
 
@@ -2319,28 +2349,16 @@ static int wp_page_shared(struct mm_struct *mm, struct vm_area_struct *vma,
 			put_page(old_page);
 			return tmp;
 		}
-		/*
-		 * Since we dropped the lock we need to revalidate
-		 * the PTE as someone else may have changed it.  If
-		 * they did, we just return, as we can count on the
-		 * MMU to tell us if they didn't also make it writable.
-		 */
-		page_table = pte_offset_map_lock(mm, pmd, address,
-						 &ptl);
-		if (!pte_same(*page_table, orig_pte)) {
+		if (finish_mkwrite_fault(vma, &vmf) < 0) {
 			unlock_page(old_page);
-			pte_unmap_unlock(page_table, ptl);
 			put_page(old_page);
 			return 0;
 		}
-		wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte,
-			      old_page);
 	} else {
 		wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte,
 			      old_page);
 		lock_page(old_page);
 	}
-
 	fault_dirty_shared_page(vma, old_page);
 	put_page(old_page);
 	return VM_FAULT_WRITE;
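For reference, the caller-side pattern both conversions above share, condensed
(this is a restatement of the hunks, not new code in the patch): the caller
provides the protection the helper documents (page lock or DAX entry lock),
checks for a lost PTE race, and otherwise relies on the helper having already
made the PTE writeable:

	if (finish_mkwrite_fault(vma, &vmf) < 0) {
		/* PTE changed under us; unwind and let the fault retry. */
		return 0;
	}
	/* Helper reused the page via wp_page_reuse(); PTE is writeable. */
	return VM_FAULT_WRITE;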