From patchwork Fri Jul 22 12:19:28 2016
X-Patchwork-Submitter: Jan Kara <jack@suse.cz>
X-Patchwork-Id: 9243423
From: Jan Kara <jack@suse.cz>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, Jan Kara <jack@suse.cz>, linux-nvdimm@lists.01.org
Subject: [PATCH 02/15] mm: Propagate original vm_fault into do_fault_around()
Date: Fri, 22 Jul 2016 14:19:28 +0200
Message-Id: <1469189981-19000-3-git-send-email-jack@suse.cz>
X-Mailer: git-send-email 2.6.6
In-Reply-To: <1469189981-19000-1-git-send-email-jack@suse.cz>
References: <1469189981-19000-1-git-send-email-jack@suse.cz>
List-Id: "Linux-nvdimm developer list."

Propagate the vm_fault structure of the original fault into
do_fault_around(). For now this saves just two arguments of
do_fault_around(), but as more fields get added to struct vm_fault
it will be a bigger win.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/memory.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4ee0aa96d78d..651accbe34cc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2950,13 +2950,14 @@ late_initcall(fault_around_debugfs);
  * fault_around_pages() value (and therefore to page order). This way it's
  * easier to guarantee that we don't cross page table boundaries.
  */
-static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
-		pte_t *pte, pgoff_t pgoff, unsigned int flags)
+static void do_fault_around(struct vm_area_struct *vma, struct vm_fault *vmf,
+		pte_t *pte)
 {
 	unsigned long start_addr, nr_pages, mask;
-	pgoff_t max_pgoff;
-	struct vm_fault vmf;
+	pgoff_t pgoff = vmf->pgoff, max_pgoff;
+	struct vm_fault vmfaround;
 	int off;
+	unsigned long address = (unsigned long)vmf->virtual_address;
 
 	nr_pages = READ_ONCE(fault_around_bytes) >> PAGE_SHIFT;
 	mask = ~(nr_pages * PAGE_SIZE - 1) & PAGE_MASK;
@@ -2985,10 +2986,10 @@ static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
 		pte++;
 	}
 
-	init_vmf(&vmf, vma, start_addr, pgoff, flags);
-	vmf.pte = pte;
-	vmf.max_pgoff = max_pgoff;
-	vma->vm_ops->map_pages(vma, &vmf);
+	init_vmf(&vmfaround, vma, start_addr, pgoff, vmf->flags);
+	vmfaround.pte = pte;
+	vmfaround.max_pgoff = max_pgoff;
+	vma->vm_ops->map_pages(vma, &vmfaround);
 }
 
 static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -3006,7 +3007,7 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 */
 	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
 		pte = pte_offset_map_lock(mm, pmd, address, &ptl);
-		do_fault_around(vma, address, pte, vmf->pgoff, vmf->flags);
+		do_fault_around(vma, vmf, pte);
 		if (!pte_same(*pte, orig_pte))
 			goto unlock_out;
 		pte_unmap_unlock(pte, ptl);