From patchwork Thu Oct 18 20:23:12 2018
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10648127
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org,
    tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
    linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com,
    linux-mm@kvack.org
Subject: [PATCH 1/7] mm: infrastructure for page fault page caching
Date: Thu, 18 Oct 2018 16:23:12 -0400
Message-Id: <20181018202318.9131-2-josef@toxicpanda.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20181018202318.9131-1-josef@toxicpanda.com>
References: <20181018202318.9131-1-josef@toxicpanda.com>
X-Mailing-List:
linux-fsdevel@vger.kernel.org

We want to be able to cache the result of a previous loop of a page fault in
the case that we use VM_FAULT_RETRY, so introduce handle_mm_fault_cacheable
that will take a struct vm_fault directly, add a ->cached_page field to
vm_fault, and add helpers to init/cleanup the struct vm_fault.

I've converted x86; other architectures can follow suit if they wish, it's
relatively straightforward.

Signed-off-by: Josef Bacik
---
 arch/x86/mm/fault.c |  6 +++-
 include/linux/mm.h  | 31 +++++++++++++++++++++
 mm/memory.c         | 79 ++++++++++++++++++++++++++++++++---------------------
 3 files changed, 84 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 47bebfe6efa7..ef6e538c4931 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1211,6 +1211,7 @@ static noinline void
 __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		unsigned long address)
 {
+	struct vm_fault vmf = {};
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
@@ -1392,7 +1393,8 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	 * fault, so we read the pkey beforehand.
 	 */
 	pkey = vma_pkey(vma);
-	fault = handle_mm_fault(vma, address, flags);
+	vm_fault_init(&vmf, vma, address, flags);
+	fault = handle_mm_fault_cacheable(&vmf);
 	major |= fault & VM_FAULT_MAJOR;

 	/*
@@ -1408,6 +1410,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 			if (!fatal_signal_pending(tsk))
 				goto retry;
 		}
+		vm_fault_cleanup(&vmf);

 		/* User mode? Just return to handle the fatal exception */
 		if (flags & FAULT_FLAG_USER)
@@ -1418,6 +1421,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		return;
 	}

+	vm_fault_cleanup(&vmf);
 	up_read(&mm->mmap_sem);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		mm_fault_error(regs, error_code, address, &pkey, fault);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8ad4ca..4a84ec976dfc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -360,6 +360,12 @@ struct vm_fault {
					 * is set (which is also implied by
					 * VM_FAULT_ERROR).
					 */
+	struct page *cached_page;	/* ->fault handlers that return
+					 * VM_FAULT_RETRY can store their
+					 * previous page here to be reused the
+					 * next time we loop through the fault
+					 * handler for faster lookup.
+					 */
	/* These three entries are valid only while holding ptl lock */
	pte_t *pte;			/* Pointer to pte entry matching
					 * the 'address'. NULL if the page
@@ -378,6 +384,16 @@ struct vm_fault {
					 */
 };

+static inline void vm_fault_init(struct vm_fault *vmf,
+				 struct vm_area_struct *vma,
+				 unsigned long address,
+				 unsigned int flags)
+{
+	vmf->vma = vma;
+	vmf->address = address;
+	vmf->flags = flags;
+}
+
 /* page entry size for vm->huge_fault() */
 enum page_entry_size {
	PE_SIZE_PTE = 0,
@@ -943,6 +959,14 @@ static inline void put_page(struct page *page)
		__put_page(page);
 }

+static inline void vm_fault_cleanup(struct vm_fault *vmf)
+{
+	if (vmf->cached_page) {
+		put_page(vmf->cached_page);
+		vmf->cached_page = NULL;
+	}
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
@@ -1405,6 +1429,7 @@ int invalidate_inode_page(struct page *page);
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
			unsigned long address, unsigned int flags);
+extern vm_fault_t handle_mm_fault_cacheable(struct vm_fault *vmf);
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
			    unsigned long address, unsigned int fault_flags,
			    bool *unlocked);
@@ -1420,6 +1445,12 @@ static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
	BUG();
	return VM_FAULT_SIGBUS;
 }
+static inline vm_fault_t handle_mm_fault_cacheable(struct vm_fault *vmf)
+{
+	/* should never happen if there's no MMU */
+	BUG();
+	return VM_FAULT_SIGBUS;
+}
 static inline int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
		unsigned long address, unsigned int fault_flags,
		bool *unlocked)
diff --git a/mm/memory.c b/mm/memory.c
index c467102a5cbc..433075f722ea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4024,36 +4024,34 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
  * The mmap_sem may have been released depending on flags and our
  * return value.  See filemap_fault() and __lock_page_or_retry().
  */
-static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
-		unsigned long address, unsigned int flags)
+static vm_fault_t __handle_mm_fault(struct vm_fault *vmf)
 {
-	struct vm_fault vmf = {
-		.vma = vma,
-		.address = address & PAGE_MASK,
-		.flags = flags,
-		.pgoff = linear_page_index(vma, address),
-		.gfp_mask = __get_fault_gfp_mask(vma),
-	};
-	unsigned int dirty = flags & FAULT_FLAG_WRITE;
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned long address = vmf->address;
+	unsigned int dirty = vmf->flags & FAULT_FLAG_WRITE;
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	vm_fault_t ret;

+	vmf->address = address & PAGE_MASK;
+	vmf->pgoff = linear_page_index(vma, address);
+	vmf->gfp_mask = __get_fault_gfp_mask(vma);
+
 	pgd = pgd_offset(mm, address);
 	p4d = p4d_alloc(mm, pgd, address);
 	if (!p4d)
 		return VM_FAULT_OOM;

-	vmf.pud = pud_alloc(mm, p4d, address);
-	if (!vmf.pud)
+	vmf->pud = pud_alloc(mm, p4d, address);
+	if (!vmf->pud)
 		return VM_FAULT_OOM;
-	if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)) {
-		ret = create_huge_pud(&vmf);
+	if (pud_none(*vmf->pud) && transparent_hugepage_enabled(vma)) {
+		ret = create_huge_pud(vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
 	} else {
-		pud_t orig_pud = *vmf.pud;
+		pud_t orig_pud = *vmf->pud;

 		barrier();
 		if (pud_trans_huge(orig_pud) || pud_devmap(orig_pud)) {
@@ -4061,50 +4059,50 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
			/* NUMA case for anonymous PUDs would go here */

			if (dirty && !pud_write(orig_pud)) {
-				ret = wp_huge_pud(&vmf, orig_pud);
+				ret = wp_huge_pud(vmf, orig_pud);
				if (!(ret & VM_FAULT_FALLBACK))
					return ret;
			} else {
-				huge_pud_set_accessed(&vmf, orig_pud);
+				huge_pud_set_accessed(vmf, orig_pud);
				return 0;
			}
		}
	}

-	vmf.pmd = pmd_alloc(mm, vmf.pud, address);
-	if (!vmf.pmd)
+	vmf->pmd = pmd_alloc(mm, vmf->pud, address);
+	if (!vmf->pmd)
 		return VM_FAULT_OOM;
-	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
-		ret = create_huge_pmd(&vmf);
+	if (pmd_none(*vmf->pmd) && transparent_hugepage_enabled(vma)) {
+		ret = create_huge_pmd(vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
 	} else {
-		pmd_t orig_pmd = *vmf.pmd;
+		pmd_t orig_pmd = *vmf->pmd;

 		barrier();
 		if (unlikely(is_swap_pmd(orig_pmd))) {
 			VM_BUG_ON(thp_migration_supported() &&
					  !is_pmd_migration_entry(orig_pmd));
 			if (is_pmd_migration_entry(orig_pmd))
-				pmd_migration_entry_wait(mm, vmf.pmd);
+				pmd_migration_entry_wait(mm, vmf->pmd);
 			return 0;
 		}
 		if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
 			if (pmd_protnone(orig_pmd) && vma_is_accessible(vma))
-				return do_huge_pmd_numa_page(&vmf, orig_pmd);
+				return do_huge_pmd_numa_page(vmf, orig_pmd);

 			if (dirty && !pmd_write(orig_pmd)) {
-				ret = wp_huge_pmd(&vmf, orig_pmd);
+				ret = wp_huge_pmd(vmf, orig_pmd);
 				if (!(ret & VM_FAULT_FALLBACK))
 					return ret;
 			} else {
-				huge_pmd_set_accessed(&vmf, orig_pmd);
+				huge_pmd_set_accessed(vmf, orig_pmd);
 				return 0;
 			}
 		}
 	}

-	return handle_pte_fault(&vmf);
+	return handle_pte_fault(vmf);
 }

 /*
@@ -4113,9 +4111,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
  * The mmap_sem may have been released depending on flags and our
  * return value.  See filemap_fault() and __lock_page_or_retry().
  */
-vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+static vm_fault_t do_handle_mm_fault(struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned int flags = vmf->flags;
 	vm_fault_t ret;

 	__set_current_state(TASK_RUNNING);
@@ -4139,9 +4138,9 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	mem_cgroup_enter_user_fault();

 	if (unlikely(is_vm_hugetlb_page(vma)))
-		ret = hugetlb_fault(vma->vm_mm, vma, address, flags);
+		ret = hugetlb_fault(vma->vm_mm, vma, vmf->address, flags);
 	else
-		ret = __handle_mm_fault(vma, address, flags);
+		ret = __handle_mm_fault(vmf);

 	if (flags & FAULT_FLAG_USER) {
 		mem_cgroup_exit_user_fault();
@@ -4157,8 +4156,26 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	return ret;
 }
+
+vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
+		unsigned int flags)
+{
+	struct vm_fault vmf = {};
+	vm_fault_t ret;
+
+	vm_fault_init(&vmf, vma, address, flags);
+	ret = do_handle_mm_fault(&vmf);
+	vm_fault_cleanup(&vmf);
+	return ret;
+}
 EXPORT_SYMBOL_GPL(handle_mm_fault);

+vm_fault_t handle_mm_fault_cacheable(struct vm_fault *vmf)
+{
+	return do_handle_mm_fault(vmf);
+}
+EXPORT_SYMBOL_GPL(handle_mm_fault_cacheable);
+
 #ifndef __PAGETABLE_P4D_FOLDED
 /*
  * Allocate p4d page table.
From patchwork Thu Oct 18 20:23:13 2018
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10648125
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org,
    tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
    linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com,
    linux-mm@kvack.org
Subject: [PATCH 2/7] mm: drop mmap_sem for page cache read IO submission
Date: Thu, 18 Oct 2018 16:23:13 -0400
Message-Id: <20181018202318.9131-3-josef@toxicpanda.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20181018202318.9131-1-josef@toxicpanda.com>
References: <20181018202318.9131-1-josef@toxicpanda.com>
X-Mailing-List:
linux-fsdevel@vger.kernel.org

From: Johannes Weiner

Reads can take a long time, and if anybody needs to take a write lock on the
mmap_sem it will block any subsequent readers of the mmap_sem while the read
is outstanding, which can cause long delays. Instead, drop the mmap_sem if we
do any reads at all.

Signed-off-by: Johannes Weiner
Signed-off-by: Josef Bacik
---
 mm/filemap.c | 119 ++++++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 90 insertions(+), 29 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 52517f28e6f4..1ed35cd99b2c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2366,6 +2366,18 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 EXPORT_SYMBOL(generic_file_read_iter);

 #ifdef CONFIG_MMU
+static struct file *maybe_unlock_mmap_for_io(struct vm_area_struct *vma, int flags)
+{
+	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) == FAULT_FLAG_ALLOW_RETRY) {
+		struct file *file;
+
+		file = get_file(vma->vm_file);
+		up_read(&vma->vm_mm->mmap_sem);
+		return file;
+	}
+	return NULL;
+}
+
 /**
  * page_cache_read - adds requested page to the page cache if not already there
  * @file:	file to read
@@ -2405,23 +2417,28 @@ static int page_cache_read(struct file *file, pgoff_t offset, gfp_t gfp_mask)
  * Synchronous readahead happens when we don't even find
  * a page in the page cache at all.
  */
-static void do_sync_mmap_readahead(struct vm_area_struct *vma,
-				   struct file_ra_state *ra,
-				   struct file *file,
-				   pgoff_t offset)
+static int do_sync_mmap_readahead(struct vm_area_struct *vma,
+				  struct file_ra_state *ra,
+				  struct file *file,
+				  pgoff_t offset,
+				  int flags)
 {
 	struct address_space *mapping = file->f_mapping;
+	struct file *fpin;

 	/* If we don't want any read-ahead, don't bother */
 	if (vma->vm_flags & VM_RAND_READ)
-		return;
+		return 0;
 	if (!ra->ra_pages)
-		return;
+		return 0;

 	if (vma->vm_flags & VM_SEQ_READ) {
+		fpin = maybe_unlock_mmap_for_io(vma, flags);
 		page_cache_sync_readahead(mapping, ra, file, offset,
					  ra->ra_pages);
-		return;
+		if (fpin)
+			fput(fpin);
+		return fpin ? -EAGAIN : 0;
 	}

 	/* Avoid banging the cache line if not needed */
@@ -2433,7 +2450,9 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	 * stop bothering with read-ahead. It will only hurt.
 	 */
 	if (ra->mmap_miss > MMAP_LOTSAMISS)
-		return;
+		return 0;
+
+	fpin = maybe_unlock_mmap_for_io(vma, flags);

 	/*
 	 * mmap read-around
@@ -2442,28 +2461,40 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	ra->size = ra->ra_pages;
 	ra->async_size = ra->ra_pages / 4;
 	ra_submit(ra, mapping, file);
+
+	if (fpin)
+		fput(fpin);
+
+	return fpin ? -EAGAIN : 0;
 }

 /*
  * Asynchronous readahead happens when we find the page and PG_readahead,
  * so we want to possibly extend the readahead further..
  */
-static void do_async_mmap_readahead(struct vm_area_struct *vma,
-				    struct file_ra_state *ra,
-				    struct file *file,
-				    struct page *page,
-				    pgoff_t offset)
+static int do_async_mmap_readahead(struct vm_area_struct *vma,
+				   struct file_ra_state *ra,
+				   struct file *file,
+				   struct page *page,
+				   pgoff_t offset,
+				   int flags)
 {
 	struct address_space *mapping = file->f_mapping;
+	struct file *fpin;

 	/* If we don't want any read-ahead, don't bother */
 	if (vma->vm_flags & VM_RAND_READ)
-		return;
+		return 0;
 	if (ra->mmap_miss > 0)
 		ra->mmap_miss--;
-	if (PageReadahead(page))
-		page_cache_async_readahead(mapping, ra, file,
-					   page, offset, ra->ra_pages);
+	if (!PageReadahead(page))
+		return 0;
+	fpin = maybe_unlock_mmap_for_io(vma, flags);
+	page_cache_async_readahead(mapping, ra, file,
+				   page, offset, ra->ra_pages);
+	if (fpin)
+		fput(fpin);
+	return fpin ? -EAGAIN : 0;
 }

 /**
@@ -2479,10 +2510,8 @@ static void do_async_mmap_readahead(struct vm_area_struct *vma,
  *
  * vma->vm_mm->mmap_sem must be held on entry.
  *
- * If our return value has VM_FAULT_RETRY set, it's because
- * lock_page_or_retry() returned 0.
- * The mmap_sem has usually been released in this case.
- * See __lock_page_or_retry() for the exception.
+ * If our return value has VM_FAULT_RETRY set, the mmap_sem has
+ * usually been released.
  *
 * If our return value does not have VM_FAULT_RETRY set, the mmap_sem
 * has not been released.
@@ -2492,11 +2521,13 @@ static void do_async_mmap_readahead(struct vm_area_struct *vma,
 vm_fault_t filemap_fault(struct vm_fault *vmf)
 {
 	int error;
+	struct mm_struct *mm = vmf->vma->vm_mm;
 	struct file *file = vmf->vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	struct file_ra_state *ra = &file->f_ra;
 	struct inode *inode = mapping->host;
 	pgoff_t offset = vmf->pgoff;
+	int flags = vmf->flags;
 	pgoff_t max_off;
 	struct page *page;
 	vm_fault_t ret = 0;
@@ -2509,27 +2540,44 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * Do we have something in the page cache already?
 	 */
 	page = find_get_page(mapping, offset);
-	if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
+	if (likely(page) && !(flags & FAULT_FLAG_TRIED)) {
 		/*
 		 * We found the page, so try async readahead before
 		 * waiting for the lock.
 		 */
-		do_async_mmap_readahead(vmf->vma, ra, file, page, offset);
+		error = do_async_mmap_readahead(vmf->vma, ra, file, page, offset, vmf->flags);
+		if (error == -EAGAIN)
+			goto out_retry_wait;
 	} else if (!page) {
 		/* No page in the page cache at all */
-		do_sync_mmap_readahead(vmf->vma, ra, file, offset);
-		count_vm_event(PGMAJFAULT);
-		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
+		count_vm_event(PGMAJFAULT);
+		count_memcg_event_mm(mm, PGMAJFAULT);
+		error = do_sync_mmap_readahead(vmf->vma, ra, file, offset, vmf->flags);
+		if (error == -EAGAIN)
+			goto out_retry_wait;
 retry_find:
 		page = find_get_page(mapping, offset);
 		if (!page)
 			goto no_cached_page;
 	}

-	if (!lock_page_or_retry(page, vmf->vma->vm_mm, vmf->flags)) {
-		put_page(page);
-		return ret | VM_FAULT_RETRY;
+	if (!trylock_page(page)) {
+		if (flags & FAULT_FLAG_ALLOW_RETRY) {
+			if (flags & FAULT_FLAG_RETRY_NOWAIT)
+				goto out_retry;
+			up_read(&mm->mmap_sem);
+			goto out_retry_wait;
+		}
+		if (flags & FAULT_FLAG_KILLABLE) {
+			int ret = __lock_page_killable(page);
+
+			if (ret) {
+				up_read(&mm->mmap_sem);
+				goto out_retry;
+			}
+		} else
+			__lock_page(page);
 	}

 	/* Did it get truncated? */
@@ -2607,6 +2655,19 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * Things didn't work out. Return zero to tell the mm layer so.
 	 */
 	shrink_readahead_size_eio(file, ra);
 	return VM_FAULT_SIGBUS;
+
+out_retry_wait:
+	if (page) {
+		if (flags & FAULT_FLAG_KILLABLE)
+			wait_on_page_locked_killable(page);
+		else
+			wait_on_page_locked(page);
+	}
+
+out_retry:
+	if (page)
+		put_page(page);
+	return ret | VM_FAULT_RETRY;
 }
 EXPORT_SYMBOL(filemap_fault);

From patchwork Thu Oct 18 20:23:14 2018
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10648089
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org,
    tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
    linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com,
    linux-mm@kvack.org
Subject: [PATCH 3/7] mm: drop the mmap_sem in all read fault cases
Date: Thu, 18 Oct 2018 16:23:14 -0400
Message-Id: <20181018202318.9131-4-josef@toxicpanda.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20181018202318.9131-1-josef@toxicpanda.com>
References: <20181018202318.9131-1-josef@toxicpanda.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Johannes' patches didn't quite cover all of the IO cases where we need to drop
the mmap_sem; this patch covers the rest of them.

Signed-off-by: Josef Bacik
---
 mm/filemap.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1ed35cd99b2c..65395ee132a0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2523,6 +2523,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	int error;
 	struct mm_struct *mm = vmf->vma->vm_mm;
 	struct file *file = vmf->vma->vm_file;
+	struct file *fpin = NULL;
 	struct address_space *mapping = file->f_mapping;
 	struct file_ra_state *ra = &file->f_ra;
 	struct inode *inode = mapping->host;
@@ -2610,11 +2611,15 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	return ret | VM_FAULT_LOCKED;

 no_cached_page:
+	fpin = maybe_unlock_mmap_for_io(vmf->vma, vmf->flags);
+
 	/*
 	 * We're only likely to ever get here if MADV_RANDOM is in
 	 * effect.
 	 */
 	error = page_cache_read(file, offset, vmf->gfp_mask);
+	if (fpin)
+		goto out_retry;

 	/*
 	 * The page we want has now been added to the page cache.
@@ -2634,6 +2639,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;

 page_not_uptodate:
+	fpin = maybe_unlock_mmap_for_io(vmf->vma, vmf->flags);
+
 	/*
 	 * Umm, take care of errors if the page isn't up-to-date.
 	 * Try to re-read it _once_. We do this synchronously,
@@ -2647,6 +2654,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		if (!PageUptodate(page))
 			error = -EIO;
 	}
+	if (fpin)
+		goto out_retry;
 	put_page(page);

 	if (!error || error == AOP_TRUNCATED_PAGE)
@@ -2665,6 +2674,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	}

 out_retry:
+	if (fpin)
+		fput(fpin);
 	if (page)
 		put_page(page);
 	return ret | VM_FAULT_RETRY;

From patchwork Thu Oct 18 20:23:15 2018
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10648091
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org,
    tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
    linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com,
    linux-mm@kvack.org
Subject: [PATCH 4/7] mm: use the cached page for filemap_fault
Date:
Thu, 18 Oct 2018 16:23:15 -0400
Message-Id: <20181018202318.9131-5-josef@toxicpanda.com>

If we drop the mmap_sem we have to redo the vma lookup which requires
redoing the fault handler.  Chances are we will just come back to the
same page, so save this page in our vmf->cached_page and reuse it in the
next loop through the fault handler.

Signed-off-by: Josef Bacik
---
 mm/filemap.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 65395ee132a0..5212ab637832 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2530,13 +2530,38 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	pgoff_t offset = vmf->pgoff;
 	int flags = vmf->flags;
 	pgoff_t max_off;
-	struct page *page;
+	struct page *page = NULL;
+	struct page *cached_page = vmf->cached_page;
 	vm_fault_t ret = 0;
 
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
 		return VM_FAULT_SIGBUS;
 
+	/*
+	 * We may have read in the page already and have a page from an earlier
+	 * loop.  If so we need to see if this page is still valid, and if not
+	 * do the whole dance over again.
+	 */
+	if (cached_page) {
+		if (flags & FAULT_FLAG_KILLABLE) {
+			error = lock_page_killable(cached_page);
+			if (error) {
+				up_read(&mm->mmap_sem);
+				goto out_retry;
+			}
+		} else
+			lock_page(cached_page);
+		vmf->cached_page = NULL;
+		if (cached_page->mapping == mapping &&
+		    cached_page->index == offset) {
+			page = cached_page;
+			goto have_cached_page;
+		}
+		unlock_page(cached_page);
+		put_page(cached_page);
+	}
+
 	/*
	 * Do we have something in the page cache already?
	 */
@@ -2587,6 +2612,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		put_page(page);
 		goto retry_find;
 	}
+have_cached_page:
 	VM_BUG_ON_PAGE(page->index != offset, page);
 
 	/*
@@ -2677,7 +2703,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (fpin)
 		fput(fpin);
 	if (page)
-		put_page(page);
+		vmf->cached_page = page;
 	return ret | VM_FAULT_RETRY;
 }
 EXPORT_SYMBOL(filemap_fault);

From patchwork Thu Oct 18 20:23:16 2018
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org, tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com, linux-mm@kvack.org
Subject: [PATCH 5/7] mm: add a flag to indicate we used a cached page
Date: Thu, 18 Oct 2018 16:23:16 -0400
Message-Id: <20181018202318.9131-6-josef@toxicpanda.com>

This is preparation for dropping the mmap_sem in page_mkwrite.  We need
to know if we used our cached page so we can be sure it is the page we
already did the page_mkwrite stuff on so we don't have to redo all of
that work.

Signed-off-by: Josef Bacik
---
 include/linux/mm.h | 6 +++++-
 mm/filemap.c       | 5 ++++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4a84ec976dfc..a7305d193c71 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -318,6 +318,9 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_USER		0x40	/* The fault originated in userspace */
 #define FAULT_FLAG_REMOTE	0x80	/* faulting for non current tsk/mm */
 #define FAULT_FLAG_INSTRUCTION  0x100	/* The fault was during an instruction fetch */
+#define FAULT_FLAG_USED_CACHED	0x200	/* Our vmf->page was from a previous
+					 * loop through the fault handler.
+					 */
 
 #define FAULT_FLAG_TRACE \
 	{ FAULT_FLAG_WRITE,		"WRITE" }, \
@@ -328,7 +331,8 @@ extern pgprot_t protection_map[16];
 	{ FAULT_FLAG_TRIED,		"TRIED" }, \
 	{ FAULT_FLAG_USER,		"USER" }, \
 	{ FAULT_FLAG_REMOTE,		"REMOTE" }, \
-	{ FAULT_FLAG_INSTRUCTION,	"INSTRUCTION" }
+	{ FAULT_FLAG_INSTRUCTION,	"INSTRUCTION" }, \
+	{ FAULT_FLAG_USED_CACHED,	"USED_CACHED" }
 
 /*
  * vm_fault is filled by the the pagefault handler and passed to the vma's
diff --git a/mm/filemap.c b/mm/filemap.c
index 5212ab637832..e9cb44bd35aa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2556,6 +2556,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		if (cached_page->mapping == mapping &&
 		    cached_page->index == offset) {
 			page = cached_page;
+			vmf->flags |= FAULT_FLAG_USED_CACHED;
 			goto have_cached_page;
 		}
 		unlock_page(cached_page);
@@ -2619,8 +2620,10 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * We have a locked page in the page cache, now we need to check
 	 * that it's up-to-date. If not, it is going to be due to an error.
 	 */
-	if (unlikely(!PageUptodate(page)))
+	if (unlikely(!PageUptodate(page))) {
+		vmf->flags &= ~(FAULT_FLAG_USED_CACHED);
 		goto page_not_uptodate;
+	}
 
 	/*
	 * Found the page and have a reference on it.
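[Editor's illustration, not part of the patch series.] Patches 4/7 and 5/7 together make the fault handler cache a page across a VM_FAULT_RETRY and revalidate it on the next pass by comparing its mapping and index. Here is a minimal userspace sketch of only that revalidation step; `struct upage`, `struct fault_ctx`, and `revalidate_cached()` are hypothetical stand-ins for `struct page`, `struct vm_fault`, and the open-coded check in `filemap_fault()`, with all locking and refcounting omitted.

```c
#include <assert.h>
#include <stddef.h>

struct upage {
	void *mapping;	/* which file's page cache the page belongs to */
	long index;	/* page-sized offset of the page within the file */
};

struct fault_ctx {
	struct upage *cached_page;	/* saved across VM_FAULT_RETRY loops */
};

/*
 * Return the cached page if it still belongs to (mapping, index),
 * otherwise return NULL so the caller falls back to a fresh lookup.
 * The real code also locks the page and drops its reference when stale.
 */
struct upage *revalidate_cached(struct fault_ctx *ctx, void *mapping,
				long index)
{
	struct upage *p = ctx->cached_page;

	ctx->cached_page = NULL;	/* cached reference is consumed either way */
	if (p && p->mapping == mapping && p->index == index)
		return p;		/* still the same page: reuse it */
	return NULL;			/* truncated or remapped: redo the dance */
}
```

The same check appears twice in the series: once in `filemap_fault()` (patch 4/7) and, at a higher level, via the FAULT_FLAG_USED_CACHED flag that patch 5/7 adds so `->page_mkwrite` can trust the page it already prepared.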
From patchwork Thu Oct 18 20:23:17 2018
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org, tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com, linux-mm@kvack.org
Subject: [PATCH 6/7] mm: allow ->page_mkwrite to do retries
Date: Thu, 18 Oct 2018 16:23:17 -0400
Message-Id: <20181018202318.9131-7-josef@toxicpanda.com>
Before we didn't set the retry flag on our vm_fault.  We want to allow
file systems to drop the mmap_sem if they so choose, so set this flag
and deal with VM_FAULT_RETRY appropriately.

Signed-off-by: Josef Bacik
---
 mm/memory.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 433075f722ea..c5e81edd94f9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2384,11 +2384,13 @@ static vm_fault_t do_page_mkwrite(struct vm_fault *vmf)
 	unsigned int old_flags = vmf->flags;
 
 	vmf->flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
+	vmf->flags |= old_flags & FAULT_FLAG_ALLOW_RETRY;
 	ret = vmf->vma->vm_ops->page_mkwrite(vmf);
 	/* Restore original flags so that caller is not surprised */
 	vmf->flags = old_flags;
-	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
+			    VM_FAULT_RETRY)))
 		return ret;
 	if (unlikely(!(ret & VM_FAULT_LOCKED))) {
 		lock_page(page);
@@ -2683,7 +2685,8 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		tmp = do_page_mkwrite(vmf);
 		if (unlikely(!tmp || (tmp &
-				      (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
+				      (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
+				       VM_FAULT_RETRY)))) {
 			put_page(vmf->page);
 			return tmp;
 		}
@@ -3716,7 +3719,8 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
 		unlock_page(vmf->page);
 		tmp = do_page_mkwrite(vmf);
 		if (unlikely(!tmp ||
-			     (tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
+			     (tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
+				     VM_FAULT_RETRY)))) {
 			put_page(vmf->page);
 			return tmp;
 		}

From patchwork Thu Oct 18 20:23:18 2018
From: Josef Bacik
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org, tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-btrfs@vger.kernel.org, riel@fb.com, linux-mm@kvack.org
Subject: [PATCH 7/7] btrfs: drop mmap_sem in mkwrite for btrfs
Date: Thu, 18 Oct 2018 16:23:18 -0400
Message-Id: <20181018202318.9131-8-josef@toxicpanda.com>

->page_mkwrite is extremely expensive in btrfs.  We have to reserve
space, which can take 6 lifetimes, and we could possibly have to wait on
writeback on the page, another several lifetimes.  To avoid this simply
drop the mmap_sem if we didn't have the cached page and do all of our
work and return the appropriate retry error.
If we have the cached page we know we did all the right things to set
this page up and we can just carry on.

Signed-off-by: Josef Bacik
---
 fs/btrfs/inode.c   | 41 +++++++++++++++++++++++++++++++++++++++--
 include/linux/mm.h | 14 ++++++++++++++
 mm/filemap.c       |  3 ++-
 3 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 3ea5339603cf..6b723d29bc0c 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8809,7 +8809,9 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
 vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 {
 	struct page *page = vmf->page;
-	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct file *file = vmf->vma->vm_file, *fpin;
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	struct inode *inode = file_inode(file);
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
 	struct btrfs_ordered_extent *ordered;
@@ -8828,6 +8830,29 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 
 	reserved_space = PAGE_SIZE;
 
+	/*
+	 * We have our cached page from a previous mkwrite, check it to make
+	 * sure it's still dirty and our file size matches when we ran mkwrite
+	 * the last time.  If everything is OK then return VM_FAULT_LOCKED,
+	 * otherwise do the mkwrite again.
+	 */
+	if (vmf->flags & FAULT_FLAG_USED_CACHED) {
+		lock_page(page);
+		if (vmf->cached_size == i_size_read(inode) &&
+		    PageDirty(page))
+			return VM_FAULT_LOCKED;
+		unlock_page(page);
+	}
+
+	/*
+	 * mkwrite is extremely expensive, and we are holding the mmap_sem
+	 * during this, which means we can starve out anybody trying to
+	 * down_write(mmap_sem) for a long while, especially if we throw cgroups
+	 * into the mix.  So just drop the mmap_sem and do all of our work,
+	 * we'll loop back through and verify everything is ok the next time and
+	 * hopefully avoid doing the work twice.
+	 */
+	fpin = maybe_unlock_mmap_for_io(vmf->vma, vmf->flags);
 	sb_start_pagefault(inode->i_sb);
 	page_start = page_offset(page);
 	page_end = page_start + PAGE_SIZE - 1;
@@ -8844,7 +8869,7 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 	ret2 = btrfs_delalloc_reserve_space(inode, &data_reserved, page_start,
 					   reserved_space);
 	if (!ret2) {
-		ret2 = file_update_time(vmf->vma->vm_file);
+		ret2 = file_update_time(file);
 		reserved = 1;
 	}
 	if (ret2) {
@@ -8943,6 +8968,14 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 	btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, true);
 	sb_end_pagefault(inode->i_sb);
 	extent_changeset_free(data_reserved);
+	if (fpin) {
+		unlock_page(page);
+		fput(fpin);
+		get_page(page);
+		vmf->cached_size = size;
+		vmf->cached_page = page;
+		return VM_FAULT_RETRY;
+	}
 	return VM_FAULT_LOCKED;
 }
 
@@ -8955,6 +8988,10 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 out_noreserve:
 	sb_end_pagefault(inode->i_sb);
 	extent_changeset_free(data_reserved);
+	if (fpin) {
+		fput(fpin);
+		down_read(&mm->mmap_sem);
+	}
 	return ret;
 }
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7305d193c71..02b420be6b06 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -370,6 +370,13 @@ struct vm_fault {
 					 * next time we loop through the fault
 					 * handler for faster lookup.
 					 */
+	loff_t cached_size;		/* ->page_mkwrite handlers may drop
+					 * the mmap_sem to avoid starvation, in
+					 * which case they need to save the
+					 * i_size in order to verify the cached
+					 * page we're using the next loop
+					 * through hasn't changed under us.
+					 */
 
 	/* These three entries are valid only while holding ptl lock */
 	pte_t *pte;			/* Pointer to pte entry matching
 					 * the 'address'. NULL if the page
@@ -1437,6 +1444,8 @@ extern vm_fault_t handle_mm_fault_cacheable(struct vm_fault *vmf);
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
+extern struct file *maybe_unlock_mmap_for_io(struct vm_area_struct *vma,
+					     int flags);
 void unmap_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t nr, bool even_cows);
 void unmap_mapping_range(struct address_space *mapping,
@@ -1463,6 +1472,11 @@ static inline int fixup_user_fault(struct task_struct *tsk,
 	BUG();
 	return -EFAULT;
 }
+static inline struct file *maybe_unlock_mmap_for_io(struct vm_area_struct *vma,
+						    int flags)
+{
+	return NULL;
+}
 static inline void unmap_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t nr, bool even_cows) { }
 static inline void unmap_mapping_range(struct address_space *mapping,
diff --git a/mm/filemap.c b/mm/filemap.c
index e9cb44bd35aa..8027f082d74f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2366,7 +2366,7 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 EXPORT_SYMBOL(generic_file_read_iter);
 
 #ifdef CONFIG_MMU
-static struct file *maybe_unlock_mmap_for_io(struct vm_area_struct *vma, int flags)
+struct file *maybe_unlock_mmap_for_io(struct vm_area_struct *vma, int flags)
 {
 	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
 	    FAULT_FLAG_ALLOW_RETRY) {
 		struct file *file;
@@ -2377,6 +2377,7 @@ static struct file *maybe_unlock_mmap_for_io(struct vm_area_struct *vma, int fla
 	}
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(maybe_unlock_mmap_for_io);
 
 /**
  * page_cache_read - adds requested page to the page cache if not already there
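[Editor's illustration, not part of the patch series.] The gating logic in `maybe_unlock_mmap_for_io()` above only permits dropping the mmap_sem when the architecture fault path allows a retry (FAULT_FLAG_ALLOW_RETRY) and the caller has not requested a non-blocking fault (FAULT_FLAG_RETRY_NOWAIT). A standalone userspace sketch of just that flag test follows; the `UF_*` values are made up for illustration, while the real `FAULT_FLAG_*` constants live in include/linux/mm.h.

```c
#include <assert.h>

/* Made-up stand-ins for FAULT_FLAG_ALLOW_RETRY / FAULT_FLAG_RETRY_NOWAIT. */
#define UF_ALLOW_RETRY	0x04
#define UF_RETRY_NOWAIT	0x08

/*
 * Mirror of the test in maybe_unlock_mmap_for_io(): the mmap_sem may be
 * dropped only when a retry is allowed AND the caller did not ask for a
 * non-blocking (NOWAIT) fault, i.e. of the two bits exactly ALLOW_RETRY
 * is set.
 */
int may_drop_mmap_sem(int flags)
{
	return (flags & (UF_ALLOW_RETRY | UF_RETRY_NOWAIT)) == UF_ALLOW_RETRY;
}
```

When the test passes, the real function pins the file with `get_file()` and releases the semaphore, so the handler can return VM_FAULT_RETRY and do its slow work without holding up writers of the mmap_sem.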