From patchwork Fri Nov 30 19:58:11 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10707085
From: Josef Bacik <josef@toxicpanda.com>
To: kernel-team@fb.com, hannes@cmpxchg.org, linux-kernel@vger.kernel.org,
	tj@kernel.org, david@fromorbit.com, akpm@linux-foundation.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, riel@redhat.com,
	jack@suse.cz
Subject: [PATCH 3/4] filemap: drop the mmap_sem for all blocking operations
Date: Fri, 30 Nov 2018 14:58:11 -0500
Message-Id: <20181130195812.19536-4-josef@toxicpanda.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20181130195812.19536-1-josef@toxicpanda.com>
References: <20181130195812.19536-1-josef@toxicpanda.com>

Currently we only drop the mmap_sem if there is contention on the page
lock.
The idea is that we issue readahead and then go to lock the page while
it is under IO, so that we do not hold the mmap_sem during the IO.

The problem with this is that it assumes the readahead actually
accomplished something.  If the box is under extreme memory or IO
pressure we may end up not reading anything at all for readahead, which
means we end up reading in the page while holding the mmap_sem anyway.

Instead, rework the filemap fault path to drop the mmap_sem at any
point where we may do IO or block for an extended period of time.  This
includes issuing readahead, locking the page, and calling ->readpage
because readahead did not occur.  Once we have a fully uptodate page we
can return with VM_FAULT_RETRY and come back again to find our nicely
in-cache page that was read in outside of the mmap_sem.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/filemap.c | 113 ++++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 93 insertions(+), 20 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index f068712c2525..5e76b24b2a0f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2304,28 +2304,44 @@ EXPORT_SYMBOL(generic_file_read_iter);
 #ifdef CONFIG_MMU
 #define MMAP_LOTSAMISS  (100)
 
+static struct file *maybe_unlock_mmap_for_io(struct file *fpin,
+					     struct vm_area_struct *vma,
+					     int flags)
+{
+	if (fpin)
+		return fpin;
+	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
+	    FAULT_FLAG_ALLOW_RETRY) {
+		fpin = get_file(vma->vm_file);
+		up_read(&vma->vm_mm->mmap_sem);
+	}
+	return fpin;
+}
+
 /*
  * Synchronous readahead happens when we don't even find
  * a page in the page cache at all.
  */
-static void do_sync_mmap_readahead(struct vm_area_struct *vma,
-				   struct file_ra_state *ra,
-				   struct file *file,
-				   pgoff_t offset)
+static struct file *do_sync_mmap_readahead(struct vm_area_struct *vma,
+					   struct file_ra_state *ra,
+					   struct file *file,
+					   pgoff_t offset,
+					   int flags)
 {
 	struct address_space *mapping = file->f_mapping;
+	struct file *fpin = NULL;
 
 	/* If we don't want any read-ahead, don't bother */
 	if (vma->vm_flags & VM_RAND_READ)
-		return;
+		return fpin;
 	if (!ra->ra_pages)
-		return;
+		return fpin;
 
 	if (vma->vm_flags & VM_SEQ_READ) {
+		fpin = maybe_unlock_mmap_for_io(fpin, vma, flags);
 		page_cache_sync_readahead(mapping, ra, file, offset,
 					  ra->ra_pages);
-		return;
+		return fpin;
 	}
 
 	/* Avoid banging the cache line if not needed */
@@ -2337,37 +2353,43 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	 * stop bothering with read-ahead. It will only hurt.
 	 */
 	if (ra->mmap_miss > MMAP_LOTSAMISS)
-		return;
+		return fpin;
 
 	/*
 	 * mmap read-around
 	 */
+	fpin = maybe_unlock_mmap_for_io(fpin, vma, flags);
 	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
 	ra->size = ra->ra_pages;
 	ra->async_size = ra->ra_pages / 4;
 	ra_submit(ra, mapping, file);
+	return fpin;
 }
 
 /*
  * Asynchronous readahead happens when we find the page and PG_readahead,
  * so we want to possibly extend the readahead further..
  */
-static void do_async_mmap_readahead(struct vm_area_struct *vma,
-				    struct file_ra_state *ra,
-				    struct file *file,
-				    struct page *page,
-				    pgoff_t offset)
+static struct file *do_async_mmap_readahead(struct vm_area_struct *vma,
+					    struct file_ra_state *ra,
+					    struct file *file,
+					    struct page *page,
+					    pgoff_t offset, int flags)
 {
 	struct address_space *mapping = file->f_mapping;
+	struct file *fpin = NULL;
 
 	/* If we don't want any read-ahead, don't bother */
 	if (vma->vm_flags & VM_RAND_READ)
-		return;
+		return fpin;
 	if (ra->mmap_miss > 0)
 		ra->mmap_miss--;
-	if (PageReadahead(page))
+	if (PageReadahead(page)) {
+		fpin = maybe_unlock_mmap_for_io(fpin, vma, flags);
 		page_cache_async_readahead(mapping, ra, file,
 					   page, offset, ra->ra_pages);
+	}
+	return fpin;
 }
 
 /**
@@ -2397,6 +2419,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 {
 	int error;
 	struct file *file = vmf->vma->vm_file;
+	struct file *fpin = NULL;
 	struct address_space *mapping = file->f_mapping;
 	struct file_ra_state *ra = &file->f_ra;
 	struct inode *inode = mapping->host;
@@ -2418,10 +2441,12 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		 * We found the page, so try async readahead before
 		 * waiting for the lock.
 		 */
-		do_async_mmap_readahead(vmf->vma, ra, file, page, offset);
+		fpin = do_async_mmap_readahead(vmf->vma, ra, file, page, offset,
+					       vmf->flags);
 	} else if (!page) {
 		/* No page in the page cache at all */
-		do_sync_mmap_readahead(vmf->vma, ra, file, offset);
+		fpin = do_sync_mmap_readahead(vmf->vma, ra, file, offset,
+					      vmf->flags);
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
@@ -2433,9 +2458,32 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 			return vmf_error(-ENOMEM);
 	}
 
-	if (!lock_page_or_retry(page, vmf->vma->vm_mm, vmf->flags)) {
-		put_page(page);
-		return ret | VM_FAULT_RETRY;
+	/*
+	 * We are open-coding lock_page_or_retry here because we want to do the
+	 * readpage if necessary while the mmap_sem is dropped.  If there
+	 * happens to be a lock on the page but it wasn't being faulted in we'd
+	 * come back around without ALLOW_RETRY set and then have to do the IO
+	 * under the mmap_sem, which would be a bummer.
+	 */
+	if (!trylock_page(page)) {
+		fpin = maybe_unlock_mmap_for_io(fpin, vmf->vma, vmf->flags);
+		if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
+			goto out_retry;
+		if (vmf->flags & FAULT_FLAG_KILLABLE) {
+			if (__lock_page_killable(page)) {
+				/*
+				 * If we don't have the right flags for
+				 * maybe_unlock_mmap_for_io to do its thing we
+				 * still need to drop the sem and return
+				 * VM_FAULT_RETRY so the upper layer checks the
+				 * signal and takes the appropriate action.
+				 */
+				if (!fpin)
+					up_read(&vmf->vma->vm_mm->mmap_sem);
+				goto out_retry;
+			}
+		} else
+			__lock_page(page);
 	}
 
 	/* Did it get truncated? */
@@ -2453,6 +2501,16 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (unlikely(!PageUptodate(page)))
 		goto page_not_uptodate;
 
+	/*
+	 * We've made it this far and we had to drop our mmap_sem, now is the
+	 * time to return to the upper layer and have it re-find the vma and
+	 * redo the fault.
+	 */
+	if (fpin) {
+		unlock_page(page);
+		goto out_retry;
+	}
+
 	/*
 	 * Found the page and have a reference on it.
 	 * We must recheck i_size under page lock.
@@ -2475,12 +2533,15 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * and we need to check for errors.
 	 */
 	ClearPageError(page);
+	fpin = maybe_unlock_mmap_for_io(fpin, vmf->vma, vmf->flags);
 	error = mapping->a_ops->readpage(file, page);
 	if (!error) {
 		wait_on_page_locked(page);
 		if (!PageUptodate(page))
 			error = -EIO;
 	}
+	if (fpin)
+		goto out_retry;
 	put_page(page);
 
 	if (!error || error == AOP_TRUNCATED_PAGE)
@@ -2489,6 +2550,18 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	/* Things didn't work out. Return zero to tell the mm layer so. */
 	shrink_readahead_size_eio(file, ra);
 	return VM_FAULT_SIGBUS;
+
+out_retry:
+	/*
+	 * We dropped the mmap_sem, so we need to return to the fault handler
+	 * to re-find the vma and come back to find our hopefully still
+	 * populated page.
+	 */
+	if (page)
+		put_page(page);
+	if (fpin)
+		fput(fpin);
+	return ret | VM_FAULT_RETRY;
 }
 
 EXPORT_SYMBOL(filemap_fault);
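
The retry protocol is easy to lose in the hunks, so below is a minimal
userspace sketch of the same dance.  Everything in it is invented for
illustration (a pthread rwlock stands in for the mmap_sem, a sleep
stands in for ->readpage IO); it is not kernel code and not part of the
patch.  What it demonstrates: the fault path drops the shared lock
itself before blocking, finishes the slow work, and returns a retry
status so the caller retakes the lock and retries, by which time the
"page" is already cached.

/* build: cc -pthread retry_sketch.c -o retry_sketch */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;
static int page_uptodate;		/* toy stand-in for the page cache */

enum fault_ret { FAULT_DONE, FAULT_RETRY };

/* Called with mmap_sem held for read, as filemap_fault() is. */
static enum fault_ret handle_fault(void)
{
	if (page_uptodate)
		return FAULT_DONE;	/* fast path: nothing blocks */

	/*
	 * Slow path: drop the lock *before* the blocking work, which is
	 * the move maybe_unlock_mmap_for_io() makes, then ask to retry.
	 */
	pthread_rwlock_unlock(&mmap_sem);
	usleep(1000);			/* stands in for ->readpage() IO */
	page_uptodate = 1;
	return FAULT_RETRY;		/* lock already dropped for caller */
}

int main(void)
{
	enum fault_ret ret;

	do {
		pthread_rwlock_rdlock(&mmap_sem);
		ret = handle_fault();
		if (ret == FAULT_DONE)
			pthread_rwlock_unlock(&mmap_sem);
		/* on FAULT_RETRY the fault path dropped the lock itself */
	} while (ret == FAULT_RETRY);

	printf("page faulted in without sleeping under mmap_sem\n");
	return 0;
}

One detail the sketch leaves out: before dropping mmap_sem the patch
pins vma->vm_file with get_file(), because once the lock is released
the vma (and with it the file reference) may be torn down behind us;
that pinned file is what the out_retry path fput()s.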