From patchwork Sun Dec  1 01:50:22 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268225
Date: Sat, 30 Nov 2019 17:50:22 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, hannes@cmpxchg.org, hdanton@sina.com,
 hughd@google.com, josef@toxicpanda.com, kirill.shutemov@linux.intel.com,
 linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org, willy@infradead.org
Subject: [patch 024/158] mm: drop mmap_sem before calling balance_dirty_pages() in write fault
Message-ID: <20191201015022.dGmFTLZi2%akpm@linux-foundation.org>

From: Johannes Weiner
Subject: mm: drop mmap_sem before calling balance_dirty_pages() in write fault

One of our services is observing hanging ps/top/etc under heavy write
IO, and the task states show this is an mmap_sem priority inversion:

A write fault is holding the mmap_sem in read-mode and waiting for
(heavily cgroup-limited) IO in balance_dirty_pages():

[<0>] balance_dirty_pages+0x724/0x905
[<0>] balance_dirty_pages_ratelimited+0x254/0x390
[<0>] fault_dirty_shared_page.isra.96+0x4a/0x90
[<0>] do_wp_page+0x33e/0x400
[<0>] __handle_mm_fault+0x6f0/0xfa0
[<0>] handle_mm_fault+0xe4/0x200
[<0>] __do_page_fault+0x22b/0x4a0
[<0>] page_fault+0x45/0x50
[<0>] 0xffffffffffffffff

Somebody tries to change the address space, contending for the mmap_sem
in write-mode:

[<0>] call_rwsem_down_write_failed_killable+0x13/0x20
[<0>] do_mprotect_pkey+0xa8/0x330
[<0>] SyS_mprotect+0xf/0x20
[<0>] do_syscall_64+0x5b/0x100
[<0>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[<0>] 0xffffffffffffffff

The waiting writer locks out all subsequent readers to avoid lock
starvation, and several threads can be seen hanging like this:

[<0>] call_rwsem_down_read_failed+0x14/0x30
[<0>] proc_pid_cmdline_read+0xa0/0x480
[<0>] __vfs_read+0x23/0x140
[<0>] vfs_read+0x87/0x130
[<0>] SyS_read+0x42/0x90
[<0>] do_syscall_64+0x5b/0x100
[<0>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[<0>] 0xffffffffffffffff

To fix this, do what we do for cache read faults already: drop the
mmap_sem before calling into anything IO bound, in this case the
balance_dirty_pages() function, and return VM_FAULT_RETRY.

Link: http://lkml.kernel.org/r/20190924194238.GA29030@cmpxchg.org
Signed-off-by: Johannes Weiner
Reviewed-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
Cc: Josef Bacik
Cc: Hillf Danton
Cc: Hugh Dickins
Signed-off-by: Andrew Morton
---

 mm/filemap.c  |   21 ---------------------
 mm/internal.h |   21 +++++++++++++++++++++
 mm/memory.c   |   38 +++++++++++++++++++++++++++-----------
 3 files changed, 48 insertions(+), 32 deletions(-)
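[Editor's note] The core of the change, distilled into a standalone
sketch before the diff itself. This is illustrative only: the function
name throttle_dirty_then_retry() is hypothetical, and it condenses the
real maybe_unlock_mmap_for_io()/fault_dirty_shared_page() pair shown
below, omitting the page_mkwrite and file_update_time() details:

/* Sketch only: pin the file, drop mmap_sem, throttle, ask for a retry. */
static vm_fault_t throttle_dirty_then_retry(struct vm_fault *vmf,
					    struct address_space *mapping)
{
	struct file *fpin = NULL;

	/*
	 * Only drop the mmap_sem when the caller allows a retry and has
	 * not asked for FAULT_FLAG_RETRY_NOWAIT semantics.
	 */
	if ((vmf->flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
	    FAULT_FLAG_ALLOW_RETRY) {
		fpin = get_file(vmf->vma->vm_file);	/* keeps the mapping alive */
		up_read(&vmf->vma->vm_mm->mmap_sem);	/* unblocks waiting writers */
	}

	balance_dirty_pages_ratelimited(mapping);	/* may sleep on slow IO */

	if (fpin) {
		fput(fpin);
		return VM_FAULT_RETRY;	/* tell the arch code to redo the fault */
	}
	return 0;
}

Pinning the file before dropping the lock is what makes this safe: the
vma may be gone by the time we return, but the mapping being throttled
cannot be.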
--- a/mm/filemap.c~mm-drop-mmap_sem-before-calling-balance_dirty_pages-in-write-fault
+++ a/mm/filemap.c
@@ -2329,27 +2329,6 @@ EXPORT_SYMBOL(generic_file_read_iter);
 
 #ifdef CONFIG_MMU
 #define MMAP_LOTSAMISS  (100)
 
-static struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
-					     struct file *fpin)
-{
-	int flags = vmf->flags;
-
-	if (fpin)
-		return fpin;
-
-	/*
-	 * FAULT_FLAG_RETRY_NOWAIT means we don't want to wait on page locks or
-	 * anything, so we only pin the file and drop the mmap_sem if only
-	 * FAULT_FLAG_ALLOW_RETRY is set.
-	 */
-	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
-	    FAULT_FLAG_ALLOW_RETRY) {
-		fpin = get_file(vmf->vma->vm_file);
-		up_read(&vmf->vma->vm_mm->mmap_sem);
-	}
-	return fpin;
-}
-
 /*
  * lock_page_maybe_drop_mmap - lock the page, possibly dropping the mmap_sem
  * @vmf - the vm_fault for this fault.
--- a/mm/internal.h~mm-drop-mmap_sem-before-calling-balance_dirty_pages-in-write-fault
+++ a/mm/internal.h
@@ -362,6 +362,27 @@ vma_address(struct page *page, struct vm
 	return max(start, vma->vm_start);
 }
 
+static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
+						    struct file *fpin)
+{
+	int flags = vmf->flags;
+
+	if (fpin)
+		return fpin;
+
+	/*
+	 * FAULT_FLAG_RETRY_NOWAIT means we don't want to wait on page locks or
+	 * anything, so we only pin the file and drop the mmap_sem if only
+	 * FAULT_FLAG_ALLOW_RETRY is set.
+	 */
+	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
+	    FAULT_FLAG_ALLOW_RETRY) {
+		fpin = get_file(vmf->vma->vm_file);
+		up_read(&vmf->vma->vm_mm->mmap_sem);
+	}
+	return fpin;
+}
+
 #else /* !CONFIG_MMU */
 static inline void clear_page_mlock(struct page *page) { }
 static inline void mlock_vma_page(struct page *page) { }
--- a/mm/memory.c~mm-drop-mmap_sem-before-calling-balance_dirty_pages-in-write-fault
+++ a/mm/memory.c
@@ -2289,10 +2289,11 @@ static vm_fault_t do_page_mkwrite(struct
  *
  * The function expects the page to be locked and unlocks it.
  */
-static void fault_dirty_shared_page(struct vm_area_struct *vma,
-				    struct page *page)
+static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping;
+	struct page *page = vmf->page;
 	bool dirtied;
 	bool page_mkwrite = vma->vm_ops && vma->vm_ops->page_mkwrite;
 
@@ -2307,16 +2308,30 @@ static void fault_dirty_shared_page(stru
 	mapping = page_rmapping(page);
 	unlock_page(page);
 
+	if (!page_mkwrite)
+		file_update_time(vma->vm_file);
+
+	/*
+	 * Throttle page dirtying rate down to writeback speed.
+	 *
+	 * mapping may be NULL here because some device drivers do not
+	 * set page.mapping but still dirty their pages
+	 *
+	 * Drop the mmap_sem before waiting on IO, if we can. The file
+	 * is pinning the mapping, as per above.
+	 */
 	if ((dirtied || page_mkwrite) && mapping) {
-		/*
-		 * Some device drivers do not set page.mapping
-		 * but still dirty their pages
-		 */
+		struct file *fpin;
+
+		fpin = maybe_unlock_mmap_for_io(vmf, NULL);
 		balance_dirty_pages_ratelimited(mapping);
+		if (fpin) {
+			fput(fpin);
+			return VM_FAULT_RETRY;
+		}
 	}
 
-	if (!page_mkwrite)
-		file_update_time(vma->vm_file);
+	return 0;
 }
 
 /*
@@ -2571,6 +2586,7 @@ static vm_fault_t wp_page_shared(struct
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	vm_fault_t ret = VM_FAULT_WRITE;
 
 	get_page(vmf->page);
 
@@ -2594,10 +2610,10 @@ static vm_fault_t wp_page_shared(struct
 		wp_page_reuse(vmf);
 		lock_page(vmf->page);
 	}
-	fault_dirty_shared_page(vma, vmf->page);
+	ret |= fault_dirty_shared_page(vmf);
 	put_page(vmf->page);
 
-	return VM_FAULT_WRITE;
+	return ret;
 }
 
 /*
@@ -3641,7 +3657,7 @@ static vm_fault_t do_shared_fault(struct
 		return ret;
 	}
 
-	fault_dirty_shared_page(vma, vmf->page);
+	ret |= fault_dirty_shared_page(vmf);
 
 	return ret;
 }
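[Editor's note] For context on why returning VM_FAULT_RETRY after
dropping the lock is safe: the architecture fault handler already knows
that the fault path releases the mmap_sem when it returns
VM_FAULT_RETRY, and it retries at most once. A rough sketch of that
consumer, modeled loosely on the x86 __do_page_fault() retry logic of
this era (simplified; signal handling and error paths omitted):

	/* Sketch of the consumer side, not part of this patch: */
	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
	vm_fault_t fault;

retry:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	fault = handle_mm_fault(vma, address, flags);

	if (unlikely(fault & VM_FAULT_RETRY)) {
		/* The fault path has already dropped the mmap_sem for us. */
		if (flags & FAULT_FLAG_ALLOW_RETRY) {
			flags &= ~FAULT_FLAG_ALLOW_RETRY;	/* retry only once */
			flags |= FAULT_FLAG_TRIED;
			goto retry;
		}
	}
	up_read(&mm->mmap_sem);

On the second pass FAULT_FLAG_ALLOW_RETRY is clear, so
maybe_unlock_mmap_for_io() keeps the mmap_sem held and
fault_dirty_shared_page() returns 0, which guarantees forward progress
instead of an endless drop-and-retry loop.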