From patchwork Wed Sep 26 18:10:33 2018
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 10616495
From: Yang Shi <yang.shi@linux.alibaba.com>
To: mhocko@kernel.org, kirill@shutemov.name, willy@infradead.org,
 ldufour@linux.vnet.ibm.com, vbabka@suse.cz, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 1/2 -mm] mm: mremap: downgrade mmap_sem to read when shrinking
Date: Thu, 27 Sep 2018 02:10:33 +0800
Message-Id: <1537985434-22655-1-git-send-email-yang.shi@linux.alibaba.com>

Other than munmap, mremap may also be used to shrink a memory mapping,
so it too may hold the write mmap_sem for a long time when shrinking a
large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
munmap") describes.

In the shrink use case, mremap() does not manipulate vmas anymore after
the __do_munmap() call, so it is safe to downgrade mmap_sem to read.
The same optimization, which downgrades mmap_sem to read for zapping
pages, is therefore feasible and reasonable for this case as well. With
it, the period of holding the exclusive mmap_sem when shrinking a large
mapping is reduced significantly.
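For background, the pattern __do_munmap() implements when its downgrade
argument is true looks roughly like this (a simplified sketch, not the
literal kernel code; the real logic lives in mm/mmap.c):

	down_write(&mm->mmap_sem);
	/*
	 * Detach the vmas covering the range from the mm. Once detached,
	 * no other thread can reach them, so the expensive page zapping
	 * no longer needs the exclusive lock.
	 */
	detach_vmas_to_be_unmapped(mm, vma, prev, end);
	downgrade_write(&mm->mmap_sem);	/* write -> read, atomically */
	unmap_region(mm, vma, prev, start, end); /* zap under read lock */
	/*
	 * __do_munmap() returns 1 to tell the caller the lock was
	 * downgraded; the caller then releases it with up_read().
	 */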
MREMAP_FIXED and MREMAP_MAYMOVE are more complicated cases for this
optimization since they need to manipulate vmas after do_munmap(), so
downgrading mmap_sem there may create a race window. The simple mapping
shrink is the low-hanging fruit, and together with munmap it should
cover most unmap cases.

Cc: Michal Hocko
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Laurent Dufour
Cc: Vlastimil Babka
Cc: Andrew Morton
Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
---
v2: Rephrase the commit log per Michal

 include/linux/mm.h |  2 ++
 mm/mmap.c          |  4 ++--
 mm/mremap.c        | 17 +++++++++++++----
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8..3028028 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2286,6 +2286,8 @@
 extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot, unsigned long flags,
 	vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
 	struct list_head *uf);
+extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
+		       struct list_head *uf, bool downgrade);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);

diff --git a/mm/mmap.c b/mm/mmap.c
index 847a17d..017bcfa 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
  * work.  This now handles partial unmappings.
  * Jeremy Fitzhardinge
  */
-static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-		       struct list_head *uf, bool downgrade)
+int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+		struct list_head *uf, bool downgrade)
 {
 	unsigned long end;
 	struct vm_area_struct *vma, *prev, *last;

diff --git a/mm/mremap.c b/mm/mremap.c
index 5c2e185..8f1ec2b 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	unsigned long ret = -EINVAL;
 	unsigned long charged = 0;
 	bool locked = false;
+	bool downgrade = false;
 	struct vm_userfaultfd_ctx uf = NULL_VM_UFFD_CTX;
 	LIST_HEAD(uf_unmap_early);
 	LIST_HEAD(uf_unmap);
@@ -561,12 +562,17 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
-	 * do_munmap does all the needed commit accounting
+	 * __do_munmap does all the needed commit accounting, and
+	 * downgrade mmap_sem to read.
 	 */
 	if (old_len >= new_len) {
-		ret = do_munmap(mm, addr+new_len, old_len - new_len, &uf_unmap);
-		if (ret && old_len != new_len)
+		ret = __do_munmap(mm, addr+new_len, old_len - new_len,
+				  &uf_unmap, true);
+		if (ret < 0 && old_len != new_len)
 			goto out;
+		/* Returning 1 indicates mmap_sem is downgraded to read. */
+		else if (ret == 1)
+			downgrade = true;
 		ret = addr;
 		goto out;
 	}
@@ -631,7 +637,10 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 		vm_unacct_memory(charged);
 		locked = 0;
 	}
-	up_write(&current->mm->mmap_sem);
+	if (downgrade)
+		up_read(&current->mm->mmap_sem);
+	else
+		up_write(&current->mm->mmap_sem);
 	if (locked && new_len > old_len)
 		mm_populate(new_addr + old_len, new_len - old_len);
 	userfaultfd_unmap_complete(mm, &uf_unmap_early);
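
For completeness, the user-visible path this patch optimizes is a plain
in-place shrink, e.g. (illustrative test program, not part of the
patch; error handling omitted):

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
	size_t old_len = 1UL << 30;	/* large (1 GB) anonymous mapping */
	size_t new_len = 4096;
	void *p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/*
	 * old_len > new_len with no MREMAP_MAYMOVE/MREMAP_FIXED takes
	 * the shrink branch; with this patch the tail pages are zapped
	 * with mmap_sem held for read instead of write.
	 */
	mremap(p, old_len, new_len, 0);
	return 0;
}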