From patchwork Thu Sep 27 16:59:41 2018
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 10618345
From: Yang Shi
To: mhocko@kernel.org, kirill@shutemov.name, willy@infradead.org, ldufour@linux.vnet.ibm.com, vbabka@suse.cz, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 1/2 -mm] mm: mremap: downgrade mmap_sem to read when shrinking
Date: Fri, 28 Sep 2018 00:59:41 +0800
Message-Id: <1538067582-60038-1-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Like munmap(), mremap() can be used to shrink a memory mapping, so it too may hold the write mmap_sem for a long time when shrinking a large mapping, as described by commit ("mm: mmap: zap pages with read mmap_sem in munmap").

In the shrink-only case, mremap() no longer manipulates vmas after the __do_munmap() call, so it is safe to downgrade mmap_sem to read at that point. The same optimization used for munmap(), downgrading mmap_sem to read while zapping pages, is therefore feasible and reasonable here as well, and it significantly shortens the period during which the exclusive mmap_sem is held when shrinking a large mapping.
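Concretely, the locking pattern on the shrink path looks roughly like the sketch below. It is illustrative only, not the code of this patch: shrink_mapping_sketch() is a made-up helper, error handling is simplified, and it assumes the __do_munmap() signature introduced by the prerequisite munmap patch and exported by the diff that follows.

#include <linux/mm.h>
#include <linux/userfaultfd_k.h>

/* Illustrative sketch of the downgrade pattern -- not the patch itself. */
static unsigned long shrink_mapping_sketch(struct mm_struct *mm,
					   unsigned long addr,
					   unsigned long old_len,
					   unsigned long new_len)
{
	LIST_HEAD(uf_unmap);
	int ret;

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;

	/*
	 * With downgrade == true, __do_munmap() detaches the vmas while
	 * the write lock is still held, downgrades mmap_sem to read and
	 * then zaps the pages.  A return value of 1 means the lock is
	 * now held for read.
	 */
	ret = __do_munmap(mm, addr + new_len, old_len - new_len,
			  &uf_unmap, true);
	if (ret == 1)
		up_read(&mm->mmap_sem);
	else
		up_write(&mm->mmap_sem);

	userfaultfd_unmap_complete(mm, &uf_unmap);
	return ret < 0 ? ret : addr;
}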
MREMAP_FIXED and MREMAP_MAYMOVE are more complicated to convert to this optimization since they need to manipulate vmas after do_munmap(), so downgrading mmap_sem there could open a race window. The simple shrink case is the low-hanging fruit, and together with munmap() it should cover most unmap use cases.

Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
Cc: Michal Hocko
Cc: Matthew Wilcox
Cc: Laurent Dufour
Cc: Andrew Morton
Signed-off-by: Yang Shi
---
v3: Fixed the comments from Vlastimil and Kirill, and added their Acked-by. Thanks.
v2: Rephrased the commit log per Michal.

 include/linux/mm.h |  2 ++
 mm/mmap.c          |  4 ++--
 mm/mremap.c        | 17 +++++++++++++----
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8..3028028 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2286,6 +2286,8 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot, unsigned long flags,
 	vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
 	struct list_head *uf);
+extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
+		       struct list_head *uf, bool downgrade);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);

diff --git a/mm/mmap.c b/mm/mmap.c
index 847a17d..017bcfa 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
  * work.  This now handles partial unmappings.
  * Jeremy Fitzhardinge
  */
-static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-		       struct list_head *uf, bool downgrade)
+int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+		struct list_head *uf, bool downgrade)
 {
 	unsigned long end;
 	struct vm_area_struct *vma, *prev, *last;
diff --git a/mm/mremap.c b/mm/mremap.c
index 5c2e185..3524d16 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	unsigned long ret = -EINVAL;
 	unsigned long charged = 0;
 	bool locked = false;
+	bool downgraded = false;
 	struct vm_userfaultfd_ctx uf = NULL_VM_UFFD_CTX;
 	LIST_HEAD(uf_unmap_early);
 	LIST_HEAD(uf_unmap);
@@ -561,12 +562,17 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
-	 * do_munmap does all the needed commit accounting
+	 * __do_munmap does all the needed commit accounting, and
+	 * downgrade mmap_sem to read.
 	 */
 	if (old_len >= new_len) {
-		ret = do_munmap(mm, addr+new_len, old_len - new_len, &uf_unmap);
-		if (ret && old_len != new_len)
+		ret = __do_munmap(mm, addr+new_len, old_len - new_len,
+				  &uf_unmap, true);
+		if (ret < 0 && old_len != new_len)
 			goto out;
+		/* Returning 1 indicates mmap_sem is downgraded to read. */
+		else if (ret == 1)
+			downgraded = true;
 		ret = addr;
 		goto out;
 	}
@@ -631,7 +637,10 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 		vm_unacct_memory(charged);
 		locked = 0;
 	}
-	up_write(&current->mm->mmap_sem);
+	if (downgraded)
+		up_read(&current->mm->mmap_sem);
+	else
+		up_write(&current->mm->mmap_sem);
 	if (locked && new_len > old_len)
 		mm_populate(new_addr + old_len, new_len - old_len);
 	userfaultfd_unmap_complete(mm, &uf_unmap_early);
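The shrink case the patch above targets can be exercised from userspace with a shrinking mremap(). The following hypothetical test program is not part of this series, and the 1 GB size is arbitrary; it maps and faults in a large anonymous region, then shrinks it in place, which is exactly the path patched above.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t old_len = 1UL << 30;	/* 1 GB, arbitrary */
	size_t new_len = 4096;
	char *p;

	p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault the pages in so there is really something to zap. */
	for (size_t i = 0; i < old_len; i += 4096)
		p[i] = 1;

	/* Shrinking remap: with the patch, the zap runs under read mmap_sem. */
	if (mremap(p, old_len, new_len, 0) == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	munmap(p, new_len);
	return 0;
}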
From patchwork Thu Sep 27 16:59:42 2018
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 10618347

From: Yang Shi
To: mhocko@kernel.org, kirill@shutemov.name, willy@infradead.org, ldufour@linux.vnet.ibm.com, vbabka@suse.cz, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 2/2 -mm] mm: brk: downgrade mmap_sem to read when shrinking
Date: Fri, 28 Sep 2018 00:59:42 +0800
Message-Id: <1538067582-60038-2-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1538067582-60038-1-git-send-email-yang.shi@linux.alibaba.com>
References: <1538067582-60038-1-git-send-email-yang.shi@linux.alibaba.com>
Like munmap(), brk() can also be used to shrink a memory mapping, so it too may hold the write mmap_sem for a long time when shrinking a large mapping, as described by commit ("mm: mmap: zap pages with read mmap_sem in munmap").

In the shrink case, brk() does not manipulate vmas after the __do_munmap() call, but it does set mm->brk afterwards, which requires the write mmap_sem. A simple trick works around this: set mm->brk before calling __do_munmap(), and restore the original value if __do_munmap() fails. With that in place, it is safe to downgrade mmap_sem to read. The same optimization used for munmap(), downgrading mmap_sem to read while zapping pages, is therefore feasible and reasonable here as well, and it significantly shortens the period during which the exclusive mmap_sem is held when shrinking a large mapping.

Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
Cc: Michal Hocko
Cc: Matthew Wilcox
Cc: Laurent Dufour
Cc: Andrew Morton
Signed-off-by: Yang Shi
---
v3: Fixed the comments from Vlastimil and Kirill, and added their Acked-by. Thanks.
v2: Rephrased the commit log per Michal.

 mm/mmap.c | 43 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 32 insertions(+), 11 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 017bcfa..68dc4fb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -191,16 +191,19 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
 SYSCALL_DEFINE1(brk, unsigned long, brk)
 {
 	unsigned long retval;
-	unsigned long newbrk, oldbrk;
+	unsigned long newbrk, oldbrk, origbrk;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *next;
 	unsigned long min_brk;
 	bool populate;
+	bool downgraded = false;
 	LIST_HEAD(uf);

 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;

+	origbrk = mm->brk;
+
 #ifdef CONFIG_COMPAT_BRK
 	/*
 	 * CONFIG_COMPAT_BRK can still be overridden by setting
@@ -229,14 +232,29 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long

 	newbrk = PAGE_ALIGN(brk);
 	oldbrk = PAGE_ALIGN(mm->brk);
-	if (oldbrk == newbrk)
-		goto set_brk;
+	if (oldbrk == newbrk) {
+		mm->brk = brk;
+		goto success;
+	}

-	/* Always allow shrinking brk. */
+	/*
+	 * Always allow shrinking brk.
+	 * __do_munmap() may downgrade mmap_sem to read.
+	 */
 	if (brk <= mm->brk) {
-		if (!do_munmap(mm, newbrk, oldbrk-newbrk, &uf))
-			goto set_brk;
-		goto out;
+		/*
+		 * mm->brk need to be protected by write mmap_sem, update it
+		 * before downgrading mmap_sem.
+		 * When __do_munmap fail, it will be restored from origbrk.
+		 */
+		mm->brk = brk;
+		retval = __do_munmap(mm, newbrk, oldbrk-newbrk, &uf, true);
+		if (retval < 0) {
+			mm->brk = origbrk;
+			goto out;
+		} else if (retval == 1)
+			downgraded = true;
+		goto success;
 	}

 	/* Check against existing mmap mappings. */
@@ -247,18 +265,21 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
 	/* Ok, looks good - let it rip. */
 	if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0)
 		goto out;
-
-set_brk:
 	mm->brk = brk;
+
+success:
 	populate = newbrk > oldbrk && (mm->def_flags & VM_LOCKED) != 0;
-	up_write(&mm->mmap_sem);
+	if (downgraded)
+		up_read(&mm->mmap_sem);
+	else
+		up_write(&mm->mmap_sem);
 	userfaultfd_unmap_complete(mm, &uf);
 	if (populate)
 		mm_populate(oldbrk, newbrk - oldbrk);
 	return brk;

 out:
-	retval = mm->brk;
+	retval = origbrk;
 	up_write(&mm->mmap_sem);
 	return retval;
 }
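Stripped of the surrounding syscall logic, the mm->brk update-and-rollback trick described above amounts to the sketch below. It is illustrative only: the real code is the brk() hunk above, shrink_brk_sketch() is a made-up name, and the caller is assumed to enter with the write mmap_sem held.

#include <linux/mm.h>
#include <linux/userfaultfd_k.h>

/* Illustrative sketch of the update-then-rollback trick -- not the patch. */
static int shrink_brk_sketch(struct mm_struct *mm, unsigned long newbrk,
			     unsigned long oldbrk, unsigned long brk)
{
	unsigned long origbrk = mm->brk;
	LIST_HEAD(uf);
	int ret;

	/* The write mmap_sem is still held here, so updating mm->brk is safe. */
	mm->brk = brk;

	ret = __do_munmap(mm, newbrk, oldbrk - newbrk, &uf, true);
	if (ret < 0) {
		/* The unmap failed: roll mm->brk back to its old value. */
		mm->brk = origbrk;
		up_write(&mm->mmap_sem);
		return ret;
	}

	if (ret == 1)
		/* __do_munmap() downgraded the lock; release it for read. */
		up_read(&mm->mmap_sem);
	else
		up_write(&mm->mmap_sem);

	userfaultfd_unmap_complete(mm, &uf);
	return 0;
}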