From patchwork Tue Feb 18 17:32:20 2020
X-Patchwork-Submitter: Brian Geffon <bgeffon@google.com>
X-Patchwork-Id: 11389181
Date: Tue, 18 Feb 2020 09:32:20 -0800
Message-Id: <20200218173221.237674-1-bgeffon@google.com>
X-Mailer: git-send-email 2.25.0.265.gbab2e86ba0-goog
Subject: [PATCH v6 1/2] mm: Add MREMAP_DONTUNMAP to mremap().
From: Brian Geffon <bgeffon@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Michael S. Tsirkin" <mst@redhat.com>, Brian Geffon <bgeffon@google.com>,
 Arnd Bergmann <arnd@arndb.de>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-api@vger.kernel.org,
 Andy Lutomirski <luto@amacapital.net>, Will Deacon <will@kernel.org>,
 Andrea Arcangeli <aarcange@redhat.com>, Sonny Rao <sonnyrao@google.com>,
 Minchan Kim <minchan@kernel.org>, Joel Fernandes <joel@joelfernandes.org>,
 Yu Zhao <yuzhao@google.com>, Jesse Barnes <jsbarnes@google.com>,
 Florian Weimer <fweimer@redhat.com>,
 "Kirill A. Shutemov" <kirill@shutemov.name>

When remapping an anonymous, private mapping, if MREMAP_DONTUNMAP is set,
the source mapping will not be removed. The remap operation will be
performed as it normally would be, by moving the page tables over to the
new mapping. The old vma will have any locked flags cleared, have no
pagetables, and any userfaultfds that were watching that range will
continue watching it.

For a mapping that is shared or not anonymous, MREMAP_DONTUNMAP will cause
the mremap() call to fail. Because MREMAP_DONTUNMAP always results in
moving a VMA, you MUST use the MREMAP_MAYMOVE flag. The final result is
two equally sized VMAs where the destination contains the PTEs of the
source.

We hope to use this in Chrome OS, where, combined with userfaultfd, we
could write an anonymous mapping to disk without having to STOP the
process or worry about VMA permission changes.

This feature also has a use case in Android; Lokesh Gidra has said that
"As part of using userfaultfd for GC, We'll have to move the physical
pages of the java heap to a separate location. For this purpose mremap
will be used. Without the MREMAP_DONTUNMAP flag, when I mremap the java
heap, its virtual mapping will be removed as well. Therefore, we'll
require performing mmap immediately after. This is not only time consuming
but also opens a time window where a native thread may call mmap and
reserve the java heap's address range for its own usage. This flag solves
the problem."

v5 -> v6:
  - Code cleanup suggested by Kirill.

v4 -> v5:
  - Correct commit message to more accurately reflect the behavior.
  - Clear VM_LOCKED and VM_LOCKEDONFAULT on the old vma.
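For illustration, a minimal userspace sketch of the intended usage follows.
It is not part of the patch; it assumes a kernel with this series applied
and defines MREMAP_DONTUNMAP locally in case the libc headers do not carry
it yet:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MREMAP_DONTUNMAP
#define MREMAP_DONTUNMAP 4	/* value introduced by this patch */
#endif

int main(void)
{
	size_t len = 4 * 4096;

	/* Anonymous, private: the only kind of mapping the flag accepts. */
	char *src = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED)
		return 1;
	memset(src, 'x', len);

	/*
	 * MREMAP_DONTUNMAP is always a move, so MREMAP_MAYMOVE is
	 * mandatory. The page tables move to dst; src stays mapped but
	 * is left with no PTEs, so a later read faults in fresh zero
	 * pages (or traps into a userfaultfd registered on the range).
	 */
	char *dst = mremap(src, len, len,
			   MREMAP_MAYMOVE | MREMAP_DONTUNMAP);
	if (dst == MAP_FAILED)
		return 1;

	printf("dst[0]=%c src[0]=%d\n", dst[0], src[0]);	/* x 0 */
	return 0;
}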
Signed-off-by: Brian Geffon <bgeffon@google.com>
Tested-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
---
 include/uapi/linux/mman.h |   5 +-
 mm/mremap.c               | 103 ++++++++++++++++++++++++++++++--------
 2 files changed, 85 insertions(+), 23 deletions(-)

diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h
index fc1a64c3447b..923cc162609c 100644
--- a/include/uapi/linux/mman.h
+++ b/include/uapi/linux/mman.h
@@ -5,8 +5,9 @@
 #include <asm/mman.h>
 #include <asm-generic/hugetlb_encode.h>
 
-#define MREMAP_MAYMOVE	1
-#define MREMAP_FIXED	2
+#define MREMAP_MAYMOVE		1
+#define MREMAP_FIXED		2
+#define MREMAP_DONTUNMAP	4
 
 #define OVERCOMMIT_GUESS		0
 #define OVERCOMMIT_ALWAYS		1
diff --git a/mm/mremap.c b/mm/mremap.c
index 1fc8a29fbe3f..fa27103502c5 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -318,8 +318,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 static unsigned long move_vma(struct vm_area_struct *vma,
 		unsigned long old_addr, unsigned long old_len,
 		unsigned long new_len, unsigned long new_addr,
-		bool *locked, struct vm_userfaultfd_ctx *uf,
-		struct list_head *uf_unmap)
+		bool *locked, unsigned long flags,
+		struct vm_userfaultfd_ctx *uf, struct list_head *uf_unmap)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *new_vma;
@@ -408,11 +408,46 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 	if (unlikely(vma->vm_flags & VM_PFNMAP))
 		untrack_pfn_moved(vma);
 
+	if (unlikely(!err && (flags & MREMAP_DONTUNMAP))) {
+		if (vm_flags & VM_ACCOUNT) {
+			/* Always put back VM_ACCOUNT since we won't unmap */
+			vma->vm_flags |= VM_ACCOUNT;
+
+			vm_acct_memory(vma_pages(new_vma));
+		}
+
+		/*
+		 * locked_vm accounting: if the mapping remained the same size
+		 * it will have just moved and we don't need to touch locked_vm
+		 * because we skip the do_unmap. If the mapping shrunk before
+		 * being moved then the do_unmap on that portion will have
+		 * adjusted vm_locked. Only if the mapping grows do we need to
+		 * do something special; the reason is locked_vm only accounts
+		 * for old_len, but we're now adding new_len - old_len locked
+		 * bytes to the new mapping.
+		 */
+		if (vm_flags & VM_LOCKED && new_len > old_len) {
+			mm->locked_vm += (new_len - old_len) >> PAGE_SHIFT;
+			*locked = true;
+		}
+
+		/* We always clear VM_LOCKED[ONFAULT] on the old vma */
+		vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+
+		goto out;
+	}
+
 	if (do_munmap(mm, old_addr, old_len, uf_unmap) < 0) {
 		/* OOM: unable to split vma, just get accounts right */
 		vm_unacct_memory(excess >> PAGE_SHIFT);
 		excess = 0;
 	}
+
+	if (vm_flags & VM_LOCKED) {
+		mm->locked_vm += new_len >> PAGE_SHIFT;
+		*locked = true;
+	}
+out:
 	mm->hiwater_vm = hiwater_vm;
 
 	/* Restore VM_ACCOUNT if one or two pieces of vma left */
@@ -422,16 +457,12 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		vma->vm_next->vm_flags |= VM_ACCOUNT;
 	}
 
-	if (vm_flags & VM_LOCKED) {
-		mm->locked_vm += new_len >> PAGE_SHIFT;
-		*locked = true;
-	}
-
 	return new_addr;
 }
 
 static struct vm_area_struct *vma_to_resize(unsigned long addr,
-	unsigned long old_len, unsigned long new_len, unsigned long *p)
+	unsigned long old_len, unsigned long new_len, unsigned long flags,
+	unsigned long *p)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = find_vma(mm, addr);
@@ -453,6 +484,10 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 		return ERR_PTR(-EINVAL);
 	}
 
+	if (flags & MREMAP_DONTUNMAP && (!vma_is_anonymous(vma) ||
+			vma->vm_flags & VM_SHARED))
+		return ERR_PTR(-EINVAL);
+
 	if (is_vm_hugetlb_page(vma))
 		return ERR_PTR(-EINVAL);
 
@@ -497,7 +532,7 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 
 static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 		unsigned long new_addr, unsigned long new_len, bool *locked,
-		struct vm_userfaultfd_ctx *uf,
+		unsigned long flags, struct vm_userfaultfd_ctx *uf,
 		struct list_head *uf_unmap_early,
 		struct list_head *uf_unmap)
 {
@@ -505,7 +540,7 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 	struct vm_area_struct *vma;
 	unsigned long ret = -EINVAL;
 	unsigned long charged = 0;
-	unsigned long map_flags;
+	unsigned long map_flags = 0;
 
 	if (offset_in_page(new_addr))
 		goto out;
@@ -534,9 +569,11 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 	if ((mm->map_count + 2) >= sysctl_max_map_count - 3)
 		return -ENOMEM;
 
-	ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
-	if (ret)
-		goto out;
+	if (flags & MREMAP_FIXED) {
+		ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
+		if (ret)
+			goto out;
+	}
 
 	if (old_len >= new_len) {
 		ret = do_munmap(mm, addr+new_len, old_len - new_len, uf_unmap);
@@ -545,13 +582,26 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 		old_len = new_len;
 	}
 
-	vma = vma_to_resize(addr, old_len, new_len, &charged);
+	vma = vma_to_resize(addr, old_len, new_len, flags, &charged);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 		goto out;
 	}
 
-	map_flags = MAP_FIXED;
+	/*
+	 * MREMAP_DONTUNMAP expands by new_len + (new_len - old_len), we will
+	 * check that we can expand by new_len and vma_to_resize will handle
+	 * the vma growing which is (new_len - old_len).
+	 */
+	if (flags & MREMAP_DONTUNMAP &&
+		!may_expand_vm(mm, vma->vm_flags, new_len >> PAGE_SHIFT)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (flags & MREMAP_FIXED)
+		map_flags |= MAP_FIXED;
+
 	if (vma->vm_flags & VM_MAYSHARE)
 		map_flags |= MAP_SHARED;
 
@@ -561,10 +611,16 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 	if (offset_in_page(ret))
 		goto out1;
 
-	ret = move_vma(vma, addr, old_len, new_len, new_addr, locked, uf,
+	/* We got a new mapping */
+	if (!(flags & MREMAP_FIXED))
+		new_addr = ret;
+
+	ret = move_vma(vma, addr, old_len, new_len, new_addr, locked, flags, uf,
 		       uf_unmap);
+
 	if (!(offset_in_page(ret)))
 		goto out;
+
 out1:
 	vm_unacct_memory(charged);
 
@@ -609,12 +665,16 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	addr = untagged_addr(addr);
 	new_addr = untagged_addr(new_addr);
 
-	if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE))
+	if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP))
 		return ret;
 
 	if (flags & MREMAP_FIXED && !(flags & MREMAP_MAYMOVE))
 		return ret;
 
+	/* MREMAP_DONTUNMAP is always a move */
+	if (flags & MREMAP_DONTUNMAP && !(flags & MREMAP_MAYMOVE))
+		return ret;
+
 	if (offset_in_page(addr))
 		return ret;
 
@@ -632,9 +692,10 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	if (down_write_killable(&current->mm->mmap_sem))
 		return -EINTR;
 
-	if (flags & MREMAP_FIXED) {
+	if (flags & (MREMAP_FIXED | MREMAP_DONTUNMAP)) {
 		ret = mremap_to(addr, old_len, new_addr, new_len,
-				&locked, &uf, &uf_unmap_early, &uf_unmap);
+				&locked, flags, &uf, &uf_unmap_early,
+				&uf_unmap);
 		goto out;
 	}
 
@@ -662,7 +723,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	/*
 	 * Ok, we need to grow..
 	 */
-	vma = vma_to_resize(addr, old_len, new_len, &charged);
+	vma = vma_to_resize(addr, old_len, new_len, flags, &charged);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 		goto out;
 	}
@@ -712,7 +773,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	}
 
 	ret = move_vma(vma, addr, old_len, new_len, new_addr,
-		       &locked, &uf, &uf_unmap);
+		       &locked, flags, &uf, &uf_unmap);
 	}
 out:
 	if (offset_in_page(ret)) {
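As a quick sanity check of the flag validation added above, both calls in
the following sketch are expected to fail with EINVAL on a patched kernel.
Again a sketch, not part of the patch, assuming MREMAP_DONTUNMAP == 4 as
defined by this series:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MREMAP_DONTUNMAP
#define MREMAP_DONTUNMAP 4	/* value introduced by this patch */
#endif

int main(void)
{
	size_t len = 4096;
	char *priv = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *shared = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	/* Rejected in sys_mremap(): DONTUNMAP without MAYMOVE. */
	if (mremap(priv, len, len, MREMAP_DONTUNMAP) == MAP_FAILED)
		printf("no MAYMOVE:  %s\n", strerror(errno));

	/* Rejected in vma_to_resize(): the vma is shared. */
	if (mremap(shared, len, len,
		   MREMAP_MAYMOVE | MREMAP_DONTUNMAP) == MAP_FAILED)
		printf("shared vma: %s\n", strerror(errno));

	return 0;
}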