From patchwork Wed Aug 15 18:49:46 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 10566745
From: Yang Shi <yang.shi@linux.alibaba.com>
To: mhocko@kernel.org, willy@infradead.org, ldufour@linux.vnet.ibm.com,
    kirill@shutemov.name, vbabka@suse.cz, akpm@linux-foundation.org,
    peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
    alexander.shishkin@linux.intel.com, jolsa@redhat.com, namhyung@kernel.org
Cc:
    yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC v8 PATCH 1/5] mm: refactor do_munmap() to extract the common part
Date: Thu, 16 Aug 2018 02:49:46 +0800
Message-Id: <1534358990-85530-2-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1534358990-85530-1-git-send-email-yang.shi@linux.alibaba.com>
References: <1534358990-85530-1-git-send-email-yang.shi@linux.alibaba.com>

Introduce three new helper functions:
  * addr_ok()
  * munmap_lookup_vma()
  * munlock_vmas()

They will be used by do_munmap() and, in a later patch, by the new
variant of do_munmap() that zaps large mappings early. There is no
functional change; this is just a code refactor.

Reviewed-by: Laurent Dufour
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 mm/mmap.c | 106 +++++++++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 74 insertions(+), 32 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 17bbf4d..f05f49b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2681,35 +2681,42 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
-/* Munmap is split into 2 main parts -- this part which finds
- * what needs doing, and the areas themselves, which do the
- * work.  This now handles partial unmappings.
- * Jeremy Fitzhardinge
- */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static inline bool addr_ok(unsigned long start, size_t len)
 {
-	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
-
 	if ((offset_in_page(start)) || start > TASK_SIZE ||
 	    len > TASK_SIZE-start)
-		return -EINVAL;
+		return false;
 
-	len = PAGE_ALIGN(len);
-	if (len == 0)
-		return -EINVAL;
+	if (PAGE_ALIGN(len) == 0)
+		return false;
+
+	return true;
+}
+
+/*
+ * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
+ * @mm: mm_struct
+ * @start: start address
+ * @end: end address
+ *
+ * Return: %NULL if no VMA overlaps this range.  An ERR_PTR if an
+ * overlapping VMA could not be split.  Otherwise a pointer to the first
+ * VMA which overlaps the range.
+ */
+static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
+			unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma, *prev, *last;
 
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
-		return 0;
-	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
+		return NULL;
+	/* we have start < vma->vm_end */
 
 	/* if it doesn't overlap, we have nothing.. */
-	end = start + len;
 	if (vma->vm_start >= end)
-		return 0;
+		return NULL;
+	prev = vma->vm_prev;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2727,11 +2734,11 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * its limit temporarily, to help free resources as expected.
 		 */
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 
 		error = __split_vma(mm, vma, start, 0);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 		prev = vma;
 	}
 
@@ -2740,10 +2747,53 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (last && end > last->vm_start) {
 		int error = __split_vma(mm, last, end, 1);
 		if (error)
-			return error;
+			return ERR_PTR(error);
 	}
 	vma = prev ? prev->vm_next : mm->mmap;
+	return vma;
+}
+
+static inline void munlock_vmas(struct vm_area_struct *vma,
+				unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	while (vma && vma->vm_start < end) {
+		if (vma->vm_flags & VM_LOCKED) {
+			mm->locked_vm -= vma_pages(vma);
+			munlock_vma_pages_all(vma);
+		}
+		vma = vma->vm_next;
+	}
+}
+
+/* Munmap is split into 2 main parts -- this part which finds
+ * what needs doing, and the areas themselves, which do the
+ * work.  This now handles partial unmappings.
+ * Jeremy Fitzhardinge
+ */
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *vma, *prev;
+
+	if (!addr_ok(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	vma = munmap_lookup_vma(mm, start, end);
+	if (!vma)
+		return 0;
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	prev = vma->vm_prev;
+
 	if (unlikely(uf)) {
 		/*
 		 * If userfaultfd_unmap_prep returns an error the vmas
@@ -2762,16 +2812,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	/*
 	 * unlock any mlock()ed ranges before detaching vmas
 	 */
-	if (mm->locked_vm) {
-		struct vm_area_struct *tmp = vma;
-		while (tmp && tmp->vm_start < end) {
-			if (tmp->vm_flags & VM_LOCKED) {
-				mm->locked_vm -= vma_pages(tmp);
-				munlock_vma_pages_all(tmp);
-			}
-			tmp = tmp->vm_next;
-		}
-	}
+	if (mm->locked_vm)
+		munlock_vmas(vma, end);
 
 	/*
 	 * Remove the vma's, and unmap the actual pages