From patchwork Fri Nov 15 15:28:18 2019
X-Patchwork-Submitter: Li Xinhai
X-Patchwork-Id: 11246545
From: Li Xinhai <lixinhai.lxh@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Michal Hocko <mhocko@suse.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Hugh Dickins <hughd@google.com>,
	linux-man <linux-man@vger.kernel.org>
Subject: [PATCH v4 2/2] mm: Fix checking unmapped holes for mbind
Date: Fri, 15 Nov 2019 23:28:18 +0800
Message-Id: <1573831698-7700-1-git-send-email-lixinhai.lxh@gmail.com>

mbind() is required to report EFAULT if the range, specified by addr and
len, contains unmapped holes. The current implementation applies the
following rules for this check:

1) Unmapped holes at any part of the specified range are reported as
   EFAULT when mbind() is called with a non-MPOL_DEFAULT policy;
2) Unmapped holes at any part of the specified range are ignored (EFAULT
   is not reported) when mbind() is called with MPOL_DEFAULT;
3) A range lying entirely within an unmapped hole is reported as EFAULT.

Note that rule 2 does not fulfill the mbind() API definition, but since
that behavior has existed for a long time (the internal flag
MPOL_MF_DISCONTIG_OK exists for this purpose), this patch does not
change it.

With the current code, applications observe inconsistent behavior for
rule 1 and rule 2 respectively. That inconsistency is fixed as detailed
below.

Cases of rule 1:

1) Hole at head side of range. Current code reports EFAULT; no change by
   this patch.

	[  vma  ][ hole ][  vma  ]
	            [  range  ]

2) Hole at middle of range. Current code reports EFAULT; no change by
   this patch.

	[  vma  ][ hole ][  vma  ]
	    [       range       ]

3) Hole at tail side of range. Current code does not report EFAULT; this
   patch fixes it.

	[  vma  ][ hole ][  vma  ]
	[  range  ]

Cases of rule 2:

1) Hole at head side of range. Current code reports EFAULT; this patch
   fixes it.

	[  vma  ][ hole ][  vma  ]
	            [  range  ]

2) Hole at middle of range. Current code does not report EFAULT; no
   change by this patch.

	[  vma  ][ hole ][  vma  ]
	    [       range       ]

3) Hole at tail side of range. Current code does not report EFAULT; no
   change by this patch.

	[  vma  ][ hole ][  vma  ]
	[  range  ]

This patch makes no change to rule 3.

The unmapped hole check could also be implemented with .pte_hole()
instead of .test_walk(). But .pte_hole() is called for holes both inside
and outside of vmas, which is more costly, so this patch keeps the
original .test_walk() design.
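
For illustration only (not part of this patch), the userspace sketch
below exercises rule 1 and rule 2: it maps three pages, unmaps the
middle one to create a hole, then calls mbind() across the range. It
assumes libnuma's <numaif.h> (build with -lnuma) and a NUMA-enabled
kernel with node 0 online; the layout and messages are illustrative,
not a definitive test.

  /*
   * Sketch, not part of the patch: demonstrate rule 1 (non-MPOL_DEFAULT
   * reports EFAULT on holes) and rule 2 (MPOL_DEFAULT ignores them).
   * Build: gcc demo.c -lnuma
   */
  #include <stdio.h>
  #include <errno.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <numaif.h>

  int main(void)
  {
          long page = sysconf(_SC_PAGESIZE);
          unsigned long nodemask = 1UL;   /* bind to node 0 */

          /* Map three pages, then unmap the middle one; the layout
           * becomes  [ vma ][ hole ][ vma ]. */
          char *base = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (base == MAP_FAILED)
                  return 1;
          munmap(base + page, page);

          /* Rule 1, hole at middle of range: EFAULT with or without
           * this patch. */
          if (mbind(base, 3 * page, MPOL_BIND, &nodemask,
                    8 * sizeof(nodemask), 0))
                  printf("middle hole, MPOL_BIND: %s\n", strerror(errno));

          /* Rule 1, hole at tail side of range: EFAULT only once this
           * patch is applied. */
          if (mbind(base, 2 * page, MPOL_BIND, &nodemask,
                    8 * sizeof(nodemask), 0))
                  printf("tail hole, MPOL_BIND: %s\n", strerror(errno));

          /* Rule 2: MPOL_DEFAULT ignores the hole inside the range. */
          if (!mbind(base, 3 * page, MPOL_DEFAULT, NULL, 0, 0))
                  printf("middle hole, MPOL_DEFAULT: success\n");

          return 0;
  }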
Fixes: 6f4576e3687b ("mempolicy: apply page table walker on queue_pages_range()")
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-man <linux-man@vger.kernel.org>
Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
---
Resent to fix a whitespace issue.

 mm/mempolicy.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 807f06f..c697b29 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -410,7 +410,9 @@ struct queue_pages {
 	struct list_head *pagelist;
 	unsigned long flags;
 	nodemask_t *nmask;
-	struct vm_area_struct *prev;
+	unsigned long start;
+	unsigned long end;
+	struct vm_area_struct *first;
 };
 
 /*
@@ -619,14 +621,20 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 	unsigned long flags = qp->flags;
 
 	/* range check first */
-	if (!(flags & MPOL_MF_DISCONTIG_OK)) {
-		if (!vma->vm_next && vma->vm_end < end)
-			return -EFAULT;
-		if (qp->prev && qp->prev->vm_end < vma->vm_start)
+	VM_BUG_ON((vma->vm_start > start) || (vma->vm_end < end));
+
+	if (!qp->first) {
+		qp->first = vma;
+		if (!(flags & MPOL_MF_DISCONTIG_OK) &&
+			(qp->start < vma->vm_start))
+			/* hole at head side of range */
 			return -EFAULT;
 	}
-
-	qp->prev = vma;
+	if (!(flags & MPOL_MF_DISCONTIG_OK) &&
+		((vma->vm_end < qp->end) &&
+		(!vma->vm_next || vma->vm_end < vma->vm_next->vm_start)))
+		/* hole at middle or tail of range */
+		return -EFAULT;
 
 	/*
 	 * Need check MPOL_MF_STRICT to return -EIO if possible
@@ -638,8 +646,6 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 	if (endvma > end)
 		endvma = end;
-	if (vma->vm_start > start)
-		start = vma->vm_start;
 
 	if (flags & MPOL_MF_LAZY) {
 		/* Similar to task_numa_work, skip inaccessible VMAs */
@@ -680,14 +686,23 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 		nodemask_t *nodes, unsigned long flags,
 		struct list_head *pagelist)
 {
+	int err;
 	struct queue_pages qp = {
 		.pagelist = pagelist,
 		.flags = flags,
 		.nmask = nodes,
-		.prev = NULL,
+		.start = start,
+		.end = end,
+		.first = NULL,
 	};
 
-	return walk_page_range(mm, start, end, &queue_pages_walk_ops, &qp);
+	err = walk_page_range(mm, start, end, &queue_pages_walk_ops, &qp);
+
+	if (!qp.first)
+		/* whole range in hole */
+		err = -EFAULT;
+
+	return err;
 }
 
 /*
@@ -739,8 +754,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 	unsigned long vmend;
 
 	vma = find_vma(mm, start);
-	if (!vma || vma->vm_start > start)
-		return -EFAULT;
+	VM_BUG_ON(!vma);
 
 	prev = vma->vm_prev;
 	if (start > vma->vm_start)