From patchwork Mon May 18 01:20:47 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554509
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 01/11] mm/page_isolation: prefer the node of the source page
Date: Mon, 18 May 2020 10:20:47 +0900
Message-Id: <1589764857-6800-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

For locality, it is better to migrate the page to the node of the source
page rather than to the node of the current caller's CPU.
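As an illustrative sketch (not part of the diff below; the helper name
migrate_target_node() is made up for illustration):

    /* Sketch only: which node should the migration target come from? */
    static int migrate_target_node(struct page *page)
    {
    	/*
    	 * numa_node_id() would give the node of the CPU that happens to
    	 * run the isolation code; page_to_nid(page) gives the node that
    	 * backs the page being migrated. Preferring the latter keeps
    	 * the data local to where it already lives.
    	 */
    	return page_to_nid(page);
    }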
Signed-off-by: Joonsoo Kim
Acked-by: Roman Gushchin
---
 mm/page_isolation.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 2c11a38..7df89bd 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -300,5 +300,7 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private)
 {
-	return new_page_nodemask(page, numa_node_id(), &node_states[N_MEMORY]);
+	int nid = page_to_nid(page);
+
+	return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
 }

From patchwork Mon May 18 01:20:48 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554511
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 02/11] mm/migrate: move migration helper from .h to .c
Date: Mon, 18 May 2020 10:20:48 +0900
Message-Id: <1589764857-6800-3-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

new_page_nodemask() is not a performance-sensitive function, so it does
not need to be inlined in a header. Move it to a .c file. This is a
preparation step for a future change.
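The mechanical pattern, as a minimal sketch (the real hunks follow
below): the static inline body leaves the header, and only an extern
declaration plus a !CONFIG_MIGRATION stub remain:

    /* include/linux/migrate.h, after the move (sketch only) */
    #ifdef CONFIG_MIGRATION
    extern struct page *new_page_nodemask(struct page *page,
    		int preferred_nid, nodemask_t *nodemask);
    #else
    static inline struct page *new_page_nodemask(struct page *page,
    		int preferred_nid, nodemask_t *nodemask)
    	{ return NULL; }
    #endif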
Signed-off-by: Joonsoo Kim
Acked-by: Mike Kravetz
---
 include/linux/migrate.h | 33 +++++----------------------------
 mm/migrate.c            | 29 +++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 28 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3e546cb..1d70b4a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -31,34 +31,6 @@ enum migrate_reason {
 /* In mm/debug.c; also keep sync with include/trace/events/migrate.h */
 extern const char *migrate_reason_names[MR_TYPES];
 
-static inline struct page *new_page_nodemask(struct page *page,
-				int preferred_nid, nodemask_t *nodemask)
-{
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
-	unsigned int order = 0;
-	struct page *new_page = NULL;
-
-	if (PageHuge(page))
-		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
-				preferred_nid, nodemask);
-
-	if (PageTransHuge(page)) {
-		gfp_mask |= GFP_TRANSHUGE;
-		order = HPAGE_PMD_ORDER;
-	}
-
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
-		gfp_mask |= __GFP_HIGHMEM;
-
-	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
-
-	if (new_page && PageTransHuge(new_page))
-		prep_transhuge_page(new_page);
-
-	return new_page;
-}
-
 #ifdef CONFIG_MIGRATION
 
 extern void putback_movable_pages(struct list_head *l);
@@ -67,6 +39,8 @@ extern int migrate_page(struct address_space *mapping,
 		enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
 		int reason);
+extern struct page *new_page_nodemask(struct page *page,
+		int preferred_nid, nodemask_t *nodemask);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
@@ -85,6 +59,9 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 	free_page_t free, unsigned long private, enum migrate_mode mode,
 	int reason)
 	{ return -ENOSYS; }
+static inline struct page *new_page_nodemask(struct page *page,
+		int preferred_nid, nodemask_t *nodemask)
+	{ return NULL; }
 
 static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return -EBUSY; }
diff --git a/mm/migrate.c b/mm/migrate.c
index 5fed030..a298a8c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1519,6 +1519,35 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
+struct page *new_page_nodemask(struct page *page,
+				int preferred_nid, nodemask_t *nodemask)
+{
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	unsigned int order = 0;
+	struct page *new_page = NULL;
+
+	if (PageHuge(page))
+		return alloc_huge_page_nodemask(
+				page_hstate(compound_head(page)),
+				preferred_nid, nodemask);
+
+	if (PageTransHuge(page)) {
+		gfp_mask |= GFP_TRANSHUGE;
+		order = HPAGE_PMD_ORDER;
+	}
+
+	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
+		gfp_mask |= __GFP_HIGHMEM;
+
+	new_page = __alloc_pages_nodemask(gfp_mask, order,
+				preferred_nid, nodemask);
+
+	if (new_page && PageTransHuge(new_page))
+		prep_transhuge_page(new_page);
+
+	return new_page;
+}
+
 #ifdef CONFIG_NUMA
 
 static int store_status(int __user *status, int start, int value, int nr)

From patchwork Mon May 18 01:20:49 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554513
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 03/11] mm/hugetlb: introduce alloc_control structure to
 simplify migration target allocation APIs
Date: Mon, 18 May 2020 10:20:49 +0900
Message-Id: <1589764857-6800-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

Currently, the page allocation functions for migration require several
arguments, and worse, a following patch will need even more arguments in
order to unify similar functions. To simplify these APIs, this patch
introduces a unified data structure, struct alloc_control, that controls
the allocation behaviour. For clean-up, function declarations are
re-ordered.

Note that the gfp_mask handling in alloc_huge_page_(node|nodemask) is
slightly changed, from assignment to OR. This is safe, since no caller
of these functions passes an extra gfp_mask except htlb_alloc_mask().
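As a sketch of the resulting calling convention (field names as
introduced in mm/internal.h below; example_caller() is a hypothetical
caller, not part of the diff):

    struct alloc_control {
    	int nid;		/* preferred node id */
    	nodemask_t *nmask;	/* allowed nodes, NULL for no restriction */
    	gfp_t gfp_mask;		/* extra flags, OR-ed into the API's mask */
    };

    /* Hypothetical caller: bundle the parameters in one struct instead
     * of passing three separate arguments. */
    static struct page *example_caller(struct hstate *h, int preferred_nid,
    				   nodemask_t *nodemask)
    {
    	struct alloc_control ac = {
    		.nid = preferred_nid,
    		.nmask = nodemask,
    	};

    	return alloc_huge_page_nodemask(h, &ac);
    }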
Signed-off-by: Joonsoo Kim
---
 include/linux/hugetlb.h | 35 +++++++++++++++-------------
 mm/gup.c                | 11 ++++++---
 mm/hugetlb.c            | 62 ++++++++++++++++++++++++-------------------------
 mm/internal.h           |  7 ++++++
 mm/mempolicy.c          | 13 +++++++----
 mm/migrate.c            | 13 +++++++----
 6 files changed, 83 insertions(+), 58 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 0cced41..6da217e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -14,6 +14,7 @@
 struct ctl_table;
 struct user_struct;
 struct mmu_gather;
+struct alloc_control;
 
 #ifndef is_hugepd
 typedef struct { unsigned long pd; } hugepd_t;
@@ -502,15 +503,16 @@ struct huge_bootmem_page {
 	struct hstate *hstate;
 };
 
-struct page *alloc_huge_page(struct vm_area_struct *vma,
-				unsigned long addr, int avoid_reserve);
-struct page *alloc_huge_page_node(struct hstate *h, int nid);
-struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-				nodemask_t *nmask);
+struct page *alloc_migrate_huge_page(struct hstate *h,
+				struct alloc_control *ac);
+struct page *alloc_huge_page_node(struct hstate *h,
+				struct alloc_control *ac);
+struct page *alloc_huge_page_nodemask(struct hstate *h,
+				struct alloc_control *ac);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 				unsigned long address);
-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
-				int nid, nodemask_t *nmask);
+struct page *alloc_huge_page(struct vm_area_struct *vma,
+				unsigned long addr, int avoid_reserve);
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 			pgoff_t idx);
@@ -752,20 +754,14 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
-static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
-					   unsigned long addr,
-					   int avoid_reserve)
-{
-	return NULL;
-}
-
-static inline struct page *alloc_huge_page_node(struct hstate *h, int nid)
+static inline struct page *
+alloc_huge_page_node(struct hstate *h, struct alloc_control *ac)
 {
 	return NULL;
 }
 
 static inline struct page *
-alloc_huge_page_nodemask(struct hstate *h, int preferred_nid, nodemask_t *nmask)
+alloc_huge_page_nodemask(struct hstate *h, struct alloc_control *ac)
 {
 	return NULL;
 }
@@ -777,6 +773,13 @@ static inline struct page *alloc_huge_page_vma(struct hstate *h,
 	return NULL;
 }
 
+static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
+					   unsigned long addr,
+					   int avoid_reserve)
+{
+	return NULL;
+}
+
 static inline int __alloc_bootmem_huge_page(struct hstate *h)
 {
 	return 0;
diff --git a/mm/gup.c b/mm/gup.c
index 0d64ea8..9890fb0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1613,16 +1613,21 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 	if (PageHighMem(page))
 		gfp_mask |= __GFP_HIGHMEM;
 
-#ifdef CONFIG_HUGETLB_PAGE
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
+		struct alloc_control ac = {
+			.nid = nid,
+			.nmask = NULL,
+			.gfp_mask = gfp_mask,
+		};
+
 		/*
 		 * We don't want to dequeue from the pool because pool pages will
 		 * mostly be from the CMA region.
 		 */
-		return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+		return alloc_migrate_huge_page(h, &ac);
 	}
-#endif
+
 	if (PageTransHuge(page)) {
 		struct page *thp;
 		/*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dcb34d7..859dba4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1054,8 +1054,8 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 	return page;
 }
 
-static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask, int nid,
-							nodemask_t *nmask)
+static struct page *dequeue_huge_page_nodemask(struct hstate *h,
+				struct alloc_control *ac)
 {
 	unsigned int cpuset_mems_cookie;
 	struct zonelist *zonelist;
@@ -1063,14 +1063,15 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
 	struct zoneref *z;
 	int node = NUMA_NO_NODE;
 
-	zonelist = node_zonelist(nid, gfp_mask);
+	zonelist = node_zonelist(ac->nid, ac->gfp_mask);
 
 retry_cpuset:
 	cpuset_mems_cookie = read_mems_allowed_begin();
-	for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), nmask) {
+	for_each_zone_zonelist_nodemask(zone, z, zonelist,
+					gfp_zone(ac->gfp_mask), ac->nmask) {
 		struct page *page;
 
-		if (!cpuset_zone_allowed(zone, gfp_mask))
+		if (!cpuset_zone_allowed(zone, ac->gfp_mask))
 			continue;
 		/*
 		 * no need to ask again on the same node. Pool is node rather than
@@ -1106,9 +1107,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 {
 	struct page *page;
 	struct mempolicy *mpol;
-	gfp_t gfp_mask;
-	nodemask_t *nodemask;
-	int nid;
+	struct alloc_control ac = {0};
 
 	/*
 	 * A child process with MAP_PRIVATE mappings created by their parent
@@ -1123,9 +1122,10 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 	if (avoid_reserve && h->free_huge_pages - h->resv_huge_pages == 0)
 		goto err;
 
-	gfp_mask = htlb_alloc_mask(h);
-	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	ac.gfp_mask = htlb_alloc_mask(h);
+	ac.nid = huge_node(vma, address, ac.gfp_mask, &mpol, &ac.nmask);
+
+	page = dequeue_huge_page_nodemask(h, &ac);
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetPagePrivate(page);
 		h->resv_huge_pages--;
@@ -1938,15 +1938,16 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	return page;
 }
 
-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
-				int nid, nodemask_t *nmask)
+struct page *alloc_migrate_huge_page(struct hstate *h,
+				struct alloc_control *ac)
 {
 	struct page *page;
 
 	if (hstate_is_gigantic(h))
 		return NULL;
 
-	page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL);
+	page = alloc_fresh_huge_page(h, ac->gfp_mask,
+				ac->nid, ac->nmask, NULL);
 	if (!page)
 		return NULL;
@@ -1980,36 +1981,37 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 }
 
 /* page migration callback function */
-struct page *alloc_huge_page_node(struct hstate *h, int nid)
+struct page *alloc_huge_page_node(struct hstate *h,
+				struct alloc_control *ac)
 {
-	gfp_t gfp_mask = htlb_alloc_mask(h);
 	struct page *page = NULL;
 
-	if (nid != NUMA_NO_NODE)
-		gfp_mask |= __GFP_THISNODE;
+	ac->gfp_mask |= htlb_alloc_mask(h);
+	if (ac->nid != NUMA_NO_NODE)
+		ac->gfp_mask |= __GFP_THISNODE;
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0)
-		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
+		page = dequeue_huge_page_nodemask(h, ac);
 	spin_unlock(&hugetlb_lock);
 
 	if (!page)
-		page = alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+		page = alloc_migrate_huge_page(h, ac);
 
 	return page;
 }
 
 /* page migration callback function */
-struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-				nodemask_t *nmask)
+struct page *alloc_huge_page_nodemask(struct hstate *h,
+				struct alloc_control *ac)
 {
-	gfp_t gfp_mask = htlb_alloc_mask(h);
+	ac->gfp_mask |= htlb_alloc_mask(h);
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
 		struct page *page;
 
-		page = dequeue_huge_page_nodemask(h, gfp_mask, preferred_nid, nmask);
+		page = dequeue_huge_page_nodemask(h, ac);
 		if (page) {
 			spin_unlock(&hugetlb_lock);
 			return page;
@@ -2017,22 +2019,20 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 	}
 	spin_unlock(&hugetlb_lock);
 
-	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);
+	return alloc_migrate_huge_page(h, ac);
 }
 
 /* mempolicy aware migration callback */
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 		unsigned long address)
 {
+	struct alloc_control ac = {0};
 	struct mempolicy *mpol;
-	nodemask_t *nodemask;
 	struct page *page;
-	gfp_t gfp_mask;
-	int node;
 
-	gfp_mask = htlb_alloc_mask(h);
-	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = alloc_huge_page_nodemask(h, node, nodemask);
+	ac.gfp_mask = htlb_alloc_mask(h);
+	ac.nid = huge_node(vma, address, ac.gfp_mask, &mpol, &ac.nmask);
+	page = alloc_huge_page_nodemask(h, &ac);
 	mpol_cond_put(mpol);
 
 	return page;
diff --git a/mm/internal.h b/mm/internal.h
index 791e4b5a..75b3f8e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -613,4 +613,11 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 
 void setup_zone_pageset(struct zone *zone);
 extern struct page *alloc_new_node_page(struct page *page, unsigned long node);
+
+struct alloc_control {
+	int nid;
+	nodemask_t *nmask;
+	gfp_t gfp_mask;
+};
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1965e26..06f60a5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1068,10 +1068,15 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 /* page allocation callback for NUMA node migration */
 struct page *alloc_new_node_page(struct page *page, unsigned long node)
 {
-	if (PageHuge(page))
-		return alloc_huge_page_node(page_hstate(compound_head(page)),
-					node);
-	else if (PageTransHuge(page)) {
+	if (PageHuge(page)) {
+		struct hstate *h = page_hstate(page);
+		struct alloc_control ac = {
+			.nid = node,
+			.nmask = NULL,
+		};
+
+		return alloc_huge_page_node(h, &ac);
+	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
 		thp = alloc_pages_node(node,
diff --git a/mm/migrate.c b/mm/migrate.c
index a298a8c..94d2386 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1526,10 +1526,15 @@ struct page *new_page_nodemask(struct page *page,
 	unsigned int order = 0;
 	struct page *new_page = NULL;
 
-	if (PageHuge(page))
-		return alloc_huge_page_nodemask(
-				page_hstate(compound_head(page)),
-				preferred_nid, nodemask);
+	if (PageHuge(page)) {
+		struct hstate *h = page_hstate(page);
+		struct alloc_control ac = {
+			.nid = preferred_nid,
+			.nmask = nodemask,
+		};
+
+		return alloc_huge_page_nodemask(h, &ac);
+	}
 
 	if (PageTransHuge(page)) {
 		gfp_mask |= GFP_TRANSHUGE;

From patchwork Mon May 18 01:20:50 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554515
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 04/11] mm/hugetlb: unify hugetlb migration callback function
Date: Mon, 18 May 2020 10:20:50 +0900
Message-Id: <1589764857-6800-5-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the
__GFP_THISNODE handling. This patch adds one more field to struct
alloc_control and handles this exception there, so that the two
callbacks can be unified.

Signed-off-by: Joonsoo Kim
---
 include/linux/hugetlb.h |  8 --------
 mm/hugetlb.c            | 23 ++---------------------
 mm/internal.h           |  1 +
 mm/mempolicy.c          |  3 ++-
 4 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6da217e..4892ed3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -505,8 +503,6 @@ struct huge_bootmem_page {
 
 struct page *alloc_migrate_huge_page(struct hstate *h,
 				struct alloc_control *ac);
-struct page *alloc_huge_page_node(struct hstate *h,
-				struct alloc_control *ac);
 struct page *alloc_huge_page_nodemask(struct hstate *h,
 				struct alloc_control *ac);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
@@ -755,12 +753,6 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 struct hstate {};
 
 static inline struct page *
-alloc_huge_page_node(struct hstate *h, struct alloc_control *ac)
-{
-	return NULL;
-}
-
-static inline struct page *
 alloc_huge_page_nodemask(struct hstate *h, struct alloc_control *ac)
 {
 	return NULL;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 859dba4..60b0983 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1981,31 +1981,12 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 }
 
 /* page migration callback function */
-struct page *alloc_huge_page_node(struct hstate *h,
-				struct alloc_control *ac)
-{
-	struct page *page = NULL;
-
-	ac->gfp_mask |= htlb_alloc_mask(h);
-	if (ac->nid != NUMA_NO_NODE)
-		ac->gfp_mask |= __GFP_THISNODE;
-
-	spin_lock(&hugetlb_lock);
-	if (h->free_huge_pages - h->resv_huge_pages > 0)
-		page = dequeue_huge_page_nodemask(h, ac);
-	spin_unlock(&hugetlb_lock);
-
-	if (!page)
-		page = alloc_migrate_huge_page(h, ac);
-
-	return page;
-}
-
-/* page migration callback function */
 struct page *alloc_huge_page_nodemask(struct hstate *h,
 				struct alloc_control *ac)
 {
 	ac->gfp_mask |= htlb_alloc_mask(h);
+	if (ac->thisnode && ac->nid != NUMA_NO_NODE)
+		ac->gfp_mask |= __GFP_THISNODE;
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
diff --git a/mm/internal.h b/mm/internal.h
index 75b3f8e..574722d0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -618,6 +618,7 @@ struct alloc_control {
 	int nid;
 	nodemask_t *nmask;
 	gfp_t gfp_mask;
+	bool thisnode;
 };
 
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 06f60a5..629feaa 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1073,9 +1073,10 @@ struct page *alloc_new_node_page(struct page *page, unsigned long node)
 		struct alloc_control ac = {
 			.nid = node,
 			.nmask = NULL,
+			.thisnode = true,
 		};
 
-		return alloc_huge_page_node(h, &ac);
+		return alloc_huge_page_nodemask(h, &ac);
 	} else if (PageTransHuge(page)) {
 		struct page *thp;

From patchwork Mon May 18 01:20:51 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554517
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 05/11] mm/hugetlb: make hugetlb migration target allocation
 APIs CMA aware
Date: Mon, 18 May 2020 10:20:51 +0900
Message-Id: <1589764857-6800-6-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is a user who does not want to use CMA memory for migration. Until
now, this was implemented on the caller side, but that is not optimal
since the caller has only limited information. This patch implements it
on the callee side to get a better result.
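A sketch of the resulting behaviour (field name skip_cma as added by
this patch; see the mm/gup.c hunk below for the real caller):

    /* Sketch only: the caller just sets .skip_cma ... */
    struct alloc_control ac = {
    	.nid = page_to_nid(page),
    	.nmask = NULL,
    	.gfp_mask = __GFP_NOWARN,
    	.skip_cma = true,	/* tell the callee to avoid CMA memory */
    };

    /* ... and the callee both skips CMA pages when dequeueing from the
     * free list and clears __GFP_MOVABLE for fresh allocations, so no
     * CMA page can be returned. */
    new_page = alloc_huge_page_nodemask(h, &ac);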
Signed-off-by: Joonsoo Kim
Acked-by: Mike Kravetz
---
 include/linux/hugetlb.h |  2 --
 mm/gup.c                |  9 +++------
 mm/hugetlb.c            | 21 +++++++++++++++++----
 mm/internal.h           |  1 +
 4 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4892ed3..6485e92 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -503,8 +503,6 @@ struct huge_bootmem_page {
 	struct hstate *hstate;
 };
 
-struct page *alloc_migrate_huge_page(struct hstate *h,
-				struct alloc_control *ac);
 struct page *alloc_huge_page_nodemask(struct hstate *h,
 				struct alloc_control *ac);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
diff --git a/mm/gup.c b/mm/gup.c
index 9890fb0..1c86db5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1618,14 +1618,11 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 		struct alloc_control ac = {
 			.nid = nid,
 			.nmask = NULL,
-			.gfp_mask = gfp_mask,
+			.gfp_mask = __GFP_NOWARN,
+			.skip_cma = true,
 		};
 
-		/*
-		 * We don't want to dequeue from the pool because pool pages will
-		 * mostly be from the CMA region.
-		 */
-		return alloc_migrate_huge_page(h, &ac);
+		return alloc_huge_page_nodemask(h, &ac);
 	}
 
 	if (PageTransHuge(page)) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 60b0983..53edd02 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1034,13 +1034,19 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 	h->free_huge_pages_node[nid]++;
 }
 
-static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+static struct page *dequeue_huge_page_node_exact(struct hstate *h,
+						int nid, bool skip_cma)
 {
 	struct page *page;
 
-	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
+	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
+		if (skip_cma && is_migrate_cma_page(page))
+			continue;
+
 		if (!PageHWPoison(page))
 			break;
+	}
+
 	/*
 	 * if 'non-isolated free hugepage' not found on the list,
 	 * the allocation fails.
	 */
@@ -1081,7 +1087,7 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h,
 			continue;
 
 		node = zone_to_nid(zone);
-		page = dequeue_huge_page_node_exact(h, node);
+		page = dequeue_huge_page_node_exact(h, node, ac->skip_cma);
 		if (page)
 			return page;
 	}
@@ -1938,7 +1944,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	return page;
 }
 
-struct page *alloc_migrate_huge_page(struct hstate *h,
+static struct page *alloc_migrate_huge_page(struct hstate *h,
 				struct alloc_control *ac)
 {
 	struct page *page;
@@ -2000,6 +2006,13 @@ struct page *alloc_huge_page_nodemask(struct hstate *h,
 	}
 	spin_unlock(&hugetlb_lock);
 
+	/*
+	 * clearing __GFP_MOVABLE flag ensure that allocated page
+	 * will not come from CMA area
+	 */
+	if (ac->skip_cma)
+		ac->gfp_mask &= ~__GFP_MOVABLE;
+
 	return alloc_migrate_huge_page(h, ac);
 }
diff --git a/mm/internal.h b/mm/internal.h
index 574722d0..6b6507e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -619,6 +619,7 @@ struct alloc_control {
 	nodemask_t *nmask;
 	gfp_t gfp_mask;
 	bool thisnode;
+	bool skip_cma;
 };
 
 #endif	/* __MM_INTERNAL_H */

From patchwork Mon May 18 01:20:52 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554519
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 06/11] mm/hugetlb: do not modify user provided gfp_mask
Date: Mon, 18 May 2020 10:20:52 +0900
Message-Id: <1589764857-6800-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

It is not good practice to modify user-provided input. Instead of using
the caller's gfp_mask to build the correct gfp_mask for these APIs, this
patch introduces another gfp_mask field, __gfp_mask, for internal usage.
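In effect (a sketch of the derivation inside alloc_huge_page_nodemask();
the diff below has the authoritative version):

    /* The caller-provided ac->gfp_mask is left untouched; a private
     * ac->__gfp_mask is derived from it for internal use. */
    ac->__gfp_mask = htlb_alloc_mask(h);
    ac->__gfp_mask |= ac->gfp_mask;
    if (ac->thisnode && ac->nid != NUMA_NO_NODE)
    	ac->__gfp_mask |= __GFP_THISNODE;
    if (ac->skip_cma)
    	ac->__gfp_mask &= ~__GFP_MOVABLE;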
Signed-off-by: Joonsoo Kim
---
 mm/hugetlb.c  | 15 ++++++++-------
 mm/internal.h |  2 ++
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 53edd02..5f43b7e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1069,15 +1069,15 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h,
 	struct zoneref *z;
 	int node = NUMA_NO_NODE;
 
-	zonelist = node_zonelist(ac->nid, ac->gfp_mask);
+	zonelist = node_zonelist(ac->nid, ac->__gfp_mask);
 
retry_cpuset:
 	cpuset_mems_cookie = read_mems_allowed_begin();
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
-					gfp_zone(ac->gfp_mask), ac->nmask) {
+					gfp_zone(ac->__gfp_mask), ac->nmask) {
 		struct page *page;
 
-		if (!cpuset_zone_allowed(zone, ac->gfp_mask))
+		if (!cpuset_zone_allowed(zone, ac->__gfp_mask))
 			continue;
 		/*
 		 * no need to ask again on the same node. Pool is node rather than
@@ -1952,7 +1952,7 @@ static struct page *alloc_migrate_huge_page(struct hstate *h,
 	if (hstate_is_gigantic(h))
 		return NULL;
 
-	page = alloc_fresh_huge_page(h, ac->gfp_mask,
+	page = alloc_fresh_huge_page(h, ac->__gfp_mask,
 				ac->nid, ac->nmask, NULL);
 	if (!page)
 		return NULL;
@@ -1990,9 +1990,10 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 struct page *alloc_huge_page_nodemask(struct hstate *h,
 				struct alloc_control *ac)
 {
-	ac->gfp_mask |= htlb_alloc_mask(h);
+	ac->__gfp_mask = htlb_alloc_mask(h);
+	ac->__gfp_mask |= ac->gfp_mask;
 	if (ac->thisnode && ac->nid != NUMA_NO_NODE)
-		ac->gfp_mask |= __GFP_THISNODE;
+		ac->__gfp_mask |= __GFP_THISNODE;
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
@@ -2011,7 +2012,7 @@ struct page *alloc_huge_page_nodemask(struct hstate *h,
 	 * will not come from CMA area
 	 */
 	if (ac->skip_cma)
-		ac->gfp_mask &= ~__GFP_MOVABLE;
+		ac->__gfp_mask &= ~__GFP_MOVABLE;
 
 	return alloc_migrate_huge_page(h, ac);
 }
diff --git a/mm/internal.h b/mm/internal.h
index 6b6507e..3239d71 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -620,6 +620,8 @@ struct alloc_control {
 	gfp_t gfp_mask;
 	bool thisnode;
 	bool skip_cma;
+
+	gfp_t __gfp_mask;	/* Used internally in API implementation */
 };
 
 #endif	/* __MM_INTERNAL_H */

From patchwork Mon May 18 01:20:53 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11554529
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 07/11] mm/migrate: change the interface of the migration
 target alloc/free functions
Date: Mon, 18 May 2020 10:20:53 +0900
Message-Id: <1589764857-6800-8-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

To prepare for unifying the duplicated functions in the following patches,
this patch changes the interface of the migration target alloc/free
functions: they now take a struct alloc_control as their argument. There
is no functional change.
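The shape of the change can be sketched in a few lines of plain C;
struct alloc_ctl, new_page_fn, free_page_fn, and migrate_one() below
are illustrative stand-ins for the kernel types, not the real API:

#include <stddef.h>

struct page;                      /* opaque for this sketch */

struct alloc_ctl {
	int nid;                  /* preferred node */
	unsigned long private;    /* caller-owned cookie, replacing the old
				     bare 'unsigned long private' argument */
};

typedef struct page *new_page_fn(struct page *page, struct alloc_ctl *ac);
typedef void free_page_fn(struct page *page, struct alloc_ctl *ac);

/* Both callbacks receive the same context object, so adding a new knob
 * later means adding a field, not rewiring every callback signature. */
struct page *migrate_one(struct page *page, new_page_fn *alloc,
			 free_page_fn *release, struct alloc_ctl *ac)
{
	struct page *newpage = alloc(page, ac);

	if (!newpage)
		return NULL;
	/* ... on failure the same context goes back: release(newpage, ac) */
	return newpage;
}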
Signed-off-by: Joonsoo Kim
---
 include/linux/migrate.h        | 15 +++++++------
 include/linux/page-isolation.h |  4 +++-
 mm/compaction.c                | 15 ++++++++-----
 mm/gup.c                       |  5 +++--
 mm/internal.h                  |  5 ++++-
 mm/memory-failure.c            | 13 ++++++-----
 mm/memory_hotplug.c            |  9 +++++---
 mm/mempolicy.c                 | 22 +++++++++++-------
 mm/migrate.c                   | 51 ++++++++++++++++++++++--------------------
 mm/page_alloc.c                |  2 +-
 mm/page_isolation.c            |  9 +++++---
 11 files changed, 89 insertions(+), 61 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 1d70b4a..923c4f3 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -7,8 +7,9 @@
 #include
 #include
 
-typedef struct page *new_page_t(struct page *page, unsigned long private);
-typedef void free_page_t(struct page *page, unsigned long private);
+struct alloc_control;
+typedef struct page *new_page_t(struct page *page, struct alloc_control *ac);
+typedef void free_page_t(struct page *page, struct alloc_control *ac);
 
 /*
  * Return values from addresss_space_operations.migratepage():
@@ -38,9 +39,9 @@ extern int migrate_page(struct address_space *mapping,
 			struct page *newpage, struct page *page,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
-		unsigned long private, enum migrate_mode mode, int reason);
+		struct alloc_control *ac, enum migrate_mode mode, int reason);
 extern struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask);
+		struct alloc_control *ac);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
@@ -56,11 +57,11 @@ extern int migrate_page_move_mapping(struct address_space *mapping,
 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t new,
-		free_page_t free, unsigned long private, enum migrate_mode mode,
-		int reason)
+		free_page_t free, struct alloc_control *ac,
+		enum migrate_mode mode, int reason)
 	{ return -ENOSYS; }
 static inline struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask)
+		struct alloc_control *ac)
 	{ return NULL; }
 static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return -EBUSY; }
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 5724580..35e3bdb 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -2,6 +2,8 @@
 #ifndef __LINUX_PAGEISOLATION_H
 #define __LINUX_PAGEISOLATION_H
 
+struct alloc_control;
+
 #ifdef CONFIG_MEMORY_ISOLATION
 static inline bool has_isolate_pageblock(struct zone *zone)
 {
@@ -60,6 +62,6 @@ undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 			int isol_flags);
 
-struct page *alloc_migrate_target(struct page *page, unsigned long private);
+struct page *alloc_migrate_target(struct page *page, struct alloc_control *ac);
 
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index 67fd317..aec1c1f 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1561,9 +1561,9 @@ static void isolate_freepages(struct compact_control *cc)
  * from the isolated freelists in the block we are migrating to.
  */
 static struct page *compaction_alloc(struct page *migratepage,
-					unsigned long data)
+					struct alloc_control *ac)
 {
-	struct compact_control *cc = (struct compact_control *)data;
+	struct compact_control *cc = (struct compact_control *)ac->private;
 	struct page *freepage;
 
 	if (list_empty(&cc->freepages)) {
@@ -1585,9 +1585,9 @@ static struct page *compaction_alloc(struct page *migratepage,
  * freelist.  All pages on the freelist are from the same zone, so there is no
  * special handling needed for NUMA.
  */
-static void compaction_free(struct page *page, unsigned long data)
+static void compaction_free(struct page *page, struct alloc_control *ac)
 {
-	struct compact_control *cc = (struct compact_control *)data;
+	struct compact_control *cc = (struct compact_control *)ac->private;
 
 	list_add(&page->lru, &cc->freepages);
 	cc->nr_freepages++;
@@ -2095,6 +2095,9 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;
 	bool update_cached;
+	struct alloc_control alloc_control = {
+		.private = (unsigned long)cc,
+	};
 
 	/*
 	 * These counters track activities during zone compaction.  Initialize
@@ -2212,8 +2215,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		}
 
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
-				compaction_free, (unsigned long)cc, cc->mode,
-				MR_COMPACTION);
+				compaction_free, &alloc_control,
+				cc->mode, MR_COMPACTION);
 
 		trace_mm_compaction_migratepages(cc->nr_migratepages, err,
 						&cc->migratepages);
diff --git a/mm/gup.c b/mm/gup.c
index 1c86db5..be9cb79 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1594,7 +1594,8 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }
 
 #ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page, unsigned long private)
+static struct page *new_non_cma_page(struct page *page,
+				struct alloc_control *ac)
 {
 	/*
 	 * We want to make sure we allocate the new page from the same node
@@ -1707,7 +1708,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 			put_page(pages[i]);
 
 		if (migrate_pages(&cma_page_list, new_non_cma_page,
-				NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+				NULL, NULL, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
 			 * without migration.
diff --git a/mm/internal.h b/mm/internal.h
index 3239d71..abe94a7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -612,7 +612,9 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 }
 
 void setup_zone_pageset(struct zone *zone);
-extern struct page *alloc_new_node_page(struct page *page, unsigned long node);
+struct alloc_control;
+extern struct page *alloc_new_node_page(struct page *page,
+				struct alloc_control *ac);
 
 struct alloc_control {
 	int nid;
@@ -620,6 +622,7 @@ struct alloc_control {
 	gfp_t gfp_mask;
 	bool thisnode;
 	bool skip_cma;
+	unsigned long private;
 
 	gfp_t __gfp_mask; /* Used internally in API implementation */
 };
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a96364b..3f92e70 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1621,11 +1621,14 @@ int unpoison_memory(unsigned long pfn)
 }
 EXPORT_SYMBOL(unpoison_memory);
 
-static struct page *new_page(struct page *p, unsigned long private)
+static struct page *new_page(struct page *p, struct alloc_control *__ac)
 {
-	int nid = page_to_nid(p);
+	struct alloc_control ac = {
+		.nid = page_to_nid(p),
+		.nmask = &node_states[N_MEMORY],
+	};
 
-	return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
+	return new_page_nodemask(p, &ac);
 }
 
 /*
@@ -1722,7 +1725,7 @@ static int soft_offline_huge_page(struct page *page, int flags)
 		return -EBUSY;
 	}
 
-	ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
+	ret = migrate_pages(&pagelist, new_page, NULL, NULL,
 				MIGRATE_SYNC, MR_MEMORY_FAILURE);
 	if (ret) {
 		pr_info("soft offline: %#lx: hugepage migration failed %d, type %lx (%pGp)\n",
@@ -1812,7 +1815,7 @@ static int __soft_offline_page(struct page *page, int flags)
 		inc_node_page_state(page, NR_ISOLATED_ANON +
 					page_is_file_lru(page));
 		list_add(&page->lru, &pagelist);
-		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
+		ret = migrate_pages(&pagelist, new_page, NULL, NULL,
 					MIGRATE_SYNC, MR_MEMORY_FAILURE);
 		if (ret) {
 			if (!list_empty(&pagelist))
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c4d5c45..89642f9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1232,10 +1232,11 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 	return 0;
 }
 
-static struct page *new_node_page(struct page *page, unsigned long private)
+static struct page *new_node_page(struct page *page, struct alloc_control *__ac)
 {
 	int nid = page_to_nid(page);
 	nodemask_t nmask = node_states[N_MEMORY];
+	struct alloc_control ac = {0};
 
 	/*
 	 * try to allocate from a different node but reuse this node if there
@@ -1246,7 +1247,9 @@ static struct page *new_node_page(struct page *page, unsigned long private)
 	if (nodes_empty(nmask))
 		node_set(nid, nmask);
 
-	return new_page_nodemask(page, nid, &nmask);
+	ac.nid = nid;
+	ac.nmask = &nmask;
+	return new_page_nodemask(page, &ac);
 }
 
 static int
@@ -1310,7 +1313,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	}
 	if (!list_empty(&source)) {
 		/* Allocate a new page from the nearest neighbor node */
-		ret = migrate_pages(&source, new_node_page, NULL, 0,
+		ret = migrate_pages(&source, new_node_page, NULL, NULL,
 					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 629feaa..7241621 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1066,12 +1066,12 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 }
 
 /* page allocation callback for NUMA node migration */
-struct page *alloc_new_node_page(struct page *page, unsigned long node)
+struct page *alloc_new_node_page(struct page *page, struct alloc_control *__ac)
 {
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
 		struct alloc_control ac = {
-			.nid = node,
+			.nid = __ac->nid,
 			.nmask = NULL,
 			.thisnode = true,
 		};
@@ -1080,7 +1080,7 @@ struct page *alloc_new_node_page(struct page *page, unsigned long node)
 	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
-		thp = alloc_pages_node(node,
+		thp = alloc_pages_node(__ac->nid,
 			(GFP_TRANSHUGE | __GFP_THISNODE),
 			HPAGE_PMD_ORDER);
 		if (!thp)
@@ -1088,7 +1088,7 @@ struct page *alloc_new_node_page(struct page *page, unsigned long node)
 		prep_transhuge_page(thp);
 		return thp;
 	} else
-		return __alloc_pages_node(node, GFP_HIGHUSER_MOVABLE |
+		return __alloc_pages_node(__ac->nid, GFP_HIGHUSER_MOVABLE |
 						__GFP_THISNODE, 0);
 }
 
@@ -1102,6 +1102,9 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 	nodemask_t nmask;
 	LIST_HEAD(pagelist);
 	int err = 0;
+	struct alloc_control ac = {
+		.nid = dest,
+	};
 
 	nodes_clear(nmask);
 	node_set(source, nmask);
@@ -1116,7 +1119,7 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 			flags | MPOL_MF_DISCONTIG_OK, &pagelist);
 
 	if (!list_empty(&pagelist)) {
-		err = migrate_pages(&pagelist, alloc_new_node_page, NULL, dest,
+		err = migrate_pages(&pagelist, alloc_new_node_page, NULL, &ac,
 					MIGRATE_SYNC, MR_SYSCALL);
 		if (err)
 			putback_movable_pages(&pagelist);
@@ -1237,10 +1240,11 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
  * list of pages handed to migrate_pages()--which is how we get here--
  * is in virtual address order.
  */
-static struct page *new_page(struct page *page, unsigned long start)
+static struct page *new_page(struct page *page, struct alloc_control *ac)
 {
 	struct vm_area_struct *vma;
 	unsigned long uninitialized_var(address);
+	unsigned long start = ac->private;
 
 	vma = find_vma(current->mm, start);
 	while (vma) {
@@ -1283,7 +1287,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 	return -ENOSYS;
 }
 
-static struct page *new_page(struct page *page, unsigned long start)
+static struct page *new_page(struct page *page, struct alloc_control *ac)
 {
 	return NULL;
 }
@@ -1299,6 +1303,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 	int err;
 	int ret;
 	LIST_HEAD(pagelist);
+	struct alloc_control ac = {0};
 
 	if (flags & ~(unsigned long)MPOL_MF_VALID)
 		return -EINVAL;
@@ -1374,8 +1379,9 @@ static long do_mbind(unsigned long start, unsigned long len,
 		if (!list_empty(&pagelist)) {
 			WARN_ON_ONCE(flags & MPOL_MF_LAZY);
+			ac.private = start;
 			nr_failed = migrate_pages(&pagelist, new_page, NULL,
-				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
+				&ac, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
 			if (nr_failed)
 				putback_movable_pages(&pagelist);
 		}
diff --git a/mm/migrate.c b/mm/migrate.c
index 94d2386..ba31153 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1183,7 +1183,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
  */
 static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 				free_page_t put_new_page,
-				unsigned long private, struct page *page,
+				struct alloc_control *ac, struct page *page,
 				int force, enum migrate_mode mode,
 				enum migrate_reason reason)
 {
@@ -1206,7 +1206,7 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		goto out;
 	}
 
-	newpage = get_new_page(page, private);
+	newpage = get_new_page(page, ac);
 	if (!newpage)
 		return -ENOMEM;
 
@@ -1266,7 +1266,7 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 	}
 put_new:
 		if (put_new_page)
-			put_new_page(newpage, private);
+			put_new_page(newpage, ac);
 		else
 			put_page(newpage);
 	}
@@ -1293,9 +1293,9 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
  * will wait in the page fault for migration to complete.
  */
 static int unmap_and_move_huge_page(new_page_t get_new_page,
-				free_page_t put_new_page, unsigned long private,
-				struct page *hpage, int force,
-				enum migrate_mode mode, int reason)
+				free_page_t put_new_page, struct alloc_control *ac,
+				struct page *hpage, int force,
+				enum migrate_mode mode, int reason)
 {
 	int rc = -EAGAIN;
 	int page_was_mapped = 0;
@@ -1315,7 +1315,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		return -ENOSYS;
 	}
 
-	new_hpage = get_new_page(hpage, private);
+	new_hpage = get_new_page(hpage, ac);
 	if (!new_hpage)
 		return -ENOMEM;
 
@@ -1402,7 +1402,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	 * isolation.
 	 */
 	if (put_new_page)
-		put_new_page(new_hpage, private);
+		put_new_page(new_hpage, ac);
 	else
 		putback_active_hugepage(new_hpage);
 
@@ -1431,7 +1431,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 * Returns the number of pages that were not migrated, or an error code.
 */
 int migrate_pages(struct list_head *from, new_page_t get_new_page,
-		free_page_t put_new_page, unsigned long private,
+		free_page_t put_new_page, struct alloc_control *ac,
 		enum migrate_mode mode, int reason)
 {
 	int retry = 1;
@@ -1455,11 +1455,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 
 		if (PageHuge(page))
 			rc = unmap_and_move_huge_page(get_new_page,
-					put_new_page, private, page,
+					put_new_page, ac, page,
 					pass > 2, mode, reason);
 		else
 			rc = unmap_and_move(get_new_page, put_new_page,
-					private, page, pass > 2, mode,
+					ac, page, pass > 2, mode,
 					reason);
 
 		switch(rc) {
@@ -1519,8 +1519,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
-struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask)
+struct page *new_page_nodemask(struct page *page, struct alloc_control *ac)
 {
 	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
 	unsigned int order = 0;
@@ -1528,12 +1527,12 @@ struct page *new_page_nodemask(struct page *page,
 
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
-		struct alloc_control ac = {
-			.nid = preferred_nid,
-			.nmask = nodemask,
+		struct alloc_control __ac = {
+			.nid = ac->nid,
+			.nmask = ac->nmask,
 		};
 
-		return alloc_huge_page_nodemask(h, &ac);
+		return alloc_huge_page_nodemask(h, &__ac);
 	}
 
 	if (PageTransHuge(page)) {
@@ -1544,8 +1543,7 @@ struct page *new_page_nodemask(struct page *page,
 	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
 		gfp_mask |= __GFP_HIGHMEM;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
+	new_page = __alloc_pages_nodemask(gfp_mask, order, ac->nid, ac->nmask);
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
@@ -1570,8 +1568,11 @@ static int do_move_pages_to_node(struct mm_struct *mm,
 		struct list_head *pagelist, int node)
 {
 	int err;
+	struct alloc_control ac = {
+		.nid = node,
+	};
 
-	err = migrate_pages(pagelist, alloc_new_node_page, NULL, node,
+	err = migrate_pages(pagelist, alloc_new_node_page, NULL, &ac,
 			MIGRATE_SYNC, MR_SYSCALL);
 	if (err)
 		putback_movable_pages(pagelist);
@@ -1961,12 +1962,11 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 }
 
 static struct page *alloc_misplaced_dst_page(struct page *page,
-					unsigned long data)
+					struct alloc_control *ac)
 {
-	int nid = (int) data;
 	struct page *newpage;
 
-	newpage = __alloc_pages_node(nid,
+	newpage = __alloc_pages_node(ac->nid,
 					 (GFP_HIGHUSER_MOVABLE |
 					  __GFP_THISNODE | __GFP_NOMEMALLOC |
 					  __GFP_NORETRY | __GFP_NOWARN) &
@@ -2031,6 +2031,9 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	int isolated;
 	int nr_remaining;
 	LIST_HEAD(migratepages);
+	struct alloc_control ac = {
+		.nid = node,
+	};
 
 	/*
 	 * Don't migrate file pages that are mapped in multiple processes
@@ -2053,7 +2056,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
-				     NULL, node, MIGRATE_ASYNC,
+				     NULL, &ac, MIGRATE_ASYNC,
 				     MR_NUMA_MISPLACED);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cef05d3..afdd0fb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8315,7 +8315,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 		cc->nr_migratepages -= nr_reclaimed;
 
 		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
-				    NULL, 0, cc->mode, MR_CONTIG_RANGE);
+				    NULL, NULL, cc->mode, MR_CONTIG_RANGE);
 	}
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 7df89bd..1e1828b 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -298,9 +298,12 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	return pfn < end_pfn ? -EBUSY : 0;
 }
 
-struct page *alloc_migrate_target(struct page *page, unsigned long private)
+struct page *alloc_migrate_target(struct page *page, struct alloc_control *__ac)
 {
-	int nid = page_to_nid(page);
+	struct alloc_control ac = {
+		.nid = page_to_nid(page),
+		.nmask = &node_states[N_MEMORY],
+	};
 
-	return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
+	return new_page_nodemask(page, &ac);
 }

From patchwork Mon May 18 01:20:54 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 08/11] mm/migrate: make standard migration target allocation
 functions
Date: Mon, 18 May 2020 10:20:54 +0900
Message-Id: <1589764857-6800-9-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There are several similar functions for migration target allocation.
Since there is no fundamental difference between them, it's better to
keep just one rather than maintaining all the variants. This patch
implements the base migration target allocation function; the following
patches convert the variants to use it.

Note that the PageHighMem() call in the previous function is changed to
an open-coded is_highmem_idx() check, which reads more clearly.
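The unified policy is essentially a small decision table over the
page's type and the caller's knobs. A self-contained sketch, with
made-up flag values and names (SK_*, sk_control, compose_mask) standing
in for the kernel's gfp bits:

#include <stdbool.h>

typedef unsigned int gfp_t;

#define SK_GFP_TRANSHUGE  0x1u     /* illustrative values, not the kernel's */
#define SK_GFP_HIGHMEM    0x2u
#define SK_GFP_THISNODE   0x4u
#define SK_GFP_MOVABLE    0x8u

struct sk_control {
	gfp_t gfp_mask;            /* caller's base mask */
	gfp_t work_mask;           /* derived, internal */
	bool thisnode;
	bool skip_cma;
};

gfp_t compose_mask(struct sk_control *ac, bool huge_order, bool movable_zone)
{
	ac->work_mask = ac->gfp_mask;
	if (huge_order)
		ac->work_mask |= SK_GFP_TRANSHUGE;
	if (movable_zone)
		ac->work_mask |= SK_GFP_HIGHMEM;
	if (ac->thisnode)
		ac->work_mask |= SK_GFP_THISNODE;   /* pin to the target node */
	if (ac->skip_cma)
		ac->work_mask &= ~SK_GFP_MOVABLE;   /* stay out of CMA */
	return ac->work_mask;
}

Centralizing this table is what lets each caller shrink to "fill in the
control struct, call the allocator" in the rest of the series.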
Signed-off-by: Joonsoo Kim
---
 include/linux/migrate.h |  6 +++---
 mm/memory-failure.c     |  3 ++-
 mm/memory_hotplug.c     |  3 ++-
 mm/migrate.c            | 26 +++++++++++++++-----------
 mm/page_isolation.c     |  3 ++-
 5 files changed, 24 insertions(+), 17 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 923c4f3..abf09b3 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -40,8 +40,8 @@ extern int migrate_page(struct address_space *mapping,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		struct alloc_control *ac, enum migrate_mode mode, int reason);
-extern struct page *new_page_nodemask(struct page *page,
-		struct alloc_control *ac);
+extern struct page *alloc_migration_target(struct page *page,
+		struct alloc_control *ac);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
@@ -60,7 +60,7 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, struct alloc_control *ac,
 		enum migrate_mode mode, int reason)
 	{ return -ENOSYS; }
-static inline struct page *new_page_nodemask(struct page *page,
+static inline struct page *alloc_migration_target(struct page *page,
 		struct alloc_control *ac)
 	{ return NULL; }
 static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3f92e70..b400161 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1626,9 +1626,10 @@ static struct page *new_page(struct page *p, struct alloc_control *__ac)
 	struct alloc_control ac = {
 		.nid = page_to_nid(p),
 		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
-	return new_page_nodemask(p, &ac);
+	return alloc_migration_target(p, &ac);
 }
 
 /*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 89642f9..185f4c9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1249,7 +1249,8 @@ static struct page *new_node_page(struct page *page, struct alloc_control *__ac)
 
 	ac.nid = nid;
 	ac.nmask = &nmask;
-	return new_page_nodemask(page, &ac);
+	ac.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	return alloc_migration_target(page, &ac);
 }
 
 static int
diff --git a/mm/migrate.c b/mm/migrate.c
index ba31153..029af0b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1519,31 +1519,35 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
-struct page *new_page_nodemask(struct page *page, struct alloc_control *ac)
+struct page *alloc_migration_target(struct page *page, struct alloc_control *ac)
 {
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
 	unsigned int order = 0;
 	struct page *new_page = NULL;
+	int zidx;
 
+	/* hugetlb has it's own gfp handling logic */
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
-		struct alloc_control __ac = {
-			.nid = ac->nid,
-			.nmask = ac->nmask,
-		};
 
-		return alloc_huge_page_nodemask(h, &__ac);
+		return alloc_huge_page_nodemask(h, ac);
 	}
 
+	ac->__gfp_mask = ac->gfp_mask;
 	if (PageTransHuge(page)) {
-		gfp_mask |= GFP_TRANSHUGE;
+		ac->__gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
 	}
+	zidx = zone_idx(page_zone(page));
+	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
+		ac->__gfp_mask |= __GFP_HIGHMEM;
 
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
-		gfp_mask |= __GFP_HIGHMEM;
+	if (ac->thisnode)
+		ac->__gfp_mask |= __GFP_THISNODE;
+	if (ac->skip_cma)
+		ac->__gfp_mask &= ~__GFP_MOVABLE;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order, ac->nid, ac->nmask);
+	new_page = __alloc_pages_nodemask(ac->__gfp_mask, order,
+					ac->nid, ac->nmask);
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 1e1828b..aba799d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -303,7 +303,8 @@ struct page *alloc_migrate_target(struct page *page, struct alloc_control *__ac)
 	struct alloc_control ac = {
 		.nid = page_to_nid(page),
 		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
-	return new_page_nodemask(page, &ac);
+	return alloc_migration_target(page, &ac);
 }

From patchwork Mon May 18 01:20:55 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 09/11] mm/gup: use standard migration target allocation
 function
Date: Mon, 18 May 2020 10:20:55 +0900
Message-Id: <1589764857-6800-10-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is no reason for gup to implement its own function for migration
target allocation. Use the standard one.
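After the conversion, the gup callback reduces to a thin shim. Sketched
below with stand-in types: alloc_target() plays the role of the
standard allocator, page_to_node() is assumed, and the gfp value is a
placeholder, not the kernel constant:

#include <stdbool.h>

typedef unsigned int gfp_t;
struct page;

struct ctl {
	int nid;
	gfp_t gfp_mask;
	bool skip_cma;
};

struct page *alloc_target(struct page *page, struct ctl *ac); /* assumed */
int page_to_node(struct page *page);                          /* assumed */

struct page *alloc_target_non_cma(struct page *page)
{
	struct ctl ac = {
		.nid = page_to_node(page), /* same node as the source page */
		.gfp_mask = 0,             /* placeholder for GFP_USER | __GFP_NOWARN */
		.skip_cma = true,          /* never hand back a CMA page */
	};

	return alloc_target(page, &ac);
}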
Signed-off-by: Joonsoo Kim
---
 mm/gup.c | 61 ++++++++++---------------------------------------------------
 1 file changed, 10 insertions(+), 51 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index be9cb79..d88a965 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1594,58 +1594,16 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }
 
 #ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page,
+static struct page *alloc_migration_target_non_cma(struct page *page,
 				struct alloc_control *ac)
 {
-	/*
-	 * We want to make sure we allocate the new page from the same node
-	 * as the source page.
-	 */
-	int nid = page_to_nid(page);
-	/*
-	 * Trying to allocate a page for migration. Ignore allocation
-	 * failure warnings. We don't force __GFP_THISNODE here because
-	 * this node here is the node where we have CMA reservation and
-	 * in some case these nodes will have really less non movable
-	 * allocation memory.
-	 */
-	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
-
-	if (PageHighMem(page))
-		gfp_mask |= __GFP_HIGHMEM;
-
-	if (PageHuge(page)) {
-		struct hstate *h = page_hstate(page);
-		struct alloc_control ac = {
-			.nid = nid,
-			.nmask = NULL,
-			.gfp_mask = __GFP_NOWARN,
-			.skip_cma = true,
-		};
-
-		return alloc_huge_page_nodemask(h, &ac);
-	}
-
-	if (PageTransHuge(page)) {
-		struct page *thp;
-		/*
-		 * ignore allocation failure warnings
-		 */
-		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
-
-		/*
-		 * Remove the movable mask so that we don't allocate from
-		 * CMA area again.
-		 */
-		thp_gfpmask &= ~__GFP_MOVABLE;
-		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
-		if (!thp)
-			return NULL;
-		prep_transhuge_page(thp);
-		return thp;
-	}
+	struct alloc_control __ac = {
+		.nid = page_to_nid(page),
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
+		.skip_cma = true,
+	};
 
-	return __alloc_pages_node(nid, gfp_mask, 0);
+	return alloc_migration_target(page, &__ac);
 }
 
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
@@ -1707,8 +1665,9 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, new_non_cma_page,
-				NULL, NULL, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+		if (migrate_pages(&cma_page_list,
+				alloc_migration_target_non_cma, NULL, NULL,
+				MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
 			 * without migration.
From patchwork Mon May 18 01:20:56 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 10/11] mm/mempolicy: use standard migration target allocation
 function
Date: Mon, 18 May 2020 10:20:56 +0900
Message-Id: <1589764857-6800-11-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is no reason for mempolicy to implement its own function for
migration target allocation. Use the standard one.
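The mempolicy callers now differ only in the control struct they build.
A sketch of the node-pinned setup, with the types and the
migrate_list()/alloc_target() entry points as stand-ins rather than the
kernel API:

#include <stdbool.h>

typedef unsigned int gfp_t;
struct page;
struct list_head;

struct ctl {
	int nid;
	gfp_t gfp_mask;
	bool thisnode;
};

typedef struct page *get_page_fn(struct page *page, struct ctl *ac);

int migrate_list(struct list_head *pagelist, get_page_fn *get,
		 struct ctl *ac);                        /* assumed */
struct page *alloc_target(struct page *page, struct ctl *ac); /* assumed */

int move_to_node(struct list_head *pagelist, int dest)
{
	struct ctl ac = {
		.nid = dest,
		.gfp_mask = 0,     /* placeholder for GFP_HIGHUSER_MOVABLE */
		.thisnode = true,  /* pin to 'dest', like __GFP_THISNODE */
	};

	return migrate_list(pagelist, alloc_target, &ac);
}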
Signed-off-by: Joonsoo Kim
---
 mm/internal.h  |  3 ---
 mm/mempolicy.c | 33 ++++-----------------------------
 mm/migrate.c   |  4 +++-
 3 files changed, 7 insertions(+), 33 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index abe94a7..5ade079 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -612,9 +612,6 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 }
 
 void setup_zone_pageset(struct zone *zone);
-struct alloc_control;
-extern struct page *alloc_new_node_page(struct page *page,
-				struct alloc_control *ac);
 
 struct alloc_control {
 	int nid;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7241621..8d3ccab 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1065,33 +1065,6 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 	return 0;
 }
 
-/* page allocation callback for NUMA node migration */
-struct page *alloc_new_node_page(struct page *page, struct alloc_control *__ac)
-{
-	if (PageHuge(page)) {
-		struct hstate *h = page_hstate(page);
-		struct alloc_control ac = {
-			.nid = __ac->nid,
-			.nmask = NULL,
-			.thisnode = true,
-		};
-
-		return alloc_huge_page_nodemask(h, &ac);
-	} else if (PageTransHuge(page)) {
-		struct page *thp;
-
-		thp = alloc_pages_node(__ac->nid,
-			(GFP_TRANSHUGE | __GFP_THISNODE),
-			HPAGE_PMD_ORDER);
-		if (!thp)
-			return NULL;
-		prep_transhuge_page(thp);
-		return thp;
-	} else
-		return __alloc_pages_node(__ac->nid, GFP_HIGHUSER_MOVABLE |
-						__GFP_THISNODE, 0);
-}
-
 /*
  * Migrate pages from one node to a target node.
  * Returns error or the number of pages not migrated.
@@ -1104,6 +1077,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 	int err = 0;
 	struct alloc_control ac = {
 		.nid = dest,
+		.gfp_mask = GFP_HIGHUSER_MOVABLE,
+		.thisnode = true,
 	};
 
 	nodes_clear(nmask);
 	node_set(source, nmask);
@@ -1119,8 +1094,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 			flags | MPOL_MF_DISCONTIG_OK, &pagelist);
 
 	if (!list_empty(&pagelist)) {
-		err = migrate_pages(&pagelist, alloc_new_node_page, NULL, &ac,
-					MIGRATE_SYNC, MR_SYSCALL);
+		err = migrate_pages(&pagelist, alloc_migration_target, NULL,
+					&ac, MIGRATE_SYNC, MR_SYSCALL);
 		if (err)
 			putback_movable_pages(&pagelist);
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index 029af0b..3dfb108 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1574,9 +1574,11 @@ static int do_move_pages_to_node(struct mm_struct *mm,
 	int err;
 	struct alloc_control ac = {
 		.nid = node,
+		.gfp_mask = GFP_HIGHUSER_MOVABLE,
+		.thisnode = true,
 	};
 
-	err = migrate_pages(pagelist, alloc_new_node_page, NULL, &ac,
+	err = migrate_pages(pagelist, alloc_migration_target, NULL, &ac,
 			MIGRATE_SYNC, MR_SYSCALL);
 	if (err)
 		putback_movable_pages(pagelist);

From patchwork Mon May 18 01:20:57 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH 11/11] mm/page_alloc: use standard migration target
 allocation function directly
Date: Mon, 18 May 2020 10:20:57 +0900
Message-Id: <1589764857-6800-12-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1589764857-6800-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is no need for a wrapper function just to call the standard
migration target allocation function. Use it directly.
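Dropping the wrapper just hoists the control struct into the caller. A
sketch under the same stand-in types as before (migrate_list(),
alloc_target(), and the gfp placeholder are assumptions, not the kernel
API):

typedef unsigned int gfp_t;
struct page;
struct list_head;

struct ctl {
	int nid;
	gfp_t gfp_mask;
};

typedef struct page *get_page_fn(struct page *page, struct ctl *ac);

int migrate_list(struct list_head *list, get_page_fn *get,
		 struct ctl *ac);                        /* assumed */
struct page *alloc_target(struct page *page, struct ctl *ac); /* assumed */

int contig_migrate(struct list_head *list, int zone_nid)
{
	/* built once by the caller; no per-subsystem shim needed */
	struct ctl ac = {
		.nid = zone_nid,
		.gfp_mask = 0,  /* placeholder for the GFP_USER-based mask */
	};

	return migrate_list(list, alloc_target, &ac);
}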
Signed-off-by: Joonsoo Kim
---
 include/linux/page-isolation.h |  2 --
 mm/page_alloc.c                |  9 +++++++--
 mm/page_isolation.c            | 11 -----------
 3 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 35e3bdb..20a4b63 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -62,6 +62,4 @@ undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 			int isol_flags);
 
-struct page *alloc_migrate_target(struct page *page, struct alloc_control *ac);
-
 #endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index afdd0fb..2a7ab2b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8288,6 +8288,11 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned long pfn = start;
 	unsigned int tries = 0;
 	int ret = 0;
+	struct alloc_control ac = {
+		.nid = zone_to_nid(cc->zone),
+		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
 	migrate_prep();
 
@@ -8314,8 +8319,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 							&cc->migratepages);
 		cc->nr_migratepages -= nr_reclaimed;
 
-		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
-				    NULL, NULL, cc->mode, MR_CONTIG_RANGE);
+		ret = migrate_pages(&cc->migratepages, alloc_migration_target,
+				    NULL, &ac, cc->mode, MR_CONTIG_RANGE);
 	}
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index aba799d..03d6cad 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -297,14 +297,3 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 	return pfn < end_pfn ? -EBUSY : 0;
 }
-
-struct page *alloc_migrate_target(struct page *page, struct alloc_control *__ac)
-{
-	struct alloc_control ac = {
-		.nid = page_to_nid(page),
-		.nmask = &node_states[N_MEMORY],
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
-	};
-
-	return alloc_migration_target(page, &ac);
-}