From patchwork Thu Jan 16 04:11:25 2020
X-Patchwork-Submitter: Li Xinhai
X-Patchwork-Id: 11335995
From: Li Xinhai <lixinhai.lxh@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, Michal Hocko <mhocko@suse.com>, Mike Kravetz <mike.kravetz@oracle.com>
Subject: [PATCH v4] mm/mempolicy,hugetlb: Checking hstate for hugetlbfs page in vma_migratable
Date: Thu, 16 Jan 2020 04:11:25 +0000
Message-Id: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com>
X-Mailer: git-send-email 1.8.3.1

Check hstate for migration support at the early phase, when isolating the
page, instead of during the unmap and move phase, so that pages which cannot
be migrated are not isolated uselessly.

Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h   | 10 ++++++++++
 include/linux/mempolicy.h | 29 +----------------------------
 mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+), 28 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 31d4920..c9d871d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -598,6 +598,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
 	return arch_hugetlb_migration_supported(h);
 }
 
+static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
+{
+	return hugepage_migration_supported(hstate_vma(vma));
+}
+
 /*
  * Movability check is different as compared to migration check.
  * It determines whether or not a huge page should be placed on
@@ -809,6 +814,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
 	return false;
 }
 
+static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
+{
+	return false;
+}
+
 static inline bool hugepage_movable_supported(struct hstate *h)
 {
 	return false;
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5228c62..8165278 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 
 /* Check if a vma is migratable */
-static inline bool vma_migratable(struct vm_area_struct *vma)
-{
-	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
-		return false;
-
-	/*
-	 * DAX device mappings require predictable access latency, so avoid
-	 * incurring periodic faults.
-	 */
-	if (vma_is_dax(vma))
-		return false;
-
-#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
-	if (vma->vm_flags & VM_HUGETLB)
-		return false;
-#endif
-
-	/*
-	 * Migration allocates pages in the highest zone. If we cannot
-	 * do so then migration (at least from node to node) is not
-	 * possible.
-	 */
-	if (vma->vm_file &&
-		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
-			< policy_zone)
-		return false;
-	return true;
-}
+extern bool vma_migratable(struct vm_area_struct *vma);
 
 extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
 extern void mpol_put_task_policy(struct task_struct *);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 067cf7d..8a01fb1 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
 
 #endif /* CONFIG_COMPAT */
 
+bool vma_migratable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		return false;
+
+	/*
+	 * DAX device mappings require predictable access latency, so avoid
+	 * incurring periodic faults.
+	 */
+	if (vma_is_dax(vma))
+		return false;
+
+	if (is_vm_hugetlb_page(vma) &&
+		!vm_hugepage_migration_supported(vma))
+		return false;
+
+	/*
+	 * Migration allocates pages in the highest zone. If we cannot
+	 * do so then migration (at least from node to node) is not
+	 * possible.
+	 */
+	if (vma->vm_file &&
+		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
+			< policy_zone)
+		return false;
+	return true;
+}
+
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 						unsigned long addr)
 {
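
Below is a minimal, illustrative sketch (not part of the patch) of how the
early check is meant to be used: a page-walk callback can reject a vma via
vma_migratable() before isolating a hugetlb page, so pages whose hstate does
not support migration are never isolated only to fail later during unmap and
move. The callback name example_queue_hugetlb and the mm_walk-style signature
are assumptions for illustration, not code from this series.

static int example_queue_hugetlb(pte_t *pte, unsigned long hmask,
				 unsigned long addr, unsigned long end,
				 struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;

	/*
	 * Early bail-out: with the hstate check folded into
	 * vma_migratable(), a hugetlb vma whose pages cannot be migrated
	 * is skipped here, before any isolation work is done.
	 */
	if (!vma_migratable(vma))
		return 1;	/* skip the rest of this vma */

	/* ... isolate the huge page and add it to the migration list ... */
	return 0;
}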