From patchwork Fri Jan 10 03:26:10 2020
From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kirill.shutemov@linux.intel.com, Wei Yang <richardw.yang@linux.intel.com>
Subject: [PATCH 2/2] mm/huge_memory.c: use head to emphasize the purpose of page
Date: Fri, 10 Jan 2020 11:26:10 +0800
Message-Id: <20200110032610.26499-2-richardw.yang@linux.intel.com>
In-Reply-To: <20200110032610.26499-1-richardw.yang@linux.intel.com>
References: <20200110032610.26499-1-richardw.yang@linux.intel.com>

During the split of a huge page, we check several
properties of the page. Currently these checks are done partly on page and
partly on head, without making it obvious that they apply to the compound
page. When the page passed to split_huge_page_to_list() is a tail page, a
reader has to stop and work out whether a given check applies to the
compound page or to the tail page itself.

To make this explicit, use head for those checks. After the change it is
clear that the checks are on the compound page, while page is only used to
perform the split and to dump an error message if the split fails.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/huge_memory.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 90165f68cf13..69e623216211 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2688,7 +2688,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
 	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
+	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
@@ -2697,10 +2697,10 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	VM_BUG_ON_PAGE(!PageLocked(head), head);
+	VM_BUG_ON_PAGE(!PageCompound(head), head);
 
-	if (PageWriteback(page))
+	if (PageWriteback(head))
 		return -EBUSY;
 
 	if (PageAnon(head)) {
@@ -2751,7 +2751,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	mlocked = PageMlocked(page);
+	mlocked = PageMlocked(head);
 	unmap_page(head);
 
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
@@ -2785,10 +2785,10 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 	spin_unlock(&ds_queue->split_queue_lock);
 	if (mapping) {
-		if (PageSwapBacked(page))
-			__dec_node_page_state(page, NR_SHMEM_THPS);
+		if (PageSwapBacked(head))
+			__dec_node_page_state(head, NR_SHMEM_THPS);
 		else
-			__dec_node_page_state(page, NR_FILE_THPS);
+			__dec_node_page_state(head, NR_FILE_THPS);
 	}
 
 	__split_huge_page(page, list, end, flags);
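
For readers who are not following along in the kernel tree, here is a minimal,
self-contained userspace sketch of the head/tail idea behind the patch. The
struct page layout and the helpers below are simplified stand-ins, not the
real mm definitions; the point is only that a check spelled against head reads
unambiguously even when the caller holds a tail page.

/*
 * Minimal userspace sketch of the head/tail idea, NOT kernel code.
 * "struct page", compound_head() and the flag helpers below are
 * simplified stand-ins for the real mm definitions.
 */
#include <stdbool.h>
#include <stdio.h>

struct page {
	bool locked;		/* meaningful only on the head page   */
	bool compound;		/* set on the head of a compound page */
	struct page *head;	/* tail pages point back at the head  */
};

/* Resolve any page (head or tail) to the head of its compound page. */
static struct page *compound_head(struct page *page)
{
	return page->head ? page->head : page;
}

static bool page_locked(struct page *page)
{
	return compound_head(page)->locked;
}

/*
 * A split helper in the spirit of the patch: the checks are spelled
 * against "head", while "page" is only kept for error reporting.
 */
static int split_huge_page_to_list(struct page *page)
{
	struct page *head = compound_head(page);

	if (!page_locked(head) || !head->compound) {
		fprintf(stderr, "bad page state: %p\n", (void *)page);
		return -1;
	}
	/* ... the actual split would operate on head here ... */
	return 0;
}

int main(void)
{
	struct page head = { .locked = true, .compound = true, .head = NULL };
	struct page tail = { .head = &head };

	/* Passing a tail page still checks the compound (head) page. */
	return split_huge_page_to_list(&tail);
}

Built with any C compiler, passing the tail page still exercises the checks on
the head page, which is exactly the behaviour the renamed checks in the diff
above are meant to make obvious.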