From patchwork Tue Mar 29 13:09:27 2022
X-Patchwork-Submitter: Chen Wandun
X-Patchwork-Id: 12794829
From: Chen Wandun <chenwandun@huawei.com>
Subject: [PATCH 1/2] mm: fix contiguous memmap assumptions about split page
Date: Tue, 29 Mar 2022 21:09:27 +0800
Message-ID: <20220329130928.266323-2-chenwandun@huawei.com>
In-Reply-To: <20220329130928.266323-1-chenwandun@huawei.com>
References: <20220329130928.266323-1-chenwandun@huawei.com>

It is not safe to assume that a compound page has virtually contiguous
page structs when CONFIG_SPARSEMEM is enabled without
CONFIG_SPARSEMEM_VMEMMAP, so use nth_page() to iterate over each page.
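As background, a purely illustrative sketch (not part of the patch;
do_something() is a made-up placeholder for any per-struct-page
operation): with SPARSEMEM and without SPARSEMEM_VMEMMAP the memmap is
allocated per memory section, so the page structs of a compound page
that crosses a section boundary need not be virtually contiguous, and
nth_page() therefore goes through the pfn instead of plain pointer
arithmetic.

/*
 * nth_page() as defined in include/linux/mm.h: plain pointer
 * arithmetic is only valid when the memmap is virtually contiguous.
 */
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))	/* go via the pfn */
#else
#define nth_page(page,n) ((page) + (n))					/* memmap is contiguous */
#endif

/* May walk off the section's memmap under SPARSEMEM && !SPARSEMEM_VMEMMAP: */
for (i = 0; i < nr_pages; i++)
	do_something(page + i);

/* Safe in both configurations: */
for (i = 0; i < nr_pages; i++)
	do_something(nth_page(page, i));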
Inspired by: https://lore.kernel.org/linux-mm/20220204195852.1751729-8-willy@infradead.org/

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 mm/compaction.c  | 6 +++---
 mm/huge_memory.c | 2 +-
 mm/page_alloc.c  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index c3e37aa9ff9e..ddff13b968a2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -87,7 +87,7 @@ static unsigned long release_freepages(struct list_head *freelist)
 static void split_map_pages(struct list_head *list)
 {
 	unsigned int i, order, nr_pages;
-	struct page *page, *next;
+	struct page *page, *next, *tmp;
 	LIST_HEAD(tmp_list);
 
 	list_for_each_entry_safe(page, next, list, lru) {
@@ -101,8 +101,8 @@ static void split_map_pages(struct list_head *list)
 			split_page(page, order);
 
 		for (i = 0; i < nr_pages; i++) {
-			list_add(&page->lru, &tmp_list);
-			page++;
+			tmp = nth_page(page, i);
+			list_add(&tmp->lru, &tmp_list);
 		}
 	}
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..d77fc2ad581d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2297,7 +2297,7 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 static void __split_huge_page_tail(struct page *head, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
-	struct page *page_tail = head + tail;
+	struct page *page_tail = nth_page(head, tail);
 
 	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f648decfe39d..855211dea13e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3513,7 +3513,7 @@ void split_page(struct page *page, unsigned int order)
 	VM_BUG_ON_PAGE(!page_count(page), page);
 
 	for (i = 1; i < (1 << order); i++)
-		set_page_refcounted(page + i);
+		set_page_refcounted(nth_page(page, i));
 	split_page_owner(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 }

From patchwork Tue Mar 29 13:09:28 2022
X-Patchwork-Submitter: Chen Wandun
X-Patchwork-Id: 12794830
From: Chen Wandun <chenwandun@huawei.com>
Subject: [PATCH 2/2] mm: fix contiguous memmap assumptions about alloc/free pages
Date: Tue, 29 Mar 2022 21:09:28 +0800
Message-ID: <20220329130928.266323-3-chenwandun@huawei.com>
In-Reply-To: <20220329130928.266323-1-chenwandun@huawei.com>
References: <20220329130928.266323-1-chenwandun@huawei.com>

It is not safe to assume that a compound page has virtually contiguous
page structs when CONFIG_SPARSEMEM is enabled without
CONFIG_SPARSEMEM_VMEMMAP, so use nth_page() to iterate over each page,
and introduce a page_nth() helper as the inverse of nth_page() for
computing a tail page's index.  (A short illustrative sketch of
page_nth() follows the diff below.)

Signed-off-by: Chen Wandun <chenwandun@huawei.com>
---
 include/linux/mm.h |  2 ++
 mm/page_alloc.c    | 12 +++++++-----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 355075fb2654..ef48cfef7c67 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -212,9 +212,11 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
+#define page_nth(head, tail) (page_to_pfn(tail) - page_to_pfn(head))
 #define folio_page_idx(folio, p) (page_to_pfn(p) - folio_pfn(folio))
 #else
 #define nth_page(page,n) ((page) + (n))
+#define page_nth(head, tail) ((tail) - (head))
 #define folio_page_idx(folio, p) ((p) - &(folio)->page)
 #endif
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 855211dea13e..09bc63992d20 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -721,7 +721,7 @@ static void prep_compound_head(struct page *page, unsigned int order)
 
 static void prep_compound_tail(struct page *head, int tail_idx)
 {
-	struct page *p = head + tail_idx;
+	struct page *p = nth_page(head, tail_idx);
 
 	p->mapping = TAIL_MAPPING;
 	set_compound_head(p, head);
@@ -1213,7 +1213,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 		ret = 0;
 		goto out;
 	}
-	switch (page - head_page) {
+	switch (page_nth(head_page, page)) {
 	case 1:
 		/* the first tail page: ->mapping may be compound_mapcount() */
 		if (unlikely(compound_mapcount(page))) {
@@ -1322,6 +1322,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (unlikely(order)) {
 		bool compound = PageCompound(page);
 		int i;
+		struct page *tail_page;
 
 		VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
 
@@ -1330,13 +1331,14 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			ClearPageHasHWPoisoned(page);
 		}
 		for (i = 1; i < (1 << order); i++) {
+			tail_page = nth_page(page, i);
 			if (compound)
-				bad += free_tail_pages_check(page, page + i);
-			if (unlikely(check_free_page(page + i))) {
+				bad += free_tail_pages_check(page, tail_page);
+			if (unlikely(check_free_page(tail_page))) {
 				bad++;
 				continue;
 			}
-			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+			tail_page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
 	if (PageMappingFlags(page))
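As promised above, a purely illustrative sketch of how page_nth() is
meant to be used (not part of the patch; tail_index() is a made-up
name for the example).  page_nth(head, tail) returns the index of
@tail within the compound page headed by @head, and is the inverse of
nth_page(), i.e. nth_page(head, page_nth(head, tail)) == tail.

/*
 * Sketch only: compute a tail page's index without relying on
 * "tail - head" pointer subtraction, which is wrong when the memmap
 * is not virtually contiguous.
 */
static inline unsigned long tail_index(struct page *head, struct page *tail)
{
	return page_nth(head, tail);	/* pfn difference under SPARSEMEM && !VMEMMAP */
}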