From patchwork Wed Mar 30 10:25:34 2022
X-Patchwork-Submitter: Chen Wandun
X-Patchwork-Id: 12795638
From: Chen Wandun
To: , , ,
Subject: [PATCH v2 2/2] mm: fix contiguous memmap assumptions about alloc/free pages
Date: Wed, 30 Mar 2022 18:25:34 +0800
Message-ID: <20220330102534.1053240-3-chenwandun@huawei.com>
In-Reply-To: <20220330102534.1053240-1-chenwandun@huawei.com>
References: <20220330102534.1053240-1-chenwandun@huawei.com>

The page structs backing a compound page are only guaranteed to be
virtually contiguous when the memmap itself is. With SPARSEMEM and
without SPARSEMEM_VMEMMAP the memmap is contiguous only within a
section, so a compound page that crosses a section boundary cannot be
walked with plain pointer arithmetic. Use nth_page() to reach each
tail page instead.
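For reference (not part of this patch), nth_page() is defined in
include/linux/mm.h along these lines, so the change degrades to the
old pointer arithmetic on kernels where the memmap is virtually
contiguous:

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
#else
#define nth_page(page,n) ((page) + (n))
#endif

That is, only SPARSEMEM without VMEMMAP pays for the pfn round trip;
FLATMEM and SPARSEMEM_VMEMMAP kernels are unaffected.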
Signed-off-by: Chen Wandun
---
 mm/page_alloc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 855211dea13e..758d8f069b32 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -721,7 +721,7 @@ static void prep_compound_head(struct page *page, unsigned int order)
 
 static void prep_compound_tail(struct page *head, int tail_idx)
 {
-	struct page *p = head + tail_idx;
+	struct page *p = nth_page(head, tail_idx);
 
 	p->mapping = TAIL_MAPPING;
 	set_compound_head(p, head);
@@ -1199,10 +1199,10 @@ static inline int check_free_page(struct page *page)
 	return 1;
 }
 
-static int free_tail_pages_check(struct page *head_page, struct page *page)
+static int free_tail_pages_check(struct page *head_page, int index)
 {
+	struct page *page = nth_page(head_page, index);
 	int ret = 1;
-
 	/*
 	 * We rely page->lru.next never has bit 0 set, unless the page
 	 * is PageTail(). Let's make sure that's true even for poisoned ->lru.
@@ -1213,7 +1213,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 		ret = 0;
 		goto out;
 	}
-	switch (page - head_page) {
+	switch (index) {
 	case 1:
 		/* the first tail page: ->mapping may be compound_mapcount() */
 		if (unlikely(compound_mapcount(page))) {
@@ -1322,6 +1322,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (unlikely(order)) {
 		bool compound = PageCompound(page);
 		int i;
+		struct page *tail_page;
 
 		VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
 
@@ -1330,13 +1331,14 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			ClearPageHasHWPoisoned(page);
 		}
 		for (i = 1; i < (1 << order); i++) {
+			tail_page = nth_page(page, i);
 			if (compound)
-				bad += free_tail_pages_check(page, page + i);
-			if (unlikely(check_free_page(page + i))) {
+				bad += free_tail_pages_check(page, i);
+			if (unlikely(check_free_page(tail_page))) {
 				bad++;
 				continue;
 			}
-			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
+			tail_page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
 	if (PageMappingFlags(page))
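
As a standalone illustration of the failure mode and the fix (a toy
userspace model, not kernel code: toy_nth_page, the two section
arrays, and the tiny PAGES_PER_SECTION are all made up here), the two
"sections" below are separate arrays, so head + i walks off the end
of section0, while translating through the pfn stays correct:

#include <stdio.h>

/* Toy model: the "memmap" is two discontiguous arrays (sections),
 * as under SPARSEMEM without VMEMMAP. */
struct page { unsigned long flags; };

#define PAGES_PER_SECTION 4

static struct page section0[PAGES_PER_SECTION];
static struct page section1[PAGES_PER_SECTION];

/* nth_page()-style helper: go (head, n) -> pfn -> page, never raw
 * pointer arithmetic, so crossing a section boundary is safe. */
static struct page *toy_nth_page(unsigned long head_pfn, int n)
{
	unsigned long pfn = head_pfn + n;

	return pfn < PAGES_PER_SECTION ? &section0[pfn]
				       : &section1[pfn - PAGES_PER_SECTION];
}

int main(void)
{
	int order = 3, i;	/* 8-page compound page at pfn 0 spans both sections */

	for (i = 1; i < (1 << order); i++) {
		struct page *tail = toy_nth_page(0, i);

		tail->flags = 0;	/* e.g. clear check-at-prep flags */
	}
	printf("touched %d tail pages across a section boundary\n",
	       (1 << order) - 1);
	return 0;
}

The same reasoning explains why free_tail_pages_check() now takes an
index rather than a tail-page pointer: page - head_page is pointer
arithmetic too, and is just as invalid across a section boundary.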