From patchwork Fri Apr 29 12:18:13 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12831931
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org,
 keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v9 1/4] mm: hugetlb_vmemmap: disable hugetlb_optimize_vmemmap when struct page crosses page boundaries
Date: Fri, 29 Apr 2022 20:18:13 +0800
Message-Id: <20220429121816.37541-2-songmuchun@bytedance.com>
In-Reply-To: <20220429121816.37541-1-songmuchun@bytedance.com>
References: <20220429121816.37541-1-songmuchun@bytedance.com>

If the size of "struct page" is not a power of two and the feature that
minimizes the overhead of the struct pages associated with each HugeTLB
page is enabled, then the vmemmap pages of HugeTLB will be corrupted
after remapping (in theory, a panic would follow). This can only happen
with !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
conventional configuration nowadays, so it is not a real-world issue,
just the result of a code review. However, we cannot prevent anyone
from building that combined configuration, so hugetlb_optimize_vmemmap
should be disabled in that case to fix the issue.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb_vmemmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29554c6ef2ae..6254bb2d4ae5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
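
The invariant behind the moved check: the vmemmap optimization frees the
tail pages of a HugeTLB page's struct-page array and remaps them to the
head vmemmap page, which is only safe when no "struct page" straddles a
PAGE_SIZE boundary, i.e. when sizeof(struct page) divides PAGE_SIZE
evenly; a power-of-two size (below PAGE_SIZE) guarantees that. Below is
a minimal userspace C sketch, not kernel code, that illustrates the
boundary-crossing arithmetic. The sizes passed to check() are
illustrative assumptions: 64 bytes is the common x86_64 "struct page"
size, and 56 merely stands in for a layout shrunk by a configuration
such as !CONFIG_MEMCG && !CONFIG_SLUB.

/*
 * Userspace sketch (not kernel code): shows why the patch requires
 * sizeof(struct page) to be a power of two before enabling the
 * HugeTLB vmemmap optimization. The struct sizes in main() are
 * illustrative assumptions, not measured kernel values.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

static void check(unsigned long struct_page_size)
{
	/* A power-of-2 size below PAGE_SIZE divides it evenly: no straddling. */
	if (is_power_of_2(struct_page_size)) {
		printf("size %lu: safe, %lu structs per page\n",
		       struct_page_size, PAGE_SIZE / struct_page_size);
		return;
	}

	/* Otherwise some struct eventually crosses a boundary: find the first. */
	for (unsigned long i = 0; ; i++) {
		unsigned long start = i * struct_page_size;
		unsigned long end = start + struct_page_size - 1;

		if (start / PAGE_SIZE != end / PAGE_SIZE) {
			printf("size %lu: struct #%lu spans pages %lu and %lu\n",
			       struct_page_size, i,
			       start / PAGE_SIZE, end / PAGE_SIZE);
			return;
		}
	}
}

int main(void)
{
	check(64);	/* common x86_64 layout: 64 structs per 4 KiB page */
	check(56);	/* hypothetical shrunken layout: struct #73 straddles */
	return 0;
}

Note also the placement the diff chooses: performing the check in
hugetlb_vmemmap_init() and disabling the static key there turns the
optimization off regardless of how it was enabled, whereas the old
check in the early-parameter handler only refused the request at
command-line parsing time.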