From patchwork Thu May 12 04:11:39 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 12847008
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
 mcgrof@kernel.org,
 keescook@chromium.org, yzaikin@google.com, osalvador@suse.de,
 david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com,
 Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 1/4] mm: hugetlb_vmemmap: disable hugetlb_optimize_vmemmap
 when struct page crosses page boundaries
Date: Thu, 12 May 2022 12:11:39 +0800
Message-Id: <20220512041142.39501-2-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220512041142.39501-1-songmuchun@bytedance.com>
References: <20220512041142.39501-1-songmuchun@bytedance.com>
MIME-Version: 1.0

If the size of "struct page" is not a power of two and the feature that
minimizes the overhead of the struct pages associated with each HugeTLB
page is enabled, the vmemmap pages of HugeTLB will be corrupted after
remapping (in theory, a panic is imminent). This can only happen with
!CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a conventional
configuration nowadays, so it is not a real-world issue, just the result
of a code review. But since we cannot prevent anyone from building with
that combination of config options, hugetlb_optimize_vmemmap should be
disabled in this case to fix the issue.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb_vmemmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29554c6ef2ae..6254bb2d4ae5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
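
For readers outside the kernel tree, here is a minimal userspace sketch
(not part of the patch) of the invariant being enforced: the vmemmap
optimization assumes PAGE_SIZE is an exact multiple of
sizeof(struct page), which, for sizes up to PAGE_SIZE, holds exactly
when the size is a power of two. The 4 KiB page and the 64- and 56-byte
struct sizes below are assumed example values, not taken from any
particular kernel config.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumes x86_64 with 4 KiB base pages */

/* Same test the kernel's is_power_of_2() performs. */
static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	/*
	 * Assumed struct page sizes: 64 bytes is the typical x86_64 case;
	 * 56 stands in for a hypothetical !CONFIG_MEMCG && !CONFIG_SLUB
	 * layout where the size is no longer a power of two.
	 */
	unsigned long sizes[] = { 64, 56 };

	for (int i = 0; i < 2; i++) {
		unsigned long sz = sizes[i];

		/*
		 * In a page-aligned vmemmap array, no struct page straddles
		 * a page boundary iff PAGE_SIZE % sz == 0; for sz <=
		 * PAGE_SIZE that is equivalent to sz being a power of two.
		 */
		printf("sizeof(struct page) = %lu: power of 2? %s, "
		       "crosses page boundaries? %s\n",
		       sz, is_power_of_2(sz) ? "yes" : "no",
		       PAGE_SIZE % sz == 0 ? "no" : "yes");
	}
	return 0;
}

With these assumed sizes, 64 divides 4096 evenly while 4096 % 56 == 8,
so the 56-byte layout leaves a struct page split across two pages and
the remapping arithmetic in the optimization no longer holds.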