From patchwork Mon Mar 7 13:07:05 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12771826
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 1/4] mm: hugetlb: disable freeing vmemmap pages when struct page crosses page boundaries
Date: Mon, 7 Mar 2022 21:07:05 +0800
Message-Id:
<20220307130708.58771-2-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220307130708.58771-1-songmuchun@bytedance.com>
References: <20220307130708.58771-1-songmuchun@bytedance.com>

If the size of "struct page" is not a power of two and this feature is
enabled, then the vmemmap pages of HugeTLB will be corrupted after
remapping (a panic is about to happen in theory). But this only occurs
when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
conventional configuration nowadays. So it is not a real-world issue,
just the result of a code review. But we cannot prevent anyone from
building with that combination of options, so the feature should be
disabled in this case to fix the issue.
Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b3118dba0518..49bc7f845438 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -121,6 +121,18 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_free_vmemmap_enabled())
 		return;
 
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON) &&
+	    !is_power_of_2(sizeof(struct page))) {
+		/*
+		 * The hugetlb_free_vmemmap_enabled_key can be enabled when
+		 * CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON. It should
+		 * be disabled if "struct page" crosses page boundaries.
+		 */
+		pr_warn_once("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_free_vmemmap_enabled_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail