From patchwork Wed Jul 10 05:43:35 2024
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 13728883
From: alexs@kernel.org
To: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: "Alex Shi (Tencent)", Randy Dunlap, Yoann Congal,
	Masahiro Yamada, Petr Mladek, Suren Baghdasaryan
Subject: [PATCH v3 1/2] mm/memcg: alignment memcg_data define condition
Date: Wed, 10 Jul 2024 13:43:35 +0800
Message-ID: <20240710054336.190410-1-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
From: "Alex Shi (Tencent)" <alexs@kernel.org>

Commit 21c690a349ba ("mm: introduce slabobj_ext to support slab object
extensions") changed the condition guarding the folio/page->memcg_data
field from CONFIG_MEMCG to CONFIG_SLAB_OBJ_EXT, and made MEMCG select
SLAB_OBJ_EXT solely to satisfy SLAB_MATCH(memcg_data, obj_exts), even
though there is no other relationship between the two options.

As a result, memcg_data is exposed, and SLAB_OBJ_EXT is pulled in, even
when MEMCG is disabled. That is logically wrong and costs code size.

As Vlastimil Babka suggested, add a _unused_slab_obj_ext placeholder
field so that SLAB_MATCH() can still match slab.obj_exts when !MEMCG.
This resolves the layout-match issue and cleans up the config logic;
the next patch then decouples SLAB_OBJ_EXT from MEMCG.

Signed-off-by: Alex Shi (Tencent)
Cc: Randy Dunlap
Cc: Yoann Congal
Cc: Masahiro Yamada
Cc: Petr Mladek
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
---
v1->v3: take Vlastimil's suggestion and move the SLAB_OBJ_EXT/MEMCG
decoupling to the 2nd patch.
---
 include/linux/mm_types.h | 8 ++++++--
 mm/slab.h                | 4 ++++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ef09c4eef6d3..4ac3abc673d3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -180,8 +180,10 @@ struct page {
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-#ifdef CONFIG_SLAB_OBJ_EXT
+#ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
+#elif defined(CONFIG_SLAB_OBJ_EXT)
+	unsigned long _unused_slab_obj_ext;
 #endif
 
 	/*
@@ -343,8 +345,10 @@ struct folio {
 			};
 			atomic_t _mapcount;
 			atomic_t _refcount;
-#ifdef CONFIG_SLAB_OBJ_EXT
+#ifdef CONFIG_MEMCG
 			unsigned long memcg_data;
+#elif defined(CONFIG_SLAB_OBJ_EXT)
+			unsigned long _unused_slab_obj_ext;
 #endif
 #if defined(WANT_PAGE_VIRTUAL)
 	void *virtual;
diff --git a/mm/slab.h b/mm/slab.h
index 3586e6183224..8ffdd4f315f8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -98,7 +98,11 @@ SLAB_MATCH(flags, __page_flags);
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_SLAB_OBJ_EXT
+#ifdef CONFIG_MEMCG
 SLAB_MATCH(memcg_data, obj_exts);
+#else
+SLAB_MATCH(_unused_slab_obj_ext, obj_exts);
+#endif
 #endif
 #undef SLAB_MATCH
 static_assert(sizeof(struct slab) <= sizeof(struct page));