From patchwork Thu Jan 21 12:27:20 2021
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
    Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
    David Hildenbrand, Elena Reshetova, "H. Peter Anvin", Ingo Molnar,
    James Bottomley, "Kirill A. Shutemov", Matthew Wilcox, Mark Rutland,
    Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
    Peter Zijlstra, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
    Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
    Hagen Paul Pfeifer,
    linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v16 08/11] secretmem: add memcg accounting
Date: Thu, 21 Jan 2021 14:27:20 +0200
Message-Id: <20210121122723.3446-9-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

Account the memory consumed by secretmem to memcg. The accounting is
updated when the memory is actually allocated and freed.
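For illustration only, not part of the patch: a minimal userspace sketch of
how the accounting added here would be exercised. It assumes a kernel built
with this series; the __NR_memfd_secret value below is a placeholder, since
the real syscall number depends on the architecture and on how the series
wires up the syscall.

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 447	/* assumed; arch/series dependent */
	#endif

	int main(void)
	{
		size_t len = 4 * sysconf(_SC_PAGESIZE);
		int fd = syscall(__NR_memfd_secret, 0);

		if (fd < 0) {
			perror("memfd_secret");
			return EXIT_FAILURE;
		}
		if (ftruncate(fd, len) < 0) {
			perror("ftruncate");
			return EXIT_FAILURE;
		}

		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return EXIT_FAILURE;
		}

		/*
		 * Touching the mapping faults the pages in; with this patch
		 * the PMD-sized pool chunk backing them is charged to this
		 * process's memory cgroup via memcg_kmem_charge_page().
		 */
		memset(p, 0xa5, len);

		printf("inspect memory.current of this cgroup, then press Enter\n");
		getchar();

		munmap(p, len);
		close(fd);
		return EXIT_SUCCESS;
	}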
Signed-off-by: Mike Rapoport
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
Cc: Alexander Viro
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christopher Lameter
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Elena Reshetova
Cc: Hagen Paul Pfeifer
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Bottomley
Cc: "Kirill A. Shutemov"
Cc: Mark Rutland
Cc: Matthew Wilcox
Cc: Michael Kerrisk
Cc: Palmer Dabbelt
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Rick Edgecombe
Cc: Shuah Khan
Cc: Thomas Gleixner
Cc: Tycho Andersen
Cc: Will Deacon
---
 mm/filemap.c   |  3 ++-
 mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++-
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2d0c6721879d..bb28dd6d9e22 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -42,6 +42,7 @@
 #include <linux/psi.h>
 #include <linux/ramfs.h>
 #include <linux/page_idle.h>
+#include <linux/secretmem.h>
 #include "internal.h"
 
 #define CREATE_TRACE_POINTS
@@ -839,7 +840,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 	page->mapping = mapping;
 	page->index = offset;
 
-	if (!huge) {
+	if (!huge && !page_is_secretmem(page)) {
 		error = mem_cgroup_charge(page, current->mm, gfp);
 		if (error)
 			goto error;
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 469211c7cc3a..05026460e2ee 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -18,6 +18,7 @@
 #include <linux/pseudo_fs.h>
 #include <linux/secretmem.h>
 #include <linux/set_memory.h>
+#include <linux/memcontrol.h>
 #include <linux/sched/signal.h>
 
 #include <uapi/linux/magic.h>
@@ -44,6 +45,32 @@ struct secretmem_ctx {
 
 static struct cma *secretmem_cma;
 
+static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
+{
+	int err;
+
+	err = memcg_kmem_charge_page(page, gfp, order);
+	if (err)
+		return err;
+
+	/*
+	 * secretmem caches are unreclaimable kernel allocations, so treat
+	 * them as unreclaimable slab memory for VM statistics purposes
+	 */
+	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+			      PAGE_SIZE << order);
+
+	return 0;
+}
+
+static void secretmem_unaccount_pages(struct page *page, int order)
+{
+
+	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+			      -PAGE_SIZE << order);
+	memcg_kmem_uncharge_page(page, order);
+}
+
 static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
 	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -56,6 +83,10 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	if (!page)
 		return -ENOMEM;
 
+	err = secretmem_account_pages(page, gfp, PMD_PAGE_ORDER);
+	if (err)
+		goto err_cma_release;
+
 	/*
 	 * clear the data left from the previous user before dropping the
 	 * pages from the direct map
@@ -65,7 +96,7 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 
 	err = set_direct_map_invalid_noflush(page, nr_pages);
 	if (err)
-		goto err_cma_release;
+		goto err_memcg_uncharge;
 
 	addr = (unsigned long)page_address(page);
 	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
@@ -83,6 +114,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	 * won't fail
 	 */
 	set_direct_map_default_noflush(page, nr_pages);
+err_memcg_uncharge:
+	secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
 err_cma_release:
 	cma_release(secretmem_cma, page, nr_pages);
 	return err;
@@ -314,6 +347,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
 	int i;
 
 	set_direct_map_default_noflush(page, nr_pages);
+	secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
 
 	for (i = 0; i < nr_pages; i++)
 		clear_highpage(page + i);
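Also for illustration, a hypothetical sketch (not from this series) of
observing the statistics side of the change: since pool chunks are accounted
via NR_SLAB_UNRECLAIMABLE_B, nr_slab_unreclaimable in /proc/vmstat (reported
in pages) should grow by roughly one PMD worth of pages on the first pool
refill. The syscall number is again an assumption.

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 447	/* assumed; arch/series dependent */
	#endif

	/* Return the nr_slab_unreclaimable counter (pages), or -1. */
	static long read_unreclaimable_slab_pages(void)
	{
		char key[64];
		long val;
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f)
			return -1;
		while (fscanf(f, "%63s %ld", key, &val) == 2) {
			if (!strcmp(key, "nr_slab_unreclaimable")) {
				fclose(f);
				return val;
			}
		}
		fclose(f);
		return -1;
	}

	int main(void)
	{
		long before = read_unreclaimable_slab_pages();
		size_t len = 2UL * 1024 * 1024;	/* one PMD-sized chunk */
		int fd = syscall(__NR_memfd_secret, 0);

		if (fd < 0 || ftruncate(fd, len) < 0) {
			perror("memfd_secret/ftruncate");
			return 1;
		}

		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		p[0] = 1;	/* first fault grows the secretmem pool */

		long after = read_unreclaimable_slab_pages();
		printf("nr_slab_unreclaimable: %ld -> %ld pages\n",
		       before, after);

		munmap(p, len);
		close(fd);
		return 0;
	}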