From patchwork Thu Nov 4 15:56:23 2021
X-Patchwork-Submitter: Qian Cai
X-Patchwork-Id: 12603385
From: Qian Cai
To: Catalin Marinas, Will Deacon
Cc: Mike Rapoport, Andrew Morton, Qian Cai
Subject: [PATCH] arm64: Track no early_pgtable_alloc() for kmemleak
Date: Thu, 4 Nov 2021 11:56:23 -0400
Message-ID: <20211104155623.11158-1-quic_qiancai@quicinc.com>

After switching the page size from 64KB to 4KB on several arm64 servers here,
kmemleak starts to run out of the early memory pool due to a huge number of
early_pgtable_alloc() calls:

  kmemleak_alloc_phys()
  memblock_alloc_range_nid()
  memblock_phys_alloc_range()
  early_pgtable_alloc()
  init_pmd()
  alloc_init_pud()
  __create_pgd_mapping()
  __map_memblock()
  paging_init()
  setup_arch()
  start_kernel()

Even increasing the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE by 4 times
is not enough for a server with 200GB+ of memory. There is little value in
checking those early page tables for memory leaks, and those early memory
mappings should not reference other memory, so they produce no kmemleak false
positives. Hence, we can safely skip tracking those early allocations in
kmemleak, as commit fed84c785270 ("mm/memblock.c: skip kmemleak for
kasan_init()") did, without introducing the complication of automatically
scaling the value depending on the runtime memory size etc. After this patch,
the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE is sufficient again.
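For scale, here is a rough back-of-the-envelope sketch (a hedged illustration,
not part of the patch): it assumes the linear map is built at 4KB page
granularity (e.g. with rodata=full or debug_pagealloc) and that
DEBUG_KMEMLEAK_MEM_POOL_SIZE still defaults to 16000 objects. Under those
assumptions the early page-table allocations alone outnumber the pool several
times over:

	/*
	 * Hypothetical estimate, not kernel code: each early_pgtable_alloc()
	 * call becomes one object in kmemleak's early memory pool.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long long mem = 200ULL << 30;          /* 200 GiB of RAM */
		unsigned long long ptes = mem >> 12;            /* one PTE per 4KB page */
		unsigned long long pte_pages = ptes / 512;      /* 512 PTEs per table page */
		unsigned long long pmd_pages = pte_pages / 512; /* 512 PMD entries per page */

		/* ~102,600 table pages vs. a default pool of 16,000 (64,000 at 4x). */
		printf("~%llu early page-table allocations\n", pte_pages + pmd_pages);
		return 0;
	}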
Signed-off-by: Qian Cai
Reviewed-by: Catalin Marinas
---
 arch/arm64/mm/mmu.c      |  3 ++-
 include/linux/memblock.h |  1 +
 mm/memblock.c            | 10 +++++++---
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d77bf06d6a6d..4d3cfbaa92a7 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -96,7 +96,8 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
 	phys_addr_t phys;
 	void *ptr;
 
-	phys = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
+	phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
+					 MEMBLOCK_ALLOC_PGTABLE);
 	if (!phys)
 		panic("Failed to allocate page table page\n");
 
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 7df557b16c1e..de903055b01c 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -390,6 +390,7 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
 #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
 #define MEMBLOCK_ALLOC_ACCESSIBLE	0
 #define MEMBLOCK_ALLOC_KASAN		1
+#define MEMBLOCK_ALLOC_PGTABLE		2
 
 /* We are using top down, so it is safe to use 0 here */
 #define MEMBLOCK_LOW_LIMIT 0
diff --git a/mm/memblock.c b/mm/memblock.c
index 659bf0ffb086..13bc56a641c0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -287,7 +287,8 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
 {
 	/* pump up @end */
 	if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
-	    end == MEMBLOCK_ALLOC_KASAN)
+	    end == MEMBLOCK_ALLOC_KASAN ||
+	    end == MEMBLOCK_ALLOC_PGTABLE)
 		end = memblock.current_limit;
 
 	/* avoid allocating the first page */
@@ -1387,8 +1388,11 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 		return 0;
 
 done:
-	/* Skip kmemleak for kasan_init() due to high volume. */
-	if (end != MEMBLOCK_ALLOC_KASAN)
+	/*
+	 * Skip kmemleak for kasan_init() and early_pgtable_alloc() due to high
+	 * volume.
+	 */
+	if (end != MEMBLOCK_ALLOC_KASAN && end != MEMBLOCK_ALLOC_PGTABLE)
 		/*
 		 * The min_count is set to 0 so that memblock allocated
 		 * blocks are never reported as leaks. This is because many
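As a usage note, here is a minimal sketch (a hypothetical caller, not part of
the patch; only memblock_phys_alloc_range() and the MEMBLOCK_ALLOC_PGTABLE
sentinel come from the hunks above) of how another early allocator whose pages
never reference other allocations could opt out of kmemleak the same way that
early_pgtable_alloc() now does:

	#include <linux/init.h>
	#include <linux/kernel.h>
	#include <linux/memblock.h>

	/* Hypothetical example; the name and caller are illustrative only. */
	static phys_addr_t __init early_table_alloc_untracked(void)
	{
		/* Passing MEMBLOCK_ALLOC_PGTABLE as @end skips kmemleak tracking. */
		phys_addr_t phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
							     MEMBLOCK_ALLOC_PGTABLE);

		if (!phys)
			panic("Failed to allocate early table page\n");

		return phys;
	}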