From patchwork Mon Aug 1 08:04:17 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12933611
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
 Mike Rapoport, Mike Rapoport, Will Deacon,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH 3/4] arm64/mmu: move helpers for hotplug page tables
 freeing close to callers
Date: Mon, 1 Aug 2022 11:04:17 +0300
Message-Id: <20220801080418.120311-4-rppt@kernel.org>
In-Reply-To: <20220801080418.120311-1-rppt@kernel.org>
References: <20220801080418.120311-1-rppt@kernel.org>
From: Mike Rapoport

to minimize extra ifdefery when the unmap_*() methods will be used to
remap the crash kernel.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 50 ++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index baa2dda2dcce..2f548fb2244c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -837,30 +837,6 @@ static void free_hotplug_page_range(struct page *page, size_t size,
 	}
 }
 
-static void free_hotplug_pgtable_page(struct page *page)
-{
-	free_hotplug_page_range(page, PAGE_SIZE, NULL);
-}
-
-static bool pgtable_range_aligned(unsigned long start, unsigned long end,
-				  unsigned long floor, unsigned long ceiling,
-				  unsigned long mask)
-{
-	start &= mask;
-	if (start < floor)
-		return false;
-
-	if (ceiling) {
-		ceiling &= mask;
-		if (!ceiling)
-			return false;
-	}
-
-	if (end - 1 > ceiling - 1)
-		return false;
-	return true;
-}
-
 static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
@@ -993,6 +969,30 @@ static void unmap_range(unsigned long addr, unsigned long end,
 	} while (addr = next, addr < end);
 }
 
+static bool pgtable_range_aligned(unsigned long start, unsigned long end,
+				  unsigned long floor, unsigned long ceiling,
+				  unsigned long mask)
+{
+	start &= mask;
+	if (start < floor)
+		return false;
+
+	if (ceiling) {
+		ceiling &= mask;
+		if (!ceiling)
+			return false;
+	}
+
+	if (end - 1 > ceiling - 1)
+		return false;
+	return true;
+}
+
+static void free_hotplug_pgtable_page(struct page *page)
+{
+	free_hotplug_page_range(page, PAGE_SIZE, NULL);
+}
+
 static void free_empty_pte_table(pmd_t *pmdp, unsigned long addr,
 				 unsigned long end, unsigned long floor,
 				 unsigned long ceiling)
@@ -1146,7 +1146,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
 			free_empty_p4d_table(pgdp, addr, next, floor, ceiling);
 	} while (addr = next, addr < end);
 }
-#endif
+#endif /* CONFIG_MEMORY_HOTPLUG */
 
 #if !ARM64_KERNEL_USES_PMD_MAPS
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
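
[Editor's note] For reference, pgtable_range_aligned(), which this patch moves
verbatim, returns true when rounding @start down to @mask granularity stays at
or above @floor and @end stays at or below the @mask-aligned @ceiling, i.e. the
page-table page covering the range is not shared with mappings that must
survive and can therefore be freed. Below is a minimal userspace sketch that
exercises the same check; the 2 MiB mask and the addresses are made-up test
values, not taken from the patch.

#include <stdbool.h>
#include <stdio.h>

/* Standalone copy of the helper moved by the patch above. */
static bool pgtable_range_aligned(unsigned long start, unsigned long end,
				  unsigned long floor, unsigned long ceiling,
				  unsigned long mask)
{
	start &= mask;
	if (start < floor)
		return false;

	if (ceiling) {
		ceiling &= mask;
		if (!ceiling)
			return false;
	}

	if (end - 1 > ceiling - 1)
		return false;
	return true;
}

int main(void)
{
	/* Hypothetical 2 MiB granularity (a PMD-sized region). */
	unsigned long mask = ~((1UL << 21) - 1);

	/* Aligned region fully inside [floor, ceiling): prints 1. */
	printf("%d\n", pgtable_range_aligned(0x40200000UL, 0x40400000UL,
					     0x40000000UL, 0x80000000UL, mask));

	/* Aligned start falls below the floor, so freeing is unsafe: prints 0. */
	printf("%d\n", pgtable_range_aligned(0x40000000UL, 0x40200000UL,
					     0x40100000UL, 0x80000000UL, mask));

	return 0;
}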