From patchwork Fri Dec 22 07:02:03 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13503013
From: Kefeng Wang
To: Andrew Morton, Mike Rapoport
CC: , Kefeng Wang
Subject: [PATCH] mm: remove unnecessary ia64 code and comment
Date: Fri, 22 Dec 2023 15:02:03 +0800
Message-ID: <20231222070203.2966980-1-wangkefeng.wang@huawei.com>

IA64 is gone since commit cf8e8658100d ("arch: Remove Itanium (IA-64)
architecture"); remove the now-unnecessary ia64-specific mm code and
comments as well.

Signed-off-by: Kefeng Wang
Reviewed-by: Mike Rapoport (IBM)
---
 mm/Kconfig      |  2 +-
 mm/memory.c     |  4 +---
 mm/mm_init.c    | 48 +++++++++++++++++++-----------------------------
 mm/page_owner.c |  1 -
 4 files changed, 21 insertions(+), 34 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 7824eeb53f7a..a6e1c51959c0 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -732,7 +732,7 @@ config DEFAULT_MMAP_MIN_ADDR
 	  from userspace allocation. Keeping a user from writing to low pages
 	  can help reduce the impact of kernel NULL pointer bugs.
 
-	  For most ia64, ppc64 and x86 users with lots of address space
+	  For most ppc64 and x86 users with lots of address space
 	  a value of 65536 is reasonable and should cause no problems.
 	  On arm and other archs it should not be higher than 32768.
 	  Programs which use vm86 functionality or have some need to map
diff --git a/mm/memory.c b/mm/memory.c
index 716648268fed..86ca40a3f681 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -123,9 +123,7 @@ static bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf)
 /*
  * A number of key systems in x86 including ioremap() rely on the assumption
  * that high_memory defines the upper bound on direct map memory, then end
- * of ZONE_NORMAL. Under CONFIG_DISCONTIG this means that max_low_pfn and
- * highstart_pfn must be the same; there must be no gap between ZONE_NORMAL
- * and ZONE_HIGHMEM.
+ * of ZONE_NORMAL.
  */
 void *high_memory;
 EXPORT_SYMBOL(high_memory);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index ac3d911c34fd..2390dca94d70 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1467,8 +1467,7 @@ void __init set_pageblock_order(void)
 
 	/*
 	 * Assume the largest contiguous order of interest is a huge page.
-	 * This value may be variable depending on boot parameters on IA64 and
-	 * powerpc.
+	 * This value may be variable depending on boot parameters on powerpc.
 	 */
 	pageblock_order = order;
 }
@@ -1629,8 +1628,8 @@ void __init *memmap_alloc(phys_addr_t size, phys_addr_t align,
 #ifdef CONFIG_FLATMEM
 static void __init alloc_node_mem_map(struct pglist_data *pgdat)
 {
-	unsigned long __maybe_unused start = 0;
-	unsigned long __maybe_unused offset = 0;
+	unsigned long start, offset, size, end;
+	struct page *map;
 
 	/* Skip empty nodes */
 	if (!pgdat->node_spanned_pages)
@@ -1638,33 +1637,24 @@ static void __init alloc_node_mem_map(struct pglist_data *pgdat)
 
 	start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
 	offset = pgdat->node_start_pfn - start;
-	/* ia64 gets its own node_mem_map, before this, without bootmem */
-	if (!pgdat->node_mem_map) {
-		unsigned long size, end;
-		struct page *map;
-
-		/*
-		 * The zone's endpoints aren't required to be MAX_ORDER
-		 * aligned but the node_mem_map endpoints must be in order
-		 * for the buddy allocator to function correctly.
-		 */
-		end = pgdat_end_pfn(pgdat);
-		end = ALIGN(end, MAX_ORDER_NR_PAGES);
-		size = (end - start) * sizeof(struct page);
-		map = memmap_alloc(size, SMP_CACHE_BYTES, MEMBLOCK_LOW_LIMIT,
-				   pgdat->node_id, false);
-		if (!map)
-			panic("Failed to allocate %ld bytes for node %d memory map\n",
-			      size, pgdat->node_id);
-		pgdat->node_mem_map = map + offset;
-	}
-	pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
-		 __func__, pgdat->node_id, (unsigned long)pgdat,
-		 (unsigned long)pgdat->node_mem_map);
-#ifndef CONFIG_NUMA
 	/*
-	 * With no DISCONTIG, the global mem_map is just set as node 0's
+	 * The zone's endpoints aren't required to be MAX_ORDER
+	 * aligned but the node_mem_map endpoints must be in order
+	 * for the buddy allocator to function correctly.
 	 */
+	end = ALIGN(pgdat_end_pfn(pgdat), MAX_ORDER_NR_PAGES);
+	size = (end - start) * sizeof(struct page);
+	map = memmap_alloc(size, SMP_CACHE_BYTES, MEMBLOCK_LOW_LIMIT,
+			   pgdat->node_id, false);
+	if (!map)
+		panic("Failed to allocate %ld bytes for node %d memory map\n",
+		      size, pgdat->node_id);
+	pgdat->node_mem_map = map + offset;
+	pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
+		 __func__, pgdat->node_id, (unsigned long)pgdat,
+		 (unsigned long)pgdat->node_mem_map);
+#ifndef CONFIG_NUMA
+	/* the global mem_map is just set as node 0's */
 	if (pgdat == NODE_DATA(0)) {
 		mem_map = NODE_DATA(0)->node_mem_map;
 		if (page_to_pfn(mem_map) != pgdat->node_start_pfn)
diff --git a/mm/page_owner.c b/mm/page_owner.c
index e7eba7688881..040dbf26a986 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -121,7 +121,6 @@ static noinline depot_stack_handle_t save_stack(gfp_t flags)
 	 * Sometimes page metadata allocation tracking requires more
 	 * memory to be allocated:
 	 * - when new stack trace is saved to stack depot
-	 * - when backtrace itself is calculated (ia64)
 	 */
 	if (current->in_page_owner)
 		return dummy_handle;
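
As a side note (not part of the patch): the pfn alignment arithmetic that the
simplified alloc_node_mem_map() relies on can be checked in isolation. Below
is a standalone userspace sketch; the MAX_ORDER_NR_PAGES value, the sample
node span, and the 64-byte stand-in for sizeof(struct page) are illustrative
assumptions, not the kernel's values.

/*
 * Standalone sketch of the pfn rounding done by the simplified
 * alloc_node_mem_map(): round the map start down and the map end up
 * to MAX_ORDER blocks, then size the map over that whole span.
 */
#include <stdio.h>

#define MAX_ORDER_NR_PAGES	1024UL	/* assumed, arch/config dependent */
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long node_start_pfn = 0x12345;	/* deliberately unaligned */
	unsigned long node_end_pfn = node_start_pfn + 0x8000;

	unsigned long start = node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
	unsigned long offset = node_start_pfn - start;
	unsigned long end = ALIGN(node_end_pfn, MAX_ORDER_NR_PAGES);
	unsigned long size = (end - start) * 64;	/* ~sizeof(struct page) */

	/*
	 * node_mem_map would point at map + offset, so the first struct
	 * page of the node corresponds to node_start_pfn.
	 */
	printf("map spans pfns [%#lx, %#lx), offset %lu pages, %lu bytes\n",
	       start, end, offset, size);
	return 0;
}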