From patchwork Wed Oct 4 15:00:45 2017
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 9984867
From: Michal Hocko
To: Andrew Morton
Cc: LKML, Michal Hocko, David Howells, Ingo Molnar, Jeff Dike,
    linux-mips@linux-mips.org, linux-sh@vger.kernel.org, Ralf Baechle,
    Richard Weinberger, Rich Felker, uclinux-h8-devel@lists.sourceforge.jp,
    Yoshinori Sato
Subject: [PATCH] mm, arch: remove empty_bad_page*
Date: Wed, 4 Oct 2017 17:00:45 +0200
Message-Id: <20171004150045.30755-1-mhocko@kernel.org>
X-Mailer: git-send-email 2.14.2
X-Mailing-List: linux-sh@vger.kernel.org

From: Michal Hocko

empty_bad_page and empty_bad_pte_table seem to be relics from the old days
that no code has used for a long time. I have tried to find out exactly when
they fell out of use, but the many code movements make that hard to pin down;
the traces disappear around the 2.4 era.
Anyway, no code references either empty_bad_page or empty_bad_pte_table any
more. We only allocate storage that nobody uses, so remove them.

Cc: Yoshinori Sato
Cc: Ralf Baechle
Cc: David Howells
Cc: Rich Felker
Cc: Jeff Dike
Cc: Richard Weinberger
Cc: Ingo Molnar
Cc:
Cc:
Cc:
Signed-off-by: Michal Hocko
Acked-by: Ingo Molnar
Acked-by: Ralf Baechle
---
Hi,
Pasha Tatashin made me look more closely at this comment in
include/linux/page-flags.h

 * PG_reserved is set for special pages, which can never be swapped out. Some
 * of them might not even exist (eg empty_bad_page)...

in http://lkml.kernel.org/r/691dba28-718c-e9a9-d006-88505eb5cd7e@oracle.com
because it was the first time I had heard about empty_bad_page. It seems this
is no longer needed, but some relics remain in arch code.

Please note that I have no way to test this other than running it through my
(cross-arch) compile test battery, and there were no failures.

 arch/frv/mm/init.c                 | 14 --------------
 arch/h8300/mm/init.c               | 13 -------------
 arch/mips/include/asm/pgtable-64.h |  8 +-------
 arch/mn10300/kernel/head.S         |  8 --------
 arch/sh/kernel/head_64.S           |  8 --------
 arch/um/kernel/mem.c               |  3 ---
 include/linux/page-flags.h         |  2 +-
 7 files changed, 2 insertions(+), 54 deletions(-)

diff --git a/arch/frv/mm/init.c b/arch/frv/mm/init.c
index 328f0a292316..cf464100e838 100644
--- a/arch/frv/mm/init.c
+++ b/arch/frv/mm/init.c
@@ -42,21 +42,9 @@
 #undef DEBUG

 /*
- * BAD_PAGE is the page that is used for page faults when linux
- * is out-of-memory. Older versions of linux just did a
- * do_exit(), but using this instead means there is less risk
- * for a process dying in kernel mode, possibly leaving a inode
- * unused etc..
- *
- * BAD_PAGETABLE is the accompanying page-table: it is initialized
- * to point to BAD_PAGE entries.
- *
  * ZERO_PAGE is a special page that is used for zero-initialized
  * data and COW.
  */
-static unsigned long empty_bad_page_table;
-static unsigned long empty_bad_page;
-
 unsigned long empty_zero_page;
 EXPORT_SYMBOL(empty_zero_page);

@@ -72,8 +60,6 @@ void __init paging_init(void)
 	unsigned long zones_size[MAX_NR_ZONES] = {0, };

 	/* allocate some pages for kernel housekeeping tasks */
-	empty_bad_page_table = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
-	empty_bad_page = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
 	empty_zero_page = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);

 	memset((void *) empty_zero_page, 0, PAGE_SIZE);
diff --git a/arch/h8300/mm/init.c b/arch/h8300/mm/init.c
index 495a3d6b539b..85c51cf782a5 100644
--- a/arch/h8300/mm/init.c
+++ b/arch/h8300/mm/init.c
@@ -39,20 +39,9 @@
 #include

 /*
- * BAD_PAGE is the page that is used for page faults when linux
- * is out-of-memory. Older versions of linux just did a
- * do_exit(), but using this instead means there is less risk
- * for a process dying in kernel mode, possibly leaving a inode
- * unused etc..
- *
- * BAD_PAGETABLE is the accompanying page-table: it is initialized
- * to point to BAD_PAGE entries.
- *
  * ZERO_PAGE is a special page that is used for zero-initialized
  * data and COW.
  */
-static unsigned long empty_bad_page_table;
-static unsigned long empty_bad_page;
 unsigned long empty_zero_page;

 /*
@@ -77,8 +66,6 @@ void __init paging_init(void)
 	 * Initialize the bad page table and bad page to point
 	 * to a couple of allocated pages.
 	 */
-	empty_bad_page_table = (unsigned long)alloc_bootmem_pages(PAGE_SIZE);
-	empty_bad_page = (unsigned long)alloc_bootmem_pages(PAGE_SIZE);
 	empty_zero_page = (unsigned long)alloc_bootmem_pages(PAGE_SIZE);

 	memset((void *)empty_zero_page, 0, PAGE_SIZE);
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 67fe6dc5211c..0036ea0c7173 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -31,12 +31,7 @@
  * tables. Each page table is also a single 4K page, giving 512 (==
  * PTRS_PER_PTE) 8 byte ptes. Each pud entry is initialized to point to
  * invalid_pmd_table, each pmd entry is initialized to point to
- * invalid_pte_table, each pte is initialized to 0. When memory is low,
- * and a pmd table or a page table allocation fails, empty_bad_pmd_table
- * and empty_bad_page_table is returned back to higher layer code, so
- * that the failure is recognized later on. Linux does not seem to
- * handle these failures very well though. The empty_bad_page_table has
- * invalid pte entries in it, to force page faults.
+ * invalid_pte_table, each pte is initialized to 0.
  *
  * Kernel mappings: kernel mappings are held in the swapper_pg_table.
  * The layout is identical to userspace except it's indexed with the
@@ -175,7 +170,6 @@
 	printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))

 extern pte_t invalid_pte_table[PTRS_PER_PTE];
-extern pte_t empty_bad_page_table[PTRS_PER_PTE];

 #ifndef __PAGETABLE_PUD_FOLDED
 /*
diff --git a/arch/mn10300/kernel/head.S b/arch/mn10300/kernel/head.S
index 73e00fc78072..0b15f759e0d2 100644
--- a/arch/mn10300/kernel/head.S
+++ b/arch/mn10300/kernel/head.S
@@ -433,14 +433,6 @@ ENTRY(swapper_pg_dir)
 ENTRY(empty_zero_page)
 	.space PAGE_SIZE

-	.balign PAGE_SIZE
-ENTRY(empty_bad_page)
-	.space PAGE_SIZE
-
-	.balign PAGE_SIZE
-ENTRY(empty_bad_pte_table)
-	.space PAGE_SIZE
-
 	.balign PAGE_SIZE
 ENTRY(large_page_table)
 	.space PAGE_SIZE
diff --git a/arch/sh/kernel/head_64.S b/arch/sh/kernel/head_64.S
index defd851abefa..cca491397a28 100644
--- a/arch/sh/kernel/head_64.S
+++ b/arch/sh/kernel/head_64.S
@@ -101,14 +101,6 @@
 mmu_pdtp_cache:
 	.space PAGE_SIZE, 0

-	.global empty_bad_page
-empty_bad_page:
-	.space PAGE_SIZE, 0
-
-	.global empty_bad_pte_table
-empty_bad_pte_table:
-	.space PAGE_SIZE, 0
-
 	.global fpu_in_use
 fpu_in_use:
 	.quad 0
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index e7437ec62710..3c0e470ea646 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -22,8 +22,6 @@
 /* allocated in paging_init, zeroed in mem_init, and unchanged thereafter */
 unsigned long *empty_zero_page = NULL;
 EXPORT_SYMBOL(empty_zero_page);
-/* allocated in paging_init and unchanged thereafter */
-static unsigned long *empty_bad_page = NULL;

 /*
  * Initialized during boot, and readonly for initializing page tables
@@ -146,7 +144,6 @@ void __init paging_init(void)
 	int i;

 	empty_zero_page = (unsigned long *) alloc_bootmem_low_pages(PAGE_SIZE);
-	empty_bad_page = (unsigned long *) alloc_bootmem_low_pages(PAGE_SIZE);
 	for (i = 0; i < ARRAY_SIZE(zones_size); i++)
 		zones_size[i] = 0;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index ba2d470d2d0a..048b763e939d 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -17,7 +17,7 @@
  * Various page->flags bits:
  *
  * PG_reserved is set for special pages, which can never be swapped out. Some
- * of them might not even exist (eg empty_bad_page)...
+ * of them might not even exist...
  *
  * The PG_private bitflag is set on pagecache pages if they contain filesystem
  * specific data (which is normally at page->private). It can be used by