From patchwork Mon May 27 11:11:45 2019
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 10962563
From: David Hildenbrand
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    akpm@linux-foundation.org, Dan Williams, Wei Yang, Igor Mammedov,
    David Hildenbrand, Catalin Marinas, Will Deacon, Mark Rutland,
    Ard Biesheuvel, Chintan Pandya, Mike Rapoport, Jun Yao, Yu Zhao,
    Robin Murphy, Anshuman Khandual
Subject: [PATCH v3 04/11] arm64/mm: Add temporary arch_remove_memory() implementation
Date: Mon, 27 May 2019 13:11:45 +0200
Message-Id: <20190527111152.16324-5-david@redhat.com>
In-Reply-To: <20190527111152.16324-1-david@redhat.com>
References: <20190527111152.16324-1-david@redhat.com>

A proper arch_remove_memory() implementation is on its way; it will
also cleanly remove page tables in arch_add_memory() in case something
goes wrong.

As we want to use arch_remove_memory() in case something goes wrong
during memory hotplug after arch_add_memory() finished, let's add a
temporary hack that is sufficient until we get a proper implementation
that cleans up page table entries.

We will remove CONFIG_MEMORY_HOTREMOVE around this code in follow-up
patches.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: Andrew Morton
Cc: Ard Biesheuvel
Cc: Chintan Pandya
Cc: Mike Rapoport
Cc: Jun Yao
Cc: Yu Zhao
Cc: Robin Murphy
Cc: Anshuman Khandual
Signed-off-by: David Hildenbrand
---
 arch/arm64/mm/mmu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a1bfc4413982..e569a543c384 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1084,4 +1084,23 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
 			   restrictions);
 }
+#ifdef CONFIG_MEMORY_HOTREMOVE
+void arch_remove_memory(int nid, u64 start, u64 size,
+			struct vmem_altmap *altmap)
+{
+	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	struct zone *zone;
+
+	/*
+	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
+	 * adding fails). Until then, this function should only be used
+	 * during memory hotplug (adding memory), not for memory
+	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
+	 * unlocked yet.
+	 */
+	zone = page_zone(pfn_to_page(start_pfn));
+	__remove_pages(zone, start_pfn, nr_pages, altmap);
+}
+#endif
 #endif
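
[Editor's note] For context, the rollback pattern the commit message
refers to looks roughly like the sketch below: generic hotplug code
calls arch_remove_memory() to undo arch_add_memory() when a later step
fails. This is a minimal, illustrative sketch only, not code from this
series; do_next_hotplug_step() is a hypothetical stand-in for whatever
follows arch_add_memory() (the real flow lives in mm/memory_hotplug.c),
and error handling is heavily simplified.

static int sketch_add_memory_resource(int nid, u64 start, u64 size)
{
	struct mhp_restrictions restrictions = {};
	int ret;

	ret = arch_add_memory(nid, start, size, &restrictions);
	if (ret)
		return ret;

	ret = do_next_hotplug_step(start, size);	/* may fail */
	if (ret) {
		/*
		 * Roll back. With this patch, arm64 can at least tear
		 * down the memmap here via __remove_pages(); the page
		 * tables are leaked until the proper implementation
		 * lands, which is acceptable for this error path.
		 */
		arch_remove_memory(nid, start, size, NULL);
		return ret;
	}

	return 0;
}

Passing a NULL altmap matches ordinary hotplugged memory whose memmap
was allocated normally rather than from the added range itself.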