From patchwork Tue Aug 6 02:21:11 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13754346
Date: Mon, 5 Aug 2024 20:21:11 -0600
In-Reply-To: <20240806022114.3320543-1-yuzhao@google.com>
References: <20240806022114.3320543-1-yuzhao@google.com>
Message-ID: <20240806022114.3320543-2-yuzhao@google.com>
Subject: [RFC PATCH 1/4] mm: HVO: introduce helper function to update and flush pgtable
From: Yu Zhao
To: Catalin Marinas, Will Deacon
Cc: Andrew Morton, David Rientjes, Douglas Anderson, Frank van der Linden,
	Mark Rutland, Muchun Song, Nanyong Sun, Yang Shi,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Muchun Song, Yu Zhao

From: Nanyong Sun

Add pmd/pte update and TLB flush helper functions for updating page
tables. This refactoring allows each architecture to implement its own
special logic, in preparation for arm64 following the necessary
break-before-make sequence when updating page tables.

Signed-off-by: Nanyong Sun
Reviewed-by: Muchun Song
Signed-off-by: Yu Zhao
---
 mm/hugetlb_vmemmap.c | 55 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 12 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 829112b0a914..2dd92e58f304
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -46,6 +46,37 @@ struct vmemmap_remap_walk {
 	unsigned long		flags;
 };
 
+#ifndef vmemmap_update_pmd
+static inline void vmemmap_update_pmd(unsigned long addr,
+				      pmd_t *pmdp, pte_t *ptep)
+{
+	pmd_populate_kernel(&init_mm, pmdp, ptep);
+}
+#endif
+
+#ifndef vmemmap_update_pte
+static inline void vmemmap_update_pte(unsigned long addr,
+				      pte_t *ptep, pte_t pte)
+{
+	set_pte_at(&init_mm, addr, ptep, pte);
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_all
+static inline void vmemmap_flush_tlb_all(void)
+{
+	flush_tlb_all();
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_range
+static inline void vmemmap_flush_tlb_range(unsigned long start,
+					   unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#endif
+
 static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 			     struct vmemmap_remap_walk *walk)
 {
@@ -81,9 +112,9 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 
 		/* Make pte visible before pmd. See comment in pmd_install(). */
 		smp_wmb();
-		pmd_populate_kernel(&init_mm, pmd, pgtable);
+		vmemmap_update_pmd(start, pmd, pgtable);
 		if (!(walk->flags & VMEMMAP_SPLIT_NO_TLB_FLUSH))
-			flush_tlb_kernel_range(start, start + PMD_SIZE);
+			vmemmap_flush_tlb_range(start, start + PMD_SIZE);
 	} else {
 		pte_free_kernel(&init_mm, pgtable);
 	}
@@ -171,7 +202,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 		return ret;
 
 	if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
-		flush_tlb_kernel_range(start, end);
+		vmemmap_flush_tlb_range(start, end);
 
 	return 0;
 }
@@ -220,15 +251,15 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 
 		/*
 		 * Makes sure that preceding stores to the page contents from
-		 * vmemmap_remap_free() become visible before the set_pte_at()
-		 * write.
+		 * vmemmap_remap_free() become visible before the
+		 * vmemmap_update_pte() write.
 		 */
 		smp_wmb();
 	}
 
 	entry = mk_pte(walk->reuse_page, pgprot);
 	list_add(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
+	vmemmap_update_pte(addr, pte, entry);
 }
 
 /*
@@ -267,10 +298,10 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
 
 	/*
 	 * Makes sure that preceding stores to the page contents become visible
-	 * before the set_pte_at() write.
+	 * before the vmemmap_update_pte() write.
 	 */
 	smp_wmb();
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+	vmemmap_update_pte(addr, pte, mk_pte(page, pgprot));
 }
 
 /**
@@ -536,7 +567,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	}
 
 	if (restored)
-		flush_tlb_all();
+		vmemmap_flush_tlb_all();
 	if (!ret)
 		ret = restored;
 	return ret;
@@ -664,7 +695,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 			break;
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 
 	/* avoid writes from page_ref_add_unless() while folding vmemmap */
 	synchronize_rcu();
@@ -684,7 +715,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 	 * allowing more vmemmap remaps to occur.
 	 */
 	if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
-		flush_tlb_all();
+		vmemmap_flush_tlb_all();
 		free_vmemmap_page_list(&vmemmap_pages);
 		INIT_LIST_HEAD(&vmemmap_pages);
 		__hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
@@ -692,7 +723,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		}
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 	free_vmemmap_page_list(&vmemmap_pages);
 }
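
A rough, illustrative sketch (not part of this patch, and not the actual
arm64 implementation, which follows later in this series): an architecture
that has to honor break-before-make could override one of the hooks above
from an arch header along the following lines. The override name matches
the #ifndef hooks introduced here; pte_clear(), flush_tlb_kernel_range()
and set_pte_at() are existing kernel primitives.

/*
 * Hypothetical arch override, shown only to illustrate the ordering the
 * hooks make possible: break (clear + flush) before make (write new PTE).
 */
#define vmemmap_update_pte vmemmap_update_pte
static inline void vmemmap_update_pte(unsigned long addr,
				      pte_t *ptep, pte_t pte)
{
	/* Break: invalidate the old entry so no stale translation remains. */
	pte_clear(&init_mm, addr, ptep);

	/* Flush the stale TLB entry for this kernel address. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* Make: install the new entry. */
	set_pte_at(&init_mm, addr, ptep, pte);
}

With such an override visible before mm/hugetlb_vmemmap.c is compiled, the
generic fallback under #ifndef vmemmap_update_pte drops out and the remap
paths above pick up the architecture's version automatically.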