From patchwork Sat Jan 13 09:44:34 2024
X-Patchwork-Submitter: Nanyong Sun
X-Patchwork-Id: 13518906
From: Nanyong Sun
Subject: [PATCH v3 1/3] mm: HVO: introduce helper function to update and flush pgtable
Date: Sat, 13 Jan 2024 17:44:34 +0800
Message-ID: <20240113094436.2506396-2-sunnanyong@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240113094436.2506396-1-sunnanyong@huawei.com>
References: <20240113094436.2506396-1-sunnanyong@huawei.com>
MIME-Version: 1.0
Add pmd/pte update and TLB flush helper functions for updating page
tables. This refactoring patch lets each architecture implement its own
special logic, in preparation for the arm64 architecture to follow the
necessary break-before-make sequence when updating page tables.
Signed-off-by: Nanyong Sun
Reviewed-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 55 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 12 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index da177e49d956..f1f5702bce4f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -46,6 +46,37 @@ struct vmemmap_remap_walk {
 	unsigned long		flags;
 };
 
+#ifndef vmemmap_update_pmd
+static inline void vmemmap_update_pmd(unsigned long addr,
+				      pmd_t *pmdp, pte_t *ptep)
+{
+	pmd_populate_kernel(&init_mm, pmdp, ptep);
+}
+#endif
+
+#ifndef vmemmap_update_pte
+static inline void vmemmap_update_pte(unsigned long addr,
+				      pte_t *ptep, pte_t pte)
+{
+	set_pte_at(&init_mm, addr, ptep, pte);
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_all
+static inline void vmemmap_flush_tlb_all(void)
+{
+	flush_tlb_all();
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_range
+static inline void vmemmap_flush_tlb_range(unsigned long start,
+					   unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#endif
+
 static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 			     struct vmemmap_remap_walk *walk)
 {
@@ -81,9 +112,9 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 
 		/* Make pte visible before pmd. See comment in pmd_install(). */
 		smp_wmb();
-		pmd_populate_kernel(&init_mm, pmd, pgtable);
+		vmemmap_update_pmd(start, pmd, pgtable);
 
 		if (!(walk->flags & VMEMMAP_SPLIT_NO_TLB_FLUSH))
-			flush_tlb_kernel_range(start, start + PMD_SIZE);
+			vmemmap_flush_tlb_range(start, start + PMD_SIZE);
 	} else {
 		pte_free_kernel(&init_mm, pgtable);
 	}
@@ -171,7 +202,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 		return ret;
 
 	if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
-		flush_tlb_kernel_range(start, end);
+		vmemmap_flush_tlb_range(start, end);
 
 	return 0;
 }
@@ -217,15 +248,15 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 
 		/*
 		 * Makes sure that preceding stores to the page contents from
-		 * vmemmap_remap_free() become visible before the set_pte_at()
-		 * write.
+		 * vmemmap_remap_free() become visible before the
+		 * vmemmap_update_pte() write.
 		 */
 		smp_wmb();
 	}
 
 	entry = mk_pte(walk->reuse_page, pgprot);
 	list_add(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
+	vmemmap_update_pte(addr, pte, entry);
 }
 
 /*
@@ -264,10 +295,10 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
 
 	/*
 	 * Makes sure that preceding stores to the page contents become visible
-	 * before the set_pte_at() write.
+	 * before the vmemmap_update_pte() write.
 	 */
 	smp_wmb();
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+	vmemmap_update_pte(addr, pte, mk_pte(page, pgprot));
 }
 
 /**
@@ -519,7 +550,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	}
 
 	if (restored)
-		flush_tlb_all();
+		vmemmap_flush_tlb_all();
 	if (!ret)
 		ret = restored;
 	return ret;
@@ -642,7 +673,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		break;
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 
 	list_for_each_entry(folio, folio_list, lru) {
 		int ret;
@@ -659,7 +690,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		 * allowing more vmemmap remaps to occur.
 		 */
 		if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
-			flush_tlb_all();
+			vmemmap_flush_tlb_all();
 			free_vmemmap_page_list(&vmemmap_pages);
 			INIT_LIST_HEAD(&vmemmap_pages);
 			__hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
@@ -667,7 +698,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		}
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 	free_vmemmap_page_list(&vmemmap_pages);
 }