From patchwork Sun Dec 4 14:11:35 2022
From: panqinglin2020@iscas.ac.cn
To: palmer@dabbelt.com, linux-riscv@lists.infradead.org
Cc: jeff@riscv.org, xuyinan@ict.ac.cn, conor@kernel.org, ajones@ventanamicro.com, alex@ghiti.fr, jszhang@kernel.org, Qinglin Pan
Subject: [PATCH v9 1/3] riscv: mm: modify pte format for Svnapot
Date: Sun, 4 Dec 2022 22:11:35 +0800
Message-Id: <20221204141137.691790-2-panqinglin2020@iscas.ac.cn>
In-Reply-To: <20221204141137.691790-1-panqinglin2020@iscas.ac.cn>
From: Qinglin Pan

Add an alternative to enable/disable Svnapot support: it is patched in when
"svnapot" appears in the "riscv,isa" field of the fdt and the SVNAPOT compile
option is set. This controls the return value of has_svnapot(), and all code
that depends on Svnapot must first check that has_svnapot() returns true.

Modify the PTE definition for Svnapot and add functions in pgtable.h to mark
a PTE as napot and to check whether a PTE is a napot PTE. The spec currently
supports only the 64KB napot size, so some macros have only a 64KB version.

Signed-off-by: Qinglin Pan
Reviewed-by: Andrew Jones

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index ef8d66de5f38..1d8477c0af7c 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -395,6 +395,20 @@ config RISCV_ISA_C
 
 	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_SVNAPOT
+	bool "SVNAPOT extension support"
+	depends on 64BIT && MMU
+	select RISCV_ALTERNATIVE
+	default y
+	help
+	  Allow kernel to detect the SVNAPOT ISA extension dynamically at boot
+	  time and enable its usage.
+
+	  The SVNAPOT extension helps to mark contiguous PTEs as a range
+	  of contiguous virtual-to-physical translations, with a naturally
+	  aligned power-of-2 (NAPOT) granularity larger than the base 4KB page
+	  size.
+
 config RISCV_ISA_SVPBMT
 	bool "SVPBMT extension support"
 	depends on 64BIT && MMU
diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
index 4180312d2a70..beadb1126ed9 100644
--- a/arch/riscv/include/asm/errata_list.h
+++ b/arch/riscv/include/asm/errata_list.h
@@ -22,9 +22,10 @@
 #define ERRATA_THEAD_NUMBER 3
 #endif
 
-#define CPUFEATURE_SVPBMT	0
-#define CPUFEATURE_ZICBOM	1
-#define CPUFEATURE_NUMBER	2
+#define CPUFEATURE_SVPBMT	0
+#define CPUFEATURE_ZICBOM	1
+#define CPUFEATURE_SVNAPOT	2
+#define CPUFEATURE_NUMBER	3
 
 #ifdef __ASSEMBLY__
 
@@ -156,6 +157,14 @@ asm volatile(ALTERNATIVE(	\
 	: "=r" (__ovl) :	\
 	: "memory")
 
+#define ALT_SVNAPOT(_val)	\
+asm(ALTERNATIVE(	\
+	"li %0, 0",	\
+	"li %0, 1",	\
+	0, CPUFEATURE_SVNAPOT, CONFIG_RISCV_ISA_SVNAPOT)	\
+	: "=r" (_val) :	\
+	: "memory")
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index b22525290073..4cbc1f45ab26 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -54,6 +54,7 @@ extern unsigned long elf_hwcap;
  */
 enum riscv_isa_ext_id {
 	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
+	RISCV_ISA_EXT_SVNAPOT,
 	RISCV_ISA_EXT_SVPBMT,
 	RISCV_ISA_EXT_ZICBOM,
 	RISCV_ISA_EXT_ZIHINTPAUSE,
@@ -87,7 +88,6 @@ static __always_inline int riscv_isa_ext2key(int num)
 {
 	switch (num) {
 	case RISCV_ISA_EXT_f:
-		return RISCV_ISA_EXT_KEY_FPU;
 	case RISCV_ISA_EXT_d:
 		return RISCV_ISA_EXT_KEY_FPU;
 	case RISCV_ISA_EXT_ZIHINTPAUSE:
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index ac70b0fd9a9a..349fad5e35de 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -16,11 +16,6 @@
 #define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
 #define PAGE_MASK	(~(PAGE_SIZE - 1))
 
-#ifdef CONFIG_64BIT
-#define HUGE_MAX_HSTATE	2
-#else
-#define HUGE_MAX_HSTATE	1
-#endif
 #define HPAGE_SHIFT	PMD_SHIFT
 #define HPAGE_SIZE	(_AC(1, UL) << HPAGE_SHIFT)
 #define HPAGE_MASK	(~(HPAGE_SIZE - 1))
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index dc42375c2357..9611833907ec 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -74,6 +74,40 @@ typedef struct {
  */
 #define _PAGE_PFN_MASK	GENMASK(53, 10)
 
+/*
+ * [63] Svnapot definitions:
+ * 0 Svnapot disabled
+ * 1 Svnapot enabled
+ */
+#define _PAGE_NAPOT_SHIFT	63
+#define _PAGE_NAPOT		BIT(_PAGE_NAPOT_SHIFT)
+/*
+ * Only 64KB (order 4) napot ptes supported.
+ */
+#define NAPOT_CONT_ORDER_BASE	4
+enum napot_cont_order {
+	NAPOT_CONT64KB_ORDER = NAPOT_CONT_ORDER_BASE,
+	NAPOT_ORDER_MAX,
+};
+
+#define for_each_napot_order(order)	\
+	for (order = NAPOT_CONT_ORDER_BASE; order < NAPOT_ORDER_MAX; order++)
+#define for_each_napot_order_rev(order)	\
+	for (order = NAPOT_ORDER_MAX - 1;	\
+	     order >= NAPOT_CONT_ORDER_BASE; order--)
+#define napot_cont_order(val)	(__builtin_ctzl((val.pte >> _PAGE_PFN_SHIFT) << 1))
+
+#define napot_cont_shift(order)	((order) + PAGE_SHIFT)
+#define napot_cont_size(order)	BIT(napot_cont_shift(order))
+#define napot_cont_mask(order)	(~(napot_cont_size(order) - 1UL))
+#define napot_pte_num(order)	BIT(order)
+
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+#define HUGE_MAX_HSTATE		(2 + (NAPOT_ORDER_MAX - NAPOT_CONT_ORDER_BASE))
+#else
+#define HUGE_MAX_HSTATE		2
+#endif
+
 /*
  * [62:61] Svpbmt Memory Type definitions:
  *
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index c61ae83aadee..99957b1270f2 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -6,10 +6,12 @@
 #ifndef _ASM_RISCV_PGTABLE_H
 #define _ASM_RISCV_PGTABLE_H
 
+#include
 #include
 #include
 #include
 
+#include
 
 #ifndef CONFIG_MMU
 #define KERNEL_LINK_ADDR	PAGE_OFFSET
@@ -264,10 +266,49 @@ static inline pte_t pud_pte(pud_t pud)
 	return __pte(pud_val(pud));
 }
 
+static __always_inline bool has_svnapot(void)
+{
+	unsigned int _val;
+
+	ALT_SVNAPOT(_val);
+
+	return _val;
+}
+
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+
+static inline unsigned long pte_napot(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_NAPOT;
+}
+
+static inline pte_t pte_mknapot(pte_t pte, unsigned int order)
+{
+	int pos = order - 1 + _PAGE_PFN_SHIFT;
+	unsigned long napot_bit = BIT(pos);
+	unsigned long napot_mask = ~GENMASK(pos, _PAGE_PFN_SHIFT);
+
+	return __pte((pte_val(pte) & napot_mask) | napot_bit | _PAGE_NAPOT);
+}
+
+#else
+
+static inline unsigned long pte_napot(pte_t pte)
+{
+	return 0;
+}
+
+#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+
 /* Yields the page frame number (PFN) of a page table entry */
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return __page_val_to_pfn(pte_val(pte));
+	unsigned long res = __page_val_to_pfn(pte_val(pte));
+
+	if (has_svnapot() && pte_napot(pte))
+		res = res & (res - 1UL);
+
+	return res;
 }
 
 #define pte_page(x)	pfn_to_page(pte_pfn(x))
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index bf9dd6764bad..88495f5fcafd 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -165,6 +165,7 @@ static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
+	__RISCV_ISA_EXT_DATA(svnapot, RISCV_ISA_EXT_SVNAPOT),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
 	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
 	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 694267d1fe81..eeed66c3d497 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -205,6 +205,7 @@ void __init riscv_fill_hwcap(void)
 				SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);
 				SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
 				SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
+				SET_ISA_EXT_MAP("svnapot", RISCV_ISA_EXT_SVNAPOT);
 			}
 #undef SET_ISA_EXT_MAP
 		}
@@ -252,6 +253,17 @@ void __init riscv_fill_hwcap(void)
 }
 
 #ifdef CONFIG_RISCV_ALTERNATIVE
+static bool __init_or_module
+cpufeature_probe_svnapot(unsigned int stage)
+{
+	if (!IS_ENABLED(CONFIG_RISCV_ISA_SVNAPOT))
+		return false;
+
+	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
+		return false;
+
+	return riscv_isa_extension_available(NULL, SVNAPOT);
+}
+
 static bool __init_or_module cpufeature_probe_svpbmt(unsigned int stage)
 {
 	if (!IS_ENABLED(CONFIG_RISCV_ISA_SVPBMT))
@@ -289,6 +301,9 @@ static u32 __init_or_module cpufeature_probe(unsigned int stage)
 {
 	u32 cpu_req_feature = 0;
 
+	if (cpufeature_probe_svnapot(stage))
+		cpu_req_feature |= BIT(CPUFEATURE_SVNAPOT);
+
 	if (cpufeature_probe_svpbmt(stage))
 		cpu_req_feature |= BIT(CPUFEATURE_SVPBMT);
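The encoding implemented by pte_mknapot() and the new pte_pfn() above can be
reproduced outside the kernel. The following standalone sketch (plain
userspace C, not kernel code; _PAGE_PFN_SHIFT = 10 and the PFN mask
GENMASK(53, 10) are taken from the hunks above, the sample PFN is arbitrary)
mirrors that arithmetic for the only supported order, 4 (64KB):

  /* Illustration only: mirrors pte_mknapot()/pte_pfn() for order 4 (64KB). */
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_PFN_SHIFT 10ULL
  #define PAGE_PFN_MASK  ((((1ULL << 54) - 1)) & ~((1ULL << 10) - 1)) /* GENMASK(53, 10) */
  #define PAGE_NAPOT     (1ULL << 63)                                 /* _PAGE_NAPOT      */

  static uint64_t mknapot(uint64_t pte, unsigned int order)
  {
          unsigned int pos = order - 1 + PAGE_PFN_SHIFT;
          uint64_t napot_bit = 1ULL << pos;
          /* clear pfn bits [pos:PAGE_PFN_SHIFT], then set the napot marker bit */
          uint64_t napot_mask = ~(((napot_bit << 1) - 1) & ~((1ULL << PAGE_PFN_SHIFT) - 1));

          return (pte & napot_mask) | napot_bit | PAGE_NAPOT;
  }

  static uint64_t pte_pfn(uint64_t pte)
  {
          uint64_t pfn = (pte & PAGE_PFN_MASK) >> PAGE_PFN_SHIFT;

          if (pte & PAGE_NAPOT)
                  pfn &= pfn - 1;  /* drop the encoded napot bit, as the patch does */
          return pfn;
  }

  int main(void)
  {
          uint64_t pfn = 0x12340;                 /* 64KB-aligned pfn: low 4 bits zero */
          uint64_t pte = pfn << PAGE_PFN_SHIFT;   /* permission bits omitted */
          uint64_t napot = mknapot(pte, 4);

          printf("napot pte: %#llx, recovered pfn: %#llx\n",
                 (unsigned long long)napot, (unsigned long long)pte_pfn(napot));
          return 0;
  }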
From patchwork Sun Dec 4 14:11:36 2022
From: panqinglin2020@iscas.ac.cn
To: palmer@dabbelt.com, linux-riscv@lists.infradead.org
Cc: jeff@riscv.org, xuyinan@ict.ac.cn, conor@kernel.org, ajones@ventanamicro.com, alex@ghiti.fr, jszhang@kernel.org, Qinglin Pan
Subject: [PATCH v9 2/3] riscv: mm: support Svnapot in hugetlb page
Date: Sun, 4 Dec 2022 22:11:36 +0800
Message-Id: <20221204141137.691790-3-panqinglin2020@iscas.ac.cn>
In-Reply-To: <20221204141137.691790-1-panqinglin2020@iscas.ac.cn>

From: Qinglin Pan

Svnapot can be used to back 64KB hugetlb pages, so it becomes a new size
option when using hugetlbfs. Add a basic hugetlb page implementation and
support the 64KB size in it by using Svnapot.

To test, boot the kernel with a command line that contains
"default_hugepagesz=64K hugepagesz=64K hugepages=20" and run a simple test
like this:

tools/testing/selftests/vm/map_hugetlb 1 16

The test should pass.
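The selftest invocation above amounts to an mmap() of anonymous hugetlb
memory with the 64KB page size encoded in the flags. A minimal standalone
sketch of such a test (not part of this patch; it assumes the kernel was
booted with the hugepage parameters above) might look like:

  /* Illustration only: a map_hugetlb-style smoke test for 64KB hugetlb pages. */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #ifndef MAP_HUGE_64KB
  #define MAP_HUGE_64KB   (16 << 26)              /* log2(64KB) << MAP_HUGE_SHIFT */
  #endif

  #define LENGTH          (16UL * 64 * 1024)      /* 16 hugepages of 64KB */

  int main(void)
  {
          void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_64KB,
                            -1, 0);
          if (addr == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          memset(addr, 0x5a, LENGTH);     /* fault in and touch every byte */
          printf("mapped and touched %lu bytes of 64KB hugetlb memory at %p\n",
                 LENGTH, addr);

          return munmap(addr, LENGTH);
  }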
Signed-off-by: Qinglin Pan
Reviewed-by: Andrew Jones

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 1d8477c0af7c..be5c1edea70f 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -43,7 +43,7 @@ config RISCV
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
-	select ARCH_WANT_GENERAL_HUGETLB
+	select ARCH_WANT_GENERAL_HUGETLB if !RISCV_ISA_SVNAPOT
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
 	select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index ec19d6afc896..fe6f23006641 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -2,7 +2,6 @@
 #ifndef _ASM_RISCV_HUGETLB_H
 #define _ASM_RISCV_HUGETLB_H
 
-#include
 #include
 
 static inline void arch_clear_hugepage_flags(struct page *page)
@@ -11,4 +10,37 @@ static inline void arch_clear_hugepage_flags(struct page *page)
 }
 #define arch_clear_hugepage_flags arch_clear_hugepage_flags
 
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+#define __HAVE_ARCH_HUGE_PTE_CLEAR
+void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+		    pte_t *ptep, unsigned long sz);
+
+#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
+void set_huge_pte_at(struct mm_struct *mm,
+		     unsigned long addr, pte_t *ptep, pte_t pte);
+
+#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+			      unsigned long addr, pte_t *ptep);
+
+#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
+pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+			    unsigned long addr, pte_t *ptep);
+
+#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
+void huge_ptep_set_wrprotect(struct mm_struct *mm,
+			     unsigned long addr, pte_t *ptep);
+
+#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
+int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+			       unsigned long addr, pte_t *ptep,
+			       pte_t pte, int dirty);
+
+pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
+#define arch_make_huge_pte arch_make_huge_pte
+
+#endif /*CONFIG_RISCV_ISA_SVNAPOT*/
+
+#include
+
 #endif /* _ASM_RISCV_HUGETLB_H */
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 932dadfdca54..49f92f8cd431 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -2,6 +2,301 @@
 #include
 #include
 
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+pte_t *huge_pte_alloc(struct mm_struct *mm,
+		      struct vm_area_struct *vma,
+		      unsigned long addr,
+		      unsigned long sz)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte = NULL;
+	unsigned long order;
+
+	pgd = pgd_offset(mm, addr);
+	p4d = p4d_alloc(mm, pgd, addr);
+	if (!p4d)
+		return NULL;
+
+	pud = pud_alloc(mm, p4d, addr);
+	if (!pud)
+		return NULL;
+
+	if (sz == PUD_SIZE) {
+		pte = (pte_t *)pud;
+		goto out;
+	}
+
+	if (sz == PMD_SIZE) {
+		if (want_pmd_share(vma, addr) && pud_none(*pud))
+			pte = huge_pmd_share(mm, vma, addr, pud);
+		else
+			pte = (pte_t *)pmd_alloc(mm, pud, addr);
+		goto out;
+	}
+
+	pmd = pmd_alloc(mm, pud, addr);
+	if (!pmd)
+		return NULL;
+
+	for_each_napot_order(order) {
+		if (napot_cont_size(order) == sz) {
+			pte = pte_alloc_map(mm, pmd, (addr & napot_cont_mask(order)));
+			break;
+		}
+	}
+
+out:
+	WARN_ON_ONCE(pte && pte_present(*pte) && !pte_huge(*pte));
+	return pte;
+}
+
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr,
+		       unsigned long sz)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte = NULL;
+	unsigned long order;
+
+	pgd = pgd_offset(mm, addr);
+	if (!pgd_present(*pgd))
+		return NULL;
+	p4d = p4d_offset(pgd, addr);
+	if (!p4d_present(*p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (sz == PUD_SIZE)
+		/* must be pud huge, non-present or none */
+		return (pte_t *)pud;
+	if (!pud_present(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (sz == PMD_SIZE)
+		/* must be pmd huge, non-present or none */
+		return (pte_t *)pmd;
+	if (!pmd_present(*pmd))
+		return NULL;
+
+	for_each_napot_order(order) {
+		if (napot_cont_size(order) == sz) {
+			pte = pte_offset_kernel(pmd, (addr & napot_cont_mask(order)));
+			break;
+		}
+	}
+	return pte;
+}
+
+static pte_t get_clear_contig(struct mm_struct *mm,
+			      unsigned long addr,
+			      pte_t *ptep,
+			      unsigned long pte_num)
+{
+	pte_t orig_pte = ptep_get(ptep);
+	unsigned long i;
+
+	for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++) {
+		pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+
+		if (pte_dirty(pte))
+			orig_pte = pte_mkdirty(orig_pte);
+
+		if (pte_young(pte))
+			orig_pte = pte_mkyoung(orig_pte);
+	}
+	return orig_pte;
+}
+
+static pte_t get_clear_contig_flush(struct mm_struct *mm,
+				    unsigned long addr,
+				    pte_t *ptep,
+				    unsigned long pte_num)
+{
+	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pte_num);
+	bool valid = !pte_none(orig_pte);
+	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+
+	if (valid)
+		flush_tlb_range(&vma, addr, addr + (PAGE_SIZE * pte_num));
+
+	return orig_pte;
+}
+
+pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
+{
+	unsigned long order;
+
+	for_each_napot_order(order) {
+		if (shift == napot_cont_shift(order)) {
+			entry = pte_mknapot(entry, order);
+			break;
+		}
+	}
+	if (order == NAPOT_ORDER_MAX)
+		entry = pte_mkhuge(entry);
+
+	return entry;
+}
+
+void set_huge_pte_at(struct mm_struct *mm,
+		     unsigned long addr,
+		     pte_t *ptep,
+		     pte_t pte)
+{
+	int i;
+	int pte_num;
+
+	if (!pte_napot(pte)) {
+		set_pte_at(mm, addr, ptep, pte);
+		return;
+	}
+
+	pte_num = napot_pte_num(napot_cont_order(pte));
+	for (i = 0; i < pte_num; i++, ptep++, addr += PAGE_SIZE)
+		set_pte_at(mm, addr, ptep, pte);
+}
+
+int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       pte_t *ptep,
+			       pte_t pte,
+			       int dirty)
+{
+	pte_t orig_pte;
+	int i;
+	int pte_num;
+	struct mm_struct *mm = vma->vm_mm;
+
+	if (!pte_napot(pte))
+		return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
+
+	pte_num = napot_pte_num(napot_cont_order(pte));
+	ptep = huge_pte_offset(mm, addr,
+			       napot_cont_size(napot_cont_order(pte)));
+	orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num);
+
+	if (pte_dirty(orig_pte))
+		pte = pte_mkdirty(pte);
+
+	if (pte_young(orig_pte))
+		pte = pte_mkyoung(pte);
+
+	for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
+		set_pte_at(mm, addr, ptep, pte);
+
+	return true;
+}
+
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+			      unsigned long addr,
+			      pte_t *ptep)
+{
+	int pte_num;
+	pte_t orig_pte = ptep_get(ptep);
+
+	if (!pte_napot(orig_pte))
+		return ptep_get_and_clear(mm, addr, ptep);
+
+	pte_num = napot_pte_num(napot_cont_order(orig_pte));
+
+	return get_clear_contig(mm, addr, ptep, pte_num);
+}
+
+void huge_ptep_set_wrprotect(struct mm_struct *mm,
+			     unsigned long addr,
+			     pte_t *ptep)
+{
+	int i;
+	int pte_num;
+	pte_t pte = ptep_get(ptep);
+
+	if (!pte_napot(pte)) {
+		ptep_set_wrprotect(mm, addr, ptep);
+		return;
+	}
+
+	pte_num = napot_pte_num(napot_cont_order(pte));
+	ptep = huge_pte_offset(mm, addr, napot_cont_size(napot_cont_order(pte)));
+
+	for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
+		ptep_set_wrprotect(mm, addr, ptep);
+}
+
+pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+			    unsigned long addr,
+			    pte_t *ptep)
+{
+	int pte_num;
+	pte_t pte = ptep_get(ptep);
+
+	if (!pte_napot(pte))
+		return ptep_clear_flush(vma, addr, ptep);
+
+	pte_num = napot_pte_num(napot_cont_order(pte));
+
+	return get_clear_contig_flush(vma->vm_mm, addr, ptep, pte_num);
+}
+
+void huge_pte_clear(struct mm_struct *mm,
+		    unsigned long addr,
+		    pte_t *ptep,
+		    unsigned long sz)
+{
+	int i, pte_num;
+	pte_t pte = READ_ONCE(*ptep);
+
+	if (!pte_napot(pte)) {
+		pte_clear(mm, addr, ptep);
+		return;
+	}
+
+	pte_num = napot_pte_num(napot_cont_order(pte));
+	for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
+		pte_clear(mm, addr, ptep);
+}
+
+bool __init is_napot_size(unsigned long size)
+{
+	unsigned long order;
+
+	if (!has_svnapot())
+		return false;
+
+	for_each_napot_order(order) {
+		if (size == napot_cont_size(order))
+			return true;
+	}
+	return false;
+}
+
+static __init int napot_hugetlbpages_init(void)
+{
+	if (has_svnapot()) {
+		unsigned long order;
+
+		for_each_napot_order(order)
+			hugetlb_add_hstate(order);
+	}
+	return 0;
+}
+arch_initcall(napot_hugetlbpages_init);
+
+#else
+
+bool __init is_napot_size(unsigned long size)
+{
+	return false;
+}
+
+#endif /*CONFIG_RISCV_ISA_SVNAPOT*/
+
 int pud_huge(pud_t pud)
 {
 	return pud_leaf(pud);
@@ -18,6 +313,8 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 		return true;
 	else if (IS_ENABLED(CONFIG_64BIT) && size == PUD_SIZE)
 		return true;
+	else if (is_napot_size(size))
+		return true;
 	else
 		return false;
 }
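The helpers above all loop over napot_pte_num() entries of the same napot
group. For the single supported order, the macro arithmetic from patch 1
works out as in this standalone sketch (plain C; PAGE_SHIFT = 12 assumed, as
on RISC-V base pages):

  /* Illustration only: the NAPOT helper arithmetic from patch 1, for order 4. */
  #include <stdio.h>

  #define PAGE_SHIFT              12
  #define NAPOT_CONT_ORDER_BASE   4

  int main(void)
  {
          unsigned long order = NAPOT_CONT_ORDER_BASE;
          unsigned long shift = order + PAGE_SHIFT;       /* napot_cont_shift() */
          unsigned long size  = 1UL << shift;             /* napot_cont_size()  */
          unsigned long mask  = ~(size - 1UL);            /* napot_cont_mask()  */
          unsigned long nptes = 1UL << order;             /* napot_pte_num()    */

          printf("order %lu: size %lu KB, %lu PTEs per mapping, address mask %#lx\n",
                 order, size >> 10, nptes, mask);
          return 0;
  }

So a 64KB hugetlb page is realized as 16 identical PTEs covering one
64KB-aligned range, which is exactly what set_huge_pte_at() writes out.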
From patchwork Sun Dec 4 14:11:37 2022
From: panqinglin2020@iscas.ac.cn
To: palmer@dabbelt.com, linux-riscv@lists.infradead.org
Cc: jeff@riscv.org, xuyinan@ict.ac.cn, conor@kernel.org, ajones@ventanamicro.com, alex@ghiti.fr, jszhang@kernel.org, Qinglin Pan
Subject: [PATCH v9 3/3] riscv: mm: support Svnapot in huge vmap
Date: Sun, 4 Dec 2022 22:11:37 +0800
Message-Id: <20221204141137.691790-4-panqinglin2020@iscas.ac.cn>
In-Reply-To: <20221204141137.691790-1-panqinglin2020@iscas.ac.cn>

From: Qinglin Pan

As HAVE_ARCH_HUGE_VMAP and HAVE_ARCH_HUGE_VMALLOC are supported, we can
implement arch_vmap_pte_range_map_size and arch_vmap_pte_supported_shift
for Svnapot so that huge vmap can use the napot sizes. This can be exercised
by the huge vmap used in the PCI driver. Huge vmalloc with Svnapot can be
tested by changing the test_vmalloc module as in [1], then probing that
module to run fix_size_alloc_test with use_huge set to true.
[1] https://github.com/RV-VM/linux-vm-support/commit/33f4ee399c36d355c412ebe5334ca46fd727f8f5

Signed-off-by: Qinglin Pan
Reviewed-by: Andrew Jones
Acked-by: Conor Dooley

diff --git a/arch/riscv/include/asm/vmalloc.h b/arch/riscv/include/asm/vmalloc.h
index 48da5371f1e9..6f7447d563ab 100644
--- a/arch/riscv/include/asm/vmalloc.h
+++ b/arch/riscv/include/asm/vmalloc.h
@@ -17,6 +17,65 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
 	return true;
 }
 
-#endif
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+#include
+
+#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
+static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, unsigned long end,
+							  u64 pfn, unsigned int max_page_shift)
+{
+	unsigned long size, order;
+	unsigned long map_size = PAGE_SIZE;
+
+	if (!has_svnapot())
+		return map_size;
+
+	for_each_napot_order_rev(order) {
+		if (napot_cont_shift(order) > max_page_shift)
+			continue;
+
+		size = napot_cont_size(order);
+		if (end - addr < size)
+			continue;
+
+		if (!IS_ALIGNED(addr, size))
+			continue;
+
+		if (!IS_ALIGNED(PFN_PHYS(pfn), size))
+			continue;
+
+		map_size = size;
+		break;
+	}
+
+	return map_size;
+}
+
+#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
+static inline int arch_vmap_pte_supported_shift(unsigned long size)
+{
+	int shift = PAGE_SHIFT;
+	unsigned long order;
+
+	if (!has_svnapot())
+		return shift;
+
+	WARN_ON_ONCE(size >= PMD_SIZE);
+
+	for_each_napot_order_rev(order) {
+		if (napot_cont_size(order) > size)
+			continue;
+
+		if (!IS_ALIGNED(size, napot_cont_size(order)))
+			continue;
+
+		shift = napot_cont_shift(order);
+		break;
+	}
+
+	return shift;
+}
+
+#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
 #endif /* _ASM_RISCV_VMALLOC_H */
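For the single supported order, the map-size selection in
arch_vmap_pte_range_map_size() reduces to three checks: enough remaining
range, 64KB virtual alignment, and 64KB physical alignment. The following
standalone sketch (plain C; the max_page_shift check is omitted and the
sample addresses are arbitrary) walks through that decision:

  /* Illustration only: how arch_vmap_pte_range_map_size() settles on 64KB. */
  #include <stdio.h>

  #define PAGE_SIZE   4096UL
  #define NAPOT_64KB  (16UL * PAGE_SIZE)

  static unsigned long map_size(unsigned long addr, unsigned long end, unsigned long phys)
  {
          /* only one napot order exists today, so the loop is a single check */
          if (end - addr >= NAPOT_64KB &&
              !(addr % NAPOT_64KB) &&
              !(phys % NAPOT_64KB))
                  return NAPOT_64KB;

          return PAGE_SIZE;
  }

  int main(void)
  {
          /* aligned virtual and physical range: one napot group maps 64KB */
          printf("%lu\n", map_size(0xffffffc800000000UL, 0xffffffc800020000UL, 0x80000000UL));
          /* misaligned physical address: fall back to 4KB mappings */
          printf("%lu\n", map_size(0xffffffc800000000UL, 0xffffffc800020000UL, 0x80001000UL));
          return 0;
  }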