From patchwork Fri Aug 2 15:14:23 2024
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13751643
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Catalin Marinas, Will Deacon, Ryan Roberts, Mark Rutland,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH v3 2/9] riscv: Restore the pfn in a NAPOT pte when manipulated by core mm code
Date: Fri, 2 Aug 2024 17:14:23 +0200
Message-Id: <20240802151430.99114-3-alexghiti@rivosinc.com>
In-Reply-To: <20240802151430.99114-1-alexghiti@rivosinc.com>
References: <20240802151430.99114-1-alexghiti@rivosinc.com>

The core mm code expects to be able to extract the pfn from a pte. NAPOT
mappings work differently since their ptes actually point to the first pfn
of the mapping, with the pfn's low bits used to encode the size of the
mapping.

So modify ptep_get() so that it returns a pte value containing the *real*
pfn (which then differs from what the HW expects) and, right before the
ptes are stored to the page table, reset the pfn LSBs to encode the size
of the mapping.

Also make sure that all NAPOT mappings are set using set_ptes().

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
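For context, the Svnapot encoding this relies on packs the mapping size into
the low bits of the pfn: for an order-N mapping, the pfn's low N bits are
cleared and bit N-1 is set. The short standalone sketch below illustrates that
encoding and how the first pfn of the mapping is recovered; PFN_SHIFT,
NAPOT_BIT and the helper names are simplified stand-ins for the example, not
the kernel's _PAGE_PFN_SHIFT/pte_mknapot():

/*
 * Standalone sketch of the Svnapot pfn encoding described above.
 * Assumes a 4 KiB base page and an order-4 (64 KiB) NAPOT mapping.
 */
#include <stdio.h>
#include <stdint.h>

#define PFN_SHIFT	10			/* pfn field starts at bit 10 of the pte */
#define NAPOT_BIT	(1ULL << 63)		/* the "N" bit */

/* Encode a NAPOT mapping of 2^order base pages: clear the pfn's low
 * 'order' bits and set bit (order - 1), as pte_mknapot() does. */
static uint64_t mknapot(uint64_t pfn, unsigned int order)
{
	uint64_t mask = ~((1ULL << order) - 1);

	return (((pfn & mask) | (1ULL << (order - 1))) << PFN_SHIFT) | NAPOT_BIT;
}

/* Recover the first pfn of the mapping: clearing the lowest set bit of the
 * pfn field removes the size encoding (this mirrors the old pte_pfn()). */
static uint64_t napot_first_pfn(uint64_t pte)
{
	uint64_t pfn = (pte & ~NAPOT_BIT) >> PFN_SHIFT;

	return pfn & (pfn - 1);
}

int main(void)
{
	uint64_t pte = mknapot(0x12340ULL, 4);	/* 16 contiguous 4 KiB pages */

	printf("pte pfn field: 0x%llx, first pfn: 0x%llx\n",
	       (unsigned long long)((pte & ~NAPOT_BIT) >> PFN_SHIFT),
	       (unsigned long long)napot_first_pfn(pte));
	return 0;
}

Running it prints a pfn field of 0x12348 for a first pfn of 0x12340, i.e. the
0b1000 pattern a 64 KiB NAPOT pte carries in its pfn low bits in hardware.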
 arch/riscv/include/asm/pgtable-64.h | 11 ++++
 arch/riscv/include/asm/pgtable.h    | 91 ++++++++++++++++++++++++++---
 arch/riscv/mm/hugetlbpage.c         |  9 +--
 3 files changed, 96 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 0897dd99ab8d..cddbe426f618 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -104,6 +104,17 @@ enum napot_cont_order {
 #define napot_cont_mask(order)	(~(napot_cont_size(order) - 1UL))
 #define napot_pte_num(order)	BIT(order)
 
+static inline bool is_napot_order(unsigned int order)
+{
+	unsigned int napot_order;
+
+	for_each_napot_order(napot_order)
+		if (order == napot_order)
+			return true;
+
+	return false;
+}
+
 #ifdef CONFIG_RISCV_ISA_SVNAPOT
 #define HUGE_MAX_HSTATE		(2 + (NAPOT_ORDER_MAX - NAPOT_CONT_ORDER_BASE))
 #else
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 089f3c9f56a3..34c4c360d4ce 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -300,6 +300,8 @@ static inline unsigned long pte_napot(pte_t pte)
 	return pte_val(pte) & _PAGE_NAPOT;
 }
 
+#define pte_valid_napot(pte)	(pte_present(pte) && pte_napot(pte))
+
 static inline pte_t pte_mknapot(pte_t pte, unsigned int order)
 {
 	int pos = order - 1 + _PAGE_PFN_SHIFT;
@@ -309,6 +311,12 @@ static inline pte_t pte_mknapot(pte_t pte, unsigned int order)
 	return __pte((pte_val(pte) & napot_mask) | napot_bit | _PAGE_NAPOT);
 }
 
+/* pte at entry must *not* encode the mapping size in the pfn LSBs. */
+static inline pte_t pte_clear_napot(pte_t pte)
+{
+	return __pte(pte_val(pte) & ~_PAGE_NAPOT);
+}
+
 #else
 
 static __always_inline bool has_svnapot(void) { return false; }
@@ -318,17 +326,14 @@ static inline unsigned long pte_napot(pte_t pte)
 	return 0;
 }
 
+#define pte_valid_napot(pte)	false
+
 #endif /* CONFIG_RISCV_ISA_SVNAPOT */
 
 /* Yields the page frame number (PFN) of a page table entry */
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	unsigned long res = __page_val_to_pfn(pte_val(pte));
-
-	if (has_svnapot() && pte_napot(pte))
-		res = res & (res - 1UL);
-
-	return res;
+	return __page_val_to_pfn(pte_val(pte));
 }
 
 #define pte_page(x)	pfn_to_page(pte_pfn(x))
@@ -553,8 +558,13 @@ static inline void __set_pte_at(struct mm_struct *mm, pte_t *ptep, pte_t pteval)
 
 #define PFN_PTE_SHIFT		_PAGE_PFN_SHIFT
 
-static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pteval, unsigned int nr)
+static inline pte_t __ptep_get(pte_t *ptep)
+{
+	return READ_ONCE(*ptep);
+}
+
+static inline void __set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pteval, unsigned int nr)
 {
 	page_table_check_ptes_set(mm, ptep, pteval, nr);
 
@@ -563,10 +573,13 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		if (--nr == 0)
 			break;
 		ptep++;
+
+		if (unlikely(pte_valid_napot(pteval)))
+			continue;
+
 		pte_val(pteval) += 1 << _PAGE_PFN_SHIFT;
 	}
 }
-#define set_ptes set_ptes
 
 static inline void pte_clear(struct mm_struct *mm,
 		unsigned long addr, pte_t *ptep)
@@ -621,6 +634,66 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 	return ptep_test_and_clear_young(vma, address, ptep);
 }
 
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pteval, unsigned int nr)
+{
+	if (unlikely(pte_valid_napot(pteval))) {
+		unsigned int order = ilog2(nr);
+
+		if (!is_napot_order(order)) {
+			/*
+			 * Something's weird, we are given a NAPOT pte but the
+			 * size of the mapping is not a known NAPOT mapping
+			 * size, so clear the NAPOT bit and map this without
+			 * NAPOT support: core mm only manipulates pte with the
+			 * real pfn so we know the pte is valid without the N
+			 * bit.
+			 */
+			pr_err("Incorrect NAPOT mapping, resetting.\n");
+			pteval = pte_clear_napot(pteval);
+		} else {
+			/*
+			 * NAPOT ptes that arrive here only have the N bit set
+			 * and their pfn does not contain the mapping size, so
+			 * set that here.
+			 */
+			pteval = pte_mknapot(pteval, order);
+		}
+	}
+
+	__set_ptes(mm, addr, ptep, pteval, nr);
+}
+#define set_ptes set_ptes
+
+static inline pte_t ptep_get(pte_t *ptep)
+{
+	pte_t pte = __ptep_get(ptep);
+
+	/*
+	 * The pte we load has the N bit set and the size of the mapping in
+	 * the pfn LSBs: keep the N bit and replace the mapping size with
+	 * the *real* pfn since the core mm code expects to find it there.
+	 * The mapping size will be reset just before being written to the
+	 * page table in set_ptes().
+	 */
+	if (unlikely(pte_valid_napot(pte))) {
+		unsigned int order = napot_cont_order(pte);
+		int pos = order - 1 + _PAGE_PFN_SHIFT;
+		unsigned long napot_mask = ~GENMASK(pos, _PAGE_PFN_SHIFT);
+		pte_t *orig_ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * napot_pte_num(order));
+
+		pte = __pte((pte_val(pte) & napot_mask) + ((ptep - orig_ptep) << _PAGE_PFN_SHIFT));
+	}
+
+	return pte;
+}
+#define ptep_get ptep_get
+#else
+#define set_ptes __set_ptes
+#define ptep_get __ptep_get
+#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+
 #define pgprot_nx pgprot_nx
 static inline pgprot_t pgprot_nx(pgprot_t _prot)
 {
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 6b09cd1ef41c..59ed26ce6857 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -256,8 +256,7 @@ void set_huge_pte_at(struct mm_struct *mm,
 
 	clear_flush(mm, addr, ptep, pgsize, pte_num);
 
-	for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
-		set_pte_at(mm, addr, ptep, pte);
+	set_ptes(mm, addr, ptep, pte, pte_num);
 }
 
 int huge_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -284,8 +283,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	if (pte_young(orig_pte))
 		pte = pte_mkyoung(pte);
 
-	for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
-		set_pte_at(mm, addr, ptep, pte);
+	set_ptes(mm, addr, ptep, pte, pte_num);
 
 	return true;
 }
@@ -325,8 +323,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 
 	orig_pte = pte_wrprotect(orig_pte);
 
-	for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
-		set_pte_at(mm, addr, ptep, orig_pte);
+	set_ptes(mm, addr, ptep, orig_pte, pte_num);
 }
 
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
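
As a side note on the ptep_get() arithmetic above: within a NAPOT group every
hardware pte holds the same encoded pfn, so the real pfn of a given slot is
rebuilt by masking off the size bits and adding the slot's index within the
group. Below is a simplified, self-contained model of that round trip (plain
uint64_t entries instead of pte_t, stand-in constants, no real page table),
not the kernel code itself:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PFN_SHIFT	10
#define NAPOT_BIT	(1ULL << 63)

/* Rebuild the real pfn for one slot of a NAPOT group, mirroring ptep_get():
 * mask off the size encoding in the pfn low bits, then add the slot index. */
static uint64_t model_ptep_get(const uint64_t *ptep, const uint64_t *group_base,
			       unsigned int order)
{
	uint64_t pte = *ptep;
	uint64_t napot_mask = ~(((1ULL << order) - 1) << PFN_SHIFT);
	ptrdiff_t idx = ptep - group_base;	/* which slot of the group this is */

	return (pte & napot_mask) + ((uint64_t)idx << PFN_SHIFT);
}

int main(void)
{
	unsigned int order = 4;		/* 2^4 base pages = 64 KiB mapping */
	/* The hardware pte: base pfn 0x12340 with the size encoded as 0b1000
	 * in its low bits, plus the N bit. All 16 slots hold the same value. */
	uint64_t hw_pte = ((0x12340ULL | (1ULL << (order - 1))) << PFN_SHIFT) | NAPOT_BIT;
	uint64_t group[16];

	for (int i = 0; i < 16; i++)
		group[i] = hw_pte;

	for (int i = 0; i < 16; i++) {
		uint64_t pte = model_ptep_get(&group[i], group, order);

		printf("slot %2d: real pfn 0x%llx\n", i,
		       (unsigned long long)((pte & ~NAPOT_BIT) >> PFN_SHIFT));
	}
	return 0;
}

Each of the 16 slots prints a consecutive pfn starting at 0x12340, which is
what core mm expects to read back while the page table itself keeps the single
NAPOT-encoded value.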