From patchwork Mon Oct 30 13:30:25 2023
From: Alexandre Ghiti
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Mayuresh Chitale, Vincent Chen, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Samuel Holland, Lad Prabhakar
Cc: Alexandre Ghiti, Andrew Jones, Samuel Holland, Lad Prabhakar
Subject: [PATCH v6 1/4] riscv: Improve tlb_flush()
Date: Mon, 30 Oct 2023 14:30:25 +0100
Message-Id: <20231030133027.19542-2-alexghiti@rivosinc.com>
In-Reply-To: <20231030133027.19542-1-alexghiti@rivosinc.com>
References: <20231030133027.19542-1-alexghiti@rivosinc.com>

For now, tlb_flush() simply calls flush_tlb_mm(), which results in a
flush of the whole TLB. So let's use the mmu_gather fields to provide a
more fine-grained flush of the TLB.
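For reference, the stride passed as the last argument below comes from
the generic mmu_gather code; quoted from memory (so double-check against
include/asm-generic/tlb.h), those helpers look roughly like this:

    static inline unsigned int tlb_get_unmap_shift(struct mmu_gather *tlb)
    {
            /* The core mm records which page-table levels it cleared. */
            if (tlb->cleared_ptes)
                    return PAGE_SHIFT;
            if (tlb->cleared_pmds)
                    return PMD_SHIFT;
            if (tlb->cleared_puds)
                    return PUD_SHIFT;
            if (tlb->cleared_p4ds)
                    return P4D_SHIFT;

            return PAGE_SHIFT;
    }

    static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
    {
            return 1UL << tlb_get_unmap_shift(tlb);
    }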
Signed-off-by: Alexandre Ghiti
Reviewed-by: Andrew Jones
Reviewed-by: Samuel Holland
Tested-by: Lad Prabhakar # On RZ/Five SMARC
---
 arch/riscv/include/asm/tlb.h      | 8 +++++++-
 arch/riscv/include/asm/tlbflush.h | 3 +++
 arch/riscv/mm/tlbflush.c          | 7 +++++++
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
index 120bcf2ed8a8..1eb5682b2af6 100644
--- a/arch/riscv/include/asm/tlb.h
+++ b/arch/riscv/include/asm/tlb.h
@@ -15,7 +15,13 @@ static void tlb_flush(struct mmu_gather *tlb);
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	flush_tlb_mm(tlb->mm);
+#ifdef CONFIG_MMU
+	if (tlb->fullmm || tlb->need_flush_all)
+		flush_tlb_mm(tlb->mm);
+	else
+		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end,
+				   tlb_get_unmap_size(tlb));
+#endif
 }
 
 #endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index a09196f8de68..f5c4fb0ae642 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -32,6 +32,8 @@ static inline void local_flush_tlb_page(unsigned long addr)
 #if defined(CONFIG_SMP) && defined(CONFIG_MMU)
 void flush_tlb_all(void);
 void flush_tlb_mm(struct mm_struct *mm);
+void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+			unsigned long end, unsigned int page_size);
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end);
@@ -52,6 +54,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 }
 
 #define flush_tlb_mm(mm) flush_tlb_all()
+#define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
 #endif /* !CONFIG_SMP || !CONFIG_MMU */
 
 /* Flush a range of kernel pages */
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 77be59aadc73..fa03289853d8 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -132,6 +132,13 @@ void flush_tlb_mm(struct mm_struct *mm)
 	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
+void flush_tlb_mm_range(struct mm_struct *mm,
+			unsigned long start, unsigned long end,
+			unsigned int page_size)
+{
+	__flush_tlb_range(mm, start, end - start, page_size);
+}
+
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
 	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
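To illustrate who ends up in the new fine-grained path, a sketch of one
call chain; the generic-mm intermediaries are listed from memory and
simplified:

    munmap(addr, len)
      -> unmap_region()
        -> tlb_finish_mmu()
          -> tlb_flush_mmu()
            -> tlb_flush(tlb)                      /* the arch hook above */
              -> flush_tlb_mm_range(mm, tlb->start, tlb->end,
                                    tlb_get_unmap_size(tlb))
                -> __flush_tlb_range(mm, start, end - start, stride)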
From patchwork Mon Oct 30 13:30:26 2023
V" , Andrew Morton , Nick Piggin , Peter Zijlstra , Mayuresh Chitale , Vincent Chen , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Samuel Holland , Lad Prabhakar Cc: Alexandre Ghiti , Samuel Holland , Lad Prabhakar Subject: [PATCH v6 2/4] riscv: Improve flush_tlb_range() for hugetlb pages Date: Mon, 30 Oct 2023 14:30:26 +0100 Message-Id: <20231030133027.19542-3-alexghiti@rivosinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231030133027.19542-1-alexghiti@rivosinc.com> References: <20231030133027.19542-1-alexghiti@rivosinc.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231030_063307_278895_CB380349 X-CRM114-Status: GOOD ( 15.23 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org flush_tlb_range() uses a fixed stride of PAGE_SIZE and in its current form, when a hugetlb mapping needs to be flushed, flush_tlb_range() flushes the whole tlb: so set a stride of the size of the hugetlb mapping in order to only flush the hugetlb mapping. However, if the hugepage is a NAPOT region, all PTEs that constitute this mapping must be invalidated, so the stride size must actually be the size of the PTE. Note that THPs are directly handled by flush_pmd_tlb_range(). Signed-off-by: Alexandre Ghiti Reviewed-by: Samuel Holland Tested-by: Lad Prabhakar # On RZ/Five SMARC --- arch/riscv/mm/tlbflush.c | 29 ++++++++++++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-) diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c index fa03289853d8..b6d712a82306 100644 --- a/arch/riscv/mm/tlbflush.c +++ b/arch/riscv/mm/tlbflush.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include @@ -147,7 +148,33 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr) void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end) { - __flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE); + unsigned long stride_size; + + if (!is_vm_hugetlb_page(vma)) { + stride_size = PAGE_SIZE; + } else { + stride_size = huge_page_size(hstate_vma(vma)); + + /* + * As stated in the privileged specification, every PTE in a + * NAPOT region must be invalidated, so reset the stride in that + * case. 
Signed-off-by: Alexandre Ghiti
Reviewed-by: Samuel Holland
Tested-by: Lad Prabhakar # On RZ/Five SMARC
---
 arch/riscv/mm/tlbflush.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index fa03289853d8..b6d712a82306 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -3,6 +3,7 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/sched.h>
+#include <linux/hugetlb.h>
 #include <asm/sbi.h>
 #include <asm/mmu_context.h>
 
@@ -147,7 +148,33 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	unsigned long stride_size;
+
+	if (!is_vm_hugetlb_page(vma)) {
+		stride_size = PAGE_SIZE;
+	} else {
+		stride_size = huge_page_size(hstate_vma(vma));
+
+		/*
+		 * As stated in the privileged specification, every PTE in a
+		 * NAPOT region must be invalidated, so reset the stride in that
+		 * case.
+		 */
+		if (has_svnapot()) {
+			if (stride_size >= PGDIR_SIZE)
+				stride_size = PGDIR_SIZE;
+			else if (stride_size >= P4D_SIZE)
+				stride_size = P4D_SIZE;
+			else if (stride_size >= PUD_SIZE)
+				stride_size = PUD_SIZE;
+			else if (stride_size >= PMD_SIZE)
+				stride_size = PMD_SIZE;
+			else
+				stride_size = PAGE_SIZE;
+		}
+	}
+
+	__flush_tlb_range(vma->vm_mm, start, end - start, stride_size);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,

From patchwork Mon Oct 30 13:30:27 2023
From: Alexandre Ghiti
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Mayuresh Chitale, Vincent Chen, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Samuel Holland, Lad Prabhakar
Cc: Alexandre Ghiti, Andrew Jones, Lad Prabhakar, Samuel Holland
Subject: [PATCH v6 3/4] riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb
Date: Mon, 30 Oct 2023 14:30:27 +0100
Message-Id: <20231030133027.19542-4-alexghiti@rivosinc.com>
In-Reply-To: <20231030133027.19542-1-alexghiti@rivosinc.com>
References: <20231030133027.19542-1-alexghiti@rivosinc.com>

Currently, when the range to flush covers more than one page (a 4K page
or a hugepage), __flush_tlb_range() flushes the whole TLB. Flushing the
whole TLB comes at a greater cost than flushing a single entry, so flush
single entries up to a certain threshold, chosen so that:
threshold * cost of flushing a single entry < cost of flushing the
whole TLB.
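To put numbers on the inequality: the patch below picks a threshold of
64 entries, so with a 4 KiB stride a 128 KiB flush (32 entries) issues
32 page-level sfence.vma instructions, while a 1 MiB flush (256 entries)
falls back to a single full flush. A minimal sketch of the decision
(flush_whole_tlb() is a made-up name for illustration):

    static bool flush_whole_tlb(unsigned long size, unsigned long stride)
    {
            unsigned long nr_ptes_in_range = DIV_ROUND_UP(size, stride);

            return nr_ptes_in_range > 64;   /* tlb_flush_all_threshold */
    }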
Co-developed-by: Mayuresh Chitale
Signed-off-by: Mayuresh Chitale
Signed-off-by: Alexandre Ghiti
Reviewed-by: Andrew Jones
Tested-by: Lad Prabhakar # On RZ/Five SMARC
Reviewed-by: Samuel Holland
Tested-by: Samuel Holland
---
 arch/riscv/include/asm/sbi.h      |   3 -
 arch/riscv/include/asm/tlbflush.h |   3 +
 arch/riscv/kernel/sbi.c           |  32 +++------
 arch/riscv/mm/tlbflush.c          | 115 +++++++++++++++---------------
 4 files changed, 72 insertions(+), 81 deletions(-)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 12dfda6bb924..0892f4421bc4 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -280,9 +280,6 @@ void sbi_set_timer(uint64_t stime_value);
 void sbi_shutdown(void);
 void sbi_send_ipi(unsigned int cpu);
 int sbi_remote_fence_i(const struct cpumask *cpu_mask);
-int sbi_remote_sfence_vma(const struct cpumask *cpu_mask,
-			  unsigned long start,
-			  unsigned long size);
 
 int sbi_remote_sfence_vma_asid(const struct cpumask *cpu_mask,
 			       unsigned long start,
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index f5c4fb0ae642..170a49c531c6 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -11,6 +11,9 @@
 #include <asm/smp.h>
 #include <asm/errata_list.h>
 
+#define FLUSH_TLB_MAX_SIZE	((unsigned long)-1)
+#define FLUSH_TLB_NO_ASID	((unsigned long)-1)
+
 #ifdef CONFIG_MMU
 extern unsigned long asid_mask;
 
diff --git a/arch/riscv/kernel/sbi.c b/arch/riscv/kernel/sbi.c
index c672c8ba9a2a..5a62ed1da453 100644
--- a/arch/riscv/kernel/sbi.c
+++ b/arch/riscv/kernel/sbi.c
@@ -11,6 +11,7 @@
 #include <linux/reboot.h>
 #include <asm/sbi.h>
 #include <asm/smp.h>
+#include <asm/tlbflush.h>
 
 /* default SBI version is 0.1 */
 unsigned long sbi_spec_version __ro_after_init = SBI_SPEC_VERSION_DEFAULT;
@@ -376,32 +377,15 @@ int sbi_remote_fence_i(const struct cpumask *cpu_mask)
 }
 EXPORT_SYMBOL(sbi_remote_fence_i);
 
-/**
- * sbi_remote_sfence_vma() - Execute SFENCE.VMA instructions on given remote
- *			     harts for the specified virtual address range.
- * @cpu_mask: A cpu mask containing all the target harts.
- * @start: Start of the virtual address
- * @size: Total size of the virtual address range.
- *
- * Return: 0 on success, appropriate linux error code otherwise.
- */
-int sbi_remote_sfence_vma(const struct cpumask *cpu_mask,
-			  unsigned long start,
-			  unsigned long size)
-{
-	return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
-			    cpu_mask, start, size, 0, 0);
-}
-EXPORT_SYMBOL(sbi_remote_sfence_vma);
-
 /**
  * sbi_remote_sfence_vma_asid() - Execute SFENCE.VMA instructions on given
- * remote harts for a virtual address range belonging to a specific ASID.
+ * remote harts for a virtual address range belonging to a specific ASID or not.
  *
  * @cpu_mask: A cpu mask containing all the target harts.
  * @start: Start of the virtual address
  * @size: Total size of the virtual address range.
- * @asid: The value of address space identifier (ASID).
+ * @asid: The value of address space identifier (ASID), or FLUSH_TLB_NO_ASID
+ *	  for flushing all address spaces.
 *
 * Return: 0 on success, appropriate linux error code otherwise.
 */
@@ -410,8 +394,12 @@ int sbi_remote_sfence_vma_asid(const struct cpumask *cpu_mask,
 			       unsigned long size,
 			       unsigned long asid)
 {
-	return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
-			    cpu_mask, start, size, asid, 0);
+	if (asid == FLUSH_TLB_NO_ASID)
+		return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
+				    cpu_mask, start, size, 0, 0);
+	else
+		return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
+				    cpu_mask, start, size, asid, 0);
 }
 EXPORT_SYMBOL(sbi_remote_sfence_vma_asid);
 
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index b6d712a82306..e46fefc70927 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -9,28 +9,50 @@
 
 static inline void local_flush_tlb_all_asid(unsigned long asid)
 {
-	__asm__ __volatile__ ("sfence.vma x0, %0"
-			:
-			: "r" (asid)
-			: "memory");
+	if (asid != FLUSH_TLB_NO_ASID)
+		__asm__ __volatile__ ("sfence.vma x0, %0"
+				:
+				: "r" (asid)
+				: "memory");
+	else
+		local_flush_tlb_all();
 }
 
 static inline void local_flush_tlb_page_asid(unsigned long addr,
 		unsigned long asid)
 {
-	__asm__ __volatile__ ("sfence.vma %0, %1"
-			:
-			: "r" (addr), "r" (asid)
-			: "memory");
+	if (asid != FLUSH_TLB_NO_ASID)
+		__asm__ __volatile__ ("sfence.vma %0, %1"
+				:
+				: "r" (addr), "r" (asid)
+				: "memory");
+	else
+		local_flush_tlb_page(addr);
 }
 
-static inline void local_flush_tlb_range(unsigned long start,
-		unsigned long size, unsigned long stride)
+/*
+ * Flush entire TLB if number of entries to be flushed is greater
+ * than the threshold below.
+ */
+static unsigned long tlb_flush_all_threshold __read_mostly = 64;
+
+static void local_flush_tlb_range_threshold_asid(unsigned long start,
+						 unsigned long size,
+						 unsigned long stride,
+						 unsigned long asid)
 {
-	if (size <= stride)
-		local_flush_tlb_page(start);
-	else
-		local_flush_tlb_all();
+	unsigned long nr_ptes_in_range = DIV_ROUND_UP(size, stride);
+	int i;
+
+	if (nr_ptes_in_range > tlb_flush_all_threshold) {
+		local_flush_tlb_all_asid(asid);
+		return;
+	}
+
+	for (i = 0; i < nr_ptes_in_range; ++i) {
+		local_flush_tlb_page_asid(start, asid);
+		start += stride;
+	}
 }
 
 static inline void local_flush_tlb_range_asid(unsigned long start,
@@ -38,8 +60,10 @@ static inline void local_flush_tlb_range_asid(unsigned long start,
 {
 	if (size <= stride)
 		local_flush_tlb_page_asid(start, asid);
-	else
+	else if (size == FLUSH_TLB_MAX_SIZE)
 		local_flush_tlb_all_asid(asid);
+	else
+		local_flush_tlb_range_threshold_asid(start, size, stride, asid);
 }
 
 static void __ipi_flush_tlb_all(void *info)
@@ -52,7 +76,7 @@ void flush_tlb_all(void)
 	if (riscv_use_ipi_for_rfence())
 		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
 	else
-		sbi_remote_sfence_vma(NULL, 0, -1);
+		sbi_remote_sfence_vma_asid(NULL, 0, FLUSH_TLB_MAX_SIZE, FLUSH_TLB_NO_ASID);
 }
 
 struct flush_tlb_range_data {
@@ -69,18 +93,12 @@ static void __ipi_flush_tlb_range_asid(void *info)
 	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
 }
 
-static void __ipi_flush_tlb_range(void *info)
-{
-	struct flush_tlb_range_data *d = info;
-
-	local_flush_tlb_range(d->start, d->size, d->stride);
-}
-
 static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 			      unsigned long size, unsigned long stride)
 {
 	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
+	unsigned long asid = FLUSH_TLB_NO_ASID;
 	unsigned int cpuid;
 	bool broadcast;
 
@@ -90,39 +108,24 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 	cpuid = get_cpu();
 	/* check if the tlbflush needs to be sent to other CPUs */
 	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
-	if (static_branch_unlikely(&use_asid_allocator)) {
-		unsigned long asid = atomic_long_read(&mm->context.id) & asid_mask;
-
-		if (broadcast) {
-			if (riscv_use_ipi_for_rfence()) {
-				ftd.asid = asid;
-				ftd.start = start;
-				ftd.size = size;
-				ftd.stride = stride;
-				on_each_cpu_mask(cmask,
-						 __ipi_flush_tlb_range_asid,
-						 &ftd, 1);
-			} else
-				sbi_remote_sfence_vma_asid(cmask,
-							   start, size, asid);
-		} else {
-			local_flush_tlb_range_asid(start, size, stride, asid);
-		}
+
+	if (static_branch_unlikely(&use_asid_allocator))
+		asid = atomic_long_read(&mm->context.id) & asid_mask;
+
+	if (broadcast) {
+		if (riscv_use_ipi_for_rfence()) {
+			ftd.asid = asid;
+			ftd.start = start;
+			ftd.size = size;
+			ftd.stride = stride;
+			on_each_cpu_mask(cmask,
+					 __ipi_flush_tlb_range_asid,
+					 &ftd, 1);
+		} else
+			sbi_remote_sfence_vma_asid(cmask,
+						   start, size, asid);
 	} else {
-		if (broadcast) {
-			if (riscv_use_ipi_for_rfence()) {
-				ftd.asid = 0;
-				ftd.start = start;
-				ftd.size = size;
-				ftd.stride = stride;
-				on_each_cpu_mask(cmask,
-						 __ipi_flush_tlb_range,
-						 &ftd, 1);
-			} else
-				sbi_remote_sfence_vma(cmask, start, size);
-		} else {
-			local_flush_tlb_range(start, size, stride);
-		}
+		local_flush_tlb_range_asid(start, size, stride, asid);
 	}
 
 	put_cpu();
@@ -130,7 +133,7 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_mm_range(struct mm_struct *mm,

From patchwork Mon Oct 30 13:30:28 2023
From: Alexandre Ghiti
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Mayuresh Chitale, Vincent Chen, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Samuel Holland, Lad Prabhakar
Cc: Alexandre Ghiti, Andrew Jones, Lad Prabhakar, Samuel Holland
Subject: [PATCH v6 4/4] riscv: Improve flush_tlb_kernel_range()
Date: Mon, 30 Oct 2023 14:30:28 +0100
Message-Id: <20231030133027.19542-5-alexghiti@rivosinc.com>
In-Reply-To: <20231030133027.19542-1-alexghiti@rivosinc.com>
References: <20231030133027.19542-1-alexghiti@rivosinc.com>

This function used to simply flush the whole TLB of all harts; be more
subtle and try to flush only the requested range.
The problem is that we can only use PAGE_SIZE as the stride, since we
don't know the size of the underlying mapping, so this function only
improves on a full flush when the size of the region to flush is smaller
than threshold * PAGE_SIZE.
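As a worked example (a sketch; the vmalloc teardown path is only an
assumed caller):

    /*
     * After vunmap() of a 2 MiB area,
     *
     *     flush_tlb_kernel_range(start, start + SZ_2M);
     *
     * becomes __flush_tlb_range(NULL, start, SZ_2M, PAGE_SIZE).
     * 2 MiB / 4 KiB = 512 entries > threshold (64), so this still
     * performs a full flush on every online hart with
     * FLUSH_TLB_NO_ASID; ranges up to threshold * PAGE_SIZE = 256 KiB
     * are flushed entry by entry instead.
     */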
Signed-off-by: Alexandre Ghiti
Reviewed-by: Andrew Jones
Tested-by: Lad Prabhakar # On RZ/Five SMARC
Reviewed-by: Samuel Holland
Tested-by: Samuel Holland
---
 arch/riscv/include/asm/tlbflush.h | 11 +++++-----
 arch/riscv/mm/tlbflush.c          | 34 ++++++++++++++++++++++---------
 2 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 170a49c531c6..8f3418c5f172 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -40,6 +40,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end);
+void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
@@ -56,15 +57,15 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	local_flush_tlb_all();
 }
 
-#define flush_tlb_mm(mm) flush_tlb_all()
-#define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
-#endif /* !CONFIG_SMP || !CONFIG_MMU */
-
 /* Flush a range of kernel pages */
 static inline void flush_tlb_kernel_range(unsigned long start,
 	unsigned long end)
 {
-	flush_tlb_all();
+	local_flush_tlb_all();
 }
 
+#define flush_tlb_mm(mm) flush_tlb_all()
+#define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
+#endif /* !CONFIG_SMP || !CONFIG_MMU */
+
 #endif /* _ASM_RISCV_TLBFLUSH_H */
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index e46fefc70927..e6659d7368b3 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -97,20 +97,27 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 			      unsigned long size, unsigned long stride)
 {
 	struct flush_tlb_range_data ftd;
-	struct cpumask *cmask = mm_cpumask(mm);
+	const struct cpumask *cmask;
 	unsigned long asid = FLUSH_TLB_NO_ASID;
-	unsigned int cpuid;
 	bool broadcast;
 
-	if (cpumask_empty(cmask))
-		return;
+	if (mm) {
+		unsigned int cpuid;
+
+		cmask = mm_cpumask(mm);
+		if (cpumask_empty(cmask))
+			return;
 
-	cpuid = get_cpu();
-	/* check if the tlbflush needs to be sent to other CPUs */
-	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
+		cpuid = get_cpu();
+		/* check if the tlbflush needs to be sent to other CPUs */
+		broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
 
-	if (static_branch_unlikely(&use_asid_allocator))
-		asid = atomic_long_read(&mm->context.id) & asid_mask;
+		if (static_branch_unlikely(&use_asid_allocator))
+			asid = atomic_long_read(&mm->context.id) & asid_mask;
+	} else {
+		cmask = cpu_online_mask;
+		broadcast = true;
+	}
 
 	if (broadcast) {
 		if (riscv_use_ipi_for_rfence()) {
@@ -128,7 +135,8 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 		local_flush_tlb_range_asid(start, size, stride, asid);
 	}
 
-	put_cpu();
+	if (mm)
+		put_cpu();
 }
 
 void flush_tlb_mm(struct mm_struct *mm)
@@ -179,6 +187,12 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 	__flush_tlb_range(vma->vm_mm, start, end - start, stride_size);
 }
+
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	__flush_tlb_range(NULL, start, end - start, PAGE_SIZE);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)