From patchwork Thu Dec 7 15:03:44 2023
From: Alexandre Ghiti
To: Catalin Marinas, Will Deacon, Thomas Bogendoerfer, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton, Ved Shanbhogue, Matt Evans, Dylan Jhong, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH RFC/RFT 0/4] Remove preventive sfence.vma
Date: Thu, 7 Dec 2023 16:03:44 +0100
Message-Id: <20231207150348.82096-1-alexghiti@rivosinc.com>

In RISC-V, after a new mapping is established, a sfence.vma needs to be
emitted for different reasons:

- if the uarch caches invalid entries, we need to invalidate them,
  otherwise we would trap on such an invalid entry,
- if the uarch does not cache invalid entries, a reordered access could
  fail to see the new mapping and then trap (sfence.vma acts as a fence).

We can actually avoid emitting those (mostly) useless and costly
sfence.vma instructions by handling the traps instead:

- for new kernel mappings: only vmalloc mappings need to be taken care
  of; other new mappings are rare and already emit the required
  sfence.vma if needed. That must be achieved very early in the
  exception path, as explained in patch 1, and this also fixes our
  fragile way of dealing with vmalloc faults.
- for new user mappings: these can be handled in the page fault path,
  as done in patch 3 (a rough sketch of the idea follows below).

Patch 2 is certainly a TEMP patch which allows detecting at runtime
whether a uarch caches invalid TLB entries.

Patch 4 is a TEMP patch which exposes through debugfs counters for the
different sfence.vma that are emitted, which can be used for
benchmarking (a rough sketch appears at the end of this cover letter).

On our uarch, which does not cache invalid entries, and with a 6.5
kernel, the gains are measurable:

* Kernel boot:             6%
* ltp - mmapstress01:      8%
* lmbench - lat_pagefault: 20%
* lmbench - lat_mmap:      5%

On uarchs that cache invalid entries, the results are more mixed and
need to be explored more thoroughly (if anyone is interested!): this
can be explained by the extra page faults, which, depending on "how
much" the uarch caches invalid entries, could kill the benefits of
removing the preventive sfence.vma.

Ved Shanbhogue has prepared a new extension to be used by uarchs that
do not cache invalid entries, which will certainly be used instead of
patch 2.

Thanks to Ved and Matt Evans for triggering the discussion that led to
this patchset!

That's an RFC, so please don't mind the checkpatch warnings and dirty
comments. It applies on 6.6.
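For illustration only (this is not code from the series, and the helper
name below is made up): the idea behind patch 3 is roughly the
following. Once a new user mapping no longer triggers a preventive
sfence.vma, a uarch that cached the old invalid entry may still fault
on the first access; the fault handler can then notice that the PTE is
actually valid and pay for a single local flush only in that case.

/*
 * Hypothetical sketch, not the actual patch: the function name and the
 * exact hook point are illustrative. The real series plugs into the
 * generic page fault path (see mm/memory.c and include/linux/pgtable.h
 * in the diffstat below).
 */
#include <linux/mm.h>
#include <linux/pgtable.h>
#include <asm/tlbflush.h>

static vm_fault_t handle_possible_stale_tlb(pte_t *ptep, unsigned long addr)
{
	pte_t pte = ptep_get(ptep);

	if (pte_present(pte)) {
		/*
		 * The mapping is already valid: the trap came from a stale
		 * (possibly cached invalid) translation. Flush only this
		 * address, and only locally, instead of fencing after every
		 * PTE update.
		 */
		local_flush_tlb_page(addr);
		return VM_FAULT_NOPAGE;
	}

	/* Genuinely missing mapping: let the normal fault path handle it. */
	return 0;
}

The point is that the flush is now paid only when a trap actually
occurs, instead of unconditionally after every new mapping.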
Any feedback, tests or relevant benchmarks are welcome :)

Alexandre Ghiti (4):
  riscv: Stop emitting preventive sfence.vma for new vmalloc mappings
  riscv: Add a runtime detection of invalid TLB entries caching
  riscv: Stop emitting preventive sfence.vma for new userspace mappings
  TEMP: riscv: Add debugfs interface to retrieve #sfence.vma

 arch/arm64/include/asm/pgtable.h              |   2 +-
 arch/mips/include/asm/pgtable.h               |   6 +-
 arch/powerpc/include/asm/book3s/64/tlbflush.h |   8 +-
 arch/riscv/include/asm/cacheflush.h           |  19 ++-
 arch/riscv/include/asm/pgtable.h              |  45 ++++---
 arch/riscv/include/asm/thread_info.h          |   5 +
 arch/riscv/include/asm/tlbflush.h             |   4 +
 arch/riscv/kernel/asm-offsets.c               |   5 +
 arch/riscv/kernel/entry.S                     |  94 +++++++++++++
 arch/riscv/kernel/sbi.c                       |  12 ++
 arch/riscv/mm/init.c                          | 126 ++++++++++++++++++
 arch/riscv/mm/tlbflush.c                      |  17 +++
 include/linux/pgtable.h                       |   8 +-
 mm/memory.c                                   |  12 +-
 14 files changed, 331 insertions(+), 32 deletions(-)
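As a rough illustration of what the TEMP debugfs patch (patch 4) could
look like (names and file layout below are invented for the example,
not taken from the series), a counter bumped from the sfence.vma
emission paths and exposed read-only under debugfs is enough for
benchmarking:

/*
 * Hypothetical sketch only: the real TEMP patch may expose several
 * counters (one per kind of sfence.vma); names here are made up.
 */
#include <linux/debugfs.h>
#include <linux/init.h>
#include <linux/types.h>

static u64 nr_sfence_vma;	/* approximate, not atomic: statistics only */

/* Would be called from the sfence.vma emission paths, e.g. mm/tlbflush.c. */
static inline void note_sfence_vma(void)
{
	nr_sfence_vma++;
}

static int __init sfence_vma_debugfs_init(void)
{
	/* Readable at /sys/kernel/debug/nr_sfence_vma */
	debugfs_create_u64("nr_sfence_vma", 0444, NULL, &nr_sfence_vma);
	return 0;
}
late_initcall(sfence_vma_debugfs_init);

Comparing the counter before and after a workload (e.g. the lmbench
runs above) gives the number of fences saved by the series.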