From patchwork Fri May 19 00:52:26 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13247560
Date: Fri, 19 May 2023 00:52:26 +0000
In-Reply-To: <20230519005231.3027912-1-rananta@google.com>
References: <20230519005231.3027912-1-rananta@google.com>
Message-ID: <20230519005231.3027912-2-rananta@google.com>
Subject: [PATCH v4 1/6] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Catalin Marinas Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Currently, the core TLB flush functionality of __flush_tlb_range() hardcodes vae1is (and variants) for the flush operation. In the upcoming patches, the KVM code reuses this core algorithm with ipas2e1is for range based TLB invalidations based on the IPA. Hence, extract the core flush functionality of __flush_tlb_range() into its own macro that accepts an 'op' argument to pass any TLBI operation, such that other callers (KVM) can benefit. No functional changes intended. Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Catalin Marinas --- arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++--------------- 1 file changed, 55 insertions(+), 53 deletions(-) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index 412a3b9a3c25d..4775378b6da1b 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, */ #define MAX_TLBI_OPS PTRS_PER_PTE +/* When the CPU does not support TLB range operations, flush the TLB + * entries one by one at the granularity of 'stride'. If the TLB + * range ops are supported, then: + * + * 1. If 'pages' is odd, flush the first page through non-range + * operations; + * + * 2. For remaining pages: the minimum range granularity is decided + * by 'scale', so multiple range TLBI operations may be required. + * Start from scale = 0, flush the corresponding number of pages + * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it + * until no pages left. + * + * Note that certain ranges can be represented by either num = 31 and + * scale or num = 0 and scale + 1. The loop below favours the latter + * since num is limited to 30 by the __TLBI_RANGE_NUM() macro. + */ +#define __flush_tlb_range_op(op, start, pages, stride, \ + asid, tlb_level, tlbi_user) do { \ + int num = 0; \ + int scale = 0; \ + unsigned long addr; \ + \ + while (pages > 0) { \ + if (!system_supports_tlb_range() || \ + pages % 2 == 1) { \ + addr = __TLBI_VADDR(start, asid); \ + __tlbi_level(op, addr, tlb_level); \ + if (tlbi_user) \ + __tlbi_user_level(op, addr, tlb_level); \ + start += stride; \ + pages -= stride >> PAGE_SHIFT; \ + continue; \ + } \ + \ + num = __TLBI_RANGE_NUM(pages, scale); \ + if (num >= 0) { \ + addr = __TLBI_VADDR_RANGE(start, asid, scale, \ + num, tlb_level); \ + __tlbi(r##op, addr); \ + if (tlbi_user) \ + __tlbi_user(r##op, addr); \ + start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ + pages -= __TLBI_RANGE_PAGES(num, scale); \ + } \ + scale++; \ + } \ +} while (0) + static inline void __flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end, unsigned long stride, bool last_level, int tlb_level) { - int num = 0; - int scale = 0; - unsigned long asid, addr, pages; + unsigned long asid, pages; start = round_down(start, stride); end = round_up(end, stride); @@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma, dsb(ishst); asid = ASID(vma->vm_mm); - /* - * When the CPU does not support TLB range operations, flush the TLB - * entries one by one at the granularity of 'stride'. If the TLB - * range ops are supported, then: - * - * 1. If 'pages' is odd, flush the first page through non-range - * operations; - * - * 2. 
For remaining pages: the minimum range granularity is decided - * by 'scale', so multiple range TLBI operations may be required. - * Start from scale = 0, flush the corresponding number of pages - * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it - * until no pages left. - * - * Note that certain ranges can be represented by either num = 31 and - * scale or num = 0 and scale + 1. The loop below favours the latter - * since num is limited to 30 by the __TLBI_RANGE_NUM() macro. - */ - while (pages > 0) { - if (!system_supports_tlb_range() || - pages % 2 == 1) { - addr = __TLBI_VADDR(start, asid); - if (last_level) { - __tlbi_level(vale1is, addr, tlb_level); - __tlbi_user_level(vale1is, addr, tlb_level); - } else { - __tlbi_level(vae1is, addr, tlb_level); - __tlbi_user_level(vae1is, addr, tlb_level); - } - start += stride; - pages -= stride >> PAGE_SHIFT; - continue; - } - - num = __TLBI_RANGE_NUM(pages, scale); - if (num >= 0) { - addr = __TLBI_VADDR_RANGE(start, asid, scale, - num, tlb_level); - if (last_level) { - __tlbi(rvale1is, addr); - __tlbi_user(rvale1is, addr); - } else { - __tlbi(rvae1is, addr); - __tlbi_user(rvae1is, addr); - } - start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; - pages -= __TLBI_RANGE_PAGES(num, scale); - } - scale++; - } + if (last_level) + __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true); + else + __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true); + dsb(ish); } From patchwork Fri May 19 00:52:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13247561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C073EC77B73 for ; Fri, 19 May 2023 00:52:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230365AbjESAwl (ORCPT ); Thu, 18 May 2023 20:52:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44400 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230274AbjESAwj (ORCPT ); Thu, 18 May 2023 20:52:39 -0400 Received: from mail-io1-xd49.google.com (mail-io1-xd49.google.com [IPv6:2607:f8b0:4864:20::d49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3BC29E4D for ; Thu, 18 May 2023 17:52:38 -0700 (PDT) Received: by mail-io1-xd49.google.com with SMTP id ca18e2360f4ac-76c6c1b16d2so420678939f.1 for ; Thu, 18 May 2023 17:52:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1684457557; x=1687049557; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=6Jo/RCBNZEVvtCkzM3auZwKFc9mCMN65uMrYmj8Cg6c=; b=kBespD8hLjQXTQczEpM+vtlVlvFRNvCvTwFkfOOMbDJNZGnbpkolporHMFfB8gqI+r 4Swj3TIK/pJudwZ4nwJn0CA9k9ZC4Ni7zhcNjYzPweBpisRgVJ5jEq6jsD1AiZlwV7de 0dQLS3rMgDzJGHCtnWKs3cO3VBGqa1/sOj8vMeZsBfDu2/CjK7AH5mweDyyi6XIBj7sr 8Z3OmXV4+eNON2F9Gu3pHJO2lT3w4v+vDaojHuDwiiZoIAFtnC3uxbWj3QCtyPMVM036 bp9nXnILknAgqio//mWIB3wZvVMy4jX0mqescCGh9GQZd77HNzTxW1BXEyPie8DgwGQN caRA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684457557; x=1687049557; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
Date: Fri, 19 May 2023 00:52:27 +0000
In-Reply-To: <20230519005231.3027912-1-rananta@google.com>
References: <20230519005231.3027912-1-rananta@google.com>
Message-ID: <20230519005231.3027912-3-rananta@google.com>
Subject: [PATCH v4 2/6] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a range of stage-2 page tables by IPA in one go. If the system supports FEAT_TLBIRANGE, the following patches can then conveniently replace global TLBI operations such as vmalls12e1is in the map, unmap, and dirty-logging paths with ripas2e1is.
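For intuition, here is a minimal standalone user-space model (plain C, not kernel code and not part of the patch) of how the shared __flush_tlb_range_op() loop, which __kvm_tlb_flush_vmid_range() drives with the ipas2e1is operation, decomposes a page count into single-page and ranged invalidations. The MODEL_* macros and model_range_flush() are illustrative stand-ins for __TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES(), printf() stands in for the actual TLBI instructions, and the sketch assumes a stride of one page and a page count small enough that the caller has not already fallen back to a full flush:

/* Model of the __flush_tlb_range_op() page-count decomposition. */
#include <stdio.h>

#define MODEL_RANGE_MASK		0x1fUL	/* "num" field is 5 bits wide */
/* Pages covered by one ranged op: (num + 1) * 2^(5 * scale + 1) */
#define MODEL_RANGE_PAGES(num, scale)	(((num) + 1UL) << (5 * (scale) + 1))
/* num encoding for this scale, or -1 if the remaining count is too small */
#define MODEL_RANGE_NUM(pages, scale)	\
	((int)(((pages) >> (5 * (scale) + 1)) & MODEL_RANGE_MASK) - 1)

static void model_range_flush(unsigned long pages)
{
	int num, scale = 0;

	while (pages > 0) {
		if (pages % 2 == 1) {
			/* Odd remainder: one non-range, single-page invalidation */
			printf("single-page invalidation (1 page)\n");
			pages -= 1;
			continue;
		}

		num = MODEL_RANGE_NUM(pages, scale);
		if (num >= 0) {
			printf("ranged invalidation: scale=%d num=%d -> %lu pages\n",
			       scale, num, MODEL_RANGE_PAGES(num, scale));
			pages -= MODEL_RANGE_PAGES(num, scale);
		}
		scale++;
	}
}

int main(void)
{
	/* The kernel bounds the count via MAX_TLBI_RANGE_PAGES before this point */
	model_range_flush(513);
	return 0;
}

For 513 pages, the model emits one single-page invalidation followed by one scale=1, num=7 ranged invalidation covering 512 pages, which is the same sequence the kernel loop generates for a 4 KiB stride.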
Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_asm.h | 3 +++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++ arch/arm64/kvm/hyp/nvhe/tlb.c | 39 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/vhe/tlb.c | 35 +++++++++++++++++++++++++++ 4 files changed, 88 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 43c3bc0f9544d..33352d9399e32 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -79,6 +79,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_init_vm, __KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu, __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm, + __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range, }; #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] @@ -225,6 +226,8 @@ extern void __kvm_flush_vm_context(void); extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu); extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa, int level); +extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t start, phys_addr_t end); extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu); extern void __kvm_timer_set_cntvoff(u64 cntvoff); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 728e01d4536b0..81d30737dc7c9 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -125,6 +125,16 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt) __kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level); } +static void +handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); + DECLARE_REG(phys_addr_t, start, host_ctxt, 2); + DECLARE_REG(phys_addr_t, end, host_ctxt, 3); + + __kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, end); +} + static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); @@ -315,6 +325,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa), + HANDLE_FUNC(__kvm_tlb_flush_vmid_range), HANDLE_FUNC(__kvm_tlb_flush_vmid), HANDLE_FUNC(__kvm_flush_cpu_context), HANDLE_FUNC(__kvm_timer_set_cntvoff), diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index 978179133f4b9..d4ea549c4b5c4 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -130,6 +130,45 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, __tlb_switch_to_host(&cxt); } +void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t start, phys_addr_t end) +{ + struct tlb_inv_context cxt; + unsigned long pages, stride; + + /* + * Since the range of addresses may not be mapped at + * the same level, assume the worst case as PAGE_SIZE + */ + stride = PAGE_SIZE; + start = round_down(start, stride); + end = round_up(end, stride); + pages = (end - start) >> PAGE_SHIFT; + + if (!system_supports_tlb_range() || pages >= MAX_TLBI_RANGE_PAGES) { + __kvm_tlb_flush_vmid(mmu); + return; + } + + dsb(ishst); + + /* Switch to requested VMID */ + __tlb_switch_to_guest(mmu, &cxt); + + __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false); + + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + /* See the comment below in __kvm_tlb_flush_vmid_ipa() */ + if (icache_is_vpipt()) + icache_inval_all_pou(); + + __tlb_switch_to_host(&cxt); +} + void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) { struct 
tlb_inv_context cxt; diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c index 24cef9b87f9e9..f34d6dd9e4674 100644 --- a/arch/arm64/kvm/hyp/vhe/tlb.c +++ b/arch/arm64/kvm/hyp/vhe/tlb.c @@ -111,6 +111,41 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, __tlb_switch_to_host(&cxt); } +void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t start, phys_addr_t end) +{ + struct tlb_inv_context cxt; + unsigned long pages, stride; + + /* + * Since the range of addresses may not be mapped at + * the same level, assume the worst case as PAGE_SIZE + */ + stride = PAGE_SIZE; + start = round_down(start, stride); + end = round_up(end, stride); + pages = (end - start) >> PAGE_SHIFT; + + if (!system_supports_tlb_range() || pages >= MAX_TLBI_RANGE_PAGES) { + __kvm_tlb_flush_vmid(mmu); + return; + } + + dsb(ishst); + + /* Switch to requested VMID */ + __tlb_switch_to_guest(mmu, &cxt); + + __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false); + + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + __tlb_switch_to_host(&cxt); +} + void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) { struct tlb_inv_context cxt; From patchwork Fri May 19 00:52:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13247562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4BFCC7EE2C for ; Fri, 19 May 2023 00:52:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230288AbjESAwn (ORCPT ); Thu, 18 May 2023 20:52:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44414 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230338AbjESAwk (ORCPT ); Thu, 18 May 2023 20:52:40 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8DEA0E4D for ; Thu, 18 May 2023 17:52:39 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-ba8337a5861so684239276.0 for ; Thu, 18 May 2023 17:52:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1684457559; x=1687049559; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=eI3+o1vQveSU3bkuomrpolxp+t8dV7nu1enlALoGo84=; b=a993HPTO+g13j6obSzomXT8KISL/8mFSfYyDK/WG+ODvYVrzRo67FTVKRPNz4OtgR8 wbVvJI1UMWLNfzwvZ6Pc+IJB58PSbJkfDiM7qjnlf7+dulbrrXb1G2a556mStetHAB6X JhHn1bNc4gU6kdv2mB7svIf9qBYfY7+waIVlwadN4raN8L/2tiFCRgVB+TPiKZLB7r/M 5U85nJno9Reys1JZsaXHeJhYDz/wgCGwk4E2MFHocZEBRMlHKx4oBrY4LsUH0GALzyEz 5Yv9sRc5Gbx9MggBYHqKoqG75yo3yYxMGrKKLnLWjFd6xpGajIigzjT6XfGU/JHAF/8O qoJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684457559; x=1687049559; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=eI3+o1vQveSU3bkuomrpolxp+t8dV7nu1enlALoGo84=; b=LRFonP1gW9mI6LLKaaTPcyAOySAIGizS6rjlzC+x1m7iJiTtFKxJUPLJIgqNtjczSA lgJXgkpPw5zkQ7tYgsA2MadXeQxAGd4qJhXNt5VpxoGCed+4WP7/BxkBwnNARiyeiqkv SL+m9UQfXjs2JsL3jOm3YBBQ7jCIfD8H5kjos62U1o4FipNkTsfqnvTDfOnrfdbJONl6 7GwE7yGqbqc2B0wq8WyKGDDl4c1kbiSFcmQOlrnoZXr5akRziyd9pAmb426bWKFfNY9Q 
hMpnqRTQfrfjwOWPQzrcn8ykJUF8OkUviZjQj7Z4SDVGQ+m/ArR6b6hSXa545YLnojCL s70A== X-Gm-Message-State: AC+VfDwmUQ7lJO6ZOTow9n2s+T0Gnrb3DzkdhlHo5Dnhwk6HsCyreM9b q8j1GP2xYB/EwJgblc9gV9cid25U8tCO X-Google-Smtp-Source: ACHHUZ7MGOicfXqOEIRYhYQir8c9VFNkG60dMuEYJ6FiAi22aFdneLwrpZOUgMMNGFimpKhr++I/T6Q/ojCK X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6902:1343:b0:ba8:4d1c:dd04 with SMTP id g3-20020a056902134300b00ba84d1cdd04mr94689ybu.1.1684457558794; Thu, 18 May 2023 17:52:38 -0700 (PDT) Date: Fri, 19 May 2023 00:52:28 +0000 In-Reply-To: <20230519005231.3027912-1-rananta@google.com> Mime-Version: 1.0 References: <20230519005231.3027912-1-rananta@google.com> X-Mailer: git-send-email 2.40.1.698.g37aff9b760-goog Message-ID: <20230519005231.3027912-4-rananta@google.com> Subject: [PATCH v4 3/6] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range() From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Ricardo Koller , Paolo Bonzini , Jing Zhang , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate the given range in the TLB. Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_host.h | 3 +++ arch/arm64/kvm/hyp/nvhe/tlb.c | 4 +--- arch/arm64/kvm/mmu.c | 11 +++++++++++ 3 files changed, 15 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 81ab41b84f436..343fb530eea9c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1081,6 +1081,9 @@ struct kvm *kvm_arch_alloc_vm(void); #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS int kvm_arch_flush_remote_tlbs(struct kvm *kvm); +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages); + static inline bool kvm_vm_is_protected(struct kvm *kvm) { return false; diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index d4ea549c4b5c4..d2c7c1bc6d441 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -150,10 +150,8 @@ void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, return; } - dsb(ishst); - /* Switch to requested VMID */ - __tlb_switch_to_guest(mmu, &cxt); + __tlb_switch_to_guest(mmu, &cxt, false); __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index d0a0d3dca9316..e3673b4c10292 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -92,6 +92,17 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm) return 0; } +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages) +{ + phys_addr_t start, end; + + start = start_gfn << PAGE_SHIFT; + end = (start_gfn + pages) << PAGE_SHIFT; + + kvm_call_hyp(__kvm_tlb_flush_vmid_range, &kvm->arch.mmu, start, end); + return 0; +} + static bool kvm_is_device_pfn(unsigned long pfn) { return !pfn_is_map_memory(pfn); From patchwork Fri May 19 00:52:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13247563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
Date: Fri, 19 May 2023 00:52:29 +0000
In-Reply-To: <20230519005231.3027912-1-rananta@google.com>
References: <20230519005231.3027912-1-rananta@google.com>
Message-ID: <20230519005231.3027912-5-rananta@google.com>
Subject: [PATCH v4 4/6] KVM: arm64: Flush only the memslot after write-protect
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

After write-protecting the region, KVM currently invalidates all TLB entries using kvm_flush_remote_tlbs(). Instead, scope the invalidation to only the targeted memslot.
If supported, the architecture would use the range-based TLBI instructions to flush the memslot, or else fall back to flushing all of the TLBs. Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/kvm/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index e3673b4c10292..2ea6eb4ea763e 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -996,7 +996,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot) write_lock(&kvm->mmu_lock); stage2_wp_range(&kvm->arch.mmu, start, end); write_unlock(&kvm->mmu_lock); - kvm_flush_remote_tlbs(kvm); + kvm_flush_remote_tlbs_memslot(kvm, memslot); } /**

From patchwork Fri May 19 00:52:30 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13247564
Date: Fri, 19 May 2023 00:52:30 +0000
In-Reply-To: <20230519005231.3027912-1-rananta@google.com>
References: <20230519005231.3027912-1-rananta@google.com>
Message-ID: <20230519005231.3027912-6-rananta@google.com>
Subject: [PATCH v4 5/6] KVM: arm64: Invalidate the table entries upon a range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, during operations such as a hugepage collapse, KVM flushes the entire VM's context using the 'vmalls12e1is' TLBI operation. In particular, if the VM is faulting on many hugepages (say, after dirty-logging), this penalizes the guest, since pages that were already faulted in earlier have to refill their TLB entries again. Instead, call __kvm_tlb_flush_vmid_range() for table entries. If the system supports it, only the required range will be flushed; otherwise, it falls back to the previous mechanism.

Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/kvm/hyp/pgtable.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 3d61bd3e591d2..b8f0dbd12f773 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -745,10 +745,13 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx, * Perform the appropriate TLB invalidation based on the evicted pte * value (if any).
*/ - if (kvm_pte_table(ctx->old, ctx->level)) - kvm_call_hyp(__kvm_tlb_flush_vmid, mmu); - else if (kvm_pte_valid(ctx->old)) + if (kvm_pte_table(ctx->old, ctx->level)) { + u64 end = ctx->addr + kvm_granule_size(ctx->level); + + kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, ctx->addr, end); + } else if (kvm_pte_valid(ctx->old)) { kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level); + } if (stage2_pte_is_counted(ctx->old)) mm_ops->put_page(ctx->ptep); From patchwork Fri May 19 00:52:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13247565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0D745C77B73 for ; Fri, 19 May 2023 00:52:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230233AbjESAwx (ORCPT ); Thu, 18 May 2023 20:52:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229529AbjESAwp (ORCPT ); Thu, 18 May 2023 20:52:45 -0400 Received: from mail-io1-xd4a.google.com (mail-io1-xd4a.google.com [IPv6:2607:f8b0:4864:20::d4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BC8A510DF for ; Thu, 18 May 2023 17:52:42 -0700 (PDT) Received: by mail-io1-xd4a.google.com with SMTP id ca18e2360f4ac-76c63aadc10so27405539f.0 for ; Thu, 18 May 2023 17:52:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1684457562; x=1687049562; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=rGsmPoc/T4Ss3kfVhJrmlF6LuTv5aUCcul/iCKRGGfw=; b=xW05sinAfw8C44WTP/9vyg4AsrI69Imj2ZmwYQLJZF8kEHkqon5WhJrYk6DNXK7Gw6 ORpvYUQiCd6GWx/VkoJW+RFSdfhvlRh0N9mjXQWCD6GhGofxw22goCURFVI2zQtlWjI3 q40Px7tzzEXTuwbuR/ZiN0k40rk4wWlizlUEyM3BENmLsLioAlGtY649VgAWbXD8mNoT jqpj50br7Y7C26Wwzs7ZYrY5XUZtlZuQUEmP94MjpBLWtfJKC0Hj0gvsmYFqFZ6EU0/5 BTWcttwAOtqDV1LAtO7UFkbUjq6YDBOiaJ3QlnQPpYOq6lbHz3ZzDuYKfXJ/7nDGH2YH /vSw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684457562; x=1687049562; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=rGsmPoc/T4Ss3kfVhJrmlF6LuTv5aUCcul/iCKRGGfw=; b=dsyVcpTOktB4ccot3dl9/XEh1C1JgL5sAF722egZzWTedLqAeEVtp0y1LYm6w+TvBJ otgVhNH1yeYHDB4+Xn49qNb1E81XsIp29qvhJhXpge35GqVVBFmaIP83E5a+KOANxKLd 9IqWZNH+Lba7Pm1KA+rINqBJyraLRAFnvOKzmLHLwuIpqw9OA5vRhck+G5sx0KQWxUnX wMjALZwWd0uqTH1GH3ci2dAl3NiOeXQW9TWqgYkdZMCRFPvXItUZnZePmwn3Qwuqtpf+ 9NccVpS+ituYFEwIwcQEKFa7HojvF5e3TJ1Kvq8QRxdR7IEV/cjMZJUwyim/Y9cL/iqU oBbQ== X-Gm-Message-State: AC+VfDz3zVNhenYMVyW1pC2WADkx6FALZVhMNbktZeaLHG0qT47k6flC s9Y5c98aRX19tMeRWW1HkfEDrH4noZ7R X-Google-Smtp-Source: ACHHUZ6TXO388gzfHLwp1jHRgCji0izRBrbXcyR7uAZvijYpyuzGsSLXba63Usu2McZd5g1WZCldx8ML+Rhn X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6602:2ac4:b0:763:b184:fe92 with SMTP id m4-20020a0566022ac400b00763b184fe92mr5060981iov.0.1684457562041; Thu, 18 May 2023 17:52:42 -0700 (PDT) Date: Fri, 19 May 2023 00:52:31 +0000 In-Reply-To: <20230519005231.3027912-1-rananta@google.com> Mime-Version: 1.0 References: 
<20230519005231.3027912-1-rananta@google.com>
Message-ID: <20230519005231.3027912-7-rananta@google.com>
Subject: [PATCH v4 6/6] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the given range and, as a part of break-before-make, performs TLB invalidations with a DSB for every PTE. Repeated over a large range, this combination can become a performance bottleneck. Hence, if the system supports FEAT_TLBIRANGE, defer the TLB invalidations until the entire walk is finished, and then use range-based instructions to invalidate the TLBs in one go. Condition this upon S2FWB in order to avoid walking the page-table again to perform the CMOs after issuing the TLBI. Rename stage2_put_pte() to stage2_unmap_put_pte() as the function now serves the stage-2 unmap walker specifically, rather than acting generically.

Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/kvm/hyp/pgtable.c | 35 ++++++++++++++++++++++++++++++----- 1 file changed, 30 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index b8f0dbd12f773..5832ee3418fb0 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -771,16 +771,34 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n smp_store_release(ctx->ptep, new); } -static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, - struct kvm_pgtable_mm_ops *mm_ops) +static bool stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt) { + /* + * If FEAT_TLBIRANGE is implemented, defer the individual PTE + * TLB invalidations until the entire walk is finished, and + * then use the range-based TLBI instructions to do the + * invalidations. Condition this upon S2FWB in order to avoid + * a page-table walk again to perform the CMOs after TLBI. + */ + return system_supports_tlb_range() && stage2_has_fwb(pgt); +} + +static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx, + struct kvm_s2_mmu *mmu, + struct kvm_pgtable_mm_ops *mm_ops) +{ + struct kvm_pgtable *pgt = ctx->arg; + /* * Clear the existing PTE, and perform break-before-make with * TLB maintenance if it was valid. */ if (kvm_pte_valid(ctx->old)) { kvm_clear_pte(ctx->ptep); - kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level); + + if (!stage2_unmap_defer_tlb_flush(pgt)) + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, + ctx->addr, ctx->level); } mm_ops->put_page(ctx->ptep); @@ -1015,7 +1033,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx, * block entry and rely on the remaining portions being faulted * back lazily.
*/ - stage2_put_pte(ctx, mmu, mm_ops); + stage2_unmap_put_pte(ctx, mmu, mm_ops); if (need_flush && mm_ops->dcache_clean_inval_poc) mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops), @@ -1029,13 +1047,20 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx, int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) { + int ret; struct kvm_pgtable_walker walker = { .cb = stage2_unmap_walker, .arg = pgt, .flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST, }; - return kvm_pgtable_walk(pgt, addr, size, &walker); + ret = kvm_pgtable_walk(pgt, addr, size, &walker); + if (stage2_unmap_defer_tlb_flush(pgt)) + /* Perform the deferred TLB invalidations */ + kvm_call_hyp(__kvm_tlb_flush_vmid_range, pgt->mmu, + addr, addr + size); + + return ret; } struct stage2_attr_data {
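To make the shape of that last change concrete, below is a minimal standalone sketch (plain C, not the KVM page-table API) of the defer-and-batch pattern it introduces: per-entry invalidation inside the walk is skipped when one range-based flush of [addr, addr + size) can be issued after the walk completes. The helpers clear_entry(), invalidate_page(), and invalidate_range() are illustrative stand-ins, and defer_tlb_flush models the system_supports_tlb_range() && stage2_has_fwb() condition:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for system_supports_tlb_range() && stage2_has_fwb() */
static bool defer_tlb_flush = true;

static void clear_entry(unsigned long addr)
{
	printf("clear PTE at 0x%lx\n", addr);
}

static void invalidate_page(unsigned long addr)
{
	printf("  per-entry TLBI + DSB at 0x%lx\n", addr);
}

static void invalidate_range(unsigned long start, unsigned long end)
{
	printf("one ranged TLBI for [0x%lx, 0x%lx)\n", start, end);
}

/* Walk the range, invalidating per entry only when deferral is not possible */
static void unmap_walk(unsigned long addr, unsigned long size, unsigned long page)
{
	for (unsigned long a = addr; a < addr + size; a += page) {
		clear_entry(a);
		if (!defer_tlb_flush)
			invalidate_page(a);
	}
}

static void unmap(unsigned long addr, unsigned long size, unsigned long page)
{
	unmap_walk(addr, size, page);
	if (defer_tlb_flush)
		invalidate_range(addr, addr + size);	/* single deferred flush */
}

int main(void)
{
	unmap(0x80000000UL, 4 * 4096UL, 4096UL);
	return 0;
}

With deferral enabled, unmapping N pages issues a single ranged invalidation instead of N individual TLBI-plus-DSB pairs, which is the trade-off this series targets.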