From patchwork Thu Oct 10 18:24:12 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13831071
Date: Thu, 10 Oct 2024 11:24:12 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-71-seanjc@google.com>
Subject: [PATCH v13 70/85] KVM: MIPS: Mark "struct page" pfns accessed only in "slow" page fault path
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao,
    David Matlack, David Stevens, Andrew Jones

Mark pages accessed only in the slow page fault path in order to remove
an unnecessary user of kvm_pfn_to_refcounted_page().

Marking pages accessed in the primary MMU during KVM page fault handling
isn't harmful, but it's largely pointless and likely a waste of cycles,
since the primary MMU will call into KVM via mmu_notifiers when aging
pages. I.e. KVM participates in a "pull" model, so there's no need to
also "push" updates.
Signed-off-by: Sean Christopherson
---
 arch/mips/kvm/mmu.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 4da9ce4eb54d..f1e4b618ec6d 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -484,8 +484,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 	struct kvm *kvm = vcpu->kvm;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	pte_t *ptep;
-	kvm_pfn_t pfn = 0;	/* silence bogus GCC warning */
-	bool pfn_valid = false;
 	int ret = 0;
 
 	spin_lock(&kvm->mmu_lock);
@@ -498,12 +496,9 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 	}
 
 	/* Track access to pages marked old */
-	if (!pte_young(*ptep)) {
+	if (!pte_young(*ptep))
 		set_pte(ptep, pte_mkyoung(*ptep));
-		pfn = pte_pfn(*ptep);
-		pfn_valid = true;
-		/* call kvm_set_pfn_accessed() after unlock */
-	}
+
 	if (write_fault && !pte_dirty(*ptep)) {
 		if (!pte_write(*ptep)) {
 			ret = -EFAULT;
@@ -512,7 +507,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 		/* Track dirtying of writeable pages */
 		set_pte(ptep, pte_mkdirty(*ptep));
-		pfn = pte_pfn(*ptep);
 		mark_page_dirty(kvm, gfn);
 	}
 
@@ -523,8 +517,6 @@ static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
 out:
 	spin_unlock(&kvm->mmu_lock);
 
-	if (pfn_valid)
-		kvm_set_pfn_accessed(pfn);
 
 	return ret;
 }