Message ID | 20241010182427.1434605-56-seanjc@google.com (mailing list archive) |
---|---|
State | Handled Elsewhere |
Headers:

Date: Thu, 10 Oct 2024 11:23:57 -0700
Message-ID: <20241010182427.1434605-56-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
Subject: [PATCH v13 55/85] KVM: arm64: Mark "struct page" pfns accessed/dirty before dropping mmu_lock
From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>, Marc Zyngier <maz@kernel.org>, Oliver Upton <oliver.upton@linux.dev>, Tianrui Zhao <zhaotianrui@loongson.cn>, Bibo Mao <maobibo@loongson.cn>, Huacai Chen <chenhuacai@kernel.org>, Michael Ellerman <mpe@ellerman.id.au>, Anup Patel <anup@brainfault.org>, Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>, Christian Borntraeger <borntraeger@linux.ibm.com>, Janosch Frank <frankja@linux.ibm.com>, Claudio Imbrenda <imbrenda@linux.ibm.com>, Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Alex Bennée <alex.bennee@linaro.org>, Yan Zhao <yan.y.zhao@intel.com>, David Matlack <dmatlack@google.com>, David Stevens <stevensd@chromium.org>, Andrew Jones <ajones@ventanamicro.com>
Reply-To: Sean Christopherson <seanjc@google.com>
Sender: "linux-riscv" <linux-riscv-bounces@lists.infradead.org>
Series | KVM: Stop grabbing references to PFNMAP'd pages |
Context | Check | Description |
---|---|---|
conchuod/vmtest-fixes-PR | fail | merge-conflict |
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index dd221587fcca..ecc6c2b56c43 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1692,15 +1692,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 out_unlock:
+	if (writable && !ret)
+		kvm_release_pfn_dirty(pfn);
+	else
+		kvm_release_pfn_clean(pfn);
+
 	read_unlock(&kvm->mmu_lock);
 
 	/* Mark the page dirty only if the fault is handled successfully */
-	if (writable && !ret) {
-		kvm_set_pfn_dirty(pfn);
+	if (writable && !ret)
 		mark_page_dirty_in_slot(kvm, memslot, gfn);
-	}
 
-	kvm_release_pfn_clean(pfn);
 	return ret != -EAGAIN ? ret : 0;
 }
 
Mark pages/folios accessed+dirty prior to dropping mmu_lock, as marking a
page/folio dirty after it has been written back can make some filesystems
unhappy (backing KVM guests with such filesystem files is uncommon, and
the race is minuscule, hence the lack of complaints).

While scary sounding, practically speaking the worst case scenario is
that KVM would trigger this WARN in filemap_unaccount_folio():

	/*
	 * At this point folio must be either written or cleaned by
	 * truncate. Dirty folio here signals a bug and loss of
	 * unwritten data - on ordinary filesystems.
	 *
	 * But it's harmless on in-memory filesystems like tmpfs; and can
	 * occur when a driver which did get_user_pages() sets page dirty
	 * before putting it, while the inode is being finally evicted.
	 *
	 * Below fixes dirty accounting after removing the folio entirely
	 * but leaves the dirty flag set: it has no effect for truncated
	 * folio and anyway will be cleared before returning folio to
	 * buddy allocator.
	 */
	if (WARN_ON_ONCE(folio_test_dirty(folio) &&
			 mapping_can_writeback(mapping)))
		folio_account_cleaned(folio, inode_to_wb(mapping->host));

KVM won't actually write memory because the stage-2 mappings are
protected by the mmu_notifier, i.e. there is no risk of loss of data,
even if the VM were backed by memory that needs writeback.

See the link below for additional details.

This will also allow converting arm64 to kvm_release_faultin_page(),
which requires that mmu_lock be held (for the aforementioned reason).

Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
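For readers skimming the hunk, the net effect is that the pfn is released (and
marked dirty when appropriate) while mmu_lock is still held, and only the
memslot dirty-log update happens after the unlock. The fragment below is a
sketch of the resulting tail of user_mem_abort(), reconstructed from the diff
above; it is not a verbatim excerpt of arch/arm64/kvm/mmu.c, omits the rest of
the function, and the comments are editorial rather than from the kernel source.

	out_unlock:
		/*
		 * Release the pfn, marking it dirty only when the write fault
		 * was handled successfully, while mmu_lock is still held.
		 * Dirtying the page/folio after writeback could trip the WARN
		 * in filemap_unaccount_folio() quoted in the changelog.
		 */
		if (writable && !ret)
			kvm_release_pfn_dirty(pfn);
		else
			kvm_release_pfn_clean(pfn);

		read_unlock(&kvm->mmu_lock);

		/* Mark the page dirty only if the fault is handled successfully */
		if (writable && !ret)
			mark_page_dirty_in_slot(kvm, memslot, gfn);

		return ret != -EAGAIN ? ret : 0;
	}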