diff mbox series

[stable,6.1.y,2/2] KVM: arm64: Prevent unconditional donation of unmapped regions from the host

Message ID 20230920192729.694309-2-surajjs@amazon.com (mailing list archive)
State New, archived
Series [stable,6.1.y,1/2] KVM: arm64: Prevent the donation of no-map pages

Commit Message

Jitindar Singh, Suraj Sept. 20, 2023, 7:27 p.m. UTC
From: Will Deacon <will@kernel.org>

commit 09cce60bddd6461a93a5bf434265a47827d1bc6f upstream.

Since host stage-2 mappings are created lazily, we cannot rely solely on
the pte in order to recover the target physical address when checking a
host-initiated memory transition as this permits donation of unmapped
regions corresponding to MMIO or "no-map" memory.

Instead of inspecting the pte, move the addr_is_allowed_memory() check
into the host callback function where it is passed the physical address
directly from the walker.

Cc: Quentin Perret <qperret@google.com>
Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for sharing memory")
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230518095844.1178-1-will@kernel.org
[ bp: s/ctx->addr/addr in __check_page_state_visitor due to missing commit
      "KVM: arm64: Combine visitor arguments into a context structure"
      in stable.
]
Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

Comments

Marc Zyngier Sept. 21, 2023, 7:15 a.m. UTC | #1
On Wed, 20 Sep 2023 20:27:29 +0100,
Suraj Jitindar Singh <surajjs@amazon.com> wrote:
> 
> From: Will Deacon <will@kernel.org>
> 
> commit 09cce60bddd6461a93a5bf434265a47827d1bc6f upstream.
> 
> Since host stage-2 mappings are created lazily, we cannot rely solely on
> the pte in order to recover the target physical address when checking a
> host-initiated memory transition as this permits donation of unmapped
> regions corresponding to MMIO or "no-map" memory.
> 
> Instead of inspecting the pte, move the addr_is_allowed_memory() check
> into the host callback function where it is passed the physical address
> directly from the walker.
> 
> Cc: Quentin Perret <qperret@google.com>
> Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for sharing memory")
> Signed-off-by: Will Deacon <will@kernel.org>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Link: https://lore.kernel.org/r/20230518095844.1178-1-will@kernel.org
> [ bp: s/ctx->addr/addr in __check_page_state_visitor due to missing commit
>       "KVM: arm64: Combine visitor arguments into a context structure"
>       in stable.
> ]

Same question.

> Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

Again, I find this backport pretty pointless. What is the rationale
for it?

	M.
Suraj Jitindar Singh Sept. 21, 2023, 10:25 p.m. UTC | #2
On Thu, 2023-09-21 at 08:15 +0100, Marc Zyngier wrote:
> On Wed, 20 Sep 2023 20:27:29 +0100,
> Suraj Jitindar Singh <surajjs@amazon.com> wrote:
> > 
> > From: Will Deacon <will@kernel.org>
> > 
> > commit 09cce60bddd6461a93a5bf434265a47827d1bc6f upstream.
> > 
> > Since host stage-2 mappings are created lazily, we cannot rely
> > solely on
> > the pte in order to recover the target physical address when
> > checking a
> > host-initiated memory transition as this permits donation of
> > unmapped
> > regions corresponding to MMIO or "no-map" memory.
> > 
> > Instead of inspecting the pte, move the addr_is_allowed_memory()
> > check
> > into the host callback function where it is passed the physical
> > address
> > directly from the walker.
> > 
> > Cc: Quentin Perret <qperret@google.com>
> > Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for
> > sharing memory")
> > Signed-off-by: Will Deacon <will@kernel.org>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Link:
> > https://lore.kernel.org/r/20230518095844.1178-1-will@kernel.org
> > [ bp: s/ctx->addr/addr in __check_page_state_visitor due to missing
> > commit
> >       "KVM: arm64: Combine visitor arguments into a context
> > structure"
> >       in stable.
> > ]
> 
> Same question.

That note records what changes were made to the patch, relative to the
upstream mainline version, when it was applied to the stable tree.

> 
> > Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
> 
> Again, I find this backport pretty pointless. What is the rationale
> for it?

The two patches were backported to address CVE-2023-21264; this one
contains the actual fix for the CVE.

Thanks

> 
>         M.
>

Patch

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 0f6c053686c7..0faa330a41ed 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -424,7 +424,7 @@  struct pkvm_mem_share {
 
 struct check_walk_data {
 	enum pkvm_page_state	desired;
-	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte);
+	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte, u64 addr);
 };
 
 static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
@@ -435,10 +435,7 @@  static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
 	struct check_walk_data *d = arg;
 	kvm_pte_t pte = *ptep;
 
-	if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte)))
-		return -EINVAL;
-
-	return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
+	return d->get_page_state(pte, addr) == d->desired ? 0 : -EPERM;
 }
 
 static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -453,8 +450,11 @@  static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
 {
+	if (!addr_is_allowed_memory(addr))
+		return PKVM_NOPAGE;
+
 	if (!kvm_pte_valid(pte) && pte)
 		return PKVM_NOPAGE;
 
@@ -521,7 +521,7 @@  static int host_initiate_unshare(u64 *completer_addr,
 	return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
 
-static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr)
 {
 	if (!kvm_pte_valid(pte))
 		return PKVM_NOPAGE;