From patchwork Thu Nov 28 00:55:44 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13887506
Date: Wed, 27 Nov 2024 16:55:44 -0800
In-Reply-To: <20241128005547.4077116-1-seanjc@google.com>
References: <20241128005547.4077116-1-seanjc@google.com>
Message-ID: <20241128005547.4077116-14-seanjc@google.com>
Subject: [PATCH v4 13/16] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Paolo Bonzini, Christian Borntraeger, Janosch Frank,
 Claudio Imbrenda, Sean Christopherson
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 Andrew Jones, James Houghton, Muhammad Usama Anjum

Add two phases to mmu_stress_test to verify that KVM correctly handles
guest memory that was writable, then made read-only in the primary MMU,
and then made writable again.

Add bonus coverage for x86 and arm64 to verify that all of guest memory
was marked read-only.  Making forward progress (without making memory
writable) requires arch-specific code to skip over the faulting
instruction, but the test can at least verify each vCPU's starting page
was made read-only for other architectures.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/mmu_stress_test.c | 104 +++++++++++++++++-
 1 file changed, 101 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 0918fade9267..d9c76b4c0d88 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -17,6 +17,8 @@
 #include "processor.h"
 #include "ucall_common.h"
 
+static bool mprotect_ro_done;
+
 static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 {
 	uint64_t gpa;
@@ -32,6 +34,42 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 		*((volatile uint64_t *)gpa);
 	GUEST_SYNC(2);
 
+	/*
+	 * Write to the region while mprotect(PROT_READ) is underway.  Keep
+	 * looping until the memory is guaranteed to be read-only, otherwise
+	 * vCPUs may complete their writes and advance to the next stage
+	 * prematurely.
+	 *
+	 * For architectures that support skipping the faulting instruction,
+	 * generate the store via inline assembly to ensure the exact length
+	 * of the instruction is known and stable (vcpu_arch_put_guest() on
+	 * fixed-length architectures should work, but the cost of paranoia
+	 * is low in this case).  For x86, hand-code the exact opcode so that
+	 * there is no room for variability in the generated instruction.
+	 */
+	do {
+		for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+#ifdef __x86_64__
+			asm volatile(".byte 0x48,0x89,0x00" :: "a"(gpa) : "memory"); /* mov %rax, (%rax) */
+#elif defined(__aarch64__)
+			asm volatile("str %0, [%0]" :: "r" (gpa) : "memory");
+#else
+			vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+#endif
+	} while (!READ_ONCE(mprotect_ro_done));
+
+	/*
+	 * Only architectures that write the entire range can explicitly sync,
+	 * as other architectures will be stuck on the write fault.
+	 */
+#if defined(__x86_64__) || defined(__aarch64__)
+	GUEST_SYNC(3);
+#endif
+
+	for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+		vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+	GUEST_SYNC(4);
+
 	GUEST_ASSERT(0);
 }
 
@@ -79,6 +117,7 @@ static void *vcpu_worker(void *data)
 	struct vcpu_info *info = data;
 	struct kvm_vcpu *vcpu = info->vcpu;
 	struct kvm_vm *vm = vcpu->vm;
+	int r;
 
 	vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
 
@@ -101,6 +140,57 @@ static void *vcpu_worker(void *data)
 
 	/* Stage 2, read all of guest memory, which is now read-only. */
 	run_vcpu(vcpu, 2);
+
+	/*
+	 * Stage 3, write guest memory and verify KVM returns -EFAULT once
+	 * the mprotect(PROT_READ) lands.  Only architectures that support
+	 * validating *all* of guest memory sync for this stage, as vCPUs will
+	 * be stuck on the faulting instruction for other architectures.  Go to
+	 * stage 3 without a rendezvous.
+	 */
+	do {
+		r = _vcpu_run(vcpu);
+	} while (!r);
+	TEST_ASSERT(r == -1 && errno == EFAULT,
+		    "Expected EFAULT on write to RO memory, got r = %d, errno = %d", r, errno);
+
+#if defined(__x86_64__) || defined(__aarch64__)
+	/*
+	 * Verify *all* writes from the guest hit EFAULT due to the VMA now
+	 * being read-only.  x86 and arm64 only at this time, as skipping the
+	 * instruction that hits the EFAULT requires advancing the program
+	 * counter, which is arch-specific and relies on inline assembly.
+	 */
+#ifdef __x86_64__
+	vcpu->run->kvm_valid_regs = KVM_SYNC_X86_REGS;
+#endif
+	for (;;) {
+		r = _vcpu_run(vcpu);
+		if (!r)
+			break;
+		TEST_ASSERT_EQ(errno, EFAULT);
+#if defined(__x86_64__)
+		WRITE_ONCE(vcpu->run->kvm_dirty_regs, KVM_SYNC_X86_REGS);
+		vcpu->run->s.regs.regs.rip += 3;
+#elif defined(__aarch64__)
+		vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc),
+			     vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc)) + 4);
+#endif
+
+	}
+	assert_sync_stage(vcpu, 3);
+#endif /* __x86_64__ || __aarch64__ */
+	rendezvous_with_boss();
+
+	/*
+	 * Stage 4.  Run to completion, waiting for mprotect(PROT_WRITE) to
+	 * make the memory writable again.
+	 */
+	do {
+		r = _vcpu_run(vcpu);
+	} while (r && errno == EFAULT);
+	TEST_ASSERT_EQ(r, 0);
+	assert_sync_stage(vcpu, 4);
 	rendezvous_with_boss();
 
 	return NULL;
@@ -183,7 +273,7 @@ int main(int argc, char *argv[])
 	const uint64_t start_gpa = SZ_4G;
 	const int first_slot = 1;
 
-	struct timespec time_start, time_run1, time_reset, time_run2, time_ro;
+	struct timespec time_start, time_run1, time_reset, time_run2, time_ro, time_rw;
 	uint64_t max_gpa, gpa, slot_size, max_mem, i;
 	int max_slots, slot, opt, fd;
 	bool hugepages = false;
@@ -288,19 +378,27 @@ int main(int argc, char *argv[])
 	rendezvous_with_vcpus(&time_run2, "run 2");
 
 	mprotect(mem, slot_size, PROT_READ);
+	usleep(10);
+	mprotect_ro_done = true;
+	sync_global_to_guest(vm, mprotect_ro_done);
+
 	rendezvous_with_vcpus(&time_ro, "mprotect RO");
+	mprotect(mem, slot_size, PROT_READ | PROT_WRITE);
+	rendezvous_with_vcpus(&time_rw, "mprotect RW");
 
+	time_rw = timespec_sub(time_rw, time_ro);
 	time_ro = timespec_sub(time_ro, time_run2);
 	time_run2 = timespec_sub(time_run2, time_reset);
 	time_reset = timespec_sub(time_reset, time_run1);
 	time_run1 = timespec_sub(time_run1, time_start);
 
 	pr_info("run1 = %ld.%.9lds, reset = %ld.%.9lds, run2 = %ld.%.9lds, "
-		"ro = %ld.%.9lds\n",
+		"ro = %ld.%.9lds, rw = %ld.%.9lds\n",
 		time_run1.tv_sec, time_run1.tv_nsec,
 		time_reset.tv_sec, time_reset.tv_nsec,
 		time_run2.tv_sec, time_run2.tv_nsec,
-		time_ro.tv_sec, time_ro.tv_nsec);
+		time_ro.tv_sec, time_ro.tv_nsec,
+		time_rw.tv_sec, time_rw.tv_nsec);
 
 	/*
 	 * Delete even numbered slots (arbitrary) and unmap the first half of
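
The hand-coded stores are what make the blind program-counter fixup in
vcpu_worker() safe: "mov %rax, (%rax)" encodes to exactly three bytes
(REX.W prefix 0x48, opcode 0x89, ModRM 0x00), hence "rip += 3", and every
arm64 instruction is four bytes wide, hence "pc += 4".  The same
skip-the-faulting-instruction technique can be demonstrated without KVM at
all; the standalone x86-64 sketch below (illustrative only, not part of
the patch) takes a write fault on an mprotect(PROT_READ) page and advances
the saved RIP by the known instruction length:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <ucontext.h>

/* Illustrative only, not part of the patch. */
static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
	ucontext_t *uc = ucontext;

	/* Skip the faulting "mov %rax, (%rax)", which is exactly 3 bytes. */
	uc->uc_mcontext.gregs[REG_RIP] += 3;
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = segv_handler,
		.sa_flags = SA_SIGINFO,
	};
	void *page;

	sigaction(SIGSEGV, &sa, NULL);

	page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Revoke write permission, as the test's main thread does. */
	mprotect(page, 4096, PROT_READ);

	/* The same hand-coded store the guest uses: mov %rax, (%rax). */
	asm volatile(".byte 0x48,0x89,0x00" :: "a"(page) : "memory");

	printf("survived the faulting store to %p\n", page);
	return 0;
}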
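
Host-side, the test leans on the fact that KVM_RUN fails with -1/EFAULT
when a vCPU's write faults on the read-only VMA and KVM cannot resolve the
fault; the selftests' _vcpu_run() is a thin wrapper around that ioctl.
Stripped of the harness, the stage-4 wait reduces to the retry loop below
(a hedged sketch, assuming vcpu_fd is a vCPU file descriptor obtained via
KVM_CREATE_VCPU):

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch only: retry KVM_RUN until the boss restores PROT_WRITE. */
static int run_until_writable(int vcpu_fd)
{
	int r;

	do {
		r = ioctl(vcpu_fd, KVM_RUN, NULL);
	} while (r == -1 && errno == EFAULT);

	return r;
}

Once the loop exits with r == 0, the vCPU has made it back into the guest
and reached GUEST_SYNC(4), which is what assert_sync_stage(vcpu, 4)
verifies.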