From patchwork Fri Aug 20 22:50:01 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson <seanjc@google.com>
X-Patchwork-Id: 12450533
Date: Fri, 20 Aug 2021 15:50:01 -0700
In-Reply-To: <20210820225002.310652-1-seanjc@google.com>
Message-Id: <20210820225002.310652-5-seanjc@google.com>
Mime-Version: 1.0
References: <20210820225002.310652-1-seanjc@google.com>
X-Mailer: git-send-email 2.33.0.rc2.250.ged5fa647cd-goog
Subject: [PATCH v2 4/5] KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs
From: Sean Christopherson <seanjc@google.com>
To: Russell King, Catalin Marinas, Will Deacon, Guo Ren, Thomas Bogendoerfer,
 Michael Ellerman, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 Steven Rostedt, Ingo Molnar, Oleg Nesterov, Thomas Gleixner, Peter Zijlstra,
 Andy Lutomirski, Mathieu Desnoyers, "Paul E. McKenney", Boqun Feng,
 Paolo Bonzini, Shuah Khan
Cc: Benjamin Herrenschmidt, Paul Mackerras, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-csky@vger.kernel.org,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Peter Foley, Shakeel Butt,
 Sean Christopherson, Ben Gardon

Add a test to verify an rseq's CPU ID is updated correctly if the task
is migrated while the kernel is handling KVM_RUN.  This is a regression
test for a bug introduced by commit 72c3c0fe54a3 ("x86/kvm: Use generic
xfer to guest work function"), where TIF_NOTIFY_RESUME would be cleared
by KVM without updating rseq, leading to a stale CPU ID and other
badness.
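
For background, a sketch of the userspace side of the rseq ABI that the
test relies on (illustrative only, not part of this patch; the variable
names and the reuse of an arbitrary signature are mine, and on newer
glibc, which registers rseq itself, a standalone registration like this
can fail with EBUSY): the task registers a per-thread struct rseq with
the kernel, and the kernel is expected to refresh rseq->cpu_id via the
TIF_NOTIFY_RESUME path whenever the task may have been migrated, so
userspace can read its current CPU without a syscall.

  /* Illustrative sketch, not part of the patch. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <syscall.h>
  #include <unistd.h>
  #include <linux/rseq.h>
  #include <linux/unistd.h>

  static __thread volatile struct rseq my_rseq = {
          .cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
  };

  int main(void)
  {
          /* Arbitrary signature; this sketch never enters an rseq critical section. */
          if (syscall(__NR_rseq, &my_rseq, sizeof(my_rseq), 0, 0xdeadbeef))
                  return 1;

          /*
           * With the kernel behaving correctly, cpu_id always matches
           * sched_getcpu().  With the bug described above, a migration that
           * races with KVM_RUN can leave cpu_id stale.  Note that a naive
           * pair of reads like this can itself race with a migration, which
           * is why the test below guards the comparison with a sequence
           * counter.
           */
          int rseq_cpu = my_rseq.cpu_id;
          int sched_cpu = sched_getcpu();

          printf("rseq cpu_id = %d, sched_getcpu() = %d\n", rseq_cpu, sched_cpu);

          return syscall(__NR_rseq, &my_rseq, sizeof(my_rseq),
                         RSEQ_FLAG_UNREGISTER, 0xdeadbeef);
  }
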
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/.gitignore  |   1 +
 tools/testing/selftests/kvm/Makefile    |   3 +
 tools/testing/selftests/kvm/rseq_test.c | 154 ++++++++++++++++++++++++
 3 files changed, 158 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/rseq_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 0709af0144c8..6d031ff6b68e 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -47,6 +47,7 @@
 /kvm_page_table_test
 /memslot_modification_stress_test
 /memslot_perf_test
+/rseq_test
 /set_memory_region_test
 /steal_time
 /kvm_binary_stats_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 5832f510a16c..0756e79cb513 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -80,6 +80,7 @@ TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_x86_64 += kvm_page_table_test
 TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86_64 += memslot_perf_test
+TEST_GEN_PROGS_x86_64 += rseq_test
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
@@ -92,6 +93,7 @@ TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_aarch64 += kvm_page_table_test
+TEST_GEN_PROGS_aarch64 += rseq_test
 TEST_GEN_PROGS_aarch64 += set_memory_region_test
 TEST_GEN_PROGS_aarch64 += steal_time
 TEST_GEN_PROGS_aarch64 += kvm_binary_stats_test
@@ -103,6 +105,7 @@ TEST_GEN_PROGS_s390x += demand_paging_test
 TEST_GEN_PROGS_s390x += dirty_log_test
 TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
 TEST_GEN_PROGS_s390x += kvm_page_table_test
+TEST_GEN_PROGS_s390x += rseq_test
 TEST_GEN_PROGS_s390x += set_memory_region_test
 TEST_GEN_PROGS_s390x += kvm_binary_stats_test

diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
new file mode 100644
index 000000000000..d28d7ba1a64a
--- /dev/null
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <errno.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <signal.h>
+#include <syscall.h>
+#include <sys/ioctl.h>
+#include <sys/sysinfo.h>
+#include <asm/barrier.h>
+#include <linux/atomic.h>
+#include <linux/rseq.h>
+#include <linux/unistd.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+#define VCPU_ID 0
+
+static __thread volatile struct rseq __rseq = {
+        .cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
+};
+
+#define RSEQ_SIG 0xdeadbeef
+
+static pthread_t migration_thread;
+static cpu_set_t possible_mask;
+static bool done;
+
+static atomic_t seq_cnt;
+
+static void guest_code(void)
+{
+        for (;;)
+                GUEST_SYNC(0);
+}
+
+static void sys_rseq(int flags)
+{
+        int r;
+
+        r = syscall(__NR_rseq, &__rseq, sizeof(__rseq), flags, RSEQ_SIG);
+        TEST_ASSERT(!r, "rseq failed, errno = %d (%s)", errno, strerror(errno));
+}
+
+static void *migration_worker(void *ign)
+{
+        cpu_set_t allowed_mask;
+        int r, i, nr_cpus, cpu;
+
+        CPU_ZERO(&allowed_mask);
+
+        nr_cpus = CPU_COUNT(&possible_mask);
+
+        for (i = 0; i < 20000; i++) {
+                cpu = i % nr_cpus;
+                if (!CPU_ISSET(cpu, &possible_mask))
+                        continue;
+
+                CPU_SET(cpu, &allowed_mask);
+
+                /*
+                 * Bump the sequence count twice to allow the reader to detect
+                 * that a migration may have occurred in between rseq and sched
+                 * CPU ID reads.  An odd sequence count indicates a migration
+                 * is in-progress, while a completely different count indicates
+                 * a migration occurred since the count was last read.
+                 */
+                atomic_inc(&seq_cnt);
+                r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
+                TEST_ASSERT(!r, "sched_setaffinity failed, errno = %d (%s)",
+                            errno, strerror(errno));
+                atomic_inc(&seq_cnt);
+
+                CPU_CLR(cpu, &allowed_mask);
+
+                /*
+                 * Let the read-side get back into KVM_RUN to improve the odds
+                 * of task migration coinciding with KVM's run loop.
+                 */
+                usleep(1);
+        }
+        done = true;
+        return NULL;
+}
+
+int main(int argc, char *argv[])
+{
+        struct kvm_vm *vm;
+        u32 cpu, rseq_cpu;
+        int r, snapshot;
+
+        /* Tell stdout not to buffer its content */
+        setbuf(stdout, NULL);
+
+        r = sched_getaffinity(0, sizeof(possible_mask), &possible_mask);
+        TEST_ASSERT(!r, "sched_getaffinity failed, errno = %d (%s)", errno,
+                    strerror(errno));
+
+        if (CPU_COUNT(&possible_mask) < 2) {
+                print_skip("Only one CPU, task migration not possible\n");
+                exit(KSFT_SKIP);
+        }
+
+        sys_rseq(0);
+
+        /*
+         * Create and run a dummy VM that immediately exits to userspace via
+         * GUEST_SYNC, while concurrently migrating the process by setting its
+         * CPU affinity.
+         */
+        vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+        pthread_create(&migration_thread, NULL, migration_worker, 0);
+
+        while (!done) {
+                vcpu_run(vm, VCPU_ID);
+                TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
+                            "Guest failed?");
+
+                /*
+                 * Verify rseq's CPU matches sched's CPU.  Ensure migration
+                 * doesn't occur between sched_getcpu() and reading the rseq
+                 * cpu_id by rereading both if the sequence count changes, or
+                 * if the count is odd (migration in-progress).
+                 */
+                do {
+                        /*
+                         * Drop bit 0 to force a mismatch if the count is odd,
+                         * i.e. if a migration is in-progress.
+                         */
+                        snapshot = atomic_read(&seq_cnt) & ~1;
+                        smp_rmb();
+                        cpu = sched_getcpu();
+                        rseq_cpu = READ_ONCE(__rseq.cpu_id);
+                        smp_rmb();
+                } while (snapshot != atomic_read(&seq_cnt));
+
+                TEST_ASSERT(rseq_cpu == cpu,
+                            "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
+        }
+
+        pthread_join(migration_thread, NULL);
+
+        kvm_vm_free(vm);
+
+        sys_rseq(RSEQ_FLAG_UNREGISTER);
+
+        return 0;
+}
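
As a usage note, assuming the standard kselftest workflow: build with
"make -C tools/testing/selftests/kvm" (or "make -C tools/testing/selftests
TARGETS=kvm") and run the resulting ./rseq_test binary directly.  The test
skips itself when only one CPU is usable, since task migration is then
impossible.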