From patchwork Tue Aug 9 06:06:26 2022
X-Patchwork-Submitter: Gavin Shan <gshan@redhat.com>
X-Patchwork-Id: 12939331
From: Gavin Shan <gshan@redhat.com>
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, pbonzini@redhat.com, maz@kernel.org,
    oliver.upton@linux.dev, andrew.jones@linux.dev, seanjc@google.com,
    mathieu.desnoyers@efficios.com, fweimer@redhat.com, yihyu@redhat.com,
    shan.gavin@gmail.com
Subject: [PATCH 1/2] KVM: selftests: Make rseq compatible with glibc-2.35
Date: Tue, 9 Aug 2022 14:06:26 +0800
Message-Id: <20220809060627.115847-2-gshan@redhat.com>
In-Reply-To: <20220809060627.115847-1-gshan@redhat.com>
References: <20220809060627.115847-1-gshan@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

Starting with glibc-2.35, the rseq information is registered by glibc
itself as part of TLS setup. The test's own registration via
syscall(__NR_rseq) then always fails with EINVAL, because an rseq area
has already been registered for the thread. For example, on RHEL 9.1,
whose downstream glibc-2.34 carries the upstream glibc-2.35 rseq
support, the test fails as below.

  # ./rseq_test
  ==== Test Assertion Failure ====
    rseq_test.c:60: !r
    pid=112043 tid=112043 errno=22 - Invalid argument
       1	0x0000000000401973: main at rseq_test.c:226
       2	0x0000ffff84b6c79b: ?? ??:0
       3	0x0000ffff84b6c86b: ?? ??:0
       4	0x0000000000401b6f: _start at ??:?
    rseq failed, errno = 22 (Invalid argument)

  # rpm -aq | grep glibc-2
  glibc-2.34-39.el9.aarch64

Fix the issue by reusing the rseq information that glibc has already
registered in TLS when it exists. Otherwise, register our own rseq
information as before.

Reported-by: Yihuang Yu <yihyu@redhat.com>
Suggested-by: Florian Weimer <fweimer@redhat.com>
Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/rseq_test.c | 30 +++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
index a54d4d05a058..acb1bf1f06b3 100644
--- a/tools/testing/selftests/kvm/rseq_test.c
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -9,6 +9,7 @@
 #include <errno.h>
 #include <pthread.h>
 #include <sched.h>
+#include <dlfcn.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
@@ -36,6 +37,8 @@ static __thread volatile struct rseq __rseq = {
  */
 #define NR_TASK_MIGRATIONS 100000
 
+static bool __rseq_ownership;
+static volatile struct rseq *__rseq_info;
 static pthread_t migration_thread;
 static cpu_set_t possible_mask;
 static int min_cpu, max_cpu;
@@ -49,11 +52,33 @@ static void guest_code(void)
 	GUEST_SYNC(0);
 }
 
+static void sys_rseq_ownership(void)
+{
+	long *offset;
+	unsigned int *size, *flags;
+
+	offset = dlsym(RTLD_NEXT, "__rseq_offset");
+	size = dlsym(RTLD_NEXT, "__rseq_size");
+	flags = dlsym(RTLD_NEXT, "__rseq_flags");
+
+	if (offset && size && *size && flags) {
+		__rseq_ownership = false;
+		__rseq_info = (struct rseq *)((uintptr_t)__builtin_thread_pointer() +
+					      *offset);
+	} else {
+		__rseq_ownership = true;
+		__rseq_info = &__rseq;
+	}
+}
+
 static void sys_rseq(int flags)
 {
 	int r;
 
-	r = syscall(__NR_rseq, &__rseq, sizeof(__rseq), flags, RSEQ_SIG);
+	if (!__rseq_ownership)
+		return;
+
+	r = syscall(__NR_rseq, __rseq_info, sizeof(*__rseq_info), flags, RSEQ_SIG);
 	TEST_ASSERT(!r, "rseq failed, errno = %d (%s)", errno, strerror(errno));
 }
 
@@ -218,6 +243,7 @@ int main(int argc, char *argv[])
 
 	calc_min_max_cpu();
 
+	sys_rseq_ownership();
 	sys_rseq(0);
 
 	/*
@@ -256,7 +282,7 @@ int main(int argc, char *argv[])
 		 */
 		smp_rmb();
 		cpu = sched_getcpu();
-		rseq_cpu = READ_ONCE(__rseq.cpu_id);
+		rseq_cpu = READ_ONCE(__rseq_info->cpu_id);
 		smp_rmb();
 	} while (snapshot != atomic_read(&seq_cnt));
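
The patch's detection scheme relies on the public ABI that glibc-2.35
introduced: the dynamic symbols __rseq_offset, __rseq_size and
__rseq_flags describe the glibc-registered rseq area, which lives at
the thread pointer plus __rseq_offset. A minimal standalone sketch of
that probe follows; it is illustrative only and separate from the patch.
The file name rseq_probe.c is invented, glibc older than 2.34 needs
-ldl for dlsym(), and __builtin_thread_pointer() availability varies
by compiler and architecture (aarch64 GCC has it, x86-64 only in newer
compilers).

/*
 * rseq_probe.c - invented name, sketch only: detect whether glibc
 * (2.35+) has already registered an rseq area for this thread, the
 * same way sys_rseq_ownership() in the patch does.
 *
 * Build: gcc rseq_probe.c (add -ldl for glibc older than 2.34).
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdint.h>
#include <stdio.h>
#include <linux/rseq.h>

int main(void)
{
	/* Exported by glibc-2.35+; dlsym() keeps older glibc buildable. */
	long *offset = dlsym(RTLD_NEXT, "__rseq_offset");
	unsigned int *size = dlsym(RTLD_NEXT, "__rseq_size");

	if (offset && size && *size) {
		/* The registered rseq area sits at thread-pointer + offset. */
		struct rseq *rs = (struct rseq *)
			((uintptr_t)__builtin_thread_pointer() + *offset);

		printf("glibc owns rseq, cpu_id = %u\n", rs->cpu_id);
	} else {
		printf("no glibc rseq area, register our own\n");
	}

	return 0;
}

On a glibc-2.35 host this prints the current CPU ID from the
glibc-registered area; on older glibc it falls through to the else
branch, mirroring the __rseq_ownership split in the patch.
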
From patchwork Tue Aug 9 06:06:27 2022
X-Patchwork-Submitter: Gavin Shan <gshan@redhat.com>
X-Patchwork-Id: 12939332
From: Gavin Shan <gshan@redhat.com>
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, pbonzini@redhat.com, maz@kernel.org,
    oliver.upton@linux.dev, andrew.jones@linux.dev, seanjc@google.com,
    mathieu.desnoyers@efficios.com, fweimer@redhat.com, yihyu@redhat.com,
    shan.gavin@gmail.com
Subject: [PATCH 2/2] KVM: selftests: Use getcpu() instead of sched_getcpu() in rseq_test
Date: Tue, 9 Aug 2022 14:06:27 +0800
Message-Id: <20220809060627.115847-3-gshan@redhat.com>
In-Reply-To: <20220809060627.115847-1-gshan@redhat.com>
References: <20220809060627.115847-1-gshan@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

As Florian Weimer pointed out, sched_getcpu() is implementation
dependent: glibc may simply return the CPU ID from the registered rseq
information rather than query the kernel. In that case, comparing the
return value of sched_getcpu() with the CPU ID fetched from the
registered rseq information is pointless, because both come from the
same source. Fix the issue by replacing sched_getcpu() with getcpu(),
as Florian suggested, and adjust the comments accordingly.

Reported-by: Yihuang Yu <yihyu@redhat.com>
Suggested-by: Florian Weimer <fweimer@redhat.com>
Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/rseq_test.c | 32 ++++++++++++-------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
index acb1bf1f06b3..621b9b15049b 100644
--- a/tools/testing/selftests/kvm/rseq_test.c
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -126,7 +126,7 @@ static void *migration_worker(void *__rseq_tid)
 		atomic_inc(&seq_cnt);
 
 		/*
-		 * Ensure the odd count is visible while sched_getcpu() isn't
+		 * Ensure the odd count is visible while getcpu() isn't
 		 * stable, i.e. while changing affinity is in-progress.
 		 */
 		smp_wmb();
@@ -167,10 +167,10 @@ static void *migration_worker(void *__rseq_tid)
 		 *     check completes.
 		 *
 		 *  3. To ensure the read-side makes efficient forward progress,
-		 *     e.g. if sched_getcpu() involves a syscall. Stalling the
-		 *     read-side means the test will spend more time waiting for
-		 *     sched_getcpu() to stabilize and less time trying to hit
-		 *     the timing-dependent bug.
+		 *     e.g. if getcpu() involves a syscall. Stalling the read-side
+		 *     means the test will spend more time waiting for getcpu()
+		 *     to stabilize and less time trying to hit the timing-dependent
+		 *     bug.
 		 *
 		 * Because any bug in this area is likely to be timing-dependent,
 		 * run with a range of delays at 1us intervals from 1us to 10us
@@ -264,9 +264,9 @@ int main(int argc, char *argv[])
 
 		/*
 		 * Verify rseq's CPU matches sched's CPU. Ensure migration
-		 * doesn't occur between sched_getcpu() and reading the rseq
-		 * cpu_id by rereading both if the sequence count changes, or
-		 * if the count is odd (migration in-progress).
+		 * doesn't occur between getcpu() and reading the rseq cpu_id
+		 * by rereading both if the sequence count changes, or if the
+		 * count is odd (migration in-progress).
 		 */
 		do {
 			/*
@@ -276,15 +276,15 @@ int main(int argc, char *argv[])
 			snapshot = atomic_read(&seq_cnt) & ~1;
 
 			/*
-			 * Ensure reading sched_getcpu() and rseq.cpu_id
-			 * complete in a single "no migration" window, i.e. are
-			 * not reordered across the seq_cnt reads.
+			 * Ensure reading getcpu() and rseq.cpu_id complete in
+			 * a single "no migration" window, i.e. are not reordered
+			 * across the seq_cnt reads.
 			 */
 			smp_rmb();
-			cpu = sched_getcpu();
+			r = getcpu(&cpu, NULL);
 			rseq_cpu = READ_ONCE(__rseq_info->cpu_id);
 			smp_rmb();
-		} while (snapshot != atomic_read(&seq_cnt));
+		} while (r || snapshot != atomic_read(&seq_cnt));
 
 		TEST_ASSERT(rseq_cpu == cpu, "rseq CPU = %d, sched CPU = %d\n",
 			    rseq_cpu, cpu);
@@ -293,9 +293,9 @@ int main(int argc, char *argv[])
 	/*
 	 * Sanity check that the test was able to enter the guest a reasonable
 	 * number of times, e.g. didn't get stalled too often/long waiting for
-	 * sched_getcpu() to stabilize. A 2:1 migration:KVM_RUN ratio is a
-	 * fairly conservative ratio on x86-64, which can do _more_ KVM_RUNs
-	 * than migrations given the 1us+ delay in the migration task.
+	 * getcpu() to stabilize. A 2:1 migration:KVM_RUN ratio is a fairly
+	 * conservative ratio on x86-64, which can do _more_ KVM_RUNs than
+	 * migrations given the 1us+ delay in the migration task.
 	 */
 	TEST_ASSERT(i > (NR_TASK_MIGRATIONS / 2),
 		    "Only performed %d KVM_RUNs, task stalled too much?\n", i);
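
For context on the retry loop above: the patch folds getcpu()'s return
code into the loop condition, so a non-zero r means the CPU ID is
unusable and the iteration is retried, just as if the sequence count
had changed. Unlike sched_getcpu(), whose glibc implementation may read
the registered rseq area (making the test's comparison circular),
getcpu() goes to the kernel via syscall or vDSO. A minimal sketch of
the call, separate from the patch; getcpu_demo.c is an invented name,
and the glibc wrapper for getcpu() exists since glibc-2.29.

/*
 * getcpu_demo.c - invented name, sketch only: getcpu() (glibc 2.29+)
 * reports the current CPU from the kernel and returns a status the
 * caller can check and retry on.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	unsigned int cpu, node;

	/* 0 on success; -1 with errno set on failure. */
	if (getcpu(&cpu, &node)) {
		perror("getcpu");
		return 1;
	}

	printf("running on cpu %u, node %u\n", cpu, node);
	return 0;
}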