From patchwork Wed Aug 10 17:58:28 2022
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 12940868
Date: Wed, 10 Aug 2022 17:58:28 +0000
In-Reply-To: <20220810175830.2175089-1-coltonlewis@google.com>
Message-Id: <20220810175830.2175089-2-coltonlewis@google.com>
Subject: [PATCH 1/3] KVM: selftests: Add random table to randomize memory access
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, maz@kernel.org, dmatlack@google.com, seanjc@google.com,
    oupton@google.com, ricarkol@google.com, Colton Lewis

Linear access through all pages does not seem to replicate performance
problems with realistic dirty logging workloads. Make the test more
sophisticated through random access.

Each vcpu has its own sequence of random numbers that are refilled after
every iteration. Having the main thread fill the table for every vcpu is
less efficient than having each vcpu generate its own numbers, but this
ensures threading nondeterminism won't destroy reproducibility with a
given random seed.

Signed-off-by: Colton Lewis
---
 .../selftests/kvm/dirty_log_perf_test.c       | 13 ++++-
 .../selftests/kvm/include/perf_test_util.h    |  4 ++
 .../selftests/kvm/lib/perf_test_util.c        | 47 +++++++++++++++++++
 3 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index f99e39a672d3..80a1cbe7fbb0 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -132,6 +132,7 @@ struct test_params {
 	bool partition_vcpu_memory_access;
 	enum vm_mem_backing_src_type backing_src;
 	int slots;
+	uint32_t random_seed;
 };
 
 static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable)
@@ -243,6 +244,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	/* Start the iterations */
 	iteration = 0;
 	host_quit = false;
+	srandom(p->random_seed);
+	pr_info("Random seed: %d\n", p->random_seed);
+	alloc_random_table(nr_vcpus, guest_percpu_mem_size >> vm->page_shift);
+	fill_random_table(nr_vcpus, guest_percpu_mem_size >> vm->page_shift);
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	for (i = 0; i < nr_vcpus; i++)
@@ -270,6 +275,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
 	while (iteration < p->iterations) {
+		fill_random_table(nr_vcpus, guest_percpu_mem_size >> vm->page_shift);
 		/*
 		 * Incrementing the iteration number will start the vCPUs
 		 * dirtying memory again.
@@ -380,6 +386,7 @@ static void help(char *name)
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
+	printf(" -r: specify the starting random seed.\n");
 	backing_src_help("-s");
 	printf(" -x: Split the memory region into this number of memslots.\n"
 	       "     (default: 1)\n");
@@ -396,6 +403,7 @@ int main(int argc, char *argv[])
 		.partition_vcpu_memory_access = true,
 		.backing_src = DEFAULT_VM_MEM_SRC,
 		.slots = 1,
+		.random_seed = time(NULL),
 	};
 	int opt;
 
@@ -406,7 +414,7 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:os:x:")) != -1) {
+	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:or:s:x:")) != -1) {
 		switch (opt) {
 		case 'e':
 			/* 'e' is for evil. */
@@ -442,6 +450,9 @@ int main(int argc, char *argv[])
 		case 'o':
 			p.partition_vcpu_memory_access = false;
 			break;
+		case 'r':
+			p.random_seed = atoi(optarg);
+			break;
 		case 's':
 			p.backing_src = parse_backing_src_type(optarg);
 			break;
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index eaa88df0555a..597875d0c3db 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -44,6 +44,10 @@ struct perf_test_args {
 };
 
 extern struct perf_test_args perf_test_args;
+extern uint32_t **random_table;
+
+void alloc_random_table(uint32_t nr_vcpus, uint32_t nr_randoms);
+void fill_random_table(uint32_t nr_vcpus, uint32_t nr_randoms);
 
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 				   uint64_t vcpu_memory_bytes, int slots,
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9618b37c66f7..b04e8d2c0f37 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -9,6 +9,10 @@
 #include "processor.h"
 
 struct perf_test_args perf_test_args;
+/* This pointer points to guest memory and must be converted with
+ * addr_gva2hva to be accessed from the host.
+ */
+uint32_t **random_table;
 
 /*
  * Guest virtual memory offset of the testing memory slot.
@@ -70,6 +74,49 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 	}
 }
 
+void alloc_random_table(uint32_t nr_vcpus, uint32_t nr_randoms)
+{
+	struct perf_test_args *pta = &perf_test_args;
+	uint32_t **host_random_table;
+
+	random_table = (uint32_t **)vm_vaddr_alloc(
+		pta->vm,
+		nr_vcpus * sizeof(uint32_t *),
+		(vm_vaddr_t)0);
+	host_random_table = addr_gva2hva(pta->vm, (vm_vaddr_t)random_table);
+	pr_debug("Random start addr: %p %p.\n", random_table, host_random_table);
+
+	for (uint32_t i = 0; i < nr_vcpus; i++) {
+		host_random_table[i] = (uint32_t *)vm_vaddr_alloc(
+			pta->vm,
+			nr_randoms * sizeof(uint32_t),
+			(vm_vaddr_t)0);
+		pr_debug("Random row addr: %p %p.\n",
+			 host_random_table[i],
+			 addr_gva2hva(pta->vm, (vm_vaddr_t)host_random_table[i]));
+	}
+}
+
+void fill_random_table(uint32_t nr_vcpus, uint32_t nr_randoms)
+{
+	struct perf_test_args *pta = &perf_test_args;
+	uint32_t **host_random_table = addr_gva2hva(pta->vm, (vm_vaddr_t)random_table);
+	uint32_t *host_row;
+
+	pr_debug("Random start addr: %p %p.\n", random_table, host_random_table);
+
+	for (uint32_t i = 0; i < nr_vcpus; i++) {
+		host_row = addr_gva2hva(pta->vm, (vm_vaddr_t)host_random_table[i]);
+		pr_debug("Random row addr: %p %p.\n", host_random_table[i], host_row);
+
+		for (uint32_t j = 0; j < nr_randoms; j++)
+			host_row[j] = random();
+
+		pr_debug("New randoms row %d: %d, %d, %d...\n",
+			 i, host_row[0], host_row[1], host_row[2]);
+	}
+}
+
 void perf_test_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
 			   struct kvm_vcpu *vcpus[],
 			   uint64_t vcpu_memory_bytes,
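
A minimal standalone sketch of the reproducibility argument above (not taken
from the patch; the NR_VCPUS/NR_RANDOMS sizes, the fill_table() name and the
plain host arrays are illustrative assumptions): because a single host thread
seeds the PRNG once and fills every vcpu's row in a fixed order, the values
each vcpu consumes depend only on the seed, not on how the vcpu threads are
later scheduled.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define NR_VCPUS   4
#define NR_RANDOMS 8

/* Stand-in for the guest-memory table; ordinary host memory here. */
static uint32_t table[NR_VCPUS][NR_RANDOMS];

/* Fill every row from one seeded PRNG, in a fixed vcpu order. */
static void fill_table(uint32_t seed)
{
	srandom(seed);
	for (int i = 0; i < NR_VCPUS; i++)
		for (int j = 0; j < NR_RANDOMS; j++)
			table[i][j] = random();
}

int main(void)
{
	/* Same seed => identical per-vcpu rows on every run, regardless of
	 * how the consumer threads are scheduled afterwards.
	 */
	fill_table(12345);
	for (int i = 0; i < NR_VCPUS; i++)
		printf("vcpu %d starts with %u, %u, %u...\n", i,
		       (unsigned)table[i][0], (unsigned)table[i][1],
		       (unsigned)table[i][2]);
	return 0;
}
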
From patchwork Wed Aug 10 17:58:29 2022
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 12940869
Date: Wed, 10 Aug 2022 17:58:29 +0000
In-Reply-To: <20220810175830.2175089-1-coltonlewis@google.com>
Message-Id: <20220810175830.2175089-3-coltonlewis@google.com>
Subject: [PATCH 2/3] KVM: selftests: Randomize which pages are written vs read
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, maz@kernel.org, dmatlack@google.com, seanjc@google.com,
    oupton@google.com, ricarkol@google.com, Colton Lewis

Randomize which pages are written vs read by taking each page's entry in
the random number table modulo 100. This changes how the -f argument
works: it is now a percentage from 0 to 100 inclusive giving the
percentage of accesses that are writes. The default is still 100 percent
writes.
Signed-off-by: Colton Lewis
---
 tools/testing/selftests/kvm/dirty_log_perf_test.c | 12 +++++++-----
 tools/testing/selftests/kvm/lib/perf_test_util.c  |  4 ++--
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 80a1cbe7fbb0..dcc5d44fc757 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -381,8 +381,8 @@ static void help(char *name)
 	       "     (default: 1G)\n");
 	printf(" -f: specify the fraction of pages which should be written to\n"
 	       "     as opposed to simply read, in the form\n"
-	       "     1/<fraction of pages to write>.\n"
-	       "     (default: 1 i.e. all pages are written to.)\n");
+	       "     [0-100]%% of pages to write.\n"
+	       "     (default: 100 i.e. all pages are written to.)\n");
 	printf(" -v: specify the number of vCPUs to run.\n");
 	printf(" -o: Overlap guest memory accesses instead of partitioning\n"
 	       "     them into a separate region of memory for each vCPU.\n");
@@ -399,7 +399,7 @@ int main(int argc, char *argv[])
 	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
 	struct test_params p = {
 		.iterations = TEST_HOST_LOOP_N,
-		.wr_fract = 1,
+		.wr_fract = 100,
 		.partition_vcpu_memory_access = true,
 		.backing_src = DEFAULT_VM_MEM_SRC,
 		.slots = 1,
@@ -439,8 +439,10 @@ int main(int argc, char *argv[])
 			break;
 		case 'f':
 			p.wr_fract = atoi(optarg);
-			TEST_ASSERT(p.wr_fract >= 1,
-				    "Write fraction cannot be less than one");
+			TEST_ASSERT(p.wr_fract >= 0,
+				    "Write fraction cannot be less than 0");
+			TEST_ASSERT(p.wr_fract <= 100,
+				    "Write fraction cannot be greater than 100");
 			break;
 		case 'v':
 			nr_vcpus = atoi(optarg);
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index b04e8d2c0f37..3c7b93349fef 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -64,7 +64,7 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 	for (i = 0; i < pages; i++) {
 		uint64_t addr = gva + (i * pta->guest_page_size);
 
-		if (i % pta->wr_fract == 0)
+		if (random_table[vcpu_idx][i] % 100 < pta->wr_fract)
 			*(uint64_t *)addr = 0x0123456789ABCDEF;
 		else
 			READ_ONCE(*(uint64_t *)addr);
@@ -168,7 +168,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
 	/* By default vCPUs will write to memory. */
-	pta->wr_fract = 1;
+	pta->wr_fract = 100;
 
 	/*
 	 * Snapshot the non-huge page size. This is used by the guest code to
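
A small standalone check of the modulo-100 selection above (not taken from the
patch; the seed and the 30 percent write fraction are arbitrary assumptions):
an access becomes a write when its random value modulo 100 falls below the
configured write fraction, so over many pages roughly that percentage of
accesses end up as writes.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
	const int wr_fract = 30;      /* e.g. 30% writes, 70% reads */
	const int samples = 1000000;
	int writes = 0;

	srandom(42);
	for (int i = 0; i < samples; i++) {
		uint32_t r = random();

		/* Same predicate the guest code applies per page. */
		if (r % 100 < wr_fract)
			writes++;
	}

	/* Prints a ratio close to 0.30. */
	printf("write ratio: %.3f\n", (double)writes / samples);
	return 0;
}
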
From patchwork Wed Aug 10 17:58:30 2022
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 12940870
Date: Wed, 10 Aug 2022 17:58:30 +0000
In-Reply-To: <20220810175830.2175089-1-coltonlewis@google.com>
Message-Id: <20220810175830.2175089-4-coltonlewis@google.com>
Subject: [PATCH 3/3] KVM: selftests: Randomize page access order
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, maz@kernel.org, dmatlack@google.com, seanjc@google.com,
    oupton@google.com, ricarkol@google.com, Colton Lewis

Add the ability to use random_table to randomize the order in which pages
are accessed. Add the -a argument to enable this new behavior. This should
make accesses less predictable and make for a more realistic test. It
includes the possibility that the same pages may be hit multiple times
during an iteration.

Signed-off-by: Colton Lewis
---
 .../testing/selftests/kvm/dirty_log_perf_test.c    | 11 +++++++++--
 .../selftests/kvm/include/perf_test_util.h         |  2 ++
 .../testing/selftests/kvm/lib/perf_test_util.c     | 17 ++++++++++++++++-
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index dcc5d44fc757..265cb4f7e088 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -132,6 +132,7 @@ struct test_params {
 	bool partition_vcpu_memory_access;
 	enum vm_mem_backing_src_type backing_src;
 	int slots;
+	bool random_access;
 	uint32_t random_seed;
 };
 
@@ -227,6 +228,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 				 p->partition_vcpu_memory_access);
 
 	perf_test_set_wr_fract(vm, p->wr_fract);
+	perf_test_set_random_access(vm, p->random_access);
 
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm->page_shift;
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
@@ -357,10 +359,11 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 static void help(char *name)
 {
 	puts("");
-	printf("usage: %s [-h] [-i iterations] [-p offset] [-g] "
+	printf("usage: %s [-h] [-a] [-r random seed] [-i iterations] [-p offset] [-g] "
 	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]"
 	       "[-x memslots]\n", name);
 	puts("");
+	printf(" -a: access memory randomly rather than in order.\n");
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
 	       TEST_HOST_LOOP_N);
 	printf(" -g: Do not enable KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. This\n"
@@ -403,6 +406,7 @@ int main(int argc, char *argv[])
 		.partition_vcpu_memory_access = true,
 		.backing_src = DEFAULT_VM_MEM_SRC,
 		.slots = 1,
+		.random_access = false,
 		.random_seed = time(NULL),
 	};
 	int opt;
@@ -414,8 +418,11 @@ int main(int argc, char *argv[])
 
 	guest_modes_append_default();
 
-	while ((opt = getopt(argc, argv, "eghi:p:m:nb:f:v:or:s:x:")) != -1) {
+	while ((opt = getopt(argc, argv, "aeghi:p:m:nb:f:v:or:s:x:")) != -1) {
 		switch (opt) {
+		case 'a':
+			p.random_access = true;
+			break;
 		case 'e':
 			/* 'e' is for evil. */
 			run_vcpus_while_disabling_dirty_logging = true;
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 597875d0c3db..6c6f81ce2216 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -39,6 +39,7 @@ struct perf_test_args {
 
 	/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
 	bool nested;
+	bool random_access;
 
 	struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS];
 };
@@ -56,6 +57,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 
 void perf_test_destroy_vm(struct kvm_vm *vm);
 void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract);
+void perf_test_set_random_access(struct kvm_vm *vm, bool random_access);
 void perf_test_start_vcpu_threads(int vcpus,
 				  void (*vcpu_fn)(struct perf_test_vcpu_args *));
 void perf_test_join_vcpu_threads(int vcpus);
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 3c7b93349fef..9838d1ad9166 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -52,6 +52,9 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 	struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_idx];
 	uint64_t gva;
 	uint64_t pages;
+	uint64_t addr;
+	bool random_access = pta->random_access;
+	bool populated = false;
 	int i;
 
 	gva = vcpu_args->gva;
@@ -62,7 +65,11 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 
 	while (true) {
 		for (i = 0; i < pages; i++) {
-			uint64_t addr = gva + (i * pta->guest_page_size);
+			if (populated && random_access)
+				addr = gva +
+					((random_table[vcpu_idx][i] % pages) * pta->guest_page_size);
+			else
+				addr = gva + (i * pta->guest_page_size);
 
 			if (random_table[vcpu_idx][i] % 100 < pta->wr_fract)
 				*(uint64_t *)addr = 0x0123456789ABCDEF;
@@ -70,6 +77,7 @@ void perf_test_guest_code(uint32_t vcpu_idx)
 				READ_ONCE(*(uint64_t *)addr);
 		}
 
+		populated = true;
 		GUEST_SYNC(1);
 	}
 }
@@ -169,6 +177,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 
 	/* By default vCPUs will write to memory. */
 	pta->wr_fract = 100;
+	pta->random_access = false;
 
 	/*
 	 * Snapshot the non-huge page size. This is used by the guest code to
@@ -276,6 +285,12 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract)
 	sync_global_to_guest(vm, perf_test_args);
 }
 
+void perf_test_set_random_access(struct kvm_vm *vm, bool random_access)
+{
+	perf_test_args.random_access = random_access;
+	sync_global_to_guest(vm, perf_test_args);
+}
+
 uint64_t __weak perf_test_nested_pages(int nr_vcpus)
 {
 	return 0;
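
A standalone illustration of the "same pages may be hit multiple times" point
above (not taken from the patch; the page count and seed are arbitrary
assumptions): drawing page indices with replacement, as
random_table[vcpu_idx][i] % pages does, means one pass leaves roughly 1/e
(about 37%) of the pages untouched while others are accessed more than once.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGES 1024

int main(void)
{
	unsigned int hits[PAGES];
	unsigned int untouched = 0;

	memset(hits, 0, sizeof(hits));
	srandom(1);

	/* One "iteration": PAGES accesses, each to a random page index. */
	for (int i = 0; i < PAGES; i++)
		hits[random() % PAGES]++;

	for (int i = 0; i < PAGES; i++)
		if (hits[i] == 0)
			untouched++;

	printf("%u of %d pages untouched this iteration\n", untouched, PAGES);
	return 0;
}
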