From patchwork Fri Apr 21 16:52:57 2023
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 21 Apr 2023 09:52:57 -0700
Subject: [PATCH 1/9] KVM: selftests: Allow dirty_log_perf_test to clear dirty memory in chunks
Message-ID: <20230421165305.804301-2-vipinsh@google.com>
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>

In dirty_log_perf_test, provide option '-k' to specify the chunk size and
clear dirty memory in chunks in each iteration. If this option is not
provided, fall back to the old behavior of clearing the whole memslot in
one call per iteration.

In production environments the whole memslot is rarely cleared in a single
call; instead, the clearing operation is split across multiple calls to
reduce the time between clearing memory and sending it to a remote host.
This change mimics that production use case and allows metrics to be
collected for it.

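The clearing loop introduced below boils down to the following sketch
(illustrative only, using the patch's own names; "bitmap" and "slot" stand
for the per-memslot values):

    uint64_t pages_per_clear = clear_chunk_size / getpagesize();
    uint64_t from = 0;

    while (from < pages_per_slot) {
            uint64_t len = pages_per_clear;

            /* Clamp the last chunk so it never runs past the end of the slot. */
            if (len > pages_per_slot - from)
                    len = pages_per_slot - from;

            kvm_vm_clear_dirty_log(vm, slot, bitmap, from, len);
            from += len;
    }

With the default clear_chunk_size of UINT64_MAX, pages_per_clear exceeds
pages_per_slot and the loop degenerates to a single call, preserving the
old behavior.
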
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 .../selftests/kvm/dirty_log_perf_test.c       | 19 ++++++++++++---
 .../testing/selftests/kvm/include/memstress.h | 12 ++++++++--
 tools/testing/selftests/kvm/lib/memstress.c   | 24 ++++++++++++++-----
 3 files changed, 44 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 416719e20518..0852a7ba42e1 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -134,6 +134,7 @@ struct test_params {
     uint32_t write_percent;
     uint32_t random_seed;
     bool random_access;
+    uint64_t clear_chunk_size;
 };
 
 static void run_test(enum vm_guest_mode mode, void *arg)
@@ -144,6 +145,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
     uint64_t guest_num_pages;
     uint64_t host_num_pages;
     uint64_t pages_per_slot;
+    uint64_t pages_per_clear;
     struct timespec start;
     struct timespec ts_diff;
     struct timespec get_dirty_log_total = (struct timespec){0};
@@ -164,6 +166,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
     guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
     host_num_pages = vm_num_host_pages(mode, guest_num_pages);
     pages_per_slot = host_num_pages / p->slots;
+    pages_per_clear = p->clear_chunk_size / getpagesize();
 
     bitmaps = memstress_alloc_bitmaps(p->slots, pages_per_slot);
 
@@ -244,8 +247,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
         if (dirty_log_manual_caps) {
             clock_gettime(CLOCK_MONOTONIC, &start);
-            memstress_clear_dirty_log(vm, bitmaps, p->slots,
-                          pages_per_slot);
+            memstress_clear_dirty_log_in_chunks(vm, bitmaps, p->slots,
+                                pages_per_slot,
+                                pages_per_clear);
             ts_diff = timespec_elapsed(start);
             clear_dirty_log_total = timespec_add(clear_dirty_log_total,
                              ts_diff);
@@ -343,6 +347,11 @@ static void help(char *name)
            "     To leave the application task unpinned, drop the final entry:\n\n"
            "     ./dirty_log_perf_test -v 3 -c 22,23,24\n\n"
            "     (default: no pinning)\n");
+    printf(" -k: Specify the chunk size in which dirty memory gets cleared\n"
+           "     in memslots in each iteration. If the size is bigger than\n"
+           "     the memslot size then whole memslot is cleared in one call.\n"
+           "     Size must be aligned to the host page size. e.g. 10M or 3G\n"
+           "     (default: UINT64_MAX, clears whole memslot in one call)\n");
     puts("");
     exit(0);
 }
@@ -358,6 +367,7 @@ int main(int argc, char *argv[])
         .slots = 1,
         .random_seed = 1,
         .write_percent = 100,
+        .clear_chunk_size = UINT64_MAX,
     };
     int opt;
 
@@ -368,7 +378,7 @@ int main(int argc, char *argv[])
 
     guest_modes_append_default();
 
-    while ((opt = getopt(argc, argv, "ab:c:eghi:m:nop:r:s:v:x:w:")) != -1) {
+    while ((opt = getopt(argc, argv, "ab:c:eghi:k:m:nop:r:s:v:x:w:")) != -1) {
         switch (opt) {
         case 'a':
             p.random_access = true;
@@ -392,6 +402,9 @@ int main(int argc, char *argv[])
         case 'i':
             p.iterations = atoi_positive("Number of iterations", optarg);
             break;
+        case 'k':
+            p.clear_chunk_size = parse_size(optarg);
+            break;
         case 'm':
             guest_modes_cmdline(optarg);
             break;
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index ce4e603050ea..2acc93f76fc3 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -75,8 +75,16 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[]);
 void memstress_enable_dirty_logging(struct kvm_vm *vm, int slots);
 void memstress_disable_dirty_logging(struct kvm_vm *vm, int slots);
 void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots);
-void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
-                   int slots, uint64_t pages_per_slot);
+void memstress_clear_dirty_log_in_chunks(struct kvm_vm *vm,
+                     unsigned long *bitmaps[], int slots,
+                     uint64_t pages_per_slot,
+                     uint64_t pages_per_clear);
+static inline void memstress_clear_dirty_log(struct kvm_vm *vm,
+                         unsigned long *bitmaps[], int slots,
+                         uint64_t pages_per_slot) {
+    memstress_clear_dirty_log_in_chunks(vm, bitmaps, slots, pages_per_slot,
+                        pages_per_slot);
+}
 unsigned long **memstress_alloc_bitmaps(int slots, uint64_t pages_per_slot);
 void memstress_free_bitmaps(unsigned long *bitmaps[], int slots);
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 3632956c6bcf..e0c701ab4e9a 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -355,16 +355,28 @@ void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots)
     }
 }
 
-void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
-                   int slots, uint64_t pages_per_slot)
+void memstress_clear_dirty_log_in_chunks(struct kvm_vm *vm,
+                     unsigned long *bitmaps[], int slots,
+                     uint64_t pages_per_slot,
+                     uint64_t pages_per_clear)
 {
-    int i;
+    int i, slot;
+    uint64_t from, clear_pages_count;
 
     for (i = 0; i < slots; i++) {
-        int slot = MEMSTRESS_MEM_SLOT_INDEX + i;
-
-        kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], 0, pages_per_slot);
+        slot = MEMSTRESS_MEM_SLOT_INDEX + i;
+        from = 0;
+        clear_pages_count = pages_per_clear;
+
+        while (from < pages_per_slot) {
+            if (from + clear_pages_count > pages_per_slot)
+                clear_pages_count = pages_per_slot - from;
+            kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], from,
+                           clear_pages_count);
+            from += clear_pages_count;
+        }
     }
+
 }
 
 unsigned long **memstress_alloc_bitmaps(int slots, uint64_t pages_per_slot)
From patchwork Fri Apr 21 16:52:58 2023
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 21 Apr 2023 09:52:58 -0700
Subject: [PATCH 2/9] KVM: selftests: Add optional delay between consecutive Clear-Dirty-Log calls
Message-ID: <20230421165305.804301-3-vipinsh@google.com>
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>

In dirty_log_perf_test, add option '-l' to wait between consecutive
Clear-Dirty-Log calls. The delay is given in milliseconds. This allows
dirty_log_perf_test to mimic real-world use, where after clearing dirty
memory some time is spent transferring that memory before making a
subsequent Clear-Dirty-Log call.

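The delay itself is a plain nanosleep() between chunk clears; as a sketch,
the -l millisecond value maps to a struct timespec with the same arithmetic
the helper below uses:

    struct timespec wait = {
            .tv_sec  = wait_ms / 1000,
            .tv_nsec = (wait_ms % 1000) * 1000000ull,
    };

    /* Sleep only when a delay was actually requested. */
    if (wait_ms)
            nanosleep(&wait, NULL);
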
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 .../testing/selftests/kvm/dirty_log_perf_test.c | 17 +++++++++++++++--
 tools/testing/selftests/kvm/include/memstress.h |  5 +++--
 tools/testing/selftests/kvm/lib/memstress.c     | 10 +++++++++-
 3 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 0852a7ba42e1..338f03a4a550 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -135,6 +135,7 @@ struct test_params {
     uint32_t random_seed;
     bool random_access;
     uint64_t clear_chunk_size;
+    int clear_chunk_wait_time_ms;
 };
 
 static void run_test(enum vm_guest_mode mode, void *arg)
@@ -249,7 +250,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
             clock_gettime(CLOCK_MONOTONIC, &start);
             memstress_clear_dirty_log_in_chunks(vm, bitmaps, p->slots,
                                 pages_per_slot,
-                                pages_per_clear);
+                                pages_per_clear,
+                                p->clear_chunk_wait_time_ms);
             ts_diff = timespec_elapsed(start);
             clear_dirty_log_total = timespec_add(clear_dirty_log_total,
                              ts_diff);
@@ -352,6 +354,11 @@ static void help(char *name)
            "     the memslot size then whole memslot is cleared in one call.\n"
            "     Size must be aligned to the host page size. e.g. 10M or 3G\n"
            "     (default: UINT64_MAX, clears whole memslot in one call)\n");
+    printf(" -l: Specify time in milliseconds to wait after Clear-Dirty-Log\n"
+           "     call. This allows to mimic use cases where flow is to get\n"
+           "     dirty log followed by multiple clear dirty log calls and\n"
+           "     sending corresponding memory to destination (in this test\n"
+           "     sending will be just idle waiting)\n");
     puts("");
     exit(0);
 }
@@ -368,6 +375,7 @@ int main(int argc, char *argv[])
         .random_seed = 1,
         .write_percent = 100,
         .clear_chunk_size = UINT64_MAX,
+        .clear_chunk_wait_time_ms = 0,
     };
     int opt;
 
@@ -378,7 +386,7 @@ int main(int argc, char *argv[])
 
     guest_modes_append_default();
 
-    while ((opt = getopt(argc, argv, "ab:c:eghi:k:m:nop:r:s:v:x:w:")) != -1) {
+    while ((opt = getopt(argc, argv, "ab:c:eghi:k:l:m:nop:r:s:v:x:w:")) != -1) {
         switch (opt) {
         case 'a':
             p.random_access = true;
@@ -405,6 +413,11 @@ int main(int argc, char *argv[])
         case 'k':
             p.clear_chunk_size = parse_size(optarg);
             break;
+        case 'l':
+            p.clear_chunk_wait_time_ms =
+                atoi_non_negative("Clear dirty log chunks wait time",
+                          optarg);
+            break;
         case 'm':
             guest_modes_cmdline(optarg);
             break;
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index 2acc93f76fc3..01fdcea80360 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -78,12 +78,13 @@ void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots);
 void memstress_clear_dirty_log_in_chunks(struct kvm_vm *vm,
                      unsigned long *bitmaps[], int slots,
                      uint64_t pages_per_slot,
-                     uint64_t pages_per_clear);
+                     uint64_t pages_per_clear,
+                     int wait_ms);
 static inline void memstress_clear_dirty_log(struct kvm_vm *vm,
                          unsigned long *bitmaps[], int slots,
                          uint64_t pages_per_slot) {
     memstress_clear_dirty_log_in_chunks(vm, bitmaps, slots, pages_per_slot,
-                        pages_per_slot);
+                        pages_per_slot, 0);
 }
 unsigned long **memstress_alloc_bitmaps(int slots, uint64_t pages_per_slot);
 void memstress_free_bitmaps(unsigned long *bitmaps[], int slots);
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index e0c701ab4e9a..483ecbc53a5b 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -358,10 +358,15 @@ void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots)
 void memstress_clear_dirty_log_in_chunks(struct kvm_vm *vm,
                      unsigned long *bitmaps[], int slots,
                      uint64_t pages_per_slot,
-                     uint64_t pages_per_clear)
+                     uint64_t pages_per_clear,
+                     int wait_ms)
 {
     int i, slot;
     uint64_t from, clear_pages_count;
+    struct timespec wait = {
+        .tv_sec = wait_ms / 1000,
+        .tv_nsec = (wait_ms % 1000) * 1000000ull,
+    };
 
     for (i = 0; i < slots; i++) {
         slot = MEMSTRESS_MEM_SLOT_INDEX + i;
@@ -374,6 +379,9 @@ void memstress_clear_dirty_log_in_chunks(struct kvm_vm *vm,
             kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], from,
                            clear_pages_count);
             from += clear_pages_count;
+            if (wait_ms)
+                nanosleep(&wait, NULL);
+
         }
     }
 
From patchwork Fri Apr 21 16:52:59 2023
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 21 Apr 2023 09:52:59 -0700
Subject: [PATCH 3/9] KVM: selftests: Pass count of read and write accesses from guest to host
Message-ID: <20230421165305.804301-4-vipinsh@google.com>
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>

Pass the number of read and write accesses done by the memstress guest
code to userspace. These counts provide one way to measure vCPU
performance during memstress and dirty-logging-related tests. For
example, in dirty_log_perf_test they can be used to measure the impact of
the dirty log and clear log APIs on vCPU performance.

In the current dirty_log_perf_test, each vCPU executes in lockstep with
the current iteration in userspace, so these access counts do not provide
much useful information beyond observing individual vCPUs' read vs. write
accesses.

However, in future commits, dirty_log_perf_test behavior will be changed
to allow vCPUs to execute independently of userspace iterations. This
will mimic a real-world workload where the guest keeps executing while
the VMM collects and clears dirty logs separately. With read and write
accesses known for each vCPU, the impact of the get and clear dirty log
APIs can be quantified.

Note that these access counts are not a fully reliable measure of vCPU
performance, since vCPU scheduling can affect progress.

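For reference, the host side consumes these counts in a later patch of
this series roughly as follows (sketch; GUEST_SYNC_ARGS(1, reads, writes,
0, 0) places the two counts in args[2] and args[3] of the ucall):

    struct ucall uc;

    TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_SYNC, "Invalid guest sync status");
    /* Per-iteration read/write counts reported by the guest. */
    reads  += uc.args[2];
    writes += uc.args[3];
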
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 tools/testing/selftests/kvm/lib/memstress.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 483ecbc53a5b..9c2e360e610f 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -50,6 +50,8 @@ void memstress_guest_code(uint32_t vcpu_idx)
     struct memstress_args *args = &memstress_args;
     struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
     struct guest_random_state rand_state;
+    uint64_t write_access;
+    uint64_t read_access;
     uint64_t gva;
     uint64_t pages;
     uint64_t addr;
@@ -65,6 +67,8 @@ void memstress_guest_code(uint32_t vcpu_idx)
     GUEST_ASSERT(vcpu_args->vcpu_idx == vcpu_idx);
 
     while (true) {
+        write_access = 0;
+        read_access = 0;
         for (i = 0; i < pages; i++) {
             if (args->random_access)
                 page = guest_random_u32(&rand_state) % pages;
@@ -73,13 +77,16 @@ void memstress_guest_code(uint32_t vcpu_idx)
 
             addr = gva + (page * args->guest_page_size);
 
-            if (guest_random_u32(&rand_state) % 100 < args->write_percent)
+            if (guest_random_u32(&rand_state) % 100 < args->write_percent) {
                 *(uint64_t *)addr = 0x0123456789ABCDEF;
-            else
+                write_access++;
+            } else {
                 READ_ONCE(*(uint64_t *)addr);
+                read_access++;
+            }
         }
 
-        GUEST_SYNC(1);
+        GUEST_SYNC_ARGS(1, read_access, write_access, 0, 0);
     }
 }
From patchwork Fri Apr 21 16:53:00 2023
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 21 Apr 2023 09:53:00 -0700
Subject: [PATCH 4/9] KVM: selftests: Print read and write accesses of pages by vCPUs in dirty_log_perf_test
Message-ID: <20230421165305.804301-5-vipinsh@google.com>
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>

Fetch the read and write access counts from the guest code and print the
totals across all vCPUs in dirty_log_perf_test. This data shows the
progress made by vCPUs during dirty logging operations. Since vCPUs
execute in lockstep with userspace dirty log iterations, this metric is
not very interesting yet.

However, in future commits, when dirty_log_perf_test can execute vCPUs
independently of dirty log iterations, this metric will give a good
measure of vCPU performance during dirty logging.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 .../selftests/kvm/dirty_log_perf_test.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 338f03a4a550..0a08a3d21123 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -12,6 +12,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <stdatomic.h>
 #include <...>
 
 #include "kvm_util.h"
@@ -66,17 +67,22 @@ static u64 dirty_log_manual_caps;
 static bool host_quit;
 static int iteration;
 static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
+static atomic_ullong total_reads;
+static atomic_ullong total_writes;
 
 static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
 {
     struct kvm_vcpu *vcpu = vcpu_args->vcpu;
     int vcpu_idx = vcpu_args->vcpu_idx;
     uint64_t pages_count = 0;
+    uint64_t reads = 0;
+    uint64_t writes = 0;
     struct kvm_run *run;
     struct timespec start;
     struct timespec ts_diff;
     struct timespec total = (struct timespec){0};
     struct timespec avg;
+    struct ucall uc = {};
     int ret;
 
     run = vcpu->run;
@@ -89,7 +95,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
         ts_diff = timespec_elapsed(start);
 
         TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
-        TEST_ASSERT(get_ucall(vcpu, NULL) == UCALL_SYNC,
+        TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_SYNC,
                 "Invalid guest sync status: exit_reason=%s\n",
                 exit_reason_str(run->exit_reason));
 
@@ -101,6 +107,8 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
         if (current_iteration) {
             pages_count += vcpu_args->pages;
             total = timespec_add(total, ts_diff);
+            reads += uc.args[2];
+            writes += uc.args[3];
             pr_debug("vCPU %d iteration %d dirty memory time: %ld.%.9lds\n",
                  vcpu_idx, current_iteration, ts_diff.tv_sec,
                  ts_diff.tv_nsec);
@@ -123,6 +131,8 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
     pr_debug("\nvCPU %d dirtied 0x%lx pages over %d iterations in %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
          vcpu_idx, pages_count, vcpu_last_completed_iteration[vcpu_idx],
          total.tv_sec, total.tv_nsec, avg.tv_sec, avg.tv_nsec);
+    atomic_fetch_add(&total_reads, reads);
+    atomic_fetch_add(&total_writes, writes);
 }
 
 struct test_params {
@@ -176,6 +186,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
                  dirty_log_manual_caps);
 
     arch_setup_vm(vm, nr_vcpus);
+    atomic_store(&total_reads, 0);
+    atomic_store(&total_writes, 0);
 
     /* Start the iterations */
     iteration = 0;
@@ -295,6 +307,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
             clear_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec);
     }
 
+    pr_info("Total pages touched: %llu (Reads: %llu, Writes: %llu)\n",
+        atomic_load(&total_reads) + atomic_load(&total_writes),
+        atomic_load(&total_reads), atomic_load(&total_writes));
+
     memstress_free_bitmaps(bitmaps, p->slots);
     arch_cleanup_vm(vm);
     memstress_destroy_vm(vm);
From patchwork Fri Apr 21 16:53:01 2023
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 21 Apr 2023 09:53:01 -0700
Subject: [PATCH 5/9] KVM: selftests: Allow independent execution of vCPUs in dirty_log_perf_test
Message-ID: <20230421165305.804301-6-vipinsh@google.com>
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>

Allow vCPUs to execute independently of dirty log iterations after
initialization is complete. Hide this feature behind the new option
"-j". This makes dirty_log_perf_test behave like real-world workflows,
where guest vCPUs keep executing while the VMM collects dirty logs. The
total pages touched during test execution gives a good estimate of how
the vCPUs perform while dirty logging is enabled.

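Concretely, the vCPU worker's wait logic added below amounts to the
following sketch (the first branch is the existing lockstep behavior, the
second is what -j enables):

    if (lockstep_iterations) {
            /* Wait for the main thread to advance to the next iteration. */
            while (current_iteration == READ_ONCE(iteration) &&
                   READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit))
                    ;
    } else {
            /* -j: only wait for the very first iteration to start. */
            while (!READ_ONCE(iteration))
                    ;
    }
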
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 .../selftests/kvm/dirty_log_perf_test.c | 60 ++++++++++++-------
 1 file changed, 40 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 0a08a3d21123..ffdad535fdaa 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -69,6 +69,7 @@ static int iteration;
 static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
 static atomic_ullong total_reads;
 static atomic_ullong total_writes;
+static bool lockstep_iterations;
 
 static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
 {
@@ -83,12 +84,16 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
     struct timespec total = (struct timespec){0};
     struct timespec avg;
     struct ucall uc = {};
+    int current_iteration = -1;
     int ret;
 
     run = vcpu->run;
 
     while (!READ_ONCE(host_quit)) {
-        int current_iteration = READ_ONCE(iteration);
+        if (lockstep_iterations)
+            current_iteration = READ_ONCE(iteration);
+        else
+            current_iteration++;
 
         clock_gettime(CLOCK_MONOTONIC, &start);
         ret = _vcpu_run(vcpu);
@@ -118,13 +123,19 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
                  ts_diff.tv_nsec);
         }
 
-        /*
-         * Keep running the guest while dirty logging is being disabled
-         * (iteration is negative) so that vCPUs are accessing memory
-         * for the entire duration of zapping collapsible SPTEs.
-         */
-        while (current_iteration == READ_ONCE(iteration) &&
-               READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit)) {}
+        if (lockstep_iterations) {
+            /*
+             * Keep running the guest while dirty logging is being disabled
+             * (iteration is negative) so that vCPUs are accessing memory
+             * for the entire duration of zapping collapsible SPTEs.
+             */
+            while (current_iteration == READ_ONCE(iteration) &&
+                   READ_ONCE(iteration) >= 0 && !READ_ONCE(host_quit))
+                ;
+        } else {
+            while (!READ_ONCE(iteration))
+                ;
+        }
     }
 
     avg = timespec_div(total, vcpu_last_completed_iteration[vcpu_idx]);
@@ -238,17 +249,19 @@ static void run_test(enum vm_guest_mode mode, void *arg)
         clock_gettime(CLOCK_MONOTONIC, &start);
         iteration++;
 
-        pr_debug("Starting iteration %d\n", iteration);
-        for (i = 0; i < nr_vcpus; i++) {
-            while (READ_ONCE(vcpu_last_completed_iteration[i])
-                   != iteration)
-                ;
-        }
+        if (lockstep_iterations) {
+            pr_debug("Starting iteration %d\n", iteration);
+            for (i = 0; i < nr_vcpus; i++) {
+                while (READ_ONCE(vcpu_last_completed_iteration[i])
+                       != iteration)
+                    ;
+            }
 
-        ts_diff = timespec_elapsed(start);
-        vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff);
-        pr_info("Iteration %d dirty memory time: %ld.%.9lds\n",
-            iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
+            ts_diff = timespec_elapsed(start);
+            vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff);
+            pr_info("Iteration %d dirty memory time: %ld.%.9lds\n",
+                iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
+        }
 
         clock_gettime(CLOCK_MONOTONIC, &start);
         memstress_get_dirty_log(vm, bitmaps, p->slots);
@@ -365,6 +378,10 @@ static void help(char *name)
            "     To leave the application task unpinned, drop the final entry:\n\n"
            "     ./dirty_log_perf_test -v 3 -c 22,23,24\n\n"
            "     (default: no pinning)\n");
+    printf(" -j: Execute vCPUs independent of dirty log iterations\n"
+           "     Independent vCPUs execution will allow them to continuously\n"
+           "     dirty memory while main thread is collecting and clearing\n"
+           "     dirty logs in the main thread's iterations.\n");
     printf(" -k: Specify the chunk size in which dirty memory gets cleared\n"
            "     in memslots in each iteration. If the size is bigger than\n"
            "     the memslot size then whole memslot is cleared in one call.\n"
@@ -399,10 +416,10 @@ int main(int argc, char *argv[])
     kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
     dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                   KVM_DIRTY_LOG_INITIALLY_SET);
-
+    lockstep_iterations = true;
     guest_modes_append_default();
 
-    while ((opt = getopt(argc, argv, "ab:c:eghi:k:l:m:nop:r:s:v:x:w:")) != -1) {
+    while ((opt = getopt(argc, argv, "ab:c:eghi:jk:l:m:nop:r:s:v:x:w:")) != -1) {
         switch (opt) {
         case 'a':
             p.random_access = true;
@@ -426,6 +443,9 @@ int main(int argc, char *argv[])
         case 'i':
             p.iterations = atoi_positive("Number of iterations", optarg);
             break;
+        case 'j':
+            lockstep_iterations = false;
+            break;
         case 'k':
             p.clear_chunk_size = parse_size(optarg);
             break;
From patchwork Fri Apr 21 16:53:02 2023
From: Vipin Sharma <vipinsh@google.com>
Date: Fri, 21 Apr 2023 09:53:02 -0700
Subject: [PATCH 6/9] KVM: arm64: Correct the kvm_pgtable_stage2_flush() documentation
Message-ID: <20230421165305.804301-7-vipinsh@google.com>
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>

Remove the _range suffix from kvm_pgtable_stage2_flush_range, which is
used in the documentation of kvm_pgtable_stage2_flush(). There is no
function named kvm_pgtable_stage2_flush_range().

Fixes: 93c66b40d728 ("KVM: arm64: Add support for stage-2 cache flushing in generic page-table")
Signed-off-by: Vipin Sharma <vipinsh@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 4cd6762bda80..4cd62506c198 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -605,9 +605,8 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
 
 /**
- * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
- *                                    of Coherency for guest stage-2 address
- *                                    range.
+ * kvm_pgtable_stage2_flush() - Clean and invalidate data cache to Point of
+ *                              Coherency for guest stage-2 address range.
  * @pgt:   Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:  Intermediate physical address from which to flush.
  * @size:  Size of the range.
From patchwork Fri Apr 21 16:53:03 2023
Date: Fri, 21 Apr 2023 09:53:03 -0700
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>
References: <20230421165305.804301-1-vipinsh@google.com>
Message-ID: <20230421165305.804301-8-vipinsh@google.com>
Subject: [PATCH 7/9] KVM: mmu: Move mmu lock/unlock to arch code for clear dirty log
From: Vipin Sharma

Move the mmu_lock lock and unlock calls from the common code in
kvm_clear_dirty_log_protect() to the arch-specific code in
kvm_arch_mmu_enable_log_dirty_pt_masked(). None of the other code inside the
for loop of kvm_clear_dirty_log_protect() needs mmu_lock exclusivity apart
from the call to the arch-specific kvm_arch_mmu_enable_log_dirty_pt_masked().

Future commits will switch the clear dirty log operation to run under the mmu
read lock instead of the write lock for ARM and, potentially, x86.

No functional changes intended.
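To see the shape of the change in isolation, here is a small user-space
analogy (plain pthreads, not KVM code; every name below is made up for
illustration): the generic loop stops wrapping the callback in a lock, and
the callback takes whichever lock it needs for itself, which is what lets a
backend later switch to a read lock without touching the common code.

/* Analogy only; build with: cc -pthread lock_move_sketch.c */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned long dirty[4] = { 0xff, 0x0f, 0xf0, 0x01 };

/*
 * "Arch" callback: after the restructuring it is responsible for its own
 * locking, the way kvm_arch_mmu_enable_log_dirty_pt_masked() is in this
 * patch. Today it takes the write lock; a later change could relax it.
 */
static void arch_clear_dirty(int slot, unsigned long mask)
{
        pthread_rwlock_wrlock(&lock);
        dirty[slot] &= ~mask;
        pthread_rwlock_unlock(&lock);
}

/* "Common" code: just walks the bitmap, with no locking of its own. */
static void clear_dirty_log(void)
{
        for (int i = 0; i < 4; i++)
                if (dirty[i])
                        arch_clear_dirty(i, dirty[i]);
}

int main(void)
{
        clear_dirty_log();
        for (int i = 0; i < 4; i++)
                printf("slot %d: %#lx\n", i, dirty[i]);
        return 0;
}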
Signed-off-by: Vipin Sharma
---
 arch/arm64/kvm/mmu.c   | 2 ++
 arch/mips/kvm/mmu.c    | 2 ++
 arch/riscv/kvm/mmu.c   | 2 ++
 arch/x86/kvm/mmu/mmu.c | 3 +++
 virt/kvm/dirty_ring.c  | 2 --
 virt/kvm/kvm_main.c    | 4 ----
 6 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7113587222ff..dc1c9059604e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1002,7 +1002,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                struct kvm_memory_slot *slot,
                gfn_t gfn_offset, unsigned long mask)
 {
+       write_lock(&kvm->mmu_lock);
        kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+       write_unlock(&kvm->mmu_lock);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index e8c08988ed37..b8d4723d197e 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -415,11 +415,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                struct kvm_memory_slot *slot,
                gfn_t gfn_offset, unsigned long mask)
 {
+       spin_lock(&kvm->mmu_lock);
        gfn_t base_gfn = slot->base_gfn + gfn_offset;
        gfn_t start = base_gfn + __ffs(mask);
        gfn_t end = base_gfn + __fls(mask);
 
        kvm_mips_mkclean_gpa_pt(kvm, start, end);
+       spin_unlock(&kvm->mmu_lock);
 }
 
 /*
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 78211aed36fa..425fa11dcf9c 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -395,11 +395,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                                             gfn_t gfn_offset,
                                             unsigned long mask)
 {
+       spin_lock(&kvm->mmu_lock);
        phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
        phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
        phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
        gstage_wp_range(kvm, start, end);
+       spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 144c5a01cd77..f1dc549b01cb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1367,6 +1367,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                struct kvm_memory_slot *slot,
                gfn_t gfn_offset, unsigned long mask)
 {
+       write_lock(&kvm->mmu_lock);
        /*
         * Huge pages are NOT write protected when we start dirty logging in
         * initially-all-set mode; must write protect them here so that they
@@ -1397,6 +1398,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask);
        else
                kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+
+       write_unlock(&kvm->mmu_lock);
 }
 
 int kvm_cpu_dirty_log_size(void)
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index c1cd7dfe4a90..d894c58d2152 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -66,9 +66,7 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
        if (!memslot || (offset + __fls(mask)) >= memslot->npages)
                return;
 
-       KVM_MMU_LOCK(kvm);
        kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
-       KVM_MMU_UNLOCK(kvm);
 }
 
 int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f40b72eb0e7b..378c40e958b6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2157,7 +2157,6 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
                dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
                memset(dirty_bitmap_buffer, 0, n);
 
-               KVM_MMU_LOCK(kvm);
                for (i = 0; i < n / sizeof(long); i++) {
                        unsigned long mask;
                        gfn_t offset;
@@ -2173,7 +2172,6 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
                        kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
                                                                offset, mask);
                }
-               KVM_MMU_UNLOCK(kvm);
        }
 
        if (flush)
@@ -2268,7 +2266,6 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
        if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n))
                return -EFAULT;
 
-       KVM_MMU_LOCK(kvm);
        for (offset = log->first_page, i = offset / BITS_PER_LONG,
                 n = DIV_ROUND_UP(log->num_pages, BITS_PER_LONG);
             n--; i++, offset += BITS_PER_LONG) {
@@ -2291,7 +2288,6 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
                                                        offset, mask);
                }
        }
-       KVM_MMU_UNLOCK(kvm);
 
        if (flush)
                kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);

From patchwork Fri Apr 21 16:53:04 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13220495
Date: Fri, 21 Apr 2023 09:53:04 -0700
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>
References: <20230421165305.804301-1-vipinsh@google.com>
Message-ID: <20230421165305.804301-9-vipinsh@google.com>
Subject: [PATCH 8/9] KVM: arm64: Allow stage2_apply_range_resched() to pass page table walker flags
From: Vipin Sharma

Allow stage2_apply_range_resched() to pass enum kvm_pgtable_walk_flags to the
stage-2 walkers. Pass 0 as the flag to keep this change a no-op. This
capability will be used in future commits to enable the clear dirty log
operation under the MMU read lock.

Current users of the stage2_apply_range_*() API run under the assumption that
the MMU write lock is held, and the stage-2 page table walkers run under the
same assumption. When future commits move the clear dirty log operation under
the MMU read lock, there needs to be a way to pass this shared intent to the
page table walkers.

No functional changes intended.

Signed-off-by: Vipin Sharma
---
 arch/arm64/include/asm/kvm_pgtable.h  | 12 +++++++++---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  4 ++--
 arch/arm64/kvm/hyp/pgtable.c          | 16 ++++++++++------
 arch/arm64/kvm/mmu.c                  | 26 ++++++++++++++++----------
 4 files changed, 37 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 4cd62506c198..79a452d78e08 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -508,6 +508,7 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
  * @pgt:        Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:       Intermediate physical address from which to remove the mapping.
  * @size:       Size of the mapping.
+ * @flags:      Page-table walker flags.
  *
  * The offset of @addr within a page is ignored and @size is rounded-up to
  * the next page boundary.
@@ -520,7 +521,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_wrprotect() - Write-protect guest stage-2 address range
@@ -528,6 +530,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * @pgt:        Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:       Intermediate physical address from which to write-protect,
  * @size:       Size of the range.
+ * @flags:      Page-table walker flags.
  *
  * The offset of @addr within a page is ignored and @size is rounded-up to
  * the next page boundary.
@@ -538,7 +541,8 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                                enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
@@ -610,13 +614,15 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
  * @pgt:        Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:       Intermediate physical address from which to flush.
  * @size:       Size of the range.
+ * @flags:      Page-table walker flags.
  *
  * The offset of @addr within a page is ignored and @size is rounded-up to
  * the next page boundary.
  *
  * Return: 0 on success, negative error code on failure.
  */
-int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_walk() - Walk a page-table.
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 552653fa18be..bac3c2c31cbe 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -326,11 +326,11 @@ static int host_stage2_unmap_dev_all(void)
        /* Unmap all non-memory regions to recycle the pages */
        for (i = 0; i < hyp_memblock_nr; i++, addr = reg->base + reg->size) {
                reg = &hyp_memory[i];
-               ret = kvm_pgtable_stage2_unmap(pgt, addr, reg->base - addr);
+               ret = kvm_pgtable_stage2_unmap(pgt, addr, reg->base - addr, 0);
                if (ret)
                        return ret;
        }
-       return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr);
+       return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr, 0);
 }
 
 struct kvm_mem_range {
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d..3a585e1fba11 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1024,12 +1024,14 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
        return 0;
 }
 
-int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            enum kvm_pgtable_walk_flags flags)
 {
        struct kvm_pgtable_walker walker = {
                .cb = stage2_unmap_walker,
                .arg = pgt,
-               .flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+               .flags = flags | KVM_PGTABLE_WALK_LEAF |
+                        KVM_PGTABLE_WALK_TABLE_POST,
        };
 
        return kvm_pgtable_walk(pgt, addr, size, &walker);
@@ -1108,11 +1110,12 @@ static int stage2_update_leaf_attrs(struct kvm_pgtable *pgt, u64 addr,
        return 0;
 }
 
-int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                                enum kvm_pgtable_walk_flags flags)
 {
        return stage2_update_leaf_attrs(pgt, addr, size, 0,
                                        KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W,
-                                       NULL, NULL, 0);
+                                       NULL, NULL, flags);
 }
 
 kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
@@ -1193,11 +1196,12 @@ static int stage2_flush_walker(const struct kvm_pgtable_visit_ctx *ctx,
        return 0;
 }
 
-int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            enum kvm_pgtable_walk_flags flags)
 {
        struct kvm_pgtable_walker walker = {
                .cb = stage2_flush_walker,
-               .flags = KVM_PGTABLE_WALK_LEAF,
+               .flags = flags | KVM_PGTABLE_WALK_LEAF,
                .arg = pgt,
        };
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index dc1c9059604e..e0189cdda43d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -48,7 +48,9 @@ static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
  */
 static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
                              phys_addr_t end,
-                             int (*fn)(struct kvm_pgtable *, u64, u64),
+                             enum kvm_pgtable_walk_flags flags,
+                             int (*fn)(struct kvm_pgtable *, u64, u64,
+                                       enum kvm_pgtable_walk_flags),
                              bool resched)
 {
        struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
@@ -61,7 +63,7 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
                        return -EINVAL;
 
                next = stage2_range_addr_end(addr, end);
-               ret = fn(pgt, addr, next - addr);
+               ret = fn(pgt, addr, next - addr, flags);
                if (ret)
                        break;
 
@@ -72,8 +74,8 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
        return ret;
 }
 
-#define stage2_apply_range_resched(mmu, addr, end, fn)                 \
-       stage2_apply_range(mmu, addr, end, fn, true)
+#define stage2_apply_range_resched(mmu, addr, end, flags, fn)          \
+       stage2_apply_range(mmu, addr, end, flags, fn, true)
 
 static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 {
@@ -236,7 +238,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
        lockdep_assert_held_write(&kvm->mmu_lock);
        WARN_ON(size & ~PAGE_MASK);
 
-       WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
+       WARN_ON(stage2_apply_range(mmu, start, end, 0, kvm_pgtable_stage2_unmap,
                                   may_block));
 }
 
@@ -251,7 +253,8 @@ static void stage2_flush_memslot(struct kvm *kvm,
        phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
        phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
 
-       stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_flush);
+       stage2_apply_range_resched(&kvm->arch.mmu, addr, end, 0,
+                                  kvm_pgtable_stage2_flush);
 }
 
 /**
@@ -932,10 +935,13 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
  * @mmu:        The KVM stage-2 MMU pointer
  * @addr:       Start address of range
  * @end:        End address of range
+ * @flags:      Page-table walker flags.
  */
-static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
+static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end,
+                           enum kvm_pgtable_walk_flags flags)
 {
-       stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
+       stage2_apply_range_resched(mmu, addr, end, flags,
+                                  kvm_pgtable_stage2_wrprotect);
 }
 
 /**
@@ -964,7 +970,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
        end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
 
        write_lock(&kvm->mmu_lock);
-       stage2_wp_range(&kvm->arch.mmu, start, end);
+       stage2_wp_range(&kvm->arch.mmu, start, end, 0);
        write_unlock(&kvm->mmu_lock);
        kvm_flush_remote_tlbs(kvm);
 }
@@ -988,7 +994,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
        phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
        phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
-       stage2_wp_range(&kvm->arch.mmu, start, end);
+       stage2_wp_range(&kvm->arch.mmu, start, end, 0);
 }
 
 /*

From patchwork Fri Apr 21 16:53:05 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13220498
Date: Fri, 21 Apr 2023 09:53:05 -0700
In-Reply-To: <20230421165305.804301-1-vipinsh@google.com>
References: <20230421165305.804301-1-vipinsh@google.com>
Message-ID: <20230421165305.804301-10-vipinsh@google.com>
Subject: [PATCH 9/9] KVM: arm64: Run clear-dirty-log under MMU read lock
From: Vipin Sharma
Take the MMU read lock for write protecting PTEs and use a shared page table
walker for clearing the dirty log.

Clearing the dirty log is currently performed under the MMU write lock. This
means vCPU write-protection faults, which also take the MMU read lock, are
blocked while the operation runs. The resulting guest degradation is
especially noticeable on VMs with many vCPUs. Taking the MMU read lock
instead allows vCPUs to execute in parallel and reduces the impact on vCPU
performance.

Measured the improvement on an ARM Ampere Altra host (64 CPUs, 256 GB memory,
single NUMA node) via dirty_log_perf_test with 48 vCPUs, 96 GB memory, an
8 GB clear chunk size and a 1 second wait between Clear-Dirty-Log calls.

Test command:
./dirty_log_perf_test -s anonymous_hugetlb_2mb -b 2G -v 48 -l 1 -k 8G -j -m 2

Before:
Total pages touched: 50331648 (Reads: 0, Writes: 50331648)

After:
Total pages touched: 125304832 (Reads: 0, Writes: 125304832)

Signed-off-by: Vipin Sharma
---
 arch/arm64/kvm/mmu.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e0189cdda43d..3f2117d93998 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -67,8 +67,12 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
                if (ret)
                        break;
 
-               if (resched && next != end)
-                       cond_resched_rwlock_write(&kvm->mmu_lock);
+               if (resched && next != end) {
+                       if (flags & KVM_PGTABLE_WALK_SHARED)
+                               cond_resched_rwlock_read(&kvm->mmu_lock);
+                       else
+                               cond_resched_rwlock_write(&kvm->mmu_lock);
+               }
        } while (addr = next, addr != end);
 
        return ret;
@@ -994,7 +998,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
        phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
        phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
-       stage2_wp_range(&kvm->arch.mmu, start, end, 0);
+       stage2_wp_range(&kvm->arch.mmu, start, end, KVM_PGTABLE_WALK_SHARED);
 }
 
 /*
@@ -1008,9 +1012,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                struct kvm_memory_slot *slot,
                gfn_t gfn_offset, unsigned long mask)
 {
-       write_lock(&kvm->mmu_lock);
+       read_lock(&kvm->mmu_lock);
        kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
-       write_unlock(&kvm->mmu_lock);
+       read_unlock(&kvm->mmu_lock);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
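The reason read-lock clearing helps can be seen with a small user-space
analogy (plain pthreads, not KVM code; all names are made up for
illustration): readers of an rwlock, here standing in for vCPU
write-protection faults, can hold the lock at the same time as a
"clear dirty log" thread that also takes it for read, whereas a writer
would serialise them.

/* Analogy only; build with: cc -pthread rwlock_sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

static void *vcpu_fault(void *arg)
{
        /* vCPU write-protection faults take the lock for read. */
        pthread_rwlock_rdlock(&mmu_lock);
        printf("vcpu %ld: handling write-protection fault\n", (long)arg);
        usleep(1000);
        pthread_rwlock_unlock(&mmu_lock);
        return NULL;
}

static void *clear_dirty_log(void *arg)
{
        (void)arg;
        /* Was a write lock before this patch; now read, so vCPUs proceed. */
        pthread_rwlock_rdlock(&mmu_lock);
        printf("clearing dirty log without blocking vCPUs\n");
        usleep(1000);
        pthread_rwlock_unlock(&mmu_lock);
        return NULL;
}

int main(void)
{
        pthread_t t[3];

        pthread_create(&t[0], NULL, vcpu_fault, (void *)0L);
        pthread_create(&t[1], NULL, vcpu_fault, (void *)1L);
        pthread_create(&t[2], NULL, clear_dirty_log, NULL);
        for (int i = 0; i < 3; i++)
                pthread_join(t[i], NULL);
        return 0;
}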